Definition and Purpose of Cloaking
– Cloaking is a search engine optimization (SEO) technique.
– It presents different content to search engine spiders and users’ browsers.
– Content is delivered based on IP addresses or User-Agent HTTP headers.
– Cloaking can deceive search engines for black hat SEO purposes.
– It can also be used to inform search engines about non-textual content.
Cloaking as a Spamming Technique
– Cloaking is often used as a spamdexing technique.
– It attempts to manipulate search engines for higher rankings.
– Cloaking can trick search engine users into visiting misleading sites.
– Pornographic content can be cloaked within non-pornographic search results.
– Cloaking is a form of the doorway page technique.
Differences Between Search Engine Cloaking and DMOZ Cloaking
– Search engine cloaking is intended to deceive search engine spiders.
– DMOZ cloaking aims to fool human editors.
– Cloaking decisions can be based on HTTP referrer, user agent, or IP.
– Search engine spiders behave differently from natural users.
– Some cloakers give the fake page to everyone except major search engine referrals.
Cloaking versus IP Delivery
– IP delivery is a variation of cloaking with different content served based on IP.
– With cloaking, search engines and people never see each other’s pages.
– IP delivery can be used to determine the requester’s location for targeted content.
– IP delivery is used by Google for AdWords and AdSense advertising programs.
– IP delivery is a cruder method of language determination than the Accept-Language HTTP header (see the sketch after this list).
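As an illustration of that last point, the hypothetical sketch below prefers the client's Accept-Language header and only falls back to an IP-derived country code; the supported-language set, country table, and example values are assumptions made for this sketch, not part of the article.

```python
# Hypothetical comparison of language selection via the Accept-Language
# header versus an IP-derived country code (all values are illustrative).
SUPPORTED_LANGUAGES = {"en", "de", "fr"}
COUNTRY_TO_LANGUAGE = {"DE": "de", "AT": "de", "FR": "fr"}

def pick_language(accept_language: str, ip_country: str) -> str:
    """Prefer the Accept-Language header; fall back to IP-based geolocation."""
    for part in accept_language.split(","):
        lang = part.split(";")[0].strip().lower()[:2]  # drop quality values
        if lang in SUPPORTED_LANGUAGES:
            return lang
    # Crude fallback: guess a language from where the IP address appears to be.
    return COUNTRY_TO_LANGUAGE.get(ip_country, "en")

print(pick_language("fr-CH, fr;q=0.9, en;q=0.8", "US"))  # -> fr
print(pick_language("", "DE"))                            # -> de (IP fallback)
```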
Related Concepts and Technologies
– See also: Spamdexing, Doorway page, Keyword stuffing, Link farms, URL redirection.
– Technology: Content negotiation, Geo targeting.
– Cloaking is categorized as black hat search engine optimization.
Cloaking is a search engine optimization (SEO) technique in which the content presented to the search engine spider is different from that presented to the user's browser. This is done by delivering content based on the IP address or the User-Agent HTTP header of the user requesting the page. When a user is identified as a search engine spider, a server-side script delivers a different version of the web page, one that contains content not present on the visible page, or that is present but not searchable. The purpose of cloaking is sometimes to deceive search engines so they display the page when it would not otherwise be displayed (black hat SEO). However, it can also be a functional (though antiquated) technique for informing search engines of content they would not otherwise be able to locate because it is embedded in non-textual containers, such as video or certain Adobe Flash components. Since 2006, better methods of accessibility, including progressive enhancement, have been available, so cloaking is no longer necessary for regular SEO.
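To make the mechanism concrete, here is a minimal sketch of User-Agent based cloaking written as a Flask-style server-side script; the crawler signatures, route, and page contents are hypothetical, and real cloaking setups usually also check the requester's IP address against known crawler ranges, since User-Agent strings are trivially spoofed.

```python
# Hypothetical sketch of User-Agent based cloaking (for illustration only).
from flask import Flask, request

app = Flask(__name__)

# Substrings that commonly appear in crawler User-Agent headers (illustrative list).
CRAWLER_SIGNATURES = ("googlebot", "bingbot", "slurp", "duckduckbot")

def is_search_engine_spider(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(sig in ua for sig in CRAWLER_SIGNATURES)

@app.route("/")
def landing_page():
    ua = request.headers.get("User-Agent", "")
    if is_search_engine_spider(ua):
        # Version served only to crawlers: keyword-rich, fully textual content.
        return "<html><body><h1>Keyword-rich textual page for the spider</h1></body></html>"
    # Version served to ordinary visitors, e.g. a Flash- or video-heavy page.
    return "<html><body><object data='promo.swf'></object></body></html>"
```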
Cloaking is often used as a spamdexing technique to attempt to sway search engines into giving the site a higher ranking. By the same method, it can also be used to trick search engine users into visiting a site that is substantially different from the search engine description, including delivering pornographic content cloaked within non-pornographic search results.
Cloaking is a form of the doorway page technique.
A similar technique is used on the DMOZ web directory, but it differs in several ways from search engine cloaking:
- It is intended to fool human editors, rather than computer search engine spiders.
- The decision to cloak or not is often based upon the HTTP referrer, the user agent or the visitor's IP; but more advanced techniques can also be based on analysis of the client's behaviour over a few page requests: the raw number of HTTP requests, the order in which pages are requested, the latency between subsequent requests, and whether the client checks for a robots.txt file are some of the parameters in which search engine spiders differ heavily from natural user behaviour. The referrer gives the URL of the page on which a user clicked a link to reach the current page. Some cloakers will give the fake page to anyone who comes from a web directory website, since directory editors usually examine sites by clicking on links that appear on a directory page. Other cloakers give the fake page to everyone except visitors coming from a major search engine; this makes the cloaking harder to detect while not costing them many visitors, since most people find websites by using a search engine. A rough sketch of such a referrer-based decision follows.
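The sketch below illustrates the second strategy described above, serving the decoy page to everyone except visitors arriving from a major search engine; the domain list, file names, and function are illustrative assumptions, not part of the article.

```python
# Hypothetical sketch of referrer-based cloaking: the decoy ("fake") page is
# served to everyone except visitors arriving from a major search engine.
# Domain list and file names are illustrative only.
from urllib.parse import urlparse

MAJOR_SEARCH_ENGINES = ("google.com", "bing.com", "search.yahoo.com")

def page_for_request(referrer: str) -> str:
    """Pick which page version to serve, based on the Referer header."""
    host = urlparse(referrer).netloc.lower()
    if any(host == d or host.endswith("." + d) for d in MAJOR_SEARCH_ENGINES):
        # Visitors coming from a search results page see the real content,
        # which makes the cloaking harder to detect.
        return "real_page.html"
    # Everyone else, including directory editors clicking through a listing,
    # receives the decoy page.
    return "decoy_page.html"

print(page_for_request("https://www.google.com/search?q=example"))  # -> real_page.html
print(page_for_request("https://dmoz.org/Computers/"))              # -> decoy_page.html
```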