Strategies Used to Prevent Google Indexing


Have you ever needed to prevent Google from indexing a particular URL on your web site and displaying it in their search engine results pages (SERPs)? If you manage web sites long enough, a day will likely come when you need to know how to do this.

The three methods most frequently used to prevent the indexing of a URL by Google are as follows:

Using the rel="nofollow" attribute on all anchor elements used to link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
While the differences between the three approaches appear subtle at first glance, their effectiveness can vary drastically depending on which method you choose.

Using rel="nofollow" to prevent Google indexing

Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site that links to that URL.
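A nofollowed link is an ordinary anchor element with the rel attribute set; a minimal sketch (the URL and link text below are placeholders for illustration):

```html
<!-- rel="nofollow" asks crawlers not to follow this link.
     The href value is a hypothetical example path. -->
<a href="/private-page.html" rel="nofollow">Private page</a>
```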

Including a rel="nofollow" attribute on a link prevents Google's crawler from following the link, which, in turn, prevents it from discovering, crawling, and indexing the target page. While this method might work as a short-term solution, it is not a viable long-term one.

The flaw with this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other web sites from linking to the URL with a followed link. The likelihood that the URL will eventually get crawled and indexed using this method is therefore quite high.

Using robots.txt to prevent Google indexing

Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
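As a sketch, a robots.txt rule blocking a single page might look like this (the path is a hypothetical placeholder):

```
# Applies to all crawlers, including Googlebot.
User-agent: *
# Block crawling of this one URL (example path).
Disallow: /private-page.html
```

The file must live at the root of the host (e.g. example.com/robots.txt) for crawlers to find it.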

Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough web sites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links, and as a result will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.

Using the meta robots tag to prevent Google indexing

If you need to prevent Google from indexing a URL while also keeping that URL out of the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the page. Of course, for Google to actually see this meta robots tag, they first need to be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it is never shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
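A minimal sketch of the tag as it would appear in a page's head element (the title text is a placeholder):

```html
<head>
  <!-- Tells all crawlers not to index this page;
       Google will keep it out of the SERPs once it is crawled. -->
  <meta name="robots" content="noindex">
  <title>Page kept out of the index</title>
</head>
```

Note that the tag must be present when the crawler fetches the page, so it only works on pages the crawler is allowed to reach.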