Approaches Used to Prevent Google Indexing


Have you ever wanted to stop Google from indexing a particular URL on your web site and displaying it in their search engine results pages (SERPs)? If you manage web sites long enough, a day will likely come when you need to know how to do this.

The three approaches most commonly used to prevent the indexing of a URL by Google are as follows:

Using the rel="nofollow" attribute on all anchor elements that link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
While the differences between the three approaches seem subtle at first glance, their effectiveness can vary dramatically depending on which method you choose.

Using rel="nofollow" to prevent Google indexing

Many inexperienced webmasters try to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site that links to that URL.
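
For example, every internal link to the page would carry the attribute, along these lines (the URL and anchor text here are placeholders):

    <a href="https://example.com/private-page/" rel="nofollow">Private page</a>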

Including a rel="nofollow" attribute on a link prevents Google's crawler from following that link which, in turn, prevents them from discovering, crawling, and indexing the target page. While this method may work as a short-term fix, it is not a viable long-term solution.

The flaw in this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to stop other web sites from linking to the URL with a followed link. So the odds that the URL will eventually get crawled and indexed using this approach are quite high.

Using robots.txt to prevent Google indexing

Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
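
As a minimal sketch, a robots.txt rule blocking a single page might look like this (the path is a placeholder):

    User-agent: *
    Disallow: /private-page/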

Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough other web sites link to the URL, then Google can often infer the topic of the page from the link text of those inbound links. As a result, they will show the URL in the SERPs for related searches. While using a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.

Using the meta robots tag to prevent Google indexing

If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag, they need to first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it will never be shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
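
As a minimal sketch, the tag sits inside the page's head element (the page title is a placeholder):

    <head>
      <meta name="robots" content="noindex">
      <title>Private page</title>
    </head>

Once the crawler fetches the page and reads this tag, the URL is dropped from (or never added to) the index, so it will not be shown in the SERPs no matter how many other sites link to it.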