
Why are search engines not crawling your website/blog? Part 1

  • 01/11/2017

While you may have put your heart and soul into your content, along with creativity and keywords, that is no guarantee your blog will climb above the bottom of the results. If you believe your blog is content and keyword rich and there is no apparent reason why it is not listed in the top search results, there could be more than one reason for it.

Simply put, for your website or blog to come up as a top search result, it needs to be crawled more often and more thoroughly, and that happens only when it is good enough to be crawled. Here, good does not mean creative; it means equipped with everything a Googlebot would typically look for.

Following is a list of reasons that could be making your blog or website an uninteresting proposition for web crawlers.

Crawlers being barred because of meta tags or robots.txt files

This is a typical scenario that can be dealt with rather easily by reviewing your website or blog’s meta tags and robots.txt file. It is the most common cause of web crawlers ignoring a website or blog, and while the check is trivial, it can resolve a huge issue.

Blocking web crawlers via robots.txt

Used judiciously, robots.txt can save a lot of crawl budget by keeping web crawlers off the pages you don’t want crawled. Checking the robots.txt ensures that crawlers are instructed to stay away only from the pages you want them to avoid; you would obviously not want those bots to steer clear of your entire website or blog. A quick way to run this check is sketched below.
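As a sanity check, Python’s standard urllib.robotparser can tell you which URLs a given crawler is allowed to fetch under your rules. The robots.txt content, domain, and paths below are hypothetical placeholders; swap in your own file and URLs.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks only the admin area, not the whole site.
robots_txt = """\
User-agent: *
Disallow: /admin/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot should be allowed on public pages but kept out of /admin/.
print(parser.can_fetch("Googlebot", "https://example.com/blog/my-post"))      # True
print(parser.can_fetch("Googlebot", "https://example.com/admin/settings"))    # False
```

If the first call prints False for a page you want indexed, your robots.txt is blocking more than you intend.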

Blocking web crawlers through meta tags

If your website or blog carries a robots meta tag with a ‘nofollow’ directive, it instructs the web crawler not to follow the link URL(s) available on that page, although it may still crawl the content itself.
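To confirm whether a page carries such a directive, you can scan its HTML for a robots meta tag. The sketch below uses Python’s built-in html.parser on a made-up page snippet; in practice you would feed it the HTML fetched from your own pages.

```python
from html.parser import HTMLParser

# Hypothetical page <head> containing a robots meta tag.
page_html = """
<html><head>
  <title>My post</title>
  <meta name="robots" content="noindex, nofollow">
</head><body>...</body></html>
"""

class RobotsMetaFinder(HTMLParser):
    """Collects the directives from any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.directives.extend(
                d.strip().lower() for d in (attrs.get("content") or "").split(",")
            )

finder = RobotsMetaFinder()
finder.feed(page_html)
print(finder.directives)                # ['noindex', 'nofollow']
print("nofollow" in finder.directives)  # True -> crawlers are told not to follow links
```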

Broken links break the crawler

Broken links don’t only break the crawling process of search engine bots; they also break the hearts of your users. Commercially speaking, you spend your crawl budget on the crawling activity done by a bot – it’s like outsourced labour, a mindless one. When links are broken, that budget keeps burning while the bots make futile attempts at crawling pages and links that are broken and yield no satisfactory value.

Following are the typical manifestations and causes of broken links:

URL problems

An error or typo in the URL can leave it broken for all web crawling purposes, disrupting the crawling process.

Obsolete URLs

If your website has recently been revamped, you definitely need to check whether any link points or redirects to webpages that are no longer available. This is a typical problem with websites that have recently undergone a makeover.
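A simple way to catch both kinds of broken links is to request each URL and note which ones come back with an error. This is a minimal sketch using only the Python standard library; the links listed are placeholders for your own internal URLs.

```python
import urllib.request
import urllib.error

# Hypothetical internal links to verify after a site revamp.
links = [
    "https://example.com/blog/latest-post",
    "https://example.com/old-page-that-was-removed",
]

def check_link(url):
    """Return the HTTP status for url, or the error code/reason if the request fails."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.getcode()
    except urllib.error.HTTPError as err:
        return err.code                      # e.g. 404 for a broken link
    except urllib.error.URLError as err:
        return f"unreachable: {err.reason}"  # DNS failure, refused connection, etc.

for url in links:
    print(url, "->", check_link(url))
```

Anything that returns 404 (or is unreachable) is a candidate for fixing the link or setting up a redirect.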

Don’t burn your budget on qualified-access pages

If your website contains pages that are available only to a specific set of users, such as registered users or subscribers, it is advisable to exclude these pages using ‘nofollow’ meta tags or robots.txt, since the search engine crawler will not be allowed access anyway and attempting to crawl them only burns budget unnecessarily.

These are the typical reasons why your website or blog is not being crawled and reflected as a top search result. However, they are not the only ones; there is definitely more to this list.

Watch this space for more in the series.
