‘Meta Robots Directives’ is simply a more technical name for robots meta tags, a specific kind of meta tag. These tags contain instructions for the search engine crawler.
The crawler’s job is to fetch (crawl) and index a web page’s content. A file called ‘robots.txt’, placed at the root of a site, lists which parts of the site crawlers may access. This robots.txt gives directives to the crawler bot to crawl the ‘allowed’ paths and ignore the ‘blocked’ ones.
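For illustration, a minimal robots.txt sketch is shown below; the site and the ‘/private/’ path are hypothetical placeholders.

```
# https://www.example.com/robots.txt (hypothetical site)
User-agent: *        # these rules apply to all crawlers
Disallow: /private/  # 'blocked' – do not crawl anything under /private/
Allow: /             # 'allowed' – everything else may be crawled
```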
Robots directives come in two forms. One is placed in the HTML page itself (the meta robots tag), and the other is sent by the web server as an HTTP response header (the X-Robots-Tag).
Both support the same directives, such as ‘noindex’ and ‘nofollow’. They differ only in where they are declared.
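As a sketch, here is the same ‘noindex, nofollow’ instruction expressed both ways. The header line is what a hypothetical server response would contain; how to configure a particular server to send it is not covered here.

```html
<!-- Meta robots tag, placed inside the page's <head> -->
<meta name="robots" content="noindex, nofollow">
```

And the equivalent HTTP response header:

```
HTTP/1.1 200 OK
X-Robots-Tag: noindex, nofollow
```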
Meta directives, set by webmasters, tell crawlers how to crawl and index a page. They are advisory rather than enforced: there is no guarantee that every bot will honor them, and malicious web robots can turn rogue and ignore the directives entirely.
The directive values that search engine crawlers understand are listed below (an example tag combining a few of them follows the list). They are not case-sensitive, but it’s good practice to stick to the commonly used lowercase forms.
- Noindex – Tells the search engine bot not to include the web page in its index
- Index – The opposite of Noindex (default if Noindex isn’t added)
- Follow – Tells the crawler to follow the links on the page and pass link equity through them, even if the page itself is not indexed
- Nofollow – Opposite of Follow
- Noimageindex – Asks the crawler to not index any images
- Noarchive – Tells the bot to not show a cached/archived link of this page on SERP (Search Engine Results Page)
- None – Equivalent to using Noindex and Nofollow together
- Nocache – Same as Noarchive, but used only by Internet Explorer and Firefox
- Nosnippet – Asks the crawler to not show a snippet (meta description) on SERP
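As a sketch, these values go in the content attribute of the robots meta tag, comma-separated; a crawler-specific name (for example, googlebot) can be used to target a single bot. The combinations shown here are only illustrative.

```html
<!-- Applies to all crawlers: keep the page out of the index and show no cached copy -->
<meta name="robots" content="noindex, noarchive">

<!-- Applies only to Google's crawler: index the page, but do not follow its links -->
<meta name="googlebot" content="nofollow">
```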
Best SEO practices for meta directives
Meta directives can only be discovered when a URL is actually crawled. If a robots.txt file disallows a URL, the crawler never fetches the page, so any meta directive on it goes unseen and is effectively ignored.
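For example, the following combination (with a hypothetical /old-page/ URL) is self-defeating: because robots.txt blocks the URL, the crawler never fetches the page, so the noindex is never read and the URL can still surface in results through external links.

```
# robots.txt
User-agent: *
Disallow: /old-page/
```

```html
<!-- /old-page/ – this directive will never be discovered while the URL is disallowed -->
<meta name="robots" content="noindex">
```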
To keep a page out of the index while still letting link equity flow through it, use the ‘noindex, follow’ meta directive on the page rather than disallowing the URL in the robots.txt file.
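On the page itself, that looks like this (a sketch):

```html
<!-- Keep this page out of the index, but still follow and credit its outgoing links -->
<meta name="robots" content="noindex, follow">
```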
Using both the meta robots tag and the X-Robots-Tag on the same page is redundant.
Malicious crawlers will ignore meta directives altogether. If a webpage contains private information, relying on directives alone could lead to disaster. Choose a secure approach, such as password protection, to prevent unauthorized visitors from viewing highly confidential pages.
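As one possible sketch (assuming an Apache server; the file paths shown are placeholders), HTTP basic authentication can be enabled for a confidential directory with an .htaccess file:

```
# .htaccess in the confidential directory
AuthType Basic
AuthName "Restricted area"
AuthUserFile /etc/apache2/.htpasswd   # created beforehand with the htpasswd tool
Require valid-user
```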