What is the X-Robots-Tag?

The X-Robots-Tag is an HTTP response header that tells search engines how to crawl and index a webpage. For example, the X-Robots-Tag below contains a noindex tag that informs search engines not to index the webpage.

X-Robots-Tag: noindex

This is what the same X-Robots-Tag looks like when included in the HTTP response header.

HTTP/1.1 200 OK 
Content-Type: text/html; charset=UTF-8 
X-Robots-Tag: noindex
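
To make that concrete, here is a minimal Python sketch (standard library only) of a server that sends the response shown above. The page content and port are made up for illustration; on a real site the header is usually added through the web server or CMS configuration rather than application code.

from http.server import BaseHTTPRequestHandler, HTTPServer

class NoindexHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=UTF-8")
        # The X-Robots-Tag is just another response header.
        self.send_header("X-Robots-Tag", "noindex")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Hypothetical local address and port, used only for this sketch.
    HTTPServer(("localhost", 8000), NoindexHandler).serve_forever()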

How the X-Robots-Tag Works

Search engines use bots called crawlers to gather information about webpages. When a crawler wants to access a webpage or another resource, it sends an HTTP request to the server hosting it.

The server then responds with an HTTP response. If an X-Robots-Tag was configured for the webpage, then it will be included in the HTTP response header. The crawler bot then reviews the X-Robots-Tag to determine whether it is allowed to crawl the webpage and how it is allowed to do so.

For example, this X-Robots-Tag containing the nofollow tag instructs crawlers not to follow the links on the webpage. So, while the search engine can still index and display the webpage on search results, it will not follow the links.

X-Robots-Tag: nofollow
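
On the other side of the exchange, a crawler simply reads the header from the HTTP response. The Python sketch below is a rough illustration of that idea, using example.com as a stand-in URL; it is not how any real search engine bot is implemented.

import urllib.request

# Stand-in URL for illustration; substitute a page from your own site.
response = urllib.request.urlopen("https://example.com/")

# Header names are case-insensitive; .get() returns the default if the header is absent.
x_robots = response.headers.get("X-Robots-Tag", "")
directives = [d.strip().lower() for d in x_robots.split(",") if d.strip()]

if "noindex" in directives:
    print("Do not add this page to the index.")
if "nofollow" in directives:
    print("Do not follow the links on this page.")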

You can combine multiple X-Robots-Tag rules by separating them with a comma. For example, the X-Robots-Tag below contains the noindex and nofollow rules.

X-Robots-Tag: noindex, nofollow

You can also create rules that instruct specific crawlers on how to crawl the page. For example, the X-Robots-Tag below instructs Googlebot not to index a webpage.

X-Robots-Tag: googlebot: noindex

Google only supports X-Robots-Tags directed at googlebot and googlebot-news. It does not support rules directed at its other crawlers.
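
A crawler receiving such a header has to decide whether the rule is addressed to it. The Python sketch below shows one simplified way to do that; the directive list and the parsing logic are assumptions made for illustration, not Google's actual implementation.

KNOWN_DIRECTIVES = {"all", "noindex", "nofollow", "none", "nosnippet", "notranslate",
                    "noimageindex", "indexifembedded", "max-snippet",
                    "max-image-preview", "max-video-preview", "unavailable_after"}

def rules_for(header_value: str, my_user_agent: str) -> str | None:
    """Return the directives that apply to my_user_agent, or None if the
    rule targets a different crawler."""
    prefix, sep, rest = header_value.partition(":")
    first_token = prefix.strip().lower()
    if sep and first_token not in KNOWN_DIRECTIVES:
        # The value starts with a user-agent token, e.g. "googlebot: noindex".
        return rest.strip() if first_token == my_user_agent.lower() else None
    # No user-agent prefix: the rule applies to every crawler.
    return header_value.strip()

print(rules_for("googlebot: noindex", "googlebot"))   # noindex
print(rules_for("googlebot: noindex", "bingbot"))     # None
print(rules_for("noindex, nofollow", "bingbot"))      # noindex, nofollow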

Difference Between the X-Robots-Tag, Robots Meta Tag, and robots.txt File

The X-Robots-Tag, robots meta tag, and robots.txt file are collectively called robots exclusion protocols since they all provide search engines with instructions on how to crawl a URL. However, there are some differences between how they do so.

The robots.txt file provides sitewide instructions. That is, it instructs search engines on how to crawl the entire site or specific directories. On the other hand, the robots meta tag and X-Robots-Tag provide search engines with instructions on crawling specific pages or resources.

However, while the robots meta tag can only be applied to HTML webpages, the X-Robots-Tag can be applied to HTML webpages and non-HTML resources like images, videos, and PDF files.

Even though the X-Robots-Tag works for HTML webpages as well, most bloggers prefer the robots meta tag for webpages since it is simpler to set up and manage. So, it is typical to find bloggers using:

  • The robots.txt file to set rules for the entire site
  • The robots meta tag to create rules for HTML webpages
  • The X-Robots-Tag to set rules for non-HTML resources

That said, it is crucial to avoid setting conflicting directives across the three protocols, as conflicts can cause SEO issues that affect how the site, its content, and its resources appear and rank on search results pages.

Importance of the X-Robots-Tag

The X-Robots-Tag is essential because it can give search engine crawlers instructions for non-HTML resources. Without it, it would be challenging to keep search engines from indexing these resources, since a robots meta tag cannot be added to a non-HTML file.

The X-Robots-Tag is also helpful for specifying crawl rules for a group of URLs. In this case, it is used with regular expressions (regex) that apply the same X-Robots-Tag rule to multiple URLs that follow the same pattern.

Without the X-Robots-Tag, bloggers would have to use the robots.txt file or set the crawl rules on the individual pages one by one. 
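
In practice, that kind of pattern matching usually lives in the web server configuration (for example, a rule that targets every PDF file). The short Python sketch below only illustrates the idea; the patterns and paths are made-up examples.

import re

# Hypothetical rules: each regex pattern maps to an X-Robots-Tag value.
RULES = [
    (re.compile(r"\.(pdf|docx?)$", re.IGNORECASE), "noindex, nofollow"),
    (re.compile(r"^/internal/"), "noindex"),
]

def x_robots_tag_for(path: str) -> str | None:
    """Return the X-Robots-Tag value for a request path, or None if no rule matches."""
    for pattern, value in RULES:
        if pattern.search(path):
            return value
    return None

print(x_robots_tag_for("/reports/q3-summary.pdf"))  # noindex, nofollow
print(x_robots_tag_for("/blog/hello-world"))        # None

A web server such as Apache or Nginx can apply the same kind of regex matching directly in its configuration, which is the more common setup.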

Some Common X-Robots-Tag Directives

There are multiple X-Robots-Tag directives. However, not all are supported by search engines. Below are the ones supported by Google. Keep in mind that other search engines may not support these directives or may interpret them differently. 

1 Noindex

The noindex rule instructs search engines not to display the resource or webpage on search results pages. 

X-Robots-Tag: noindex

2 Nofollow

The nofollow rule instructs search engines not to follow the links on the webpage or resource.

X-Robots-Tag: nofollow

3 None

The none rule combines noindex and nofollow. It instructs search engines not to follow the links on the webpage or resource and not to display it on search results pages. Using it is the same as using the X-Robots-Tag: noindex, nofollow directive.

X-Robots-Tag: none

4 Nosnippet

The nosnippet rule instructs search engines not to display text or video snippets from the webpage in search results.  It also prevents the content from being used for Google AI overviews. 

X-Robots-Tag: nosnippet

5 Notranslate

The notranslate rule instructs Google not to provide any translation for the content on the webpage. If the rule is not present, Google will translate the title and snippet if they are in a different language from that of the visitor. 

X-Robots-Tag: notranslate

6 Noimageindex

The noimageindex rule instructs search engines not to index the images on the webpage. This means the images on the webpage will not be displayed in search results. 

X-Robots-Tag: noimageindex

7 Indexifembedded

The indexifembedded rule instructs search engines that they are allowed to index the content of a page if that content is embedded in another page using iframes or similar HTML elements. This rule is only applicable if the X-Robots-Tag contains a noindex tag.

X-Robots-Tag: noindex, indexifembedded

8 Max-snippet:[number]

The max-snippet:[number] rule specifies the maximum number of characters the blogger permits Google to include in its text snippets or AI overviews. Replace [number] with the maximum number of characters you want Google to include in the text snippet.

  • Set it to -1 to let Google choose the snippet length
  • Setting it to 0 is the same as using the nosnippet rule

X-Robots-Tag: max-snippet:10

9 Max-image-preview:[setting]

The max-image-preview:[setting] specifies the maximum size of image previews displayed on search results pages.  The settings may be none, standard, or large. 

  • None: The blogger does not want Google to display an image preview
  • Standard: The blogger wants Google to display the default image preview
  • Large: The blogger wants Google to display a larger image preview that may be the size of the viewport

X-Robots-Tag: max-image-preview:none

10 Max-video-preview:[number]

The max-video-preview:[number] rule specifies the maximum number of seconds that can be used for a video preview.

  • Set it to -1 to allow video previews of any length
  • Set it to 0 if you want Google to display a static image instead of a video preview

X-Robots-Tag: max-video-preview:10

11 Unavailable_after:[date/time]

The unavailable_after:[date/time] rule instructs Google not to display the webpage on search results after the specified date and time. Google will reduce the rate at which it crawls that URL after the date.

The date and time should be specified using a common format like RFC 822, RFC 850, or ISO 8601. You can also specify a date only, without a time.

X-Robots-Tag: unavailable_after:2025-12-12
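
If you generate this value programmatically, Python's standard library can produce both formats. The snippet below is only an illustration and reuses the date from the example above.

from datetime import datetime, timezone
from email.utils import format_datetime

# The same cutoff date as in the example header, assumed to be midnight UTC.
deadline = datetime(2025, 12, 12, tzinfo=timezone.utc)

print(deadline.isoformat())       # ISO 8601: 2025-12-12T00:00:00+00:00
print(format_datetime(deadline))  # RFC 822 style: Fri, 12 Dec 2025 00:00:00 +0000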