In our last tutorial, we talked about the Meta Box. Specifically, we talked about the General Tab in the Meta Box. This tutorial is about the advanced tab.
The Meta Box’s advanced tab is home to – well, advanced options. This is where you address the technical side of SEO – stuff people usually refer to as “technical SEO.” Often with a suppressed shudder.
Luckily for you, you have Rank Math. We’ve taken the pain out of the process. You don’t have to stare at masses of raw HTML and agonize over every tag and attribute. Instead, you can tick a few boxes and move on.
Actually, there’s not a lot to worry about here. Most of the time you’ll be fine with the default settings. There may be times when you need to tweak some of them, so let’s cover what each setting does.
Let’s discuss all the options one by one.
Use Default Robots Meta
At the top of the panel, there are options that cover the robots meta tags. Robots meta tags are HTML tags that contain instructions for search spiders, crawlers, and any other kind of automated program that reads your page. They give you a place to tell Google what to do with your content on that specific page.
Most of the time, Rank Math will use the Default Robots Meta tags (which you can set inside the plugin’s settings – you can set different default values for pages and posts). But the advanced tab gives you the opportunity to use different Robots meta tags for each specific post or page.
So, what does a Robots meta tag do?
Specifically, it’s an opportunity to tell Google what not to do. When Google crawls a page, it usually:
- Indexes it
- Archives it (so people can see what the page looked like when the Google Bot visited)
- Follows the links on the page
- Indexes the images on the page
- Creates a snippet for use in search results.
But there are times when you don’t want Google to do one or more of these. Maybe you’ve published something that you only want to reveal to people who are on your site – some kind of special promotion for your loyal readers.
The rest of the world has no right to know about it! If Google indexes the page, any creep with an Internet connection can read about it and grab a discount!
That’s why Google gave us the right to opt-out of their index. There are 2 ways you can do that. The first is with a robots.txt file – that’s another topic (Rank Math has a tool for handling robots.txt files).
The other option is to add a special meta tag in the post (or page’s) head section. That’s what this section of the Advanced tab is all about.
As you probably noticed in the image above, the “Use Default Robots Meta” option is set to On. That means Rank Math will use the meta settings that you’ve configured in your settings.
If you’d like to set custom Robot tags, you’ll have to switch the setting Off.
Doing so will reveal a set of additional options on the screen just below it.
There are a total of 5 meta robots tags. Let’s understand what each of them does.
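Each of these options simply adds a directive to a robots meta tag in the page’s head section. As a rough sketch, ticking all five would produce a single tag like this:

```html
<!-- One robots meta tag can carry several comma-separated directives -->
<meta name="robots" content="noindex, nofollow, noarchive, noimageindex, nosnippet">
```

In practice you’ll rarely want all five at once – you tick only the boxes you need, and the tag’s content attribute reflects your choices.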
No Index
So, you want to keep a page out of the search results? Just tick the box that says “No Index”.
Now Google will read the page, but it won’t add it to the index. It will still see the links and follow them. And it will still index any images it finds.
Before we move on to the next options, we should draw your attention to one very important point. If you don’t want this page to be indexed, then there’s no reason to carefully optimize the content for a keyword. It will never rank for any search because Google will not add it to the index.
No Archive
The next option (“No Archive”) tells Google to hide the “cached” link in the search snippet. The cached link allows people to read your content without visiting your site. Instead of getting the page from your server, they see the text that Google read when it crawled the page.
Should you use this option?
That depends. If your website makes most of its revenue from ads, then it’s probably wise to turn No Archive on. Even if you monetize your website in other ways, there are always articles you may want to keep cached. Top-of-the-funnel posts are a prime example: by leaving caching enabled for them (setting No Archive off), users can still enter your funnel through the cached copy. For pages that are very important, like a high-converting landing page, a subscription page, or a lead magnet page, you can, and should, turn caching off by turning on No Archive.
No Follow
The next option is “No Follow”. This tells Google not to follow the links on the page. Specifically, that means the links should not pass PageRank, and Google shouldn’t crawl them.
PageRank is the “link juice” that Google uses to estimate how popular a web page is. It’s a very important part of Google’s algorithm. Many people will argue that PageRank is dead, but that’s not true. Google has stopped sharing PageRank scores for websites and pages publicly, but PageRank is still a core part of Google’s algorithm and will be for the foreseeable future.
The “No Follow” meta tag is very similar to the rel=“nofollow” attribute you can add to links. Here’s the difference: the rel=“nofollow” attribute only affects one link. The Robots No Follow meta tag affects every single link on the page.
When Google sees the “No Follow” meta tag, it’s as if there were no links on the page. It ignores all of them – internal, external, even anchor links within the page.
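To make the difference concrete, here’s a sketch of both forms (the URL is hypothetical):

```html
<!-- Page-wide: every link on this page is treated as nofollow -->
<meta name="robots" content="nofollow">

<!-- Per-link: only this one link is affected -->
<a href="https://example.com/some-page" rel="nofollow">an individual link</a>
```

If the meta tag is present, the per-link attribute is redundant – the whole page is already covered.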
So, when would you use it?
There’s no definite answer, but a common example would be a page with user-generated content. Whether it’s a product review page, a forum post, or a wishlist, it’s a good idea to No Follow the page to discourage malicious activity and spam.
A real-world example is Amazon Wishlists. If you’ve shopped on Amazon, you probably know that you can create wishlists there. What you might not know is that you can even include products from other websites on the list. Since a wishlist has to point to one or more specific products, it could be abused by spammers. Thankfully, the wishlist page is No Indexed and No Followed, which makes spamming a zero-ROI activity.
No Image Index
The No Image Index robots meta-tag tells Google not to include any of the page’s images in the image search results. This can be a good or bad thing – it depends on your images.
If the images are interesting, people are likely to click through to your page to read more. If the images are boring or generic, they’re unlikely to.
There are other considerations to weigh, too. Sometimes you don’t want an image to be taken out of context. When the image is displayed in Google’s image search results, there is no context.
Another issue is copyright theft. Some website owners use image search to find pictures they can “repurpose” on their own sites. If you’ve put a lot of time and effort into creating an image (for instance, if it’s a work of art), you don’t want strangers to use it without your permission.
Then there’s the issue of “bandwidth theft”. Sometimes people aren’t content with stealing an image. They want you to also host it for them.
Instead of copying the image to their server, they add a “hot link” to the image’s original URL. When someone visits their site, the browser sends the request to your server.
You end up paying the cost of serving the image (possibly to thousands of people) while the thief reaps all the benefits.
So, should you let Google index your images?
Weigh up the pros and cons. And remember, we’re talking about the images on this particular page or post. This is not a site-wide setting (you can make it a site-wide setting by editing the default robots meta tag in Rank Math’s settings interface).
No Snippet
The No Snippet tag tells Google NOT to show a text snippet in your search listing. Google will just display your page title and URL.
At first, this seems like a massive disadvantage. Without a text snippet, Google users have very little information to go on. Is the result right for them? Can they trust the title? Suddenly, every other result looks more tempting.
So why would anyone ever use this tag? Well, there are times when you don’t want people to read your content until they reach your site.
Let’s say Bill has an incredibly offensive blog. Bill’s blog is so offensive he has to flash up a popover warning as soon as you visit it. To see the content, you have to tick a box agreeing to his terms and conditions, unconditionally absolving him of any rage or trauma you may experience.
Bill loves offending people, so he wants his results to show up in the search engines. But his lawyers have warned him that his content is so inflammatory that he can’t let anyone read it (not even one word) without clicking the “I consent” link.
To avoid a million lawsuits, Bill uses the “No Snippet” meta tag on all his pages.
If that example sounds a little contrived, it’s probably because there are very few cases where it makes sense to use “No Snippet”.
To be honest, Google probably invented it to protect themselves from the nuts who accuse them of copyright theft when they see a snippet of their text used without permission. These people do exist, and they have tried to sue Google in the past.
Canonical URL
Google hates duplicate content because their users hate it. Specifically, people get annoyed when they click on several results only to find the same text staring at them.
That’s why Google filters duplicate content out of the search results. What’s more, duplicate content is often a sign of web spam. Some black hat SEOs use duplicate content to build backlinks or generate millions of web pages targeting millions of long-tail keywords.
But duplicate content isn’t always a bad thing. Sometimes content is so great that it gets republished. Newspapers, magazines, and TV do this all the time. And it happens on the web, too.
This causes problems with the way Google calculates the popularity of content. If an article is published in ten places, how can Google calculate its PageRank?
Let’s say there are 2 articles on the same subject. For the sake of the example, we’ll pretend it’s about “space golf”. The first article (“What is Space Golf?”) is published once, on a single blog. It’s well received by the space-golf community (retired astronauts), and people link to it from their blogs. It gets a total of 50 backlinks.
The second article (“True Confessions of a Space Golfer”) is published in multiple places. It’s published on the writer’s blog. A “space gossip” website gets permission to publish it. And the author sells it to a magazine for retired astronauts – which also publishes it on their site.
Lots of people see this article, on different sites. The author’s original blog has a small audience, so it only gets two links there. The “space gossip” website is more popular, so the article gets 30 links on their site. The magazine’s site is really popular – the article gets 45 backlinks there.
So, which article is more popular (in terms of backlinks)? Obviously, it’s the “True Confessions of a Space Golfer”. This article was published on three sites and got 77 backlinks.
“What is Space Golf?” did OK. It got 50 backlinks, placing it in second place.
It’s obvious. Except it wasn’t obvious to Google in the past.
Here’s what would have happened in the past. Google would have indexed each of these pages, and it would have considered them to be four separate pages (one with the text of “What is Space Golf?” and three with “True Confessions of a Space Golfer”).
Sometime later, when a user searched for “space golf”, Google would have gathered these four pages into a list of results. Then it would have checked for duplicate content.
Three of the pages were the same – and that’s not OK. So two of them had to go. Google would keep the one with the most backlinks (the magazine for retired astronauts).
Finally, Google would count the backlinks to work out the ranking. “What is Space Golf?” has 50 backlinks. “True Confessions of a Space Golfer” (on the magazine site) has 45. Clearly, “What is Space Golf?” is the most popular article.
Except it isn’t. This is a double injustice.
First, the wrong article is given the top ranking. Second, the original author’s site isn’t even included in the results!
Something had to change, so in 2009, a new HTML meta tag was created. The Canonical URL tag was recommended by Google, Microsoft Live (now Bing) and Yahoo.
The basic idea is simple. If you’re copying content from another page (with permission, of course), you should include the canonical URL tag. When a search engine sees this, it knows which page it should include in the results. The PageRank of the copied pages is passed directly to the original page, too.
Canonical URLs have other uses in other types of sites (such as e-commerce stores) where multiple URLs can lead to the same content. It helps Google (and other engines) to pass the correct ranking to the right page. And it prevents them from panicking about duplicate content spam.
For example, on an online store, the same product can often be reached from several different URLs.
Not to mention all the different categories, tags, filters, archives, and wishlists that the product could be listed in. You understand that all these links point to the same product, but Google doesn’t. To Google, a different link means different content, unless you specify so.
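As a sketch (all URLs here are hypothetical), the same product might live at several addresses, and each variant would carry the same canonical tag in its head section:

```html
<!-- The same product page might be reachable at:        -->
<!--   https://example.com/product/blue-widget/          -->
<!--   https://example.com/product/blue-widget/?ref=sale -->
<!--   https://example.com/widgets/blue-widget/          -->

<!-- Every variant points to the one "real" URL: -->
<link rel="canonical" href="https://example.com/product/blue-widget/">
```

With that tag in place, search engines consolidate the duplicates and credit all the ranking signals to the one canonical address.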
So, when would you use the Canonical URL tag? There are two cases that come to mind:
- When, for some reason, you’re publishing the exact same content on more than one page or post of your own site.
- When you’re republishing an article from another site.
On e-commerce stores, adding a canonical URL to your products will help search engines understand that all the links are for the same product, regardless of the URL they were found at.
Sitemap
Sitemaps are XML files that tell search engines about the pages of your site. Rank Math builds and updates your sitemap for you, and you can set it to ping Google whenever you publish a new post or page (or edit an old one).
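For context, each page in a sitemap is described by a small block of XML – something like this (the URL and date are illustrative):

```xml
<url>
  <!-- The page's address and when it was last modified -->
  <loc>https://example.com/sample-post/</loc>
  <lastmod>2020-06-01</lastmod>
</url>
```

You never have to write this by hand – Rank Math generates and updates these entries automatically.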
The sitemap section gives you fine-grained control over the settings for each page and post. You can tell Rank Math to include the page in the sitemap, or leave it out.
If you don’t want a page to be indexed, don’t include it in the sitemap. Otherwise, you should include it.
Most of the time, you can leave this setting at its default value.
Redirect
A redirect is a pretty magical thing that web servers can do. Let’s say you type a URL into your browser – something like http://www.example.com/123
As you would expect, a web page loads. But when you look at the address bar, you see a different URL. The server at example.com sent a redirect signal to your web browser – it told it to go to a different URL instead.
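Under the hood, a redirect is nothing more than a status code and a Location header in the server’s response. A permanent (301) redirect from the hypothetical /123 URL above might look like this (the destination URL is made up for illustration):

```http
GET /123 HTTP/1.1
Host: www.example.com

HTTP/1.1 301 Moved Permanently
Location: http://www.example.com/my-new-page/
```

The browser sees the 301 response and automatically requests the URL in the Location header.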
Rank Math makes it easy to set up redirections – you can do it through Rank Math’s SEO menu, or you can do it on the edit page for individual posts and pages.
In the redirect section, you can set up a redirect from the post (or page) URL to a different URL. It can be an internal URL (on your site) or an external one (a different site).
When setting up a redirect with Rank Math, you have to set up 2 things.
- Redirection Type
- Destination URL
The redirection type is as important as the destination URL, as there are temporary and permanent redirections. A temporary redirection tells search engines to keep the original URL in their index and check whether the redirection is still in place every time the URL is accessed. If a redirect is permanent, that won’t happen – search engines will eventually replace the original URL with the destination URL in their index.
There are a couple more options for the redirection type as well. Here is the complete list.
Here is a brief summary of the options.
- 301 Permanent Move: The redirect is permanent, and future requests to the URL should be made directly to the destination URL
- 302 Temporary Move: The resource has been moved temporarily to the destination address. Future requests should still be made to the original URL
- 307 Temporary Redirect: A 307 is quite similar to a 302, with some technical differences
- 410 Content Deleted: Use this status if the content on the original URL has been deleted. Useful for pillar content and in-depth guides which can be updated over time on different URLs
- 451 Content Unavailable for Legal Reasons: If you write about a topic that could be temporarily banned (religion, politics, or other sensitive topics), this status could come in handy
After you’ve selected your redirect type, enter the destination URL in the field provided.
To save the redirect, just save your post (as a draft, or by updating it). If, in the future, you want to delete the redirect, all you have to do is clear the URL in the destination field and save your post again.
You can also delete the redirect from the Redirect options in Rank Math, but that’s a method for another tutorial.