
What is an Index?

An index is a database containing the webpages and other content a search engine wants to display on its search results pages.

Search engines do not search the web in real time. Instead, they regularly crawl the web to find new and updated content. When they identify a piece of content they may want to include in search results, they add it to their index. 

When a visitor enters a search query into the search engine, it retrieves the relevant result from its index and displays it on the search results page.  

The process of adding content to the index is called indexing. Indexing takes you a step closer to appearing in search results, but it does not guarantee that the indexed content will actually be shown.

How Search Engines Work

To understand indexing, you need to know how search engines work. The process of discovering content online and presenting it to visitors is a three-step process:

  • Crawling
  • Indexing 
  • Serving

Crawling is the process of discovering and browsing content available on the web. Google uses crawlers, which are also called bots or spiders, to crawl the web. 

These crawlers regularly look for new content published on the web. They also revisit previously crawled content to check whether it has been updated since their last visit. When a crawler identifies content that it wants to appear in search results, it adds it to its database. This database is called an index, and a webpage’s ability to get indexed is called indexability.

Google reviews and evaluates the indexed content for relevance and quality. When a visitor enters a query into Google, Google retrieves the relevant content from its index and then displays it on its search results pages. 

Content that is not indexed cannot be displayed on search results pages. Therefore, if you want your content on Google results pages, you must ensure that Google can index it. 

Factors That Affect Indexing

Many factors can affect indexing. Some originate on your end, while others originate on Google’s end. You typically cannot do anything about indexing issues on Google’s end, so the factors below focus on issues you can fix on your own site.

1 Robots.txt File

Your robots.txt file may contain rules restricting search engine bots from crawling certain parts of your site. You may have set up these rules yourself, but they can also result from misconfigurations in the file.
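
For illustration, a minimal robots.txt file might look like the sketch below. The /private/ path and sitemap URL are placeholders, not recommendations for your site; note how a single overly broad rule can block crawling of the entire site.

  # Allow all crawlers, but keep them out of a hypothetical /private/ section
  User-agent: *
  Disallow: /private/

  # A misconfiguration like the following would block the whole site:
  # Disallow: /

  Sitemap: https://www.example.com/sitemap.xml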

2 Meta Robots Tags

Google and many other search engines do not index webpages with a noindex meta tag. The tag is typically placed in the webpage’s <head> section. So, if you want Google to index a webpage, ensure that you have not applied the noindex meta tag to it.
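
For reference, the noindex rule looks like this when placed inside a page’s <head> section; Google also supports sending the same rule as an X-Robots-Tag HTTP header.

  <head>
    <meta name="robots" content="noindex">
  </head>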

3 Server Errors

Google will typically reduce the rate at which it crawls a site when it encounters a slow server or too many server-side errors. These errors typically begin with 5xx, such as the HTTP 500 Internal Server Error and 503 Service Unavailable errors. Google will also reduce the rate at which it crawls your site when it repeatedly encounters a 429 Too Many Requests error.
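
If you suspect server errors, you can check which status code a URL returns, for example with curl. The URL below is a placeholder, and the response is only an illustration of what a 503 with a Retry-After header might look like.

  curl -I https://www.example.com/some-page/

  HTTP/1.1 503 Service Unavailable
  Retry-After: 3600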

4 Missing Content

Google cannot crawl or index content that is missing from the web. Such content is no longer available at its expected location, so the webpage returns a 404 Not Found error. Google will typically deindex content that keeps returning a 404 Not Found error for an extended period.

5 Crawl Budget

Crawl budget is the number of pages a search engine bot will crawl during a given period. Google assigns a crawl budget to every site it crawls. The assigned crawl budget is not based on the site’s actual server capacity but on Google’s assessment of the site’s capacity.

6 XML Sitemap

An XML sitemap helps search engines find your content and understand your website’s structure. While Google can find your content without a sitemap, it sometimes relies on one for discovery. In such situations, a missing, incomplete, or poorly structured sitemap can hinder the discovery and indexability of certain pages.
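
A basic XML sitemap follows the sitemaps.org protocol. The sketch below lists a single placeholder URL; a real sitemap simply repeats the <url> entry for each page you want discovered.

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.com/what-is-indexing/</loc>
      <lastmod>2024-05-01</lastmod>
    </url>
  </urlset>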

7 Duplicate Content

You can end up with duplicate content when you have the same or similar content on multiple URLs. These URLs could be on the same site or on multiple sites. To avoid this, Google recommends specifying the preferred version of the page using the rel="canonical" tag.
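
For example, a duplicate or parameter-based URL can point to the preferred version of the page with a canonical link element in its <head>; the URL below is a placeholder.

  <link rel="canonical" href="https://www.example.com/what-is-indexing/">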

8 Internal Linking Structure

Google relies on links to find your content. A well-organized internal linking structure allows Google to discover every page on your site. Orphaned pages, that is, pages with no internal links pointing to them, may not be easily discovered and indexed.
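
An ordinary contextual link is enough to connect an otherwise orphaned page; the anchor text and URL below are placeholders.

  <p>For more detail, see our <a href="/guides/crawl-budget/">guide to crawl budget</a>.</p>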

9 HTTPS Protocol

Google prefers sites to use the more secure HTTPS over the less secure HTTP. You can convert an HTTP site to HTTPS by installing a Secure Sockets Layer (SSL) certificate on your site.
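
After installing the certificate, you will also want HTTP requests to redirect to HTTPS so that crawlers consolidate on the secure URLs. As a sketch, assuming an nginx server and placeholder domain and certificate paths, the configuration might look like this:

  server {
      listen 80;
      server_name www.example.com;
      # Permanently redirect all HTTP traffic to HTTPS
      return 301 https://www.example.com$request_uri;
  }

  server {
      listen 443 ssl;
      server_name www.example.com;
      ssl_certificate     /etc/ssl/certs/example.crt;
      ssl_certificate_key /etc/ssl/private/example.key;
      # ...the rest of the site configuration...
  }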

10 Thin or Low-Quality Content

Google only wants to display helpful pages on search results pages. So, it may not index pages with thin content or content that provides little value to visitors.

11 Domain or URL Structure

Multiple URLs can lead to the same content, which may affect indexing, particularly when the site has an unclear, long, or complex URL structure. Dynamic URLs and URLs that contain excessive parameters can cause the same problem.
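
For example, the hypothetical URLs below all lead to the same content; the parameter-heavy variants are harder for crawlers to consolidate than the clean, descriptive one.

  https://www.example.com/product.php?id=123&session=abc123&sort=desc
  https://www.example.com/product.php?id=123&ref=homepage
  https://www.example.com/products/blue-widget/   (clean, descriptive URL)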
