The online world is constantly changing. Therefore, web developers and site owners need to stay informed on what is new and evolving with Google Search.

In this video, you can learn more about SEO, updates you can implement on your site, and updates to the debugging and monitoring tools for Google Search.


While SEO has a lot of non-technical aspects, since this video is geared more toward developers, the more technical aspects of SEO will be covered. A large part of technical SEO is about making it possible for search engines to fetch HTML pages and understand the content hosted there. For this to happen, search engines first need to know about the URLs. Once the URLs are known, the pages can be fetched; this is called crawling.
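One common way to make URLs known to search engines is an XML sitemap, typically referenced from your robots.txt file. A minimal sketch (the domain and dates below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about</loc>
  </url>
</urlset>
```

A `Sitemap: https://www.example.com/sitemap.xml` line in robots.txt, or submitting the sitemap in Search Console, tells crawlers where to find it.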


Once a page has been crawled, any JavaScript is processed and the results are analyzed. This step is known as indexing. Extracting headings and other text from the page and storing them as tokens in an index of your URLs is the easy part; the steps after that are more challenging.
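As a rough illustration of that "easy part," here is a hypothetical simplification (not Google's actual pipeline): extract the visible text from the HTML, then break it into tokens.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect text content, skipping script and style blocks."""

    def __init__(self):
        super().__init__()
        self.skip = 0        # depth inside <script>/<style>
        self.chunks = []     # visible text fragments

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip:
            self.chunks.append(data)


def tokenize(html: str) -> list:
    """Return lowercase word tokens from the page's visible text."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)
    return [token.lower() for token in text.split()]


print(tokenize("<h1>Hello World</h1><script>var x;</script>"))
# → ['hello', 'world']
```

Real indexing pipelines also handle rendering, canonicalization, language detection, and much more; this sketch only shows the text-to-tokens idea.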

How do search engines interpret content? Search engines use various machine-readable elements to decide which pages to store in the index and which pages to skip. These elements are called structured data. During processing, search engines also look for links to new pages, both links within the website and links going out. Links should be well-formed HTML anchor elements that point at valid, fetchable URLs.

So we have crawling, followed by processing and indexing, and ending with looking for new URLs linked from those pages for future crawls. We also take the data gathered and use it to generate search results. Of course, there is much more to these technical aspects.

Crawling with HTTP/2

One of the recent changes in Google's fetching of web pages is the shift to enable HTTP/2 crawling. HTTP/2 is the next version of HTTP, the protocol primarily used for transferring data on the internet. With HTTP/2, a single TCP connection can be opened to efficiently request multiple files in parallel. As a result, Googlebot, the crawler, no longer needs to spend as much time crawling a server as before.
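Whether a connection can use HTTP/2 is negotiated during the TLS handshake via ALPN. As a quick self-check for your own server, here is a sketch using only the Python standard library; a result of "h2" means the server offers HTTP/2 on that connection (the hostname in the example is just an illustration):

```python
import socket
import ssl
from typing import Optional


def negotiated_protocol(host: str, port: int = 443) -> Optional[str]:
    """Connect over TLS and return the ALPN-negotiated protocol.

    "h2" means the server accepted HTTP/2; "http/1.1" or None means
    it did not on this connection.
    """
    ctx = ssl.create_default_context()
    # Offer both protocols and let the server pick.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()


if __name__ == "__main__":
    print(negotiated_protocol("www.google.com"))
```

Note that server support is only one side: as the text below explains, Google also has to determine a likely efficiency gain before it crawls a site over HTTP/2.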

In SEO terms, this falls into what is known as the crawl budget, which is a mix of how many URLs Google wants to crawl from your website, the crawl demand, and how many URLs our systems think your server can handle without problems, the crawl capacity. With HTTP/2 crawling, a single connection can carry requests for multiple URLs. Thus, HTTP/2 crawling gives our systems the ability to request more URLs with a similar load on your servers. The decision to crawl with HTTP/2 is based on whether the server supports it and whether our systems determine a possible efficiency gain.
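The interaction can be pictured with a toy model (the formulas and numbers here are entirely hypothetical, just to illustrate that the budget is bounded by both demand and capacity, and that multiplexing reduces connection count):

```python
import math


def effective_crawl_rate(crawl_demand: int, crawl_capacity: int) -> int:
    """Toy model of crawl budget: bounded by both how much Google
    wants to crawl and what the server can handle."""
    return min(crawl_demand, crawl_capacity)


def connections_needed(urls: int, streams_per_connection: int) -> int:
    """With HTTP/2, several requests can share one TCP connection;
    with one-request-per-connection crawling, streams = 1."""
    return math.ceil(urls / streams_per_connection)


assert effective_crawl_rate(500, 200) == 200  # capacity is the limit
assert connections_needed(120, 1) == 120      # one stream per connection
assert connections_needed(120, 12) == 10      # multiplexed streams
```

The point of the second function: the same number of URLs can be fetched over far fewer connections, which is what lets crawling scale without increasing server load.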

Since starting this experiment, Google is now crawling more than half of all URLs with HTTP/2. In addition, the number of connections and the bandwidth used have gone down significantly thanks to stream multiplexing and header compression. These improvements help both our crawling and your website's serving infrastructure.

Structured Data

When it comes to machine-readable information on a page, Google primarily relies on structured data embedded in the HTML pages. JSON-LD has become the most popular way for websites to provide structured data; all of Google's modern search-specific metadata can be provided through JSON-LD and the schema.org vocabulary.
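A minimal JSON-LD block using the schema.org vocabulary might look like this (all values are placeholders, and Google's rich-result features typically require more properties, such as an image, than shown here):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Simple Pancakes",
  "author": {
    "@type": "Person",
    "name": "Example Author"
  },
  "recipeIngredient": ["flour", "milk", "eggs"]
}
</script>
```

The block sits anywhere in the page's HTML and describes the page's content in a form crawlers can parse without interpreting the visible layout.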

Started 10 years ago, schema.org is a globally accepted standard and open vocabulary for expressing information. It is an essential part of the open web infrastructure, as it offers websites a stable foundation to build on. The vocabulary is constantly being expanded as new technologies and features require it. A shared vocabulary allows developers to maximize their implementations and enables others to use the data.

Google recently open-sourced Schemarama, which provides structured data parsing and validation tools, making it easier to consume structured data and integrate automated testing into your production pipelines. In addition, the search gallery within the developer documentation tells you more about structured data and everything you can do with it. For example, there is recipe structured data, new options for online and offline events, options for home activities, and practice problems for students. Giving this extra information does not increase your website's ranking in Search, but it allows your pages to appear more prominently in the search results when users are looking for them.

Structured data is also used for videos, and while there are only a few kinds of supported structured data, this information can be important if you embed videos on your website. You can host videos either on your own website or on many popular video hosting platforms; both are supported by Google Search. In addition, Search now allows users to skip directly to important moments in a video, making it quicker and easier for users to access your content. To enable these features, there are two types of markup.

With clip markup, a webpage can provide information about segments, or clips, within a video. Search results will show these segments, making it easier for users to jump straight to a part of a video. If you cannot easily list the segment information for all hosted videos, you can use seek markup instead. With seek markup, Google can use machine learning to analyze your video content and automatically determine relevant segments.
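Clip markup is expressed as Clip objects nested inside a VideoObject. The property names below follow the schema.org vocabulary, but treat the exact shape as a sketch with placeholder values and check Google's video structured data documentation for the required fields:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example video",
  "description": "What the video is about.",
  "thumbnailUrl": "https://www.example.com/thumb.jpg",
  "uploadDate": "2021-06-01",
  "contentUrl": "https://www.example.com/video.mp4",
  "hasPart": [{
    "@type": "Clip",
    "name": "Introduction",
    "startOffset": 0,
    "endOffset": 45,
    "url": "https://www.example.com/video?t=0"
  }]
}
</script>
```

Each Clip gives a label, start and end offsets in seconds, and a URL that jumps to that timestamp; seek markup instead tells Google how to construct such timestamp URLs itself.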

Search Console

The primary set of tools we recommend for debugging and monitoring websites for Search is called Search Console.

To get started with Search Console, users need to verify ownership of the website they are working on. If someone else already manages your site in Search Console, it is as simple as having them add you as a user.
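One common verification method is an HTML meta tag in the home page's head section; the token comes from Search Console, and the value below is only a placeholder:

```html
<meta name="google-site-verification" content="TOKEN-FROM-SEARCH-CONSOLE" />
```

Other methods include uploading an HTML file, a DNS record, or verifying through Google Analytics or Tag Manager.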

Search Console recently added some features to make life a bit easier, starting with crawl stats. The new crawl stats report in Search Console gives you information such as how many requests were made, how your servers responded, and any availability issues along the way. Having this information directly in Search Console makes it easier for developers to recognize and address issues.

The page experience report summarizes which URLs on your website provide a good page experience, along with information on the other parts of the page experience ranking factor. Information about the different aspects of that ranking factor, such as HTTPS, mobile usability, and safe browsing status, is also available.


We reviewed SEO and how crawling and indexing work. Next, we discussed crawling with HTTP/2 and how it can reduce the load on your site's servers. Then we looked at structured data and, finally, Search Console, the ever-evolving tool for debugging and monitoring a website for Search.

There is a lot that has changed in Google Search, and more changes are to come.
