One of the key aspects of your website's visibility concerns its interaction with Google's crawlers, the bots that scan your content to index it in search results. Two key elements of this interaction are Google's "Crawl Stats" and the "Time to First Byte" (TTFB). Both of these factors can have a significant impact on how often Google visits your site and how quickly your content is indexed.
Google Crawl Statistics is a set of data that describes how Google's crawlers interact with your site. This data can include the number of crawl requests made, the time it took to download a page, the success or failure of the requests, and more. In general, your goal should be to have as low a crawl time as possible. A lower crawl time means that Google bots can crawl more pages in less time, which in turn can increase how often your site is indexed.
Time to First Byte (TTFB) is another key metric in the SEO world, one we have discussed extensively in a previous post. It is the time between when a client (such as a web browser or a Google crawler) makes an HTTP request and when it receives the first byte of data from the server. Again, a lower TTFB is generally better: it means data is transmitted faster, which can improve both the user experience and the effectiveness of Google's crawling.
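As a rough illustration, TTFB can be approximated with Python's standard library by timing how long the response status line and headers take to arrive after the request is sent. This is a sketch, not a substitute for dedicated measurement tools, and the host and path are placeholders:

```python
import http.client
import time

def measure_ttfb(host, path="/", port=80, timeout=10):
    """Approximate TTFB: seconds between sending the request and the
    arrival of the response status line and headers."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        start = time.monotonic()
        conn.request("GET", path, headers={"User-Agent": "ttfb-probe"})
        conn.getresponse()  # returns as soon as the first response bytes arrive
        return time.monotonic() - start
    finally:
        conn.close()
```

Note that this measures the full request/response turnaround up to the first bytes, so it includes DNS, TCP and server processing time, which is exactly what a crawler experiences.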
A common belief is that the Google bots crawling your site come from European data centers; however, a closer look reveals a different reality. By examining your web server's access.log files, you can geolocate the crawlers' IPs and discover that many of them come from the United States, in particular from Mountain View, California. This adds around 100 ms of extra latency, as can be seen by pinging the IP.
For example, let's consider this Googlebot IP, 184.108.40.206, which we retrieved from a recent log file:
184.108.40.206 - - [14/May/2023:03:37:25 +0200] "GET /wp-includes/js/jquery/jquery.min.js HTTP/2.0" 200 31017 "https://www.ilcorrieredellacitta.com/news" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Chrome/113.0.5672.63 Safari/537.36"
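A quick way to extract crawler IPs from access.log lines like this is a small script over the combined log format. The regex below is a sketch to adapt to your server's LogFormat, and the check on the User-Agent string alone is not proof of a genuine Googlebot (Google recommends a reverse-DNS verification, omitted here for brevity):

```python
import re

# Combined-log-format parser (a sketch; adjust to your server's LogFormat).
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<ua>[^"]*)"'
)

def googlebot_ips(lines):
    """Yield the client IP of every request whose User-Agent claims to be
    Googlebot. (Anyone can fake this header, so verify with reverse DNS.)"""
    for line in lines:
        m = LOG_RE.match(line)
        if m and "Googlebot" in m.group("ua"):
            yield m.group("ip")
```

The resulting IPs can then be fed to any geolocation database or to `ping`/`traceroute` to check where the requests actually originate.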
Most of you probably know that Google is headquartered in Mountain View and have heard the name thousands of times, yet few know where Mountain View actually is geographically.
Mountain View is a city in the Silicon Valley region of Santa Clara County, California, in the United States of America. It sits on the west coast of the North American continent, facing the Pacific Ocean, which places it about as far from Europe as one can get on the usual routes: the Atlantic Ocean and the full width of the North American continent separate the two, a distance of thousands of kilometers.
That distance inevitably translates into travel time. Even taking optical fiber as the propagation medium, with a speed close to (but below) that of light, we must add the processing time of the network devices, the routers and switches at each hop along the path, as the following traceroute shows.
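A back-of-the-envelope calculation shows why a ~100 ms ping to Mountain View is unsurprising. The distance used below is an assumption for illustration, and real cable paths are longer still:

```python
# Back-of-the-envelope propagation delay. The ~9,500 km figure is an
# assumed great-circle distance between a European data center and
# Mountain View; real cable paths are longer, and each hop's routers
# and switches add processing time on top.
C_FIBER_M_PER_S = 2.0e8        # light in fiber travels at roughly 2/3 of c
DISTANCE_M = 9_500 * 1_000     # assumed distance in metres

one_way_ms = DISTANCE_M / C_FIBER_M_PER_S * 1_000
rtt_ms = 2 * one_way_ms        # a ping measures the round trip
print(f"one-way = {one_way_ms:.1f} ms, round trip = {rtt_ms:.1f} ms")
```

Under these assumptions the physics alone accounts for roughly 95 ms of round-trip time, before any router queuing or server processing is added.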
As an additional test, let's now ping the IP 220.127.116.11 to measure latency in the opposite direction, i.e. from the web server to the Google bot.
We clearly see a latency of 104 ms; compared with an efficient European route, which pings in at most 20 ms, that is an "extra" of roughly 80 ms.
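Where ICMP ping is filtered, a comparable round-trip figure can be obtained from the TCP handshake itself. This Python sketch times a TCP connect (one handshake is roughly one round trip); host and port are placeholders:

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=5):
    """Approximate the round-trip time using the TCP three-way handshake,
    useful where ICMP ping is blocked. One handshake = roughly one RTT."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1_000
```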
This latency may seem small, but it adds up over time, especially if your site has many pages or is crawled frequently.
A direct consequence of higher crawl times is a visible decrease in crawl requests from Google. This is due to a limitation known as the "crawl budget", which is the number of pages that Google can and will crawl in a given period of time. If your site takes too long to respond to crawler requests, Google may decide to limit the number of pages it crawls, which could in turn reduce your site's visibility in search results.
For example, let's look at a report from one of our former customers, who recently migrated his site to a well-known (at least self-described) high-performance hosting provider, and see what the move meant in practical terms.
It is immediately evident that following the provider change, the orange line of average response time soared, going from a good 160 milliseconds to a full 663 milliseconds, more than four times slower. In plain terms, Google's bots now take four times as long to retrieve content from the site as they did before.
Indeed, comparing total crawl requests before and after the migration, they dropped from about 30,000 to just 10,000, a loss of roughly two thirds: an extremely worrying figure, especially for editorial sites that publish a lot of content every day.
Repeating the analysis of Google's crawl statistics about 20 days later, we see that the average response time has gone from the earlier optimal 160 ms to 1,550 ms, with peaks above 3,000 milliseconds: a slowdown in crawl speed of 10 to 20 times.
Crawl requests have, unsurprisingly, worsened further, going from about 30,000 to just 5,000, effectively decreeing the death of the site on search engines, Google News and Discover.
With regard to the TTFB seen by search engines, the SpeedVitals tool offers eloquent testimony, reporting the following metrics for European locations:
Despite the importance of these metrics, many site owners and engineers aren't fully aware of their impact. Some may not know what Google crawl stats are, while others may not understand the relationship between TTFB and crawl rate. This lack of awareness can lead to suboptimal decisions about hosting, site structure and search engine optimization, and makes it hard to properly evaluate a hosting provider like us, who to a layman or a self-styled expert may appear to "sell hosting like everyone else".
For example, a site owner might choose a hosting provider mainly on price, or on dazzling commercial promises and sensational slogans, ignoring the fact that beyond the marketing it is the facts that qualify a provider, and that slower or more distant servers increase TTFB and therefore reduce the effectiveness of Google's crawls. Similarly, a technician might focus on keyword selection or site design while overlooking that a complex site structure or inefficient code can increase crawl time and erode the crawl budget, undermining even the best intentions.
Google crawl stats and TTFB are two critical factors that can greatly affect your site's visibility in search results. To optimize your site's interaction with Google's crawlers, it's essential to understand these metrics and factor them into any decisions about your site. This can include choosing a suitable hosting provider like ours, optimizing your server-side software stack, and constantly monitoring your crawl stats and TTFB.
CDN services as a solution or palliative to the problem.
At least in theory, one of the most effective strategies for improving crawl time is the implementation of a robust software stack, including a properly configured server-side caching system with the highest possible HIT ratio.
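As a minimal sketch of what "HIT ratio" means in practice, the function below computes the share of responses served from cache given a list of cache-status values (for instance pulled from an X-Cache or X-Cache-Status response header; the exact header name depends on your stack and is an assumption here):

```python
def hit_ratio(statuses):
    """Share of responses served from cache. `statuses` is a list of
    cache-status values (e.g. from an X-Cache or X-Cache-Status response
    header; the exact header name depends on your stack and CDN)."""
    if not statuses:
        return 0.0
    hits = sum(1 for s in statuses if s.upper() == "HIT")
    return hits / len(statuses)
```

Feeding it the cache-status column extracted from your access.log gives a quick daily view of how much traffic actually bypasses the application backend.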
However, even with the best server-side configuration, accelerating the TLS session handshake, enabling OCSP stapling, TCP BBR and TCP Fast Open, and painstakingly tuning the kernel (as we routinely do at Managed Server Srl), we inevitably run into the limit imposed by physical distance. This is especially true for sites hosted in Europe that must interact with Google's crawlers based in Mountain View, California.
To overcome this challenge, many industry professionals are turning to Content Delivery Network (CDN) services in Platform as a Service (PaaS) mode. These solutions offer the possibility of having cached copies of site content on nodes closer to Mountain View through the use of AnyCast routing.
A CDN is a network of geographically distributed servers that work together to deliver web content quickly. These servers, known as points of presence (POPs), store copies of website content. When a user or bot requests access to this content, the request is sent to the nearest POP, thus reducing the time required to transmit the data.
The origin, in the context of a CDN, is the server or group of servers from which the original content comes. These origin servers deliver content to the CDN's POPs, which in turn distribute it to end users.
AnyCast routing is a method of routing network traffic where requests from a user are routed to the closest node in terms of response time. This can help significantly reduce TTFB since data doesn't have to travel long distances before reaching the user or bot.
In the context of computer networks, Anycast is a traffic routing method that allows multiple devices to share the same IP address. This method is particularly popular in Content Delivery Network (CDN) services and the Domain Name System (DNS), as it helps reduce latency and improve network resilience.
To understand how Anycast routing works, imagine a network of servers distributed across different geographical locations around the world, each sharing the same public IP address. When a user sends a request to that IP address, the request is routed to the server "closest" to that user. Here, "closest" doesn't necessarily mean physical distance, but rather distance in terms of network hops or the shortest response time. In practice, this means the user connects to the node that can respond to their request fastest.
The beauty of Anycast routing is that it is completely transparent to the end user. It doesn't matter where the user is or which server answers their request: the IP address is always the same. This makes the system extremely flexible and resilient. Should a server go offline or become overloaded, traffic can easily be redirected to another server without disruption to the user.
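The selection that Anycast delegates to the network can be caricatured in a few lines of Python. This is purely illustrative (real Anycast selection happens in BGP routing, not in application code) and the per-node RTT figures are made up:

```python
# Toy model of the choice Anycast delegates to BGP routing: every node
# advertises the same IP, and each request lands on whichever node is
# cheapest to reach. RTT figures used in tests are made up for illustration.
def pick_node(rtts_ms):
    """Return the node with the lowest round-trip time."""
    return min(rtts_ms, key=rtts_ms.get)
```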
However, it is important to point out that “CDN” is an umbrella term that can refer to a wide variety of services, each with its own quirks. Not all CDNs are created equal, and to get the maximum benefits from a CDN, it's essential to understand its specifics and configure it correctly.
In many cases, using common commercial CDNs may not bring the expected benefits. For example, while a CDN may have POPs near Mountain View, misconfiguration or suboptimal CDN design can result in these POPs not having a cached copy of site content. In this case, requests from Google crawlers are redirected to the origin, potentially degrading response speed.
This problem can be especially serious if the origin doesn't have a local caching system capable of serving content to crawlers within milliseconds. In that case, using a CDN can end up slowing down requests rather than speeding them up, negatively impacting crawl time and, consequently, site visibility in Google search results.
The problem can be further exacerbated if the CDN is not properly configured for caching. Some CDNs offer a large number of configuration options, each of which can have a significant impact on performance. If these options are not configured correctly, the CDN may not be able to serve cached content effectively, which can lead to longer response times.
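One practical way to verify whether a CDN POP is actually serving cached copies is to inspect the cache-status response headers. The header names below are common examples, not an exhaustive or authoritative list, since each CDN uses its own:

```python
# Cache-status header names differ per CDN (these are common examples,
# not an exhaustive or authoritative list).
KNOWN_CACHE_HEADERS = ("x-cache", "cf-cache-status", "x-cache-status")

def cdn_cache_status(headers):
    """Given a response-header mapping, return (header, value) for the
    first recognised cache-status header, or None if the CDN exposes none."""
    lowered = {k.lower(): v for k, v in headers.items()}
    for name in KNOWN_CACHE_HEADERS:
        if name in lowered:
            return name, lowered[name]
    return None
```

Running this against the headers of a few repeated requests quickly reveals whether crawlers are being served HITs from a nearby POP or MISSes that fall back to the origin.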
The best way to monitor Google Crawl Stats.
Regardless of the technologies used and the vendors involved, the most effective way to monitor Google crawl stats and average response time is through the use of Google Search Console. The Crawl Stats panel provides detailed and up-to-date information about Google's crawling activity of your website. We recommend that you access this section on a weekly basis to closely monitor your site's performance.
For optimal operation, the average response time should always stay below 200 milliseconds. This is the ceiling your server should respect when responding to Googlebot, and a lower value means Googlebot can crawl your pages more efficiently, improving your crawl budget and your standing with Google. You can track your average response time in the Crawl Stats section of Google Search Console.
The 200 ms figure is not arbitrary: it is the threshold explicitly indicated by Google (https://developers.google.com/speed/docs/insights/Server?hl=it) in its suggestions for improving server response time, and it should be treated as a best practice regardless.
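A simple way to act on that threshold during weekly monitoring is to flag the days whose average response time exceeds it. The input shape below ({'2023-05-14': 163, ...}) is an assumption for illustration; Search Console exports may differ:

```python
THRESHOLD_MS = 200  # the server response time ceiling suggested by Google

def flag_slow_days(avg_response_ms_by_day):
    """Return, sorted, the days whose average response time exceeds the
    200 ms guideline. The input shape ({'2023-05-14': 163, ...}) is an
    assumption for illustration; Search Console exports may differ."""
    return sorted(day for day, ms in avg_response_ms_by_day.items()
                  if ms > THRESHOLD_MS)
```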
If you notice that your average response time is over 200 milliseconds, or if you're having trouble with your crawl stats, please don't hesitate to contact us. Our experts are available to help you find a quick solution and improve the performance of your website. Remember, a fast and responsive website is not only beneficial for your users, it is also a key factor in good SEO.