January 12 2024

Understand the logic of Varnish Cache

Guide to advanced understanding of Varnish Cache logic and its impact on website performance.

Varnish Cache, an advanced HTTP reverse proxy, is designed to maximize the speed of websites. By strategically placing itself between the client and server, Varnish not only significantly reduces Time To First Byte (TTFB) and lightens the load on the backend server, but also leverages its sophisticated caching mechanism to effectively manage both static and dynamic content. This approach to caching not only speeds up page loading for end users, but significantly contributes to the scalability of the server infrastructure, making handling unexpected traffic spikes a more manageable task.

A key aspect of Varnish is its proprietary configuration language, the Varnish Configuration Language (VCL). VCL offers unprecedented flexibility, allowing detailed customization of caching policies and traffic-management rules. To use Varnish effectively and exploit its full potential, it is essential to understand not only its underlying logic but also the flow of data through its different subroutines. This in-depth understanding of VCL and Varnish logic is crucial to optimizing a website's performance and building configurations that specifically address business needs. The analysis below explores this workflow and the Varnish subroutines in detail, emphasizing the contribution of each step to overall performance and efficiency.

Phase 1: Initial Processing and Cacheability Determination

In this section, we will explore Phase 1 of the Varnish Cache process, a critical point that determines the treatment of incoming HTTP requests. Known as the “Initial Processing and Cacheability Determination,” this is the phase where key caching management decisions are made. We will see how Varnish performs initial checks to evaluate whether a request can be served from the cache, thus optimizing content delivery and reducing the load on the backend server. We will also analyze the sophistication with which Varnish manages cookies, user authentication and HTTP header analysis to determine the most effective caching strategy.

vcl_recv : Reception and Preliminary Evaluation of Requests

When an HTTP request arrives, Varnish incorporates it into its initial lifecycle via the subroutine vcl_recv. This is the critical point where fundamental decisions are made that will influence the entire subsequent path of the request. At this stage, the Varnish Configuration Language (VCL) allows system administrators to write complex, highly configurable rules that examine every aspect of the incoming request.

This subroutine is responsible for a wide range of controls:

  • Cookie Control: Varnish can inspect request cookies to decide whether a request is personalized and therefore not cacheable, or whether the cookies can be safely ignored to facilitate caching.
  • Authentication and Authorization: Verifies the user's identity and permissions. If a resource requires authentication or has access restrictions, Varnish can pass the request directly to the backend without caching.
  • Header Analysis: HTTP headers are examined to determine whether the request meets the defined caching criteria. For example, headers like Cache-Control: no-cache may indicate that the response should not be cached.
  • Caching Policy Management: Custom settings can be written to handle specific scenarios, such as cache bypass based on query parameters, HTTP methods, or other business policies.
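The checks above can be sketched in VCL. This is a minimal illustration using Varnish 4.1 syntax; the URL patterns (a /checkout area, the list of static-asset extensions) are hypothetical examples, not recommendations from the original text:

```vcl
vcl 4.1;

sub vcl_recv {
    # Requests carrying an Authorization header are user-specific:
    # hand them straight to the backend without caching.
    if (req.http.Authorization) {
        return (pass);
    }

    # Strip cookies on static assets so one cached copy serves everyone.
    if (req.url ~ "\.(css|js|png|jpg|gif|svg|woff2)(\?.*)?$") {
        unset req.http.Cookie;
    }

    # Example business rule: never cache a (hypothetical) checkout area.
    if (req.url ~ "^/checkout") {
        return (pass);
    }

    # Otherwise continue toward the cache lookup.
    return (hash);
}
```

Note that the built-in vcl_recv already passes requests with cookies or Authorization headers; custom rules like these refine that default rather than replace it.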

vcl_hash : Hash Calculation and Cache Matching

After the initial evaluation in vcl_recv, the request proceeds to the subroutine vcl_hash. Here, Varnish performs a critical task in the caching process: calculating a unique hash for each request. This hash is critical because it allows Varnish to efficiently identify whether a cached version of the response is already present, thus allowing fast delivery without having to query the backend server.

The hash calculation is influenced by factors such as:

  • Request URL: The main component of the hash is the URL, which ensures that requests for the same resource are grouped together.
  • Request Headers: HTTP headers can affect caching. For example, variations in accepted languages or requested content types may require separate cached versions.
  • Customization: Administrators can influence the hash calculation by adding or excluding specific headers or parameters, allowing for granular control over caching decisions.
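A minimal sketch of this subroutine follows. The first part mirrors Varnish's built-in hash key (URL plus Host); adding Accept-Language to the key is an illustrative customization, and in practice you would normalize that header first to avoid fragmenting the cache:

```vcl
sub vcl_hash {
    # Default key components: the URL plus the Host header
    # (or the server IP when no Host header is present).
    hash_data(req.url);
    if (req.http.Host) {
        hash_data(req.http.Host);
    } else {
        hash_data(server.ip);
    }

    # Example customization: keep one cached variant per language.
    # (Assumes Accept-Language has been normalized earlier in vcl_recv.)
    if (req.http.Accept-Language) {
        hash_data(req.http.Accept-Language);
    }

    return (lookup);
}
```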

The result of vcl_hash is an identifier that Varnish uses to quickly search its cache memories. If it finds a match, it follows the cache delivery path; otherwise, it proceeds with the request to the backend. Varnish's ability to do this extremely fast is what allows it to dramatically reduce TTFB and deliver significant improvements in responsiveness for end users.

Phase 2: Resolving Cache Requests (Cache Hits and Misses)

In this section, we will delve into Phase 2 of the Varnish Cache process, focusing on “Managing Cache Hits and Misses”. This phase is fundamental to the functioning of Varnish, since here it is determined whether a request can be satisfied directly from the cache (a “hit”) or whether it must be forwarded to the backend server (a “miss”). We will delve deeper into the logic and operations behind the subroutine vcl_hit, where Varnish decides whether a cached response can be served to the client. We will also look at the dynamics of vcl_miss and the complex management of situations in which requests do not match an existing cache entry. Additionally, we will discuss the concept of “Hit-for-Pass,” an essential feature for efficiently managing dynamic content or specific scenarios that require bypassing the cache. This phase is crucial to understanding how Varnish optimizes resources and delivers high performance, maintaining a balance between responsiveness and accuracy of delivered content.

vcl_hit : Optimizing Cached Content Delivery

When a cache “hit” occurs, the subroutine vcl_hit comes into action. A hit occurs when the hash calculated in vcl_hash matches an entry already present in the Varnish cache. In this scenario, the request does not need to be forwarded to the backend server, which results in a substantial improvement in the speed of content delivery.

Inside vcl_hit, critical operations take place:

  • Freshness Check: Before delivering content from the cache, Varnish checks its “freshness”, comparing the age of the content with cache directives such as max-age or Expires. If the content is considered stale, Varnish can automatically refresh it.
  • Custom Logics: Administrators can introduce custom logic to handle particular cases, for example to manage content that varies based on user sessions or to implement sophisticated invalidation strategies.
  • Grace Period Control: Even if a piece of content is technically expired, Varnish can serve it from the cache for a short “grace” period while new content is being fetched, thus ensuring continuity of service.
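The freshness and grace behavior above is largely driven by values set at fetch time. A minimal sketch (the 10-minute TTL and 1-hour grace are illustrative values, not recommendations from the original text):

```vcl
sub vcl_backend_response {
    set beresp.ttl = 10m;    # object is considered fresh for 10 minutes
    set beresp.grace = 1h;   # a stale copy may still be served for up to
                             # one hour while a background fetch refreshes it
}

sub vcl_hit {
    # In recent Varnish versions grace delivery is automatic: an object
    # within ttl + grace is delivered immediately and revalidated in the
    # background, so the custom subroutine usually just delivers.
    return (deliver);
}
```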

vcl_miss : Handling Unmatched Requests in Cache

A cache miss occurs when the request does not have a direct match in the cache. vcl_miss is the subroutine that handles these scenarios, and its functions include:

  • Fetching Decision: vcl_miss determines whether and how content should be retrieved from the backend server. This is the point at which you can decide to store the newly retrieved content for future requests, optimizing the use of the cache.
  • Configuring Caching Rules: Administrators can configure specific rules that define what types of content should be cached and for how long, customizing the caching policy based on traffic and content needs.
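As a sketch, the custom vcl_miss is often trivial, because the fetching decision defaults to retrieving the object and the caching rules themselves live in vcl_backend_response once the response arrives:

```vcl
sub vcl_miss {
    # On a miss, ask a backend worker to fetch the object; whether the
    # result is inserted into the cache is decided later, in
    # vcl_backend_response, based on TTL and cacheability rules.
    return (fetch);
}
```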

hit-for-pass : Bypass Cache when Necessary

The “hit-for-pass” mechanism is an advanced feature of Varnish for content that looked cacheable at lookup time but turned out not to be. Instead of caching the response, Varnish stores a special marker object, so that subsequent requests for the same resource are passed straight to the backend rather than queuing behind a single fetch. This can be crucial for:

  • Dynamic Content: For content that changes frequently or is unique to each user, such as user session data or personalized information, caching can be counterproductive.
  • Dynamic Configuration: Varnish allows you to configure these exceptions dynamically in the VCL, bypassing the cache when the defined criteria indicate that the content is most effectively served directly from the backend.
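A common sketch of this mechanism, assuming Varnish 4.1+ (where setting beresp.uncacheable creates a "hit-for-miss" marker; Varnish 5+ also offers return (pass(duration)) for a strict hit-for-pass). The 120-second marker lifetime and the header test are illustrative:

```vcl
sub vcl_backend_response {
    # The backend says this response must not be stored (or it sets a
    # cookie, which usually means it is personalized). Cache a short-lived
    # marker instead of the object, so that later requests for this URL
    # go straight to the backend without serializing behind one fetch.
    if (beresp.http.Cache-Control ~ "(?i)no-store|private" ||
        beresp.http.Set-Cookie) {
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
        return (deliver);
    }
}
```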

Phase 3: Implementation of Alternative Actions for Cache Management

In this phase we will dive into “Implementing Alternative Actions for Cache Management”, an essential phase for maintaining the integrity and currency of the cache. Here we explore the subroutine vcl_purge and the BAN mechanism, which allow administrators to perform cache invalidation in diverse and sophisticated ways. We'll dive deeper into how the PURGE command removes specific entries from the cache, while BAN invalidates groups of entries based on broader criteria. This phase underlines the importance of effective and selective cache management to ensure that the contents served are always up-to-date and relevant. Additionally, we will look at the subroutine vcl_pipe, used to bypass caching for specific content types, highlighting Varnish's flexibility and adaptability in handling various caching scenarios. Phase 3 is crucial to understanding how Varnish handles exceptions and maintains optimal performance even under dynamic conditions.

vcl_purge and BAN: Differentiated Invalidation Strategies

In Varnish, effective cache management isn't just limited to content storage and delivery; it is also essential to be able to invalidate content that is no longer current or correct. The subroutine vcl_purge is designed for this purpose: it allows you to selectively invalidate cached entries in a precise and targeted way.

  • PURGE: The PURGE command is used to remove individual entries from the cache. When a cached response becomes invalid, for example, due to a change in the original content, the PURGE command ensures that this specific response is purged from the cache. This method is optimal for invalidating individual objects and is typically invoked through HTTP requests with the PURGE method.
  • BAN: In contrast, BAN is a command that allows you to invalidate a large set of cache entries based on regular expressions or other complex criteria. With BAN, you can specify patterns that match response headers or other attributes, thereby bulk deleting all cache entries that match the criteria. This is especially useful when you need to invalidate multiple caches that share a common attribute, such as a section tag or content type.

The choice between PURGE and BAN depends on the specific need: PURGE when you need to act on a single resource, BAN for a broader and more powerful invalidation strategy.
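Both mechanisms are typically triggered from vcl_recv. A minimal sketch (the purgers ACL address and the BAN method name are illustrative assumptions; in production, "lurker-friendly" bans written against obj.* attributes are generally preferred over the req.*-based expression shown here):

```vcl
acl purgers {
    "127.0.0.1";    # hosts allowed to invalidate; adjust to your network
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # Remove the single cached object matching this request's hash.
        return (purge);
    }
    if (req.method == "BAN") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # Invalidate every cached object on this host whose URL
        # matches the requested URL as a regular expression.
        ban("req.http.host == " + req.http.host +
            " && req.url ~ " + req.url);
        return (synth(200, "Ban added"));
    }
}
```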

vcl_pipe: Cache Bypass for Specific Content

The subroutine vcl_pipe represents a strategic choice for those contents which, by their nature, do not benefit from caching. Here are some key scenarios for using vcl_pipe:

  • Non-Cacheable Content: Some types of interactions, such as encrypted transactions or real-time data streams, are not suitable for caching. vcl_pipe allows you to route these requests directly to the backend without going through caching logic.
  • Real-Time Traffic Management: For requests that require instant updates or live data, such as stock quotes or interactive chats, vcl_pipe ensures that data is retrieved directly from the original source.
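A classic use of pipe mode is WebSocket traffic, which cannot be cached or buffered. A minimal sketch, following the pattern documented for Varnish 4+:

```vcl
sub vcl_recv {
    # WebSocket upgrade requests bypass all caching logic.
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}

sub vcl_pipe {
    # Propagate the Upgrade header onto the piped backend connection,
    # otherwise the WebSocket handshake would be broken.
    if (req.http.Upgrade) {
        set bereq.http.Upgrade = req.http.Upgrade;
        set bereq.http.Connection = req.http.Connection;
    }
    return (pipe);
}
```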

In summary, PURGE and BAN provide system administrators with the tools needed to keep the cache up-to-date and relevant, while vcl_pipe offers an effective solution for handling exceptions that don't fit the caching model.

Phase 4: Communication Dynamics and Backend Response Management

In this phase we will focus on “Interaction with the Backend (Backend Workthread)”, a key component for handling requests that cannot be satisfied by the cache. In this step, we will delve deeper into the subroutine vcl_backend_fetch, where Varnish establishes a connection with the backend server to retrieve the requested data. We'll look at how crucial aspects such as configuring timeouts, maintaining keep-alive connections, and manipulating request headers are handled to optimize interaction with the backend. Additionally, we will discuss the role of vcl_backend_response, which determines whether and how responses from the backend can be cached by evaluating response headers such as Cache-Control and Expires. This phase is also where errors in data retrieval are addressed, with vcl_backend_error coming into play to manage unexpected situations, offering fallback responses or retry attempts. Understanding this phase is essential to appreciate how Varnish optimizes interactions with the backend, ensuring high performance and efficient request management.

vcl_backend_fetch: Optimizing Data Fetch

The subroutine vcl_backend_fetch is the heart of Varnish's interaction with the backend server. At this stage, Varnish initiates an active connection with the backend to retrieve the resources requested after a cache miss. The configuration of this phase is crucial and includes several technical aspects:

  • Timeout management: It is possible to set specific timeouts for connections to the backend, thus avoiding prolonged waiting times that could negatively impact the user experience.
  • Keep-Alive Connections: By keeping connections with the backend open (keep-alive), the overhead associated with opening new connections for each request is reduced, improving efficiency.
  • Setting Request Headers: Administrators can manipulate request headers sent to the backend to control content negotiation and other backend caching policies.
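The timeouts mentioned above are declared on the backend definition itself, while header manipulation happens in the subroutine. A minimal sketch (the address, port, and timeout values are illustrative assumptions; keep-alive reuse of backend connections is handled by Varnish automatically):

```vcl
backend default {
    .host = "127.0.0.1";           # assumed backend address
    .port = "8080";                # assumed backend port
    .connect_timeout = 5s;         # max time to establish the TCP connection
    .first_byte_timeout = 30s;     # max wait for the first response byte
    .between_bytes_timeout = 10s;  # max gap between bytes while streaming
}

sub vcl_backend_fetch {
    # Illustrative header normalization: only negotiate gzip with the
    # backend. (Recent Varnish versions already normalize
    # Accept-Encoding internally; this simply makes the intent explicit.)
    set bereq.http.Accept-Encoding = "gzip";
}
```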

vcl_backend_response : Response Evaluation and Caching

After receiving the response from the backend, vcl_backend_response goes into action to evaluate and process the response. This subroutine has the task of analyzing the response and deciding its fate in relation to the cache:

  • Analysis of Response Headers: Headers such as Cache-Control and Expires are essential at this stage because they inform Varnish about the cacheability of the response. A detailed configuration at this stage allows you to respect the backend's caching policies and ensure data consistency.
  • Custom Caching Rules: Administrators have the ability to override or supplement backend caching logic with custom rules to tailor caching behavior to their system's specific needs.
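A minimal sketch of such an override. Varnish derives beresp.ttl from Cache-Control/Expires before this subroutine runs, so custom rules only need to handle the exceptions; the extension list and TTL values here are illustrative policy examples:

```vcl
sub vcl_backend_response {
    # Example policy: long TTL for static assets regardless of weak
    # backend headers.
    if (bereq.url ~ "\.(css|js|png|jpg|gif|svg|woff2)$") {
        set beresp.ttl = 7d;
        unset beresp.http.Set-Cookie;  # a cookie would block caching
    }
    # Example floor: if the backend sent no usable caching headers
    # (ttl <= 0), cache briefly anyway to absorb traffic spikes.
    else if (beresp.ttl <= 0s && !beresp.http.Set-Cookie) {
        set beresp.ttl = 60s;
    }
    return (deliver);
}
```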

vcl_backend_error : Handling Backend Communication Errors

The interaction with the backend is not always successful. When an error occurs while retrieving data, the subroutine vcl_backend_error is designed to handle these unexpected events:

  • Implementation of Fallback Responses: In the event of an error, Varnish can provide a pre-configured fallback response, such as a custom error page or maintenance message.
  • Recovery Attempts: The configuration may include logic to automatically retry the request, potentially to an alternate backend, to ensure service resilience.
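Both behaviors can be sketched together; the single retry and the wording of the maintenance page are illustrative choices:

```vcl
sub vcl_backend_error {
    # Recovery attempt: retry the fetch once before giving up.
    if (bereq.retries < 1) {
        return (retry);
    }

    # Fallback response: a minimal pre-configured maintenance page.
    set beresp.status = 503;
    set beresp.http.Content-Type = "text/html; charset=utf-8";
    synthetic({"<html><body>
<h1>Temporarily unavailable</h1>
<p>Please try again in a moment.</p>
</body></html>"});
    return (deliver);
}
```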

Through these mechanisms, Varnish ensures that every interaction with the backend is managed with maximum efficiency and that any problems are addressed with solutions that maintain high quality of service to end users. The backend phase is critical because it supports the robustness and scalability of the caching infrastructure, allowing Varnish to serve fresh, up-to-date content while maintaining fast response times and a good user experience.

Phase 5: Finalizing and Optimizing Content Delivery

In this phase we will explore “Content Delivery”, the decisive moment in which responses, whether coming from the cache or directly from the backend, are finally sent to the client. In this part we will focus on the subroutine vcl_deliver, where Varnish makes final adjustments and optimizations before actual delivery. This includes the adaptation of response headers, the possible compression of the content to improve its transmission, and the implementation of customized logic to optimize the end-user experience. Phase 5 is crucial to ensure that the delivered content is not only fast to load, but also secure and in line with user expectations. This section highlights the importance of the final phase of the caching process, where Varnish's effectiveness in improving the general performance and usability of websites materializes.

vcl_deliver : Refinement and Presentation of the Response

The subroutine vcl_deliver represents the final stage in the journey of a request within Varnish. It is at this stage that the response, whether fetched from the cache or fresh from the backend, is refined and prepared for final delivery to the client. vcl_deliver is the point where the following essential actions can be carried out:

  • Editing Response Headers: Before the response is sent, headers can be removed or added to comply with security and privacy best practices, or simply to adapt the headers to the caching policy.
  • Content Optimization: In some cases, you can further compress content or perform other forms of optimization to reduce load time on the client side.
  • Custom Logging: This is also the time to implement custom request logging, which can provide valuable insights for performance analysis and optimization.
  • Final Cacheability Check: Whether a response was previously cached or newly retrieved, vcl_deliver allows you to carry out a final check on it before it leaves Varnish.
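A minimal sketch of these final adjustments; the X-Cache debugging header and the list of internal headers to hide are common conventions, not requirements:

```vcl
sub vcl_deliver {
    # Expose a simple hit/miss indicator, useful for debugging and
    # performance analysis.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }

    # Hide internal details before the response leaves Varnish.
    unset resp.http.X-Varnish;
    unset resp.http.Via;
    unset resp.http.X-Powered-By;

    return (deliver);
}
```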

Impact on Performance and User Experience

The vcl_deliver phase is crucial not only to ensure that content is served optimally, but also to ensure that the user experience meets performance expectations. Since it is the last checkpoint before the response reaches the user's browser, any optimization at this stage can have a significant impact on the loading times perceived by the user.

Through the meticulous configuration of vcl_deliver, administrators can influence the final impression that users have of the site, both in terms of speed and quality of the content delivered.

Phase 6: Processing of Summary Responses and Error Messages

In this phase we delve into “Management of Errors and Synthetic Responses”, a fundamental aspect of handling anomalous situations smoothly and professionally. At this stage, the focus is on the subroutine vcl_synth, which is invoked when there is a need to generate synthetic responses, such as error pages or warning messages. This phase is crucial in providing end users with clear and useful information in case of errors or service interruptions, maintaining a high level of communication and transparency. We'll look at how vcl_synth allows administrators to fully customize these responses, ensuring they are in line with site branding and policies. Effective error handling and the ability to react to unexpected situations with appropriate responses are key elements in maintaining reliability and user trust, making this phase a key pillar in Varnish's overall caching strategy.

vcl_synth : Synthetic Content Generation and Exception Handling

The subroutine vcl_synth plays a crucial role in handling situations where Varnish cannot provide a valid response through normal caching channels or from the backend. This stage of the process is dedicated to the generation of synthetic responses, which are contents dynamically generated by Varnish itself in response to particular errors or events. Key features include:

  • Generation of Custom Error Pages: When a request cannot be satisfied, instead of showing a generic error page, vcl_synth allows you to present a custom error page that can be designed to maintain consistency with your site's branding and provide helpful guidance to users.
  • Warning and Maintenance Messages: In case of scheduled maintenance or unexpected technical events, vcl_synth can be configured to provide clear and informative messages, ensuring users are aware of the current situation.
  • Handling of Exceptions: This phase also allows you to manage exceptional cases such as malformed requests or unexpected user behavior, offering a coherent and controlled response.

Customization and Configuration

The configuration of vcl_synth is highly customizable thanks to VCL, which allows administrators to precisely define how to handle various error scenarios. This includes:

  • HTTP Status Codes: Define which status codes to return for specific scenarios, improving communication with clients and monitoring systems.
  • Dynamic Content: Insert dynamic content into error pages, such as timestamps, error-specific messages, or troubleshooting information.
  • Detailed Logs: Configure detailed error logging to aid administrators in analyzing and resolving issues.
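A minimal sketch combining these points: a custom status-code path (the 301 redirect trick triggered from vcl_recv via synth(301, "url")) and a branded error page with dynamic content. The page markup is purely illustrative:

```vcl
sub vcl_synth {
    set resp.http.Content-Type = "text/html; charset=utf-8";

    # Status-code handling: a 301 raised elsewhere in the VCL carries
    # the target URL in the reason string and becomes a redirect.
    if (resp.status == 301) {
        set resp.http.Location = resp.reason;
        set resp.reason = "Moved Permanently";
        return (deliver);
    }

    # Branded error page with dynamic content (status, reason, timestamp).
    synthetic({"<html><body>
<h1>Error "} + resp.status + {": "} + resp.reason + {"</h1>
<p>Generated at "} + now + {". Please try again later.</p>
</body></html>"});
    return (deliver);
}
```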

Impact on User Experience and Diagnostics

The effective implementation of vcl_synth not only helps maintain clear communication with users during errors, but also serves as a diagnostic tool for system administrators. By providing immediate, relevant feedback, administrators can quickly identify and resolve issues, improving system resilience and reliability.

In conclusion, vcl_synth represents the final safety net within the Varnish architecture. Its careful configuration and management ensure that even when things don't go as planned, the user experience remains as smooth and informative as possible, and administrators have the tools necessary for effective analysis and intervention.

Conclusion

Varnish Cache has established itself as a world-class enterprise solution for optimizing website performance. Thanks to its robust and flexible architecture, Varnish not only significantly improves page load times and reduces the load on backend servers, but also offers granular control and configurability that make it ideal for high-traffic web applications and large-scale sites. Its effectiveness is reflected in its adoption by some of the most important websites globally: prominent examples include platforms such as The New York Times, Wikipedia, and CNN, which rely on Varnish to ensure fast and reliable content delivery. This broad acceptance demonstrates Varnish's ability to meet the most demanding web-optimization needs, making it a prime choice for businesses looking to improve user experience, optimize server resources, and scale effectively in an increasingly competitive digital environment.
