We all know that application and website performance are critical to their success. The process for improving application or website performance, however, is not always clear. Code quality and infrastructure are obviously critical, but in many cases you can make significant improvements to the end-user experience of your application by focusing on some basic application delivery techniques. One such example is the implementation and optimization of caching in the application stack. This blog post covers techniques that can help both novice and advanced users get better performance from the content caching features included in NGINX and NGINX Plus.
A content cache sits between a client and an "origin server" and saves copies of all content it sees. If a client requests content that is in the cache, the cache returns it directly without contacting the origin server. This improves performance because the content cache is closer to the client, and it uses the application servers more efficiently because they don't have to regenerate each page from scratch every time.
There are potentially multiple caches between the web browser and the application server: the client browser cache, intermediate caches, content delivery networks (CDNs), and the load balancer or reverse proxy that sits in front of the application servers. Caching, even at the reverse proxy / load balancer level alone, can dramatically improve performance.
For example, last week I took on the task of optimizing the performance of a website that was loading slowly. One of the first things I noticed was that it took more than 1 second to generate the main home page. After some debugging, I found that because the page was marked as non-cacheable, it was dynamically generated in response to each request. The page itself didn't change very often and wasn't personalized, so dynamic generation wasn't necessary. As an experiment, I marked the home page to be cached for 5 seconds by the load balancer, and that change alone resulted in a noticeable improvement. The time to first byte dropped to a few milliseconds and the page loaded visibly faster.
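A change like this is straightforward to express at the NGINX reverse-proxy level. A minimal sketch of the idea, where the cache zone name, paths, and the `backend` upstream are illustrative assumptions rather than the actual configuration used:

```nginx
# Shared-memory zone for cache keys plus on-disk storage for cached responses.
proxy_cache_path /var/cache/nginx/proxy keys_zone=homepage:10m max_size=1g;

server {
    listen 80;

    location = / {
        proxy_cache homepage;
        # Cache successful responses for just 5 seconds, as in the experiment above.
        proxy_cache_valid 200 5s;
        proxy_pass http://backend;
    }
}
```

Even a 5-second TTL means that under load the origin generates the home page at most once every 5 seconds instead of once per request.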
We have talked a lot about the NGINX web server and why we prefer it to the better-known Apache for performance reasons.
However, we have said little about FastCGI Cache, since we use Varnish Cache in-house for all our high-performance customers.
That said, many hosting providers are now offering FastCGI Cache as the Full Page Cache solution for the NGINX web server.
We won't re-explain what an FPC (Full Page Cache) is, having already done so on several occasions on our blog; instead we want to get straight to the point by talking about FastCGI Cache, which at first glance may seem a much simpler caching solution than enterprise tools like Varnish Cache.
What is Nginx FastCGI Cache?
Before we talk about Nginx FastCGI Cache, let's talk about how your website works.
- When a user visits your WordPress page, the web browser sends an HTTP/HTTPS request to Nginx.
- Nginx passes the request to PHP-FPM, since Nginx cannot execute PHP code itself.
- PHP-FPM processes the PHP code and queries the MariaDB/MySQL database to retrieve the page's content.
- PHP-FPM sends the generated “static” HTML page to Nginx.
- Nginx sends the generated HTML page to the web browser for the user.
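The flow above corresponds to a typical NGINX + PHP-FPM server block. A minimal sketch, where the server name, document root, and PHP-FPM socket path are assumptions for illustration:

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    index index.php;

    # Steps 1-2: NGINX receives the request and hands .php files to PHP-FPM.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Steps 3-5: PHP-FPM runs the code, queries the database,
        # and returns the generated HTML to NGINX, which sends it to the browser.
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```

Without a cache, every single request repeats this full round trip through PHP-FPM and the database.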
NGINX includes a FastCGI module with directives for caching dynamic content served by the PHP backend. This eliminates the need for additional page caching solutions such as reverse proxies (think Varnish) or application-specific plugins. Content can also be excluded from caching based on request method, URL, cookies, or any other server variable.
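These directives live in the `http` and `server`/`location` contexts. A hedged sketch for a WordPress-style site, where the zone name, paths, TTLs, and skip rules are illustrative, not a drop-in configuration:

```nginx
# Shared-memory zone "WORDPRESS" plus on-disk storage for cached pages.
fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    index index.php;

    set $skip_cache 0;

    # Exclude content from the cache based on request method, URL, and cookies:
    # POSTs, query strings, logged-in users, and WordPress admin/dynamic URLs.
    if ($request_method = POST) { set $skip_cache 1; }
    if ($query_string != "")    { set $skip_cache 1; }
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") { set $skip_cache 1; }
    if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php") { set $skip_cache 1; }

    location ~ \.php$ {
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 301 302 60m;
        fastcgi_cache_bypass $skip_cache;   # serve these requests from PHP-FPM
        fastcgi_no_cache $skip_cache;       # and don't store their responses
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```

The bypass/no-cache pair is what keeps logged-in users and admin pages dynamic while anonymous traffic is served from the cache.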
When FastCGI Cache is enabled, this built-in Nginx module sits between Nginx and PHP-FPM and stores a cached copy of the HTML pages generated by PHP-FPM.
When another user visits the same WordPress page, your website no longer performs the same PHP and database work, because the page is already cached and served directly from the FastCGI cache.
As a result, your server response time will be much faster after the initial load.
Your PHP-FPM and MariaDB / MySQL load will be reduced.
Your server's CPU resource usage will be lower.
And finally, your server can handle more traffic with the same server specifications when using Nginx FastCGI Cache, ultimately allowing you to maintain a more affordable server without having to scale further.
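A simple way to confirm these savings is to expose the cache status in a response header and inspect it with `curl -I`; the header name here is an arbitrary choice:

```nginx
# Inside the server or location block where fastcgi_cache is enabled:
add_header X-FastCGI-Cache $upstream_cache_status;
```

The first request to a page should report `MISS`, repeated requests within the TTL should report `HIT`, and excluded requests (logged-in users, POSTs, and so on) should report `BYPASS`.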
The main problem with NGINX FastCGI Cache in the free version.
It must be said and remembered that NGINX comes in two distributions: the free version that everyone knows (and that we also use), and the paid commercial version, NGINX Plus (also written NGINX+).
The main and most important difference between the two versions, as far as the FastCGI cache is concerned, is that the free version lacks PURGE ALL functionality by default.
In fact, some configurations require clearing the entire cache at once: imagine a blog whose footer lists the links to the last 5 news items; every time a new article is published, the cache of the whole site has to be invalidated.
While with Varnish a single BAN or PURGE ALL is enough to clear the entire site's cache, roughly as fast as a delete operation on the filesystem (perhaps under a second), with the free version of NGINX FastCGI Cache you would have to re-request every URL of your site one by one at the application level, which can take an hour or more on a site with 5,000 pages.
Obviously, to solve this problem, third-party NGINX modules have been created that add purge functionality, such as ngx_cache_purge, which you can find at this link: https://github.com/torden/ngx_cache_purge
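With that module compiled in, a purge endpoint is typically mapped onto the same cache zone and key as the cached content. A sketch, assuming a zone named `WORDPRESS` and a cache key of `$scheme$request_method$host$request_uri`; lock this location down in production:

```nginx
location ~ /purge(/.*) {
    # Only allow purge requests from localhost; adjust for your setup.
    allow 127.0.0.1;
    deny all;
    # Rebuilds the cache key for the URL captured after /purge
    # and removes that single entry from the WORDPRESS zone.
    fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
}
```

A plugin or deployment script can then request `http://example.com/purge/path/to/page` to invalidate an individual cached page.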
In short, if you don't have the broad shoulders needed to run an enterprise caching system such as Varnish, you can simply opt for the free version of NGINX with this module added.