In the end someone had to do it: explain once and for all, in a clear, precise and concise way, why WordPress is slow, what it means to increase its performance, and how to make it faster both from a development standpoint and from a system and hosting standpoint.
This will allow you to draw the right conclusions and untangle yourself from the thousands of offers of self-styled experts who try every day to sell you "WordPress speed" services, or "WordPress speed-up" services as some call them.
Warning: the rest of the article is rather technical and perhaps not suitable for the entrepreneur who excels at everything else in life but finds himself with a slow site and simply wants to make it fast. If you'd rather not delve into odd terminology such as cache, TCP BBR, Above the Fold, Critical CSS and more, we encourage you to contact us directly through the contact section.
If, on the other hand, you are curious to understand what we do and the slowness problems we solve, read on.
As we said at the start, there is unfortunately a rather dishonest split in the industry: systems engineers claim that a fast server and hosting are enough to have a fast site, while developers argue that a site developed ad hoc (by them, obviously, for tens of thousands of euros a year) is enough to avoid any performance and speed problems.
Here, for example, is what a well-known and very well prepared WordPress developer writes on his site about us systems engineers.
What this gentleman conveniently forgets to mention is that the company we both worked with went from a page load of about 6 seconds to 1.5 seconds ONLY and exclusively thanks to skilful server-side system tuning, and only afterwards gained a further improvement of barely half a second thanks to a very expensive ad-hoc development effort costing tens and tens of thousands of euros a year, as reported by the CEO of a company we will not name out of fairness and respect for the privacy of all parties.
It should therefore be clear that a tailor-made, ad-hoc approach at the development level (such as developing plugins or themes from scratch) is very often possible only after the company has achieved significant economic results online with a low initial investment at the system level, an investment that brought the site to a speed at least adequate for online success; only later can it possibly reinvest a significant sum (tens and tens of thousands of euros a year) in ad-hoc tuning at the application level as well.
This does not mean that a development-level approach is not important; on the contrary, we are the first to argue the exact opposite, which is why, whenever we see the opportunity, we direct customers who have already obtained brilliant results thanks to our systems consultancy towards developers with whom we collaborate in mutual balance and respect for each other's profession and professionalism.
We would never dream of saying that development does not matter for performance, unlike those who, to bring water to their own mill, have no qualms about minimizing the importance of optimized, high-performance hosting, and of the professional figure of the systems engineer, improvising as one with questionable results at best.
We simply want to highlight that it is easier to go from 6 seconds to 1.5 by spending (investing), say, €50 or €100 per month on a site in its embryonic stage, one that has yet to achieve real entrepreneurial success and profit, than by immediately committing tens of thousands of euros per year to development.
Strictly speaking, it was precisely to take stock of the unpleasant situation now taking hold in the field of WordPress performance and speed optimization that we wanted to write this post, which, albeit with a hint of genuine and good-natured controversy, aims at nothing more than representing the reality of cases we have experienced (even in spite of ourselves) first-hand.
So let's look roughly at how a WordPress installation is structured in its "complexity", listing its main components: the (physical) server, the CMS (WordPress), which in turn is based on PHP (the server-side interpreter) and on a DBMS (MySQL).
We will therefore break the problem down into its individual components: the web server, PHP, MySQL, and the WordPress application built on top of them, also touching on the latest requirements and metrics around Google PageSpeed and Core Web Vitals, and indicating in each case what the best solution could be in terms of performance and speed, both for slow sites that already exist (and are therefore already developed) and for sites still to be designed from scratch.
So let's start.
What is WordPress?
WordPress is an open-source blogging platform and Content Management System (CMS): a program that, running server-side, allows the creation and distribution of a website made up of textual or multimedia content that can be managed and updated dynamically. It was initially created by Matt Mullenweg and is distributed under the GNU General Public License. It is developed in PHP with support for the MySQL database manager.
It is the most used CMS in the world
According to W3Techs (https://w3techs.com/technologies/details/cm-wordpress), it is currently used by 65.1% of all websites whose content management system is known. That amounts to 42.7% of all websites online.
There is no need to praise its now well-known strengths: the community, the ease of learning and use, the thousands of plugins to do just about anything, all of which make it the most popular CMS ever.
We should focus instead on its known pains: WordPress is not a performant, light and fast system. It is, on the contrary, an extremely slow and cumbersome one, primarily because of the technologies on which it was developed: PHP and MySQL.
The first is a dated server-side programming language whose development began in 1994; the second is a relational DBMS which, having to satisfy the important properties of an ACID-compliant DBMS, is necessarily slower than modern alternatives.
What is WooCommerce?
WooCommerce is a plugin, downloadable within WordPress, that allows you to turn your site into a real virtual shop. Launched in September 2011, WooCommerce is a platform in constant improvement and update, both in functionality and in features, used by merchants, beginners and experts alike, to start a highly professional online business selling products or services.
WordPress is slow because it is based on PHP
The development of this programming language began in 1994, when Rasmus Lerdorf was struggling with the creation of Common Gateway Interface (CGI) scripts in Perl to be used for periodic updates of his website. These little tools performed simple and repetitive tasks, such as showing his curriculum vitae and recording the number of visitors to his personal page.
Initially, PHP was not intended, much less designed, to be a programming language in its own right. With the passage of time, and with the growth of the community of web developers using it, it was decided to systematize the corpus of functions and scripts created. From this work the second release, PHP/FI, was born, launched in November 1997.
PHP is not the fastest language we could write web applications in, yet we continue to do so for many other reasons. The sheer speed of a language is rarely the main deciding factor for a project. Developer productivity, for one thing, is usually more important. And in many applications the bottlenecks are not in the application code but where interaction with other systems takes place: communicating with databases, APIs, and message queues takes time.
So why is PHP slow compared to other languages? PHP is a dynamic, interpreted language: it is not compiled into machine language but read and executed at runtime. PHP also has a shared-nothing architecture, so on every request it interprets everything from scratch. The consequence is that performance is not as good as that of compiled languages, but this also allows functionality that compiled languages do not have.
Not needing to compile PHP helps developer productivity in a few ways. It allows shorter feedback loops during development: the results of code changes can be viewed immediately, without going through any compilation step first. There is less need to worry about garbage collection and memory usage. Debugging runtime errors is easier because you can directly identify where they occur in the source code. It also allows dynamic code such as variable variables, dynamic types, and so on, although you need to be careful with these to avoid making your application difficult to test.
However, all this makes PHP a heavy and slow language when compared to more modern platforms such as Go or Node.js which, unlike PHP, are asynchronous and non-blocking and in fact more performant, as we can easily see from the following image.
MySQL is slow because it is a true relational DBMS with ACID properties.
Another fundamental component on which WordPress runs is the database server: in all cases a MySQL server or one of its forks and derivatives, such as MariaDB or Percona Server.
MySQL fully complies with the ACID requirements for a transaction-safe RDBMS, as follows:
- Atomicity is handled by storing the results of transactional statements (the modified rows) in a memory buffer and writing those results to disk and to the binary log only once the transaction has been committed. This ensures that the statements in a transaction operate as an indivisible unit and that their effects are seen collectively or not at all.
- Consistency is primarily handled by MySQL's logging mechanisms, which record all database changes and provide an audit trail for transaction recovery. In addition to logging, MySQL provides locking mechanisms that ensure all tables, rows, and indexes involved in a transaction stay locked long enough for it to either commit or roll back.
- Isolation relies on server-side semaphore variables and locking mechanisms that act as traffic managers for concurrent transactions. MySQL's InnoDB engine, for example, uses fine-grained row-level locking for this purpose.
- MySQL implements durability by maintaining a binary transaction log file that tracks system changes during a transaction. In the event of a hardware failure or sudden system shutdown, restoring lost data is relatively simple using the latest backup in combination with the log at system reboot. By default, InnoDB tables are 100% durable (in other words, all transactions committed before a crash can be recovered during the restore process), while MyISAM tables offer only partial durability.
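To make atomicity concrete, here is a minimal sketch in Python, using the standard library's SQLite driver purely as a stand-in for MySQL/InnoDB (the transactional behaviour illustrated is the same): when one statement in a transaction fails, every statement in it is undone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts ("
    "name TEXT PRIMARY KEY, "
    "balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

# A transfer must be atomic: both UPDATEs take effect, or neither does.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'bob'")
        # This debit violates the CHECK constraint (alice only has 100)...
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'")
except sqlite3.IntegrityError:
    pass  # ...so the whole transaction, including bob's credit, is rolled back

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50} -- no partial transfer survives
```

This is exactly the bookkeeping (buffers, logs, locks) that costs an ACID engine time on every write, and that a BASE-style store simply skips.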
This paradigm dates back to 1970; it was partially formalized (ACD; isolation came later) in a historic 1981 article entitled The Transaction Concept: Virtues and Limitations by Jim Gray, and matured in 1983 in a second paper, Principles of Transaction-Oriented Database Recovery, written by Andreas Reuter and Theo Härder.
Relational databases for OLTP use (take, for example, the relational database engines produced by Oracle and Microsoft) were designed around this paradigm, placing the reliability and consistency of the managed data at the center.
This makes MySQL a really robust and stable DBMS, capable of maintaining data consistency without losing data or generating errors on writes or reads.
ACID-compliant databases are often used by financial institutions such as banks, by casinos, and in mission-critical systems where data must be intact and consistent.
An ACID-compliant database, having to offer these guarantees, necessarily needs checks at the business-logic level (its internal application logic) which make it slower than new-generation NoSQL databases which, unlike an ACID-compliant SQL DBMS, simply read and write information without too many frills.
Non-relational databases, on the other hand, generally guarantee atomicity only on a single instruction, regardless of how complex it is. This is why Eric Brewer coined the term BASE for the properties NoSQL databases must respect:
- Basic Availability: a response to every request is guaranteed, whether successful or unsuccessful.
- Soft state: the state of the system can change over time, even without user intervention.
- Eventual consistency: since the state is soft, there may be cases of inconsistency, which must be handled by the developer.
All this makes it clear that whenever you need to represent in a database a real-world concept involving the word "transaction", unless you are suicidal and want to implement your own transaction-management mechanism by hand, the choice necessarily falls on relational databases.
Leaving aside the ANSI SQL standard, data normalization and other extremely technical and academic topics (with a nod to the good Prof. Montesi of the databases course at the University of Camerino), we can confidently state that a WELL-developed CMS with a WELL-designed data model on a NoSQL system will certainly be faster than the equivalent WELL-developed solution on a relational DBMS.
WordPress is slow because some plugins are slow.
One of the standard cases every WordPress developer comes across in the course of their career is having a site that is, all in all, sufficiently fast until a plugin is installed that ruinously slows its performance down.
This scenario is very frequent when adding new features and functionality to the site: perhaps a trivial visitor counter such as Post View Counts, a simple multilingual system such as WPML, an e-commerce system such as WooCommerce, or a backup system such as Updraft Update and Restore, for example.
In reality, the list of problematic plugins is very long, and an encyclopedia would not be enough to cover them all properly.
Suffice it to say that many international high-performance hosting providers, including us of course, have drawn up lists of plugins that are discouraged or even banned, since otherwise the site and its performance would slow down significantly.
We are in no way suggesting that any of these disallowed plugins is a "bad" plugin. Some of them, like related post plugins, can be great for content discoverability. However, as a managed WordPress host, our primary concern is to provide the fastest and most secure WordPress hosting experience possible. These plugins have been shown to have a negative impact on performance or security on our platform and we have decided to prevent their use.
As for unsafe plugins, we try to work with the plugin developer to get them fixed. In the meantime the plugin may be temporarily added to our disallowed plugins list, and we will be happy to allow it again once the issue is resolved.
Caching plugins
Caching plugins can conflict with our platform's built-in caching framework. The following are known to cause direct conflicts and, if used, would impact your site's ability to load:
- WP Super Cache
- WP Fast Cache
Many of the caching features these plugins offer are built into our servers by default as part of your managed WordPress hosting experience. We have your back, don't worry!
Backup plugins
We do not recommend the use of backup plugins as they unnecessarily bloat your site and can store files in an insecure way. Many of these plugins also run their backup jobs at inopportune times, slowing down MySQL queries and even causing timeouts on your site.
The following backup plugins are not allowed:
- WP DB Backup: unnecessarily inflates your site's local storage.
- WP DB Manager: its .htaccess security recommendation is welcome, but local storage is the primary concern, as it only offers a local storage option.
- BackupWordPress: duplicates into local storage a large number of files already present in our backups.
- VersionPress: in order to work, this plugin needs access to server-level functions that we do not allow for security reasons.
We make nightly backups of all WordPress websites hosted with us. These are done efficiently and automatically, and the data is stored securely on a separate server from your WordPress installation. Our automatic backups do not count towards your plan's local storage limits and we make these backups available for you to restore, copy or download as needed.
If you feel safer with an offsite secondary backup, we do allow, for example, VaultPress on our servers.
Server-load and MySQL-thrashing plugins
These plugins are not allowed because they cause a high load on the server or create too many database queries. They will directly affect the server load and ultimately hamper the performance of your site.
- Broken Link Checker: overwhelms the server with a very large number of HTTP requests.
- MyReviewPlugin: hammers the database with a significant amount of writes.
- LinkMan: just like MyReviewPlugin above, LinkMan performs a non-scalable amount of database writes.
- Fuzzy SEO Booster: causes MySQL problems as a site grows.
- WP PostViews: writes to the database inefficiently on every page load.
- Tweet Blender: doesn't interact well with caching and can cause increased server load.
To track traffic in a more scalable way, both the stats module in Automattic's Jetpack plugin and Google Analytics work great.
We recommend using one of the following tools to check for broken links on your site. Since they are not plugins, they will not negatively affect your site's performance.
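To see concretely why a write per page view (the WP PostViews problem above) does not scale, here is a minimal, hypothetical sketch in Python of the batching approach scalable counters use, with SQLite standing in for MySQL: views accumulate in memory and reach the database in one burst every 100 hits instead of one write per hit.

```python
import sqlite3
from collections import Counter

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post_views (post_id INTEGER PRIMARY KEY, views INTEGER NOT NULL)")

FLUSH_EVERY = 100    # one database write burst per 100 page views
pending = Counter()  # in-memory buffer of not-yet-persisted views
db_flushes = 0       # instrumentation: how many times we hit the database

def record_view(post_id):
    """Count a page view; only touch the database every FLUSH_EVERY views."""
    global db_flushes
    pending[post_id] += 1
    if sum(pending.values()) >= FLUSH_EVERY:
        with conn:  # a single transaction per flush
            for pid, n in pending.items():
                conn.execute("INSERT OR IGNORE INTO post_views VALUES (?, 0)", (pid,))
                conn.execute("UPDATE post_views SET views = views + ? WHERE post_id = ?", (n, pid))
        db_flushes += 1
        pending.clear()

# Simulate 1000 page views spread across 10 posts.
for i in range(1000):
    record_view(i % 10)

total = conn.execute("SELECT SUM(views) FROM post_views").fetchone()[0]
print(total, db_flushes)  # 1000 views persisted with only 10 flushes
```

The trade-off is losing at most the buffered views on a crash, which is acceptable for a vanity counter and saves the database from one write per visitor.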
Related-posts plugins
Almost all "Related Posts" plugins suffer from the same MySQL indexing and search problems, which make them extremely database-intensive.
The ones we have outright banned are:
- Dynamic Related Posts
- SEO Auto Links & Related Posts
- Yet Another Related Posts Plugin
- Similar posts
- Contextual Related Posts
There are dedicated services that offload related-posts functionality to their own servers. We recommend that you look into one of these related-posts services instead:
Email plugins
Just because you're able to send emails with WordPress doesn't mean you always should. We want our customers to have the same excellent experience with email as we provide with web hosting, so we recommend using a third-party service. Specialized services such as MailChimp, Constant Contact, AWeber and countless others offer complete email solutions for your business and will provide optimal results.
If your domain's email provider offers its own SMTP server, you can set it up as your outgoing server. Be sure to check your email provider's bulk-mail, opt-in and anti-spam policies before doing so.
Various plugins
Other plugins we have decided to proactively remove include:
- Hello Dolly
- WP phpMyAdmin: not allowed due to a quite serious security issue. We also offer plugin-free phpMyAdmin access from your User Portal.
- Sweet Captcha: after our partners at Sucuri revealed that the SweetCaptcha service was being used to distribute adware, we decided to follow the WordPress Plugin Repo's example and ban the plugin completely.
- Digital Access Pass (DAP): while we don't actively remove it from sites, please note that it won't work properly on our platform because it uses system-wide PHP sessions and cron. Instead, you'll want to use one of the other top-tier membership plugins such as Paid Memberships Pro, Restrict Content or S2Member.
Additional scripts
Some frequently used scripts are known to contain security vulnerabilities. Our platform periodically scans the file system to identify and fix or remove these scripts.
- TimThumb: previous versions of TimThumb are known to contain vulnerabilities. When our system scan identifies an old version, it automatically updates the script; once the update is complete, the system notifies you by email.
- Uploadify: access to this script is blocked due to known security threats. The reasoning behind this is covered extensively in a blog post by our partners at Sucuri.
Obviously the list is not exhaustive; it only serves to show how a single plugin can be enough to bring a WordPress site to its knees. You should therefore always be careful when installing a plugin, asking yourself whether you actually need that functionality and whether you are installing the best plugin available to implement it.
Say we need a multilingual system: which one should we choose between WPML, Polylang and MultilingualPress? What are the pros and cons of each, and which should we install?
This is the right approach to take whenever we want to add a feature via a plugin.
WordPress is slow because the themes are slow.
Just as the aforementioned plugins can be slow, a theme can also be faster or slower depending on its design and configuration. There are extremely fast themes with few database queries and very fast PHP code, and others that are extremely cumbersome, with multiple database queries (perhaps just to build a menu) that make everything complex, redundant and therefore slow.
We will not go too deeply into WordPress themes, since there is little to say except that an ad-hoc theme will certainly be faster than one of the many general-purpose themes with a thousand unnecessary features found on sites like ThemeForest.
If you already have a site in production with a theme already set up and don't want to spend several thousand euros on developing an ad-hoc theme (budget at least 5,000 to 10,000 euros for at least a couple of months of work by a competent WordPress developer), do not agonize over optimizing the current theme at the application level; instead, try to address the site's weight and slowness by intervening on the other points covered in this post, especially the server-side ones.
Slow WordPress. But what does this mean?
"Slow WordPress" can be interpreted in multiple ways. Obviously it depends on the yardstick used, which for brevity we can represent with two distinct entities: the user and Google.
The user needs a responsive and fast site to enjoy a satisfying user experience, one that improves time spent on the site, reduces bounce rates and cart abandonment, and therefore improves conversions and sales, to the delight of the entrepreneur who sees turnover (and presumably profits) increase.
Google, which measures the site through automated tests, as well as through real field data sent to Google by Chrome browsers (and only those), weighs and evaluates us through the modern Core Web Vitals, simulating a slow 3G connection that today accounts for only a small share of real connections.
Google PageSpeed
The test performed on the desktop version shows, more or less, how your site appears to a user visiting from a laptop or desktop computer with a generally good internet connection.
Tests performed on a mobile device show how your site performs when accessed from a smartphone or tablet, usually with slower speeds and fewer resources.
This mobile test is purposely performed by simulating a throttled 3G network that is rarely found in the modern digital world. Most people use Google's test to quickly check their website and have no idea why it seems to perform so poorly on mobile devices.
Until some time ago it was common practice to reassure the end customer not to worry too much about the Google PageSpeed Insights mobile score, since even a site with a seriously insufficient mobile score could be genuinely fast in practice and load in less than two seconds. But with Google's announcement that Core Web Vitals are no longer a vanity metric (as some overly proud and ignorant developers would still have us believe) but a ranking factor, today we must satisfy this requirement as well.
Let's start from a basic question, though: what actually happens when we load a web page, for example by typing https://www.wikipedia.org?
Here are all the steps our browser performs, from the request to the display of content that can be read and clicked:
- Site request, by typing the address into the browser navigation bar.
- DNS request to our connectivity provider's nameserver, which asks the authoritative DNS for the IP corresponding to the wikipedia.org domain name.
- HTTP request for the vhost to the IP returned in step 2.
- The webserver accepts the request and discovers that the browser is requesting secure HTTPS browsing.
- The webserver negotiates the SSL handshake with the client to establish the connection in secure mode.
- The webserver forwards the request to the handler for the site in question, in this case a PHP pool that must spawn processes (if they are not already started and waiting for requests) to begin interpreting the PHP files.
- The PHP interpreter reads the files, producing bytecode that pulls in the various WordPress core files, themes and plugins, along with their queries to the MySQL database.
- The database receives the hundreds of queries, executes them more or less efficiently, and returns more or less large record sets that are then processed by PHP.
- PHP, having received the data, produces the HTML and CSS layout that is returned to the user's browser, together with references to images and multimedia.
- The user's browser downloads the related static content, taking a variable time that depends on the weight of the data and the available download speed.
- The browser takes the HTML layout and the style sheets and puts everything together, drawing the page in the structure we see (rendering).
As we have seen, behind a very ordinary visit to a website lies a series of inevitable operations, each of which can be done well or badly.
Let's take the list above and start asking, for each step, whether what we are doing is being done in the best possible way.
- Site request by entering it in the browser navigation bar.
Are we using a modern, latest-generation browser? Is the connection good? If we are on a smartphone, do we have good coverage? Are we on 5G, 4G or 3G? Does our mobile provider impose download speed limits? Have we enabled TCP BBR on our web server, so that even slow 3G connections can still use the maximum speed available to them?
- DNS request to our connectivity provider's nameserver, which asks the authoritative DNS for the IP corresponding to the domain name of oursite.it
Is the nameserver we use as the authoritative nameserver for oursite.it fast? How many milliseconds does it take to return the correct IP to our smartphone's operating system? Why not use an Anycast DNS, perhaps from a specialized third-party provider such as Amazon Route 53 or Cloudflare DNS?
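Resolver latency is easy to measure yourself. A small Python sketch that times the system resolver call (shown against localhost so it runs anywhere; point it at your own domain to measure the real chain, and note that repeated runs mostly measure the resolver cache):

```python
import socket
import time

def resolve_time_ms(hostname, attempts=3):
    """Best (lowest) time over a few attempts for the system resolver, in ms."""
    best = float("inf")
    for _ in range(attempts):
        start = time.perf_counter()
        socket.getaddrinfo(hostname, None)  # the same lookup the OS does for a browser
        best = min(best, (time.perf_counter() - start) * 1000)
    return best

# 'localhost' resolves without any network access; replace it with your own
# domain (e.g. 'oursite.it') to measure your recursive/authoritative DNS chain.
print(f"localhost resolved in {resolve_time_ms('localhost'):.2f} ms")
```

For the authoritative side specifically, tools like `dig` against the nameserver directly give a cleaner number, since they bypass the local cache.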
- Request from the vhost to the IP that was returned to us in step 2
- The Webserver accepts the request and discovers that the browser requests browsing in HTTPS secure mode
What webserver are we using? How fast is it at accepting a connection? How does it behave when faced with over 10 thousand connections per second? How much CPU and memory does it consume? Does it work with processes? With threads? Do we use the very fast NGINX or the old and very slow Apache?
- The Webserver negotiates the SSL handshake with the client to establish the connection in secure mode
Which SSL/TLS protocols and which type of encryption do we use to establish connections? What type of SSL certificate do we use? Do we also want to support old operating systems such as Windows Vista, Windows XP, Android 7 and older macOS versions like OS X El Capitan? Maybe Let's Encrypt is not enough, and we need a commercial DV certificate to satisfy everyone and avoid connection errors.
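Which clients can connect is largely decided by the minimum protocol version the server will negotiate. As a small illustration of the trade-off (sketched in Python; on a real server this lives in the webserver configuration):

```python
import ssl

# A modern server-side TLS policy: TLS 1.2 as the floor. Fast handshakes and
# strong ciphers, but clients stuck on TLS 1.0/1.1 (Windows XP-era stacks,
# very old Android) will fail the handshake against a context like this one.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)  # TLSv1_2
```

Lowering the floor to accommodate legacy clients costs security and handshake speed for everyone else, which is exactly the compatibility-versus-performance decision discussed above.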
- The Webserver forwards the request to the handler for the site in question, in this case a PHP pool that will have to create processes (if they are not already started and waiting for requests) to start interpreting the PHP files.
How long does the webserver take to accept the connection? How long does it take to activate the PHP process corresponding to the request? Does the PHP process need to be started from scratch, or is it already on standby so that we can save the start-up time? If we use PHP-FPM, which process-manager mode do we use: ondemand, static or dynamic? With what limits and in what contexts?
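By way of illustration, the three modes correspond to a handful of directives in the PHP-FPM pool file; the values below are indicative examples to be sized against the server's RAM and the memory footprint of each PHP worker, not prescriptions:

```ini
; PHP-FPM pool configuration (e.g. pool.d/www.conf) - illustrative values only
; pm = ondemand -> workers are spawned per request and reaped after an idle
;                  timeout; saves RAM but pays process start-up latency.
; pm = static   -> a fixed pool is always warm; fastest, most RAM-hungry.
; pm = dynamic  -> a compromise: a warm minimum that grows under load.
pm = dynamic
pm.max_children = 20        ; hard cap, sized against available RAM
pm.start_servers = 4        ; workers started at boot
pm.min_spare_servers = 2    ; keep at least this many idle and warm
pm.max_spare_servers = 6    ; reap idle workers above this
pm.max_requests = 500       ; recycle workers to contain memory leaks
```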
- The PHP interpreter will read the files producing a bytecode that will recall the various WordPress files, themes, plugins and related queries to the MySQL database
We need to read the files by accessing the disk. How fast is the drive? Must we record on disk that we performed a read operation, or can we skip it, since that is superfluous data that would only waste disk write time? Do we have to re-read and re-interpret the code every time index.php is called, or can we cache precompiled bytecode to increase performance via OPcache? How often should we check whether the files have changed and regenerate the updated bytecode?
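Several of these questions map directly onto OPcache directives in php.ini; an illustrative, production-leaning fragment (values are examples, not prescriptions):

```ini
; php.ini - illustrative OPcache settings
opcache.enable=1
opcache.memory_consumption=192        ; MB reserved for compiled bytecode
opcache.max_accelerated_files=20000   ; WordPress plus plugins easily exceed the default
; How often to re-check files for changes: every 60 s here. Setting
; opcache.validate_timestamps=0 skips the check entirely (fastest), but then
; you must reset OPcache on every deploy.
opcache.validate_timestamps=1
opcache.revalidate_freq=60
```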
- The database will receive the hundreds of Queries that it will execute more or less efficiently, returning a more or less large dataset record that will then be processed by PHP.
Are we using a DBMS-level cache? Do the tables have indexes? Are there crons running at the WordPress level? Are there expired or useless transients? Do we have giant record sets returned by non-optimized plugins or themes?
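The index question, in particular, is easy to verify empirically. Here is a small sketch in Python with SQLite standing in for MySQL (the EXPLAIN output differs, but the lesson is the same): the same lookup is a full table scan without an index and an index search with one. The wp_postmeta-like table is purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE wp_postmeta ("
    "meta_id INTEGER PRIMARY KEY, post_id INTEGER, meta_key TEXT, meta_value TEXT)"
)
conn.executemany(
    "INSERT INTO wp_postmeta (post_id, meta_key, meta_value) VALUES (?, ?, ?)",
    [(i, f"key_{i % 50}", "x") for i in range(10_000)],
)

query = "SELECT * FROM wp_postmeta WHERE meta_key = 'key_7'"

# Without an index the planner can only scan all 10,000 rows.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(plan_before)  # e.g. "SCAN wp_postmeta"

conn.execute("CREATE INDEX idx_meta_key ON wp_postmeta (meta_key)")

# With the index the same query becomes a targeted index search.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(plan_after)  # e.g. "SEARCH wp_postmeta USING INDEX idx_meta_key (meta_key=?)"
```

On a real WordPress database the equivalent check is `EXPLAIN SELECT ...` in MySQL, which will likewise show whether a plugin's query is scanning or seeking.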
- PHP that received the data will produce an HTML and CSS layout that will be returned to the user's browser including images and multimedia.
How fast is the webserver? Do we compress JS and CSS content with gzip compression or, better, Brotli?
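The saving is easy to quantify, since text assets such as JS and CSS compress extremely well. A quick check with Python's standard-library gzip (Brotli, which needs a third-party package, typically compresses text somewhat better still):

```python
import gzip

# A crude stand-in for a CSS file: repetitive, highly compressible text.
css = ("body{margin:0;padding:0;font-family:sans-serif}"
       ".btn{display:inline-block;padding:8px 16px;border-radius:4px}") * 200
raw = css.encode()

compressed = gzip.compress(raw, compresslevel=6)  # 6 is a common production level
assert gzip.decompress(compressed) == raw  # lossless: the browser gets identical bytes
print(f"{len(raw)} bytes -> {len(compressed)} bytes "
      f"({len(compressed) / len(raw):.1%} of the original)")
```

Real stylesheets are less repetitive than this toy string, but reductions of 70-80% on text assets are routine, which is bandwidth the visitor never has to wait for.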
- The user's browser will download the related static contents taking a variable time that depends on the weight of the data to be downloaded and the relative download speed.
Does static content such as images use modern compression formats like WebP instead of the heavy PNG and JPEG? If we are using a static cache like Varnish, should images be cached in RAM or served directly from disk? Can we use HTTP/2 or HTTP/3 to improve download parallelism without resorting to old-style domain sharding? Is the audience of our site national? Can and should we use a CDN like Cloudflare, and with which pros and cons? Can we serve WebP images conditionally with Cloudflare, depending on browser support, without having to use their $200-per-month single-site Business plan?
- The browser will take the html layout and the style sheets to recompose everything, drawing the page in the structure that we are going to visualize (rendering).
A 2000-element page is slower to render than a 100-element page, just as a 2000-piece puzzle is more complex to complete than a 100-piece one.
Are we sure we are not using too many elements? Are we sure we are not downloading CSS styles that are never used on the page we are viewing? Are we sure we have set browser-side cache lifetimes intelligently, so that unmodified elements such as the page logo are not re-downloaded on every visit? Does it make sense to wait until all resources (images, fonts, style sheets and JavaScript) have loaded before starting to render the page, or can we start as soon as possible and continue while the user is already viewing something? Can we avoid layout "flickering", that annoying shifting and reassembling of the page before the user's eyes?
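The browser-cache question, for instance, comes down to a few webserver directives. An illustrative NGINX fragment that lets the browser keep static assets for 30 days, so unmodified elements like the logo are not re-fetched on every visit:

```nginx
# Illustrative fragment: long-lived browser cache for static assets.
# "expires 30d" emits both an Expires header and Cache-Control: max-age.
location ~* \.(css|js|png|jpg|jpeg|webp|svg|woff2)$ {
    expires 30d;
    access_log off;   # also saves one log write per static hit
}
```

The caveat is cache busting: with lifetimes this long, changed assets need a versioned filename or query string so returning visitors pick up the new copy.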
For each of the twelve points we have listed there are problems (more or less serious) and corresponding solutions.
In short, speeding up a WooCommerce site means analyzing all these aspects (or at least the most important ones) and carrying out operations that solve, or significantly reduce, the time lost at each step.
The operations differ mainly in two macro-categories, server-side optimizations and application-side optimizations.
Let's see together how the two branches are divided and how at times they can partly overlap.
Server-side optimization.
By server-side performance optimization we mean all those operations and features concerning the hardware and software side of the systems, which should be handled by a Linux systems engineer who specializes in performance.
Hardware optimization and sizing
For example, a Linux systems engineer specialized in WooCommerce performance will have the good sense to size the server's hardware resources on the basis of the best compromise between cost, performance and the customer's real budget.
He will certainly opt for the right sizing of the CPU, the number of cores and threads, the right amount of RAM, and SSD or, better still, NVMe disk technology to maximize disk I/O speed — and therefore, for example, MySQL database speed as well.
He will also implement the right server-side software components so that the hardware is used to its fullest, and so that the known problems of WooCommerce (or of other CMSs) can be solved with workarounds and fixes that may require installing and configuring additional software for very specific purposes.
Normally these operations boil down to choosing the right service, the right configuration, and the right integration with WooCommerce.
Network and kernel optimization
He will make sure that TCP BBR is correctly implemented and configured to improve slow or lossy connections such as 3G, as you can read in this article: BBR TCP: the magic formula for network performance.
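As a sketch of what enabling BBR typically looks like on a modern Linux kernel (4.9 or later), two settings go into /etc/sysctl.conf or a drop-in file under /etc/sysctl.d/:

```ini
# /etc/sysctl.d/99-bbr.conf — enable the BBR congestion control algorithm.
# fq is the packet scheduler BBR was designed to pair with.
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

Apply the settings with `sysctl --system` and verify with `sysctl net.ipv4.tcp_congestion_control`, which should report `bbr`.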
Installation of a light and fast Web Server
He will install a memory-light, highly performant web server such as NGINX rather than the more popular Apache.
By now the market is mature and NGINX's lead has finally been recognized, with a strongly growing trend and ever greater adoption.
Regarding performance and benchmarks, as well as other interesting features and comparisons, we refer you to this article: Apache VS NGINX: which is the best web server?
In fact, when you get down to it, you always end up relying on caching systems for the different components.
Database cache tuning, such as the MySQL InnoDB Buffer Pool Cache
InnoDB (the MySQL engine with higher performance than MyISAM) maintains a storage area called the Buffer Pool for caching data and indexes in memory. Knowing how the Buffer Pool works, and using it to keep frequently accessed data in memory, is an important aspect of MySQL optimization.
MySQL database optimization is among the most important tuning work in terms of web performance, and when it comes to optimizing MySQL you inevitably run into InnoDB.
InnoDB is undoubtedly the best-performing MySQL engine when it comes to handling SELECT queries. Its configuration includes several parameters, among them innodb_buffer_pool_size.
innodb_buffer_pool_size indicates the amount of RAM dedicated to storing indexes, caches, data structures and everything else that revolves around InnoDB.
It is one of the most important parameters of the MySQL configuration and its value must be set according to the amount of RAM available and the services that operate on the server.
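As a hedged example, on a server with 8 GB of RAM dedicated to the database, the relevant section of my.cnf might look like this (a common rule of thumb on a dedicated database host is to give the pool 60-70% of RAM; the exact values must reflect your own server and the other services running on it):

```ini
[mysqld]
# Dedicate a large share of RAM to caching InnoDB data and indexes.
innodb_buffer_pool_size = 5G
# Split the pool into multiple instances to reduce mutex contention
# under concurrent load.
innodb_buffer_pool_instances = 4
```

If WordPress, PHP-FPM and the web server share the same machine, the pool must be sized far more conservatively so the box does not swap.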
PHP bytecode cache like Zend OpCache
One of the biggest problems with websites is loading time, and one of the best ways to reduce it is to enable caching systems. There is more than just the caching of HTML files: OPcache is an opcode cache that speeds up PHP websites by storing precompiled script bytecode in shared memory.
Caching PHP scripts means that the first time a script is run it is also precompiled and saved in memory. On each subsequent invocation it will not be necessary to compile it again, since OPcache has stored the resulting opcode in RAM. This time saving brings performance improvements, especially on websites that are constantly under stress.
The official definition says:
OPcache improves PHP performance by storing precompiled script bytecode in shared memory, thereby removing the need for PHP to load and parse scripts on each request.
In other words, when a PHP script is executed it is compiled into opcode, a machine-understandable code. OPcache stores this code in memory during the first run for later reuse, and this is what produces the performance gains. OPcache replaces APC and is an alternative to XCache, another PHP accelerator. Unlike Memcached, which typically caches data such as database query results, OPcache works on the PHP scripts themselves.
This extension is bundled with PHP 5.5.0 and higher, and is available in PECL for PHP versions 5.2, 5.3 and 5.4.
Basically, when the system compiles PHP code, the human-readable code is converted into machine language, and compiling every script takes time. So if the application serves many requests cyclically, performance can be improved by caching the compiled scripts. With OPcache enabled, compilation runs once and the scripts are kept in memory; only updated files are recompiled, and the rest remain cached.
OPcache can give you a noticeable performance boost and significantly reduce website loading time. On PHP 7 OPcache uses 128 MB of shared memory by default, and no external libraries are needed to enable the extension.
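A hedged php.ini sketch of the OPcache directives most often tuned (the values are illustrative starting points, not universal recommendations):

```ini
; Enable OPcache and give it more shared memory than the default.
opcache.enable = 1
opcache.memory_consumption = 192
; Buffer for interned strings, and the number of scripts that can be cached.
opcache.interned_strings_buffer = 16
opcache.max_accelerated_files = 10000
; How often (in seconds) to check scripts on disk for changes;
; higher values mean fewer stat() calls but slower pickup of deploys.
opcache.revalidate_freq = 60
```

After changing these values, PHP-FPM must be restarted (or reloaded) for them to take effect.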
WordPress Object Cache
The WordPress object cache is a code-side mechanism used to reduce hits on the database and improve our site's loading times and performance. It is defined in the core file wp-includes/cache.php and can be used through a predefined set of functions, similar to those available for transients, from which it differs in that it is a cache data store that is not available on all hosting and, above all, requires a caching plugin in order to persist.
It is an advanced feature that few programmers know and even fewer actively use; yet many caching and optimization plugins rely on it. By default, WP Object Cache is not persistent: it lasts only for the very short duration of the HTTP request in question, so nothing is stored for the future unless a dedicated plugin is installed (recommended: W3 Total Cache). We are talking about server-side caching, not client-side, so let's not get confused about this from the start.
The main advantage of using this cache is the improvement in page load performance it brings when it is not possible to intervene elsewhere.
The purpose of object caching is to cache query results from the database.
An efficient database is one of the crucial factors for a fast website: WordPress is a content management system that naturally depends on its MySQL database.
Whenever users (or crawlers) make a request on your website, they generate database queries. If your site receives a large number of database requests, the queries can pile up quickly, overloading your MySQL server and slowing down your website.
The good news is that WordPress introduced its object caching class a long time ago: it was 2005 when the class named WP_Object_Cache was implemented in the WordPress core.
Normally you can use Memcached or Redis as the backend for persistent object cache storage.
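The pattern looks like this in WordPress code — a sketch using the real wp_cache_get()/wp_cache_set() functions; the key, group and query function are hypothetical examples, and without a persistent backend (Redis or Memcached via a drop-in) the cached value lasts only for the current request:

```php
<?php
// Try the object cache first; fall back to the database on a miss.
$top_products = wp_cache_get( 'top_products', 'shop' );
if ( false === $top_products ) {
    // Hypothetical expensive query, run only on a cache miss.
    $top_products = expensive_top_products_query();
    // Cache the result under the 'shop' group for 300 seconds.
    wp_cache_set( 'top_products', $top_products, 'shop', 300 );
}
```

Plugins such as W3 Total Cache wire these same functions to Memcached or Redis, which is what turns this per-request cache into a persistent one.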
Memcached
Memcached is one of the caching mechanisms that resides on your hosting server. It mainly deals with database queries, helping to reduce database load and produce faster-loading pages. If your website or store relies heavily on database queries, using Memcached would significantly improve performance and reduce page load time.
Internet giants including YouTube, Reddit, Facebook, Twitter, and Wikipedia use Memcached to increase page load time. Google App Engine, Microsoft Azure, IBM Bluemix and Amazon Web Services also offer the Memcached service via an API.
Considering its importance in reducing page load time, we offer Memcached pre-installed on our managed WordPress hosting servers. However, you may sometimes need to configure your WordPress application to take full advantage of it.
Memcached is used to speed up dynamic web applications, such as ecommerce stores and sites with registration/login, by reducing the database load. It stores the processed result so that whenever a visitor requests the same query again, Memcached can answer directly instead of processing the query from scratch. By keeping the server less busy, your visitors get faster loading times and a better user experience.
There's an interesting and fun real-world story on GitHub; it is worth a read to understand the typical Memcached use case.
Full Page Static Cache such as Varnish Cache, NGINX FastCGI Cache, LsCache or CloudFlare Cache
The server-side static full page cache is surely the most valuable component in terms of performance for content that is "the same for everyone", i.e. that of a blog, a newspaper, or an ecommerce such as WooCommerce when the user is not logged in.
In fact, a static full page cache lets you store the content of a page and serve it again to whoever requests it without running PHP or querying the MySQL database, saving precious machine cycles; as we have seen, in some cases this takes loading from 12 seconds to 1 second simply by caching correctly with Varnish or NGINX FastCGI Cache.
Obviously it is not a panacea for all ills: if the site is slow, the moment the user logs in — perhaps to see a reserved price list — the site starts running at the real speed it would have had without the cache.
But achieving excellent speed for anonymous visitors — which is most traffic in ecommerce stores, where users log in only at checkout — can mean the difference between an ecommerce that has billed hundreds of thousands of euros by year's end and one that closes its budget with small change.
The most popular solutions currently on the market are: Varnish Cache, which we have discussed extensively here: Varnish hosting; NGINX FastCGI Cache, in our opinion a quick and easy way to get a full page cache if you don't have what it takes to use Varnish and its VCL configuration language properly; LsCache, a rather new cache well suited to the commercial LiteSpeed web server or the free OpenLiteSpeed; and CloudFlare Cache.
LsCache
Regarding LsCache, we can say it is a limited and limiting product that only works well on the commercial LiteSpeed web server. It has no internal programming language for building elaborate rules, so it is fine only if you have no special needs such as separating the cache by cookie or user agent, and the many other useful things LsCache doesn't do.
NGINX Fast CGI Cache
NGINX FastCGI Cache would also be promising, were it not that some features (for example the ability to PURGE ALL to flush the entire cache) are present only in the commercial NGINX Plus version, which is certainly not attractive given the cost of its annual subscription.
As with LsCache, NGINX FastCGI Cache also lacks a configuration language that lets you perform the technical and stylistic exercises needed to achieve the most complex caching objectives.
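For reference, a minimal NGINX FastCGI Cache setup can be sketched as follows (a hedged example with hypothetical paths and zone name; the cookie mapping shows the kind of rule that quickly becomes awkward without a richer configuration language):

```nginx
# Define a disk-backed cache zone with an in-memory key index (http context).
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WPCACHE:100m inactive=60m;

# Skip the cache when a WordPress login or WooCommerce cart cookie is present
# (WP login cookies carry a hash suffix, hence the regex match).
map $http_cookie $skip_cache {
    default                        0;
    ~*wordpress_logged_in          1;
    ~*woocommerce_items_in_cart    1;
}

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;
        fastcgi_cache WPCACHE;
        fastcgi_cache_valid 200 60m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
    }
}
```

Even this simple sketch already needs careful handling of cookies; real installations add exclusions for cart, checkout, account pages, sitemaps and feeds on top of it.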
CloudFlare Cache
CloudFlare Cache is a commercial Software-as-a-Service offering with several subscription plans, ranging from the Free plan to the $200-per-month Business plan for a single site.
By default CloudFlare does not offer a full page cache: it does not actually cache HTML such as pages, posts and products, but only static resources such as images, JS and CSS. To obtain HTML caching, with the ability to bypass the cache on cookies, you need the $200-per-month-per-site subscription.
As already mentioned in the introduction, the cost can certainly be affordable, even cheap, for companies that are already established and can make a profit commensurate with the investment in this market-leading service, which includes numerous options such as DDoS protection.
However, we have noticed frequent confusion, even among hosting providers and industry experts, who believe the free plan acts as a full page cache. Hence the flourishing of hosting companies that advertise a CDN implying a full page cache, and invite you to buy their most expensive hosting solution by touting CloudFlare — which they install in the Free version and which, lacking full page cache functionality, does not improve your speed in the least.
If you want to understand more about this widespread mistake, read this article as well: CloudFlare CDN HTML cache?
Varnish Cache
"A server-side static cache", some would say; it would be more correct to say the static cache. Used by the busiest and most popular sites in the world, it is also the only server-side static cache whose free version includes practically all the functions needed to build complex caching rules in the real world.
Some shortcomings found only in the commercial Varnish Plus version can be wisely filled by skilfully using ad-hoc NGINX configurations in "combo".
Probably, without the hundreds of hours spent developing the best Varnish-and-NGINX combo configurations, today we would be just another hosting provider and would not have achieved the numbers and successes that allow us to serve hundreds of millions of page views per month, and peaks of hundreds of thousands of users per minute, without crashing.
If you want to know something about Varnish we recommend this reading: WordPress hosting for newspapers and online publishing.
Regardless of the solution you adopt (we ALWAYS recommend Varnish, or CloudFlare for intercontinental traffic), these solutions must always be configured ad hoc for your WooCommerce installation. It is necessary to manage and exclude session cookies and certain specific pages — the user's account area, the cart, the checkout, the wishlist — in order to avoid embarrassing page collisions: user A arrives at the cart and sees the products placed in user B's cart, and likewise other visitors start seeing content that isn't theirs.
All of this obviously translates into cart abandonment and a loss of sales and turnover.
XML files, sitemaps, Google feeds, and timeouts (if we work with warehouse import software), among many other things, must also be managed.
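In Varnish's VCL these exclusions are expressed as explicit rules. A hedged sketch (VCL 4.x; the URL patterns are WooCommerce's defaults and must be adapted to your own installation and permalinks):

```vcl
vcl 4.1;

sub vcl_recv {
    # Never cache cart, checkout or account pages.
    if (req.url ~ "^/(cart|checkout|my-account)") {
        return (pass);
    }
    # Never cache requests from logged-in users or users with a cart.
    if (req.http.Cookie ~ "wordpress_logged_in" ||
        req.http.Cookie ~ "woocommerce_items_in_cart") {
        return (pass);
    }
    # Anonymous traffic: strip cookies so these pages become cacheable.
    unset req.http.Cookie;
}
```

This is exactly the kind of rule set that, when written carelessly, produces the cart collisions described above — which is why a production VCL is considerably longer than this sketch.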
A novice user might say, "I'll install Varnish and configure it", but this always ends up causing damage of mammoth proportions, so it is better to rely on those who, like us, live on this alone.