In practice, a slow database means equally slow software, long waits, and related downtime. This affects both your working day, which becomes frustrating when the software is unresponsive, and, ultimately, the user experience, which becomes plainly inadequate.
Consider a real-world case: a call center with 500 operators per day on an 8-hour shift, where good database optimization recovers 2 minutes of waiting per operator per shift. That adds up to 1,000 man-minutes per day, 5,000 per week, 20,000 per month and 240,000 per year, or roughly 4,000 hours of idle waiting. At an average cost of 20 euros per hour, that is about 80,000 euros per year in cost savings and increased productivity.
This is just an example, and this short tutorial aims to open your eyes to how queries can be optimized, taking MySQL as the reference DBMS. The theory presented here, however, underpins all modern DBMSs such as PostgreSQL, Oracle, Microsoft SQL Server and essentially every DBMS that uses ANSI-standard SQL syntax.
Before you can optimize slow queries, you need to find them.
MySQL has a built-in slow query log. To use it, open the my.cnf file and set the slow_query_log variable to "On". Set long_query_time to the number of seconds above which a query should be considered slow, for example 0.2. Set slow_query_log_file to the path where you want the log saved. Then run your code: any query exceeding the threshold will be appended to that file.
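Putting the three settings together, a minimal my.cnf fragment might look like this (the log file path is only an example; adjust it to your system and make sure the mysql user can write to it):

```ini
[mysqld]
# Log queries slower than 0.2 seconds to the file below
slow_query_log      = On
long_query_time     = 0.2
slow_query_log_file = /var/log/mysql/mysql-slow.log
```

The same variables can also be changed at runtime without a restart, e.g. `SET GLOBAL slow_query_log = 'ON';` and `SET GLOBAL long_query_time = 0.2;`, which is handy for a temporary diagnostic session.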
Once you know which queries are slow and problematic, you can start exploring what slows them down. One tool MySQL offers is the EXPLAIN keyword. It works with SELECT, DELETE, INSERT, REPLACE and UPDATE statements, and is used simply by prefixing the query to be analyzed, like this:
EXPLAIN SELECT picture.id, picture.title FROM picture LEFT JOIN album ON picture.album_id = album.id WHERE album.user_id = 1;
The result you will get is an explanation of how the data is accessed. A row is displayed for each table involved in the query:
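For the query above, the output might look roughly like the following sketch. The figures are illustrative, chosen to match the example discussed below; the exact columns and values depend on your MySQL version and your data:

```
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | picture | ALL  | NULL          | NULL | NULL    | NULL | 2000000 |             |
|  1 | SIMPLE      | album   | ALL  | PRIMARY       | NULL | NULL    | NULL |   20000 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
```

Here `type = ALL` and `key = NULL` on both rows mean full table scans with no index used, which is exactly the problem the next steps address.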
The important parts to pay close attention to are the name of the table, the key used, and the number of rows scanned when running the query.
It basically scans 2,000,000 pictures and then, for each picture, scans 20,000 albums. This means it actually examines 40 billion rows for the album table. However, it is possible to make this process much more efficient.
Performance can be increased significantly by using indexes. Think of the data as names in an address book: you can leaf through every page, or you can jump straight to the right letter tab to quickly locate the name you need.
Use indexes to avoid unnecessary full passes over tables. For example, you can add an index on picture.album_id like this:
ALTER TABLE picture ADD INDEX(album_id);
Now if you run the query, the process no longer scans the entire picture table. First, all albums are scanned to find the ones that belong to the user. Then the matching pictures are quickly located through the indexed album_id column. This reduces the number of scanned rows to 200,000, and the query runs about 317 times faster than the original.
You can make sure that both tables use a key by adding the following index:
ALTER TABLE album ADD INDEX(user_id);
This time, the album table is not scanned in its entirety: the right albums are quickly found using the user_id key. Once those 100 albums are identified, the associated pictures are located using the album_id key. With each table using a key, performance is optimal and the query runs 380 times faster than the original.
This does not mean you should add indexes everywhere, because every index makes writes to the database slower. You gain on reads but lose on writes. So add only the indexes that actually improve read performance.
Use EXPLAIN to confirm your indexes are being used, and remove any that your queries never use.
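On MySQL 5.7 and later, the bundled sys schema can help spot removal candidates. A minimal sketch, assuming the server has been running long enough to have seen a representative workload (an index unused for an hour is not necessarily unused in general):

```sql
-- Indexes that have not been used since the server started.
-- Requires performance_schema to be enabled (the default on 5.7+).
SELECT object_schema, object_name, index_name
FROM sys.schema_unused_indexes
ORDER BY object_schema, object_name;
```

Treat the result as a shortlist to review, not a drop list: freshly restarted servers and rarely run (but important) queries can make a useful index look unused.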
Profile your queries and analyze the results to discover potential application-level problems.
Conversely, you may have a database with lean, efficient queries and yet see the server load skyrocket, in particular the load of the database server process.
It may not be a database problem, or rather not only a database problem, but an application-level one: for example, a PHP script that mistakenly calls a certain query in a loop, or that executes a poorly written query lacking the clauses that would allow it to run quickly.
A classic example presents these symptoms: the database is the same as always, it has run fine for months or years, traffic and visit volumes show no unusual peaks, and yet all of a sudden the load on the server and on MySQL starts to grow for no apparent reason.
Is the database at fault, or was a mistake made on the application side?
To investigate this scenario as a system engineer, or to check for problems arising from an incorrect application implementation as a developer, you can use performance profiling tools such as New Relic, or tools such as Percona Toolkit, which we covered in a dedicated article on Percona Toolkit.
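MySQL itself also keeps aggregate statistics that can reveal an application calling a query in a loop. A hedged sketch using the statement digest table (timers are reported in picoseconds, hence the division by 10^12):

```sql
-- Top statements by total execution time, aggregated by normalized
-- query text. Requires performance_schema (enabled by default on 5.7+).
SELECT DIGEST_TEXT,
       COUNT_STAR                      AS executions,
       ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_sec,
       ROUND(AVG_TIMER_WAIT / 1e12, 4) AS avg_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
```

A query that is individually fast (tiny avg_sec) but sits at the top of this list with an enormous execution count is the classic signature of an application-side loop rather than a database design problem.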
Update to the latest MySQL version, or roll back to a previous one.
As obvious as it may seem (and indeed it is something of an egg of Columbus), it is worth remembering that upgrading to the latest version of MySQL (or of its forks) can automatically improve query execution times and lower your DBMS load.
We have direct testimony from our customers (at least 4 in 2021) who had exactly this problem: tremendously slow queries that became fast (from 10 seconds to 0.2, so to speak) simply by switching from Percona Server 5.6 to Percona Server 5.7.
Obviously the same applies to a major version jump such as from MySQL 5.7 to MySQL 8.0. The benchmarks available online give a detailed picture of the potential gains.
It is certainly a route worth trying before you start banging your head against profiling and optimization.
Admittedly, from an academic or purist point of view it may seem an abomination to leave poorly designed slow queries in place just because a new version executes them quickly, but the matter must also be weighed from an entrepreneurial, pragmatic point of view.
In fact, a problem often needs to be solved within a couple of hours at most and at minimal cost. Trying this route won't earn us a black belt in ANSI-standard SQL, but entrepreneurs have always appreciated fast, affordable, workable solutions.
As they often say: "The important thing is that it works".
Slow queries and slow MySQL on WordPress (or other CMS)
When this problem appears on an open source CMS such as WordPress, you often don't even get the chance to see what really happened on the application side: a site whose database was fast and snappy one minute becomes a pachyderm ten minutes later.
Perhaps the user hosted on our services simply updated the latest two or three WordPress plugins as soon as they were released, without asking too many questions, because he is not a technician and has always done so without any problem whatsoever.
Yet it happens more often than you would imagine that a faulty, poorly written plugin with flawed business logic causes serious damage and drastically impacts database performance, for example a plugin that starts writing junk into a shared WordPress table such as wp_options.
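A quick way to check whether wp_options has become bloated is to measure the options WordPress loads on every page view. This sketch assumes a default installation (table prefix wp_ and the historical autoload value 'yes'; adjust both if your install differs):

```sql
-- Count and total size of autoloaded options in wp_options.
-- Autoloaded options are read on every page load, so a large
-- total here directly hurts performance.
SELECT COUNT(*)                                   AS autoloaded_rows,
       ROUND(SUM(LENGTH(option_value)) / 1024, 1) AS autoloaded_kb
FROM wp_options
WHERE autoload = 'yes';
```

A healthy site typically keeps the autoloaded total well under a megabyte; a misbehaving plugin leaving thousands of rows or many megabytes here is a strong lead.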
For a real example of what we are talking about, I invite you to read this case from one of our clients.
Only by understanding what is happening on the application side can you make sense of CPU load problems that are in no way attributable to the database design.
Still having trouble with MySQL speed?
If, after reading our article and following our advice, you still have a company database or application that is slow and unsatisfactory, please contact us to evaluate your database's room for improvement and take advantage of our MySQL Optimization service, so you can have a fast, high-performing database.