Database efficiency is a critical real-life factor that directly impacts the speed and responsiveness of software. When a database is slow, the user experience is compromised, leading to long wait times and, inevitably, frustration. These delays not only create a stressful work environment but also cause operational inefficiencies and a poor user experience.
Take, for example, a call center with 500 employees working an 8-hour shift. In this scenario, if each operator loses an extra two minutes per shift due to slow software, the costs of inefficiency quickly add up. Good database optimization can recover this downtime: about 1,000 minutes per day, which translates into 5,000 minutes per week, 20,000 per month, and a whopping 240,000 minutes, or 4,000 hours, in a year. Recovering this time could lead to significant cost savings and increased productivity, estimated at around 80,000 euros per year, assuming an average cost of 20 euros per hour.
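As a sanity check, the arithmetic behind this estimate can be reproduced in a few lines (the figures are the illustrative ones from the example above, not measurements):

```python
# Back-of-the-envelope cost of slow software, using the call-center figures above.
employees = 500              # operators per shift
wasted_minutes_per_day = 2   # extra minutes lost per operator per shift
hourly_cost_eur = 20         # average cost of one operator-hour

minutes_per_day = employees * wasted_minutes_per_day    # 1,000 min/day
minutes_per_week = minutes_per_day * 5                  # 5,000 min/week
minutes_per_month = minutes_per_week * 4                # 20,000 min/month
minutes_per_year = minutes_per_month * 12               # 240,000 min/year
hours_per_year = minutes_per_year // 60                 # 4,000 h/year
savings_eur = hours_per_year * hourly_cost_eur          # 80,000 EUR/year

print(f"{hours_per_year} hours/year, about {savings_eur} EUR/year")
```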
This representation serves to highlight the tangible impact that a slow database can have on productivity and efficiency. While the specific example is for MySQL, the underlying principles are applicable to all modern database management systems (DBMSs), such as PostgreSQL, Oracle, Microsoft's SQL Server, and any other DBMS that uses the standard ANSI SQL syntax.
However, the key question is: how do you identify the slow queries that are causing these delays?
Identify Slow Queries
To optimize the performance of a database, it is essential to first identify the problematic queries that are slowing down its operations. The first phase of this process involves the use of specific tools, such as the Percona Toolkit and its pt-query-digest component, which provide a detailed analysis of the database behavior. These tools can detect which queries are consuming the most time and resources, highlighting areas that need immediate attention to improve performance.
MySQL, one of the most popular database management systems, has a built-in slow query log feature, which can be particularly useful in this context. This log tracks queries that exceed a certain execution time, allowing you to identify those that may require optimization.
To enable slow query logging in MySQL, you need to open the my.cnf configuration file and set the slow_query_log variable to “On”. This will enable slow query logging.
After that, you need to set the long_query_time variable to the number of seconds a query should take before it is considered slow. For example, if you set this value to 0.2, all queries that take longer than 0.2 seconds will be considered slow and logged.
Finally, you need to specify the path to the log file using the slow_query_log_file variable. This is the file where slow queries will be logged.
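Putting the three settings together, the relevant section of my.cnf might look like this (the threshold and file path are examples to adapt to your environment):

```ini
[mysqld]
# Enable the slow query log
slow_query_log = 1
# Queries running longer than 0.2 seconds are considered slow
long_query_time = 0.2
# Where slow queries are written (path is an example)
slow_query_log_file = /var/log/mysql/mysql-slow.log
```

The same settings can also be changed at runtime, without restarting the server, with statements such as `SET GLOBAL slow_query_log = 'ON';`.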
Once the slow query log is turned on and configured, you can run your code as usual. Whenever a query exceeds the specified threshold, it will be logged in the log file. This will allow you to identify queries that are taking too long and could benefit from optimization.
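At this point the log can be summarized with pt-query-digest, the Percona Toolkit component mentioned earlier. A minimal sketch (the log path is an example and depends on your configuration):

```shell
# Summarize the slow query log: queries ranked by total execution time
pt-query-digest /var/log/mysql/mysql-slow.log > slow-report.txt

# The report can also be restricted, e.g. to the 10 most expensive queries
pt-query-digest --limit 10 /var/log/mysql/mysql-slow.log
```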
Setting up and using the MySQL slow query log can be a significant step toward optimizing the performance of your database, giving you the tools to identify problematic queries and make the changes necessary to improve the speed and efficiency of your database operations.
The next step, once you've identified slow and problematic queries, is to analyze and understand what exactly is slowing down their execution. MySQL offers a very useful tool for this purpose: the EXPLAIN keyword. This keyword can be used with a variety of statements, including SELECT, DELETE, INSERT, REPLACE and UPDATE, allowing you to analyze and explain the execution plan of a query.
Using EXPLAIN, you can better understand how MySQL interprets the query, providing a detailed analysis of the database execution plan. Here's an example of using EXPLAIN in a query:
EXPLAIN SELECT picture.id, picture.title FROM picture LEFT JOIN album ON picture.album_id = album.id WHERE album.user_id = 1;
The EXPLAIN command placed in front of the query instructs MySQL to return an execution plan for the query, rather than actually executing it.
The result of this command will be a detailed report explaining how the database intends to access the data to execute the query. Each row in the EXPLAIN result corresponds to a table involved in the query, providing detailed information about how data is extracted from that table.
The report includes information such as the type of join used, the key used for the join, the number of rows examined during query processing, and much more. This information can be extremely useful in identifying any inefficiencies or problems that could slow down the execution of the query.
EXPLAIN is, therefore, a powerful tool for analyzing and optimizing the performance of SQL queries in MySQL. It allows developers to better understand how their queries are interpreted and executed by the database, providing a valuable opportunity to improve the efficiency and performance of their applications.
The important parts to pay close attention to are the name of the table, the key used, and the number of rows scanned when running the query.
In this example, the execution plan scans all 2,000,000 rows of the picture table and then, for each picture, scans the 20,000 rows of the album table. That amounts to roughly 40 billion row reads on the album table alone. However, it is possible to make this process much more efficient.
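For reference, a simplified and entirely hypothetical EXPLAIN output for this unindexed case could look like the following (column values are illustrative, chosen to match the figures above):

```text
id  select_type  table    type  possible_keys  key   rows     Extra
1   SIMPLE       picture  ALL   NULL           NULL  2000000
1   SIMPLE       album    ALL   NULL           NULL  20000    Using where
```

The value ALL in the type column signals a full table scan on both tables, and rows shows how many rows MySQL expects to examine in each, which is what makes the combined cost explode.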
Indexes are a powerful element in a database toolkit, with the ability to significantly increase query performance. You can think of indexes as the tabs in an address book: instead of leafing through every page to find a specific name, you simply jump to the tab for the corresponding letter and go straight to the desired page.
The same principle applies to managing data in a database. Indexes can be used to eliminate unnecessary passes through data tables, thus reducing the time required to execute queries.
For example, you can add an index to the album_id column of the picture table with the following command: ALTER TABLE picture ADD INDEX(album_id);
Once the index has been created, executing the query no longer requires a full scan of the picture table. Instead, the database first scans the albums to find those that belong to a specific user; the matching pictures can then be located quickly using the index on the album_id column.
This process greatly reduces the number of rows that need to be scanned. For example, if a query previously had to scan 1,000,000 rows, using the index could cut this number to 200,000.
In terms of performance, the use of indexes can lead to significant improvements. The query could become up to 317 times faster than the original version without indexes.
In summary, implementing indexes is an effective method for optimizing the performance of a database. They allow the database to avoid full table scans, reducing the time required to run queries and improving the overall speed of database operations.
You can make sure that both tables use a key by adding the following index:
ALTER TABLE album ADD INDEX(user_id);
In this new scenario, the album table is no longer fully scanned either. Instead, the relevant records are located efficiently through the user_id key. Once the user's albums (100 in this example) have been identified, the associated pictures are quickly found using the album_id key. With each table now leveraging a key, the query becomes up to 380x faster than the original non-indexed version.
However, it's important to note that it's not always beneficial to add indexes to every column. In fact, although indexes speed up read operations, they tend to slow down database write operations. In other words, indexes offer significant benefits in terms of read speed, but can have a negative impact on write speed to the database. Therefore, you should only add indexes where they actually provide a significant read performance benefit.
To confirm the effectiveness of the indexes, you can use the EXPLAIN command again. This tool can help you identify and remove any indexes that aren't significantly used in queries. It's a great practice to ensure your database is optimized for maximum efficiency and performance.
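As a sketch of how that check might look (table and index names follow the example above; the sys schema views are available in MySQL 5.7 and later):

```sql
-- List the indexes currently defined on a table
SHOW INDEX FROM picture;

-- On MySQL 5.7+, the sys schema can report indexes that have never been used
SELECT * FROM sys.schema_unused_indexes;

-- Drop an index that turns out not to be used (the index name is an example;
-- ADD INDEX(album_id) creates an index named after its first column by default)
ALTER TABLE picture DROP INDEX album_id;
```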
Profile and analyze your queries to discover potential application problems
In some circumstances, you may find yourself managing a database with queries that perform efficiently and quickly, yet, for some reason, the load on the server or database server process increases dramatically. In such situations, the cause may not be directly attributable to the database itself, or at least not exclusively. Instead, there may be problems at the application level.
A typical case is a PHP script that, due to a programming error, invokes a certain query in a loop, or executes a poorly formulated query without the clauses needed to keep its execution fast.
A classic example of this problem presents the following symptoms: a database that has worked perfectly for months or years begins to suffer under an increased workload, even though there have been no peaks or changes in the volume of accesses or visits. It all happens suddenly and for no apparent reason. The question that arises is: is it the database's fault, or was there an error at the application level?
To deal with this eventuality, the system administrator – the expert who manages and monitors the system – can use performance profiling tools such as New Relic, or specialized tools such as the Percona Toolkit, which we talked about in a previous article.
These tools allow you to perform in-depth analysis of application and database behavior, identifying any bottlenecks or performance issues. Through the use of these tools, you can then get a clear view of what is causing the increase in server load, thus allowing you to take targeted action and fix the problem, improving the overall efficiency of the system.
Upgrade to the latest version of MySQL, or roll back to a previous version
A direct yet effective approach to improve performance and reduce the load on your database management system is to upgrade to the latest version of MySQL or its forks. Although it might seem obvious, it is important to underline that upgrading to the latest version can automatically lead to an improvement in query execution times and a lowering of the DBMS load.
We received first-hand reports from numerous customers during 2021 (at least four) who had faced similar problems: extremely slow queries that became much faster (from 10 seconds to 0.2, for example) simply by switching from Percona Server 5.6 to Percona Server 5.7.
Of course, the same concept also applies to further version migration, such as moving from MySQL 5.7 to MySQL 8.0. The benchmarks available online offer a detailed idea of the potential benefits that can be achieved.
This is definitely a path worth exploring before diving into the intricacies of query profiling and optimization. Updating your software can provide a quick and efficient solution, often resolving issues within a couple of hours, with minimal cost.
Certainly, from an academic point of view, or for lovers of pure theory, it may seem wrong to leave slow, poorly designed queries in place just because a newer version runs them faster. However, the entrepreneurial, pragmatic side of the situation must also be considered.
Often, the main goal is to solve a problem as quickly and cost-effectively as possible. Going this route won't make us absolute ANSI standard SQL experts, but business owners appreciate solutions that are fast, cheap, and that work.
As they often say, “the important thing is that it works”. The main thing is to achieve the desired result as efficiently and conveniently as possible.
Try switching from MySQL to Percona Server or MariaDB
If you are experiencing poor performance from your SQL queries on MySQL, one option to consider is an alternative such as Percona Server or MariaDB.
Percona Server and MariaDB are two solid alternatives to traditional MySQL, offering performance improvements and additional features over the basic version.
Percona Server is a distribution of MySQL developed and maintained by Percona, a company specializing in open source database solutions. It offers a number of performance enhancements, such as more efficient memory management, more advanced query optimization, and an optimized storage architecture. Additionally, Percona Server ships the XtraDB storage engine, an enhanced variant of MySQL's InnoDB storage engine known for its high performance and reliability. This makes it a popular choice for applications that require intensive transaction processing.
MariaDB, on the other hand, is a fork of MySQL originally developed by Monty Widenius, one of the co-founders of MySQL. MariaDB offers a number of performance improvements over MySQL, including code optimization, improved storage architecture, and more advanced communication protocol implementation. MariaDB also includes additional features, such as new data types, new SQL functions, and transaction management improvements. It is a popular choice for those who want to benefit from the performance improvements offered by MariaDB and at the same time maintain compatibility with existing applications developed for MySQL.
Both solutions, Percona Server and MariaDB, are designed as drop-in replacements for MySQL and can be easily installed on most platforms. However, it is important to conduct proper load tests before making any significant change to your database installation. This will allow you to evaluate whether either of these alternatives actually offers a performance boost for your specific system needs. We recommend testing with real-life usage scenarios to evaluate the impact of the changes on your application and ensure a smooth transition.
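One common way to run such a load test is sysbench; a minimal sketch might look like this (connection parameters, table counts, and sizes are placeholders to adapt to your environment):

```shell
# Prepare test tables (parameters are illustrative)
sysbench oltp_read_write --db-driver=mysql --mysql-host=127.0.0.1 \
  --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
  --tables=10 --table-size=100000 prepare

# Run a 60-second mixed read/write workload with 16 client threads
sysbench oltp_read_write --db-driver=mysql --mysql-host=127.0.0.1 \
  --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
  --tables=10 --table-size=100000 --threads=16 --time=60 run
```

Running the same workload against each candidate server makes the comparison apples-to-apples, since the schema, data volume, and concurrency are held constant.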
If you are not tied to MySQL, consider migrating to the more performant PostgreSQL
If you are facing performance issues or looking for a more performant alternative to MySQL, it might be worth exploring the possibility of migrating your database to a different database management system such as PostgreSQL.
PostgreSQL is an open-source relational DBMS that offers numerous advanced features and high performance. It is known for its reliability, robustness and ability to handle demanding workloads. While it may seem like a drastic solution, migrating to PostgreSQL can lead to major performance improvements and offer a wide range of advanced features.
One of the main features of PostgreSQL is its advanced query optimizer, which allows you to create efficient execution plans for queries. This results in faster processing of queries and reduced response times. In addition, PostgreSQL supports Atomicity, Consistency, Isolation, Durability (ACID) transactions, ensuring data consistency and protection against failures and outages.
PostgreSQL also offers a comprehensive set of data types, including JSON types, multidimensional arrays, and spatial geometries, allowing for more flexibility in data handling. It also supports stored procedures, triggers, and user-defined functions, providing extensive database customization and extension possibilities.
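As a small illustration of this flexibility (the table and column names are invented for the example), PostgreSQL lets you store, index, and query JSON documents directly:

```sql
-- Hypothetical table storing events with a free-form JSON payload
CREATE TABLE events (
    id      serial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- JSONB columns can be indexed with GIN for fast containment queries
CREATE INDEX idx_events_payload ON events USING GIN (payload);

-- Find events whose payload contains {"type": "login"}
SELECT id, payload->>'user' AS username
FROM events
WHERE payload @> '{"type": "login"}';
```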
The PostgreSQL community is very active and offers extensive support and resources, including discussion forums, detailed documentation, and a large selection of extensions and plugins that can be used to enrich database functionality.
Migrating to PostgreSQL might require some initial effort in terms of planning and data conversion, but it could be a long-term strategic choice to improve performance and make the most of the advanced features offered by this powerful DBMS.
Importantly, database migration is a complex process and requires a thorough assessment of your system requirements and specifications. Before making a decision, it's a good idea to perform load tests and carefully weigh the advantages and disadvantages of a migration. In any case, keeping the PostgreSQL option on the table may open up new opportunities to improve your database performance and better meet your current and future needs.
According to various tests and benchmarks, PostgreSQL performs up to twice as fast as MySQL and its forks. However, migrating to a new database management system (DBMS) is not a simple task and requires a thorough assessment of both your dataset and database schema, as well as any changes that may be required at the application level.
If you are using a proprietary application, it may be particularly beneficial to migrate, as you will have the ability to adapt the application code to take full advantage of PostgreSQL features. However, it is important to keep in mind that migrating to a new DBMS can be a daunting task and take time and effort. Before going down this path, be sure to carefully weigh the pros and cons based on your specific needs.
Slow queries and slow MySQL on WordPress (or other CMS)
When the aforementioned problem appears on an open source CMS such as WordPress, you often have no way to tell what actually happened on the application side that turned a site with a fast, snappy database into a lumbering pachyderm ten minutes later.
Perhaps a user hosted on our services has simply updated the latest two or three WordPress plugin releases without asking too many questions, because they are not a technician and have always done so without any problems whatsoever.
However, it happens more frequently than you might imagine that a poorly written plugin with flawed business logic causes serious damage and drastically impacts database performance, for example by writing junk into a shared WordPress table such as wp_options.
For a real example of what we are talking about, we invite you to read this case study from one of our clients.
Indeed, only by understanding what is happening on the application side can you explain CPU load problems that are in no way attributable to the database design.
Still having trouble with MySQL speed?
If you're facing performance issues with your SQL queries and your database isn't delivering the performance you want, we're here to help! We are experts in database performance optimization and can offer you tailor-made solutions to solve your speed problems.
We understand the importance of a high-performance database for your day-to-day business. Slow queries and long response times can slow down your system operations, causing delays and frustration for you and your users. It is imperative to ensure that your database is configured correctly and queries are optimized for maximum efficiency.
Our team of experts can perform detailed profiling of your database, analyzing problematic queries and identifying areas that require optimization. Using advanced tools like Percona Toolkit, EXPLAIN and other performance analysis techniques, we can pinpoint the queries that consume the most time and resources, identifying bottlenecks and opportunities for improvement.
Once the critical areas have been identified, we will develop a customized strategy to improve the performance of your database. We use query optimization techniques, intelligent indexing, data structure optimization and other advanced methodologies to optimize your database for fast response times and optimal performance.
Don't let performance issues slow you down! Contact us today and let our experts take care of your speed issues. We are here to offer you concrete and customized solutions to get the performance you want from your database. Don't waste your precious time, contact us and start solving your performance problems now!