If, like me, you have been a systems engineer since 2005, with thousands of servers and just as many distinct cases behind you, you will agree that the current state of the market is hard to bear. Otherwise, you might end up believing the many urban legends peddled by unscrupulous salespeople with no knowledge of the facts beyond that of mere profit. Working in IT has become demotivating and unrewarding, because the market is now steered from the top by marketing and by genuine urban legends spread by individuals with no qualifications or experience in systems administration. These salespeople often sell miracle solutions that promise unmatched performance and reliability, without understanding real technical and operational needs.
So let us start by listing these questionable statements and disproving each one with logical reasoning, documentation, and references. Our goal is to provide clarity and truth in a sea of misinformation, drawing on years of practical experience and in-depth industry knowledge. As new myths emerge to debunk, we will update this post. In doing so, we hope to contribute to greater awareness and technical competence, helping IT professionals make informed decisions and avoid the traps of deceptive marketing.
Shared hosts are slow.
Depends. Performance is determined primarily by the heaviness of the site being hosted, by how many other sites share the same machine, and by the resources those neighbouring virtual hosts consume. Clearly, sharing a server with other sites offers no guarantee of consistent performance, since another site hosted on the same machine could monopolize resources and starve ours. However, a shared hosting platform optimized with Varnish Cache, the right caching strategies and policies, a good TTFB, and fast protocols such as HTTP/3 over QUIC can be far faster and more efficient than a €1000-per-month dedicated server running Plesk or cPanel on top without any tuning or optimization. Power is nothing without control. With the right optimizations, shared hosting can deliver excellent performance, while a poorly configured dedicated server can be inefficient and slow. The key lies in optimization and resource management.
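To make "the right caching strategies and policies" concrete, here is a minimal Varnish VCL sketch of the kind of policy meant above. It assumes Varnish 6.x with the web server listening locally on port 8080; the paths, cookie names, and TTLs are purely illustrative and must be adapted to the actual application.

```vcl
vcl 4.1;

# Hypothetical backend: the real web server, listening locally on port 8080.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Never cache logged-in traffic or the admin area (illustrative patterns).
    if (req.url ~ "^/wp-admin" || req.http.Cookie ~ "wordpress_logged_in") {
        return (pass);
    }
    # Strip cookies from static assets so they become cacheable.
    if (req.url ~ "\.(css|js|png|jpg|jpeg|gif|svg|woff2?)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # Cache static assets for a day, anonymous pages for five minutes.
    if (bereq.url ~ "\.(css|js|png|jpg|jpeg|gif|svg|woff2?)$") {
        set beresp.ttl = 1d;
    } else if (beresp.ttl <= 0s) {
        set beresp.ttl = 5m;
    }
}
```

A policy like this is exactly why an optimized shared platform can beat an untuned dedicated server: most anonymous traffic never touches PHP or MySQL at all.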
If another hosting account on the same server is hacked, attackers can attack our site too.
Hosting with guaranteed resources is better than hosting in best effort.
Depends. If the choice is between guaranteed resources (minimum and maximum) of 1 core and 1 GB of RAM and a best effort plan, always prefer best effort hosting. True, you will have no guarantee on the minimum resources reserved for you, but it is equally true that in over 90% of cases the best effort solution will produce better figures and higher performance than a solution with dedicated but insufficient resources to run a site properly. In best effort hosting, resources are shared dynamically among all the users of the server, which often lets you benefit from performance peaks higher than anything a fixed-resource plan guarantees. This flexibility is particularly valuable for sites with variable workloads or occasional traffic spikes, where the system can temporarily allocate additional resources to keep performance high. Moreover, resource management in a best effort environment is optimized to maximize overall server efficiency, letting lighter websites use the spare capacity left by others and thus improving the user experience in terms of speed and responsiveness.
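The arithmetic behind this can be sketched with a toy model. The numbers below are invented purely for illustration: a site hits a traffic spike that needs 3 cores, on a shared host that happens to have 6 cores idle.

```python
# Toy model (illustrative numbers only): CPU actually obtained by a site on a
# 1-core fixed plan vs. a best effort plan on a shared host with idle capacity.

def cpu_obtained_guaranteed(demand_cores: float, cap: float = 1.0) -> float:
    """Fixed-resource plan: you never get more than your cap."""
    return min(demand_cores, cap)

def cpu_obtained_best_effort(demand_cores: float, idle_cores: float) -> float:
    """Best effort: you may burst into whatever capacity neighbours leave idle."""
    return min(demand_cores, idle_cores)

spike = 3.0  # cores needed during a traffic peak
print(cpu_obtained_guaranteed(spike))        # capped at 1.0 core
print(cpu_obtained_best_effort(spike, 6.0))  # bursts to the full 3.0 cores
```

The flip side, of course, is the case where the neighbours leave nothing idle; that is the trade-off the word "guarantee" is selling.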
With the Cloud you save because you only pay for what you consume.
Clearly false in at least 90% of real-world use. It is true that the Cloud has a pay-per-use model rather than a flat one (today both options actually exist), but the cost of the Cloud is normally at least four times that of the equivalent solution on a dedicated server. To be concrete: where on Amazon Lightsail you buy 2 vCPUs and 4 GB of RAM (plus outbound traffic charges), for the same price you can buy a dedicated server with 12 threads (the equivalent of 12 vCPUs) and 64 GB of RAM, with obviously higher I/O and system bus performance. The cloud offers flexibility, scalability, and ease of management, but these features often come with significant additional costs. Moreover, cloud resources are virtualized and shared, which can introduce overhead and performance limits compared to a dedicated server with its own physical hardware. For high-performance applications and consistently heavy workloads, dedicated servers therefore represent the more economical and higher-performing choice compared to cloud services.
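The comparison reduces to cost per unit of compute. The prices below are hypothetical placeholders, not quotes from any provider; plug in real price lists before drawing conclusions for your own case.

```python
# Back-of-the-envelope comparison with HYPOTHETICAL monthly prices (EUR):
# a small 2 vCPU cloud instance vs. a 12-thread dedicated server at the
# same monthly price, as in the example above.

def cost_per_vcpu(monthly_price_eur: float, vcpus: int) -> float:
    """Monthly price divided by the number of vCPUs/threads."""
    return monthly_price_eur / vcpus

cloud = cost_per_vcpu(40.0, 2)       # hypothetical 2 vCPU cloud instance
dedicated = cost_per_vcpu(40.0, 12)  # hypothetical 12-thread dedicated box

print(f"cloud: {cloud:.2f} EUR/vCPU, dedicated: {dedicated:.2f} EUR/vCPU")
print(f"cloud costs {cloud / dedicated:.0f}x more per vCPU")
```

And this ignores outbound traffic charges and the I/O gap, both of which usually widen the difference further in the dedicated server's favour.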
With the Cloud I can scale vertically from 1 CPU to 128 CPU with just one click.
Depends. With some top providers on the market such as AWS, Google Cloud, and Azure it is certainly possible, at rather prohibitive cost. Common sense should always reign supreme: if we buy Cloud without a real need for vertical scaling (increasing resources on a single instance), we are paying for an event that will never happen. Does it make sense to spend large sums for basic performance just because we might need to scale in the future? The answer is subjective and lies in a common-sense analysis of particular events (traffic peaks, the slashdot effect, Black Friday, and so on). It is important to carefully evaluate the current and future needs of your infrastructure and to consider whether the extra cost of the cloud, with its ability to scale rapidly, justifies the investment over more static, traditional solutions. Moreover, vertical scalability is only part of the equation: it is often more efficient and cost-effective to also consider horizontal scalability (adding more instances) to handle load increases.
The Cloud is more reliable than a Shared Hosting or a Dedicated Server.
Depends. Which Cloud? From which provider? With what virtualization technologies? What type and model of SAN? What backup and disaster recovery procedures? Do they replicate geographically across different regions? And if they do not by default, do you, as the system administrator? The Cloud can be infinitely less reliable than shared hosting or a dedicated server. It is true that cars have four wheels, but it is not true that four wheels make a car. The reliability of the cloud depends on multiple factors: the quality of the provider's infrastructure, the configuration of resources, network management, and the security measures implemented. Top-tier providers such as AWS, Google Cloud, and Azure offer robust infrastructure with high standards of redundancy and availability, but these services can be expensive and require careful configuration to reap the full benefits. Conversely, a well-configured and well-managed shared hosting plan or dedicated server can offer comparable or even higher reliability, especially when backed by a team of expert systems engineers applying best practices for security, backup, and disaster recovery.
SSH access must be denied because it exposes you to security risks.
Depends. SSH access with a non-root user, granted in an environment where user and group policies are correct, does not expose you to any security problem. However, on an outdated system it could give a malicious user a convenient path for privilege escalation: an attempt to climb to root and thus compromise the security of the entire server. Let us say that, to compensate for the incompetence of many so-called systems experts, providers prefer not to grant what ought to be a right. SSH security can be significantly improved by using SSH keys instead of passwords, enabling two-factor authentication, and restricting SSH access to specific IP addresses. In addition, keeping the system and its packages updated, monitoring access logs, and applying appropriate firewall rules further reduce the risks associated with SSH access.
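The hardening measures just listed map onto a handful of standard OpenSSH directives. A sketch of the relevant `/etc/ssh/sshd_config` lines (the `deploy` user is a hypothetical unprivileged account; IP restrictions are usually enforced in the firewall or with a `Match Address` block):

```
# /etc/ssh/sshd_config — hardening sketch, adapt before use
PermitRootLogin no              # never allow direct root logins
PasswordAuthentication no       # keys only: no password guessing
PubkeyAuthentication yes
AllowUsers deploy               # 'deploy' is a hypothetical non-root user
MaxAuthTries 3                  # cut brute-force attempts short
```

With a configuration along these lines, granting SSH access is a convenience for the customer, not a liability for the server.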
Hostings must always provide a control panel such as Plesk / cPanel or similar.
Do you really need one? Or do you just need access to your files and your MySQL database? cPanel and Plesk are general-purpose solutions that bring significant performance issues, and occasionally security issues, with them. On serious projects with real traffic, millions of page views per month, or millions in turnover, we always aim for maximum performance, and maximum performance is not achievable with panels like Plesk or cPanel. However, if what you want is maximum independence, perhaps to install hundreds of brochure sites, a control panel like Plesk or cPanel may well solve that problem. Control panels simplify management for less experienced users, but they introduce overhead and potential vulnerabilities that can compromise system performance and security. In high-performance environments, a leaner, customized setup is preferred, managed directly via SSH and specific server resource management tools, to guarantee maximum control and efficiency.
If I have a site for Italy, it is better to have an Italian IP in an Italian datacenter for SEO purposes.
Definitely false. What era are you living in, the dawn of 1996? This rule may have held until around the year 2000, but it has long since been superseded. Most successful Italian sites today are hosted in datacenter facilities in Germany or France. Today we no longer think in terms of countries but of continents. It is therefore enough for an Italian site to have a datacenter with good ping and low latency in Europe; whether it is in Italy, the Netherlands, or France does not matter. What matters is the TTFB (Time To First Byte), and while it is true that a TTFB from Italy versus Germany can differ by as much as 30 ms, we can only start talking about milliseconds once the big speed problems have been solved and the TTFB has been brought at least below 100 ms. Nowadays, unfortunately, people tend to choose unoptimized systems in Italy that remain slow, with TTFBs above the maximum 200 ms recommended by Google. It is therefore essential to focus on overall infrastructure quality and performance optimization rather than on the mere geographic location of the datacenter.
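Since the whole argument turns on TTFB, it is worth showing how simply it can be measured. The sketch below times the first response byte over a raw socket; for self-containment the demo targets a throwaway local server, but in practice you would point `measure_ttfb()` at your own site (or just use `curl -w '%{time_starttransfer}'`).

```python
# Minimal TTFB (Time To First Byte) measurement over a raw socket.
import socket
import threading
import time
from http.server import HTTPServer, SimpleHTTPRequestHandler

def measure_ttfb(host: str, port: int, path: str = "/") -> float:
    """Seconds from sending an HTTP request to receiving the first byte."""
    with socket.create_connection((host, port), timeout=10) as sock:
        request = (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                   "Connection: close\r\n\r\n")
        start = time.perf_counter()
        sock.sendall(request.encode())
        sock.recv(1)  # block until the very first response byte arrives
        return time.perf_counter() - start

# Demo against a throwaway local server (real use: your production host).
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ttfb = measure_ttfb("127.0.0.1", server.server_address[1])
server.shutdown()
print(f"TTFB: {ttfb * 1000:.1f} ms")
```

Run this against candidate datacenters from where your audience actually sits, and the "Italian IP" question answers itself in milliseconds.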
CloudFlare and CDNs speed up the website.
False, at least in the way CloudFlare and CDNs are commonly imagined to work. If you have an Italian site with a European audience (see the question above), CloudFlare will not improve anything; it could even worsen content delivery. If, however, we are dealing with an international site accessed at an intercontinental level, CloudFlare can certainly be a viable solution for reducing latency and accelerating content delivery. It also matters how CloudFlare is configured and which plan has been purchased, keeping in mind that the CDN functionality and the Full Page Cache functionality are not synonyms and serve completely different purposes. Correct configuration is essential to obtain the desired benefits: for example, enabling features such as dynamic caching, resource minification, and compression can make a big difference. So, while CDNs like CloudFlare can significantly improve the performance of a site with global traffic, their effectiveness depends on the specific configuration and context of use.
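One quick way to see whether CloudFlare is actually caching your pages, rather than merely proxying them, is the `cf-cache-status` response header CloudFlare attaches (values such as HIT, MISS, EXPIRED, BYPASS, DYNAMIC). A small sketch of such a check, operating on an already-fetched header dictionary:

```python
# Classify a response by CloudFlare's cf-cache-status header: HIT means the
# edge cache answered; DYNAMIC means CloudFlare proxied the request but your
# origin still did all the work (a common misconfiguration surprise).

def cloudflare_cache_state(headers: dict) -> str:
    """Return a human-readable description of the CloudFlare cache state."""
    lowered = {k.lower(): v for k, v in headers.items()}
    status = lowered.get("cf-cache-status")
    if status is None:
        return "not behind CloudFlare (or header stripped)"
    if status.upper() == "HIT":
        return "served from CloudFlare's edge cache"
    if status.upper() == "DYNAMIC":
        return "proxied by CloudFlare but NOT cached (origin did the work)"
    return f"behind CloudFlare, cache status: {status}"

print(cloudflare_cache_state({"CF-Cache-Status": "DYNAMIC"}))
```

Seeing DYNAMIC on your HTML pages is precisely the "CDN but no Full Page Cache" situation described above: the proxy is there, the speed-up is not.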
In any case, we have said and written much more about CloudFlare in this article.