In this article we will look at the requirements and priorities of affiliate marketers, and at some characteristics that are unfortunately overlooked by ordinary hosting providers who know neither the affiliate world nor the technical problems and needs of every good affiliate.
In truth, the problem lies not only with hosting companies but also with affiliate marketers themselves, who are often not technical figures but advertising experts (FB Ads, Google AdSense, Outbrain and the like): they know how to measure and decide, but without going deep into the technical fundamentals underpinning the profession of the professional affiliate marketer.
If you spend 100 euros a day on advertising, squeezing out an extra 5% ROI may be rather irrelevant; but if, like some of our customers, you move hundreds of thousands of euros a day in advertising, that 5% is the difference between an unprofitable, a profitable and an extremely profitable business.
Since 2017 we have worked with Affiliation Park as systems engineers, and we are currently suppliers to top Italian affiliates (whose names we cannot disclose for obvious reasons of confidentiality and privacy); suffice it to say that they are the ones setting the records on WorldFilia.
In short, we know the sector very well: the players involved, the cases, the needs, the dirty tricks that affiliate marketers suffer (and inflict), from DDOS attacks to fake leads. And we have built a hosting service aimed precisely at this extremely delicate and demanding niche, given the budgets at stake.
A brief review is a must for anyone who is not an expert in the sector and does not know what affiliate marketing (or lead generation, as we like to call it) is.
What is Affiliate Marketing?
Affiliate marketing is the process of earning a commission by promoting other people's (or companies') products. You find a product you like, promote it to others, and earn a portion of the profit for each sale you make.
Promoting a product means, of course, "showing it" in an attractive way to a potential buyer and, through communication, copywriting, images and creative assets of all kinds, persuading them to purchase.
For every sale we generate, we receive a commission, normally around 33% of the amount sold.
Part goes to the producer, part to the affiliate platform (WorldFilia, for example), and part to the affiliate.
Naturally, to bring products to an audience, the affiliate tries to target and profile that audience and show it a relevant, on-target ad.
Quite often, Facebook Ads are used to drive traffic, with an ad much like the ones you see in your feed every day.
Facebook, in other words, is a traffic source in every respect.
That is, it is the paid medium through which our promotional messages are conveyed and through which the user is invited to our website or landing page to complete the sale.
Obviously, if the site is slow, does not open, or does not render, there will be no sale: just a paid click on Facebook that never turns into a conversion.
Briefly: you create an ads campaign, select your ideal audience and set your daily budget. Facebook will show you the estimated reach of your ads. You can choose to show your ads to certain people, filtering them by age, gender, location and interests.
Once the campaign has started, it will begin to reach users. You will obviously incur the cost of the Facebook campaigns, offset by the commissions you earn.
If the campaigns cost you 1,000 euros and you earned 1,000 euros in commissions, your ROI was 0% (you broke even); if you spent 1,000 and earned 2,000, your ROI is 100%.
It is fair to say that on large volumes and large amounts spent on ads, a good average return is between 25% and 33%.
Now imagine that, due to a very trivial error (cost of the fix: 4 euros), 10% of the "housewives" cannot view or use your site (while you obviously still pay Facebook for the FB Ads clicks).
Imagine that due to another mistake you lose 5% of conversions, always on the same campaign and on the same audience.
Also imagine that due to 2 other small mistakes, you lose another 5% of conversions.
Back-of-the-envelope math: you have lost roughly 20% of conversions. That is 20% of sales. That is 20% of earnings.
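Those three losses compound multiplicatively, so the arithmetic of the example above can be sanity-checked in one line (the 10% + 5% + 5% figures are the hypothetical ones from the example):

```shell
# Each error lets only a fraction of visitors through:
# a 10% loss leaves 0.90, each 5% loss leaves 0.95.
awk 'BEGIN {
  surviving = 0.90 * 0.95 * 0.95                      # fraction of conversions left
  printf "Conversions lost: %.1f%%\n", (1 - surviving) * 100
}'
# prints: Conversions lost: 18.8%
```

The exact figure is about 18.8%, which rounds to the 20% above: three "small" mistakes are enough to wipe out almost a fifth of your sales.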
The ideal site for any Affiliate Marketer
An affiliate marketer would seldom bother with technical aspects if the site converted well enough. What really matters in this kind of business are the earnings and the profits.
As trivial as the definition may sound, the ideal site for every affiliate marketer is a site that sells: a site that converts traffic into sales and gives you a positive return on investment, that is, profits rather than losses.
But since money does not fall from the sky and no sales funnel is perfect (there is always room for improvement), affiliate marketers become real bloodhounds at sniffing out every problem and anomaly in the sales funnel, helping us determine the hosting best practices that suit their needs.
But what does this mean? It means that a site must be fast, extremely fast, and compatible with every device (remember Android 6 on the old Samsung Galaxy of the 60-year-old housewife to whom you are offering the hallux valgus cream?) and every platform.
Likewise, it must work very well with affiliate platforms, social networks and remarketing tools such as Google Tag Manager, without suffering a high drop in click tracking.
Obviously uptime must be as close to 100% as possible, and you need 24/7 assistance on critical issues such as Layer 3 or Layer 7 DDOS attacks, with the ability to filter targeted attacks of this kind within 15 minutes.
The most common hosting mistakes for those who do Affiliate Marketing
We will leave out all aspects of a successful campaign in this chapter.
The creativity, the copy, the colors, the position of the buttons and calls to action, the fonts to choose, and so on: after all, we are performance-oriented systems engineers and, God forbid, we don't claim to be lead generation experts.
But we do want to list the main mistakes commonly made by affiliate marketers, often completely unwittingly, since no one has ever pointed them out and they are not always easy to notice.
There are quite a few points to cover, so the rest of the article is structured as follows: a bulleted list, each item with its own title, a more or less extensive description of the problem and, where appropriate, a link to a dedicated in-depth article on our blog.
This way you can use this guide as a handbook, with points to tick off whenever you bring into production a lead generation project based on websites (perhaps WordPress) or self-hosted landing pages built with the standard combination of WordPress plus a WYSIWYG page builder like Elementor, Brizy, Divi, Beaver Builder or similar.
In any case, we will treat the topics on a purely theoretical level, reminding you that what is said here applies in general to every website online, whether it is a site for affiliate marketing, lead generation or anything else.
1. Low Uptime and Frequent Downtime.
Having a site that is frequently down means wasting your advertising budget; campaigns also risk being paused because the destination site is unreachable, and it puts you in the psychological condition of never living peacefully. At a minimum you need a hosting provider with a declared uptime of at least 99.99% and a proactive monitoring system that reports every downtime, such as Uptime Robot or, if you prefer to do it yourself, the excellent Uptime Kuma, which we reviewed here: Uptime Kuma, Open Source and Self Hosted alternative to Uptime Robot and Status Cake.
If you notice that your hosting provider tends to have a lot of downtime, both in frequency and in duration, seriously consider changing provider. In 2022 the standards are very high, and finding a network infrastructure that guarantees the uptime indicated above is not difficult at all.
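For reference, the downtime budget implied by an uptime percentage is easy to work out; this quick sketch converts a few common SLA figures into minutes of allowed downtime per year:

```shell
# Minutes of allowed downtime per year for common uptime SLAs
for sla in 99.9 99.99 99.999; do
  awk -v up="$sla" 'BEGIN {
    minutes = 365.25 * 24 * 60 * (1 - up / 100)   # 525,960 minutes in a year
    printf "%s%% uptime -> %.1f minutes of downtime per year\n", up, minutes
  }'
done
```

At 99.99% that is less than an hour per year; if your provider's real-world downtime is measured in hours, the declared SLA is just marketing.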
2. Availability of technical assistance
If you don't already, start taking Murphy's Law into account: if something can go wrong, sooner or later it will. In short, consider the worst thing that could happen to your website or landing page: downtime, a hacker attack, a defacement can occur on any day, at any time. Make sure you have a highly competent technical contact who can assist you with serious, blocking problems at any hour, even at lunchtime on Christmas Day or on New Year's Eve, to be clear. Favor a managed systems service over an ordinary webmaster: in the event of serious problems, especially networking or hardware ones, a systems administrator is far more likely to save the day than a web developer. If you have both figures at your disposal, good for you: "Two is better than one" (cit.).
Also make sure you can escalate to second or third level support within 15 minutes: that is, be able to speak not only with the helpdesk of the company hosting you, but to reach a technical figure who can understand and, above all, solve the problem within minutes. Support that answers 24/7 is certainly useful, but a prompt resolution matters even more than a prompt reply.
There are managed services like ours in which you have 24-hour availability of technical figures and senior Linux systems engineers.
It costs slightly more than classic hosting, but in times of need you will appreciate the difference.
3. Lack of DDOS protection and mitigation tools at network and application level
Unfortunately, the affiliate marketing environment is a problematic one, a very problematic one. Like the world of online gaming, the affiliate world is targeted by rather elaborate DDOS attacks. A successful affiliate marketer must assume that sooner or later they will be the victim of attacks of this type, both at the network level and at the application level.
It is therefore important that your hosting has adequate technological measures in place to limit and mitigate the damage and reject every attack attempt.
As for network attacks at Layer 3 of the ISO/OSI stack, make sure your provider has the right countermeasures, in addition to the volumetric capacity to absorb the attack.
We, for example, despite being a de facto independent vendor, always prefer datacenters that partner with Arbor Networks (now Netscout). The advantages are many: for example, attacks of several hundred gigabits can be mitigated in a completely automated way without our having to do anything.
In short, we limit ourselves to taking note of the e-mail notifications telling us when the attack begins, when the automatic mitigation systems kick in, and when the attack ceases.
For DDOS attacks at Layer 7 of the ISO/OSI model, the application-level ones, things are different: you will rarely come across automated mitigation systems, because you must first analyze the attack (or attacks, plural) and its patterns, and only then apply the appropriate filtering rules.
First of all, you need analytical skills and a proven method, as well as the right tools. As far as the analysis is concerned, you obviously need a supplier who knows how to analyze, that is, understand the scope and type of the attack and its geographical origin, and then knows how to deploy firewall rules on our tool of choice, namely CloudFlare.
We've talked about CloudFlare well in several articles, including this one: DDOS attacks and extortion of payments in Bitcoin? How to protect yourself with CloudFlare
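To give a concrete idea of what such filtering rules look like, here is a purely hypothetical CloudFlare firewall rule expression (the path, the country and the User-Agent check are placeholder assumptions to be adapted to the attack pattern actually observed):

```
(http.request.uri.path contains "/landing"
 and ip.geoip.country ne "IT"
 and not http.user_agent contains "Mozilla")
```

Paired with a "Block" or "Managed Challenge" action, a rule like this cuts off the bulk of an unsophisticated Layer 7 flood while leaving the campaign's real audience untouched.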
The alternative to DDOS mitigation would be to suspend all campaigns, rush to a hosting provider like us, migrate everything urgently (expect at least €500 just for the urgency surcharge in the onboarding phase of a new customer) and restart, having lost sales and budget.
Obviously, among the selection criteria, you must also weigh the cost, including the hidden costs, of DDOS mitigation solutions. There are companies out there charging $500 to $1,000 an hour for managed DDOS mitigation. That may be fine for a multinational or a company of a certain size, but it is simply unsustainable for smaller, less structured companies without budgets of that magnitude.
Just think that in our managed service DDOS management is included at no additional cost as an added value for the end customer.
4. High DNS Latency
We wrote an entire article about choosing efficient high-speed, low-latency Nameservers: The importance of fast Authoritative DNS for your website speed.
Briefly, a DNS resolution time of 10 ms instead of 200 ms means saving almost 0.2 seconds, which adds up with all the other factors and timings. If you also serve problematic 3G connections in areas with less than ideal coverage (have you ever seen those nineteenth-century Italian buildings with half-meter-thick solid brick walls?), fast DNS is always a good thing.
We refer you to the article above, closing this chapter with one piece of truly disinterested advice: use CloudFlare's nameservers.
5. Lack of TCP BBR for TCP Congestion Control
What if we told you that in 2022 most TCP/IP connections are handled by a congestion control algorithm, CUBIC, that still reasons like the loss-based algorithms designed in the 1980s?
We have written a dedicated article on this topic too: BBR TCP: the magic formula for network performance.
In the 1980s there was hardly any consumer networking to speak of, let alone Wi-Fi. The congestion control algorithms of that era were therefore designed on the assumption that everything was connected by cable: if there was a communication delay or packet loss, presumably the cable was degrading the signal and could not sustain the rate initially negotiated, so the algorithm renegotiated the transmission rate between the two hosts, lowering it until errors and packet losses dropped to zero or nearly zero. CUBIC, the default still in use today, inherits exactly this loss-based logic.
Today things have changed: it's all wireless, Wi-Fi, 3G, 4G, 5G. A packet loss or an increase in latency no longer necessarily means that "the cable cannot sustain that speed", so there is no point in renegotiating the speed unless it is really necessary. TCP BBR is a congestion control algorithm written by Google that squeezes the most out of a connection by avoiding unnecessary renegotiation of the nominal speed.
Let's take a practical example: you are on a 3G connection in a lousy corner of the basement where you moved the office for the summer to stay cool, yet you still manage to get 1 megabit of bandwidth: little, but not nothing. Latency climbs to 500 ms instead of the usual 10 ms, and the server hosting the latest made-in-China ab electrostimulator sponsored by Cristiano Ronaldo decides that, since you are "slow", it should renegotiate the speed down to 0.1 megabit, i.e. 100 kbit per second.
The site becomes even slower, you lose patience, you close the page, and the site loses a sale.
Now imagine that, instead of CUBIC, the server had used TCP BBR. It would have reasoned differently, understood that it could keep the connection at one megabit instead of mistakenly dropping it to 0.1; you would not have abandoned the slow site and it would have made one more sale.
TCP BBR matters not only for miserable 3G connections like the one in the example, but for all wireless connections (Wi-Fi, 3G, 4G, 5G) that can renegotiate their speed at any moment. 100 megabits is twice 50 megabits, and always lets you download twice as fast.
Although TCP BBR was conceived and designed by Google and is used on all its products as well as on Google Cloud, enabling it today is very easy on any kernel from 4.9 onwards.
Make sure your hosting provider is using it, in their interest and yours. We enable it ALWAYS, as standard practice.
TCP BBR, in short, always.
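Enabling it boils down to two sysctl settings; a minimal sketch (the file path follows the usual sysctl.d convention, adapt it to your distribution):

```
# /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Apply with:  sysctl --system
# Verify with: sysctl net.ipv4.tcp_congestion_control
```

The fq queueing discipline is the pacing companion traditionally recommended alongside BBR.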
6. Lack of BROTLI compression
Brotli is a compression algorithm developed by Google, which is used to reduce the size of files sent to web browsers. The algorithm was released as open source in September 2015, and has since been adopted by most major browsers.
Brotli can compress files up to 20% smaller than gzip. This might not sound like much, but every kilobyte counts when it comes to site speed. The figure comes from a Google study that tested Brotli against gzip on a number of different websites and found Brotli consistently better.
Brotli's advantage over gzip is its built-in dictionary: it only needs to send references into that dictionary instead of the full text, which improves compressibility, especially on files containing a lot of repeated text. The downside is that Brotli can cost more CPU time to compress, but that is a relatively minor consideration given how cheap CPU time is.
If you want to check the Brotli compression of a site we recommend this Online Tool: https://tools.keycdn.com/brotli-test
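On the server side, a typical NGINX setup looks like the sketch below; note that it assumes the ngx_brotli module is compiled in or loaded, since stock NGINX does not ship it by default:

```
# nginx.conf -- requires the ngx_brotli module
brotli            on;
brotli_comp_level 5;    # 1-11: higher means smaller files but more CPU
brotli_static     on;   # serve pre-compressed .br files when present
brotli_types      text/plain text/css text/xml application/json
                  application/javascript image/svg+xml;
```

Browsers that do not advertise br in their Accept-Encoding header simply fall back to gzip, so Brotli can be enabled safely alongside the existing gzip directives.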
7. Lack of HTTP/2
If we wanted to associate a song with the concept of HTTP/2, I would choose "I Want It All" by Queen, especially the refrain "I want it all, and I want it NOW".
HTTP/2 (originally named HTTP/2.0) is a major revision of the HTTP network protocol used by the World Wide Web. It was derived from the earlier experimental SPDY protocol, originally developed by Google.
HTTP/2 allows more efficient use of network resources and reduces perceived latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection. It also introduces unsolicited pushes of representations from servers to clients.
The primary goal of HTTP/2 is to reduce latency by enabling full multiplexing of requests and responses, minimizing protocol overhead through efficient compression of HTTP header fields, and adding support for request prioritization and server push.
Compared to HTTP/1.1, the main differences are:
Multiplexing of requests over a single TCP connection versus multiple parallel connections in HTTP 1.1;
Binary framing instead of textual headers;
Huffman encoding for more efficient encoding of textual data;
Server push mechanism that allows a server to send additional responses before receiving a corresponding request from an endpoint;
Header compression using HPACK to reduce overhead.
The new version makes it possible to obtain:
- better performance,
- requests and reception of multiple objects in a single TCP connection thanks to multiplexing,
- lower latency,
- compression of HTTP headers,
- a more efficient use of resources.
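In NGINX (versions current in 2022), enabling HTTP/2 is a one-word change on the TLS listener, since browsers only speak HTTP/2 over HTTPS; a minimal sketch with placeholder domain and certificate paths:

```
server {
    listen 443 ssl http2;        # "http2" on the listen line is the whole switch
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;
}

# Verify from a client:
#   curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com
```

If the curl command prints 2, the negotiation is working.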
8. Lack of use of WebP images
We all know how important images are in communication ("a picture is worth a thousand words"), but few know that the same image can weigh 1 MB or 100 KB, i.e. ten times less.
This is especially true of high-resolution PNG images that could easily have been served in optimized WebP format.
You can think of the .WebP format as any other image format (JPG, GIF, PNG…) but with superior compression and quality characteristics. In other words, the .WebP format images can be smaller in size while remaining at a high level of quality.
The main benefit of using .WebP is that you can have high quality images with smaller sizes, which is great for your website performance. The smaller size means faster load times and more traffic to your website as users don't need to wait a long time to see the content.
The quality of a WebP image is very high: in a previous tutorial we used it on a wedding photographer's site without degrading the images in any way, saving precious megabytes: Decrease the weight of a website with WebP images
There will always be someone who says it is impossible to use WebP because Safari and iOS do not display it for compatibility reasons, or because CloudFlare's Free plan does not let you serve it successfully. These are rather widespread urban legends, which only mean they don't know how to do it, not that it can't be done; so feel free to contact us.
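The standard workaround for the compatibility objection is content negotiation: serve the WebP variant only to browsers that declare support in their Accept header, and the original JPG/PNG to everyone else. A hedged NGINX sketch, assuming a .webp copy has been generated next to each original image:

```
# http {} context: map browser support to an optional file suffix
map $http_accept $webp_suffix {
    default   "";
    "~*webp"  ".webp";
}

# server {} context: try image.png.webp first, then fall back to image.png
location ~* \.(png|jpe?g)$ {
    add_header Vary Accept;
    try_files $uri$webp_suffix $uri =404;
}
```

Old Safari or Android builds that do not send image/webp in their Accept header simply receive the original file, so nobody is locked out.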
9. Lack of adequate sizing of hardware resources
If we talk about self-hosted solutions, we are certainly thinking of content management systems such as WordPress and its plugins. Of course, we can't claim that WordPress is based on modern, fast technologies. Comparing modern asynchronous technologies like Node.js and Golang and NoSQL databases like MongoDB with PHP and MySQL is a bit like comparing a supersonic jet with a small car like the Fiat Punto. However, the Fiat Punto is within everyone's reach, both in cost and in ease of driving, so it is no surprise that there are more Fiat Puntos than supersonic jets in the world, nor that most self-hosted landing page solutions are built on WordPress.
However, there is a price to pay: correct hardware and software sizing (we will get to the software side later). When choosing your instance (dedicated or virtual), it is easy to buy something undersized for your real needs, which can vary widely depending on the plugins used and the amount of traffic you manage to bring to the site.
To work well, with the right memory reserved for the various software caches, the DB caches and the execution space of the PHP interpreter, you should NEVER go below a configuration with 4 cores, at least 8 GB of RAM and at least SSD storage, better still NVMe.
So if you plan to go with Amazon AWS, for example, budget at least $80 per month for the instance ALONE. The systems engineer will ask you for roughly the same again, for a monthly expense of about $150.
If you opt for different providers such as Hetzner.de (our reference provider for around 90% of the services offered) the costs are vastly different and much cheaper.
Obviously, at least in theory, Amazon AWS services are the non plus ultra in terms of availability and uptime, so if you really want to avoid (again, theoretically) even those 5 minutes of downtime a year, the only solution on the market today seems to be Amazon AWS. We say "in theory" because in practice we have seen Amazon AWS down for over 4 hours, taking down all of its most important customers.
However, choosing AWS, at least for a manager, means making the market's choice, i.e. the choice anyone would have made, and it is a great insurance policy in the event of a service failure: you can defend yourself by saying "I chose Amazon: I made the market's choice".
Which basically sounds a bit like saying that no one has ever been fired for buying Microsoft.
Some novice affiliate marketers will surely be tempted to cut costs by choosing shared hosting. Well, there could be no worse choice than mixing a demanding, high-performance business with the sites of the little bar around the corner, the laundry, and the blog of Mrs. Pina from Mostrapiedi, in the province of Macerata, who tells us about her four furry friends.
You have to choose dedicated instances. Whether VPS, cloud or dedicated servers, in any case one IP dedicated to you and fully available computing power, without the limitations of unscrupulous services that take your site offline at the first traffic peak.
10. Lack of adequate tuning of the PHP interpreter
A 1,000-page book could be written on this topic alone. First of all, use the latest PHP version that is compatible with your site and does not produce runtime errors. To date, for example, PHP 8.2 still seems incompatible with most WordPress plugins; the best choice is therefore to stick with PHP 7.4.
Make sure you have a PHP caching system to speed up code execution. The current standard is Zend OPcache, shipped with all recent PHP versions; you can find a description and explanation in this article: Zend OpCache. How to speed up PHP?
Another aspect you should worry about is using the fastest possible PHP process spawn method. PHP-FPM offers three predefined modes:
- On Demand
- Dynamic
- Static
The first two modes introduce a lag at each request while the PHP interpreter process is spawned; the static mode, with processes already spawned and waiting to serve new requests, is therefore certainly faster than the other two.
As you can see from the benchmark below, as the number of concurrent requests increases, the gap between static mode and the other modes can reach the order of 0.2 seconds (which, in performance terms, is an eternity).
If you are a DIY daredevil and use off-the-shelf panels like Plesk or cPanel, take care to change the settings and switch the process spawn mode to static.
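In PHP-FPM terms the switch lives in the pool configuration; a minimal sketch (the path and the worker count are illustrative and must be sized to your available RAM):

```
; /etc/php/7.4/fpm/pool.d/www.conf
pm = static
; fixed number of always-alive workers: roughly
; (RAM reserved for PHP) / (average memory per PHP process)
pm.max_children = 16
```

With pm = static no request ever pays the process start-up cost; the trade-off is that the workers hold their memory permanently, which is one more reason the instance sizing from point 9 matters.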
11. Lack of adequate tuning of the MySQL DBMS.
Obviously if you are using WordPress, you will also use a Database like MySQL or its forks and derivatives (MariaDB, Percona Server for example).
To get good MySQL performance you should use an ad hoc configuration, expertly customizing my.cnf.
The innodb cache, the number of threads, the management of joins in memory rather than on disk and many other parameters that determine the speed and efficiency of a database must be set appropriately.
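As a purely illustrative sketch of what such a tuning touches (every value here is an assumption for a 4-core / 8 GB instance and must be adapted to your real workload):

```
# my.cnf -- [mysqld] section, illustrative values only
[mysqld]
innodb_buffer_pool_size = 4G       # the key setting: keep hot data and indexes in RAM
innodb_log_file_size    = 512M     # a larger redo log smooths out write bursts
innodb_flush_method     = O_DIRECT # avoid double buffering through the OS page cache
max_connections         = 200
tmp_table_size          = 64M      # keep implicit temp tables (joins, sorts) in memory
max_heap_table_size     = 64M
```

Getting these numbers wrong in either direction costs performance, which is why this tuning is best left to a systems engineer who has looked at the actual workload.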
Furthermore, you need to make sure your database does NOT use MyISAM tables but the modern, better-performing InnoDB tables. In short, when you load a database, your systems administrator should also optimize it and, if necessary, convert those tables to the new format.
MyISAM was the default storage engine from MySQL 3.23 (and remained so until MySQL 5.5), with great read performance. Every MyISAM table is stored on disk in three files whose names begin with the table name and whose extensions indicate the file type: a .frm file stores the table format; the data file has a .MYD (MYData) extension; and the index file has a .MYI (MYIndex) extension.
InnoDB supports transactions with commit, rollback and crash-recovery capabilities to protect user data. InnoDB's row-level locking (without escalation to coarser-grained locks) and Oracle-style non-blocking consistent reads increase multi-user concurrency and performance. InnoDB stores user data in clustered indexes to reduce I/O for common queries based on primary keys.
InnoDB is a fully ACID-compliant database engine. ACID stands for Atomicity, Consistency, Isolation and Durability: four basic concepts of database management that the InnoDB storage engine respects.
The main advantage of InnoDB over MyISAM is row-level locking: multiple users can work on the same tables at the same time without blocking one another. Another advantage is crash safety: thanks to its transaction log, InnoDB recovers automatically after a crash, whereas a corrupted MyISAM table must be repaired manually and remains unusable for every user of that database in the meantime.
12. Lack of a static cache such as Varnish Cache or NGINX's FastCGI Cache
If you are trying to optimize and speed up your WordPress, WooCommerce or other site, you will surely have come across posts and advice about installing cache software or plugins.
If you were lucky, you will have heard some professionals mention Varnish Cache, as opposed to largely useless plugin-side caches or the simpler, less capable NGINX FastCGI Cache. The latter is a promising cache, easy to install and configure; however, it has some far-from-trivial issues, at least in the free version of NGINX, unlike NGINX Plus.
Varnish Cache is server-side software written in C and therefore extremely fast. It was conceived and developed with the best concepts and practices of software engineering in mind, such as dynamic memory management, threads, shared memory, standard POSIX sockets and many other techniques that would be effectively IMPOSSIBLE in an interpreted language such as PHP.
While the PHP caches mentioned above only kick in after the PHP cache code has run, and therefore after a PHP interpreter process has been spawned via a PHP-FPM (FastCGI Process Manager) worker, Varnish responds immediately, without invoking PHP or spawning new threads and processes at all, saving a great deal of CPU; under very high web traffic of thousands or tens of thousands of visitors, those operations can be very heavy.
If you want to understand Varnish better, you can read articles like this one: Varnish hosting
Today several hosting providers claim to use Varnish, but many use it in a non-functional way, with a very low HIT ratio that is completely useless for traffic from Facebook advertising campaigns.
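To give an idea of why the HIT ratio depends entirely on configuration, here is a minimal, hedged VCL fragment for a WordPress landing page (a sketch, not a complete default.vcl: backend definition and cache purging are omitted):

```
vcl 4.1;

sub vcl_recv {
    if (req.http.Cookie ~ "wordpress_logged_in") {
        # logged-in users (admins, editors) must bypass the cache
        return (pass);
    }
    # anonymous visitors: strip cookies so their requests become cacheable
    unset req.http.Cookie;
}
```

The whole game is played on cookies: if they are not stripped for anonymous traffic, every request looks unique, the HIT ratio collapses, and Varnish is reduced to exactly the non-functional setup just described.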
13. Using SSL certificates like Let's Encrypt that don't work on old devices.
Imagine designing the landing page for the bunion product and targeting all Italian housewives over 50 in your FB Ads campaign. Now imagine that a share of them, say roughly 5%, use an inexpensive and somewhat elderly Android device. Now imagine that the housewife Carmela, intrigued by the ad and its miraculous premises and promises, clicks on it to make the purchase. You use Let's Encrypt because it is free and saves you the 10 euros of an SSL certificate for HTTPS; she uses an old Samsung because she, like you, prefers to save.
Do you know what happens? Your site throws an error, because a root CA certificate in the Let's Encrypt chain (DST Root CA X3) expired back in September 2021; so the HTTPS certificate that works correctly on every other device generates a security warning on the phone of Mrs. Carmela from Palermo.
So goodbye leads, goodbye sales, goodbye return on investment, goodbye advertising budget. Let me be clear: 3-5% may be an irrelevant figure you could choose to ignore; but certain products, aimed at certain buyer personas, are more likely than others to run into outdated devices. And to think that it would have been enough to install a Domain Validated SSL certificate such as RapidSSL, Verisign and the like to solve the problem for good.
14. Having a high drop in Facebook campaigns due to the JS delay of optimization scripts.
It is common to use plugins like WP Rocket to push Core Web Vitals scores. However, few people know that careless use of the Delay JS feature has the side effect of not firing the Facebook pixel, generating a cascade of reactions that can end with the campaign being deactivated due to a high drop rate.
The topic of Delay JS and its harmful effects on a Facebook campaign cannot be covered in a single paragraph of this post; we have therefore written a dedicated article that explains, to newbies and experts alike, the problems you can run into when chasing a high Google PageSpeed score without taking into account the techniques being used, such as Delay JS: Delay JS to optimize the Core Web Vitals and low performance of Facebook ADS campaigns.
The philosophy to adopt in improving the performance of landing pages for affiliate marketing is not to chase a single 50% improvement, but to apply all the best practices that each bring a small improvement and, added together, produce significant gains in speed and user experience.
It may seem obvious, but this kind of optimization cannot be done with magic formulas or automation: it requires manual optimization of each individual site, which even in the best case takes 30 to 60 minutes.
If you are planning an advertising campaign at scale, with a serious budget to invest, you should evaluate all the aspects discussed so far, in order to obtain the best performance and maximize conversions and profit.