Table of contents of the article:
Introduction
What is TCP Fast Open?
Goals of the post
Understanding how TCP works
TCP Fast Open: how it works
Implementation of TCP Fast Open
Limitations of TCP Fast Open
Conclusions
Introduction
User experience is one of the key elements in the success of any website, and loading speed is a crucial factor in how users perceive it. Website speed also has a significant impact on search engine rankings, as search engines give increasing weight to page speed in their ranking algorithms.
One of the important factors to consider when improving website speed is the TTFB (Time To First Byte), i.e. the time elapsed between the client's request and the arrival of the first byte of the server's response. A low TTFB helps increase loading speed and improves the user experience.
However, the connection setup between client and server can be a major contributor to TTFB. TCP's three-way handshake requires three messages, a full network round trip, before any application data can be exchanged, and this extra time increases TTFB and slows down the website.
To overcome this problem, TCP Fast Open was developed: a technology that reduces the time required to establish a TCP connection, improving network performance and website loading speed. With TCP Fast Open, application data can already be exchanged during the handshake phase: the client sends its request together with the SYN, and the server can start responding before the handshake completes, which reduces connection setup time and improves TTFB and website latency.
What is TCP Fast Open?
TCP Fast Open (TFO) is a performance optimization for the Transmission Control Protocol (TCP) that reduces the time it takes to open network connections. In practice, TFO allows application data to be carried during the connection-opening phase itself, without waiting for the outcome of the three-way handshake that normally takes place between client and server. This cuts connection-opening delays and improves overall network performance.
Goals of the post
The goal of this post is to provide a complete overview of TCP Fast Open, explaining how the technology works, its advantages and disadvantages, and how to enable it on servers and clients. In particular, the post is aimed at those who have a basic knowledge of the TCP protocol and want to learn more about the subject, for example to improve the performance of their network or to better understand the protocol's operating mechanisms.
This post is intended to be both introductory and practical: it is aimed at general readers as well as practitioners, such as systems engineers, who want to improve the efficiency and speed of TCP/IP on their network, particularly in contexts where HTTP/1.1 or HTTP/2 is used over TCP rather than HTTP/3 (QUIC), which runs over UDP instead of TCP.
Understanding how TCP works
The TCP communication model
The TCP protocol is one of the most widely used transport-layer communication protocols. It is based on a byte-stream communication model: data is sent and received as a continuous stream of bytes, without message boundaries. The protocol guarantees reliable communication, ensuring that data is received correctly and in order, through the use of sequence numbers and acknowledgments.
The three-way handshake
To establish a TCP connection between a client and a server, a handshake mechanism known as the "three-way handshake" is used. The client sends a SYN packet to the server, the server replies with a SYN/ACK packet, and finally the client sends an ACK packet to confirm the connection. This handshake ensures that client and server are in sync and ready to exchange data reliably, but it also costs a full network round trip before any application data can flow.
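To make this concrete, here is a minimal sketch in C of a conventional TCP client; the address 127.0.0.1 and port 8080 are hypothetical examples. No application data can be sent until connect() has completed the three-way handshake.
/* Conventional TCP client (Linux/POSIX): the three-way handshake
 * completes inside connect() before any request data can be sent. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                    /* hypothetical port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* One full round trip (SYN, SYN/ACK, ACK) is spent here. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Only now can the first request leave the client. */
    const char *req = "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n";
    write(fd, req, strlen(req));
    close(fd);
    return 0;
}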
Network congestion
The TCP protocol also includes a congestion control mechanism. In practice, the protocol continuously watches for signs of congestion, such as packet loss, to avoid injecting too much data into the network at once. If the network becomes congested, the protocol reduces its transmission rate to avoid making the situation worse.
In general, the TCP protocol was designed to guarantee highly reliable communication, but at the cost of some latency and throughput overhead, including the full round trip spent on the handshake before any data can be transmitted.
TCP Fast Open: how it works
The TFO three-way handshake
TCP Fast Open complements the traditional three-way handshake rather than replacing it. On the first connection to a server, the client asks for a TFO cookie in its SYN; the server generates one, typically derived from the client's IP address and a server-side secret, and returns it in the SYN/ACK. On subsequent connections, the client sends its SYN together with the cached cookie and the first application data. The server uses the cookie to authenticate the client's request, can deliver that data to the application immediately, and replies with its SYN/ACK, so useful work starts without waiting for the handshake to fully complete.
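As an illustration of the client side, on Linux a program can hand its first request to the kernel together with the connection attempt by calling sendto() with the MSG_FASTOPEN flag; when a valid cookie is cached from a previous connection the data travels in the SYN, otherwise the kernel silently falls back to a normal handshake. A minimal sketch, again using a hypothetical 127.0.0.1:8080 endpoint:
/* TFO client sketch (Linux): connect + send in a single call.
 * With a cached TFO cookie, the request data is carried in the SYN. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef MSG_FASTOPEN
#define MSG_FASTOPEN 0x20000000   /* Linux value, for older C libraries */
#endif

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                    /* hypothetical port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    const char *req = "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n";

    /* Performs the connect and queues the data; if a valid cookie is
     * cached, the data goes out in the SYN itself. */
    if (sendto(fd, req, strlen(req), MSG_FASTOPEN,
               (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("sendto");
        return 1;
    }
    close(fd);
    return 0;
}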
How TFO reduces connection opening delays
TCP Fast Open reduces connection opening times, as it allows the client to immediately send critical connection data, without having to wait for the outcome of the complete three-way handshake. This mechanism is particularly useful in case of repetitive requests between the same client and server, for example when downloading content from a website. This reduces connection opening delays and improves overall network performance.
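As a rough, hypothetical example: with a round-trip time of 100 ms between client and server, a standard connection spends about 100 ms on the handshake before the request can even be sent, so the first byte of the response cannot arrive earlier than roughly 200 ms after the connection attempt. On a repeat connection with TCP Fast Open, the request rides in the SYN, so that first 100 ms disappears and the TTFB for that connection drops by about one full round trip.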
In short, TCP Fast Open is an optimization of the TCP protocol that shortens connection setup, and on repeat connections this translates directly into lower latency and a better TTFB.
Implementation of TCP Fast Open
Operating system support
TCP Fast Open is supported by several operating systems, including Linux, macOS and Windows 10. On Linux, client-side support was introduced in kernel 3.6 and server-side support in kernel 3.7; since kernel 3.13, client-side TCP Fast Open has been enabled by default. In macOS, TCP Fast Open was introduced in version 10.11. In Windows 10, TCP Fast Open is supported starting with version 1607.
Enable TCP Fast Open on Linux
To enable TCP Fast Open on Linux, the operating system must have a kernel that supports this technology. Server-side support for TCP Fast Open was introduced in Linux kernel 3.7 (client-side support arrived in 3.6), so it is important to verify that your kernel is at this version or later.
Once the kernel version has been verified, TCP Fast Open can be enabled by changing the kernel settings via the /etc/sysctl.conf configuration file. To do this, add the following line to the file:
net.ipv4.tcp_fastopen=3
The value encodes which side of the connection is enabled: 1 enables TCP Fast Open for outgoing (client) connections, 2 for listening (server) sockets, and 3 for both. After saving the file, reload the settings with the sysctl -p command or reboot. This setting enables TCP Fast Open system-wide, allowing data to be exchanged between client and server during the TCP handshake phase.
It is important to note that, to use TCP Fast Open, the server software must also support this technology. For example, if you are using Apache as your web server, you can enable TCP Fast Open by adding the following line to its configuration file:
AcceptFilter http none
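What "support in the server software" means at the socket level is that the application sets the TCP_FASTOPEN option on its listening socket before calling listen(). The following is a minimal sketch in C, not a production server, using a hypothetical service on port 8080; the option value is the maximum queue of connections that have sent data in the SYN but have not yet completed the handshake.
/* Minimal TFO-aware server sketch (Linux): opt in with TCP_FASTOPEN
 * on the listening socket, then accept and read as usual. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_FASTOPEN */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int qlen = 16;          /* max pending TFO connections (hypothetical) */

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                   /* hypothetical port */

    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* Enable TCP Fast Open on the listening socket before listen() */
    setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));
    listen(fd, 128);

    /* Data that arrived in the SYN is readable as soon as the
     * connection is accepted, without waiting for the final ACK. */
    int client = accept(fd, NULL, NULL);
    char buf[1024];
    ssize_t n = read(client, buf, sizeof(buf));
    if (n > 0)
        printf("received %zd bytes at accept time\n", n);
    close(client);
    close(fd);
    return 0;
}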
To enable TCP Fast Open on NGINX, you need to verify that your version of NGINX supports this technology: the fastopen parameter of the listen directive was introduced in NGINX 1.5.8.
Once you have verified your NGINX version, you can enable TCP Fast Open by adding the fastopen parameter to the listen directive of your server block, for example:
listen 80 fastopen=256;
With this setting, NGINX accepts request data that arrives during the TCP handshake phase and can start processing it immediately, improving network performance and website speed.
It is important to note that, as on Linux in general, the client must also support TCP Fast Open and present a valid cookie to take full advantage of this technology; otherwise the connection simply falls back to the standard three-way handshake and TFO provides no benefit.
The number passed to the fastopen parameter is NGINX's main tuning knob for TCP Fast Open: it limits how many connections that sent data in the SYN may sit in the queue before completing the handshake, which protects the server from resource exhaustion.
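If you want to check from a client program whether data was actually accepted in the SYN, Linux exposes this through the TCP_INFO socket option. A minimal sketch, assuming fd is an already-connected socket such as the one from the client example earlier:
/* Returns 1 if the data sent in the SYN was accepted by the peer,
 * i.e. TCP Fast Open was actually used on this connection. */
#include <linux/tcp.h>     /* struct tcp_info, TCP_INFO, TCPI_OPT_SYN_DATA */
#include <netinet/in.h>    /* IPPROTO_TCP */
#include <sys/socket.h>

int tfo_was_used(int fd) {
    struct tcp_info info;
    socklen_t len = sizeof(info);
    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) < 0)
        return 0;
    return (info.tcpi_options & TCPI_OPT_SYN_DATA) != 0;
}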
Limitations of TCP Fast Open
TCP Fast Open has some limitations that restrict its use in certain scenarios. Because the data carried in the SYN can be replayed by an attacker, requests sent this way should be idempotent, and TFO is not recommended for connections to untrusted services, where spoofing or man-in-the-middle attacks are a concern. Some firewalls and middleboxes also drop or mangle SYN packets that carry data, in which case the connection falls back to a normal handshake. Finally, TCP Fast Open requires support on the server, in both the kernel and the application, otherwise it cannot be used.
Conclusions
Advantages of TCP Fast Open
TCP Fast Open offers several benefits in terms of network performance. In particular, it reduces connection-opening times by removing a round trip from connection setup on repeat connections, which improves overall network performance. This results in increased efficiency for applications that use the TCP protocol, especially those that require fast and frequent communication between clients and servers.
Challenges in implementing TCP Fast Open
Despite the benefits, implementing TCP Fast Open comes with some challenges. In particular, as we have seen, it requires support on both the client and the server (in the operating system and, on the server, in the application), which may limit its usefulness in some scenarios. Using TCP Fast Open can also raise security considerations, especially when connecting to untrusted services, since data sent in the SYN can be replayed.