HTTP/2 Server Threads

Starting with sgcWebSockets 2024.2.0, the handling of incoming HTTP/2 requests on the server has been improved. Now, by default, when the server receives a new HTTP/2 request, it is queued and dispatched by one of the threads of the Pool of Threads. This avoids the problem where several requests sent over the same connection are processed sequentially.

See below for the differences between HTTP 1.1 and HTTP 2.0:

HTTP 1.1

In traditional HTTP behavior, when making multiple requests over the same connection, the client has to wait for the response of each request before sending the next one. This sequential approach significantly increases the load time of a website's resources. To address this issue, HTTP/1.1 introduced a feature called pipelining, allowing a client to send multiple requests without waiting for the server's responses. The server, in turn, responds to the client in the same order as it received the requests.

While pipelining appeared to be a solution, it faced challenges:

  • Server Ignorance or Response Corruption: Some servers either ignored pipelined requests or corrupted the responses, leading to unreliable communication.
  • Head-of-Line Blocking: The first request in the pipeline could block subsequent requests, causing a delay in the processing of other requests. This phenomenon, known as head-of-line blocking, resulted in slower page loading times.

In an effort to optimize page loading from servers supporting HTTP/1.1, web browsers implemented a workaround: they open six to eight parallel connections to the server, enabling the simultaneous transmission of multiple requests. This parallelism aims to mitigate the issues associated with pipelining and improve overall page load times.

The choice of six to eight parallel connections is based on optimization considerations, involving a trade-off between resource utilization, network efficiency, and avoiding potential bottlenecks.

HTTP 2.0

In response to the constraints encountered in pipelining, HTTP/2 introduced a feature called multiplexing. Multiplexing allows for more efficient communication between the client and server by enabling the concurrent transmission of multiple requests and responses over a single connection.

HTTP/2 utilizes a binary framing mechanism, which means that HTTP messages are broken down into smaller, independent units called frames. These frames can be interleaved and sent over the connection independently of one another. At the receiving end, the frames are reassembled to reconstruct the original HTTP message.
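
As an illustration of this framing, every frame carries a stream identifier in its 9-byte header, so frames belonging to different requests can be interleaved on the wire and reassembled per stream at the receiving end. The record below is only a conceptual sketch of that header layout as defined by RFC 7540; it is not a type exposed by sgcWebSockets.

type
  // Conceptual layout of the 9-byte HTTP/2 frame header (RFC 7540, section 4.1).
  // For illustration only; the on-the-wire field widths are given in the comments.
  THTTP2FrameHeader = record
    Length: Cardinal;    // 24 bits: size of the frame payload
    FrameType: Byte;     // 8 bits: DATA, HEADERS, SETTINGS, ...
    Flags: Byte;         // 8 bits: e.g. END_STREAM, END_HEADERS
    StreamId: Cardinal;  // 1 reserved bit + 31 bits: stream the frame belongs to (0 = connection)
  end;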

This binary framing mechanism is fundamental to achieving multiplexing in HTTP/2. It enables the browser to send multiple requests over the same connection without encountering blocking issues. As a result, browsers like Chrome utilize the same connection ID for HTTP/2 requests, allowing for efficient and uninterrupted communication between the client and server.

In essence, HTTP/2's multiplexing feature, enabled by the binary framing mechanism, enhances the efficiency and speed of data exchange between clients and servers by facilitating concurrent transmission of multiple requests and responses over a single connection.

TsgcWebSocketHTTPServer Component

To improve the performance of the HTTP/2 protocol, every time a new HTTP/2 request is received by the server it is dispatched, by default, to a Pool Of Threads (32 threads by default). This avoids waits when a single connection sends many concurrent requests, which would otherwise have to be processed sequentially in the context of the connection thread.

The behaviour of the Pool Of Threads can be configured with the following properties (a minimal configuration sketch follows the list).

  • HTTP2Options.PoolOfThreads.Enabled: (True by default) dispatch the HTTP/2 requests in the pool of threads instead of the connection thread.
  • HTTP2Options.Threads: (32 by default) the number of threads used to handle the HTTP/2 requests. Set a value according to the number of processors of your server.
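
For reference, here is a minimal configuration sketch. It assumes a TsgcWebSocketHTTPServer instance named Server and uses the property names listed above; verify the exact property paths in your sgcWebSockets version.

// Minimal sketch: dispatch HTTP/2 requests in the Pool Of Threads.
// "Server" is an assumed TsgcWebSocketHTTPServer instance.
Server.HTTP2Options.Enabled := True;               // enable the HTTP/2 protocol (assumed property path)
Server.HTTP2Options.PoolOfThreads.Enabled := True; // dispatch requests in the pool of threads (default)
Server.HTTP2Options.Threads := 32;                 // adjust to the number of processors of your server
Server.Active := True;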

To fine-tune which requests are processed in the Pool Of Threads (for example, because they are time-consuming) and which can be processed in the connection thread, you can use the OnHTTP2BeforeAsyncRequest event. This event is raised before the request is queued in the pool of threads; use the Async parameter to set whether the request is threaded or not.
procedure OnHTTP2BeforeAsyncRequest(Sender: TObject; Connection: TsgcWSConnection;
  const ARequestInfo: TIdHTTPRequestInfo; var Async: Boolean);
begin
  // Process this request directly in the connection thread instead of queueing it in the pool of threads.
  if ARequestInfo.Document = '/time-consuming-request' then
    Async := False;
end;
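
With this handler, requests for '/time-consuming-request' are processed directly in the connection thread, while all other requests keep being queued in the Pool Of Threads.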