I'm currently experimenting with transferring 8 MB data frames over TCP versus HTTP. When I transfer data between processes on localhost using TCP, the average time is around 7-8 ms. However, with HTTP (using raw bytes and no base64 encoding), the transfer takes about 40-50 ms, regardless of whether I use ASP.NET or FastAPI. I'm puzzled because I thought HTTP relied on TCP for transport. What causes this significant difference in speed?
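For reference, the raw-TCP side of my test follows roughly the pattern below. This is a simplified sketch in Python rather than my actual code; the port, the payload contents, and the one-byte ack are just illustrative:

```python
import socket
import threading
import time

PAYLOAD = b"x" * (8 * 1024 * 1024)      # 8 MB frame
HOST, PORT = "127.0.0.1", 50007         # placeholder port

def receiver():
    # Read the whole frame, then send a 1-byte ack so the sender can time
    # the full transfer instead of just the sendall() call.
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            while received < len(PAYLOAD):
                chunk = conn.recv(1 << 20)
                if not chunk:
                    return
                received += len(chunk)
            conn.sendall(b"\x01")

threading.Thread(target=receiver, daemon=True).start()
time.sleep(0.2)                         # give the listener time to start

with socket.create_connection((HOST, PORT)) as sock:
    start = time.perf_counter()
    sock.sendall(PAYLOAD)
    sock.recv(1)                        # wait for the ack
    elapsed = time.perf_counter() - start
    print(f"raw TCP, 8 MB: {elapsed * 1000:.1f} ms")
```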
3 Answers
It's true that HTTP usually runs over TCP, but it adds layers on top of the raw byte stream. Every request carries a request line, headers, and framing that the server has to parse, and the body typically passes through the HTTP library's and the framework's own buffers before your handler ever sees it, so an 8 MB payload gets copied and processed more than once on the way through. That extra work is what shows up as added latency compared to writing bytes straight into a socket. It's a bit like mailing a letter (HTTP, with an envelope, an address, and a sorting system) versus passing a note directly to the person next to you (TCP).
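To make the contrast concrete, here is a rough sketch of the same 8 MB transfer pushed through Python's standard-library HTTP stack instead of a bare socket. This is only an illustration, not your ASP.NET or FastAPI setup, and the port and path are made up:

```python
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

PAYLOAD = b"x" * (8 * 1024 * 1024)
HOST, PORT = "127.0.0.1", 8001          # placeholder port

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"       # enable keep-alive so the connection can be reused

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        self.rfile.read(length)         # the 8 MB body is read (and copied) into user space here
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):       # silence per-request logging
        pass

server = HTTPServer((HOST, PORT), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection(HOST, PORT)
start = time.perf_counter()
conn.request("POST", "/upload", body=PAYLOAD)   # request line + headers + body framing
conn.getresponse().read()                       # wait for the full response
elapsed = time.perf_counter() - start
print(f"HTTP POST, 8 MB: {elapsed * 1000:.1f} ms")

conn.close()
server.shutdown()
```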
HTTP/1.1 is a text-based protocol, so each request includes headers that have to be serialized on one side and parsed on the other, and the web framework adds routing, middleware, and body buffering on top of that. With an 8 MB body the headers themselves are negligible, though, so it's also worth looking at your server configuration and at how you're measuring the times: a cold first request, per-request logging, or not reusing the connection can all add noticeable delays.
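On the measurement side, here is a sketch of what I mean: reuse one connection, throw away the warm-up request, and look at the median. It assumes an endpoint like the server sketched above is already listening; the host, port, and path are placeholders:

```python
import http.client
import statistics
import time

PAYLOAD = b"x" * (8 * 1024 * 1024)

def time_posts(host="127.0.0.1", port=8001, path="/upload", runs=20):
    conn = http.client.HTTPConnection(host, port)   # one keep-alive connection for all runs
    samples_ms = []
    for i in range(runs + 1):
        start = time.perf_counter()
        conn.request("POST", path, body=PAYLOAD)
        conn.getresponse().read()                   # time until the response is fully read
        elapsed = time.perf_counter() - start
        if i > 0:                                   # discard the cold warm-up request
            samples_ms.append(elapsed * 1000)
    conn.close()
    return statistics.median(samples_ms), max(samples_ms)

median_ms, worst_ms = time_posts()
print(f"median {median_ms:.1f} ms, worst {worst_ms:.1f} ms over 20 timed runs")
```

The median is less sensitive to one-off hiccups (GC pauses, scheduler noise) than the mean, which matters when a single run is only a few tens of milliseconds.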
Thanks for the insight! I timed the requests using an HTTP client in .NET. I'll check on the headers and settings.
There are a lot of factors in play here. The OS socket settings, the socket API the framework uses, and how you actually benchmark the transfers all matter. For local transfers, check whether you reuse the connection, whether the first (cold) request is included, and whether you time until the full response is read. The HTTP version can matter too, since HTTP/1.1, HTTP/2, and HTTP/3 frame data differently. Just thought I'd throw that out there!
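As one concrete example of the socket-level knobs I mean, here is a short sketch that flips TCP_NODELAY and prints the kernel buffer sizes on a bare socket. The values and their impact are OS-dependent, and this isn't tied to your specific setup:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Nagle's algorithm coalesces small writes; benchmarks often disable it.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Kernel send/receive buffer sizes also shape loopback throughput.
print("SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

sock.close()
```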
Exactly! Plus, the server has to accept the connection, parse the request, run it through routing and any middleware, and build a response with a status code and headers, which all adds time. It's just not a direct byte transfer the way a raw TCP socket is.