I'm in the process of building an eCommerce system with a few different servers, including one for load balancing that will also function as a reverse proxy. I have several Node.js servers for the web app, and a separate server for my database. My main question is about sizing the load balancer: should it have similar power to my Node.js servers, or can it be less powerful since it mainly routes requests? Alternatively, should it actually be more powerful to handle traffic spikes effectively? Right now, I'm starting with just one Node.js server but plan to scale up as my site grows, especially during busy times like Black Friday or Christmas. Currently, I'm using Hetzner's CAX21 cloud server for the Node.js app, which has 4 vCPU and 8 GB RAM. Would it be acceptable to use a cheaper CAX11 server for my load balancer instead, which offers 2 vCPU and 4 GB RAM?
4 Answers
You might want to consider that your load balancer doesn't need a ton of power: it mainly routes connections, while your web app servers carry the heavy load. However, a single load balancer is itself a single point of failure, so think about a standby instance or a high-availability pair, especially if you expect significant traffic.
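To make the "mostly routing" point concrete: a reverse-proxy config for this setup is only a few lines. The question doesn't say which software the load balancer runs, so nginx here is an assumption, and all addresses are hypothetical:

```nginx
# Hypothetical nginx reverse-proxy config; nginx itself and the
# addresses below are assumptions, not from the original question.
upstream node_app {
    # Single Node.js server for now; uncomment more as you scale out.
    server 10.0.0.2:3000;
    # server 10.0.0.3:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_app;
        # Preserve the original host and client IP for the app servers.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With nothing but proxying like this, the load balancer is network-bound rather than CPU-bound, which is why a smaller instance is usually fine.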
Remember, there’s no perfect answer here. Each setup is unique based on traffic patterns and usage. Start small, monitor how the system behaves under load, and then make adjustments as needed. That's usually the best approach.
Exactly! I plan to test thoroughly and adjust accordingly.
Your load balancer's specs depend on what you ask it to do. If it terminates TLS for your web servers, it will need noticeably more CPU than pure routing, and heavy logging or monitoring adds overhead too. But if caching is done on your app servers, a lower-spec load balancer like the CAX11 could work fine.
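If the load balancer does end up terminating TLS, that's one extra server block on the proxy. A minimal sketch, again assuming nginx and with hypothetical domain, certificate paths, and backend address:

```nginx
server {
    listen 443 ssl;
    server_name shop.example.com;            # hypothetical domain

    # Hypothetical certificate paths (e.g. from Let's Encrypt)
    ssl_certificate     /etc/ssl/shop/fullchain.pem;
    ssl_certificate_key /etc/ssl/shop/privkey.pem;

    location / {
        # Plain HTTP to the backend; TLS is terminated here,
        # which is where the extra CPU cost comes from.
        proxy_pass http://10.0.0.2:3000;     # hypothetical Node.js server
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

TLS handshakes are the CPU-heavy part; established connections are cheap. So watch CPU during connection spikes specifically, not just under steady load.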
Good point! I'll keep an eye on performance as I scale.
Honestly, make sure to cluster your servers in a high-availability configuration. Using tools like Proxmox or Kubernetes can really help distribute the load properly without overloading any single server. This way you can handle spikes better without constantly worrying about performance.
Great suggestion! I'm looking into Kubernetes for that very reason—it seems like a solid choice for scaling.
Agreed! Redundancy is key. If your load balancer goes down, you're in trouble. Maybe start with a basic setup and scale from there.
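One common pattern for that redundancy is a second load balancer sharing a floating IP via VRRP, e.g. with keepalived. A minimal sketch, with the interface name and addresses being hypothetical:

```
# /etc/keepalived/keepalived.conf on the primary load balancer
vrrp_instance VI_1 {
    state MASTER            # set to BACKUP on the second node
    interface eth0          # hypothetical interface name
    virtual_router_id 51
    priority 100            # lower value on the backup node
    advert_int 1
    virtual_ipaddress {
        203.0.113.10/24     # hypothetical floating IP clients connect to
    }
}
```

Note that on cloud providers like Hetzner, moving a Floating IP between servers typically goes through the provider's API (for example from a keepalived notify script) rather than plain VRRP on the wire.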