I'm currently working on a fintech platform where we need to optimize a page that calculates data for every combination of two tenors. Currently, we can handle a maximum of 390,000 combinations, but loading the table takes about 8-9 minutes. Our client wants to scale this to around 9.7 million rows loading in under 30 seconds, with the table fully sortable and no infinite scrolling. Our stack is a Next.js front end, a Node.js back end, and a Golang microservice for calculations, with most of the processing done in Golang.
Right now, the loading size for 390k rows amounts to 107MB of JSON, which causes some lag in agGrid. I've come up with a few strategies, such as moving more processes to Golang, upgrading our server for multithreading, and reducing gRPC latency while increasing batch sizes. Other ideas include minimizing response size, switching from agGrid to a lighter table, or, as a last resort, recalculating at off-peak hours and caching results. What suggestions or advice do you have for optimizing this? Any insights would be greatly appreciated!
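To illustrate the "minimizing response size" idea, here's a rough sketch (field names are hypothetical, not our real schema) of encoding rows as positional arrays under a single column header instead of repeating keys on every object, which strips a lot of JSON bulk even before gzip:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Row is a hypothetical calculated row for one tenor pair.
type Row struct {
	TenorA string
	TenorB string
	Value  float64
}

// compactPayload sends the column names once, plus positional arrays,
// so field names are not repeated 390k (or 9.7M) times in the JSON.
type compactPayload struct {
	Columns []string        `json:"columns"`
	Rows    [][]interface{} `json:"rows"`
}

func encodeCompact(rows []Row) ([]byte, error) {
	p := compactPayload{Columns: []string{"tenorA", "tenorB", "value"}}
	p.Rows = make([][]interface{}, 0, len(rows))
	for _, r := range rows {
		p.Rows = append(p.Rows, []interface{}{r.TenorA, r.TenorB, r.Value})
	}
	return json.Marshal(p)
}

func main() {
	rows := []Row{{"1M", "3M", 0.0123}, {"1M", "6M", 0.0145}}
	b, _ := encodeCompact(rows)
	fmt.Println(string(b))
}
```

On the client you'd map the columns back to agGrid fields once, instead of paying for repeated keys on every row.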
5 Answers
Make sure you’re measuring the request times to identify the bottlenecks. Is it from database retrieval, calculation, or data transmission? If calculations are taking too long, consider migrating them to the backend entirely. Also, definitely explore multithreading and pre-computing results. Don't forget to ensure gzip is active and check for redundant JSON fields that could be cut.
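Something as simple as per-stage timing in the Go service would tell you where the minutes are actually going (the stage names below are placeholders, not your real pipeline):

```go
package main

import (
	"log"
	"time"
)

// timed runs one stage and logs its duration, so you can see whether
// retrieval, calculation, or serialization dominates the 8-9 minutes.
func timed(name string, fn func() error) error {
	start := time.Now()
	err := fn()
	log.Printf("stage=%s took=%s err=%v", name, time.Since(start), err)
	return err
}

func main() {
	_ = timed("fetch", func() error { /* load tenor data */ return nil })
	_ = timed("calculate", func() error { /* run pairwise calculations */ return nil })
	_ = timed("serialize", func() error { /* marshal + gzip the response */ return nil })
}
```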
The bottlenecks seem to be in the calculations, which max out the Golang server. Multithreading is definitely going to be essential for future scalability!
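For reference, a minimal sketch of fanning the tenor-pair calculations out across all cores with a worker pool in Go (the pair type and calc function are placeholders, not the actual calculation):

```go
package main

import (
	"runtime"
	"sync"
)

type pair struct{ a, b string }

// calc stands in for the real per-combination calculation.
func calc(p pair) float64 { return 0 }

// calcAll splits the combinations into one chunk per CPU core.
func calcAll(pairs []pair) []float64 {
	out := make([]float64, len(pairs))
	workers := runtime.NumCPU()
	chunk := (len(pairs) + workers - 1) / workers
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		start := w * chunk
		if start >= len(pairs) {
			break
		}
		end := start + chunk
		if end > len(pairs) {
			end = len(pairs)
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			for i := start; i < end; i++ {
				out[i] = calc(pairs[i])
			}
		}(start, end)
	}
	wg.Wait()
	return out
}

func main() {
	_ = calcAll([]pair{{"1M", "3M"}, {"1M", "6M"}})
}
```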
Has your client ever worked with large Excel files? Even Excel tops out at about 1 million rows per sheet. Maybe you could offer a downloadable file instead of trying to shove all that data into a browser window. Just a thought!
That would solve a lot of pain points! But I think they need to see the data live, so this might not work.
Right! They need to watch these metrics in real-time, multiple windows open at once, so an Excel file isn't really a feasible option.
You could cache results at different calculation levels to speed things up. Creating a "meta" table that stores resolved calculations could offload some processing. It might lead to stale data for users, but depending on their needs, that could be acceptable.
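As a rough sketch of the idea (a real "meta" table would live in your database, but the shape is the same), a TTL'd cache keyed by tenor pair keeps stale data bounded; all names here are illustrative:

```go
package main

import (
	"sync"
	"time"
)

type cached struct {
	value    float64
	computed time.Time
}

// CalcCache memoizes resolved calculations per tenor-pair key.
type CalcCache struct {
	mu  sync.RWMutex
	ttl time.Duration
	m   map[string]cached
}

func NewCalcCache(ttl time.Duration) *CalcCache {
	return &CalcCache{ttl: ttl, m: make(map[string]cached)}
}

// Get returns a fresh cached value, or recomputes and stores it.
func (c *CalcCache) Get(key string, compute func() float64) float64 {
	c.mu.RLock()
	e, ok := c.m[key]
	c.mu.RUnlock()
	if ok && time.Since(e.computed) < c.ttl {
		return e.value
	}
	v := compute()
	c.mu.Lock()
	c.m[key] = cached{value: v, computed: time.Now()}
	c.mu.Unlock()
	return v
}

func main() {
	cache := NewCalcCache(30 * time.Second)
	_ = cache.Get("1M/3M", func() float64 { return 0.0123 })
}
```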
Have you tried gzipping your raw JSON? It can reduce transfer sizes by 80-90%. Also, with larger data sets, consider downloading them in chunks rather than all at once; you could even use WebAssembly on the client to turn each chunk of results into rendered rows efficiently. Just make sure your server setup supports gzip for streaming responses.
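For the streaming part, a minimal sketch of a gzip'd, newline-delimited JSON stream from a Go handler (the endpoint, row type, and values are made up) so the browser can start consuming chunks before the full payload arrives:

```go
package main

import (
	"compress/gzip"
	"encoding/json"
	"net/http"
)

type row struct {
	TenorA string  `json:"tenorA"`
	TenorB string  `json:"tenorB"`
	Value  float64 `json:"value"`
}

// rowsHandler streams newline-delimited JSON through a gzip writer,
// so the client never has to buffer one giant body.
func rowsHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/x-ndjson")
	w.Header().Set("Content-Encoding", "gzip")
	gz := gzip.NewWriter(w)
	defer gz.Close()
	enc := json.NewEncoder(gz)
	for _, rw := range []row{{"1M", "3M", 0.0123}, {"1M", "6M", 0.0145}} {
		if err := enc.Encode(rw); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/rows", rowsHandler)
	http.ListenAndServe(":8080", nil)
}
```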
That's a solid suggestion! Gzip is already set up for JSON, but I haven’t implemented gzip for streaming packets yet, so I’ll definitely look into that.
Honestly, your client's expectations might be a bit unrealistic. If 390k rows come to 107MB, then 9.7 million rows works out to roughly 2.5GB of JSON, and loading that in under 30 seconds with full sorting capabilities is a tall order. Have you thought about suggesting they use a tool like Power BI and create an API to access the data instead? With how things are set up, I'd be concerned about what their browser can actually handle.
Totally agree! Even 390k rows in near real-time seems crazy. It’s for live traders, which complicates things because the calculations are based on the latest market data. Caching data long-term feels risky here.
Definitely! They might have to adjust their requirements if they want everything to run smoothly.

I hear you on the JSON size; it's definitely on the radar. But we also need to sort server-side by calculated values, which complicates performance. Gzip is covered for HTTP/2 requests, so streaming should help when we scale, too.
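For what it's worth, the server-side sort on the calculated column I mean looks roughly like this in Go (field names are placeholders); the sort itself is cheap compared with shipping the rows:

```go
package main

import (
	"fmt"
	"sort"
)

type result struct {
	TenorA, TenorB string
	Value          float64 // the calculated column the client sorts on
}

// sortByValue orders results by the calculated value, descending.
func sortByValue(rs []result) {
	sort.Slice(rs, func(i, j int) bool { return rs[i].Value > rs[j].Value })
}

func main() {
	rs := []result{{"1M", "3M", 0.0123}, {"1M", "6M", 0.0145}}
	sortByValue(rs)
	fmt.Println(rs[0].TenorA, rs[0].TenorB, rs[0].Value)
}
```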