I'm currently researching the process of breaking down monolithic applications into serverless functions. For those who have experience with this migration, I'd love to know:

– How difficult was it from both a technical and organizational standpoint?
– What were the key benefits you saw?
– Did you encounter any unexpected downsides?
– If you had the chance to do it all over again, what would you approach differently?

I'm particularly curious about aspects like cost changes (pay-per-use vs. provisioned infrastructure), scalability improvements, and how the move affected development speed and maintainability. Any success stories, lessons learned, or even regrets are welcome. Thanks in advance for your insights!
4 Answers
I find that the 'lambdalith' approach can be the best mix of the two: you deploy the whole application as a single serverless function and route requests internally, so you keep the deployment simplicity of a monolith while still getting the elastic scaling and pay-per-use pricing of serverless.
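To make the idea concrete, here's a minimal sketch of a lambdalith in Python: one AWS Lambda handler receives every API Gateway request and dispatches to internal functions, instead of deploying one function per endpoint. The route table and handler functions are hypothetical examples, not a real API.

```python
import json

# Hypothetical internal handlers -- in a real lambdalith these would be
# the same modules your monolith already has.
def get_users(event):
    return {"users": ["alice", "bob"]}

def get_health(event):
    return {"status": "ok"}

# Internal route table: (HTTP method, path) -> handler function.
ROUTES = {
    ("GET", "/users"): get_users,
    ("GET", "/health"): get_health,
}

def handler(event, context=None):
    """Single Lambda entry point for all routes (API Gateway proxy-style event)."""
    key = (event.get("httpMethod"), event.get("path"))
    route = ROUTES.get(key)
    if route is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(route(event))}
```

The trade-off: you get one deployable unit and one cold start to worry about, but you lose per-endpoint scaling and per-endpoint IAM scoping.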
I’ve worked with both approaches, and here’s a tip: avoid changing your existing stack or database as part of the migration itself. Doing both at once usually adds more trouble than it’s worth. Plan the migration carefully to avoid unnecessary headaches and costs.
Serverless is great for low-traffic use cases; it simplifies away a lot of operational headaches. But if you need to scale, you may hit a wall: at high volume, serverless can end up both pricier and slower than provisioned infrastructure, negating the initial benefits. It really comes down to your specific use case.
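The cost crossover is easy to sketch with back-of-envelope arithmetic. The prices and workload numbers below are illustrative assumptions (not current AWS list prices): at low request volume the pay-per-use bill is near zero, while at high volume it can overtake a flat provisioned-server cost.

```python
# Assumed illustrative prices -- check your provider's current pricing.
LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667
LAMBDA_PRICE_PER_REQUEST = 0.0000002
SERVER_MONTHLY_COST = 70.0  # assumed flat cost of an always-on VM

def lambda_monthly_cost(requests_per_month, avg_duration_s=0.2, memory_gb=0.5):
    """Pay-per-use cost: compute (GB-seconds) plus a per-request fee."""
    compute = (requests_per_month * avg_duration_s * memory_gb
               * LAMBDA_PRICE_PER_GB_SECOND)
    return compute + requests_per_month * LAMBDA_PRICE_PER_REQUEST

# Low traffic (100k requests/month): well under the flat server cost.
low_traffic = lambda_monthly_cost(100_000)

# High traffic (100M requests/month): the pay-per-use curve has crossed
# the provisioned cost, which is the "wall" described above.
high_traffic = lambda_monthly_cost(100_000_000)
```

Under these assumed numbers, the low-traffic bill is under a dollar while the high-traffic bill exceeds the flat server cost several times over; where the crossover actually lands depends heavily on your memory size and average duration.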
I've had a different experience! For scaling, I've found serverless to be fantastic, but I agree it can get expensive, so it's best for low to medium traffic.
From my experience, this is really a monolith vs. microservices debate unless you're specifically set on FaaS. Don’t forget about serverless containers (e.g. Fargate on ECS); they might fit your needs better without the complexity of splitting the app into individual functions.

Can you share a practical example of when serverless worked well for you?