I'm looking for advice on how to reliably capture and store very large memory crash dumps (over 100 GB) from a Windows pod running in Azure Kubernetes Service (AKS) after a crash. It's crucial that these dumps are saved without corruption and remain accessible for later download or inspection. I've tried using a premium Azure disk (an `azureDisk` volume), but it hasn't been reliable for this scenario. I'm also considering options like `emptyDir`, although I haven't tested that yet. Any suggestions would be greatly appreciated! Thanks!
3 Answers
What exactly is the use case you're looking at? If you're trying to debug something specific, knowing more about the application could help narrow down some solutions. Dealing with memory dumps can be tricky, especially in containers—so knowing the context is key!
You might want to look into a different storage backend for those large dumps, since the typical Azure disk approach isn't cutting it for you. Azure Files is worth a look for data sets this size: it's SMB-based, so Windows containers can mount it natively, and premium file shares offer better throughput for large sequential writes like crash dumps. Just make sure you benchmark the performance as well! Good luck!
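To make that concrete, here's a minimal sketch of a Windows pod mounting an Azure Files share through the Azure Files CSI driver. This assumes you've already created a file share and a Kubernetes secret holding the storage account credentials; the names `azure-files-secret` and `crashdumps` are placeholders for illustration, not anything AKS creates for you:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dump-collector
spec:
  nodeSelector:
    kubernetes.io/os: windows        # schedule onto a Windows node pool
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2022
    volumeMounts:
    - name: dumps
      mountPath: "C:\\dumps"         # dump writer inside the container targets this path
  volumes:
  - name: dumps
    csi:
      driver: file.csi.azure.com
      volumeAttributes:
        shareName: crashdumps        # hypothetical pre-created Azure Files share
        secretName: azure-files-secret  # hypothetical secret with accountname/accountkey
```

Because the share lives outside the node, the dump survives pod eviction or node failure, and you can download it later straight from the storage account (unlike `emptyDir`, which is node-local and lost with the pod).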
Using Windows pods definitely requires a different approach for handling those massive dumps. If you're consistently getting large dumps, it might also be time to refactor your application for better memory management. That could reduce how often you encounter these crashes in the first place!
I'm focused on debugging my app, which generates these large memory dumps. I just wish it didn't have to be a Windows container; they're such a pain to work with!
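For anyone else in the same spot: the standard way to make Windows write full user-mode dumps on crash is the WER `LocalDumps` registry keys, pointed at whatever path your dump volume is mounted on. A sketch, assuming a mount at `C:\dumps` (note the docs call for `DumpFolder` to be `REG_EXPAND_SZ`; a plain string as below also works in practice, but hedge accordingly):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps]
; where WER writes the dump files
"DumpFolder"="C:\\dumps"
; 2 = full dump (required to get the complete >100 GB process image)
"DumpType"=dword:00000002
```

Apply it in the image (e.g. via `reg import` in the Dockerfile) or at container start, so any crash lands the full dump directly on the durable volume.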