I need some advice on a specific Kubernetes situation. I'm looking for a reliable way to capture and store very large full memory crash dumps (over 100 GB) from a Windows pod in Azure Kubernetes Service (AKS) after the pod crashes. It's crucial that these dumps are written without corruption so they can be downloaded or inspected later. Here's some extra context:
- The cluster is running on AKS.
- I've tried a premium Azure disk (az-disk), but it hasn't been reliable for this use case.
- I'm considering emptyDir but haven't tested it yet; a sketch of both volume setups is below.
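
For context, this is roughly the kind of setup I'm comparing. The resource names, image, sizes, and dump path are placeholders rather than my actual config, and the storage class is just the AKS built-in premium one:

```yaml
# Option A: premium managed disk via a PVC (what I tried so far).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: crash-dump-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi-premium    # AKS built-in premium disk class
  resources:
    requests:
      storage: 256Gi                       # headroom above the ~100 GB dump
---
# Option B: node-local emptyDir (untested), mounted at the dump location.
apiVersion: v1
kind: Pod
metadata:
  name: windows-app
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: myregistry.azurecr.io/windows-app:latest   # placeholder image
      volumeMounts:
        - name: dumps
          mountPath: 'C:\CrashDumps'                     # placeholder dump path
  volumes:
    - name: dumps
      emptyDir:
        sizeLimit: 256Gi   # needs matching free space on the node's local disk
      # or, for option A, replace emptyDir with:
      # persistentVolumeClaim:
      #   claimName: crash-dump-pvc
```

Either way, the plan is to point the dump writer inside the container (WER's LocalDumps DumpFolder, or procdump's output path) at that mount, so the full dump lands directly on the volume.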
Any suggestions or ideas would be fantastic. Thanks!
2 Answers
It's definitely a tricky situation with large dumps! Have you considered refactoring your application instead? Sometimes, addressing memory management in your code can prevent those massive dumps from being generated in the first place. Also, using tools that specialize in memory profiling might help you pinpoint issues before they lead to crashes.
I'm curious what you're trying to achieve with those dumps. Are you mainly looking to debug the application? Knowing more about its behavior would help in brainstorming better solutions. Also, I feel your pain with Windows containers in Kubernetes; they can be quite a hassle!