I'm seeking advice on managing large memory crash dumps (over 100 GB) produced when a Windows pod crashes on Azure Kubernetes Service (AKS). It's crucial that these dumps are stored reliably and without corruption, so they can be downloaded and inspected later. I've explored writing them to a premium Azure managed disk but haven't had consistent success. I'm also considering emptyDir, although I haven't tried that yet. Any suggestions or alternative approaches would be greatly appreciated!
3 Answers
It sounds like a tricky situation! Have you considered an alternative storage backend such as Azure Blob Storage? It is well suited to large, infrequently accessed files like crash dumps and may be easier to manage than a disk. Whichever backend you choose, make sure the dump file is fully written and closed before anything copies or uploads it, otherwise you risk a truncated or corrupted dump.
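If you go the Blob Storage route, one straightforward pattern is to let the dump finish writing to local disk and then push it to a blob container with azcopy (which ships as a single Windows binary). This is only a sketch: the local path, storage account, container name, and SAS token below are all placeholders you would substitute for your own.

```shell
# Upload a completed crash dump to Azure Blob Storage.
# <account>, <SAS-token>, and the paths are placeholders.
azcopy copy "C:\dumps\app.dmp" \
  "https://<account>.blob.core.windows.net/crash-dumps/app.dmp?<SAS-token>"
```

Running the upload as a separate step after the dump is closed also sidesteps the corruption problem: azcopy verifies the transfer, and the blob is only ever a copy of a finished file.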
I get where you're coming from; Windows containers in Kubernetes can be genuinely frustrating. Are you primarily trying to debug application issues? Making sure the dumps are manageable in size before they ever hit storage can save you headaches down the line, for example by capturing minidumps instead of full dumps when a full 100 GB image isn't strictly needed.
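For user-mode crashes, Windows Error Reporting's LocalDumps registry key controls where dumps land, how many are kept, and whether they are minidumps or full dumps. A sketch of the relevant values (the dump folder is a placeholder; set DumpType to 2 if you really do need full dumps):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps]
"DumpFolder"="C:\\dumps"
"DumpType"=dword:00000001
"DumpCount"=dword:00000005
```

DumpType 1 produces minidumps, 2 produces full dumps; DumpCount caps how many old dumps accumulate in the folder, which matters when each one can exceed 100 GB.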
Handling large dumps in Windows pods can definitely be a pain. If reliability is the goal, make sure the dump file is fully flushed and closed before anything tries to read it, and consider refactoring the application to manage resources more efficiently. Avoiding 100 GB dumps in the first place, if you can, is often the best fix. What is the core application you're running, and what specific debug information are you hoping to extract?
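On the emptyDir idea from the question: emptyDir survives container crashes and restarts (the usual crash-dump scenario), but it is node-local ephemeral storage, so the dump is lost if the pod is evicted or the node dies, and the node's disk must have room for it. A minimal sketch of mounting one at the dump folder in a Windows pod (pod name, image, and sizes are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp            # placeholder
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: myregistry.azurecr.io/myapp:latest   # placeholder image
      volumeMounts:
        - name: dumps
          mountPath: C:\dumps   # must match the configured dump folder
  volumes:
    - name: dumps
      emptyDir:
        sizeLimit: 200Gi        # node ephemeral disk must have this headroom
```

After the pod restarts, you can pull the dump off with `kubectl cp` or upload it from inside the container; for durability beyond the node's lifetime, swap the emptyDir for a PersistentVolumeClaim backed by an Azure managed disk.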