How to Safely Capture and Store Large Memory Dumps from Windows Pods in AKS?

Asked By TechyNinja42

I'm seeking advice on managing large memory crash dumps (over 100GB) that result from crashes in a Windows pod on Azure Kubernetes Service (AKS). It's crucial for me to ensure these dumps can be stored reliably without corruption, so they can be downloaded and inspected later. I've explored using a premium Azure disk (az-disk) but haven't had consistent success. I'm also considering using emptyDir, although I haven't tried that yet. Any suggestions or alternative methods would be greatly appreciated!
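For reference, the premium-disk approach described above could be sketched roughly like this. This is a minimal sketch, not a tested setup: `managed-csi-premium` is the built-in AKS premium storage class, while the PVC/pod names, the image, and the `C:\dumps` path are made-up examples.

```yaml
# Sketch: claim a 256 GiB premium managed disk via the built-in AKS
# "managed-csi-premium" storage class, sized above the ~100GB dumps,
# and mount it where the Windows app writes its crash dumps.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: crash-dump-pvc
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: managed-csi-premium
  resources:
    requests:
      storage: 256Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-windows-app
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2022
      volumeMounts:
        - name: dumps
          mountPath: C:\dumps   # dump folder lives on the managed disk
  volumes:
    - name: dumps
      persistentVolumeClaim:
        claimName: crash-dump-pvc
```

One caveat on the `emptyDir` idea: an `emptyDir` volume is deleted when the pod is removed from the node, so a dump written there does not survive pod eviction or rescheduling, which makes it a poor fit for dumps you want to download later.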

3 Answers

Answered By CuriousCoder99

It sounds like a tricky situation! Have you considered offloading the dumps to Azure Blob Storage? It tends to handle files that size better than keeping everything on a single managed disk. Just make sure each dump file is fully written and closed before you upload it, so you don't end up with a partial, corrupt copy. Also, I'm curious what you end up landing on, as it's a complex issue! I won't lie though, I initially misread your post and had a good chuckle at the title, haha!
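For example, once a dump is completely written, copying it out with `azcopy` (Microsoft's data transfer CLI) might look like the following. The storage account, container, and file names are hypothetical, and you'd need a valid SAS token:

```shell
# Hypothetical names: copy a *finished* dump into Azure Blob Storage.
# --put-md5 makes azcopy compute and store an MD5 of the upload, which
# helps detect corruption when the dump is downloaded later.
azcopy copy "C:\dumps\app.dmp" \
  "https://mydumpsacct.blob.core.windows.net/dumps/app.dmp?<SAS-token>" \
  --put-md5
```

Only start the copy after the crashing process has finished writing the dump; uploading a file that is still being written is one common way these large dumps end up corrupt.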

Answered By AppTinkerer88

I get where you're coming from. Honestly, Windows containers in Kubernetes can be really frustrating. Are you primarily trying to debug application issues? Making sure the dumps are a manageable size before they ever hit storage can save you headaches down the line.
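One concrete way to bound dump size on Windows is the documented Windows Error Reporting `LocalDumps` registry settings, which control where dumps go, how large they are, and how many are kept. The `C:\dumps` folder below is an example path; run this inside the container (or bake it into the image):

```powershell
# Configure WER user-mode dumps (documented LocalDumps keys).
# DumpType: 1 = minidump (small), 2 = full dump. DumpCount caps retention.
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpFolder /t REG_EXPAND_SZ /d "C:\dumps" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpType /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpCount /t REG_DWORD /d 3 /f
```

If you genuinely need full memory for the investigation, set `DumpType` to 2, but then the premium-disk sizing in the question becomes the constraint again.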

Answered By DevGuru53

Handling large dumps in Windows pods can definitely be a pain. If you're looking for reliability, make sure each dump is fully flushed to disk before you copy it anywhere, and consider whether the application can be refactored to use memory more efficiently, since a smaller working set means smaller dumps. It's worth avoiding 100GB+ dumps in the first place if you can. What's the core application you're running, and what specific debug information are you hoping to extract?
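If capturing on a trigger (rather than on every crash) is acceptable, Sysinternals ProcDump is another option for keeping dumps small. The process name and output path below are examples:

```powershell
# Wait for MyApp.exe (-w) and, on an unhandled exception (-e), write a
# minidump (-mm) rather than a full-memory dump; -ma would capture the
# full 100GB+ address space instead.
procdump -accepteula -e -mm -w MyApp.exe C:\dumps\MyApp.dmp
```

A minidump is often enough to get a call stack and exception context; escalate to `-ma` only for the specific failures where you need heap contents.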

