Best Practices for Capturing Large Memory Dumps from Windows Pods in AKS

Asked By CoolGiraffe94

I'm looking for advice on how to reliably capture and store very large full memory crash dumps (over 100 GB) from a Windows pod in Azure Kubernetes Service (AKS) after it crashes. It's crucial that the dumps are written without corruption so that I can download or inspect them later. A bit of context: I have a cluster running on AKS, and I've tried using a premium Azure disk (az-disk), but it hasn't worked reliably for this scenario. I'm also considering options like emptyDir but haven't tested that yet. Any tips or solutions would be greatly appreciated. Thanks!
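For concreteness, here is a rough sketch of the kind of setup I have been trying, assuming the built-in managed-csi-premium storage class; the claim name, pod name, image, mount path, and size below are placeholders rather than my exact manifest:

```yaml
# Rough sketch only (names, image, and sizes are placeholders):
# a Premium SSD PVC sized well above the expected dump, mounted into the Windows pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dump-pvc                            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi-premium     # built-in AKS Premium SSD class
  resources:
    requests:
      storage: 256Gi                        # headroom above the ~100 GB dump
---
apiVersion: v1
kind: Pod
metadata:
  name: my-windows-app                      # placeholder
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: myregistry.azurecr.io/my-app:latest   # placeholder image
      volumeMounts:
        - name: dumps
          mountPath: 'C:\dumps'             # the dump writer is pointed at this path
  volumes:
    - name: dumps
      persistentVolumeClaim:
        claimName: dump-pvc
```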

2 Answers

Answered By CuriousCat23

I have to ask: what exactly are you trying to accomplish here? If it's about debugging your application, knowing which app you're running would help narrow down the best approach. In my experience, dumps this large from Windows pods call for a different strategy than just writing them to pod storage. If the application is misbehaving, consider trimming the data it keeps in memory and maybe refactoring the code instead of relying on 100 GB dumps. Just my two cents!

DebugNinja52 -

Yeah, I totally get that dealing with Windows containers is a pain! I'm currently stuck with them too.

Answered By MemoryDumpMaverick

Using emptyDir for dumps that size probably isn't the best route: emptyDir lives on the node's local disk, the data is deleted as soon as the pod is removed from the node, and a 100 GB+ dump can exhaust the node's ephemeral storage. For persistence, write the dump to a persistent volume and/or push it to object storage (e.g. Azure Blob Storage) once it's fully written; that's generally safer for very large files and less likely to leave you with a truncated or corrupted dump. And definitely make sure the node and pod have enough disk and memory headroom if you're generating dumps that big.
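For example, one way to get that persistence is an Azure Files premium share, which works with Windows pods over SMB, survives pod deletion, and can be downloaded later straight from the storage account. This is only a rough sketch, assuming the built-in azurefile-csi-premium storage class is available in your cluster; the claim name and size are placeholders, and you would mount it into the pod the same way as a disk PVC:

```yaml
# Sketch only: a durable Azure Files (SMB) share as the dump target.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dump-share                    # placeholder name
spec:
  accessModes:
    - ReadWriteMany                   # the share can also be mounted elsewhere to fetch the dump
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 256Gi                  # premium file shares start at 100 GiB; leave headroom
```

One caveat: writing 100 GB+ over SMB can be slow, so another common pattern is to write the dump to a fast local or premium disk first and then have a sidecar or Job copy the finished file to Blob Storage (for example with azcopy) once it's fully written.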
