I work at a mid-sized firm in the AEC (architecture, engineering, construction) sector, focused primarily on automation and computational design. My training isn't formally software-based; I've transitioned from a traditional role into writing tools and scripts in C#. Recently I've been consolidating scattered scripts into a single Visual Studio solution, moving from loose scripts on our network drives to a more structured setup.
Over the course of about a week, our Linux Samba server, which serves roughly 200 users, suffered daily outages of 30-40 minutes during which users couldn't access their files. IT traced the issue to my user account, which had around 120 simultaneous file handles open against the usual ~30. They framed my activity as the likely cause of the outages, while also acknowledging that the latest version of our core software might be a contributing factor.
My question: should a file server fall over because one user holds what seems like a large number of file handles? I've already switched to local development, but I'm trying to work out whether my workflow was at fault or the server simply couldn't handle the load. Is it typical for one developer's usage to take down an entire server?
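For context, here is roughly how per-user handle counts can be tallied on the server side. Samba's `smbstatus` prints one line per open file, but its exact field layout varies by version, so this sketch parses a captured sample rather than live output (the usernames and paths are made up):

```shell
# Sample lines in the shape of smbstatus's open-files listing:
# PID, username, then assorted fields ending in the file path.
sample='7791 alice DENY_NONE RDONLY /srv/projects/tools/Tools.csproj
7791 alice DENY_NONE RDONLY /srv/projects/tools/.git/index
8123 bob   DENY_NONE RDONLY /srv/projects/specs/model.dwg'

# Count open handles per user (field 2); on a live server you would
# pipe `smbstatus` itself through the same awk instead of the sample.
handle_counts=$(printf '%s\n' "$sample" | awk '{c[$2]++} END {for (u in c) print u, c[u]}' | sort)
echo "$handle_counts"
```

Presumably IT did something along these lines when they saw my 120 handles next to everyone else's ~30.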
2 Answers
It sounds like your file server may simply have been under-resourced for that kind of load, and you pushed it past its breaking point. For scale, ~120 open handles is not a large number in itself; a healthy Samba deployment routinely tracks thousands of open files across its clients. But if the server is starved for IOPS (input/output operations per second), a single client opening many files in bursts can degrade it badly for everyone.
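One way to check the IOPS angle is to watch the disk on the Samba host during an outage, e.g. with `iostat -dx 5` from the sysstat package. Column positions differ between sysstat versions, so the snippet below parses a captured sample line (device name and numbers are illustrative) rather than live output:

```shell
# Captured sample in the shape of an `iostat -dx` report:
# here column 2 is reads/s and column 3 is writes/s.
sample='Device            r/s     w/s
sda             850.00  430.00'

# Total IOPS for the device; compare a sustained value against the
# disk's rated limit (a few hundred for a single spinning disk).
total_iops=$(printf '%s\n' "$sample" | awk 'NR==2 {printf "%.0f", $2+$3}')
echo "sda total IOPS: $total_iops"
```

If that number sits near the hardware's ceiling whenever the share stalls, the box was undersized regardless of what any one user did.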
Have you also considered what locks your tooling holds, independent of any code you run? Just having the solution open in Visual Studio keeps handles on things like .csproj files, .vs/ metadata, and Git index files. If the solution lives on the share, those handles (and the oplocks/leases Samba grants on them) persist for your whole session and can hurt performance when others touch the same directories.

Agreed: just opening the solution in VS can take locks that your code never asked for. If those IDE-held locks sit on the share, they could explain some of the chaos even before your scripts were deployed anywhere.
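To confirm that, the server side can show which of a user's open files are IDE/VCS metadata rather than actual project data. `smbstatus -L` lists locked files; as above, the sketch below runs against sample lines in that shape (paths and fields are made up), with a pattern matching the usual Visual Studio and Git suspects:

```shell
# Sample lines in the shape of `smbstatus -L` output:
sample='7791 alice DENY_WRITE RDWR   /srv/projects Tools/Tools.csproj
7791 alice DENY_NONE  RDONLY /srv/projects Tools/.git/index
7791 alice DENY_NONE  RDONLY /srv/projects Tools/.vs/Tools.suo
7791 alice DENY_NONE  RDONLY /srv/projects Tools/Model.dwg'

# Count handles that look like IDE/VCS metadata (.csproj, .suo,
# .git/, .vs/) as opposed to real project files.
meta_handles=$(printf '%s\n' "$sample" | grep -cE '\.(csproj|suo)$|/\.git/|/\.vs/')
echo "IDE/VCS metadata handles: $meta_handles"
```

If most of your 120 handles matched patterns like these, that points at the IDE session itself, not your scripts.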