Did I Overload the File Server with My Development Work?

Asked By DevelopmentDude42

I'm working at a mid-sized AEC firm where I'm primarily focused on automation and computational design. Though I don't have formal training in software development, I've gradually transitioned from a traditional role to writing C# tools and scripts. Our firm uses a Linux Samba server to manage over 100TB of data for around 200 users, and I've noticed some issues recently.

While consolidating several smaller scripts and plugins into a larger Visual Studio solution, I encountered repeated outages on our file server. Over the course of about a week, the server experienced daily outages lasting 30 to 40 minutes, which halted user access to files and caused considerable disruption. IT later reported that my user account was holding around **120 simultaneous file handles**, significantly more than the average user.
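To illustrate the kind of mistake that can leak handles from a C# tool against a network share, here's a simplified, hypothetical sketch (the paths and method names are made up, not my actual code):

```csharp
using System;
using System.IO;

class HandleLeakSketch
{
    // Risky: the StreamReader (and the SMB file handle underneath it)
    // is never disposed, so the handle stays open on the server until
    // the garbage collector eventually finalizes the reader.
    static string ReadLeaky(string path)
    {
        var reader = new StreamReader(path);
        return reader.ReadToEnd();
    }

    // Safer: 'using' guarantees the handle is closed the moment the
    // read finishes, so the server-side handle count stays flat.
    static string ReadSafe(string path)
    {
        using var reader = new StreamReader(path);
        return reader.ReadToEnd();
    }

    static void Main()
    {
        // Hypothetical UNC path on an office Samba share.
        Console.WriteLine(ReadSafe(@"\\fileserver\projects\notes.txt"));
    }
}
```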

IT's message implied that I might be responsible for these outages, though they also mentioned that the latest version of Autodesk Revit creates many small files and could be contributing to the problem. I'm genuinely unsure whether my development work caused the outages or whether the server was already under strain. Given that the server is meant to support 200 users, is it reasonable for one developer's activity to bring it down like this? I'd like to know whether I did something wrong or whether this points to a broader problem with the server's capacity management.

5 Answers

Answered By OldSchoolCoder

I remember working in a setup like this years ago, and it was an absolute hassle! If I were in your position, I'd focus on resolving the problem rather than the blame game. IT should prioritize reproducing the outage under controlled conditions so they actually understand what's happening with the server, instead of pointing fingers.

Answered By FileHandleFan

The mention of Revit makes me think there could be lock contention at play: if users are waiting on files locked by one another, everything can appear to freeze. But honestly, 120 handles is pretty light, so something else must be behind outages of that length.
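Whether one tool blocks other users comes down to the sharing mode it opens files with. A quick hypothetical C# illustration (the UNC path is made up):

```csharp
using System;
using System.IO;

class SharingModeSketch
{
    static void Main()
    {
        // FileShare.None takes an exclusive share mode; over SMB, other
        // clients' attempts to open the same file fail with a sharing
        // violation until this handle is disposed.
        using var exclusive = new FileStream(
            @"\\fileserver\projects\model.rvt",  // hypothetical path
            FileMode.Open,
            FileAccess.ReadWrite,
            FileShare.None);

        // FileShare.Read instead would let other users keep reading the
        // file while it is held open.
        Console.WriteLine("Holding exclusive handle; press Enter to release.");
        Console.ReadLine();
    } // handle (and its lock) released when 'exclusive' is disposed
}
```

That's contention rather than a true deadlock, though, and either way it shouldn't take the whole server down for half an hour at a time.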

Answered By DevGuru99

Your project likely isn't the root cause of the server issues; it sounds like IT is searching for a scapegoat rather than addressing the real problems. In addition to moving your work to local storage, I'd recommend pushing for a thorough root-cause analysis. Common culprits are storage I/O limits, SMB locking conflicts, or simply a poorly tuned Samba configuration. This is an infrastructure issue that needs fixing if you want to avoid future outages.
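In the meantime, it's easy to sanity-check your own tooling. A minimal sketch (Windows/.NET; note that HandleCount covers all OS handles the process owns, not just files on the share):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class HandleWatch
{
    static void Main()
    {
        // Poll this process's handle count once a second while a build
        // or script runs; a steady climb suggests leaked handles.
        var self = Process.GetCurrentProcess();
        for (int i = 0; i < 60; i++)
        {
            self.Refresh();  // re-read the process counters
            Console.WriteLine($"{DateTime.Now:T}  handles: {self.HandleCount}");
            Thread.Sleep(1000);
        }
    }
}
```

If the number stays flat while you work, that's good evidence to bring back to IT.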

Answered By CuriousCoder

Honestly, I'd want to know more about the context your scripts run in, but even then, 120 file handles is entirely manageable. This looks more like a gap in the server's capacity planning for concurrent users than something one person's workload could cause.

Answered By TechieTrendsetter

Honestly, 120 file handles shouldn't be enough to crash a Samba server meant to support 200 users. It sounds more like your server was already struggling under its own weight. Typically, a properly set up Samba server can handle thousands of concurrent file handles without failing. Instead of blaming you, IT should consider why there are no connection limits or monitoring in place to catch these issues sooner. Your shift to local development was a smart move, and don't let them make you feel at fault for the server problems; this seems more like an infrastructure failure than anything you did wrong.
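For reference, the relevant knobs live in smb.conf. This is a sketch, not a drop-in config: the parameter names are real Samba options, but the values and share name are illustrative only.

```ini
# Sketch of smb.conf limits (illustrative values, hypothetical share name)

[global]
    # Per-process ceiling on open files; the usual default (16384)
    # already dwarfs one user's ~120 handles.
    max open files = 16384

[projects]
    path = /srv/projects
    # Cap simultaneous connections to this share so a runaway client
    # can't starve everyone else (0 = unlimited, the default).
    max connections = 250
```

Neither limit would be remotely strained by one developer holding 120 handles.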

SysAdminSleuth -

Exactly! Without monitoring they're flying blind on server performance. Even periodically logging the output of smbstatus would have shown which clients were holding which files and locks. They need to step it up.
