I know AI tools can really make work easier by saving time on tasks like documentation, log analysis, and script writing. However, a lot of the information I handle day to day includes sensitive data such as credentials, internal IPs, and client configurations, much of it covered by NDAs. I'm really interested in hearing how other sysadmins are approaching this. A few questions:
- Do you remove sensitive information before using tools like ChatGPT?
- Do you avoid using AI for work purposes altogether?
- Are there self-hosted solutions that you prefer?
- Or do you just take a chance and hope your company doesn't find out?
I'm not judging anyone's method; I'm just trying to figure out if there's a more efficient way to handle this situation.
6 Answers
I prefer using Open WebUI with local LLMs. It gives me more control over the data without exposing it to external servers.
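Open WebUI usually sits in front of a local runtime like Ollama, so nothing in a prompt ever leaves the box. If you want to script against the backend instead of using the UI, here's a rough sketch assuming a default Ollama install on port 11434 and a model you've already pulled (the model name is just an example):

```python
import requests

# Sketch only: query a locally hosted model via Ollama's HTTP API.
# Assumes Ollama is listening on its default port (11434) and that the
# model named below has already been pulled -- swap in whatever you run.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # The log excerpt stays on the local machine; no external API is called.
    print(ask_local_llm("Summarize the failed-login pattern in this log: ..."))
```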
We use Copilot, but I always make sure to strip out sensitive details and stick to general queries instead.
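If it helps, this is roughly the kind of scrubbing I do before pasting anything into a prompt. The patterns below are only examples; you'd want to extend them for your own environment (hostnames, customer names, ticket numbers, and so on):

```python
import re

# Example patterns only -- extend for whatever counts as sensitive in your shop.
PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "secret_kv": re.compile(r"(?i)(password|passwd|secret|token|apikey)\s*[=:]\s*\S+"),
}

def redact(text: str) -> str:
    """Replace anything matching a known-sensitive pattern with a placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"<REDACTED-{name.upper()}>", text)
    return text

if __name__ == "__main__":
    sample = "auth failure for admin@corp.example from 10.20.30.40, password=Hunter2"
    print(redact(sample))
    # auth failure for <REDACTED-EMAIL> from <REDACTED-IPV4>, <REDACTED-SECRET_KV>
```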
For us, anything that goes through AI tools is sent for legal review first. We have DLP and DPI tools in place to restrict access to sensitive data. I recently looked into FortiGate's features for this, and they seemed pretty useful.
We mainly use M365 Copilot since it has a reasonable data-sovereignty policy. On top of that, I rely on tools like SentinelOne's Prompt Security and Cyberhaven for data loss prevention. That said, a clear policy and user training can sometimes be just as effective as technical controls.
We use the company-approved AI tool designed for handling sensitive data. It's important to have that kind of solution in place to ensure compliance and security.
Honestly, I don't trust many third-party tools with sensitive data. I'd rather be safe than sorry!
