I've noticed a trend of employees pasting sensitive company documents, including contracts and client information, into personal AI tools like ChatGPT without any oversight. It puzzles me why they feel comfortable doing this, especially when their companies already provide sanctioned AI tools. Isn't this a compliance incident waiting to happen? Have others noticed this behavior, and what are the implications?
5 Answers
Most people don’t understand how LLMs work or that using free versions makes them the product, similar to Gmail. We’ve blocked all public LLMs except for our M365 Copilot for security reasons.
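Blocking public LLMs while allowing a sanctioned one usually comes down to an egress rule at the proxy or firewall. A minimal sketch of that kind of check is below; the domain lists are hypothetical examples, not a vetted blocklist.

```python
# Sketch of an egress rule that permits a sanctioned AI host and blocks
# known public LLM frontends. Domain lists are illustrative only.
ALLOWED_AI_DOMAINS = {"copilot.microsoft.com"}          # sanctioned tooling
BLOCKED_AI_DOMAINS = {"chat.openai.com", "claude.ai"}   # public LLM frontends

def egress_allowed(host: str) -> bool:
    """Allow sanctioned AI hosts, block listed public LLMs, pass the rest."""
    if host in ALLOWED_AI_DOMAINS:
        return True
    if host in BLOCKED_AI_DOMAINS:
        return False
    return True  # non-AI traffic is out of scope for this rule

print(egress_allowed("copilot.microsoft.com"))  # True
print(egress_allowed("chat.openai.com"))        # False
```

In practice this lives in the proxy or secure web gateway config rather than application code, but the allow-first, block-second ordering is the same.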
Some organizations also assume their information isn't valuable enough to worry about. In my experience, a lot of people simply don't realize the risks.
Honestly, most people don't care about protecting their company's sensitive information. Unless there are strict consequences and guidelines, they'll keep using these tools without thinking.
I get that, but it baffles me. I need my job for my livelihood, and to me, prioritizing company security seems like common sense.
At the end of the day, many employees just don’t think about the consequences. It's a lack of training or awareness that’s leading to this compliance nightmare, and organizations need to make sure they educate their staff.
You nailed it! A formal training program on the implications of using such tools could make a huge difference, not just policies.
So true! Just recently, I discovered that people often mix personal and company tools without understanding the risks involved.
Companies should really implement stronger data loss prevention (DLP) strategies that restrict sharing sensitive info regardless of destination. If a document shouldn't go to ChatGPT, it shouldn't be possible to email it out either.
Exactly! Setting up these restrictions can help maintain compliance and protect sensitive data.
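The core of a DLP rule like the one described above is a content scan before anything leaves the boundary. Here's a minimal sketch; the patterns are illustrative examples, not a production ruleset.

```python
import re

# Sketch of a pattern-based DLP check. These patterns are hypothetical
# examples of what a ruleset might flag, not a complete policy.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),   # classification marking
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # US SSN-shaped numbers
]

def outbound_allowed(text: str) -> bool:
    """Block the transfer if any sensitive pattern matches."""
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)

print(outbound_allowed("Q3 roadmap draft"))               # True
print(outbound_allowed("CONFIDENTIAL: client contract"))  # False
```

Real DLP products layer fingerprinting and labels on top of pattern matching, but the principle is the same: the check applies to every channel (email, upload, paste), not just one tool.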
This is exactly why I had my entire company sign an AI policy last year. No matter how many technical safeguards you put in place, people will still find loopholes. Violating this policy can lead to serious consequences like termination.
From my training, I believe no technical measures should be enforced before a solid policy is established. Policies guide the implementation of controls and give users a clear understanding of acceptable practices.
It's all about accountability. Policies also need to be revisited regularly; the moment employees sign, the document is already starting to go out of date.

Same here! I find Copilot frustrating because it often misses the mark when I need answers, leading some to think they can just use ChatGPT instead.