We're a small to medium-sized business and we're trying to stay at the forefront of technology. However, I've noticed that some of our sales staff, who have manager-approved paid ChatGPT accounts, are still using them instead of our officially sanctioned Microsoft Copilot licenses, which integrate GPT-5. Our policy clearly states that no business information should be entered into public AI platforms, and the only AI tools they should be using are the Microsoft licenses we provide. I spoke to the team about this, but I'm skeptical they'll actually stop using their personal GPT accounts since they're used to those. How can I effectively manage this situation without outright banning the free version of GPT?
5 Answers
Blocking the domains associated with unauthorized AI tools definitely helps: if employees can't reach ChatGPT from the work network, they can't use it there. Just don't rely solely on the tech, since it can backfire once staff find workarounds like personal phones on mobile data.
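If you do go the blocking route, a hosts-style or DNS-sinkhole list is the simplest thing most SMB firewalls and DNS filters will accept. Here's a minimal Python sketch that writes one out; the domain list and the 0.0.0.0 sinkhole format are illustrative assumptions, so adapt the output to whatever your filtering layer actually imports.

```python
# Minimal sketch: emit a hosts-style sinkhole blocklist for public GenAI endpoints.
# The domain list below is illustrative, not exhaustive -- adjust it to your environment
# and point the output at whatever your DNS filter or firewall actually consumes.

BLOCKED_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
]

SINKHOLE_IP = "0.0.0.0"  # resolve blocked names to a dead address


def build_blocklist(domains: list[str]) -> str:
    """Return hosts-file style lines mapping each domain to the sinkhole IP."""
    lines = [f"{SINKHOLE_IP} {domain}" for domain in sorted(set(domains))]
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    with open("genai-blocklist.txt", "w", encoding="utf-8") as fh:
        fh.write(build_blocklist(BLOCKED_DOMAINS))
    print(f"Wrote {len(BLOCKED_DOMAINS)} entries to genai-blocklist.txt")
```

Most DNS sinkhole tools and many firewall blocklist imports accept this hosts-style format, but check what yours expects before pushing it out.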
To put it simply, you can't rely solely on technology to solve this problem. The first step is to revisit your official policy—what are the consequences for employees not following it? If there are no repercussions, people will naturally gravitate to the easier option. You might also want to collaborate with the sales manager to stress the importance of using designated tools. If management doesn't step in, it could be tough to enforce compliance.
The most straightforward solution is making sure everyone is clear on the expectations and on the fallout for ignoring the policy. A group session or some training on the approved tools might help, too.
Don't forget, this is really a people management issue, not a tech issue. You need strong buy-in from management to ensure that policies are enforced. HR should also communicate with employees who aren't following the rules so they understand the seriousness of the situation.
Exactly! It's about creating awareness. Regular reminders about the consequences do help, as long as management stays proactive about following through.
One potential solution is to combine Data Loss Prevention (DLP) and Conditional Access policies; together they can stop users from uploading sensitive information to platforms like ChatGPT. Check out this resource for more info on setting it up: [blog.admindroid.com](https://blog.admindroid.com/detect-shadow-ai-usage-and-protect-internet-access-with-microsoft-entra-suite/#Prevent%20uploading%20sensitive%20data%20to%20GenAI%20with%20Netskope%20One%20Advanced%20SSE).
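Before layering anything new on, it can help to see what Conditional Access policies are already in place. Below is a rough Python sketch that lists them through the Microsoft Graph endpoint `GET /v1.0/identity/conditionalAccess/policies`; it assumes you've already obtained a Graph access token with the Policy.Read.All permission (via MSAL, the Azure CLI, or however you normally authenticate) and exported it in a GRAPH_TOKEN environment variable, which is just a placeholder name.

```python
# Rough sketch: list existing Conditional Access policies via Microsoft Graph.
# Assumes an access token with Policy.Read.All is already available in the
# GRAPH_TOKEN environment variable (the variable name is just a placeholder).

import os

import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"


def list_conditional_access_policies(token: str) -> list[dict]:
    """Fetch all Conditional Access policies, following paging if present."""
    headers = {"Authorization": f"Bearer {token}"}
    policies = []
    url = GRAPH_URL
    while url:  # follow @odata.nextLink until the listing is exhausted
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        policies.extend(data.get("value", []))
        url = data.get("@odata.nextLink")
    return policies


if __name__ == "__main__":
    token = os.environ["GRAPH_TOKEN"]
    for policy in list_conditional_access_policies(token):
        print(f"{policy.get('displayName')}: state={policy.get('state')}")
```

The actual upload blocking for sites like ChatGPT comes from the DLP and Entra Internet Access side covered in the linked article; this is just a quick way to audit where your tenant currently stands.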
Absolutely! Our IT policy classes entering company data into unauthorized AI tools as gross misconduct that can lead to termination. Generic, non-sensitive queries are fine elsewhere, but sensitive data has to stay out of public tools.