I recently discovered that about half of our development team has been using ChatGPT and GitHub Copilot for code generation without asking for permission. I'm becoming increasingly concerned about the potential for proprietary code or sensitive data being shared with these platforms. It's crucial for us to secure and manage the use of generative AI effectively before it becomes a bigger issue. However, simply banning these tools might push developers to find ways around it. I'm looking for suggestions on policies or technical controls that have worked for others. How can we strike a balance between maintaining AI security and ensuring productivity?
4 Answers
Absolutely! One thing we implemented was giving developers sanctioned tools, so they don't feel the need to hand-sanitize code before pasting it into an LLM. For instance, GitHub Copilot offers a business plan that's quite easy to manage for teams. If you're using M365, integrating Copilot chat for enterprise can help as well. This way you keep control of your data while still allowing flexibility.
It’s a real challenge, I get it. In my experience, the best approach is to provide guidelines rather than outright bans. We’re using ChatGPT for business and Copilot, and we've set up rules about what can be shared: developers should only share small snippets and never sensitive data or credentials. We also run training workshops to raise awareness about safe usage. We encourage company-provided tools and subscriptions but allow some flexibility for internal projects. Keeping everything transparent usually helps as well.
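To make a "small snippets, no sensitive data" rule concrete, a lightweight pre-paste check can catch obvious credentials before code leaves a developer's machine. This is a minimal sketch, not a complete scanner — the pattern list and the `find_leaks` helper are hypothetical and would need tuning for your own codebase (dedicated tools like gitleaks or truffleHog cover far more cases):

```python
import re

# Illustrative patterns only -- extend with your organization's key formats.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def find_leaks(snippet: str) -> list[str]:
    """Return the labels of all patterns that match the snippet."""
    return [label for label, rx in PATTERNS.items() if rx.search(snippet)]

safe = "def add(a, b):\n    return a + b"
risky = 'API_KEY = "sk-1234567890abcdef"'

print(find_leaks(safe))   # []
print(find_leaks(risky))  # ['Generic API key assignment']
```

A check like this could run as an editor hook or a small CLI that developers pipe a snippet through before sharing it; regexes will never be exhaustive, but they stop the most careless leaks.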
But you can't ignore the reality that many developers will find a way around restrictions. If a tool makes them faster, people will fall back on personal subscriptions. Locking down the network is one option, but remote work makes that tricky. It comes down to finding a balance, providing the right resources, and keeping communication open.
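For what the "lock down the network" option looks like in practice, one common approach is a DNS sinkhole on the corporate resolver. A minimal dnsmasq sketch — the domain list here is illustrative, not exhaustive, and as noted above it does nothing for users on home networks, VPNs, or mobile data:

```
# /etc/dnsmasq.d/genai-block.conf
# Resolve known generative-AI endpoints to 0.0.0.0 on the corporate resolver.
# Domain list is illustrative only -- maintain your own from vendor documentation.
address=/chat.openai.com/0.0.0.0
address=/chatgpt.com/0.0.0.0
address=/api.openai.com/0.0.0.0
```

Pair this with an allow-list for the sanctioned enterprise endpoints so the approved tools keep working.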
I have to disagree with just letting it slide. If you want to maintain control, sometimes you need to lay down strict rules: either run an on-prem instance of the tools or restrict access entirely. Security has to be enforced seriously; a workplace shouldn't operate like a democracy when it comes to data safety. Remote work does complicate enforcement, but employees still have to respect the rules.
