A few months ago at a Cloudflare conference, a lot of audience questions revolved around AI usage and security concerns. One participant shared an incident where different teams in their company started using ChatGPT and other generative AI tools without informing the IT department. I'm also aware that some employees submit sensitive information in AI prompts without considering the risks. How are you all handling this issue in your organizations? Do you think it's a significant problem right now? I know this mainly relates to generative AI, but it seems even more complicated when discussing AI APIs or developing in-house AI models, especially when sensitive data is processed outside the company.
1 Answer
It's really becoming a data security challenge more than a network one. You can block access to these tools at the gateway level, but that only goes so far; employees will route around blocks, and network controls can't tell a harmless prompt from one containing customer data. The real foundation is data labeling, classification, and solid policies, backed by user training. If you're using Microsoft tools, check out their resources on data governance (Microsoft Purview covers classification and labeling).
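To make the classification angle concrete, here's a minimal sketch of a pre-submission prompt check: scan outgoing text for patterns that commonly indicate sensitive data before it ever reaches an external AI API. The pattern set and labels below are illustrative assumptions, not a complete DLP ruleset, and a real deployment would enforce this at a proxy or gateway rather than in client code.

```python
import re

# Illustrative patterns only -- a real DLP policy would be far broader
# and driven by your organization's data classification labels.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_sensitive(prompt: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the prompt."""
    return [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def gate_prompt(prompt: str) -> bool:
    """Allow the prompt through only if no sensitive patterns match."""
    return not find_sensitive(prompt)
```

Regex matching like this catches obvious leaks but misses context-dependent secrets (project names, unreleased figures), which is why classification and policy have to come first; the tooling just enforces what the labels already say.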
Good advice, thanks!