How Are Developers Handling AI Tools That Use Their Code for Training?

Asked By TechieTurtle99

I've been trying out various AI coding assistants lately and noticed that most of them mention they use user inputs for model training. This raises concerns for me, since it could mean that proprietary code, client projects, and internal logic end up in their training sets. It may not matter much for personal projects, but it feels risky for client work, especially since many contracts have clauses against sharing source code with third parties. Are we inadvertently violating those agreements by using these AI assistants?

I've looked for alternatives that don't train on user data, but options seem few and far between. How are others in the developer community dealing with this issue? Is everyone just accepting the risks for the sake of productivity, searching for alternatives, or avoiding AI tools for client work altogether? This feels like a conversation the industry needs to have, yet everyone seems more focused on the immediate benefits of these tools.

1 Answer

Answered By CodeWhisperer42

You're spot on; it's a tricky situation. Most developers I know just use whatever tool is available without thinking too deeply about the implications. They may well be violating contracts, but since enforcement is rare, many seem to treat it as an acceptable risk. It's definitely concerning, though.

DevGuru88 -

Exactly! I know of people who use personal accounts to avoid enterprise oversight and just upload their entire codebase to an AI. The lack of concern is alarming.
