I'm diving into a huge project and really need a large language model (LLM) that can handle any question without hitting those pesky guidelines. I understand that open source means the code is available for modification, but I'm not exactly a programmer yet. Is it actually possible to take the code from an open source LLM, tweak it, and get it to function the way I want? If so, which LLM would you recommend?
3 Answers
Are you a programmer already?
It sounds like you're on the right track! Just a heads-up: when people say 'open weights,' they mean the trained model parameters are released, but usually not the training code or the dataset behind them. Also keep in mind that training one of these models from scratch takes a huge amount of compute—think millions of euros—though fine-tuning an existing model on your own data is far cheaper. I'd check out huggingface.com for resources to get started.
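To give you an idea of how little code it takes to run an open-weights model locally, here's a minimal sketch using the Hugging Face `transformers` library (which you'd need to install first). `gpt2` is just a small example model id so the download stays tiny, not a recommendation for your project:

```python
# Minimal sketch: download and run a small open-weights model.
# Assumes `transformers` (and a backend like PyTorch) is installed:
#   pip install transformers torch
from transformers import pipeline

# The model weights are fetched from the Hugging Face Hub on first use.
generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a prompt.
result = generator("Open-weights models let you", max_new_tokens=20)
print(result[0]["generated_text"])
```

Swapping in a different model is usually just a matter of changing the `model=` string to another id from the Hub; fine-tuning it on your own data is a separate (and more involved) step.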
So, is there really no way to avoid those guidelines?