I've noticed that R1 and V3 are open-source alternatives to GPT-4o and o1, and they seem like they could be profitable commercial products. This makes me wonder: how can DeepSeek spend millions on training a language model and then offer the weights for free without any licensing restrictions? Surely there's some way they make money; otherwise, they wouldn't be able to sustain this approach. If anyone has insights into their funding strategy or how profitability works for open-source projects, I'd love to hear more. Thanks, everyone!
1 Answer
One thing to keep in mind is that DeepSeek's parent company, High-Flyer, is a quantitative hedge fund, so part of its revenue is funneled into DeepSeek. It's a clever way to ensure steady backing while venturing into AI.

Exactly! Plus, they released their new R1-0528 just as Nvidia's earnings were coming out. Some even think their AI division might be a side project to play the market. Interestingly, during their open-source week they reported a theoretical 545% cost-profit margin on their models, though that figure assumes all usage were billed at full API rates.