I'm curious about DeepSeek's high hallucination rate. For me it can be beneficial when I'm writing science fiction novels, but it often causes problems, like incorrect descriptions of quantum-mechanics concepts or citations of laws that don't exist. I wonder why DeepSeek seems so inclined toward science-fiction-style invention.
5 Answers
What’s up with some of the posts here that seem anti-DeepSeek? It gets a bit tiring to read non-objective stuff.
In my experience, DeepSeek, especially the R1 model, has one of the lowest hallucination rates around. Compared to ChatGPT and Gemini, it seems much less prone to those weird fabrications.
Honestly, a high hallucination rate isn't inherently good or bad; it's just a characteristic of the model. The trick is to use it wisely: for creative work it can be great, but you've got to double-check anything factual.
I think it really depends on your use case! A tendency to hallucinate can actually help with brainstorming or creative writing, but for anything that requires accuracy, like legal work, it's definitely a downside. Check out this post I wrote about it for more thoughts!
How do you know it has a high hallucination rate if you just made your Reddit account? It seems a bit premature to jump to that conclusion.
That's a good point! It does seem like the right context makes all the difference.