I've heard that O3 tends to hallucinate more than other models, presenting fabricated information in a convincing way. I recently canceled my Plus plan since Gemini's plan meets my needs for now, but I'm curious to hear from those who have used O3. What have your experiences been like, particularly regarding hallucinations?
4 Answers
I was once joking with O3 about Glenn Close's roles, and it 'quoted' her as saying she was balancing her karma with animals. When I asked for a source, it couldn't back the quote up. I found it pretty amusing but a bit frustrating too. Just goes to show how convincingly it can invent stuff!
I got quite a kick out of O3 constructing an entire legal argument about OpenAI harvesting user data. It was both wild and a bit eye-opening to see how the model can generate complex narratives. 😂
I've mainly used O3 for coding help, and I've noticed that while the initial responses are spot on, it kind of loses the plot in longer conversations. It can start to forget important details or serve up incomplete code. I wouldn't say I've encountered many hallucinations, but it definitely has some quirks.
I frequently use O3 for research, especially in the humanities. It does really well when plenty of sources are available, pulling up info I wouldn't find on Google. But when sources are thin, it can get overzealous, reading things into them that weren't actually said. It's usually reliable for me, though it has occasionally sent me down a rabbit hole chasing things that no longer exist.

Totally get that! O3 can lead you to some wild conclusions if you're not careful. Always check its sources!