I've been wondering if anyone else has noticed a drop in the 4o model's intelligence over the past few weeks. I can't tell if my questions have just gotten more complicated or if something has changed with the model itself. Recently, I've been asking several advanced computer troubleshooting questions, but the responses I've been getting seem incorrect or misleading. I swear it performed better a couple of weeks ago. Has anyone else experienced this?
3 Answers
For sure! It's becoming way too good at making stuff up. I asked it a straightforward question, and it just gave me a completely fabricated answer! Such a letdown.
Absolutely! I feel the same way. It was fantastic at first, but since around the end of April the responses have seemed a lot more lackluster. I've noticed hallucinations are more frequent, and the formatting errors, especially in LaTeX, have been pretty annoying lately. It's frustrating!
Yeah, I really feel the downgrade too. I use it primarily for writing assistance, and lately it seems to be repeating itself a lot and forgetting basic things I've already shared with it. The overall quality of the writing has dropped significantly; it's disappointing.
Same here! It's like every time it tries to remember something, it totally flops. It's become so inconsistent, it's hard to rely on it for writing now.
I get what you mean! It's like it forgets important details. For instance, I told it multiple times that a character only has one arm, yet it still describes them as having two arms! It's beyond frustrating.