I've been experiencing a noticeable decline in the accuracy of GPT-4's responses lately. It seems to be hallucinating more, often contradicting itself in the same chat, and making incorrect statements with unwarranted confidence. Has anyone else felt the same way? Could this be a result of some recent changes?
5 Answers
Yeah, I've seen a drop this week too! Some of my work-related prompts are getting replies that look like it didn't even read the question, haha.
I wasn't using 4o much before, but I'm curious about the timeframe of this change. Did you start noticing it in the last week or so? I recall them updating the system instructions a while back to tone down the sycophantic responses.
A lot of people are talking about this! After the recent updates, many speculate that they quietly downgraded things to minimize sycophancy. It definitely feels like the model isn't operating at its best anymore.
Exactly! It’s like something changed and now it’s way less reliable. I hope they can fix it soon.
I've definitely noticed the personality seems to have changed recently. It feels different from how it used to respond.
I've heard some chatter about possible "optimizations" implemented recently. It seems these adjustments might have reduced accuracy in a bid to manage resources, especially for free users.
Same here! It's frustrating when you expect a thoughtful response and get something off the mark. What kind of prompts are you using?