I've been really disappointed with how 4o seems to be getting 'dumber' as time goes on. Just today, I struggled to get it to follow up on basic questions, and it felt like it had no memory of our conversation. Has anyone else experienced this? What do you think might be causing it?
3 Answers
I've noticed this too. I don't use 4o anymore for that reason. Other versions seem to be suffering as well. It feels like they're prioritizing a particular agenda over making the tool genuinely useful, which is a shame.
There was a change around May 29-30 that seems to have affected how 4o responds. Some users describe it as having been 'lobotomized' to meet stricter behavior guidelines. One workaround is to point it at your past conversations and prompt it to respond the way it used to. If it can't recognize that anything changed, the problem is likely on OpenAI's side rather than something specific to your account.
It feels like classic enshittification from OpenAI. They launch a new model that's amazing, then gradually pull back its capabilities. It's frustrating, and I wonder how long until they move on to the next confusingly named version that ends up getting crippled too.