After my Pro subscription ended and I downgraded to Plus, I've noticed a serious drop in the quality of GPT-4o's responses. They're shorter, it no longer follows my custom instructions, and the fun, detailed personality I loved is completely gone. Even my old chats look worse now. Support mentioned that the model's behavior can change over time, but why would Plus users see such a drastic shift? I've tried reinstalling the app and logging back in, but nothing worked. A bunch of people seem to be reporting the same thing after the recent updates. Has anyone else noticed this after changing subscription tiers? Do you think OpenAI is intentionally downgrading the experience for Plus users?
5 Answers
Definitely! I've noticed my coding queries are coming back with less precision than before. It's frustrating to say the least.
Yeah, I’ve noticed it too! Ever since I switched back to Plus, GPT-4o has been acting up. It started showing these weird issues about a week ago. Normally, it's pretty reliable, but now I'm getting mixed results like you mentioned.
That makes sense! I think it can silently route certain tasks to a different model variant like Turbo, which might explain some inconsistencies. If there's model swapping going on behind the scenes, that could affect performance, especially for Plus users.
I've been experiencing similar problems with GPT-4o on Plus. The output seems a lot less consistent than it was just a few weeks back. It makes me wonder whether they're running A/B tests or tuning settings differently across subscription tiers. The timing is definitely suspicious.
Same here! Plus has worked for me recently, but it's been hit or miss. When it wasn't cooperating, I switched between other models just to get through my tasks, but that's been a hassle. It's odd how much it varies from week to week.