I'm wondering if the amount of conversation context Gemini Advanced 2.5 Pro actually retains has been drastically reduced. I spent two hours prepping it with a lot of information and rules, but it seems to have forgotten almost everything from the first half of our chat. I know long conversations typically aren't a great fit for most LLMs, but I thought this version handled them well with its supposedly massive context window. Something definitely seems off now, and I'm pretty frustrated. I worked hard to set it up for this session, feeding it URLs to read, and now it's like starting from scratch.

I'm also confused about which model to use going forward for content research and writing, since I pay for ChatGPT, Gemini, and Claude, but they've all felt limited lately. Are these companies focusing more on coding now? After the great performance we saw from 2.5 Pro, I can't believe the change!
5 Answers
Just a heads up, this *is* the OpenAI subreddit.
What did you spend two hours on? I’m curious about what kind of data you were feeding into it!
You know, just the usual info and loads. Giggity.
Why is this post in the OpenAI subreddit? This seems more relevant to Gemini discussions.
Right? Seems like it should be in a Gemini-specific space.
About feeding it URLs: I'm not sure how that works with Gemini 2.5 Pro, because as far as I know it can't actually browse the web on its own, it just pulls results from Google. Can you clarify how you're doing that?
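If the goal is to make sure the model actually sees a page's content rather than just a search snippet, one workaround is to fetch the text yourself and paste it into the prompt. Here's a minimal sketch assuming the google-genai Python SDK plus requests and BeautifulSoup; the URL, API key, and setup are placeholders, not anything OP described, and the same idea applies to just pasting the text into the Gemini Advanced chat.

```python
# Sketch: fetch a page's text yourself and hand it to the model explicitly,
# instead of relying on the chat UI to follow a bare URL.
# Assumes: pip install google-genai requests beautifulsoup4, and a Gemini API key.
import requests
from bs4 import BeautifulSoup
from google import genai

def page_text(url: str) -> str:
    # Download the page and strip the HTML down to visible text.
    html = requests.get(url, timeout=30).text
    return BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

url = "https://example.com/article"  # hypothetical URL for illustration
prompt = (
    "Use the following page as background for my content research:\n\n"
    + page_text(url)
)

# gemini-2.5-pro is the model this thread is about; swap in whatever you have access to.
response = client.models.generate_content(model="gemini-2.5-pro", contents=prompt)
print(response.text)
```

That way, whatever the model "read" is pinned in the prompt itself, so a URL silently not being fetched can at least be ruled out when context seems to go missing.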
From what I can tell, the context window itself hasn't changed, but there are reports of recall over long conversations degrading, with some users estimating a drop of around 30%. So you can still feed it the same amount of data; it just seems to have more trouble keeping track of all of it as the conversation goes on.
So, basically, it can still take in lots of info, but it might struggle to recall all of it during the chat? That’s frustrating!
Exactly, sounds like it might lose track of the conversation as it goes on.
Yeah, I’d love to hear more about those 'loads' too!