Hey everyone! Anthropic just rolled out new Integrations for Claude, including tools like Slack and Zapier, which is exciting. However, I'm still grappling with two significant issues as a Claude Pro user. First, the message limits feel restrictive for heavy document workflows in legal, HR, and compliance. When I'm working with multiple PDFs and Word files, I often hit a wall after about 5-8 messages per session, even though the Help Center says Claude Pro allows around 45 messages every five hours depending on context length. Does anyone have insight into how these limits are actually calculated, and whether adjustments are planned for users doing document-intensive work?
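For what it's worth, the web app doesn't show per-message token counts, but if you also have API access you can at least gauge how much context a document will consume before attaching it. A minimal sketch with the Anthropic Python SDK's token-counting endpoint; the model name and filename here are just examples, substitute whatever you actually use:

```python
# Rough estimate of how many input tokens a document would add to a chat,
# using the Anthropic API's token-counting endpoint. The model name and
# file path below are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def estimate_tokens(document_text: str, model: str = "claude-3-5-sonnet-20241022") -> int:
    """Count the input tokens a document would contribute to a conversation."""
    result = client.messages.count_tokens(
        model=model,
        messages=[{"role": "user", "content": document_text}],
    )
    return result.input_tokens

# Example: check a contract's footprint before attaching it to a session.
with open("contract.txt", encoding="utf-8") as f:
    print(f"~{estimate_tokens(f.read())} input tokens")
```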
Secondly, there's the lack of persistent memory across chats. Every new thread means reintroducing all of the background, which is especially painful for projects that span multiple days. Is persistent memory on the roadmap? Even a simple recall feature would make a big difference in daily usability. Has Anthropic addressed either of these issues recently?
5 Answers
The context window can definitely be limiting. Patience is key here; hopefully Anthropic will increase the window size soon!
To manage token usage, avoid maxing out the context with large documents. An external document store that you search for relevant passages can help (see the sketch below), and if you're using Claude seriously for business, consider the Max subscription, which comes with higher usage limits. Claude Code can also compact long conversations and keep persistent notes across sessions, which works as a rough memory.
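Here's a minimal sketch of the external-document idea: chunk your files locally, score chunks against your question with simple term overlap, and paste only the top hits into Claude instead of whole PDFs. The folder name, chunk size, and plain-text assumption are my own simplifications; a real setup would likely extract text from PDFs first and use embeddings for scoring:

```python
# Score locally stored document chunks against a query and return the most
# relevant ones, so only those excerpts spend context tokens in the chat.
import math
import re
from collections import Counter
from pathlib import Path

def chunk(text: str, size: int = 1500) -> list[str]:
    """Split text into roughly fixed-size chunks on paragraph boundaries."""
    chunks, buf = [], ""
    for para in text.split("\n\n"):
        if buf and len(buf) + len(para) > size:
            chunks.append(buf)
            buf = ""
        buf += para + "\n\n"
    if buf.strip():
        chunks.append(buf)
    return chunks

def score(query: str, passage: str) -> float:
    """Cosine similarity over bag-of-words term counts."""
    q, p = (Counter(re.findall(r"\w+", s.lower())) for s in (query, passage))
    dot = sum(q[t] * p[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in p.values()))
    return dot / norm if norm else 0.0

def top_chunks(query: str, folder: str = "docs", k: int = 3) -> list[str]:
    """Return the k most relevant chunks across all .txt files in a folder."""
    candidates = [c for f in Path(folder).glob("*.txt")
                  for c in chunk(f.read_text(encoding="utf-8"))]
    return sorted(candidates, key=lambda c: score(query, c), reverse=True)[:k]
```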
Are you using Projects? If you upload your supporting documents there as project knowledge, Claude can reference them in every chat without you re-attaching the files each time, which tends to be cheaper on your usage. Keeping individual chats short and leaning on that shared knowledge also helps you avoid hitting the message limits so quickly.
One workaround I use: at the end of each chat, I ask Claude to summarize the key action points into a markdown file, which I then paste into my project knowledge. It's been pretty helpful! If you want to automate it, something like the sketch below works.
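A sketch of scripting that workaround with the Anthropic API rather than doing it by hand; the model name, prompt wording, and output filename are my own choices, so adapt them to your setup:

```python
# Ask Claude to distill a chat transcript into a markdown action-point list
# and save it, ready to paste into project knowledge.
import anthropic

client = anthropic.Anthropic()

def summarize_to_markdown(transcript: str, out_path: str = "action_points.md") -> None:
    """Request a markdown bulleted summary of the transcript and write it to disk."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model; use whatever you have access to
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Summarize the key action points from this conversation "
                       f"as a markdown bulleted list:\n\n{transcript}",
        }],
    )
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(response.content[0].text)
```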
Persistent memory isn't free: whatever gets remembered has to be fed back into the context window, so it eats into the space available for your documents. ChatGPT seems to have a working version, but it's tricky to implement well. Just something to keep in mind!
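To make that trade-off concrete, here's a minimal sketch of a do-it-yourself memory: keep a small rolling file of per-session summaries and prepend it to each new conversation. Recall stays cheap because only the short summaries, not full transcripts, spend context tokens. The file path and entry cap are arbitrary choices:

```python
# A tiny rolling memory: store one-paragraph session summaries in a JSON
# file, capped so the recalled context stays small.
import json
from pathlib import Path

MEMORY_FILE = Path("claude_memory.json")
MAX_ENTRIES = 10  # cap how many tokens the memory is allowed to consume

def remember(session_summary: str) -> None:
    """Append a session summary, dropping the oldest entries past the cap."""
    entries = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    entries.append(session_summary)
    MEMORY_FILE.write_text(json.dumps(entries[-MAX_ENTRIES:], indent=2))

def recall_preamble() -> str:
    """Build a preamble to paste at the start of a new chat."""
    if not MEMORY_FILE.exists():
        return ""
    entries = json.loads(MEMORY_FILE.read_text())
    return "Context from earlier sessions:\n" + "\n".join(f"- {e}" for e in entries)
```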