I recently subscribed to Claude Pro to help me analyze philosophy books, but I'm running out of my usage limit far faster than I expected. I suspect it has to do with how much previous text and context the conversation retains. Is this what's causing me to blow through my quota so quickly?
7 Answers
Depending on your needs and hardware, local models might be your best bet. I've had lots of success with Mistral Small and DeepSeek. Plus, ChatGPT has more generous usage limits and works well for philosophy topics.
Honestly, I've been feeling that Anthropic isn't treating their Pro users well since releasing Max. You might want to consider switching to another model; Claude can sometimes feel like more of a hassle than it's worth.
If your project doesn't require the most capable model, you might try using Haiku. It's generally cheaper per token, which can stretch your quota further!
Totally! You could try using Gemini for specific info gathering and then ask Claude to elaborate on that. Think of it this way: Claude's like a monk who provides deep insights, and Gemini is more of an assistant to help you prep those questions. Just make sure the questions you save for Claude are the ones that really require its wisdom.
Yes, that's definitely a factor! The more context Claude has to remember from previous messages, the more tokens each new prompt uses. You might want to be selective about what you include in your conversations. Also, consider using a vector database for retrieval through an MCP server if you're on a desktop.
Can you give examples of that vector database approach?
Which MCP would you recommend?
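Not a specific MCP server recommendation, but here's a toy sketch of the retrieval idea those servers wrap: embed your book passages, then send Claude only the top-matching ones instead of the whole history. This version uses crude term-frequency vectors and cosine similarity just to show the mechanics; a real setup would use a proper embedding model and a vector store behind an MCP server.

```python
# Toy retrieval sketch. A real pipeline would swap in embedding
# vectors from a model and a vector database, but the ranking
# step is the same idea.
from collections import Counter
import math

def embed(text):
    """Crude 'embedding': a term-frequency vector over lowercase words."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_passages(query, passages, k=2):
    """Return the k passages most similar to the query."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

passages = [
    "Kant argues that space and time are forms of intuition.",
    "Hume doubts that causation can be observed directly.",
    "Aristotle treats virtue as a habit formed by practice.",
]
print(top_passages("What does Kant say about space and time?", passages, k=1))
```

Then instead of pasting whole chapters, each prompt to Claude carries only the few passages relevant to your question, which keeps the context (and the token bill) small.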
Claude reads the entire conversation context each time, which really eats up your tokens. One workaround is to ask Claude to summarize the conversation so far, then start a fresh chat seeded with that summary. Doing this whenever you get the long-conversation warning helps minimize the cumulative token drain.
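To see why the resent history adds up, here's a back-of-the-envelope sketch. The numbers (500 tokens per exchange, a 300-token summary, compressing every 5 turns) are made-up assumptions purely for illustration: without summarizing, total tokens grow roughly quadratically with the number of turns, because every new prompt resends everything before it.

```python
# Rough model of cumulative token usage. All numbers are
# illustrative assumptions, not Claude's actual accounting.
TOKENS_PER_EXCHANGE = 500  # assumed size of one prompt+reply

def total_without_summary(turns):
    """Each turn resends all prior exchanges plus the new one."""
    return sum(t * TOKENS_PER_EXCHANGE for t in range(1, turns + 1))

def total_with_summary(turns, summary_tokens=300, every=5):
    """Every `every` turns, compress the history down to a short summary."""
    total = 0
    context = 0
    for _ in range(turns):
        context += TOKENS_PER_EXCHANGE
        total += context  # this turn's prompt carries the whole context
        if context >= every * TOKENS_PER_EXCHANGE:
            context = summary_tokens  # restart from the summary
    return total

print(total_without_summary(20))  # quadratic growth
print(total_with_summary(20))     # much flatter
```

Under these assumptions, 20 turns without summarizing costs about 3x what the summarize-and-restart pattern does, and the gap widens the longer the conversation runs.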
For philosophy analysis, you might want to check out NotebookLM from Google. It's a solid alternative and you won't hit limits as easily as with Claude.
Which one did you switch to?