I've been experimenting with Claude Pro and noticed it has a larger context window than ChatGPT, which matters for my work with large code snippets (over 500 lines). Every time I try to use ChatGPT with big files, it crashes or stops mid-output, especially in the canvas. It feels like OpenAI needs to improve how their models handle larger code files. I used to be a ChatGPT Plus subscriber, but I switched to Claude because of these limitations. Is there any way OpenAI can fix this? I'd love to return to ChatGPT Plus, but only if the model can handle a larger context the way Claude does!
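Whether a paste fits depends on tokens, not lines, so comparing line counts across models can mislead. A minimal sketch of a pre-check before pasting — note that the ~4 characters-per-token ratio and the context limits in `CONTEXT_LIMITS` are illustrative assumptions, not official figures for any model:

```python
# Rough pre-check: will this file fit in a model's context window?
# ASSUMPTIONS: the ~4 chars/token heuristic and the limits below are
# illustrative placeholders, not official numbers for any real model.

CONTEXT_LIMITS = {
    "model_a": 32_000,    # hypothetical smaller window
    "model_b": 200_000,   # hypothetical larger window
}

def estimate_tokens(text: str) -> int:
    """Crude estimate: English prose and code average roughly 4 chars/token."""
    return len(text) // 4

def fits(text: str, model: str) -> bool:
    """True if the estimated token count is within the model's assumed limit."""
    return estimate_tokens(text) <= CONTEXT_LIMITS[model]

code = "x = 1\n" * 500  # a 500-line file of short lines
print(estimate_tokens(code), fits(code, "model_a"))
```

For a real workflow you'd use a proper tokenizer (e.g. a library that matches the model's actual vocabulary) rather than a character heuristic, but even this rough check shows that a 500-line file of short lines is typically far below modern context limits; mid-output truncation is more often an output-length or UI issue than a context-window one.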
2 Answers
I recently pasted over 1,200 lines into ChatGPT without any issues. But I did wonder if it would stop midway through…
I've been using the o3 model and it's been working fine with 500 lines of code lately. Plus, Codex is now available for Plus subscribers, and it can handle codebases with millions of lines!
Are you sure it didn't stop generating? I thought that's what usually happens with large files.