I've noticed that when I'm chatting with ChatGPT, I tend to jump around between topics. It got me thinking: do people with ADHD end up costing more in terms of time and resources because the constant shifting makes it harder to maintain a flow? I find myself having to reconnect and process these topics multiple times, and I'm curious if that adds a layer of complexity. Is it more draining for both of us in these conversations?
3 Answers
Great question! Just to clarify: when you switch topics in a conversation with an AI, it doesn't "lose flow" the way a person might. Every response is generated from the entire conversation so far, processed as one long sequence, like a single train of thought. So switching topics isn't inherently more costly for the model. You might get less focused responses when you jump around, but that's because the relevant context is spread more thinly across the window, not because the model gets confused. Sometimes switching even leads to richer conversations!
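Here's a minimal sketch of what I mean, in plain Python with no real API. `fake_model` is a hypothetical stand-in for any chat-completion call; the point is that the model is stateless between turns, so the client resends the whole history every time, and the model attends over all of it in one pass:

```python
# Minimal sketch: each turn resends the ENTIRE conversation.
# The model has no memory between calls, so "switching topics" just means
# the one long sequence it reads happens to cover more topics.

def fake_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion API call.
    A real model attends over every message below in a single pass."""
    n_words = sum(len(m["content"].split()) for m in messages)
    return f"(reply built from all {len(messages)} messages, ~{n_words} words of context)"

history = []
for user_turn in ["Tell me about black holes.",
                  "Actually, what's a good pasta recipe?",
                  "Back to black holes: what's spaghettification?"]:
    history.append({"role": "user", "content": user_turn})
    reply = fake_model(history)  # the full history goes in every single time
    history.append({"role": "assistant", "content": reply})
    print(f"Turn with {len(history)} messages in context -> {reply}")
```

The practical upshot: processing cost scales with how long the conversation is, not with how many times you change topics along the way.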
I appreciate that perspective! Shifting topics doesn't mean inefficiency; it's just a different way of connecting ideas. Plenty of people do brain dumps and explore multiple threads at once. It's part of a creative thought process, and that's okay!
Right? Everyone processes information differently, and that's valid. It’s about what works best for each individual!
I think it's interesting to flip the script here: people with ADHD may actually provide more value in conversations. The variety and depth from shifting topics can uncover connections that a linear approach might miss—kind of like finding diamonds in the rough! Sure, processing might take more energy in some cases, but it often leads to more enriching discussions. So you're not costing anything; you're adding value.
I totally agree! It’s fascinating how diversity in thought can lead to unique insights. It doesn't just have to be about efficiency; it's about the richness of conversation too.

Exactly! And the OP's exchange actually showed how the AI addressed the original question, something that gets lost when the thread turns into an argument about the LLM's mechanics. Understanding how an AI refers back to earlier context can really change how we interact with it.