I'm curious about the future of AI models. Is it feasible to have an AI that runs continuously, generating and revising its predictions every half second instead of sitting idle until a new prompt arrives? I'm imagining a system where the AI continuously updates its predicted tokens based on ever-changing stimuli. Wouldn't this improve accuracy and reduce hallucinations, since the AI would have a constant chance to notice and correct its own mistakes?
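To be concrete, here's a toy sketch of the loop I have in mind. Everything here is illustrative: `predict` is a stand-in for a real model forward pass, and `TICK_SECONDS` / `MAX_CONTEXT` are made-up parameters, not any existing API.

```python
import collections
import time

TICK_SECONDS = 0.5   # re-predict every half second (the cadence I'm imagining)
MAX_CONTEXT = 8      # rolling window of recent stimuli (toy size)

def predict(context):
    """Stand-in for an LLM forward pass over the current context."""
    return f"prediction based on {len(context)} recent stimuli"

def always_on_loop(stimuli_source, ticks):
    """Poll for new stimuli every tick and re-run prediction each time."""
    context = collections.deque(maxlen=MAX_CONTEXT)  # bounded short-term memory
    predictions = []
    for _ in range(ticks):
        new = stimuli_source()      # whatever arrived since the last tick
        context.extend(new)         # old stimuli fall off the left automatically
        predictions.append(predict(context))
        time.sleep(TICK_SECONDS)    # wait for the next tick
    return predictions
```

The point is that the model re-evaluates on every tick even when nothing new arrived, which is where the chance to catch earlier mistakes would come from (and also where the cost explodes).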
2 Answers
Always-on AI seems possible in principle, but we'd need much better hardware and algorithms to handle extremely long token sequences without attention cost and memory use blowing up. It's also crucial to develop a form of long-term memory where the AI can decide what to keep and what to forget, rather than carrying its entire history around. Until the technology progresses, we're stuck with the limitations we have now.
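To illustrate what I mean by "managing what it remembers", here's a minimal sketch of a capacity-bounded memory that keeps only the most important items. The importance score is just a number supplied by the caller here; in a real system it would have to be computed or learned, and the class name is purely illustrative.

```python
import heapq

class MemoryStore:
    """Toy long-term memory: keeps only the `capacity` most important items."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # min-heap of (importance, item) pairs

    def remember(self, item, importance):
        heapq.heappush(self._heap, (importance, item))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)  # forget the least important memory

    def recall(self):
        """Return remembered items, most important first."""
        return [item for _, item in sorted(self._heap, reverse=True)]
```

The hard part isn't the data structure, it's assigning the importance scores; but some bounded, prioritized store like this is what "effective memory management" would have to look like.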
The main hurdle here is definitely the power and compute required. Inference cost scales with every token generated, so if 500 million users were running LLMs in a constant streaming mode simultaneously, the energy drain would be enormous. We're talking serious hardware upgrades before that kind of functionality is feasible at scale.

Are you suggesting we could do this in controlled settings, like labs? It sounds like we could have the capability, just not the means for general use yet.