I've been wondering whether AI companies have inadvertently slowed progress toward AGI (Artificial General Intelligence) by releasing models that may suppress our natural problem-solving skills. That said, these tools clearly boost productivity. My theory is that advancing AGI requires 1) better data from a more capable population (imagine the insights Nobel Laureates could build on top of ordinary research), and 2) genuinely new approaches to AI rather than just throwing more money at the problem. Even with all the data available on the internet, I suspect we're missing something vital. I'd love to hear whether and how these issues are being tackled. Also, just a heads-up: I'm about to hit the hay, so I might not respond until morning!
2 Answers
I feel you on that! A lot of people suspect that new AI models make us rely on them too much instead of sharpening our own skills, though it would be interesting to see actual studies on this. Your intuition about needing better data from a more capable crowd sounds spot on.
I don't see why AGI isn't achievable with the advancements we've already made. Better training data and improved hardware have often led the way, so there's plenty of room for progress. Refining model designs matters too; both aspects are crucial for the growth of AI.
I hear you on that! I agree model design is important, but I'm curious where we actually stand on the data and compute at our disposal. Could what we have today be enough?
Thanks for the input! Yeah, I think there's definitely a balance to strike between using AI to enhance our learning and letting it do all the thinking for us.