It’s pretty nice to see Kurzgesagt finally continuing its “Big Questions of Life and the Universe” series. I shared the first part on consciousness over a year ago. The series is funded by the Templeton Foundation, which I know many people have issues with, but so far the content has been reasonably scientific.
As it did in the first video, Kurzgesagt makes a distinction between intelligence and consciousness. But if you go back and watch the consciousness video, you’ll see a lot of overlap. I personally think the distinction is an illusion, a holdover from Cartesian dualism. When we let go of those notions, it becomes easy to see consciousness as a type of intelligence, specifically one related to the spatiotemporal and action-selection needs of a physical system trying to achieve its goals.
This video actually showed up when I was contemplating whether to comment on an article in Nature arguing that artificial general intelligence (AGI) will never be achieved, because AI is “not in the world”, by which the author appears to mean it’s not embodied.
The article is typical of a genre that identifies a current difficulty or obstacle, declares it unsolvable in principle, and writes off the entire endeavor as misguided. You’d think the ignominious history of declaring it impossible for technology to reproduce some natural process would discourage these proclamations, but people keep making them.
Anyway, the difficulty is that the most primal processing in current technological systems is symbolic. Humans, by contrast, begin as physical beings in the world, build a world model, and then add symbolic thought on top of it. For us, all symbols ultimately reduce to some form of primal sensory or motor experience. When we say we understand something, we mean we can map it back to that primal world model. The argument is that as long as technological systems begin with the symbols, they won’t have general intelligence.
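To caricature that difference in code (a toy sketch of my own, not anyone’s actual architecture; the dictionaries, feature values, and the understands function are all invented for illustration): in a purely symbolic system, a token like “cup” is defined only by its links to other tokens, while in a grounded system it bottoms out in stored sensorimotor content.

```python
# Toy contrast between ungrounded and grounded symbols.
# Purely illustrative; no real AI system is this simple.

# Ungrounded: "cup" is defined only by its relations to other symbols.
symbolic_kb = {"cup": {"is_a": "container", "used_for": "drinking"}}

# Grounded: "cup" also bottoms out in (stand-in) sensorimotor records.
grounded_kb = {
    "cup": {
        "visual": [0.82, 0.10, 0.33],    # stand-in for learned visual features
        "grasp": {"aperture_cm": 7.5},   # stand-in for a motor program
        "links": {"is_a": "container"},  # symbols still sit on top
    }
}

def understands(kb, symbol):
    """In this toy, 'understanding' means the symbol reduces to some
    non-symbolic (sensory or motor) content in its entry."""
    entry = kb.get(symbol, {})
    return any(key in entry for key in ("visual", "grasp"))

print(understands(symbolic_kb, "cup"))  # False: symbols all the way down
print(understands(grounded_kb, "cup"))  # True: traces back to experience
```

The point of the caricature is just that in the first system, any attempt to cash out a symbol leads only to more symbols.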
The lack of a world model is a real issue, one understood by many AI researchers. But it’s worth noting that we’re already starting to see systems with incipient world models. A self-driving car uses one to avoid pedestrians and other cars, as do other semi-autonomous robots, such as Mars rovers avoiding obstacles. Compared to animal nervous systems, these systems are still very primitive. In spatiotemporal intelligence, they’re probably equivalent to early Cambrian animals (or perhaps late Precambrian ones).
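To make “world model” a bit more concrete, here’s a minimal sketch of the kind of occupancy-grid representation a simple robot might maintain. This is my own toy example, not code from any real autonomy stack; the class, parameters, and update rule are invented for illustration.

```python
# A minimal sketch of a robot's spatial world model: a 2D occupancy
# grid updated from sensor readings and queried for safe movement.

class OccupancyGrid:
    def __init__(self, width, height):
        self.width, self.height = width, height
        # 0.5 = unknown; values near 1 = likely obstacle, near 0 = likely free
        self.cells = [[0.5] * width for _ in range(height)]

    def update(self, x, y, occupied, weight=0.3):
        """Nudge a cell's occupancy estimate toward a sensor reading."""
        target = 1.0 if occupied else 0.0
        self.cells[y][x] += weight * (target - self.cells[y][x])

    def is_safe(self, x, y, threshold=0.4):
        """A planned position is safe if estimated occupancy is low."""
        return self.cells[y][x] < threshold

grid = OccupancyGrid(10, 10)
grid.update(3, 4, occupied=True)   # sensor reports an obstacle at (3, 4)
grid.update(2, 4, occupied=False)  # and free space at (2, 4)
print(grid.is_safe(3, 4))  # False: don't drive here
print(grid.is_safe(2, 4))  # True
```

Even something this crude is a representation of the world rather than of symbols, which is why it seems fair to call these systems incipient world modelers, however primitive.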
A key question is whether a world model can only be built from the same kind of immediate physicality that humans begin with. If we want AI systems to understand the world the way we do, that may be required. Or we may only need it as an intermediate stepping stone, until we understand how this works well enough to start from an alternative foundation. World models built on an alternate foundation may be unimaginably alien, but they may also see past many species-level blind spots that humans contend with.
Only time will tell. But assuming it’s impossible, even in principle, strikes me as spectacularly shortsighted.
Unless of course I’m missing something?