Daniel Dennett and David Chalmers sat down to “debate” the possibility of superintelligence. I put “debate” in quotes because this was a pretty congenial discussion.
(Note: there’s a transcript of this video on the Edge site, which might be more time-efficient for some than watching a one-hour video.)
Usually for these types of discussions I agree more with Dennett, and that’s true to some extent this time, although not as much as I expected. Both Chalmers and Dennett made very intelligent remarks, and I found things to agree and disagree with in each of their positions.
I found Chalmers a little too credulous of the superintelligence idea; here I agreed more with Dennett. Superintelligence is possible in principle but may not be practical. In general, I think we don’t know all the optimization trade-offs that might be necessary to scale up an intelligence.
For example, it’s possible that achieving the massively parallel processing of the human brain at the power levels it consumes (~20 watts) may inevitably require slower processing and water-cooled operation. I think it’s extremely unlikely that human minds are the most intelligent minds possible, but the idea that an AI can be thousands of times more intelligent strikes me as a proposition that deserves scrutiny. The physical realities may put limits on that.
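To put some very rough numbers on that intuition, here’s a back-of-envelope sketch. The operation counts and wattages below are ballpark figures I’m assuming for illustration, not anything from the discussion, and they could easily be off by an order of magnitude or two.

```python
# Back-of-envelope energy efficiency comparison: human brain vs. a modern GPU.
# All figures are rough, commonly cited ballpark assumptions, not measurements.

BRAIN_WATTS = 20             # frequently cited power draw of the human brain
BRAIN_OPS_PER_SEC = 1e15     # very rough estimate of synaptic events per second

GPU_WATTS = 400              # typical draw of a current datacenter GPU (assumed)
GPU_OPS_PER_SEC = 1e14       # ~100 TFLOPS, a round figure for such a GPU

brain_ops_per_joule = BRAIN_OPS_PER_SEC / BRAIN_WATTS   # ~5e13
gpu_ops_per_joule = GPU_OPS_PER_SEC / GPU_WATTS         # ~2.5e11

print(f"Brain: {brain_ops_per_joule:.1e} ops/joule")
print(f"GPU:   {gpu_ops_per_joule:.1e} ops/joule")
print(f"Brain/GPU efficiency ratio: {brain_ops_per_joule / gpu_ops_per_joule:.0f}x")
```

Even with generous slack in these numbers, a gap of a couple orders of magnitude in energy efficiency suggests that scaling intelligence up isn’t just a matter of adding hardware; power and cooling are real constraints.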
And I agree more with Dennett on how AI is likely to be used: more as tools than as colleagues. I’m not sure Chalmers completely grasped this point, since the dichotomy he described isn’t how I understood Dennett’s point, namely that we can have tools that are autonomous.
That said, I’m often surprised how much I agree with Chalmers when he discusses AI. There was a discussion on AI consciousness, where he made this statement:
There’s some great psychological data on this, on when people are inclined to say a system is conscious and has subjective experience. You show them many cases and you vary, say, the body (whether it’s a metal body or a biological body), and the one factor that tracks this better than anything else is the presence of eyes. If a system has eyes, it’s conscious. If the system doesn’t have eyes, well, all bets are off. The moment we build our AIs and put them in bodies with eyes, it’s going to be nearly irresistible to say they’re conscious, but not to say that AI systems which are not embodied have consciousness.
I’m reminded of Todd Feinberg and Jon Mallatt’s thesis that consciousness in animals began with the evolution of eyes. Eyes imply a worldview, some sort of intentionality, exteroceptive awareness. Of course, you can put eyes on a machine that doesn’t have that internal modeling, but then it won’t respond in other ways we’d expect from a conscious entity.
There was also a discussion about mind uploading in which both of them made remarks I largely agreed with. Dennett cautioned that the brain is enormously complex, a complexity that shouldn’t be overlooked, and neither philosopher saw uploading happening anytime soon, as in within the next 20 years. In other words, neither buys into the Singularity narrative. All of which fits with my own views.