Daniel Dennett and David Chalmers sat down to “debate” the possibility of superintelligence. I put “debate” in quotes because this was a pretty congenial discussion.
(Note: there’s a transcript of this video on the Edge site, which might be more time-efficient for some than watching a one-hour video.)
Usually for these types of discussions, I agree more with Dennett, and that’s true to some extent this time, although not as much as I expected. Both Chalmers and Dennett made very intelligent remarks, and I found things to agree and disagree with in both of their positions.
I found Chalmers a little too credulous of the superintelligence idea. Here I agreed more with Dennett. It’s possible in principle but may not be practical. In general, I think we don’t know all the optimization trade-offs that might be necessary to scale up an intelligence.
For example, it’s possible that achieving the massively parallel processing of the human brain at the power levels it consumes (~20 watts) may inevitably require slower processing and something like water-cooled operation. I think it’s extremely unlikely that human minds are the most intelligent minds possible, but the idea that an AI could be thousands of times more intelligent strikes me as a proposition that deserves scrutiny. The physical realities may put limits on that.
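To make the physics concrete, here’s a crude back-of-envelope comparison of energy efficiency as a little Python sketch. The ~20 watt figure is from above; the operations-per-second and chip power numbers are loose order-of-magnitude assumptions plugged in for illustration, not measurements.

```python
# Crude energy-efficiency comparison. All figures are rough,
# order-of-magnitude assumptions, not measurements.

brain_watts = 20          # approximate brain power draw (from above)
brain_ops_per_sec = 1e14  # assumed synaptic events per second (estimates vary widely)

chip_watts = 400          # assumed power draw of a high-end accelerator
chip_ops_per_sec = 1e14   # assumed sustained throughput, also a rough guess

brain_ops_per_joule = brain_ops_per_sec / brain_watts
chip_ops_per_joule = chip_ops_per_sec / chip_watts

print(f"brain: ~{brain_ops_per_joule:.1e} ops per joule")
print(f"chip:  ~{chip_ops_per_joule:.1e} ops per joule")
print(f"ratio: ~{brain_ops_per_joule / chip_ops_per_joule:.0f}x in the brain's favor")
```

Under these made-up numbers the brain comes out roughly 20 times more energy-efficient; pick different estimates and you get anywhere from one to three orders of magnitude. The exact figure isn’t the point, the direction and size of the gap is.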
And I agree more with Dennett on how AI is likely to be used: more as tools than as colleagues. I’m not sure Chalmers completely grasped this point, since the dichotomy he described isn’t how I perceived Dennett’s point, which was that we can have autonomous tools.
That said, I’m often surprised how much I agree with Chalmers when he discusses AI. There was a discussion on AI consciousness, where he made this statement:
There’s some great psychological data on this, on when people are inclined to say a system is conscious and has subjective experience. You show them many cases and you vary, say, the body—whether it’s a metal body or a biological body—and the one factor that tracks this better than anything else is the presence of eyes. If a system has eyes, it’s conscious. If the system doesn’t have eyes, well, all bets are off. The moment we build our AIs and put them in bodies with eyes, it’s going to be nearly irresistible to say they’re conscious, but not to say that AI systems which are not embodied are conscious.
I’m reminded of Todd Feinberg and Jon Mallatt’s thesis that consciousness in animals began with the evolution of eyes. Eyes imply a worldview, some sort of intentionality, exteroceptive awareness. Of course, you can put eyes on a machine that doesn’t have that internal modeling, but then it won’t respond in other ways we’d expect from a conscious entity.
There was also a discussion about mind uploading in which both of them made remarks I largely agreed with. Dennett cautioned that the brain is enormously complex and this shouldn’t be overlooked, and neither philosopher saw it happening anytime soon, as in the next 20 years. In other words, neither buys into the Singularity narrative. All of which fits with my own views.
Interesting thoughts about eyes. When I close mine, I am still very much conscious of the world, through hearing, touch, perhaps even smell and taste, but the world seems very much “external”. When I open my eyes, the difference between myself and the world largely vanishes. I’m in it. I’m a part of it. Possession of eyes must surely change the nature of perception – qualitatively.
I know what you mean.
There was a news story a while back about sensory substitution for blind people. One device was worn on the tongue and, through the taste buds, presented a blind person with a “picture” of what a camera was seeing. Enough so that the person could mountain climb!
And I’ve heard that blind people overall maintain a map of their environment, at least to the extent they can perceive it, even those born blind, and when they access those maps while in an fMRI scanner, their visual centers light up. Which implies those aren’t actually vision centers, but spatial perception centers.
All of which is to say, it might be that when you’re receiving sensory information and updating your environmental maps, you feel connected with the world.
How “external” do blind (from birth) humans perceive the world to be?
Don’t know. But here’s the article I referenced above.
https://www.newyorker.com/magazine/2017/05/15/seeing-with-your-tongue
Helen Keller had a huge “aha” moment when she realized there was an external world. She was both blind and deaf, so she probably took longer to get to that point than someone blind but not deaf.
I have more questions than answers.
Can superintelligence be achieved solely with faster processing and more memory, or is there a qualitative difference in the intelligence?
I tend to think there would need to be a qualitative difference; otherwise we probably would have already achieved it.
If a qualitative difference is necessary, what would it be like and could we create it with our normal intelligence?
Here’s where I draw a blank. If we knew what it would be like, we could probably create it.
We might blunder into it somehow as a by-product of some other effort and then recognize it when we see it.
If we created it or enabled it, I would think at that point it would take off exponentially subject only to some limits on intelligence that we may not understand.
I definitely don’t think simply increasing performance and capacity will eventually cause these systems to “wake up”. It’s not going to come that easily. We’ll have to understand a lot more about the mind than we currently do.
But this isn’t an insurmountable barrier. We’re constantly learning more, both on the cognitive side and the AI technology side. But it’s a marathon rather than a sprint.
On taking off exponentially, what I think a lot of these notions miss is that any computational system has to be a physical one, which means it needs materials, has to deal with cooling and power issues, and faces trade-offs between speed, efficiency, and capacity.
Right now, brains have a capacity and efficiency advantage, while computers have a speed one. The physics may dictate that in order to get the capacity and efficiency of organic brains, we must sacrifice performance, or balance it in some other way. Of course, in an AI system, we could mix components in a hybrid to get the best of both worlds. But eventually we may be able to do something similar with ourselves and technology cybernetically.
Hi, Mike!
The purpose of thinking is to solve problems. What problems can define it as superintelligence? What are these “superproblems”?
Hi Dimitar,
Good question. The ones that spring to my mind are figuring out fusion power, curing diseases like cancer and Alzheimer’s, designing a practical quantum computer, or the system designing better versions of itself.
Of course, all of this assumes the superintelligence can figure these things out with the data we can provide, which seems like a major assumption.
So may we say that collective intelligence is a superintelligence?
(For example, you know that humans working in a group can design a space shuttle. But it is impossible for one human mind.)
That’s an excellent point. Just a little while ago I was reading about the Event Horizon Telescope (the one that took the black hole picture), which is actually an array of telescopes with a sophisticated aggregation system. The overall system is far larger than any actual physical telescope could ever be.
It might be that the quickest way to superintelligence is to find a way to network little intelligences together effectively. If you think about it, the scientific revolution was enabled by the invention of the printing press, which radically increased the amount of information sharing. The internet appears to be doing that again. And all the major supercomputers these days are actually massive clusters of individual machines.
Of course, you can think of the brain itself as a massive parallel cluster of processors. The question is how big a cluster like that can get and still be effective. When does it make sense to federate it into a cluster of clusters?
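One rough way to put a number on “how big can a cluster get and still be effective” is Amdahl’s law: whatever fraction of the work can’t be parallelized eventually caps the overall speedup. Here’s a minimal sketch, where the 95% parallel fraction is just an assumed number for illustration:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
# work that parallelizes and n is the number of processors (or nodes, or minds).

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95  # assume 95% of the work parallelizes; the other 5% is inherently serial
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} workers -> {amdahl_speedup(p, n):5.1f}x speedup")

# Speedup never exceeds 1 / (1 - p) = 20x here, no matter how many workers
# you add -- hence the diminishing returns.
```

For networked intelligences, the serial fraction is roughly the coordination and communication overhead, which is presumably what eventually makes federating into clusters of clusters the better option.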
It’s a million-dollar question, but I have no idea.
I didn’t really get much new out of this discussion. All I got was:
Dennett: Of course we can make superintelligent autonomous AI, but we shouldn’t.
To which Chalmers replies: but if we can, someone will find it useful, so we will.
Thus, I think they are both correct, to an extent. I think the AIs we create to work on Earth will be mostly the tools Dennett imagines, but at some point we will want to send robots to work where we cannot, like Mars or the asteroids. In those places, they may need to be smart and autonomous.
[considering skipping over the singularity reference …. naaaah]
I see that many people, Mike included, seem to associate mind uploading with the singularity. As Dennett points out, a brain is extremely complex, and to duplicate the workings of one particular brain, yours or mine, would be a crazy hard task. Consider making a copy of the Great Pyramid of Giza, matching every stone exactly, every minor error, crack for crack. Now consider just making another pyramid of approximately the same size using approximately the same number of stones in an approximately similar arrangement. We could do the latter in a year or two.
My point is, mind uploading will become a possibility long after the singularity, and by then I kinda doubt there will be much point.
*
That last point sounds ominous. Are you saying that the AIs will take over and there won’t be any humans left to upload? Or am I misreading the pyramid analogy?
Didn’t mean to be ominous. I’m saying the singularity starts soon after we can make general AIs as smart as humans, which will be soon. It will be long after that, long after we can make AIs significantly smarter than humans, that we will be able to make a copy of a specific human, i.e., an upload. By that time, I don’t think there will be a point, and it probably won’t be worth the resources it would take. I think such an upload would be a vanity project, kinda like getting a life-size, full-length painted portrait of yourself. There are some of those around, but not too many.
*
I don’t know. If uploading were to be available before I died, I’d probably go for it. Not that I expect it to be available. But then, I’m skeptical human-level AI is going to be available in the near term. Maybe before the end of the century. Human-level AI has been 20 years away since the 1950s, so I think any timeline predictions should be viewed with extreme skepticism.
If we don’t wipe ourselves out or crash civilization, I do feel comfortable we’ll have both by 2500.
What? Homework again? I’ll put the video (or transcript, more likely) in my short queue, but for now I’ll key off what you wrote.
(I do like both gentlemen, generally respecting their opinions if not always agreeing, so I’ll probably get to it eventually.)
From what you wrote, for one thing, I quite agree the “singularity” (defined as uploading our minds) won’t happen anytime soon. (I think you know I believe it won’t happen, ever. 😀 )
About eyes… was Chalmers talking about our perception of things with eyes, or just about things having eyes? Seems like the former. We perceive things with eyes as (potentially) conscious. I don’t see eyes as necessary or sufficient for consciousness.
Regarding “super-intelligence”… that needs defining.
In some senses, we already have it. Isn’t Google a kind of super-intelligence? You can get an answer to almost anything from it. It “knows” far more than I do.
How about Wikipedia, which also knows a lot and probably has more accurate knowledge? Does anything compare to it?
“Super-intelligence” (an idea I find suspect) seems to suggest “super-thinking” in some fashion, but exactly what fashion?
Does it provide accurate answers, such as any system might, but does so much, much faster? Any calculator would seem to qualify as a super-intelligence, let alone any general computer.
“Super-intelligence” seems to imply “better” answers, but what’s better than accurate? Perhaps we’re talking about “wisdom” — the ability to see the best path among alternatives.
On the one hand, there’s the old-style chess AI that brute-forced the game by looking ahead better than any human could. Are we talking about an AI capable of “looking into the future” to see the best path to take? Capable of balancing myriad inputs better than any human?
In one sense, we have some of that in DL neural nets. But in another, humans are still infinitely better at figuring other things out.
What about a more modern AI, a network, trained to be wiser than any human? But how is it trained? What inputs do we use to make a wise AI?
When I try to actually imagine a “super-intelligence” I have a hard time picturing what that actually is. To me that makes the idea suspect.
I also wonder if it doesn’t turn out that general intelligence is messy. I think part of our success comes from our mistakes and imperfection. I’ve written about how mental noise might give rise to imagination and free will.
Maybe creative general intelligence requires inefficiency and messiness.
It’s said that a truly photographic memory is a serious distraction, and many forms of mental disorder come from the brain’s inability to filter out noise. LSD is one drug that cranks up the gain in the mind.
Maybe we’re in a sweet spot. Maybe there is no “super.”
Ha ha! Yeah, two one hour videos in a week is probably not called for. And I have to admit this one isn’t as interesting as the one I shared Sunday. I actually read the transcript yesterday instead of watching it. If I’d had to actually watch it, I probably couldn’t have done so until the weekend.
Chalmers was definitely talking about how eyes affect our perception of consciousness. Of course, I think consciousness only exists in our perceptions, so for me the line is blurred. And I was relating it to the intuition that made F&M’s thesis make sense. (I say that like I don’t think it’s right, but for what it actually shows, I do think it has a lot going for it.)
Good point about the intelligences we already have. They’re narrow intelligences, and it may be that the very narrowness of their scope is a necessary concession for them to do what they do.
I do suspect human minds are in a particular sweet spot between capacity, efficiency, and performance. It might be that the overall combination can be improved on, but the idea that it will be several orders of magnitude may just be us looking for the gods. (It’s interesting to me that AIs in many sci-fi stories seem to act like the gods in some ancient myths.)
On messiness, another way of saying that is that sapient level intelligence has to be able to deal with ambiguity and probabilities, with things that can’t be known with certainty. Right now computer systems fumble on that kind of stuff. But maybe the fact is we need to sacrifice precision for fuzzy logic. Doing so may enable breakthroughs, but at a cost: many of the limitations natural intelligence already has to live with.
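To make that contrast concrete, here’s a toy sketch of the difference between a crisp rule and a decision that works with probabilities and costs. Everything in it (the threshold, the probabilities, the costs) is a made-up illustrative number, not anyone’s actual algorithm:

```python
# Toy contrast between a crisp rule and a probabilistic decision.
# All numbers (threshold, probabilities, costs) are made up for illustration.

def crisp_decision(sensor_reading: float) -> bool:
    # Precise but brittle: a hard threshold with no notion of uncertainty.
    return sensor_reading > 0.5

def probabilistic_decision(p_obstacle: float,
                           cost_false_alarm: float = 1.0,
                           cost_missed_obstacle: float = 10.0) -> bool:
    # Acts on expected cost rather than certainty: brake if the expected cost
    # of ignoring a possible obstacle exceeds the cost of a false alarm.
    expected_cost_ignore = p_obstacle * cost_missed_obstacle
    expected_cost_brake = (1.0 - p_obstacle) * cost_false_alarm
    return expected_cost_ignore > expected_cost_brake

print(crisp_decision(0.4))           # False: just under the hard threshold
print(probabilistic_decision(0.4))   # True: 0.4 * 10 > 0.6 * 1, so it hedges
```

The probabilistic version gives up the clean yes/no of the threshold and will sometimes brake for nothing, which is exactly the sort of limitation natural intelligence already lives with.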
“On messiness, another way of saying that is that sapient level intelligence has to be able to deal with ambiguity and probabilities, with things that can’t be known with certainty.”
That, and (I believe) it may be the source of our imagination and creativity, perhaps even true free will.
“It might be that the overall combination can be improved on, but the idea that it will be several orders of magnitude may just be us looking for the gods.”
Or not
Have you ever noticed that childbirth is one of the leading causes of women’s death, and that this was even more true during the vast majority of human evolution? Do you think our large brains, and thus heads, might have something to do with that?
Don’t you think it would be a massive coincidence if “biggest brain that can add more useful thinking ability” and “biggest brain that can fit through the birth canal” were approximately the same size?
That might be true if human heads came out of the womb at their final size. Human evolution solved the head-size/birth-canal issue by having human babies be born, by the standards of the animal kingdom, very premature.
And it’s worth noting that size alone isn’t the determining factor on intelligence. Elephant and whale brains are much larger than human ones, yet they don’t show commensurately more intelligence. In the pilot whale’s case, even its neocortex is larger than ours. Intelligence is as much about the programming as the capacity.
None of that is to say that physiological limitations might not be an issue. But it’s not the slam-dunk answer it appears to be.
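For what it’s worth, the commonly cited neuron-count estimates make the same point numerically. These are rough published ballpark figures, not precise measurements, so treat the snippet below as illustrative only:

```python
# Rough, commonly cited neuron-count estimates (ballpark figures only).
approx_total_neurons = {
    "honeybee":         1e6,      # ~1 million
    "human":            8.6e10,   # ~86 billion
    "African elephant": 2.57e11,  # ~257 billion, most of them in the cerebellum
}

for species, count in sorted(approx_total_neurons.items(), key=lambda kv: kv[1]):
    print(f"{species:>16}: ~{count:.2e} neurons")

# The elephant has roughly three times as many neurons as a human, and the
# pilot whale's neocortex reportedly holds about twice as many neurons as ours,
# yet neither shows commensurately greater intelligence -- raw count isn't the story.
```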
I think it might be useful to separate “raw intelligence” (the productive horsepower of the brain mushware) from intelligence as a personality attribute, or a personal strategy or tactic for adapting to social or environmental settings. The second case, I would think, might include categories of people who possess above-average intelligence but feel compelled to hide it behind a mask of below-average intelligence in order to appear less threatening to alpha-types. In Cambodia during the 1970s, under the Khmer Rouge, citizens who wore glasses were summarily executed because they were thought to be intellectuals. During the 1950s, many brilliant women hid their native intelligence in order to “get their man”.
I remember writing an essay for a psych course on whether Intelligence was a function of Personality or Personality was a function of Intelligence. I didn’t get a very good grade on my essay, and I’ve forgotten where I stored it, but I’ve continued thinking about the subject.
Of course, the brain mushware is responsible for perceptual, conceptual, and motor logistics, but I’m not so sure that physical size or cell counts for brains necessarily correlate with intelligence. There’s a lot of intelligence in a bee’s brain, and there wasn’t much intelligence in Homo sapiens brains 70,000 years ago. Wales have bigger brains than humans have.
I meant “whales”, not “Wales”. 🙂
Almost any time I discuss intelligence within the context of minds or AI, I don’t mean either raw computational horsepower or the personality attribute. I usually mean the ability to make accurate predictions and solve problems.
Your point about people hiding their intelligence reminded me of my teenage days, where I largely acted dumber than I was in order to keep friends and get women. The irony is that the friends and women it got me, in retrospect, weren’t the ones to have. Showing intelligence probably would have gotten the better picks, but I didn’t have that specific type of intelligence at 15.
On brain size and intelligence, it’s definitely a complicated subject. Bees get by with about a million neurons, but their tiny bodies don’t require much overhead. Elephants and whales have larger brains than we do, but a lot of the processing of those brains has to deal with their larger bodies.
The thing about humans is that a lot of our intelligence is movement and dexterity intelligence. Human civilization depends on both our brains and our dexterity, on both our heads and our hands. Whales, even if they had the intelligence, couldn’t build a civilization, because they don’t have the appendages to manipulate their environment.