The barrier of meaning

In the post on the Chinese room, while concluding that Searle’s overall thesis isn’t demonstrated, I noted that if he had restricted himself to a more limited assertion, he might have had a point: that the Turing test doesn’t guarantee a system actually understands its subject matter.  Although the probability of humans being fooled plummets as the test goes on, it never reaches zero.  The test depends on human minds to assess whether there is more there than a thin facade.  But what exactly is being assessed?

Cover of "Artificial Intelligence" showing an pixelated outline of The Thinker statueI just finished reading Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans.  Mitchell recounts how, in recent years, deep learning networks have broken a lot of new ground.  Such networks have demonstrated an uncanny ability to recognize items in photographs, including faces, to learn how to play old Atari games to superhuman levels, and have even made progress in driving cars, among many other things.

But do these systems have any understanding of the actual subject matter they’re dealing with?  Or do they have what Daniel Dennett calls “competence without comprehension”?

A clue to the answer is found in what can be done to stymie their performance.  Changing a few pixels in a photographic image, in such a manner that humans can’t even notice, can completely defeat a modern neural network’s ability to accurately interpret what is there.  Likewise, moving key user interface components of an old Atari game over by one pixel, again unnoticeable to a human player, can completely wreck the prowess of these learning networks.  And we’ve all heard about the scenarios that can confuse a self-driving car, such as construction zones, or white trucks against a cloudy sky.
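
To make the first of these concrete, here’s a minimal sketch of one common way such imperceptible perturbations are crafted, the fast gradient sign method (FGSM).  The classifier, image tensor, and epsilon are placeholders, and this isn’t necessarily the technique used in the specific studies Mitchell describes.

```python
# Minimal FGSM sketch: nudge every pixel slightly in the direction that increases
# the classifier's loss.  The change is invisible to a human but can flip the label.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.005):
    """image: a (1, 3, H, W) tensor in [0, 1]; true_label: a (1,) tensor of class indices."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # tiny per-pixel nudge
    return adversarial.clamp(0, 1).detach()
```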

In other words, whatever understanding might exist in these networks, it remains thin and brittle, subject to being defeated by unforeseen stimuli or, even worse, being completely fooled by an adversarial attack carefully crafted to defeat recognition (as in the face of a fugitive).  These systems lack a deeper understanding of their subject matter.  They lack a comprehensive world model.

The AI pioneer Marvin Minsky long ago made the observation that with AI, “easy things are hard,” that is, what is trivially easy for a three-year-old often remains completely beyond the capabilities of the most sophisticated AI systems.

Ironically, it’s often things that are hard for humans that computer systems can do easily.  The term “computer” originally referred to skilled humans who performed calculations.  Computation was the original killer app, a capability that the earliest systems were able to do in a manner that left human computers in the dust.  And we all use systems that do accounting, navigation, or complex simulations far better than anything we could do ourselves.

But even the most sophisticated of these systems remain severe idiot savants, supremely capable at a limited skill set, but utterly incapable of applying it in any general manner.  Moving beyond these specialized systems into general intelligence has remained an elusive goal in AI research for decades.  The difficulty of achieving this is often called “the barrier of meaning.”  But what exactly do we mean by terms like “meaning”, “understanding”, or “worldview”?

As humans, we spend our lives building models of the world.  For instance, even a very young child understands the idea of object permanence, that an object won’t go away if you look away from it, at least unless someone or something acts on it.  Or unless it itself moves, if it’s that kind of object.  We can think of object permanence as an example of intuitive physics, and the understanding that some systems can move themselves as intuitive biology.

As children get older, they also develop intuitive psychology, a theory of mind, an understanding that others have a viewpoint, and an ability to make predictions about how such systems will react in various scenarios, enabling them to navigate social situations.

These intuitive models serve as a foundation that we build symbolic concepts on top of.  Arguably, we only understand complex concepts as metaphors of things we understand at a more primal level, that is, a level involving our immediate spatio-temporal models of the world.  When we say we “understand” something, what we typically mean is that we can reverse the metaphor back to that core physical knowledge.

So what does this mean for getting an AI to a general level of intelligence?  Mitchell notes that a lot of researchers are now thinking that AI will need its own core physical knowledge of the world.  Only with that base will these systems start understanding the concepts they’re working with.  In other words, AI will need a physical body, along with time to learn about the world.

But is even that sufficient?  The other day I relayed Kingson Man’s and Antonio Damasio’s proposal that AI needs to have feelings rooted in maintaining homeostasis to really get this general understanding.  This might make sense if you think about what it means to actually perceive the environment.  A perception is essentially a cluster of predictions, predictions made for certain purposes.  For us, and any other animal we’re tempted to label “conscious”, those purposes involve survival and procreation, that is, feelings.  A system with instincts not calibrated for satisfying selfish genes may perceive the world very differently.

Which leads to the question, how similar to us do these systems need to be to achieve general intelligence?  At this point, I think it’s worth noting that it’s somewhat of a conceit to label our own type of intelligence as “general.”  A case could be made that we’re a particular type of survival intelligence.  Is the base of our intelligence the only one?

On the one hand, if we want AI systems to be able to do what we do, then it seems reasonable to suppose that their intelligence should be built from similar foundations.  Although hewing too closely to those foundations opens us to the dangers I noted in the post about Man and Damasio’s proposal.  We want intelligent tools, not slaves.

On the other hand, do we necessarily want to limit AI systems by the biases of our species, or even of all animals?  One of the things we hope to get from AI is insights that we as a species may be blind to.  If we build them too much in our image, it seems like we’d be forgoing those types of benefits.

But it might be that we have little choice.  Maybe we have to start by giving them a base similar to ours, until we can learn enough about how this all works.  Once we understand more, it may become obvious whether alternate bases are feasible.  If they are, those alternate bases might produce astoundingly alien minds.

Even starting with the physical base, I tend to think shooting directly for human level intelligence is unrealistic, at least until we first have fish level, reptile level, mouse level, or primate level intelligence to build upon.

How will we know that we’ve achieved general intelligence?  A robust version of the Turing test remains an option, but Mitchell also discusses other interesting tests that go a bit further.  One is a Winograd Schema test.  Consider the following two sentences:

The city councilmen refused the demonstrators a permit because they feared violence.

The city councilmen refused the demonstrators a permit because they advocated violence.

In the first sentence, who feared violence?  In the second, who advocated violence?  The answers are clear to humans with a world model, but no current AI can answer them.  Winograd schema challenges attempt to get at whether the system in question has a real conceptual understanding of the text.
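
To illustrate why surface statistics aren’t enough here, consider a toy sketch (my own, not from Mitchell’s book): a schema pair plus a shallow resolver that just picks the noun phrase nearest the pronoun.  By construction, such a heuristic can only get one of the two variants right.

```python
# A Winograd schema pair: two sentences differing by one word, where the pronoun's
# referent flips with that word, so a shallow heuristic is wrong on one of them.
schema = {
    "template": "The city councilmen refused the demonstrators a permit because they {} violence.",
    "candidates": ["city councilmen", "demonstrators"],
    "answers": {"feared": "city councilmen", "advocated": "demonstrators"},
}

def nearest_noun_resolver(sentence, candidates):
    """Shallow baseline: resolve 'they' to whichever candidate appears closest before it."""
    return max(candidates, key=lambda c: sentence.lower().rfind(c))

for word, correct in schema["answers"].items():
    guess = nearest_noun_resolver(schema["template"].format(word), schema["candidates"])
    print(f"{word}: guessed '{guess}', correct '{correct}'")
```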

Another type of test is the Bongard problem, where two groups of images are provided, and the subject is asked to identify the characteristic that distinguishes the two groups from each other, such as all small versus all large shapes.  It’s a test of pattern-matching ability that humans can usually handle, but that again is currently beyond machine systems.

I’m not sure these tests are really beyond a system’s ability to conceivably answer without deep comprehension, but elaborations like Winograd schemas or Bongard problems do seem far more robust.  But then, a one-hour Turing test also seems extremely difficult to pass with shallow algorithms.

So remember, when seeing breathless press releases about some new accomplishment of an AI system, ask yourself whether the new accomplishment shows a break in the barrier of meaning.  I don’t doubt that announcement will come some day, but it still seems a long way off.

Unless of course I’m missing something.

Add feelings to AI to achieve general intelligence?

Neuroscientists Kingson Man and Antonio Damasio have a paper out arguing that the way to get artificial intelligence (AI) to the next level is to add in feelings.

“Today’s robots lack feelings,” Man and Damasio write in a new paper (subscription required) in Nature Machine Intelligence. “They are not designed to represent the internal state of their operations in a way that would permit them to experience that state in a mental space.”

So Man and Damasio propose a strategy for imbuing machines (such as robots or humanlike androids) with the “artificial equivalent of feeling.” At its core, this proposal calls for machines designed to observe the biological principle of homeostasis. That’s the idea that life must regulate itself to remain within a narrow range of suitable conditions — like keeping temperature and chemical balances within the limits of viability. An intelligent machine’s awareness of analogous features of its internal state would amount to the robotic version of feelings.

Such feelings would not only motivate self-preserving behavior, Man and Damasio believe, but also inspire artificial intelligence to more closely emulate the real thing.

One of the biggest challenges in AI is figuring out how to generalize the lessons learned in specialized neural networks for use in other tasks.  Humans and animals do it all the time.  In that sense, Man’s and Damasio’s proposition is interesting.  Maybe having the system start with its own homeostasis would provide a foundation for that generalization.
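
As a rough illustration of what I take that to mean (my own toy, not anything from Man and Damasio’s paper), imagine internal variables with setpoints, where the “feeling” is the deviation from the setpoint and behavior is driven by whichever deviation is currently most urgent.  The variable names and actions below are made up.

```python
# Toy homeostatic agent: "feelings" are deviations of internal variables from their
# setpoints, and the agent acts on whichever deviation is currently the largest.
SETPOINTS = {"battery": 0.9, "temperature": 0.5}
ACTIONS = {"battery": "seek_charger", "temperature": "regulate_temperature"}

def feelings(internal_state):
    """Signed deviation of each internal variable from its setpoint."""
    return {k: internal_state[k] - SETPOINTS[k] for k in SETPOINTS}

def choose_action(internal_state):
    """Attend to the most pressing need: the largest deviation in absolute terms."""
    f = feelings(internal_state)
    most_urgent = max(f, key=lambda k: abs(f[k]))
    return ACTIONS[most_urgent]

print(choose_action({"battery": 0.2, "temperature": 0.55}))  # -> "seek_charger"
```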

On the other hand, I’ve often said I don’t worry too much about the dangers of AI because they wouldn’t have their own survival instinct.  Giving one to them seems like it would open the door to those dangers.  Man and Damasio have a response to that.  Give it empathy.

“Stories about robots often end poorly for their human creators,” Man and Damasio acknowledge. But would a supersmart robot (with feelings) really pose Terminator-type dangers? “We suggest not,” they say, “provided, for example, that in addition to having access to its own feelings, it would be able to know about the feelings of others — that is, if it would be endowed with empathy.”

And so Man and Damasio suggest their own rules for robots: 1. Feel good. 2. Feel empathy.

Well, maybe, but as the Science News author notes, that seems optimistic.  It also raises the danger that rather than building a set of tools motivated to do what we want them to do, we might be creating a race of slaves, survival machines forced to do our bidding.  The danger and possible slavery aspects of this make me uneasy.

I’m also not entirely sure I buy the logic that putting feelings in will necessarily lead to general intelligence.  It seems more likely that it will just lead these systems to behave like animals.  Untold numbers of animal species evolved on Earth before one capable of complex abstract thought came along, and we seem far from inevitable.

Still, exploring in this direction might provide insights into human and animal intelligence and consciousness.  But it also makes John Basl’s and Eric Schwitzgebel’s concern about AI welfare seem more relevant and prescient.

Machine learning and the need for innate foundations

This interesting Nature Communications article by Anthony M. Zador came up in my Twitter feed: A critique of pure learning and what artificial neural networks can learn from animal brains:

Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.

The behavior of the vast majority of animals is primarily driven by instinct, that is, innate behavior, with learning being more of a fine tuning mechanism.  For simple animals, such as insects, the innate behavior is almost the whole thing.  Zador points out, for example, that spiders are born ready to hunt.

By the time we get to mammals, learning is responsible for a larger share of the behavior, but mouse and squirrel behavior remains mostly innate.  We have a tendency to view ourselves as an exception, and we are, to an extent.  Our behavior is far more malleable, subject to revision from learning, than that of the typical mammal.

But a lot more human behavior is innate than most of us are comfortable acknowledging.  We have a hard time seeing it because we’re doing so from within the species.  We talk about “general” intelligence as though we were one.  But our intelligence is tightly wound to the needs of a social primate species.

I’m a bit surprised that the artificial intelligence field needs to be told that natural neural networks are not born blank slates.  Although rather than blank slate philosophy, this might simply reflect the desire of engineers to ensure that the learning-algorithm well has been thoroughly tapped.

But it seems like the next generation of ANNs will require a new approach.  Zador points out how limited our current ANNs actually are.

We cannot build a machine capable of building a nest, or stalking prey, or loading a dishwasher. In many ways, AI is far from achieving the intelligence of a dog or a mouse, or even of a spider, and it does not appear that merely scaling up current approaches will achieve these goals.

Nature’s secret sauce appears to be this innate wiring.  But a big question is where this innate wiring comes from.  It has to come from the genome, in some manner.  But Zador points out that the information capacity of the genome is far smaller, by several orders of magnitude, than what is needed to specify the wiring for a brain.
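
A rough back-of-envelope calculation makes the gap vivid.  The figures below are order-of-magnitude estimates of my own, not numbers from Zador’s paper.

```python
# Order-of-magnitude estimates only: the genome can't plausibly list every synapse.
import math

GENOME_BASE_PAIRS = 3.2e9                        # human genome
genome_bits = GENOME_BASE_PAIRS * 2              # 4 nucleotides -> 2 bits each, ~6.4e9 bits

NEURONS = 8.6e10                                 # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4                        # rough average
bits_to_name_one_partner = math.log2(NEURONS)    # ~37 bits to address a target neuron
wiring_bits = NEURONS * SYNAPSES_PER_NEURON * bits_to_name_one_partner

print(f"genome capacity:  ~{genome_bits:.1e} bits")
print(f"explicit wiring:  ~{wiring_bits:.1e} bits")
print(f"shortfall factor: ~{wiring_bits / genome_bits:.0e}")  # millions of times too small
```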

Although for simple creatures, like the C. elegans worm, it is plausible for the genome to actually specify the wiring of the entire nervous system, in the case of more complex animals, particularly humans, it has to be about specifying rules for wiring during development.  Interestingly, human genomes are relatively small compared to many others in the animal kingdom, such as fish, indicating that the genome information bottleneck may actually have some adaptive value.

This means that brain circuits should show repeating patterns, a canonical circuit that many neuroscientists search for.  I’m reminded of the hypothesis of cortical columns, which seems similar to the idea of a canonical structure.  If so, though, that structure would only apply to the cortex itself.

But aside from the cerebellum, most of the neurons in the brain are in the cortex.  Of the 86 billion neurons in the human brain, 69 billion are in the cerebellum, 16 billion in the cortex, and all the subcortical and brainstem neurons fall in that last billion or so.  I would think the subcortical and brainstem regions are the ones with the most innate wiring, meaning that these are the regions that a lot of the genomic wiring rules would have to apply to, but detailed rules for a billion neurons seem easier to conceive of than for 86 billion.

Zador points out that, from a technological perspective, ANNs learn by encoding the structure of statistical regularities from the incoming data into their network.  In the animal versions, evolution could be viewed as an “outer” loop where long term regularities get encoded across generations, and an “inner” loop of the animal learning during its individual lifetime.  Although the outer loop only happens indirectly through the genome.
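
Here’s a toy version of that two-loop picture (my own illustration, not a model from the paper): the “genome” is just an innate starting weight, the inner loop is a brief lifetime of learning, and selection acts only on the innate starting points, never on what was learned.

```python
# Outer loop: evolution selects innate starting weights ("genomes") whose carriers
# learn a toy task (y = 2x, one weight, squared error) well in a short lifetime.
# Inner loop: a few gradient steps during that lifetime.  Learned weights are never
# written back into the genome; only the innate starting points are inherited.
import random

def lifetime_learning(w, steps=20, lr=0.1):
    for _ in range(steps):
        x = random.uniform(-1, 1)
        grad = 2 * (w * x - 2 * x) * x          # d/dw of (w*x - 2x)^2
        w -= lr * grad
    return w

def fitness(genome):
    return -abs(lifetime_learning(genome) - 2)  # closer to the target weight is better

population = [random.uniform(-5, 5) for _ in range(20)]   # each genome: one innate weight
for generation in range(50):                              # the "outer" loop
    parents = sorted(population, key=fitness, reverse=True)[:5]
    population = [p + random.gauss(0, 0.3) for p in parents for _ in range(4)]

print("best innate starting weight:", max(population, key=fitness))
```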

Anyway, it seems like there’s a lot to be learned about building a mind by studying how the human genome codes for and leads to the development of neural wiring.  Essentially, our base programming comes from this process.

But apparently it remains controversial that AI research still has things to learn from biological systems.  It’s often said that the relationship of AI to brains is like the one between planes and birds.  Engineers could only learn so much from bird flight.

But Zador points out that this misses important capabilities we want from an AI.  While a plane can fly faster and higher than any bird, it can’t dive into the water and catch a fish, swoop down on a mouse, or hover next to a flower.  Computer systems already surpass humans in many specific tasks, but fail miserably in many others, such as language, reasoning, common sense, spatial navigation, or object manipulation, that are trivially simple for us.

If Zador’s right, and it’s hard for me to imagine he isn’t, then AI research still has a lot to learn from biological systems.  Frankly, I’m a bit surprised this is controversial.  As in many endeavors, intractable problems often become easier if we just broaden the scope of our investigation.

Unless, of course, there’s something about this I’m missing?

Detecting consciousness in animals and machines, inside-out

An interesting paper came up in my feeds this weekend: Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach.  The authors put forth a definition of consciousness, and then criteria to test for it, although they emphasize that these can’t be “hard” criteria, just indicators.  None of them individually definitively establishes consciousness.  Nor does any one absent indicator rule it out.  But cumulatively, their presence or absence can make consciousness more likely or unlikely.

Admitting that defining consciousness is fraught with lots of issues, they focus on key features:

  1. Qualitative Richness: Conscious content is multimodal, involving multiple senses.
  2. Situatedness: The contents of consciousness are related to the system’s physical circumstances.  In other words, we aren’t talking about encyclopedic type knowledge, but knowledge of the immediate surroundings and situation.
  3. Intentionality: Conscious experience is about something, which unavoidably involves interpretation and categorization of sensory inputs.
  4. Integration: The information from the multiple senses is integrated into a unified experience.
  5. Dynamics and stability: Despite things like head and eye movements, the perception of objects is stabilized in the short term.  We don’t perceive the world as a shifting, moving mess.  Yet we can detect actual dynamics in the environment.  The machinery involved in this is prone to generating sensory illusions.

In discussing the biological function of consciousness, the authors focus on the need of an organism to make complex decisions involving many variables, decisions that can’t be adequately handled by reflex or habitual impulses.  They don’t equate consciousness with this complex decision making, but with the “multimodal, situational survey of one’s environment and body” that supports it.

This point seems crucial, because the authors at one point assert that the frontal lobes are not critical for consciousness.  Of course, many others assert the opposite.  A big factor is whether frontal lesions impair consciousness.  There seems to be widespread disagreement in the field about this, but at least some of it may hinge on the exact definition of consciousness under consideration.

The authors then identify key indicators:

  1. Goal directed behavior and model based learning:  Crucially, the goal must be formulated and envisioned by the system.  People like sex because it leads to reproduction, but reproduction is a “goal” of natural selection, not necessarily of the individuals involved, who often take measures to enjoy sex while frustrating the evolutionary “purpose”.  On the other hand, formulating a novel strategy to woo a mate would qualify.
  2. Brain anatomy and physiology: In mammals, conscious experience is associated with thalamo-cortical systems, or in other vertebrates with their functional analogs, such as the nidopallium in birds.  But this criterion largely breaks down with simpler vertebrates, not to mention invertebrates or artificial intelligence.
  3. Psychometrics and metacognitive judgments: The ability of a system to detect and discriminate objects is measurable, as is, if present, the organism’s ability to assess its own knowledge.
  4. Episodic Memory:  Autobiographical memory of events experienced at particular places and times.
  5. Illusion and multistable perception: Susceptibility to sensory illusions (such as visual illusions) due to intentionality, the building of perceptual models.
  6. Visuospatial behavior: Having a stable situational survey despite body movements.

I’m not a fan of relying too much on 2, specific anatomy, at least other than in cases of assessing whether someone is still conscious after brain injuries.  As I noted in the post on plant consciousness, I think focusing on capabilities keeps us grounded but still open minded.

I don’t recall the authors making this connection, but it’s worth noting that the same neural machinery is involved in both 1, goal planning, and 4, episodic memory.  We don’t retrieve memories from a recording, but imagine, simulate, reproduce past events, which is why memory can be so unreliable, but also why we can fit a lifetime of memories in our brain.

I was initially skeptical of the illusion criterion in 5, but on reflection it makes sense.  Experiencing a visual or other sensory illusion means you are accessing a representation, even if not a correct one, so a system showing signs of that experience does indicate intentionality, the aboutness of experience.

The authors spend some space assessing IIT (Integrated Information Theory) in relation to “the problem of panpsychism”.  They view panpsychism as cheapening the concept of consciousness to the point where the word loses its usefulness, and see IIT as “underconstrained” in a manner that leads to it.  (I saw a comment the other day that IIT gets cited as much in neuroscience papers as other theories like GWT, but at least in my own personal survey, most of the citations of IIT seem to be criticisms.)

Finally, the authors look at modern machine learning neural networks and conclude that they currently show no signs of consciousness.  They note that machines may have alternate strategies for accomplishing the same thing as consciousness, which raises the question of how malleable we want the word “consciousness” to be.

There’s a lot here that resonates with the work surveyed by Feinberg and Mallatt, which I’ve reported on before, although these indicators seem a bit less concrete than F&M’s.  They might better be viewed as a framework for developing more specific experimental criteria.

Of course, if you don’t buy their definition of consciousness, then you may not buy the resulting indicators.  But this is always the problem with scientific studies of ambiguous concepts.

So the question is, do you buy their description?  Or the resulting indicators?

h/t Neuroskeptic

The ASSC 23 debate on whether artificial intelligence can be conscious

The ASSC (Association for the Scientific Study of Consciousness) had its annual conference on consciousness this week, which culminated in a debate on whether AI can be conscious.

Note: the event doesn’t actually start until the 28:30 minute mark.  The remaining part is about 99 minutes long.

I was delighted to see the discussion immediately become focused on the importance of definitions, since I think the question is otherwise meaningless.  In my humble and totally unbiased opinion, the first speaker, Blake Richards, hit it out of the park with his answer that it depends on which definition of consciousness we’re using, and in noting the issues with the folk definitions, such as subjective experience, phenomenality, etc.

In fact, I would go on to say that just about all of Richards’ positions in this discussion struck me as right.  The only issue I think he might have misplaced faith in is our ability to come together on one definition of consciousness that is scientifically measurable.  (And to be fair, it was more an aspiration than a faith.)  I strongly suspect that we’ll always have to qualify which specific version we’re talking about (e.g. access consciousness, exteroceptive consciousness, etc.).  But overall I found his hard-core functionalism refreshing.

It’s inevitable that this type of conversation turns toward ethics.  Indeed, I think when it comes to folk conceptions of consciousness, the questions are inextricably linked.  Arguably what is conscious is what is a subject of moral worth, and what is a subject of moral worth is conscious.

I got a real kick out of Hakwan Lau’s personality.  As a reminder, he was one of the authors of the paper I shared last week on empirical vs fundamental IIT.

I was also happy to see all the participants reject the zombie concept in the later part of the discussion.

Generally speaking, this was an intelligent, nuanced, and fairly well grounded discussion on the possibilities.

As I noted above, my own view is similar to Richards’.  If we can design a system that reproduces the functional capabilities of an animal, human or otherwise, that we consider conscious, then by whatever standard we’re using, that system will be conscious.  The interesting question to me is what is required to do that.

What do you think?  Is AI consciousness possible?  Why or why  not?  And if it is, what would be required to make you conclude there is a consciousness there?

Protecting AI welfare?

John Basl and Eric Schwitzgebel have a short article at Aeon arguing that AI (artificial intelligence) should enjoy the same protection as animals do for scientific research.  They make the point that while AI is a long way off from achieving human level intelligence, it may achieve animal level intelligence, such as the intelligence of a dog or mouse, sometime in the near future.

Animal research is subject to review by IRBs (Institutional Review Boards), committees constituted to provide oversight of research on human or animal subjects, ensuring that ethical standards are followed for such research.  Basl and Schwitzgebel are arguing for similar committees to be formed for AI research.

Eric Schwitzgebel also posted the article on his blog.  What follows is the comment, slightly amended, that I left there.

I definitely think it’s right to start thinking about how AIs might compare to animals.  The usual comparison with humans is currently far too much of a leap.  Although I’m not sure we’re anywhere near dogs and mice yet.  Do we have an AI with the spatial and navigational intelligence of a fruit fly, a bee, or a fish?  Maybe at this point mammals are still too much of a leap.

But it does seem like there is a need for a careful analysis of what a system needs in order to be a subject of moral concern.  Saying it needs to be conscious isn’t helpful, because there is currently no consensus on the definition of consciousness.  Basl and Schwitzgebel mention the capability to have joy and sorrow, which seems like a useful criterion.  Essentially, does the system have something like sentience, the ability to feel, to experience both negative and positive affects?  Suffering in particular seems extremely relevant.

But what is suffering?  The Buddhists seemed to put a lot of early thought into this, identifying desire as the main ingredient, a desire that can’t be satisfied.  My knowledge of Buddhism is limited, but my understanding is that they believe we should convince ourselves out of such desires.  But not all desires are volitional.  For instance, I don’t believe I can really stop desiring not to be injured, or stop desiring to be alive, and it would be extremely hard to stop caring about friends and family.

For example, if I sustain an injury, the signal from the injury conflicts with the desire for my body to be whole and functional.  I will have an intense reflexive desire to do something about it.  Intellectually I might know that there’s nothing I can do but wait to heal.  But regardless, during the interim the reflex continues to fire and continuously needs to be inhibited, using up energy and disrupting rest.  This is suffering.

But involuntary desires seem like something we have due to the way our minds evolved.  Would we build machines like this (aside from cases where we’re explicitly attempting to replicate animal cognition)?  It seems like machine desires could be quieted in a way that primal animal desires can’t: by registering that the desire can’t be satisfied at all.  Once that’s known, it’s not productive for one part of the system to keep needling another part to resolve it.

So if a machine sustains damage, damage it can’t fix, it’s not particularly productive for the machine’s control center to continuously cycle through reflex and inhibition.  One signal that the situation can’t be resolved should quiet the reflex, at least for a time.  Although it could always resurface periodically to see if a resolution has become possible.
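
A minimal sketch of that idea (purely my own illustration, with made-up names): a drive that, on learning its goal can’t currently be met, suppresses itself for a cooldown period instead of firing continuously, then rechecks later.

```python
# A drive that goes quiet for a while once it learns its goal can't be met,
# instead of needling the control center continuously.
import time

class Drive:
    def __init__(self, name, cooldown_seconds=3600.0):
        self.name = name
        self.cooldown = cooldown_seconds
        self.suppressed_until = 0.0

    def signal(self, goal_achievable):
        """Return True if the drive should press for action right now."""
        now = time.time()
        if now < self.suppressed_until:
            return False                                   # still in the quiet period
        if not goal_achievable:
            self.suppressed_until = now + self.cooldown    # quiet the reflex; recheck later
            return False
        return True

repair = Drive("repair_damage")
print(repair.signal(goal_achievable=False))  # False, and the drive goes quiet for an hour
print(repair.signal(goal_achievable=False))  # False: still suppressed, no repeated needling
```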

That’s not to say that some directives might not be judged so critical that we would put them as constant desires in the system.  A caregiver’s desire to ensure the well being of their charge seems like a possible example.  But it seems like this would be something we only used judiciously.  

Another thing to consider is that these systems won’t have a survival instinct.  (Again, unless we’re explicitly attempting to replicate organic minds.)  That means the inability to fulfill an involuntary and persistent desire wouldn’t have the same implications for them that it does for a living system.  In other words, being turned off or dismantled would not be a solution the system feared.

So, I think we have to be careful with setting up a new regulatory regime.  The vast majority of AI research won’t involve anything even approaching these kinds of issues.  Making all such research subject to additional oversight would be bureaucratic and unproductive.  

But if the researchers are explicitly trying to create a system that might have sentience, then the oversight might be warranted.  In addition, having guidelines on what current research shows on how pain and suffering work, similar to the ones used for animal research, would probably be a good idea.

What do you think?  Is this getting too far ahead of ourselves?  Or is it past time something like this was implemented?

Is superintelligence possible?

Daniel Dennett and David Chalmers sat down to “debate” the possibility of superintelligence.  I put “debate” in quotes because this was a pretty congenial discussion.

(Note: there’s a transcript of this video on the Edge site, which might be more time efficient for some than watching a one hour video.)

Usually for these types of discussions, I agree more with Dennett, and that’s true to some extent this time, although not as much as I expected.  Both Chalmers and Dennett made very intelligent remarks.  I found things to agree and disagree with in what each of them said.

I found Chalmers a little too credulous of the superintelligence idea.  Here I agreed more with Dennett.  It’s possible in principle but may not be practical.  In general, I think we don’t know all the optimization trade offs that might be necessary to scale up an intelligence.

For example, it’s possible that achieving the massive parallel processing of the human brain at the power levels it consumes (~20 watts) may inevitably require slower processing and water-cooled operation.  I think it’s extremely unlikely that human minds are the most intelligent minds possible, but the idea that an AI can be thousands of times more intelligent strikes me as a proposition that deserves scrutiny.  The physical realities may put limits on that.

And I agree more with Dennett on how AI is likely to be used, more as tools than as colleagues.  I’m not sure Chalmers completely grasped this point, since the dichotomy he described isn’t how I understood Dennett’s point, which is that we can have autonomous tools.

That said, I’m often surprised how much I agree with Chalmers when he discusses AI.  There was a discussion on AI consciousness, where he made this statement:

There’s some great psychological data on this, on when people are inclined to say a system conscious and has subjective experience. You show them many cases and you vary, say, the body—whether it’s a metal body or a biological body—the one factor that tracks this better than anything else is the presence of eyes. If a system has eyes, it’s conscious. If the system doesn’t have eyes, well, all bets are off. The moment we build our AIs and put them in bodies with eyes, it’s going to be nearly irresistible to say they’re conscious, but not to say that AI systems which are not in body do not have consciousness.

I’m reminded of Todd Feinberg and Jon Mallatt’s thesis that consciousness in animals began with the evolution of eyes.  Eyes imply a worldview, some sort of intentionality, exteroceptive awareness.  Of course, you can put eyes on a machine that doesn’t have that internal modeling, but then it won’t respond in other ways we’d expect from a conscious entity.

There was also a discussion about mind uploading in which both of them made remarks I largely agreed with.  Dennett cautioned that the brain is enormously complex and this shouldn’t be overlooked, and neither philosopher saw it happening anytime soon, as in the next 20 years.  In other words, neither buys into the Singularity narrative.  All of which fits with my own views.

SMBC on what separates humans from machines

Source: Saturday Morning Breakfast Cereal (Click through for full sized version and the red button caption.)

My own take on this is that what separates humans from machines is our survival instinct.  We intensely desire to survive and procreate.  Machines, by and large, don’t.  At least they won’t unless we design them to.  If we ever did, we would effectively be creating a race of slaves.  But it’s much more productive to create tools whose desires are to do what we design them to do, than to design survival machines and then force them to do what we want them to.

Many people may say that the difference is more about sentience.  But sentience, the ability to feel, is simply how our biological programming manifests itself in our affective awareness.  A machine may have a type of sentience, but one calibrated for its designed purposes, rather than the kind evolution produces, calibrated for gene preservation.

I do like that the strip uses the term “humanness” rather than “consciousness”, although both terms are inescapably tangled up with morality, particularly in what makes a particular system a subject of moral concern.

It’s interesting to ponder that what separates us from non-human animals may be what we have, or will have, in common with artificial intelligence, but what separates us from machines is what we have in common with other animals.  Humans may be the intersection between the age of organic life and the age of machine life.

Of course, eventually machine engineering and bioengineering may merge into one field.  In that sense, maybe it’s more accurate to describe modern humans as the link between evolved and engineered life.

 

Why we’ll know AI is conscious before it will

At Nautilus, Joel Frohlich posits how we’ll know when an AI is conscious.  He starts off by accepting David Chalmers’ concept of a philosophical zombie, but then makes this statement.

But I have a slight problem with Chalmers’ zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have.

He then goes on to describe what I’d call a Turing test for consciousness.

This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we won’t know if it’s ethical to unplug such an AI without knowing if it’s conscious, we better start listening for such questions now.

This seems to include a couple of  major assumptions.

First is the idea that we’ll accidentally make an AI conscious.  I think that is profoundly unlikely.  We’re having a hard enough time making AIs that can successfully navigate around houses or road systems, not to mention ones that can simulate the consequences of real world physical actions.  None of these capabilities are coming without a lot of engineering involved.

The second assumption is that consciousness, like some kind of soul, is a quality a system either has or doesn’t have.  We already have systems that, to some degree, take in information about the world and navigate around in it (self-driving cars, Mars rovers, etc).  This amounts to a basic form of exteroceptive awareness.  To the extent such systems have internal sensors, they have a primitive form of interoceptive awareness.  In the language of the previous post, these systems already have a sensorium more sophisticated than that of many organisms.

But their motorium, their ability to perform actions, remains largely rule based, that is, reflexive.  They don’t yet have the capability to simulate multiple courses of action (imagination) and assess the desirability of those courses, although the DeepMind people are working on this capability.
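
In code, that missing capability amounts to something like the following toy sketch (my own illustration, not anything DeepMind has published): run each candidate action through a forward model of the world and act on the outcome judged most desirable.

```python
# Imagination as simulation: evaluate candidate actions with a forward model and a
# desirability score, then act on the best imagined outcome.
def imagine_and_choose(state, actions, world_model, desirability):
    return max(actions, key=lambda action: desirability(world_model(state, action)))

# Toy example: a rover at position 0 choosing a move, preferring to end up near position 3.
world_model = lambda position, move: position + move      # predicted next position
desirability = lambda outcome: -abs(outcome - 3)           # closer to the goal is better
print(imagine_and_choose(0, [-1, 1, 2], world_model, desirability))  # -> 2
```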

The abilities above provide a level of functionality that some might consider conscious, although it’s still missing aspects that others will insist are crucial.  So it might be better described as “proto-conscious.”

For a system to be conscious in the way animals are, it would also have to have a model of self, and care about that self.  This self concern comes naturally to us because having such a concern increases our chances of survival and reproduction.  Organisms that don’t have that instinctive concern tend to quickly be selected out of the gene pool.

But for the AI to ask about its own consciousness, its model of self would need to include another model to monitor aspects of its own internal processing.  In other words, it would need metacognition, introspection, self-reflection.  Only once that is in place will it be capable of pondering its own consciousness, and be motivated to do so.

These are not capabilities that are going to come easily or by accident.  There will likely be numerous prototype failures that are near but not quite there.  This means that we’re likely to see more and more sophisticated systems over time that increasingly trigger our intuition of consciousness.  We’ll suspect these systems of being conscious long before they have the capability to wonder about their own consciousness, and we’ll be watching for signs of this kind of self awareness as we try to instill it, like a parent watching for their child’s first successful utterance of a word (or depending on your attitude, Frankenstein looking for the first signs of life in his creation).

Although it’s also worth wondering how prevalent systems with a sense of self will be.  Certainly they will be created in labs, but most of us won’t want cars or robots that care about themselves, at least beyond their usefulness to their owners.  And given all the ethical concerns with full consciousness and the difficulties in accomplishing it, I think the proto-conscious stage is as far as we’ll bring common everyday AI systems, a stage that makes them powerful tools, but keeps them as tools, rather than slaves.

Unless of course I’m missing something?

AI and creativity

Someone asked for my thoughts on an argument by Sean Dorrance Kelly at MIT Technology Review that AI (artificial intelligence) cannot be creative, that creativity will always be a  human endeavor.  Kelly’s main contention appears to be that creativity lies in the eye of the beholder and that humans are unlikely to recognize AI accomplishments as creative.

Now, I think it’s true that AI suffers from a major disadvantage when it comes to artistic creativity.  Art’s value amounts to what emotions it can engender in the audience.  Often generating those emotions requires an insight from the artist into the human condition, an insight that draws heavily on our shared experiences as human beings.  This is one reason why young artists often struggle: their experiences are as yet too limited to yield those insights, or at least too limited to impress older consumers of their art.

Of course an AI has none of these experiences, nor the human drives that make that experience meaningful in the way it is to us.  AI may be able to exploit correlations between features of other works and how popular those works are, but it is simply not equipped to find a genuine insight into the human condition, at least not for a long time.  In that sense, I agree with Kelly, although his use of the word “always” has an absolutist ring to it I can’t endorse.

But it’s in the realm of games and mathematics that I think Kelly oversells his thesis.  These are areas where insights into the human condition are not necessarily an advantage, although in the case of games they can be.

Much has been written about the achievements of deep-learning systems that are now the best Go players in the world. AlphaGo and its variants have strong claims to having created a whole new way of playing the game. They have taught human experts that opening moves long thought to be ill-conceived can lead to victory. The program plays in a style that experts describe as strange and alien. “They’re how I imagine games from far in the future,” Shi Yue, a top Go player, said of AlphaGo’s play. The algorithm seems to be genuinely creative.

In some important sense it is. Game-playing, though, is different from composing music or writing a novel: in games there is an objective measure of success. We know we have something to learn from AlphaGo because we see it win.

I can’t say I understand this point.  Because AlphaGo’s success is objective, we can’t count what it does in achieving that win as creative?  The fact is AlphaGo found strategies that humans missed.  In some ways, this reminds me of the way evolution often finds solutions to problems that in retrospect look awfully creative.

In the realm of mathematics, Kelly asserts that, so far, mathematical proofs by AI have not been particularly creative.  Fair enough, although by his own standard that’s a subjective judgment.  But he then focuses on proofs an AI might come up with that humans couldn’t understand, noting that a proof isn’t a proof if you can’t convince a community of mathematicians that it’s correct.

Kelly doesn’t seem to consider the possibility that an AI might develop a proof incomprehensible to humans that nevertheless convinces a community of other AIs, who could demonstrate its correctness by using it to solve problems.  Or the possibility that the “not particularly creative” AIs of today might advance considerably in years to come and produce groundbreaking proofs that human mathematicians can understand and appreciate.  Mathematics is one area where I could see AI eventually having insights a human might never have.

But I think the biggest weakness in Kelly’s thesis is at its heart, his admission that creativity, like beauty, lies in the eye of the beholder, that it only exists subjectively.  In other words, it’s culturally specific, and our conception of what is creative might change in the future, particularly as we become more accustomed to intelligent machines.

This leads him to this line of reasoning:

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.

In other words, machines can’t be creative because we humans won’t recognize them as such, and if humans do start to consider them creative, then we will have denigrated ourselves.  This is just a rationalized bias for human exceptionalism, a self-reinforcing loop that closes off any possibility of considering counter-evidence.

So, in sum, will AI ever be creative?  I think that’s a meaningless question (similar to the question of whether it will ever be conscious).  The real question is will we ever regard them as creative?  The answer is we already do in some contexts (see the AlphaGo quote above), but in others, notably in artistic achievement, it may be a long time before we do.  But asserting we never will seems more like a statement of faith than a reasoned conclusion.  Who knows what AIs in the 22nd century will be capable of?

What do you think?  Is creativity something only humans are capable of?  Is there any fact of the matter on this question?