Michael Graziano’s attention schema theory

It’s been a while since I’ve had a chance to highlight Graziano’s attention schema theory.  This brief video is the very barest of sketches, but I think it gets the main idea across.

Those of you who’ve known me for a while might remember that I was once quite taken with this theory of consciousness.  I still think it has substantial value in understanding metacognition and top-down control of attention, but I no longer see it as the whole story; I now see it as one part of a capability hierarchy.

Still, the attention schema theory makes a crucial point.  What we know of our own consciousness is based on an internal model of it that our brain constructs.  Like all models, it’s simplified, optimized for adaptive feedback rather than for understanding how the mind works.
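
As a toy illustration of the general idea (my own sketch, not Graziano’s actual model), imagine a system whose report about what it’s “aware of” comes from a simplified summary of its attention rather than from the attention process itself:

```python
# A toy sketch of the attention-schema idea, not Graziano's actual model: the
# system's self-report comes from a simplified schema of its attention, not
# from the attention process itself.

class Agent:
    def __init__(self):
        self.attention = {}   # the real process: a graded competition among many signals

    def attend(self, signals):
        # Actual attention: normalize the competing signals into relative weights.
        total = sum(signals.values())
        self.attention = {name: strength / total for name, strength in signals.items()}

    def attention_schema(self):
        # The simplified self-model: just "I am aware of X", with the messy detail discarded.
        winner = max(self.attention, key=self.attention.get)
        return f"I am aware of the {winner}"

agent = Agent()
agent.attend({"red ball": 0.55, "background noise": 0.30, "itch": 0.15})
print(agent.attention_schema())   # "I am aware of the red ball"
```

The report is useful for monitoring and control, but it discards almost everything about how attention actually operated, which is the sense in which introspection can mislead us about the underlying process.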

The problem is that this model feels privileged, so much so that many people simply dismiss out of hand the proposition that what it shows us might not be accurate.  That our external senses aren’t necessarily accurate is relatively easy to accept, but the idea that our inner senses might have the same limitations is often fiercely resisted.

But there is a wealth of scientific research showing that introspection is unreliable.  It actually functions quite well in day-to-day life; it’s only when we attempt to use it as evidence for how the mind works that we run into trouble.  Introspective data that is corroborated by other empirical data is fine, but when it’s our only source of information, caution is called for.

Graziano’s contention that conscious awareness is essentially a data model puts him in the illusionist camp.  As I’ve often said, I think the illusionists are right, although I don’t like calling phenomenal consciousness an illusion, since that implies it doesn’t exist.  I currently prefer the slightly less contentious assertion that it exists only subjectively, as a loose and amorphous construction from various cognitive processes.


Brain inspired hardware

The Scientist has an interesting article up reporting on the progress that’s being made in neuromorphic hardware.

But the fact that computers “think” very differently than our brains do actually gives them an advantage when it comes to tasks like number crunching, while making them decidedly primitive in other areas, such as understanding human speech or learning from experience. If scientists want to simulate a brain that can match human intelligence, let alone eclipse it, they may have to start with better building blocks—computer chips inspired by our brains.

So-called neuromorphic chips replicate the architecture of the brain—that is, they talk to each other using “neuronal spikes” akin to a neuron’s action potential. This spiking behavior allows the chips to consume very little power and remain power-efficient even when tiled together into very large-scale systems.

Traditionally, artificial neural networks have been implemented with software.  While this gets at algorithms that may resemble the ones in biological nervous systems, it does so without the advantages of the physical implementation of those systems.  Essentially it’s emulating that hardware (wetware?), which in computing has always come with a performance hit, with the magnitude of the hit usually corresponding to just how different the hardware architectures are, and modern chips and nervous systems are very different.

There’s a lot of mystique associated with neural networks.  But it’s worth remembering that a neural network is basically a crowdsourcing strategy.  Instead of having one sophisticated and high-performing processor, or a few of them, like the ones in modern commercial computers, the strategy involves having large numbers, millions or billions, of relatively simple processors: the neurons.

Each neuron sums up its inputs, both positive and negative (excitatory and inhibitory), and fires when a threshold is reached, providing input to its downstream neurons.  Synapses, the connections between neurons, strengthen or weaken depending on usage, changing the overall flow of information.
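
As a rough illustration of that basic idea (a toy sketch only, not how any actual neuromorphic chip or biological network works), here’s what a handful of threshold neurons passing spikes through weighted connections might look like:

```python
# A toy sketch of the "crowd of simple processors" idea: leaky threshold neurons
# summing weighted spikes and firing when a threshold is reached. Illustrative only.

import random

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0       # accumulated input, a stand-in for membrane potential
        self.threshold = threshold
        self.leak = leak           # fraction of potential retained each time step

    def step(self, weighted_inputs):
        # Sum excitatory (positive) and inhibitory (negative) inputs.
        self.potential = self.potential * self.leak + sum(weighted_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return 1               # a spike sent to downstream neurons
        return 0

# A tiny random network: every neuron receives every neuron's last spike through
# a weight (a "synapse") that could be strengthened or weakened with use.
neurons = [Neuron() for _ in range(5)]
weights = [[random.uniform(-0.5, 0.5) for _ in neurons] for _ in neurons]
spikes = [0] * len(neurons)

for t in range(10):
    drive = [random.uniform(0.0, 0.6) for _ in neurons]   # stand-in for sensory input
    spikes = [n.step([w * s for w, s in zip(weights[i], spikes)] + [drive[i]])
              for i, n in enumerate(neurons)]
    print(t, spikes)
```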

Of course, biological neurons are cells, which come with all the complexity associated with cellular processes.  But we shouldn’t be surprised that evolution solved its computing and communication needs with cells, since in complex life it solves everything that way.

Neuromorphic computing is moving the actual hardware closer to the structure used in nervous systems.  I’d always known about the performance advantages that might bring, but apparently a lot of the power efficiency of the brain (which operates on about 20 watts) comes down to its analog features, and neuromorphic computing, by adopting hybrid analog-digital structures, appears to be reaping many of those benefits.

The article also discusses various attempts that are underway to run simulations of the brain, although at present they’re simulating simplified versions of it.  But combined with computational neuroscience, this approach may yield theoretical insights into actual biological brains.

I’ve written before about Moore’s Law petering out, and about how further progress in computing will require innovative architectural changes.  I find it heartening that this kind of research is happening.  Too much of the industry seems caught up in the quantum computing hype, but this line of inquiry may yield results much sooner.


Are zombies conscious?

This is not a question about philosophical zombies.  I did a post on them a while back.  (The TL;DR is that I find that whole concept ranges from incoherent to dubious, depending on the exact version.)

This post is on the zombies we see in fiction, such as Night of the Living Dead, the Resident Evil franchise, World War Z, and a host of other movies and shows.  Last week, while watching the Game of Thrones episode featuring the epic battle with the dead, and the way the zombies searched for and pursued their victims, a question suddenly occurred to me.  Are zombies, as traditionally portrayed, conscious?  (Yes, I know I’m talking about fantasy entities here, but I’m doing so to get at intuitions.)

Let’s say first what most, if not all, of the portrayals indicate zombies are not.  They’re not the original person.  This fits with the original Haitian Vodou concept of a zombie, a reanimated but soulless corpse.  The zombies as typically portrayed appear to have no memory of their past life, have lost the ability to communicate with language, and generally seem cognitively limited in a number of ways.

In the case of Game of Thrones, the zombies are controlled by the White Walkers, but there appear to only be a limited number of those Walkers, so it doesn’t seem like they’re controlling the detailed movement of every zombie.  Broadly speaking, the GoT zombies have no long term will of their own, but a lot of their detailed movements appear to be left to their discretion.

And as in a lot of other fiction, the zombies seem able to search for and pursue victims.  This indicates that they have exteroception, awareness of their environment, enough that they can navigate around in it.  They also seem able to discriminate between other zombies and living humans.  And they seem to be able to focus attention on specific people or groups.

On the other hand, your typical zombie doesn’t appear to have much of any somatic sense, any sense of touch.  Or if they do, it doesn’t appear to affect them much.  For instance, zombies seem to only minimally notice when they lose body parts.  So their interoceptive sense is either missing or stunted.

This might tempt us to conclude that the zombies have no sense of their own body.  However, being able to navigate your environment, as the zombies can clearly do at least on some level, requires being able to understand your body’s existence and its relationship to that environment.  So the zombies appear to have only a limited sense of their own body, but a sense nonetheless.

I mentioned above that zombies don’t have memory of their past life, but they also don’t appear to have any long-term memories of their current existence.  In most depictions, they do seem to have short-term memory and imagination, not instantly forgetting prey just because it’s momentarily out of sight.  But they don’t appear to have any memory beyond the last few moments or be able to imagine anything more than a few minutes into the future.

I think it’s fair to say that zombies, while they may have some limited sense of their body, have no metacognitive self awareness, but then neither do most animals.  The zombies also have no self-concern, no survival instinct, which everything alive seems to have in some form or another.  They do have some limited affective desires, such as desiring to eat brains, kill humans, or whatever, but those affects generally aren’t oriented toward their own preservation.

I suspect it’s this last point that really nixes any intuition we might have of them being conscious.  But what do you think?  Which aspects are necessary for us to think of a system as conscious?  Which ones, if they were in a machine, might incline us to feel like that machine was conscious?


Emotions, feelings, and action programs

Sean Carroll’s latest Mindscape podcast features an interview with neuroscientist Antonio Damasio:

When we talk about the mind, we are constantly talking about consciousness and cognition. Antonio Damasio wants us to talk about our feelings. But it’s not in an effort to be more touchy-feely; Damasio, one of the world’s leading neuroscientists, believes that feelings generated by the body are a crucial part of how we achieve and maintain homeostasis, which in turn is a key driver in understanding who we are. His most recent book, The Strange Order of Things: Life, Feeling, and the Making of Cultures, is an ambitious attempt to trace the role of feelings and our biological impulses in the origin of life, the nature of consciousness, and our flourishing as social, cultural beings.

Listening to Damasio reminded me of his specific use of the word “emotion” and the definitional issues that always arise when trying to discuss emotions, feelings, and affects.  For some people, these words all mean more or less the same thing.  For others they have distinct meanings.

Damasio’s use of the word “emotion” refers not to the conscious feeling, but to the underlying automatic reaction that causes it.  Early in the evolution of central nervous systems, these automatic reactions led directly to action.  But as animals evolved distance senses such as vision, smell, and hearing, these automatic reactions became more a predisposition toward a certain action, one that could be allowed or inhibited by higher reasoning systems.

On the blog, I’ve long referred to these early automatic reactions as “reflexes” to communicate their non-conscious or pre-conscious nature, although I know use of that specific word has its issues, mostly because I’m conflating spinal cord programs with brainstem ones.  I’ve also seen the phrase “reflex arcs” used.  Damasio, in the interview, calls them “action programs”, which seems like a pretty good name.

The problem is that using the word “emotion” to refer specifically to the action program seems prone to confusion.  The word “emotion” may have originally meant externally caused motion (e-motion), but it seems like in our society it’s become hopelessly entangled with the conscious feeling, the information signals from the action program to our higher faculties.

It’s why I often avoid the word “emotion” now.  When I do use it, it’s generally to refer to the entire stack, from the triggered action program, to the habitual allowing or inhibition of the action, to the feeling that acts as an input to our reasoning faculties, the ones that decide which reflexes or habits to allow and which to inhibit.
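
As a toy sketch of that stack (the stages and labels are my own illustration; only “action program” is Damasio’s term), the flow might look something like this:

```python
# A toy sketch of the "entire stack" reading of the word "emotion", from the
# triggered action program to the feeling that feeds deliberation. Illustrative only.

def action_program(stimulus):
    # The automatic, non-conscious reaction: a predisposition toward an action.
    return {"urge": "flee", "intensity": 0.8} if stimulus == "predator" else None

def habit(program):
    # Learned habits can allow or inhibit the predisposition before deliberation gets involved.
    return program if program and program["intensity"] > 0.3 else None

def feeling(program):
    # The signal from the action program to the higher faculties: the felt fear.
    return {"feeling": "fear", "strength": program["intensity"]} if program else None

def reasoning(felt):
    # Deliberation decides whether to allow or override the urge the feeling reports.
    if felt and felt["strength"] > 0.5:
        return "allow: run away"
    return "inhibit: stay put"

print(reasoning(feeling(habit(action_program("predator")))))   # allow: run away
```

In this reading, “emotion” names the whole chain rather than any single stage in it.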

“Affect” seems fraught with the same difficulties.  In some cases it refers to the action program, other times to the feeling.  So I use it somewhat in the same manner as “emotion”, although to me the word “affect” has broader applicability.  It seems strange to call pain or hunger an emotion, but calling them an affect feels suitable.

Damasio’s view that emotions evolved to drive an organism to maintain its homeostasis has always made a lot of sense to me.  After all, what else are pain, hunger, or fear but impulses motivating a creature to maintain that homeostasis, to ensure that its energy levels and other parameters remain within the ranges that maximize its chance of survival?

The only impulses that don’t quite seem to fit are those related to reproduction.  It doesn’t seem like reproduction, in and of itself, has much to do with homeostasis.  Indeed, given that males often have to fight for the right to mate, and the burden pregnancy puts on female bodies, it can outright threaten homeostasis in many circumstances.

Here I think we have to back up further and ask why maintaining homeostasis is desirable for an organism, why survival matters.  This brings us back to the selfish gene.  (“Selfish” here being a metaphor for the naturally selected effects of genes that preserve and propagate their pattern.)  An organism is essentially a survival and gene propagation machine.  So, selfish genes lead to homeostasis, which leads to action programs, which cause feelings, so that an animal’s reasoning ability can optimize their chances for survival.

Of course, once an animal has the ability to reason, it can figure out ways to satisfy its feelings in a manner that doesn’t necessarily accomplish the agenda of its genes.  Birth control is the obvious example.

Anyway, I like the sound of “action program”.  The term “reflex arc” can work too, signalling the similarity to spinal cord reflexes as well as the added complexity, although the word “arc” might throw some people.  Of course, others will see the word “program” as a fighting word.

Ultimately definitions are what society makes them.  Any thoughts on these terms?  Or on alternatives?


Avengers: Endgame

I saw it this weekend.  I will say that it’s an enjoyable and entertaining movie.

But it’s something of a logical mess.  I’m not spoiling much by saying that time travel features in the story.  Early in the movie, there’s discussion about how lame movie treatments of time travel typically are.  (Back to the Future and Hot Tub Time Machine get mentioned.)  I think this was included as a wink to irony, because the movie proceeds to create paradoxes all over the place.  Some are cleaned up, because doing so is important to the plot, but many others are ignored, because it would be inconvenient to the plot.

Those aren’t the only logical inconsistencies.  We’re not talking about scientific implausibilities, which you have to just accept in these kinds of movies, but places where the movie makes a statement, and then later outright contradicts itself.  In the experience of the movie, it ends up working, because everything is happening fast, loud, and with feeling and panache.  It’s only afterward, as you dwell on what happened, that the inconsistencies become glaring.

I don’t doubt that hard core fans will be able to come up with explanations for all those inconsistencies.  I used to do that myself when I was a boy reading the actual comics that these movies are based on.  Still, it’d be nice if so many of them weren’t required.

That said, it’s a Marvel movie.  You have to be willing to suspend your disbelief if you’re going to enjoy it.  And I did leave the theater satisfied.  If you’ve followed the saga to this point, you’ll definitely want to watch it.  (Although this is definitely not the movie to introduce yourself to the Marvel universe.)  I thought most of the character resolutions were pretty satisfying.

Highly recommended for popcorn entertainment.


Protecting AI welfare?

John Basl and Eric Schwitzgebel have a short article at Aeon arguing that AI (artificial intelligence) should enjoy the same protection as animals do for scientific research.  They make the point that while AI is a long way off from achieving human level intelligence, it may achieve animal level intelligence, such as the intelligence of a dog or mouse, sometime in the near future.

Animal research is subject to review by IRBs (Institutional Review Boards), committees constituted to provide oversight of research into human or animal subjects, ensuring that ethical standards are followed for such research.  Basl and Schwitzgebel are arguing for similar committees to be formed for AI research.

Eric Schwitzgebel also posted the article on his blog.  What follows is the comment, slightly amended, that I left there.

I definitely think it’s right to start thinking about how AIs might compare to animals.  The usual comparisons with humans are currently far too much of a leap.  Although I’m not sure we’re anywhere near dogs and mice yet.  Do we have an AI with the spatial and navigational intelligence of a fruit fly, a bee, or a fish?  Maybe at this point mammals are still too much of a leap.

But it does seem like there is a need for a careful analysis of what a system needs in order to be a subject of moral concern.  Saying it needs to be conscious isn’t helpful, because there is currently no consensus on the definition of consciousness.  Basl and Schwitzgebel mention the capability to have joy and sorrow, which seems like a useful criterion.  Essentially, does the system have something like sentience, the ability to feel, to experience both negative and positive affects?  Suffering in particular seems extremely relevant.

But what is suffering?  The Buddhists seem to have put a lot of early thought into this, identifying desire as the main ingredient, a desire that can’t be satisfied.  My knowledge of Buddhism is limited, but my understanding is that they believe we should talk ourselves out of such desires.  But not all desires are volitional.  For instance, I don’t believe I can really stop desiring not to be injured, or stop desiring to stay alive, and it would be extremely hard to stop caring about friends and family.

For example, if I sustain an injury, the signal from the injury conflicts with the desire for my body to be whole and functional.  I will have an intense reflexive desire to do something about it.  Intellectually I might know that there’s nothing I can do but wait to heal.  But regardless, the reflex continues to fire and continuously has to be inhibited, using up energy and disrupting rest.  This is suffering.

But involuntary desires seem like something we have due to the way our minds evolved.  Would we build machines like this (aside from cases where we’re explicitly attempting to replicate animal cognition)?  It seems like machine desires could be resolved in a way that primal animal desires can’t: by the system learning that the desire can’t be satisfied and letting it go.  Once that’s known, it’s not productive for one part of the system to keep needling another part to resolve it.

So if a machine sustains damage, damage it can’t fix, it’s not particularly productive for the machine’s control center to continuously cycle through reflex and inhibition.  One signal that the situation can’t be resolved should quiet the reflex, at least for a time.  Although it could always resurface periodically to see if a resolution has become possible.
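
Here’s a toy sketch of what that might look like (the names, numbers, and structure are purely illustrative):

```python
# A toy sketch of a damage "reflex" that stops nagging the control center once the
# system learns the damage can't currently be fixed, but resurfaces periodically
# to re-check. Illustrative only.

class DamageMonitor:
    def __init__(self, recheck_interval=100):
        self.damaged = False
        self.known_unfixable = False
        self.recheck_interval = recheck_interval
        self.ticks_since_check = 0

    def report_damage(self):
        self.damaged = True
        self.known_unfixable = False   # new damage: raise the signal again

    def mark_unfixable(self):
        # One signal that the situation can't be resolved quiets the reflex.
        self.known_unfixable = True
        self.ticks_since_check = 0

    def needs_attention(self):
        if not self.damaged:
            return False
        if self.known_unfixable:
            # Stay quiet, except for a periodic re-check in case a fix is now possible.
            self.ticks_since_check += 1
            if self.ticks_since_check >= self.recheck_interval:
                self.ticks_since_check = 0
                return True
            return False
        return True   # unresolved damage keeps signalling, like a biological reflex

monitor = DamageMonitor()
monitor.report_damage()
print(monitor.needs_attention())   # True: the reflex fires
monitor.mark_unfixable()
print(monitor.needs_attention())   # False: the signal is quieted, no ongoing cycle
```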

That’s not to say that some directives might not be judged so critical that we would put them as constant desires in the system.  A caregiver’s desire to ensure the well being of their charge seems like a possible example.  But it seems like this would be something we only used judiciously.  

Another thing to consider is that these systems won’t have a survival instinct.  (Again, unless we’re explicitly attempting to replicate organic minds.)  That means the inability to fulfill an involuntary and persistent desire wouldn’t have the same implications for them that it does for a living system.  In other words, being turned off or dismantled would not be a solution the system feared.

So, I think we have to be careful with setting up a new regulatory regime.  The vast majority of AI research won’t involve anything even approaching these kinds of issues.  Making all such research subject to additional oversight would be bureaucratic and unproductive.  

But if the researchers are explicitly trying to create a system that might have sentience, then the oversight might be warranted.  In addition, having guidelines on what current research shows on how pain and suffering work, similar to the ones used for animal research, would probably be a good idea.

What do you think?  Is this getting too far ahead of ourselves?  Or is it past time something like this was implemented?


The relationship between usefulness and falsifiability

There’s an article by Matthew R. Francis in Symmetry magazine, garnering a lot of attention, that asks whether falsifiability is a useful criterion for scientific theories.

Popper wrote in his classic book The Logic of Scientific Discovery that a theory that cannot be proven false—that is, a theory flexible enough to encompass every possible experimental outcome—is scientifically useless. He wrote that a scientific idea must contain the key to its own downfall: It must make predictions that can be tested and, if those predictions are proven false, the theory must be jettisoned.

If you think about it, Popper’s criterion is simply that for a theory, a model, to be scientific, there must be something about observable reality that is different if it is true instead of false.  There are a wide variety of conditions that could satisfy this criterion.  It’s actually pretty broad.  But apparently not broad enough for some.

But where does this falsifiability requirement leave certain areas of theoretical physics? String theory, for example, involves physics on extremely small length scales unreachable by any foreseeable experiment. Cosmic inflation, a theory that explains much about the properties of the observable universe, may itself be untestable through direct observations. Some critics believe these theories are unfalsifiable and, for that reason, are of dubious scientific value.

At the same time, many physicists align with philosophers of science who identified flaws in Popper’s model, saying falsification is most useful in identifying blatant pseudoscience (the flat-Earth hypothesis, again) but relatively unimportant for judging theories growing out of established paradigms in science.

Physicist Sabine Hossenfelder has a response up on her blog that is well worth reading.  Both Francis and Hossenfelder discuss situations in which a rigid adherence to falsifiability is problematic, although Francis allows for a broader scope for it than Hossenfelder.

My own attitude, as a layperson and skeptic, is that it matters to me whether a theory has been tested, or can be tested in some plausibly foreseeable scenario.  I’ll grant that scientists need space to work on speculative ideas, but as Hossenfelder notes, if there is never any pressure to eventually find testable predictions in those ideas, then they eventually become just metaphysical philosophy.  Not that there’s anything wrong necessarily with metaphysics, but it doesn’t enjoy the credibility of science for a reason.

Anyway, there are a couple of points the Symmetry article makes that I want to comment on.

On that note, Caltech cosmologist Sean M. Carroll argues that many very useful theories have both falsifiable and unfalsifiable predictions. Some aspects may be testable in principle, but not by any experiment or observation we can perform with existing technology. Many particle physics models fall into that category, but that doesn’t stop physicists from finding them useful. SUSY as a concept may not be falsifiable, but many specific models within the broad framework certainly are. All the evidence we have for the existence of dark matter is indirect, which won’t go away even if laboratory experiments never find dark matter particles. Physicists accept the concept of dark matter because it works.

First is the observation that well established theories often make untested predictions.  Sure they do.  But those theories are well established because of the predictions that have been tested.  And it’s well worth keeping in mind which predictions haven’t been tested yet, because it’s always possible that they reflect areas where even a well established theory might eventually have to be adjusted in the future.

But the other point is the use of the word “useful” here.  In what way are the untestable theories useful?  What about them makes them useful?  How would they be different if they weren’t useful?  Do they add value to other theories, value that makes the predictions of that other theory more accurate?  If so, then congratulations, you’ve just made the useful theory falsifiable.

Or are they “useful” in some other manner involving their aesthetics or emotional appeal?  Do they give us a feeling like we’ve explained something, plugged a hole in our knowledge of how something works, but without enhancing our ability to make predictions?  If so, then this feels like what in psychology is often called a “just so” story.

Just-so stories are generally recognized as having little or no value.  They’re just bias-enforcing narratives we come up with that make us feel better about how something got to be the way it is, but that don’t give us any real insights.  The danger with such narratives is that if everyone is too satisfied with them, they might actually stifle investigation into areas that still need it.

(Of course, whether a particular theory has been or can be tested is inevitably a matter of judgment.  I’ve had numerous people over the years point to a well accepted scientific theory they disliked and insisted that it either wasn’t falsifiable or that there was no evidence for it, and then insist that the evidence accepted by the vast majority of scientists wasn’t actually evidence.  Even Popper had trouble with this, initially thinking that natural selection wasn’t falsifiable, a fact that delights the creationists who know about it.)

Falsifiability, in my understanding, is simply an insistence that a scientific theory must be epistemically useful, must enhance our ability to make more accurate predictions, directly or indirectly.  If it doesn’t do that, or at least pave the way in some foreseeable manner for other theories that might do it, then the notion might have value as philosophy, but presenting it as settled science is misleading and, I think, puts the credibility of science in jeopardy.
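
To make “enhance our ability to make more accurate predictions” a bit more concrete, here’s a toy Bayesian calculation (the numbers are invented purely for illustration): our confidence in a theory only moves if the theory makes some observation more or less likely than its rivals do.

```python
# A toy numerical illustration: Bayes' rule only moves our confidence in a theory
# if the theory makes some observation more or less likely than its rivals do.
# The numbers are made up for illustration.

def update(prior, likelihood_theory, likelihood_rival):
    # Posterior probability of the theory after seeing the observation (Bayes' rule).
    evidence = prior * likelihood_theory + (1 - prior) * likelihood_rival
    return prior * likelihood_theory / evidence

prior = 0.5

# A falsifiable theory: it says the observation is likely (0.9) while rivals say 0.3.
print(update(prior, 0.9, 0.3))   # 0.75: the observation raises our confidence

# An "unfalsifiable" theory: every possible observation is equally likely either way.
print(update(prior, 0.5, 0.5))   # 0.5: no observation can ever move the needle
```

In the second case the theory is compatible with everything, so no conceivable observation can raise or lower its probability, which is roughly Popper’s point restated in Bayesian terms.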

Unless of course I’m missing something?


Do boiling crawfish suffer?

Boiled crawfish.
Image credit: Giovanni Handal via Wikipedia

This Easter I visited one of my cousins and, as is tradition for a lot of people this time of year, we had a crawfish boil.  Eating boiled crawfish (crayfish for you non-Cajuns) is an ever-present activity in southern Louisiana, at least when they’re in season, and I’ve had my share over the years.  Although for me it’s mostly a social thing, because I can take or leave crawfish as a food.

Anyway, it had been a while since I observed the actual cooking process.  When the squirming, wriggling mass of crawfish is lowered into the boiling water, I’ve always had a moment of dread and mortification, wondering how much these creatures are suffering in their final moments, and how long they remain alive in that pot.

When I was a boy, I mentioned this once or twice, and was teased for it, both by adults and other kids, for essentially being concerned about the welfare of “mud bugs.”  At the time I accepted this as a correction for attributing too much intelligence and feelings to these creatures.  But the disquiet each time I saw it never went away, although I eventually learned to keep my mouth shut.

In retrospect, after seeing other kids get the treatment over the years, I now see the teasing as a defensive reaction.  No one wants to consider that we may be subjecting these creatures to unconscionable suffering.  Far easier to conclude that they have no real sentience, and to squash any sentiment that they might, particularly in kids who might go on to ask difficult questions.

Pain in crustaceans such as crawfish, as well as invertebrates overall, is a difficult issue.  The evolution of vertebrates and invertebrates diverged from each other long before central nervous systems came along, so many of the structures we associate with cognition and pain are either radically different or missing.

Even in vertebrates, we have to be careful.  Vertebrates have specialized nerve cells throughout their peripheral nervous system called nociceptors, which are sensitive to tissue damage.  Signals from these nociceptors are rapidly relayed to the spinal cord and brain, where they usually lead to automatic responses such as withdrawal reflexes or avoidance behavior, as well as changes in heart rate, breathing, blood pressure, and other metabolic functions.

But, as counter-intuitive as it sounds, nociception by itself is not pain.  Pain is a complex emotional mental state.  Neurological case studies show that in humans it happens in the forebrain, the thalamo-cortical system, where the right kind of lesions on pathways to the anterior cingulate cortex can knock it out.  This means that the processing happening in the brainstem is below the level of consciousness, and that the behavior associated with it, when seen in other species, is not by itself an indicator of conscious pain.

This is an important point, because a lot of the material out there confuses nociception with pain, citing things like protective motor reactions and avoidance behavior as evidence for pain.  But pain is a higher cognitive state.  To establish that it’s present requires demonstrating that the animal can engage in nonreflexive operant learning and value trade-off reasoning.
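
To make that distinction concrete, here’s a toy contrast (the scenario and numbers are invented, not drawn from any actual study) between a fixed nociceptive reflex and a value-based trade-off:

```python
# A toy contrast between a fixed reflex and a value-based trade-off, the kind of
# distinction the pain literature cares about. The scenario and numbers are
# invented for illustration only.

def reflex(shock_detected):
    # Nociception-style response: always withdraw, regardless of anything else.
    return "withdraw" if shock_detected else "stay"

def trade_off(shock_intensity, food_value):
    # Value-based response: tolerate a mild shock if the payoff is high enough,
    # i.e. the reflex can be inhibited when other motivations outweigh it.
    return "stay" if food_value > shock_intensity else "withdraw"

print(reflex(True))          # withdraw, no matter what
print(trade_off(0.2, 0.8))   # stay: mild shock, valuable food
print(trade_off(0.9, 0.3))   # withdraw: strong shock outweighs the food
```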

All vertebrates appear to display at least incipient levels of this more sophisticated behavior, indicating that all vertebrates feel pain.  In the case of fish, though, many species are missing a type of nociceptive fiber, C-fibers, which transmit the signals that lead to the long, burning type of pain associated with prolonged suffering.  These fish appear to suffer the sharp pain when an injury is incurred, but not the long burning pain that land animals experience.

However, nociceptors haven’t been found in most invertebrates, either of the fast sharp variety or the long burning kind.  This has led many to conclude that they don’t feel pain.  Yet many invertebrates do show reflexive reactions similar to the ones associated with nociception in vertebrates, which suggests they have alternative interoceptive mechanisms for accomplishing similar results.

Perhaps a more difficult issue is whether they show any signs of the cognitive abilities required for pain in vertebrates.  Todd Feinberg and Jon Mallatt, whose book, The Ancient Origins of Consciousness, is my go-to source for this sort of thing, list crayfish as demonstrating global operant learning and behavioral trade-offs.

Following the citation trail, the paper that reaches this conclusion shows that crayfish, while having a pretty limited repertoire of behaviors, can nonetheless inhibit reflexive responses and change responses depending on value-based calculations.  This is pretty much the same capability that, in vertebrates, is associated with the capacity to experience affective states, such as pain.

That would seem to indicate that the crawfish, while possibly not experiencing pain as we understand it, nonetheless are in distress.

I read somewhere that lobsters being boiled can live for up to three minutes.  (The experiments to figure that out can’t have been pretty.)  Hopefully, crawfish, being smaller, die quicker.  And hopefully they lose consciousness quickly.

Some countries ban boiling crustaceans alive, requiring that cooks kill the animal prior to boiling it.  Apparently there’s a device you can get that will shock the head of a lobster, killing it instantly, or at least rendering it unconscious.  Unfortunately, even if it’s anatomically feasible, the idea of using something like that on the hundreds of crawfish about to go into a pot isn’t very practical.  There are just too many packed too closely together.  Some people advocate freezing first, but it’s not clear that’s a humane way to go either, and doing so with a large cache of crawfish is, again, not practical.

So even if people could be convinced that there was suffering to be concerned about here, I doubt there would be much change in the technique, although it might lead to fewer people wanting to eat them in the first place.

There is also the fact that lobsters only have about 100,000 neurons, less than half of what fruit flies and ants have, and only about a tenth of what bees or cockroaches have.  I couldn’t find anywhere how many neurons crawfish have, but I suspect it’s comparable to lobsters.  In other words, the resolution and depth of their experience of the world is extremely limited, far more so than many other animals whose welfare we typically disregard.

How much of a difference should that make?  Is it right to think of them as conscious?  Does it matter that they themselves have no empathy and couldn’t return ours?  How concerned about this should we be?  Should we follow the example of the countries that outlaw boiling lobsters alive?


Neanderthals and the beginnings of us

The Smithsonian has an interesting article up on what we currently know about Neanderthals.  The article details some of the internecine battles that always seem to be a part of the paleoanthropology field, in this case focusing on the capabilities of Neanderthals: whether they had art, religion, and other qualities of modern humans.

Our view of Neanderthals has undergone a radical transformation from when they were first discovered in the 19th century.  Then they were thought of as ape-men, large lumbering brutes who probably didn’t have language, clothing, or brains to speak of.  As recently as a few decades ago, in the movie Quest for Fire (one of my favorite movies, despite its flaws), Neanderthals were portrayed as mental inferiors who often acted like monkeys.

But in science, evidence always has the final word:

A new body of research has emerged that’s transformed our image of Neanderthals. Through advances in archaeology, dating, genetics, biological anthropology and many related disciplines we now know that Neanderthals not only had bigger brains than sapiens, but also walked upright and had a greater lung capacity. These ice age Eurasians were skilled toolmakers and big-game hunters who lived in large social groups, built shelters, traded jewelry, wore clothing, ate plants and cooked them, and made sticky pitch to secure their spear points by heating birch bark. Evidence is mounting that Neanderthals had a complex language and even, given the care with which they buried their dead, some form of spirituality. And as the cave art in Spain demonstrates, these early settlers had the chutzpah to enter an unwelcoming underground environment, using fire to light the way.

It seems clear now that if we were to encounter Neanderthals today, they might look a bit strange to us, but we would quickly come to regard them as people.  Indeed, that appears to be what our ancestors did.

The real game-changer came in 2013, when, after a decades-long effort to decode ancient DNA, the Max Planck Institute published the entire Neanderthal genome. It turns out that if you’re of European or Asian descent, up to 4 percent of your DNA was inherited directly from Neanderthals.

4% may not seem like much, but my understanding is that it represents a lot of interbreeding between Homo sapiens and Homo neanderthalensis.  These weren’t one-off encounters, the results of deviants from one or both species.  It indicates pretty wide integration.

Decades ago, there were two prevailing theories about how modern humans evolved.  One held that we had gradually evolved from earlier Homo species, primarily Homo erectus, throughout the world, with ongoing genetic exchanges.  In this model, called Multiregional Evolution, Europeans evolved mostly separately from eastern Asians who evolved mostly separately from Africans, etc.

The other view, called the Replacement model, or Recent African Origin theory, held that modern humans had evolved in Africa, and then sometime in the last 50,000-100,000 years had migrated out and spread throughout the world, displacing any other Homo species they encountered.

The debate between these two views raged on for decades, with the evidence gradually growing in favor of the Replacement model, before genetic research finally weighed in on it and sealed the deal.  It turns out that modern humans evolved in Africa within the last 200,000-300,000 years.  All of us today are descended from these Africans.  A branch of humanity migrated out of Africa sometime between 60,000 and 80,000 years ago, spreading throughout the world.  All non-Africans are descended from this branch.

But while the Replacement model was mostly right, it wasn’t entirely right.  As mentioned above, further research showed that non-Africans have DNA from other branches of humanity.  European ancestors interbred with Neanderthals, and Asian ancestors probably interbred with another branch of humanity called Denisovans.

One of the theories about why these other branches of humanity died out, prevalent until just a few years ago, was that Homo sapiens probably wiped them out.  I have to admit that this dark genocidal theory seemed plausible to me at the time.  Neanderthals in particular had been around for hundreds of thousands of years, only disappearing when modern humans came around.

But it now strikes me as more plausible that Neanderthals weren’t wiped out.  They were assimilated.  This is referred to as the Assimilation Model in the article.  The population of Neanderthals was never more than a few thousand individuals, while the incoming Homo sapiens population was reportedly in the tens of thousands.  It seems likely that what happened was some degree of interbreeding, merging, and assimilation.

I’m sure that doesn’t mean it was all sweetness and light.  Homo sapiens were an invading force.  I’m sure there was conflict, and some of it was probably brutal.  There’s too much continuity in violent behavior from other primates to humans to think it wouldn’t have happened.  But we’re also a pragmatic species, one whose members will make alliances when it’s the best option.  It seems clear that happened in at least some portion of the encounters.

All of which indicates that Homo sapiens and Neanderthals had enough in common to recognize each other’s humanity.  Which also means that their common ancestor, Homo heidelbergensis, who lived from 700,000 to 300,000 years ago, likely had many of the qualities we’d recognize in people.  There’s no evidence they had what’s now called behavioral modernity, including symbolic thought, but they must have had a lot of what makes us…us, including perhaps an early form of language, or proto-language.

But this is a field where new evidence is constantly being uncovered and paradigms shifted, so we should probably expect more surprises in the years to come.


Frans de Waal on animal consciousness

Frans de Waal is a well known proponent of animals being much more like us than many people are comfortable admitting.  In this short two minute video, he gives his reason for concluding that at least some non-human animals are conscious.  (Note: there’s also a transcript.)

de Waal is largely equating imagination and planning with consciousness, which I’ve done myself on numerous occasions.  It’s a valid viewpoint, although some people will quibble with it since it doesn’t necessarily include metacognitive self awareness.  In other words, it doesn’t have the full human package.  Still, the insight that many non-humans have imagination, whether we want to include it in consciousness or not, is an important point.

As I’ve noted many times before, I think the right way to look at this is as a hierarchy or progression of capabilities.  In my mind, this usually has five layers:

  1. Survival circuit reflexes
  2. Perception: predictive sensory models of the environment, expanding the scope of what the reflexes can react to
  3. Attention: prioritizing what the reflexes react to
  4. Imagination / sentience: action scenario simulations to decide which reflexes to allow or inhibit, decoupling the reflexes into feelings, expanding the scope of what the reflexes can react to in time as well as space
  5. Metacognition: theory-of-mind self awareness, symbolic thought

There’s nothing crucial about this exact grouping.  Imagination in particular could probably be split into numerous capabilities.  And I’m generally ignoring habitual decisions in this sketch.  The main point is that our feelings of consciousness come from layered capabilities, and sharp distinctions between what is or isn’t conscious probably aren’t meaningful.
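
For what it’s worth, here’s a very rough sketch of how layers 1-4 might stack in an artificial agent (purely illustrative and with layer 5 omitted; nothing here is meant as a model of a real brain):

```python
# A very rough sketch of the layering idea: reflexes, perception, attention, and
# imagination stacked in one toy agent. Purely illustrative; layer 5 (metacognition)
# is omitted, and nothing here models a real brain.

class LayeredAgent:
    def __init__(self):
        self.reflexes = {"threat": "flee", "food": "approach"}   # layer 1

    def perceive(self, raw_input):
        # Layer 2: build a simple model of what's out there.
        return {"objects": raw_input}

    def attend(self, percepts):
        # Layer 3: prioritize what the reflexes get to react to.
        priorities = sorted(percepts["objects"], key=lambda o: o["salience"], reverse=True)
        return priorities[0] if priorities else None

    def imagine(self, focus):
        # Layer 4: simulate allowing vs. inhibiting the triggered reflex and pick
        # the option with the better imagined outcome.
        action = self.reflexes.get(focus["kind"], "ignore")
        allow_value = focus.get("value_if_acted", 0)
        inhibit_value = focus.get("value_if_ignored", 0)
        return action if allow_value >= inhibit_value else "inhibit"

    def step(self, raw_input):
        focus = self.attend(self.perceive(raw_input))
        return self.imagine(focus) if focus else "idle"

agent = LayeredAgent()
print(agent.step([
    {"kind": "food", "salience": 0.9, "value_if_acted": 1, "value_if_ignored": 0},
    {"kind": "threat", "salience": 0.4, "value_if_acted": 1, "value_if_ignored": 0},
]))
# -> "approach": attention selected the salient food, imagination allowed the reflex
```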

It’s also worth noting that there are many layers to self awareness in particular.  A creature with only layers 1-3 will still have some form of body self awareness.  One with layer 4 may also have attention and affect awareness, each arguably another layer of self awareness.  Only with layer 5 do we get the full-bore mental self awareness.

It seems like de Waal’s point about observing the capabilities in animal behavior to determine if they’re conscious will also eventually apply to machines.  Although while machines will have their own reflexes (programming), those reflexes won’t necessarily be oriented toward their survival, which may prevent us from intuitively seeing them as conscious.  Lately I’ve been wondering if “agency” might be a better word for these types of systems, ones that might have models of themselves and their environment, but don’t have animal sentience.

Of course, the notion that comes up in opposition to this type of assessment is the philosophical zombie, specifically the behavioral variety, a system that can mimic consciousness but has no inner experience.  But if consciousness evolved, for it to have been naturally selected, it would have had to produce beneficial effects, to be part of the causal structure that produces behavior.  The idea that we can have its outputs without some version of it strikes me as very unlikely.

So in general, I think de Waal is right.  Our best evidence for animal consciousness lies in the capabilities they display.  This views consciousness as a type of intelligence, which I personally think is accurate, although I know that’s far from a universal sentiment.

But is this view accurate?  Is consciousness something above and beyond the functionality of the system?  If it is, then what is its role?  And how widespread would it be in the animal kingdom?  And could a machine ever have it?
