Manufacturing liquid fuel

One of the things I think is often not appreciated about petroleum is that it’s essentially stored solar energy: energy originally captured by life forms that died, sank to the bottom of oceans or lakes where, under stagnant water, they couldn’t decompose, and were eventually buried and put under pressure.

This energy capture happened over hundreds of millions of years.  We’ve been burning through it in the last century or so, and while current projections are that we might have reserves for decades or possibly even centuries to come, eventually it will be used up.  It’s a non-renewable resource.

One of the challenges with renewable resources such as solar energy has always been capturing and storing the energy.  The solutions up to now have been expensive and problematic.  However, that appears to be changing.  Scientists have created a liquid fuel that can store solar energy for up to 18 years:

Scientists in Sweden have developed a specialised fluid, called a solar thermal fuel, that can store energy from the sun for well over a decade.

“A solar thermal fuel is like a rechargeable battery, but instead of electricity, you put sunlight in and get heat out, triggered on demand,” Jeffrey Grossman, an engineer who works with these materials at MIT, explained to NBC News.

The fluid is actually a molecule in liquid form that scientists from Chalmers University of Technology, Sweden, have been working on improving for over a year.

This molecule is composed of carbon, hydrogen and nitrogen, and when it is hit by sunlight, it does something unusual: the bonds between its atoms are rearranged and it turns into an energised new version of itself, called an isomer.

Like prey caught in a trap, energy from the sun is thus captured between the isomer’s strong chemical bonds, and it stays there even when the molecule cools down to room temperature.

When the energy is needed – say at nighttime, or during winter – the fluid is simply drawn through a catalyst that returns the molecule to its original form, releasing energy in the form of heat.

The article goes on to describe in detail how it works, that research in this area is ongoing, and that we can expect the fuels to become denser and more efficient.

I’m not an energy expert, but this seems like a significant development.  We may be essentially inventing the ability to produce our own gasoline from solar energy.  Granted, it may not have the energy density of refined petroleum, but it sounds like that’s a gap that may close in time.
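For a rough sense of that gap, here’s a back-of-envelope comparison.  The ~46 MJ/kg figure for gasoline is a standard value; the ~250 Wh/kg figure for the solar thermal fuel is from general press coverage rather than the article quoted above, so treat it as my assumption:

```python
# Back-of-envelope energy density comparison (rough, reported figures;
# the solar fuel number is an assumption, not from the quoted article).
GASOLINE_MJ_PER_KG = 46.0        # typical specific energy of gasoline
SOLAR_FUEL_WH_PER_KG = 250.0     # reported ballpark for the solar thermal fuel

solar_fuel_mj_per_kg = SOLAR_FUEL_WH_PER_KG * 3600 / 1e6  # Wh/kg -> MJ/kg

print(f"Gasoline:           {GASOLINE_MJ_PER_KG:.1f} MJ/kg")
print(f"Solar thermal fuel: {solar_fuel_mj_per_kg:.2f} MJ/kg")  # ~0.90 MJ/kg
print(f"Gap:                {GASOLINE_MJ_PER_KG / solar_fuel_mj_per_kg:.0f}x")  # ~51x
```

So the gap is currently around a factor of fifty, which is why the researchers’ expectation that the fuels will become denser matters so much.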

And as the technology matures, it may hasten the transition away from fossil fuels, which would be good news for the environment.  Some mornings it’s easy to feel optimistic about the future.


Is it time to retire the term “artificial intelligence”?

Eric Siegel at Big Think, in a new “Dr. Data Show” on the site, explains Why A.I. is a big fat lie:

1) Unlike AI, machine learning’s totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, “Hooha!”. However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do – plus also a limited range of things humans can’t do.

2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level “Artificial Intelligence” course, as well as other related courses there.

AI is nothing but a brand. A powerful brand, but an empty promise. The concept of “intelligence” is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.

The term artificial intelligence has no place in science or engineering. “AI” is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.

3) AI isn’t gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will uprise on their own volition and eradicate humanity holds no merit.

He goes into detail in a long post at Big Think, or you can watch him discuss it in the video.
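His first point, that supervised machine learning only works where many labeled examples already exist, is worth making concrete.  Here’s a minimal sketch, with toy data invented for illustration and scikit-learn standing in as a representative library:

```python
# Supervised learning in miniature: the model learns only from labeled examples.
from sklearn.linear_model import LogisticRegression

# Toy labeled history: (hours studied, hours slept) -> passed exam (1) or not (0).
X_train = [[1, 4], [2, 8], [6, 7], [8, 5], [3, 3], [9, 8]]
y_train = [0, 0, 1, 1, 0, 1]   # the labels are the whole ballgame

model = LogisticRegression().fit(X_train, y_train)

# It can generalize to new points within this narrow, well-labeled task...
print(model.predict([[7, 6]]))  # most likely [1]

# ...but it has no capability outside the mapping the labels define,
# which is Siegel's point about the limits of the technology.
```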

His presentation is over the top, but I have to agree with much of what he says.  AI is hopelessly overhyped.  From singularities to Frankensteinian worries that it will turn on us, it’s become a new mythology, supplying new versions of the old deities and demons: superhuman powers that rule the world, promising to solve our problems or threatening our destruction, but now with a thin veneer of technological sophistication.  (Not that I don’t enjoy science fiction with these elements too.)

That’s not to say that there isn’t some danger from these technologies, but it involves how humans might use them more than the technologies in and of themselves.  For instance, it’s not hard to imagine systems that closely and tirelessly monitor our activity, using machine learning to figure out when we’re doing things a government or employer might not like.  Or an intelligent bomb smart enough to wait until it recognizes an enemy nearby before exploding.

And I think he has a point with the overall term “artificial intelligence” or “AI”.  It’s amorphous, meaning essentially computer systems smarter than normal, lumping everything from heuristic systems to Skynet under one label.  We sometimes talk about the “AI winter”, the period when AI research fell on hard times, one the field eventually pulled out of.  It could be argued that the endeavor to build a mind never really escaped that winter; we just lumped newer, more focused efforts under the same name.  (Not that I expect the term to die anytime soon.)

To be clear, I do think it will eventually be possible to build an engineered mind.  (I wouldn’t use the adjective “artificial” because if we succeed, it will be a mind, not an artificial one.)  They exist in nature with modest energy requirements, so saying it’s impossible to do technologically is essentially asserting substance dualism, that there is something about the mind above and beyond physics.

But we’re unlikely to succeed with it until we understand animal and human minds much better than we currently do.  We’re about as likely to create one accidentally as a web developer is to accidentally create an aerospace navigation system.


Blindsight explained and conscious perception

Warning: neuroscience weeds.

Every so often we get into discussions about where in the brain consciousness lies.  Sometimes it’s asserted to be in the brainstem, other times in the thalamus, sometimes in the parietal lobe, and yet other times in the prefrontal cortex.  Myself, I’ve concluded that conscious perception requires activation of a network including sensory cortical areas, the posterior association cortex, and regions in the frontal lobes.  In other words, a number of areas in the thalamo-cortical system are necessary.

Lobes of the brain
Image credit: BruceBlaus via Wikipedia

One interesting data point for this discussion has been the phenomenon of blindsight.  Sometimes people sustain brain injuries to the visual cortex in the occipital lobe.  Their eyes are physically fine, but the cortical region that processes vision is damaged, so their ability to consciously perceive what the eyes are seeing is reduced or eliminated.

However, people with this condition, if forced to make a decision about whether something is in front of them, are often able to do so with a success rate significantly higher than chance.  Such people are also often able to perceive emotions on people’s faces, even if they have no conscious perception of what they’re seeing.  In essence, they seem to have only a vague feeling about what their eyes are seeing.


There have been a number of theories about how this happens, but the one most cited proposes that there are alternate pathways for visual information to reach the patient’s executive centers in the frontal lobes.  The majority of the axons in the optic nerve go to the thalamus and then to the visual cortex, but about 10% go to a structure in the upper brainstem, the midbrain, called the superior colliculus.


The visual processing that happens in this region is interesting because we have no conscious access to it.  It’s like a subterranean perception area.  None of the axons from the color-sensitive cone cells project there, so the sensory images that form there are low resolution (blurry) and colorless.  These images typically drive reflexive eye movements such as saccades, low-level attention impulses, and other functions.

The alternate pathway theory assumes that signals from this region somehow reach the executive centers of the brain, allowing the accurate but “blind” guesses.

A new study by Australian researchers using fMRI has confirmed the existence of this alternate pathway.  It appears to go from the superior colliculus to the pulvinar region in the thalamus, and from there to the amygdala, which communicates emotions to the executive centers.  This appears to be how the feeling that something is in front of the person reaches conscious perception without the visual imagery itself.
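To make the anatomy concrete, here’s the pathway structure as a toy directed graph; the node names are my own informal shorthand, and the ~90/10 split comes from the discussion above:

```python
# Toy model of the two visual routes: the main path through the thalamus to
# the visual cortex, and the alternate path confirmed by the study.
pathways = {
    "retina": ["thalamus_lgn", "superior_colliculus"],  # ~90% / ~10% of optic nerve axons
    "thalamus_lgn": ["visual_cortex"],                  # main route: conscious imagery
    "visual_cortex": [],
    "superior_colliculus": ["pulvinar"],                # alternate route
    "pulvinar": ["amygdala"],
    "amygdala": [],                                     # passes affect to executive centers
}

def reachable(graph, start, goal, lesioned=frozenset()):
    """Depth-first search: can a signal get from start to goal, avoiding lesioned nodes?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node in lesioned or node in seen:
            continue
        if node == goal:
            return True
        seen.add(node)
        stack.extend(graph[node])
    return False

# Simulate the blindsight lesion: visual cortex destroyed.
lesion = {"visual_cortex"}
print(reachable(pathways, "retina", "visual_cortex", lesioned=lesion))  # False: no imagery
print(reachable(pathways, "retina", "amygdala", lesioned=lesion))       # True: the feeling arrives
```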

This seems like additional evidence that conscious perception is a cortical phenomenon.  If it were in the upper brainstem, we would likely have at least some conscious access to the visual images there.  And since the alternate pathway goes through the thalamus, that also seems to rule out that region as a seat for consciousness.  (At least by itself, since the thalamus is a signaling hub that the cortical regions use to communicate with each other.)

Of course, alternate explanations are always possible, but we have to keep Occam’s razor in mind.

Unless of course I’m missing something?


Recommendation: The Soldier

I’ve recommended Neal Asher’s books before.  He writes epic space opera where the stories take place over a vast scale, most of the characters are superhuman or alien entities, and the forces involved are titanic.  That was the motif of his Transformation series, which I recommended back in 2017.

Like that series, The Soldier, which begins a new trilogy titled Rise of the Jain, takes place in the future interstellar setting that most of Asher’s stories share.  Earth lies at the center of an interstellar society called the Polity, which is ruled by AIs that are (generally) benevolent toward the humans.

The humans are generally enhanced to varying degrees, such that baseline natural humans don’t really exist anymore.  Enhancements come in many forms, including changes that simply make a person more physically robust, cyborg enhancements, human/AI hybrids, or outright uploading into an AI crystal (the advanced quantum computing substrate in these stories).  Some humans are also enhanced after contracting an alien virus that gives them superhuman strength, resilience, and longevity (albeit at the potential cost of turning into something monstrous if they are injured too severely).

The Polity has an old enemy, the Prador, an interstellar kingdom of vicious crab-like creatures with which it fought an epic war centuries ago, but with which it now maintains a sort of detente.  The detente includes a neutral-zone type region called The Graveyard.  The Prador seem to be an ever-evolving concept in these stories, and the ones in this book continue that development.

The main focus of this book is an ancient technology that is occasionally found throughout the galaxy, called Jain technology, named after an ancient but extinct species, the Jain.  Jain tech always appears to provide powerful benefits, but it is always a trap, a trap calculated to destroy civilizations.  So it is heavily proscribed and controlled.  Of course, that doesn’t stop some people from attempting to play with it, often with catastrophic consequences.

It turns out there is an accretion disk, a developing solar system, that is heavily infested with Jain technology.  The beginning of the book introduces a team of AIs, led by a human-AI hybrid named Orlandine, who guard the accretion disk to prevent the technology from escaping.  Orlandine herself is infested with Jain technology, but is apparently able to control it.  She and the AIs are assisted in guarding the accretion disk by an ancient alien AI named Dragon, although Dragon’s motives are often obscure and it’s not always clear just how much of an ally it actually is.

Orlandine has a plan to conclusively eliminate the threat of the Jain tech in the accretion disk.  But Dragon fears a trap, and sets in motion a plan to delay her.  This in turn sets off a cascade of events leading to, among other things, the resurrection of an alien AI hostile to both the Polity and the Prador, which ends up going on a quest of self-discovery; the incubation of an ancient Jain artifact that turns out to be a Jain soldier (giving the book its title); and destruction and battles galore.

Similar to the Transformation series, the story is told from the perspective of several characters.  Asher is fearless in taking the viewpoint of utterly alien characters, so we get the perspective of human-AI hybrids, enhanced humans, straight-out AIs, alien AIs, and even (briefly) a Jain AI.  This does an excellent job of adding dramatic tension to the story.  It also makes many characters, even one hostile to humans, far more sympathetic than they would otherwise be.

My only beef with this move is that, when describing the internal motivations of utterly alien characters, Asher ends up working in a lot of phrases like, “as the humans would say,” or, “like an Earth centipede,” from characters that seem unlikely to have their thoughts dominated by how humans think or how Earth creatures look.  But this may be an inevitable awkwardness of portraying utterly alien perspectives, and I give Asher a lot of credit for tackling those perspectives.

As in all his books, Asher delights in exploring how technologies work using exotic physics and materials, and how characters solve problems with those technologies.  Although many of the technologies, such as faster than light travel or runcibles, amount to magic, he puts a veneer of science on all of it.

Sometimes the book focuses so much on these technologies and their use that it starts to feel like a description of players in a game.  And as I noted in my review of Dark Intelligence, I think Asher sometimes gets a little too carried away with the detail, making some of his writing tedious to get through.  But when the descriptions work for you, they are mind candy.

This is space opera in its grandest tradition, involving gargantuan technologies, aliens, heady concepts, the return of an ancient threat, and action galore.  It is the first book in a series, so it does end with cliffhangers.  The technologies invoked often border more on fantasy than science fiction, but if it’s your cup of tea, I highly recommend it.


The dangers of artificial companionship

Lux Alptraum at Undark argues against “Our Irrational Fear of Sexbots”:

When most people envision a world where human partners are abandoned in favor of robots, the robots they picture tend to be reasonably good approximations of flesh-and-blood humans. The sexbots of “Westworld” are effectively just humans who can be programmed and controlled by the park’s operators.

…What most of us want isn’t an intimate relationship with a sentient Roomba, but a relationship with a being who closely approximates all the good parts of sex and love with a human — minus the messiness that comes with, well, sex and love with a human. Yet a robot capable not just of passing a Turing test but of feeling like a human partner in the most intimate of settings isn’t likely to be built any time soon. True AI is still a long ways off. Even if we assume that sexbot lovers will feel content with Alexa-level conversations, robots that not only look and feel real but also autonomously move with the grace and dexterity of a human aren’t within the realm of current, or near future, tech.

I have to admit I didn’t know that angst about sexbots was a thing, but given the success and acclaim of Westworld, not to mention other AI movies like Ex Machina and Her, it seems kind of inevitable.  I do think Alptraum is right that realistic sex robots are not anything we’re going to have to worry about in the next few years.  Anything feasible in the short term, as Alptraum mentions, remains firmly in the uncanny valley, the space where something that resembles humanity is just close enough to be creepy but not convincing.

That said, I do think the long term concern about sexbots is valid.  They do have the potential to disrupt normal human relationships.  But I’m going to broaden it to a long term concern about artificial companionship overall, not just involving sex, but friendship and social interactions of any kind.  It is worth noting the positive aspects of this for people needing caretakers such as the elderly or infirm, or for those who are just lonely.  But there is a danger.

Imagine a world where you are surrounded by entities that take care of you, do tasks for you, keep you company, laugh at all your jokes, pay attention to you whenever you want attention and go away when you don’t want it, and just all around make you the center of their world.  It seems like it would be extremely easy to fall into a routine where these entities, these artificial humans, become your entire sphere of interaction.

Now imagine how jarring it might be when you encounter an actual other human being, one with their own point of view, their own unfunny jokes, their own ego, their own selfish desires, and basically their own social agenda.  Is it that hard to imagine that many humans might prefer being with the first group?

Science fiction has looked at this many times.  An early example is Isaac Asimov’s The Naked Sun, about a planet where humans are outnumbered 10,000 to one by their servant robots, where people live alone on vast estates with those robots, and where actual face-to-face interaction between humans is so rare that it has become dirty and taboo.  Another is Charles Stross’ Saturn’s Children, where humanity’s social and reproductive urges are so catered to by robots that the humans end up going extinct, leaving behind a robot civilization that worships the memory of “the makers”.

Now, I doubt that humanity would ever go completely extinct because of this.  For one thing, we’re talking about it, as the Undark article demonstrates, which means that it’s entering our public consciousness as a concern, increasing the chances that we will eventually take steps to avoid that scenario.  And I suspect there would always be a portion of humanity that values the old ways enough to reject sexbots and other forms of artificial companionship.

But it’s still easy to see it leading to the overall human population crashing to some small portion of what it is today.  A civilization where real humans are vastly outnumbered by artificial engineered entities seems like a plausible scenario.  And that’s before considering that the line between evolved humans and engineered ones will likely be blurred as genetic manipulation and other forms of biological engineering eventually merge with machine engineering, leading to humans first being enhanced, then later copied and perpetuated.

So, there is a danger.  I don’t think the solution is to react as conservatives currently are, with talk of prohibitions.  A world with a much smaller human population isn’t necessarily a bad thing.  (Although it’s interesting to think about how this could lead to artificial intelligence being taboo as imagined by Frank Herbert in his Dune universe.)  But we should be aware of how artificial humans, when we get to the point that we can create them, might change us.


What positions do you hold that are not popular?

Rebecca Brown has an article at Aeon on how philosophy can make the previously unthinkable thinkable.  She starts with a discussion of the Overton window:

In the mid-1990s, Joseph Overton, a researcher at the US think tank the Mackinac Center for Public Policy, proposed the idea of a ‘window’ of socially acceptable policies within any given domain. This came to be known as the Overton window of political possibilities. The job of think tanks, Overton proposed, was not directly to advocate particular policies, but to shift the window of possibilities so that previously unthinkable policy ideas – those shocking to the sensibilities of the time – become mainstream and part of the debate.

Overton’s insight was that there is little point advocating policies that are publicly unacceptable, since (almost) no politician will support them. Efforts are better spent, he argued, in shifting the debate so that such policies seem less radical and become more likely to receive support from sympathetic politicians. For instance, working to increase awareness of climate change might make future proposals to restrict the use of diesel cars more palatable, and ultimately more effective, than directly lobbying for a ban on such vehicles.

This reminds me of someone on Twitter recently asking what positions people held that were unpopular.  Here are mine (slightly expanded from my response tweet):

  1. The universe is ultimately meaningless.  Whatever meaning we find in this life, we have to provide, both to ourselves and to each other.
  2. There is no objective morality.  Ultimately what a society calls “moral” amounts to what the majority of a given population decides is allowable and what is not.  Innate instincts do provide some constraints on this, but the variances they allow are wider than just about anyone is comfortable with.
  3. Whether a given system is conscious is not a fact, but an interpretation, depending on what definition of “consciousness” we’re currently using.  Consciousness exists only relative to other conscious entities.
  4. We don’t have contra-causal free will, but social responsibility remains a coherent and useful concept.
  5. The mind is a physical process and system that can be understood, and someday enhanced and copied.
  6. Enhancement of ourselves, either with technological add-ons or genetic therapy, should be allowed, particularly when it will alleviate suffering.
  7. Politics is about inclusive self interest.  The political philosophies people choose are generally stances that benefit them, their family and friends, or people like them.  If we could admit this, compromising to get things done would be easier.

Those are mine.  What about you?  Do you have positions that are not currently popular, that may lie outside of the current Overton window?


The implications of embodied cognition

Sean Carroll on his podcast interviewed Lisa Aziz-Zadeh on embodied cognition:

Brains are important things; they’re where thinking happens. Or are they? The theory of “embodied cognition” posits that it’s better to think of thinking as something that takes place in the body as a whole, not just in the cells of the brain. In some sense this is trivially true; our brains interact with the rest of our bodies, taking in signals and giving back instructions. But it seems bold to situate important elements of cognition itself in the actual non-brain parts of the body. Lisa Aziz-Zadeh is a psychologist and neuroscientist who uses imaging technologies to study how different parts of the brain and body are involved in different cognitive tasks.

As Carroll notes in his description, the idea of embodied cognition could almost be considered trivially true.  The body is the brain’s chief object of interest.  It is hardwired to monitor and control it.  Cognition in a brain is relentlessly oriented toward this relationship, to the extent that when we think about abstract things, we typically do so in metaphors using sensory or action experience, experiences of a primate body.

A recent study showed that our memories and imagination are actually organized by the brain’s internal location maps, which primordially evolved for tracking physical locations.  In light of the brain’s body focus and orientation, this makes complete sense.  (I often think of the locations of various web sites, including this one, as existing in an overall physical space, which completely fits with these findings.)

It’s fair to say that the body is what gives the information processing that happens in a brain its meaning.  That said, I do think some of the embodied cognition advocates get a little carried away, asserting that thinking is impossible without a body.

It may be that a human consciousness can’t develop without a body.  If we could somehow grow a human brain without a body, it’s hard to imagine what kind of consciousness might be able to form.  It seems like it would be an utterly desolate one by our standards.  But once it has developed with a body, I think we have plenty of evidence that the human mind is far more resilient than many people assume.

Patients with their spinal cord severed at the neck are cut off from most of their body.  Without the interoceptive feedback, their emotions are reportedly less intense than healthy people’s, but they retain their mind and consciousness.  Likewise, someone can be blind, deaf, lose their sense of smell, or apparently even have their vagus nerve cut, and still be conscious (albeit perhaps on life support).

It seems like the only essential component that must be present for a mind is a working brain, and not even the entire brain.  Someone can have their cerebellum destroyed and remain mentally complete.  (They’ll be clumsy, but their mind will be intact.)  The necessary and sufficient components appear to be the brainstem and overall cerebrum.  (We can lose small parts of the cortex and still retain most of our awareness, although each loss in these regions comes with a cost to our mental abilities.)

Embodied cognition is also sometimes invoked to make the case that mind uploading is impossible, even in principle.  I think it does make the case that a copied human mind would need a body, even if a virtual one.  And it definitely further illuminates just how difficult such an endeavor would be.  But “impossible” is a very strong word, and I don’t think this line of reasoning really establishes it.

Unless of course I’m missing something?


Dark energy and repulsive gravity

Over the weekend, Sean Carroll put up a blog post to address common misconceptions about cosmology.  I understood most of his points, but was confused when I saw this one:

Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.

Carroll was referring to the accelerating expansion of the universe.  But gravity causing the acceleration, instead of dark energy?  I asked in a comment, along with at least one other commenter, how this could be so.  Carroll was kind enough to respond to us:

Gravity causes the universe to accelerate because gravity is not always attractive. Roughly speaking, the “source of gravity” is the energy density of a fluid plus three times the pressure of that fluid. Ordinary substances have positive energy and pressure, so gravity attracts. But vacuum energy has negative pressure, equal in size but opposite in sign to its energy. So the net effect is to push things apart.

I had always been under the impression that dark energy was simply the unknown force behind the accelerating expansion, a force I understood to be in opposition to gravity.  However, it appears that dark energy actually works through gravity, causing it, on cosmological scales, to become repulsive and push distant parts of the universe apart.

The force behind this appears to be negative pressure.  Pressure, it turns out, is a source of gravity.  Brian Greene, in his book The Fabric of the Cosmos, explains that pressure in the sense of outward pushing, like what you might find in a coiled spring, is a form of energy, and energy generates gravity.  Negative pressure, such as the tension in a rubber band that wants to contract when stretched, enters the equations with the opposite sign.  With enough of it, the net effect of gravity flips from attraction to repulsion.
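Putting Carroll’s “energy density plus three times the pressure” into symbols: in standard cosmology this is the source term of the second Friedmann equation (here in units where c = 1, with a the scale factor of the universe, ρ the energy density, and p the pressure):

\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)
\]

For ordinary matter and radiation, ρ and p are both non-negative, so the right side is negative and the expansion decelerates.  For vacuum energy, p = −ρ, so the parenthesis becomes ρ − 3ρ = −2ρ, the right side turns positive, and the expansion accelerates.  Nothing but gravity is involved; the “repulsion” is just gravity sourced by a fluid with sufficiently negative pressure.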

Albert Einstein understood this when he first formulated his Cosmological Constant to explain why gravity didn’t cause the universe to collapse.  After Edwin Hubble’s discovery that the universe was in fact expanding, Einstein would regard the Cosmological Constant as his greatest blunder.  It’s therefore ironic that several decades later it became useful again with the discovery that the expansion of the universe was actually accelerating.

So gravity can be repulsive, and dark energy, an energy apparently permeating all of space, brings out this repulsive nature through its negative pressure.  Many of you who are more knowledgeable about physics no doubt already understood this, but it was a major revelation to me.

I briefly wondered if this might be a way to achieve the anti-gravity capabilities that often show up in science fiction.  But after giving it some thought, no, it wouldn’t.

The problem is that most of what generates the gravity attracting, say, a flying car to the Earth is Earth’s overall mass.  To overcome this with repulsive gravity, the car would have to produce so much negative pressure that its repulsive gravity exceeded the Earth’s attractive gravity.  Such a force would violently repel everything around it, push the Earth out of its orbit, and probably cause a host of other catastrophes.  Not exactly a practical solution.

Still, this is a fascinating effect and I learned something new!


China will have the world’s largest economy in 2020

At least, according to a report by Standard Chartered Bank, as reported by Big Think:

  • The Standard Chartered Bank, a British multinational banking and financial services company, recently issued a report to clients outlining projections about the world economy up until 2030.

  • The report predicts Asian economies will grow significantly in the next decade, taking seven of the top 10 spots on the list of the world’s biggest economies by 2030.

  • However, the researchers formed their predictions by measuring GDP at purchasing power parity, an approach that not all economists would use in these kinds of projections.

The Big Think article discusses the last point: according to exchange rates rather than purchasing power parity, the US will remain the largest economy for a few more years.  It also makes the point that the total size of the economy is different from GDP per capita, the income of the average person in the economy, with China at $18,000 and the US at $63,000.
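The arithmetic behind that distinction is worth spelling out.  The per-capita figures are from the article; the population numbers are my own rough 2019 approximations:

```python
# How a lower-income country can have the larger total economy (in PPP terms).
china_gdp_per_capita_ppp = 18_000    # USD, PPP (from the article)
us_gdp_per_capita = 63_000           # USD (from the article)

china_population = 1_390_000_000     # rough 2019 approximation
us_population = 328_000_000          # rough 2019 approximation

china_total = china_gdp_per_capita_ppp * china_population
us_total = us_gdp_per_capita * us_population

print(f"China total (PPP): ${china_total / 1e12:.1f} trillion")  # ~$25.0 trillion
print(f"US total:          ${us_total / 1e12:.1f} trillion")     # ~$20.7 trillion
```

A country with less than a third of the per-capita income can still field the bigger total economy when it has more than four times the population.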

However, I think the per-capita caveat misses the point.  Historically, total economic size equaled economic power, and economic power equaled political and military power.  The ascent of China, and Asia overall, will eventually change the political and cultural orientation of the world.  Deft maneuvering by the US on the international scene might delay this for a while (although that’s decidedly not what we’re getting with our current dumpster fire of an administration) but the long term writing appears to be on the wall.

The world seems primed to be a very different place in coming decades.


A qualified recommendation: Consciousness Demystified

A couple of years ago I did a series of posts inspired by Todd Feinberg and Jon Mallatt’s excellent The Ancient Origins of Consciousness, a book on the evolution of animal consciousness.  Somewhat building on what I had read in Antonio Damasio’s Self Comes to Mind, it was a pivotal point in my exploration of consciousness science.  Feinberg and Mallatt shook me out of my human-centered understanding of consciousness, one that was largely focused on various forms of metacognition.

They’ve written a new book, Consciousness Demystified.  Unlike their first book, this one is much more approachable for general readers.  It covers the same basic topics, updated with some new concepts that have come along since the first book.

One of the things Feinberg and Mallatt did that I thought was useful was breaking up the overall concept of consciousness into various types: exteroceptive consciousness, interoceptive consciousness, and affect consciousness.

Exteroceptive consciousness is awareness of the outside world, image maps, models built on information from distance senses such as sight, hearing, and smell.  Interoceptive consciousness is the internal awareness of a body, how the stomach feels, the lungs, or muscles.  Touch and proprioception often sit on the boundary between these categories.  In this book, Feinberg and Mallatt group these perceptions under the phrase “image based consciousness”.

Image based consciousness is interesting because the image maps, the neural firing patterns in the early sensory regions of the brain, are topographically or isomorphically mapped to the surface of the sense organ.  So the pattern in which the photoreceptors on the retina are activated is preserved in the bundle of axons that projects up the optic nerve to the thalamus and then to the visual cortex.  A similar relationship exists for touch, where each body part is mapped to a particular region in the somatosensory cortex.

But image based consciousness, perception, is more than just these initial firing patterns.  It includes the patterns of neurons activated in later neural layers, layers that map associations, where a particular pattern gets mapped to a concept.  Eventually these layers become integrated across the senses into multi-modal perceptions, such as a piece of food, or a predator.
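As a loose software analogy, entirely my own toy rather than anything from the book: the early stage preserves the stimulus’s spatial layout, while the later associative stage collapses it into a concept.

```python
import numpy as np

# Toy "retina": a 5x5 binary image containing a vertical bar.
stimulus = np.zeros((5, 5), dtype=int)
stimulus[:, 2] = 1

# Stage 1: topographic mapping. The spatial pattern is relayed point-for-point,
# like retina -> optic nerve -> early visual cortex. Layout survives intact.
image_map = stimulus.copy()
assert np.array_equal(image_map, stimulus)

# Stage 2: associative mapping. The pattern is matched against stored
# prototypes, and only the best-matching concept survives; layout is discarded.
concepts = {
    "vertical bar": np.zeros((5, 5), dtype=int),
    "horizontal bar": np.zeros((5, 5), dtype=int),
}
concepts["vertical bar"][:, 2] = 1
concepts["horizontal bar"][2, :] = 1

label = max(concepts, key=lambda name: int(np.sum(concepts[name] * image_map)))
print(label)  # "vertical bar" -- now a concept, no longer an image
```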

The third category is affective consciousness, essentially emotional and other valence based feelings.  Unlike image based consciousness, affective consciousness is not mapped to any sense organs.  Affects tend to be global states.  For example, you don’t feel sad in your foot, you just feel sad.  Another name for affective consciousness is sentience.

Many consider affective consciousness, sentience, the ability to feel, to be consciousness.  But in principle there’s no reason that an organism can’t have image based consciousness with only reflexive reactions to the contents of that consciousness, to essentially only have perception paired with unthinking action.

The authors talk about criteria that can be used to determine whether a particular animal has affective consciousness:

Behavioral criteria showing an animal has affective consciousness (likes and dislikes)

  1. Global operant conditioning (involving whole body and learning brand-new behaviors)
  2. Behavioral trade-offs, value-based cost-benefit decisions
  3. Frustration behavior
  4. Self-delivery of pain relievers or rewards
  5. Approach to reinforcing drugs or conditioned place preference

Feinberg, Todd E., and Jon M. Mallatt. Consciousness Demystified. The MIT Press. Kindle Edition.

(As I’ve discussed in other posts, I think affect awareness is closely associated with imaginative simulations, which, if you think about it, are necessary to meet all of these criteria, except possibly 3.)

One omission a consciousness aficionado may notice here is self reflection, introspective self awareness, metacognition.  Feinberg and Mallatt explicitly exclude this from their scope.  Their focus is on primary consciousness, also known as sensory consciousness, which could be equated with phenomenal consciousness.  (Although this last association is controversial.)

Most of their discussion is focused on vertebrates, but the authors do spend time exploring the possibility of invertebrate consciousness.  As they did in their first book, they express reservations about the tiny brains of insects, but on balance conclude that many arthropods are conscious to one degree or another, as are cephalopods (octopuses, etc.).  Given the early divergence of these evolutionary lines, consciousness appears to be an example of convergent evolution.

In chapters on the evolution of consciousness, Feinberg and Mallatt spend time discussing the evolution of reflex arcs, then the gradual accumulation of predictive functionality into image based and affective consciousness.  As in the earlier book, they see this happening during the Cambrian Explosion, making consciousness very ancient.  They finish up with what they see as the adaptive values of consciousness:

Adaptive advantages of consciousness

  • It efficiently organizes much sensory input into a set of diverse qualia for action choice. As it organizes them, it resolves conflicts among the diverse inputs.
  • Its unified simulation of the complex environment directs behavior in three-dimensional space.
  • Its importance ranking of sensed stimuli, by assigned affects, makes decisions easier.
  • It allows flexible behavior. It allows much and flexible learning.
  • It predicts the near future, allowing error correction.
  • It deals well with new situations.

Feinberg, Todd E., and Jon M. Mallatt. Consciousness Demystified. The MIT Press. Kindle Edition.

The book closes with a discussion of the hard problem, introducing two terms: auto-ontological irreducibility and allo-ontological irreducibility.  The first refers to the fact that the brain has no sensory neurons monitoring its own operations, so we have no introspective access to its lower-level processing, which means we can never intuitively look at brain operations and feel like they reflect our subjective states.  The second refers to the fact that an outside observer can never access the subjective state of a system, if it has one.  Together these create an uncrossable subjective/objective divide, although understanding why the divide exists can drain the mystery from it.

My recommendation for this book is qualified.  If you didn’t read their earlier technical book, then this more approachable version may well be worth your time, particularly if the technical nature of the early book was what made you avoid it.  That said, if you’re not comfortable looking at anatomical brain diagrams, this still may not be your cup of tea.

But if you did read that earlier book, I’m not sure this new one has enough to warrant the time and money.  It does contain some concepts that came up in the last few years, as well as descriptions of new experiments and research, but you have to be a serious brain geek like me to make it worth it.

Finally, I can’t resist mapping the categories Feinberg and Mallatt describe onto the hierarchy of conscious capabilities I often use when discussing this stuff.

  1. Reflex arcs
  2. Perception (exteroceptive and interoceptive image based awareness)
  3. Attention
  4. Imagination with affect awareness, enabling the abilities to meet the criteria above for affect consciousness, sentience
  5. Self reflection, metacognition

This hierarchy was, in many ways, inspired by Feinberg and Mallatt’s earlier book.
