The layers of emotion creation

Image credit: Toddatkins via Wikipedia

What are emotions?  Where do they come from?  Are they something innate or something we learn?  The classic view is that they’re precognitive impulses that happen to us.  If so, this would imply that they have specific neural signatures.

Early in her career, psychologist Lisa Feldman Barrett attempted to isolate the neural basis of emotions using brain scans, but discovered no consistent patterns for any one emotion.  For example, the neurons involved in anger were not consistent in every brain, or even in every instance of anger in a particular individual.  No individual neuron consistently fired in every case.

This evidence lined up with what she had already found when searching for behavioral signatures of emotions.  After discovering methodological problems in studies that showed people from multiple cultures the same emotional faces, she concluded that reading real emotions via facial expressions has little or no objective basis.  It only seems to work when the other person is from a similar culture; objectively reading the emotions on the faces of people from unfamiliar cultures proved impossible.

Each emotion, she found, was more a category of neural activity rather than a discrete process.  Their diffuse nature led her to conclude that emotions are mental concepts.  Similar to other types of concepts such as chairs, dogs, or hiking, they are things we learn, sensory predictions our brains develop over a lifetime.

Somewhat in opposition to her understanding is the one championed by the recently deceased neuroscientist Jaak Panksepp.  Focusing on animal studies, Panksepp identified what he referred to as seven primary emotions, labeling them in all caps: RAGE, FEAR, SEEKING, LUST, CARE, PANIC/GRIEF, and PLAY.  He located these primary impulses in sub-cortical and mostly sub-cerebral circuits arising from the brainstem and midbrain, deep regions that are difficult to reach with brain scans.

So who is right?  As is often the case, binary thinking here can mislead us.  I think it helps to review the details on how they agree and disagree.

Barrett admits in her book, How Emotions Are Made, that most animals have circuits for what are commonly called the “Four Fs”: fighting, fleeing, feeding, and mating.  It doesn’t seem like much of a stretch to link Panksepp’s RAGE to fighting, FEAR to fleeing, LUST to mating, and to see feeding as a result of a type of SEEKING.  In other words, Barrett’s admission of the Four Fs gets her halfway to Panksepp’s version of innate impulses.  It’s not hard to imagine that mammals and birds have innate impulses for CARE and PANIC/GRIEF.

However, there is a big difference in how Barrett and Panksepp label these circuits.  Barrett makes a sharp distinction between emotions and affects.  She regards affects as more primal states and characterizes them in a fairly minimalist manner, sticking to the traditional description with two dimensions: valence and arousal.  Valence is an assessment of how good or bad a stimulus is, and arousal is the level of excitement the stimulus causes.  Barrett is either unaware of or disagrees with the idea, put forth by other scientists, that affects also include an action motivation.  She sees affects as an early pre-emotional stage of processing.
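To make the definitional difference concrete, here's a minimal sketch in Python of Barrett's two dimensional affect.  This is my own toy framing: the field names and value ranges are illustrative assumptions, not anything from her book.

```python
from dataclasses import dataclass

@dataclass
class CoreAffect:
    """A Barrett-style affect: two dimensions and nothing more."""
    valence: float  # how good or bad the stimulus is, say -1.0 to +1.0
    arousal: float  # how excited the stimulus makes the system, say 0.0 to 1.0

# The action motivation some researchers include, and Barrett omits, would
# amount to a third, disputed field here (hypothetical):
#     approach_withdraw: float
```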

Panksepp, in contrast, in his own book, The Archaeology of Mind, seemed to view the words “emotion” and “affect” as almost synonymous, or perhaps saw emotions as merely a type of affect.  Panksepp’s view was that the primal drives he identified were feeling states, that they include some form of lower level consciousness in the sub-cerebral regions he discussed.

In this sentiment, Panksepp stood in opposition to Barrett’s view that these are best described as “survival circuits”, non-conscious reflexes.  From what I’ve read, the vast majority of neurobiologists would agree with Barrett on this point, that consciousness is a cortical phenomenon, at least in mammals, and that sub-cortical, or at least sub-cerebral structures are primarily reflexive in nature.

Interestingly, both Barrett and Panksepp push back against the popular notion that emotions, particularly fear, originate from the amygdala.  In Panksepp’s view, the amygdala seems like more of a linkage system, linking cerebral memories to the primal impulses he identifies as coming from lower level structures.  Both acknowledge that patients with destroyed amygdalae can still experience fear, although generally only in response to a subset of the stimuli that cause it in healthy people.

And Panksepp admits early in his book that social learning has a major role to play in the final experience of felt emotions in the neocortex, which seems like a major concession to Barrett’s views.

One significant difference between them, which may come down to a definitional dispute, is whether animals have emotions.  Panksepp, using his understanding of emotions, unequivocally considered animals to have them.  Barrett, while admitting some great apes may come close, doesn’t think animals have much beyond what she considers affects.   In her view, most species lack the neural machinery to learn emotions the way a human does.

My own reaction to all of this is similar to how I view consciousness, as a multi-level phenomenon.  It seems pointless to argue about how and where emotions arise unless we can agree on definitions.  I perceive most, though not all, of the disagreement between Barrett and Panksepp to amount to a difference in definition, in what is meant by the word “emotion”.

Regardless of how we label them, it seems like what we experience as emotions starts as lower level reflexes in sub-cerebral regions, which reach cortical regions as primal feelings.  Over a lifetime, the interpretation of these primal feelings changes and evolves into what we commonly refer to as emotional feelings.

It seems like there’s a major divide here between interoception, sensory perceptions from the body, and exteroception, perceptions from distance senses such as sight, hearing, and smell.  We probably have strong innate primal responses to certain interoceptive sensations, but mostly have to learn which exteroceptive ones to link to those primal responses.

In other words, we don’t have to learn that a stinging sensation on our hand is bad.  We likely know that innately.  But we do have to learn (as I once did as a small child) that sticking our hand in an anthill will result in the bad stinging sensation.  Once we’ve done it the first time, our amygdala will link the memory of an anthill to the negative valence of a stinging hand.
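Here's a toy sketch of that linkage.  The lookup table standing in for the amygdala, the learning rate, and the valence values are all made up for illustration.

```python
# Toy associative linker: exteroceptive cues acquire the valence of the
# innate interoceptive outcomes they've been paired with.
learned_valence: dict[str, float] = {}

def experience(cue: str, outcome_valence: float, rate: float = 0.5) -> None:
    old = learned_valence.get(cue, 0.0)  # novel cues start out neutral
    learned_valence[cue] = old + rate * (outcome_valence - old)

experience("anthill", -0.9)        # hand in anthill -> innate stinging bad
print(learned_valence["anthill"])  # -0.45: the mere sight is now aversive
```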

So emotions, it seems to me, have an innate core and a learned response.  Crucially, introspectively separating the learned response from the innate primal feeling is extremely difficult, perhaps impossible.  This is why anthropologists can often say that emotions are not universal across all cultures.  In their final learned form, we shouldn’t expect them to be.

What do you think?  Do you see any issues with either Barrett’s or Panksepp’s view?  Or my own syncretized one?  Am I missing anything?


Politics is about self interest

I’ve read a lot of history, including American history of the 18th and 19th centuries.  It’s interesting to read about the politics of these periods.  From a distance across generations and centuries, you can see the distinction between the self interested stances people took and the rhetoric that was used to justify those stances.

An example from the 18th century was the controversy about the new federal government assuming the Revolutionary War debt from the states.  Both sides of the controversy had philosophical reasons for their position, such as concern about federal power versus the benefits of establishing faith and credit for the United States.  But in general, the states that favored the idea (called “assumption”) still had a lot of war debt, while the states that were against it had paid most or all of their debt already.

This also holds for what was the most controversial issue in early America: slavery.  People’s stance on this issue seemed to be heavily influenced by the economy of their state.  In northern industrial states, slavery was becoming less economically viable and dying out, and was therefore seen as barbaric.  However, in the largely agricultural southern states, slavery remained a major part of the economic system, and was therefore seen as a vital institution.

It’s much more difficult for us to separate the stories we tell ourselves today from the self interested realities.  This is probably why some political scientists argue that people aren’t motivated by self interest when they vote.  But that idea simply isn’t backed by history or psychology.

In their book, The Hidden Agenda of the Political Mind: How Self-Interest Shapes Our Opinions and Why We Won’t Admit It, Jason Weeden and Robert Kurzban argue self interest figures heavily into our political positions.

This isn’t something we generally do consciously.  Citing psychology research that shows we often don’t understand our own motivations, they argue that our unconscious mind settles on stances that reflect our inclusive personal interests, with “inclusive” meaning that it includes the interests of our friends and family.

We tell ourselves a high minded story, one that we consciously believe, but like the public relations spokesperson for a large corporation, our consciousness is often uninformed about the actual reasons why the Board of Directors of our mind adopts a stance.  In other words, our self interested positions feel like the morally right ones to have, and people opposed to our positions seem evil or stupid.

Working from this premise, and using data from the United States GSS (General Social Survey), Weeden and Kurzban proceed to show correlations between political positions and various demographic, lifestyle, and financial income factors.  They also periodically glance at broader international data and, although the specific issues and populations vary, find that the general principle holds.

They identify some broad factors that have large effects on our political positions, including things such as sexual lifestyle, membership in traditionally dominant or subordinate groups (religion, race, sexual orientation, etc), the amount of human capital we have, and financial income.

The first factor, sexual lifestyle, generally affects your attitude on a number of social issues such as abortion, birth control, pornography, and marijuana legalization.  Weeden and Kurzban break people into two broad groups: Ring-bearers and Freewheelers.

Ring-bearers tend to have fewer sexual partners across their life, generally making a commitment to one partner, marrying them, and having a family with a higher number of children.  They often strongly value their commitments (which is why they’re called “Ring-bearers”).  A major concern for Ring-bearers is the possibility of being tempted away from those commitments, having their spouse be tempted away, or their kids being tempted away from leading a similar lifestyle.

This concern often makes them want to reduce the prevalence of lifestyles that lead to such temptation, such as sexual promiscuity.  As a result, Ring-bearers tend to favor policies that make promiscuous lifestyles more costly.  Which is why they’re generally pro-life, oppose birth control and sexual education, and oppose things like marijuana legalization, which is perceived as facilitating promiscuity.

Of course the reasons they put forward for their stances (and consciously believe) don’t reflect this.  For the abortion stance, they’ll often argue that they’re most concerned about protecting unborn children.  But the fact that they’re usually willing to make exceptions in cases of rape or incest, where the woman’s sexual lifestyle usually isn’t a causal factor, shows their true hand.

On the other side are the Freewheelers.  Freewheelers generally lead a more active sexual lifestyle, or aspire to, or want to keep their options open for that lifestyle.  They’re less likely to marry, more likely to divorce if they do, and generally have fewer kids.

Freewheelers generally don’t want their lifestyle options curtailed, and don’t want to experience moral condemnation for them.  This generally makes them pro-choice, in favor of birth control and family planning, and in favor of things like marijuana legalization.

Like Ring-bearers, Freewheelers usually don’t admit to themselves that preserving their lifestyle options is the motivating factor for their social stances.  Again, focusing on abortion, Freewheelers usually say and believe that their stance is motivated by protecting women’s reproductive freedom.  But the fact that pro-choice people are often comfortable with other laws that restrict personal freedoms, such as seat belt laws or mandatory health insurance, shows that personal freedom isn’t the real issue.

Freewheelers also often don’t have the private support networks that Ring-bearers typically enjoy, such as church communities, which Weeden and Kurzban largely characterize as child rearing Ring-bearer support groups.  This makes Freewheelers tend to be more supportive of public social safety net programs than Ring-bearers.

The next factor is membership in traditionally dominant or subservient groups.  “Groups” here refers to race, gender, religion, sexual orientation, immigrant status, etc.  In the US, traditionally dominant groups include whites, Christians, males, heterosexuals, and citizens, while traditionally subservient groups include blacks, Hispanics, Jews, Muslims, nonbelievers, females, gays, transsexuals, and immigrants.  It’s not necessarily surprising that which group you fall in affects your views on the fairness of group barriers (discrimination) or set-asides (such as affirmative action).

But there’s a complicating factor, and that is the amount of human capital you have.  Human capital is the amount of education you’ve attained and/or how good you are at taking tests.  Having high human capital makes you more competitive, reducing the probability that increased competition will negatively affect you.  People with high levels of human capital are more likely to favor a meritocracy.  On the other hand, having low human capital tends to make getting particular jobs or getting into desirable schools more uncertain, so increased competition from any source tends to be against your interests.

For people with high human capital in a dominant group, group barriers mean little, so people in this category tend to be about evenly split on the fairness of those barriers.  But people with low human capital in a dominant group tend to be more affected by increased competition when group barriers are reduced, making them more likely to be in favor of retaining those barriers.

People in subservient groups tend to be opposed to any group barriers, or at least barriers affecting their particular group.  People in subservient groups and with high human capital, once barriers have been removed, tend to favor a meritocracy and to be less supportive of specific group set asides.  But people in subservient groups and with low human capital tend to be in favor of the set-asides.

All of which is to say, more educated people tend to be less affected by group dynamics unless they’re being discriminated against, but less educated people are more affected by those dynamics.  Less educated people discriminate more, not because they’re uneducated, but because their interests are more directly impacted by the presence or absence of that discrimination.

And finally, Weeden and Kurzban look at financial income.  It probably won’t surprise anyone that people with higher incomes are less supportive of social safety net programs, which essentially redistribute income from higher income populations to lower income ones, but that people with lower incomes are usually in favor of these programs.

Most people fall in some complex combination of these groups.  Weeden and Kurzban recognize at least 31 unique combinations in the book.  Which particular combination a person is in will define their political perspective.

For example, I’m a Freewheeler (relatively speaking), mostly in dominant groups except in terms of religion, where I’m in a subservient group (a nonbeliever), have moderately high human capital (a Master’s degree), and above average income.  Weeden and Kurzban predict that these factors would tend to make me socially liberal, modestly supportive of social safety nets, opposed to religious discrimination, in favor of meritocracy, and economically centrist.  This isn’t completely on the mark, but it’s uncomfortably close.

But since people fall into all kinds of different combinations, their views often don’t fall cleanly on the conservative-liberal political spectrum.  Why then do politics in the US fall into two major parties?  I covered that in another post last year, but it has to do with the way our government is structured.  The TL;DR is that the checks and balances in our system force broad long lasting coalitions in order to get things done, which tend to coalesce into an in-power coalition and an opposition one.

In other words, the Republican and Democratic parties are not philosophical schools of thought, but messy, constantly shifting coalitions of interests.  Republicans are currently a coalition of Ring-bearers, traditionally dominant groups, and high income people.  Democrats are a coalition of Freewheelers, traditionally subservient groups, and low income people.  There may be a realignment underway between people with low human capital in dominant groups (the white working class) and those with high human capital, but it’s too early to tell how durable it will be.

But it’s also worth remembering that 38% of the US population struggles to consistently align with either party.  A low income Freewheeler in traditionally dominant groups, or a high income Ringbearer in a traditionally subservient group, might struggle with the overall platform of either party.

So what does all this mean?  First, there’s a lot of nuance and detail I’m glossing over in this post (which is already too long).

Weeden and Kurzban admit that their framework doesn’t fully determine people’s positions and doesn’t work for all issues.  For example, they admit that people’s stances on military spending and environmental issues don’t seem to track closely with identifiable interests, except for small slices of the population in closely related industries.

The authors’ final takeaway is pretty dark: political persuasion is mostly futile.  The best anyone can hope to do is sway people on the margins.  The political operatives are right that electoral victory is all about turning out your own partisans, not convincing people from the other side, at least not unless you’re prepared to change your own position to cater to their interests.

My own takeaway is a little less stark.  Yes, the above may be true, but to me, when we understand the real reasons for people’s positions, finding compromise seems more achievable if we’re flexible and creative.  For instance, as a Freewheeler, the ideas of content ratings and of restricting nightclubs to red light districts suddenly seem like decent compromises, ones that don’t significantly curtail my freedom but assuage Ring-bearer concerns about keeping those influences away from themselves and their families.

And understanding that the attitude of low human capital Americans toward illegal immigrants is shaped by concern for their own livelihood, rather than just simple bigotry, makes me look at that issue a bit differently.  I still think Trump is a nightmare and his proposed solutions asinine, but this puts his supporters in a new light.  Most politicians tend to be high human capital people and probably fail to adequately grasp the concerns of low human capital voters.  In the age of globalization, should we be surprised that this group has a long simmering anger toward the establishment?

In the end, I think it’s good that we mostly vote our self interest.  We typically understand our own interests, but generally don’t understand the interests of others as well as we might think.  This is probably particularly true when we assume people voting differently than us are acting against their own interests.

Everyone voting their own interests forces at least some portion of the political class to take those interests into account.  And that’s the whole point of democracy.  Admittedly, it’s very hard to remember that when elections don’t go the way you hoped they would.


Adding imagination to AI

As we’ve discussed in recent posts on consciousness, I think imagination has a crucial role to play in animal consciousness.  It’s part of a hierarchy I currently use to keep the broad aspects of cognition straight in my mind.

  1. Reflexes, instinctive or conditioned responses to stimuli
  2. Perception, which increases the scope of what the reflexes are reacting to
  3. Attention, which prioritizes what the reflexes are reacting to
  4. Imagination, action scenario simulations, the results of which determine which reflexes to allow or inhibit
  5. Metacognition, introspection, self reflection, and symbolic thought

Generally, most vertebrate animals are at level 4, although with wide ranging levels of sophistication.  The imagination of your typical fish is likely very limited in comparison to the imagination of a dog, or a rhesus monkey.

Computers have traditionally been at level 1 in the sense that they receive inputs and generate outputs.  The algorithms between the inputs and outputs can get enormously complicated, but garden variety computer systems haven’t gotten much beyond this point.

However, newer cutting edge autonomous systems are beginning to achieve level 2 and, depending on how you interpret their systems, level 3.  For example, self driving cars build models of their environment as a guide to action.  These models are still relatively primitive, and still hitched to rules-based engines, essentially to reflexes, but it’s looking like that may eventually be enough to allow us to read or sleep during our morning commutes.

But what about level 4?  As it turns out, the Google DeepMind people are now trying to add imagination to their systems.

Researchers have started developing artificial intelligence with imagination – AI that can reason through decisions and make plans for the future, without being bound by human instructions.

Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.

The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven’t been specifically programmed for. Insert your usual fears of a robot uprising here.

I don’t doubt the truth of that last sentence, that this is going to probably freak some people out, such as perhaps a certain billionaire (<cough>Elon Musk</cough>).  But it’s worth keeping in mind how primitive this will be for the foreseeable future.

Despite the success of DeepMind’s testing, it’s still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it’s a promising start in developing AI that won’t put a glass of water on a table if it’s likely to spill over, plus all kinds of other, more useful scenarios.
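As a toy illustration of what that level 4 step adds, consider an agent whose imagination scores each candidate reflex before allowing or inhibiting it.  This is my own sketch of the general idea, not DeepMind's actual architecture, and the glass-of-water scoring rule is a stand-in for a learned predictive model.

```python
def simulate(world: dict, action: str) -> float:
    """Imagined outcome score for an action (stand-in for a learned model)."""
    if action == "grab" and world.get("glass_near_edge"):
        return -1.0                # predicted spill: a bad imagined outcome
    return 0.5

def act(world: dict, candidate_reflexes: list[str]) -> str | None:
    # Layers 2-3 (perception and attention) are folded into `world` here.
    scored = [(simulate(world, a), a) for a in candidate_reflexes]
    best_score, best_action = max(scored)
    return best_action if best_score > 0 else None  # inhibit if all look bad

print(act({"glass_near_edge": True}, ["grab"]))  # None: reflex inhibited
print(act({}, ["grab"]))                         # grab: reflex allowed
```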

I’ve argued before why I think the robot uprising concerns are vastly overstated.  But putting those reasons aside, we’re a long way from achieving human level imagination or overall intelligence.  People like Nick Bostrom point out that, taking a broad view, there’s not much difference between the intelligence of the village idiot and, say, Stephen Hawking, and that systems approaching human level intelligence may shoot through the entire human range into superhuman intelligence very quickly.

But there are multiple orders of magnitude difference between the intelligence of a village idiot and a mouse, and multiple orders of magnitude difference between the mouse and a fruit fly.  Our systems aren’t at fruit fly level yet.  (Think how much better a Roomba might be if it was as intelligent as a cockroach.)

Still, adding imagination, albeit a very limited one, may be a very significant step.  I think it’s imagination that provides the mechanism for inhibiting or allowing reflexive reactions, essentially turning a reflex into an affect, a feeling.  These machine affect states may be utterly different from what any living systems experience, so different that we may not be able to recognize them as feelings, but the mechanism will be similar.

It may force us to reconsider our intuitive sense of what feelings are.  Would it ever be productive to refer to machine affect states as “feelings”?  Can a system whose instincts are radically different from a living system’s have feelings?  Is there a fact of the matter answer to this?


The difficulty of interstellar travel for humans

Futurism.com has an article reviewing the results of a survey they conducted with their readers, asking when the first human might leave the solar system.  The leading answer was after the year 2100, which makes sense given our current struggle just to get humans back out beyond low Earth orbit.  But I think human interstellar exploration is far more difficult than most people realize.

First, when pondering this question, we can forget about the traditional sci-fi answers of hyperspace, warp drive, or similar notions.  These concepts are fictional plot devices with little or no basis in science.  Even concepts that have been considered by scientists, such as wormholes and Alcubierre warp drives, are extremely speculative, requiring that certain unproven aspects of physics be true.

Getting anything like these types of technology will require a new physics.  Undoubtedly, we will learn new things about physics in the decades and centuries to come, but the probability that what we learn will enable interstellar travel to function like sea travel (the preoccupation of most space opera stories) is an infinitesimal slice of the possibilities.

The only way to speculate scientifically on this kind of thing is to take the science that we currently have and try to extrapolate from it.  When we do that, the obstacles to human interstellar flight seem pretty daunting.

Worrying about the speed of light limit, which current physics tells us is the ultimate speed limit in the universe, is sour grapes to a large extent.  Even getting to an appreciable percentage of the speed of light turns out to require astounding amounts of energy.  Doing it with our current chemical rocket technology is a lost cause.  According to Paul Gilster, in his book Centauri Dreams: Imagining and Planning Interstellar Exploration (which I recommend for anyone interested in this subject, as is his blog), it would take more mass than exists in the visible universe to propel a chemical rocket to a substantial fraction of the speed of light.
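Gilster's claim is easy to sanity check with the Tsiolkovsky rocket equation.  Here's a back-of-the-envelope sketch, assuming an exhaust velocity of about 4.4 km/s, roughly the best chemical rockets achieve:

```python
import math

C = 299_792_458.0   # speed of light, m/s
VE_CHEM = 4_400.0   # assumed chemical exhaust velocity, ~4.4 km/s

# Tsiolkovsky: mass ratio R = exp(delta_v / ve).  R is so large here that
# exp() would overflow a float, so print its base-10 exponent instead.
log10_R = (0.1 * C / VE_CHEM) / math.log(10)
print(f"mass ratio to reach 0.1c: ~10^{log10_R:.0f}")
# ~10^2959, versus only ~10^80 atoms in the visible universe
```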

An artist’s concept of nuclear pulse propulsion.
Image credit: NASA

Of course, there are better, more efficient propulsion options that might eventually be available.  For purposes of this post, let’s leap to the most efficient and plausible near term option, nuclear pulse propulsion.  This is a refined version of an original idea that involved lobbing nuclear bombs behind a spacecraft to push it forward.

Giltster, in his book, notes that a nuclear pulse propulsion spacecraft, to reach 10% of light speed, would need a mass ratio of 100:1.  This means that for every kilogram of the spacecraft with only its payload, you’d need 100 kilograms of fuel.   Initially, that doesn’t sound too bad since the Apollo missions had an overall mass ratio of 600:1.  But that was for the entire mission, and all we’ve considered so far is the mass ratio to accelerate to 10% of light speed.  We haven’t talked about slowing down at the destination.

In space, given inertia and the absence of air friction, slowing down takes just as much energy as speeding up.  And the kicker is that you have to accelerate the fuel you’ll later need to decelerate.  So slowing down doesn’t just double the mass ratio from 100:1 to 200:1.  The deceleration fuel has to ride on the “1” side of the initial acceleration ratio.  That means the ratio for the overall mission (and we’re only talking about a one way mission here) has to be squared, taking it from 100:1 to 10,000:1.

Traveling at 10% lightspeed gets us to Proxima Centauri, the nearest star to the sun, in about 43 years.  When you consider what kind of living space a human crew would need for that time span, and multiply it out by 10,000, an interstellar mission starts to look like the most expensive thing human civilization might ever attempt.  It gets worse if we try to shorten the time span.  Increasing the speed to 20% of light speed raises the ratio to 100,000,000:1.
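Here's a short sketch of that arithmetic, using the rocket equation with an exhaust velocity back-solved from Gilster's 100:1 figure (that back-solving is my assumption, not a number from the book):

```python
import math

C = 299_792_458.0             # speed of light, m/s
VE = 0.1 * C / math.log(100)  # exhaust velocity implied by a 100:1 ratio
                              # at 0.1c, roughly 6,500 km/s

def mass_ratio(delta_v: float) -> float:
    """Tsiolkovsky rocket equation: initial mass over final mass."""
    return math.exp(delta_v / VE)

print(f"{mass_ratio(0.1 * C):,.0f}")  # accelerate to 0.1c:           100
print(f"{mass_ratio(0.2 * C):,.0f}")  # ...then decelerate:        10,000
print(f"{mass_ratio(0.4 * C):,.0f}")  # same mission at 0.2c: 100,000,000
```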

Antimatter technology might improve the mass ratio substantially.  But it adds new difficulties.  Producing antimatter takes tremendous amounts of energy.  It would have to be manufactured and stored in deep space, since any containment failure of a substantial amount would likely result in a gigaton level explosion.  We might save on the mass ratio of the spacecraft, but only at the expense of vast resources dedicated to generating and storing the fuel.  And human crews would likely have to be heavily shielded from the gamma rays generated by antimatter reactions, increasing mass throughout the overall craft.

No discussion of this type is complete without at least mentioning the Bussard ramjet, the idea of a spacecraft with an immense ram scoop that takes in interstellar hydrogen to use as fuel.  There was a lot of excitement for this concept in the 60s and 70s, but further study has shown that the interstellar medium isn’t nearly as dense as the initial design hoped for, and many think the ram scoop would generate as much drag as thrust.

Other options are to forego rocketry altogether and go for something like light sails.  Robert Forward, decades ago, put forth a design where a gigantic laser on Mercury would send out a beam to an interstellar light sail spacecraft, steadily accelerating it.  At some point, the craft would separate its sail into two components, one of which would be hit by the laser and reflect it back to the remaining component attached to the craft, decelerating it.  Forward’s design is ingenious, but it would still require titanic amounts of energy, and precise coordination across centuries and light years to work.

Things get a lot easier if we just think about sending uncrewed probes.  That’s the current direction of the Breakthrough Starshot initiative.  The idea is to propel a small, perhaps gram-sized, probe to 20% of light speed using Earth based lasers.  The probes would reach Proxima Centauri in about 22 years, taking pictures and readings as they fly through the system and transmitting the information back to Earth.  There are still major technological hurdles to overcome with this idea, but they all seem achievable within reasonable time periods and with reasonable amounts of energy.

The big drawback to the Starshot design is that it doesn’t have any way to slow the probe down, so everything would have to be learned in the few hours available as it sped through the target system.  An alternate design has been proposed, using the unique configuration of the Alpha Centauri / Proxima Centauri system to slow down the probe, but at the cost of increasing the travel time to over a century.

But once we give up the idea of crewed missions, the rocket solutions actually become more plausible.  A 10,000:1 ratio doesn’t seem problematic if the ultimate payload is a one gram probe.  Even the 100,000,000:1 ratio associated with a 20% light speed mission starts to look conceivably manageable.

And when we consider the ongoing improvements in artificial intelligence and the idea of probes building their own daughter probes to explore the destination system, and perhaps even to eventually launch toward systems further out, the possibilities start to look endless.

All of which is to say that it’s much easier to conduct interstellar exploration with robots, particularly very small ones, than with humans.  It seems likely that we’re going to be exploring the stars with robots for a long time before humans get there, if they ever do.

Unless of course I’m missing something?


Does information require conscious interpretation to be information?

Peter Kassan has an article at Skeptic Magazine which sets out to disprove the simulation hypothesis, the idea that we’re all living in a computer simulation.

I personally find arguing about the simulation hypothesis unproductive.  Short of the simulation owner deciding to jump in and contact us, we can’t prove the hypothesis.  Even if the simulation has flaws that we could find and perceive, we can never know whether we’re looking at an actual flaw or just something we don’t understand.  For example, is quantum wave-particle duality a flaw in the simulation, or just a puzzling aspect of nature?

Nor can we disprove the simulation.  There’s simply no way to prove to a determined skeptic that the world is real.  And if we are in a simulation, it appears to exact unpleasant consequences for not taking it seriously.  It effectively is our reality.  And we have little choice but to play the game.

But this post isn’t about the simulation hypothesis.  It’s about the central argument Kassan makes against it, that there can’t be a consciousness inside a computer system.  The argument Kassan uses to make this case is one I’m increasingly encountering in online conversations, involving assertions about the nature of information.

ASCII code for “Wikipedia”
Image credit: User:spinningspark at Wikipedia

The argument goes something like this.  Information is only information because we interpret it to be information.  With no one to do that interpretation, the patterns we refer to as information are just patterns, structures, configurations, with no inherent meaning.  Consequently, the physical machinations of computers are information processing only because of our interpretations of what we put into them, what they do with it, and what they produce.  However, brains do their work regardless of the interpretation, so they can’t be processing information, and information processing can’t lead to consciousness.

To be fair, this brief summary of the argument may not do it justice.  If you want to see the case made by someone who buys it, I recommend reading Kassan’s piece.

That said, I think the argument fails for at least two reasons.

The first is that it depends on a particularly narrow conception of information.  There are numerous definitions of information out there.  But for purposes of this post, we don’t need to settle on any one specific definition.  We just need to discuss an implied aspect of all of them, that information must be for something.

The people making the argument are right about one thing.  A pattern, in and of itself, is not inherently information.  To be information, something must make use of it.  But the assertion is that this role of making use of information can only be fulfilled by a conscious agent.  No conscious agent involved, then no information.  The problem is that this ignores the non-conscious systems that make use of information.

For example, if the long molecules that are DNA chromosomes somehow spontaneously formed by themselves somewhere, there would be nothing about them that made them information.  But when DNA is in the nucleus of a cell, the proteins that surround it create mRNA molecules based on sections of the DNA’s configuration.  These mRNA molecules physically flow to ribosomes, protein factories that assemble amino acids into specific proteins based on the mRNA’s configuration.  Arguably it’s the systems in the cell that make DNA into genetic information on how to construct its molecular machinery.
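Here's a toy rendition of that reading process in Python.  The codon table is drastically truncated (real cells use 64 codons and far more machinery), but it shows the sense in which the pattern is only information relative to what reads it.

```python
# A tiny stand-in for the cell's machinery that "reads" genetic patterns.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna: str) -> str:
    """DNA coding strand -> mRNA (T becomes U)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Ribosome stand-in: read codons until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate(transcribe("ATGTTTGGCTAA")))  # ['Met', 'Phe', 'Gly']
```

Without the table and the loop, the string is just a pattern; with them, it specifies a protein.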

Another example is a particular type of molecule that is allowed entry through the cell’s membrane.  There’s nothing about that molecule in and of itself that makes it information.  But if the chemical properties of the molecule cause the cell to change its development or behavior, then we often talk about the molecule, perhaps a hormone, being a chemical “signal”.  It’s the cell’s response to the molecule that makes it information.

But even in computer technology, there are often transient pieces of information that no conscious observer interprets.  The device you’re reading this on likely has a MAC address, which it uses to communicate on your local network.  It probably contacted a DHCP server to get a dynamically assigned IP address so it could communicate on the internet.  It had to contact a domain name server to get the IP address for this website.  The various apps on it likely all have internal system identifiers.  None of these are things you likely know or think about, but they’re vital for the device to do its job.  Many of the dynamically assigned items will come into and go out of existence without any conscious observer ever interpreting them.  Yet it seems perverse to say they aren’t information.
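For instance, here's a small Python sketch that surfaces a few of these normally invisible identifiers.  What it prints will vary by machine, and example.com is just a stand-in domain.

```python
import socket
import uuid

mac = uuid.getnode()                                   # the NIC's hardware address
local_ip = socket.gethostbyname(socket.gethostname())  # often DHCP-assigned
site_ip = socket.gethostbyname("example.com")          # a DNS lookup

# Identifiers doing real causal work, whether or not anyone ever reads them:
print(f"MAC {mac:012x}, local IP {local_ip}, example.com -> {site_ip}")
```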

Of course, we could fall back to the etymology of “information” and insist on defining it only as something that inputs Platonic forms into a conscious mind (in-form).  But then we’ve created a need to come up with a new word for the patterns, such as DNA or transient IP addresses, that have causal effects on non-conscious systems.  Maybe we could call such patterns “causalation”.  Which means we could talk about brains being causalation processing systems.  Of course, computers would also be causalation processing systems, which just brings us right back to the original bone of contention.

And that in turn brings us to the second reason the argument fails.  Every information processing system is a physical system, and can be described in purely physical terms.  Consider the following description.

A system is constantly propagating energy, at small but consistent levels, through portions of its structure.  The speed and direction of the energy flows are altered by aspects of the structure.  But many of those structural aspects themselves are altered by the energy flow, creating a complex synergy between energy and structure.  The overall dynamic is altered by energy from the environment, and alters the environment by the energy it emits.  Interactions with the environment often happen through intermediate systems that modulate and moderate the inbound energy patterns to a level consistent with the central system, and magnify the causal effects of the emitted energy.

This description can pertain to both computers and central nervous systems.  The energy in commercial computers is electricity, the modifiable aspects of the structure are transistor voltage states, and the intermediate systems are I/O devices such as keyboards, monitors, and printers.  The energy in nervous systems is electrochemical action potentials, the aspects of modifiable structure are the synapses between neurons, and the intermediate systems are the peripheral nervous system and musculature.

(It’s worth noting that computers can also be built in other ways.  For example, they can be built with mechanical switches, where the energy is mechanical force and the modifiable aspects are the opening and closing switches.  A computer could, in principle, also be built with hydraulic plumbing controlling the flow of liquids.  In his science fiction novel, The Three-Body Problem, Cixin Liu describes an alien computer implemented with a vast army of soldiers, with each soldier acting as a switch, raising or lowering their arms following simple rules based on what the soldiers next to them did.)
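Here's a toy version of Liu's human computer, my own illustration rather than anything from the novel.  Each “soldier” applies one fixed rule, and because that rule (NAND) is logically universal, ranks of such soldiers could in principle compute anything.

```python
def soldier(a: int, b: int) -> int:
    """One soldier's rule: raise your arm (1) unless both neighbors do (NAND)."""
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    """XOR built from four soldiers: the fixed rule composes into real logic."""
    n = soldier(a, b)
    return soldier(soldier(a, n), soldier(b, n))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```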

It’s the similarities between how these physical systems work that make it easy for neuroscientists to talk in terms of neural circuits and neural computation, and to see the brain as an information processing organ.  Engaging in linguistic jiu jitsu over the definition of “information” (or “computation”, as often happens in similar arguments) doesn’t change these similarities.

Not that there aren’t major differences between a commercial digital computer and an organic brain.   (Although the differences between technology and biology are constantly decreasing.)  The issue isn’t whether brains are computers in the narrow modern sense, but whether they are computational information processing systems.

So, am I being too dismissive of this interpretation argument?  Or are there similar arguments that may make a better case?  How do you define “information”?


Layers of self awareness and animal cognition

In the last consciousness post, which discussed issues with panpsychism and simple definitions of consciousness, I laid out five functional layers of cognition which I find helpful when trying to think about systems that are more or less conscious.  Just to recap, those layers are:

  1. Reflexes, primal reactions to stimuli.
  2. Perception, sensory models of the environment that increase the scope of what the reflexes can react to.
  3. Attention, prioritizing which perceptions the reflexes are reacting to.
  4. Imagination, action planning, scenario simulations, deciding which reflexes to allow or inhibit.
  5. Metacognition, introspective access to portions of the processing happening in the above layers.

In the discussion thread on that post, self awareness came up a few times, particularly in relation to this framework.  As you might imagine, as someone who’s been posting under the name “SelfAwarePatterns” for several years, I have some thoughts on this.

Just like consciousness overall, I don’t think self awareness is a simple concept.  It can mean different things in different contexts.  For purposes of this post, I’m going to divide it up into four concepts and try to relate them to the layers above.

At consciousness layer 2, perception, I think we get the simplest form of self awareness, body awareness.  In essence, this is having a sense that there is something different about your body from the rest of the environment.  I think body awareness is phylogenetically ancient, dating back to the Cambrian explosion, and is pervasive in the animal kingdom, including any animal with distance senses (sight, hearing, smell).  As I’ve said before, distance senses seem pointless unless they enable modeling of the environment, and those models are themselves of limited use if they don’t include your body and its relation to that environment.

The next type is attention awareness, which models the brain’s attentional state.  I think of this as layer 4 modeling what’s happening in layer 3.  (These layers appear to be handled by different regions of the brain.)  This type of awareness is explored in Michael Graziano’s attention schema theory.  It provides what we typically think of as top down attention, as opposed to bottom up attention driven from the perceptions in layer 2.

The third type, affect awareness, is integral to the scenario simulations that happen in layer 4.  Affects can be thought of as roughly synonymous with emotions or feelings, although at a broader and more primal level.  Affects include states like fear, pleasure, anger, but also more primal ones like hunger.

Each action scenario needs to be assessed on its desirability, on whether it should be the action attempted, and those assessments happen in terms of the affects each scenario triggers.  The results of the simulations are that some reflexes are inhibited and some allowed.  Arguably, it’s this change from automatic action to possible action that turns the reflexes into affects, so in a sense, affect awareness could be considered reflex awareness that enables the creation of affects.

The types of self awareness discussed so far are essentially a system modeling the function of something else.  Body awareness is the brain modeling the body, attention awareness is the planning regions of the brain modeling the attention regions, and affect awareness is the planning regions modeling the sub-cortical reflex circuits.  But the final type, metacognitive awareness, recursive self reflection, is different.  It’s the planning regions modeling their own processing.

Metacognitive awareness lives in layer 5, metacognition.  This is self awareness in its most profound sense.  It’s being aware of your own awareness, experiencing your own experience, thinking about your own thoughts, being conscious of your own consciousness.  But it’s more than that, because if you understand this paragraph, it shows you have the ability to be aware of the awareness of your awareness.  And if you understood the last sentence, it means you have the ability to do so to an arbitrary level of recursion.

This type of awareness is far rarer in the animal kingdom than the other kinds.  It requires a metacognitive capability, an ability to build models not just of the environment, your own body, your attention, or your affective states, but to build models of the models, to reason about your own reasoning.  This capability appears to be limited to only a few species.  But scientifically determining exactly which species is difficult.

Mirror test with a baboon
Image credit: Moshe Blank via Wikipedia

One test that’s been around for a few decades is the mirror test.  You sneak a mark or sticker onto the animal where it can’t see it, then put it in front of a mirror.  If the animal sees its reflection, notices the mark or sticker, and tries to remove it, then, the advocates of this test propose, it is aware of itself.  But this test seems to conflate the different types of self awareness noted above, so it’s not clear what’s being demonstrated.  It could be only body awareness, although I can also see a case that it might demonstrate attention awareness too.

Regardless, most species fail the mirror test.  Mammals that pass include elephants, chimpanzees, bonobos, orangutans, dolphins, and killer whales.  The only non-mammal that passes is the Eurasian magpie.  Gorillas, monkeys, dogs, cats, octopuses, and other tested species all fail.

But testing for the higher form of self awareness, metacognitive awareness, means testing for metacognition itself, which more recent tests try to get at directly.

One test looks at how animals behave when they’ve been given ambiguous information about how to get a reward (usually a piece of food).  If the ambiguity causes them to display uncertainty, the reasoning goes, then they must understand how limited their knowledge is.  Dolphins and monkeys seem to pass this test, but not birds.  However, this test has been criticized because it’s not clear that the displayed behavior comes from knowledge of uncertainty, or just uncertainty.  It could be argued that fruit flies display uncertainty.  Does that prove they have metacognition?

A more rigorous experiment starts by showing an animal information, then hides that information.  The animal then has to decide whether to take a test on what they remember seeing.  If they decide not to take the test, they get a moderately tasty treat.  If they do take the test and fail, they get nothing.  But if they take it and succeed, they get a much tastier treat.  The idea is that their decision on whether or not to take the test depends on their evaluation of how well they remember the information.  The goal of the overall experiment is to measure how accurately the animal can assess its own memory.
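Here's a toy simulation of why that design works, with made-up payoffs and an idealized agent whose confidence perfectly tracks its memory.  A real animal's introspective access would be much noisier, but the logic is the same.

```python
import random

# Hypothetical payoffs: declining yields a sure small treat; taking the
# test yields a big treat on success and nothing on failure.
DECLINE, PASS, FAIL = 1.0, 3.0, 0.0

def trial(threshold: float) -> float:
    memory = random.random()    # how well this trial's information was encoded
    confidence = memory         # idealized introspective access to that memory
    if confidence < threshold:  # the metacognitive opt-out
        return DECLINE
    return PASS if random.random() < memory else FAIL

def average_payoff(threshold: float, n: int = 100_000) -> float:
    return sum(trial(threshold) for _ in range(n)) / n

# Opting out whenever 3 * confidence < 1 beats always taking the test:
print(f"always take:   {average_payoff(0.0):.2f}")  # ~1.50
print(f"opt out < 1/3: {average_payoff(1/3):.2f}")  # ~1.67
```

The opt-out only pays off if the agent's confidence actually tracks its memory, which is exactly the metacognitive access the experiment is probing for.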

Some primates pass this more rigorous test, but nothing else seems to.  Dolphins and birds reportedly fail it.  This type of self reflective ability appears to be restricted to primates.  (There was a study that seemed to show rats passing a similar test, but the specific test reportedly had a flaw where the rats might simply have learned an optimized sequence without any metacognition.)

What do all these tests mean?  Well, failure to pass them is not necessarily conclusive.  There may be confounding variables.  For example, all of these tests seem to require relatively high intelligence.  I think this is a particularly serious issue for the mirror test.  What it’s testing for is a fairly straightforward type of body or attention awareness, but the intelligence required to figure out who the reflection is seems bound to generate false negatives.

This seems like less of an issue for the metacognition tests.  Metacognition could itself be considered a type of intelligence.  And its functionality might not be a useful adaptation unless it’s paired with a certain level of intelligence.  Still, as I noted in the panpsychism post, any time a test shows that only primates have a certain ability, we need to be mindful of the possibility of an anthropocentric bias.

Again, my own sense is that body awareness is pervasive among animals.  I think attention and affect awareness are also relatively pervasive, although as this NY Times article that amanimal shared with me discusses, humans are able to imagine and plan far more deeply and much further into the future than other animals.  Most animals can only think ahead by a few minutes, whereas humans can do it days, months, years, or even decades into the future.

This seems to indicate that the level 4 capabilities of most animals, along with the associated attention and affect awareness, are far more limited than in humans.  And metacognitive awareness, the highest form of self awareness, only appears to exist in humans and, to a lesser extent, in a few other species.

Considering that our sense of inner experience likely comes from a combination of attention, affect, and metacognitive awareness, it seems like the results of these tests are a stark reminder that we should be careful to not project our own cognitive scope on animals, even when our intuitions are powerfully urging us to do so.

Unless of course there are aspects of this I’m missing?


Recommendation: We Are Legion (We Are Bob)

One of the things that many space enthusiasts find frustrating about the space age is how slow it’s moving, at least relative to its early years.  Humans made it to the moon almost 50 years ago, but since then seem to have retreated to low Earth orbit, working in space stations just above the atmosphere.  Although there is always lots of talk about going further out again, it always seems to be several years in the future.

But while this has been going on with human spaceflight, robots have been exploring the entire solar system.  We’ve sent probes on missions to every planet, and have some orbiting several of them.  Mars has been thoroughly mapped from orbit and has had rovers exploring its surface pretty much continuously for the last couple of decades.

When it comes to space, humans simply aren’t the pioneers.  That role now falls to robots.  I don’t see that changing in the future, particularly as AI (artificial intelligence) continues growing in capabilities.  This means that when humanity does reach the stars, it will inevitably be first, and possibly exclusively, with robots.

This puts science fiction authors in a bind.  Stories of humans sitting around drinking coffee and eating bagels as news comes in of all the things the robots are doing aren’t very compelling.  Some authors solve this by simply ignoring AI, or by imagining some limitation that AI development will run into that allows human characters to be at the center of the action again.  But some solve it by making the stories about the AIs, or even by making the AIs….us.

That’s the approach that Dennis Taylor takes with his Bobiverse books, the first of which is We Are Legion (We Are Bob).  The Bob in the title is Bob Johansson, a software entrepreneur who signs a contract with a cryonics company to be frozen at his death in hopes of revival in the future when medical technology improves.  Shortly afterward, he ends up getting killed in an accident.

He wakes up a century later inside a computer, as a software replicant of the original Bob, an uploaded mind.  America has become a theocracy, one that considers him in his new form to be technology rather than a person.  He is drafted into being the control system for an interstellar Von Neumann probe, a self replicating robotic spacecraft designed to reach other solar systems and build new copies of itself using local resources, and then send the new copies further out exploring, where they’ll eventually build copies of themselves, and so on.

Just as Bob is being launched, a full scale war erupts and he barely makes it out.  He then has to fight probes he encounters from other nations among the stars on his way to building BobNet, a network of replicated Bobs exploring the stars near Earth.  Eventually, some of his copies return to a devastated Earth to help the last remnants of humanity escape to the stars.

He also encounters a primitive but intelligent species on one of the planets he explores, setting up a situation that starts off similar to the one at the beginning of 2001: A Space Odyssey, but essentially seen from the perspective of the monolith.  And as the story progresses, he encounters a powerful existential threat to himself and humanity.

As a writer, I found the first book interesting, partly because, despite its lack of a tight plot structure, it was still very satisfying.  We follow Bob as he reaches other solar systems, fights dangers, replicates, and makes new discoveries.  There is a constantly increasing number of story threads as each new Bob comes online, and many different conflicts.  It’s a bit episodic, with the episodes overlapping each other, but it works because the concept of self replicating interstellar probes is being explored.

I think the loose structure starts to make itself felt in the second book.  I sometimes found the earlier parts of that book tedious, with some of the conflicts and issues feeling a bit like filler.  But as the second book progresses, the existential threat becomes more apparent, which adds tension and excitement back to the story.

The story is told in first person, with the specific Bob instance (each has his own name, often drawn from classic science fiction, mythology, or other sources) and his location listed at the beginning of each chapter.  Bob is a good natured and sympathetic character whose fairly positive and humorous viewpoint keeps the story approachable, even when it gets pretty dark.

These books aren’t super hard science fiction.  The Von Neumann concept is a serious one, but Taylor introduces some magical technologies in order to tell the story he wants.  These include a subspace concept that enables a reactionless drive, relativistic travel, faster than light detection and, eventually, faster than light communication.  He also has each Bob’s personality be a little different, which he doesn’t explain except to hint that quantum indeterminacy may be the cause.  And it seems like Taylor relies heavily on the panspermia concept for his aliens.

Of course, these compromises have benefits: they make each Bob a somewhat unique character who can be in physical jeopardy, allow a more interactive community across interstellar distances, make the aliens more relatable, and save the story from being far more concerned with the logistics of energy production and usage than it otherwise would have needed to be.

There’s a lot to like in these books.  I’ve read the first two and expect to quickly consume the third when it becomes available later this year.  If the ideas of mind uploading, self replicating interstellar probes, and space battles appeal to you, I highly recommend them.
