Why I haven’t been posting lately

It’s been a while since I’ve posted.  It’s probably fair to say that my posting frequency has plummeted to the lowest level since I started this blog in 2013.  I feel obliged to offer an explanation.

First, we’ve been undergoing an epic reorganization at work.  In the early stages, this endeavor left me very unsettled about what my future work life might look like, to the extent that I was considering early retirement.  Eventually, it turned out that I’ll be moving into a new job (in the same organization, the central IT shop for a university).

This is a good thing, but I’m going to be managing a much more technical area than I have in years, and that’s forcing me to immerse myself back into the details of database administration, application development, and enterprise integration.  That immersion won’t leave much room for a while to think about consciousness, philosophy, space, science, and many of the other things I often post about.

On top of that, my father passed away this weekend, a staggering emotional blow that currently has me adrift in a way I haven’t been in a long time.  I anticipate being occupied dealing with the emotional and financial fallout for a while.

So, just to let you know, I definitely have no intention of giving up blogging.  But posts may be thin for a while until I can get all this processed.  I think I’ll have some new insights on stress, emotion, and grief when I do start up again.

Hopefully more soon.  How are you guys doing?

Posted in Zeitgeist | 42 Comments

Breakthroughs in imagination

When thinking about human history, it’s tempting to see some developments as inevitable.  Some certainly were, but the sheer amount of time that passed before some of them took place seems to make them remarkable.

The human species, narrowly defined as Homo sapiens, is about 200,000 years old.  Some argue that it’s older, around 300,000 years, others that full anatomical modernity didn’t arrive until about 100,000 years ago.  Whichever definition and time frame we go with, the human species has been around far longer than civilization, spending well more than 90% of its existence in small hunter-gatherer tribes.  (If we broaden the definition of “humanity” to the overall Homo genus, then we’ve spent well over 99% of our history in that mode.)
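Those percentages are easy to sanity-check.  A quick sketch (the ~12,000-year figure for settlement and the ~2.8-million-year age for the Homo genus are my own rough round numbers, not precise dates):

```python
# Rough check of the hunter-gatherer percentages above.
# The settlement and Homo genus figures are approximate assumptions.
SAPIENS_YEARS = 200_000       # narrow definition of Homo sapiens
HOMO_GENUS_YEARS = 2_800_000  # rough age of the Homo genus
SETTLED_YEARS = 12_000        # time since the first sedentary cultures

sapiens_fraction = 1 - SETTLED_YEARS / SAPIENS_YEARS
genus_fraction = 1 - SETTLED_YEARS / HOMO_GENUS_YEARS

print(f"Sapiens time as hunter-gatherers: {sapiens_fraction:.0%}")   # 94%
print(f"Homo genus time as hunter-gatherers: {genus_fraction:.1%}")  # 99.6%
```

Even taking the older 300,000-year estimate for the species only drops the first figure to 96%, so the conclusion doesn’t depend on which definition we choose.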

For tens of thousands of years, no one really seemed to imagine the idea of a settled, sedentary lifestyle, until around 10,000-12,000 years ago in the Middle East.  I’ve often wondered what those first settlers were thinking.  Did they have any idea of the world-changing significance of what they were doing?  More than likely, they were solving their own immediate problems and judged the solutions by the immediate payoff.

The earliest sedentary, or semi-sedentary, culture appears to have been a group we now call the Natufians.  Living on the east coast of the Mediterranean in what is now Israel, Lebanon, and Syria, they were at a nexus of animal migrations and, in their time, in a lush environment.  Life for them was relatively good.  They appear to have gotten a sedentary lifestyle effectively for free, in other words, without having to farm for it.

Then the climate started to change.  An event called the Younger Dryas cooled the world for a brief period (brief in geological time; over a millennium in human time), but it was long enough to endanger the easy lifestyle the Natufians had probably become used to.  After centuries or millennia of living in a sedentary environment, they likely had little or no knowledge of how to live the way their ancestors had.

Victims of circumstance, they were forced to innovate, and agriculture emerged.  Maybe.  This is only one possible scenario, but it strikes me as a very plausible one.  The earliest evidence of nascent agriculture reportedly appears in that region in that period.

Early proto-writing from Kish, c. 3500 BC
Image credit: Locutus Borg via Wikipedia

Another development that took a long time was writing.  The oldest settlements arose several thousand years before writing developed.

The traditional view of the development of writing was that it evolved from pictures.  But as Mark Seidenberg points out in his book, Language at the Speed of Sight, picture drawing is far more ancient than writing.  The oldest cave art goes back 40,000 years, but what we call writing only arose about 5000 years ago, in Mesopotamia according to most experts (although some Egyptologists insist the Egyptian system came first).

It appears that the mental jump from pictures to symbols representing concepts was not an easy transition.  What caused it?  Seidenberg presents an interesting theory developed by archaeologist Denise Schmandt-Besserat.

Starting around 8000 BC, people in the Middle East started using small clay figures, called “tokens” today, as an accounting tool.  The tokens were simple shapes such as cones, disks, or shell-like forms.  A token of a particular shape represented something like a sheep, an amount of oil, or some other trade commodity.  Pragmatic limitations in producing the tokens kept their shapes simple, rather than detailed depictions of what they represented.

A number of tokens were placed in sealed clay containers, presumably one for each actual item.  The container was sent along with a trade shipment so the recipient would know they were receiving the correct items in the correct amounts.  In time, in order to know what kinds of tokens were in a particular container, a 2D impression, a picture of the token, was often made on the container, in essence a label indicating which tokens it contained.

It then gradually dawned on people that they could get by with just the labels, with the token shape and some indicator of quantity.  No container or actual physical tokens required.  According to the theory, written symbolic representation of concepts had arrived.

The earliest proto-writing systems were a mixture of symbols and pictures.  Over time, the picture portions did evolve into symbols, but only after the conceptual breakthrough of the symbols had already happened.

The early Bronze Age writing systems were difficult, requiring considerable skill to write or read.  Reading and writing were effectively specialty skills, requiring a class of scribes to do the writing and later reading of messages and accounts.  It took additional millennia for the idea of an alphabet, with a symbol for each language sound, to take hold.

The earliest known alphabet was the Proto-Sinaitic script found in the Sinai peninsula, dating to sometime around 1800-1500 BC.  It appears to have been the precursor to the later Canaanite script, which itself was a precursor to the Phoenician and Hebrew alphabets that arose around 1100-1000 BC.  The Phoenicians were sea traders and spread their alphabet around the Mediterranean.  The Greeks would adapt the Phoenician alphabet, add vowels to it (a necessity driven by the fact that Greek was a multisyllabic language, as opposed to the Semitic languages, which were dominated by monosyllabic words), and then use it to produce classical Greek civilization.

The development of these alphabets would lead to a relative explosion in ancient literature.  This is why studying Bronze Age societies (3300-1200 BC) is primarily an exercise in archaeology, but studying the later classical ages of Greece and Rome is primarily about studying historical narratives, supplemented by archaeology.

Why did so much of this take place in the Middle East?  Probably because, for thousands of years, the Middle East lay at the center of the world, a nexus of trading paths and ideas.  It seems entirely possible to me that some of these breakthroughs happened in other lands, but that we first find archaeological evidence for them in the Middle East because they were imported there.  The Middle East only lost this central role in the last 500 or so years, a result of the European Age of Exploration and the moving of world trade to the seas.

So, are there any new ideas, any new basic breakthroughs on the scale of agriculture or writing that are waiting for us, that we simply haven’t conceived of yet?  On the one hand, you could argue that the invention of the printing press in the 15th century, along with the rise of the internet in the last couple of decades, and the dramatically increased collaboration they bring, have ensured that the low-hanging fruit has been picked.

On the other hand, you could also argue that all of these systems are built on our existing paradigms, paradigms so ingrained in our cognition that they may blind us to the breakthroughs waiting to happen.  We don’t know what we don’t know.

It’s worth noting that the execution of agriculture and writing are not simple things.  Most of us, if dropped onto an ancient farm, despite the techniques being much simpler than modern farming, would have no idea where to even begin.  Or know how to construct an appropriate alphabet for whatever language was in use at the time.  (Seidenberg points out that not all alphabets are useful for all languages.  The Latin alphabet this post is written in may be awkward for ancient Sumerian or Egyptian.)

It may be that the idea of farming or writing did occur to people in the paleolithic, but they simply had no conception of how to make it happen.  In this view, these seeming breakthroughs are really the result of incremental improvements, none of which individually were that profound, that eventually added up to something that was profound.  Consider again the two theories above on how farming and writing came about.  Both seem more plausible than one lone genius developing them out of nothing, primarily because they describe incremental improvements that eventually add up to a major development.

Ideas are important.  They are crucial.  But alone, without competence, without the underlying pragmatic knowledge, they are impotent.  On the other hand, steady improvements in competence often cause us to stumble on profound ideas.  I think that’s an important idea.

Unless of course, I’m missing something?

Posted in History | Tagged , , , | 22 Comments

The extraordinarily low probability of intelligent life

Marc Defant gave a TEDx talk on the improbable events that had to happen in our planet’s history for us to eventually evolve, along with the implications for other intelligent life in the galaxy.

I find a lot to agree with in Defant’s remarks, although there are a couple points I’d quibble with.  The first, and I’m sure a lot of SETI (Search for Extraterrestrial Intelligence) enthusiasts will quickly point this out, is that we shouldn’t necessarily use the current lack of results from SETI as a data point.  It’s a big galaxy, and within the conceptual space where SETI could ever pay off, we shouldn’t necessarily expect it to have done so yet.

My other quibble is that Defant seems to present the formation of our solar system as a low probability event, or maybe he means a solar system with our current metallicity.  I can’t really see the case for either being unlikely.  There are hundreds of billions of stars in our galaxy, most with some sort of attending solar system.  So I’m not sure where he’s coming from on that one.

My own starting point for this isn’t SETI, but the fact that we have zero evidence for Earth having ever been colonized.  If the higher estimated numbers of civilizations in the galaxy are correct, the older ones should be billions of years older than we are.  They’ve had plenty of time to have colonized the entire galaxy many times over, even if 1% of lightspeed is the best propagation rate.
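The timescales here are worth a back-of-envelope check.  A minimal sketch, using a rough 100,000 light-year galactic diameter and an assumed one-billion-year head start (both round numbers of my choosing):

```python
# Even at 1% of lightspeed, crossing the galaxy takes only a small
# fraction of the head start an older civilization would have.
GALAXY_DIAMETER_LY = 100_000      # rough diameter of the Milky Way
SPEED_FRACTION_C = 0.01           # 1% of lightspeed
HEAD_START_YEARS = 1_000_000_000  # assumed billion-year head start

crossing_years = GALAXY_DIAMETER_LY / SPEED_FRACTION_C
print(f"Galaxy crossing time: {crossing_years:,.0f} years")  # 10,000,000 years
print(f"Fraction of head start: {crossing_years / HEAD_START_YEARS:.0%}")  # 1%
```

Actual colonization would take longer than a straight crossing, with stops to build at each system, but even generous wave-of-expansion models only multiply this by a modest factor, leaving the head start overwhelmingly sufficient.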

The usual response is that maybe they’re not interested in colonizing the galaxy, not even with their robotic progeny.  That might hold if there is one other civilization, but if there are thousands, hundreds, even a few dozen?  Across billions of years?  The idea that every other civilization wouldn’t be interested in sending its probes out throughout the galaxy seems remote, at least to me.

But to Defant’s broader point about the probability of intelligent life evolving, there are many events in our own evolutionary history that, if we were to rewind things, might never happen again.

Life seems to have gotten an early start on Earth.  Earth is roughly 4.54 billion years old, and the earliest fossils date to 3.7 billion years ago.  With the caveat that we’re unavoidably drawing conclusions from a sample of one planet’s history, the early start of life here seems promising for its likelihood under the right conditions.

But there are many other developments that seem far less certain.

One crucial step was the evolution of photosynthesis, at least 2.5 billion years ago.  The development of photosynthesis gave life a much more reliable energy source than what was available before, converting sunlight, water, and carbon dioxide into sugars.

And its waste product, oxygen, started the process of oxygenation, increasing the levels of oxygen in Earth’s atmosphere, which would be very important as time went on.  The early atmosphere didn’t have much oxygen.  Indeed, the rise of oxygen levels may have originally been a serious problem for the life that existed at the time.  But life adapted and eventually harnessed oxygen to extract free energy far more quickly.

The good news with photosynthesis is that there are multiple chemical pathways for it, and it’s possible it evolved multiple times, making it an example of convergent evolution.  That means photosynthesis might be a reasonably probable development.  Still, oxygen-producing photosynthesis doesn’t seem to have arisen until the Earth was more than halfway through its current history, which doesn’t make it seem very inevitable.

The rise of eukaryotes may be a more remote probability.  The earliest life forms were simple prokaryotes.  Eukaryotes, cells with organelles (complex specialized compartments), arose 1.6-2.1 billion years ago.  All animal and plant cells are eukaryotes, making this development a crucial building block for later complex life.

Eukaryotes are thought to have been the result of one organism attempting to consume another, but somehow entering into a symbiotic relationship with it instead.  This low probability accident may have happened only once, although no one knows for sure.

Yet another crucial development was sexual reproduction, arising 1-1.2 billion years ago, or when Earth was 73% of its current age.  Sexual reproduction tremendously increased the amount of variation in offspring, which arguably accelerated evolution.  Who knows how long subsequent developments might have taken without it?

Oxygen had been introduced with the rise of certain types of photosynthesis, but due to geological factors, oxygen levels remained relatively low by current standards until 800 million or so years ago, when it began to rise substantially, just in time for the development of complex life.  The Cambrian explosion, the sudden appearance of a wide variety of animal life 540-500 million years ago, would not have been possible without these higher oxygen levels.

Complex life (animals and plants) arose in the last 600-700 million years, after the Earth had reached 84% of its current age.  When you consider how contingent complex life is on all the milestones above, its development looks far from certain.  Life may be pervasive in the universe, but complex life is probably relatively rare.
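These “percent of current age” figures follow directly from the dates quoted above.  A quick sketch, using 4.54 billion years for Earth’s age and one date from each range (the small differences from the rounded figures in the text come from where in each range you start):

```python
# Fraction of Earth's history already elapsed when each milestone occurred.
EARTH_AGE_GY = 4.54  # Earth's age in billions of years

def fraction_elapsed(gya):
    """Fraction of Earth's age elapsed for an event `gya` billion years ago."""
    return (EARTH_AGE_GY - gya) / EARTH_AGE_GY

milestones = {
    "eukaryotes": 1.6,           # later end of the 1.6-2.1 Gya range
    "sexual reproduction": 1.2,  # later end of the 1.0-1.2 Gya range
    "complex life": 0.7,         # later end of the 0.6-0.7 Gya range
}
for name, gya in milestones.items():
    print(f"{name}: {fraction_elapsed(gya):.1%}")
```

This gives roughly 74% for sexual reproduction and 85% for complex life, within a percent of the figures in the text depending on rounding.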

Okay, but once complex life developed, how likely is intelligent life?  There are many more low probability events even within the history of animal life.

Earth’s environment just so happens to be mostly aquatic, providing a place for life to begin, but with enough exposed land to allow the development of land animals.  In general, land animals are more intelligent than marine ones.  (Land animals can see much further than marine ones, increasing the adaptive benefits of being able to plan ahead.)  A 100% water planet may have limited opportunities for intelligence to develop.  For example, mastering fire requires being in the atmosphere, not underwater.

Defant mentions the asteroid that took out the dinosaurs and gave mammals a chance to expand their ecological niche.  Without an asteroid strike of just the right size, mammals might not have ascended to their current role in the biosphere.  We might still be small scurrying animals hiding from the dinosaurs if that asteroid had never struck.

Of course, there have been a number of intelligent species that have evolved, not just among mammals but also among some bird species, the surviving descendants of dinosaurs.  Does this mean that, given the rise of complex life, human level intelligence is inevitable?  Not really.  While there are many intelligent species (dolphins, whales, elephants, crows, etc), the number of intelligent species that can manipulate the environment is much smaller, pretty much limited to the primates.

(Cephalopods, including octopuses, can manipulate their environment, but their short lives and marine environment appear to be obstacles for developing a civilization.)

Had our early primate ancestors not evolved to live in trees, developing a body plan to climb and swing among branches, we wouldn’t have the dexterity we have, the 3D vision, or the metacognitive ability to assess our confidence in making a particular jump or other move.  And had environmental changes not driven our later great ape ancestors to live in grasslands, forcing them to walk upright and freeing their hands to carry things or manipulate the environment, a civilization-building species may never have developed.

None of this is to say that another civilization-producing species can’t develop through an utterly different chain of evolutionary events.  The point is that our own chain is a series of many low probability events.  In the 4.54 billion years of Earth’s history, only one species, among the billions that evolved, ever developed the capability of symbolic thought, the ability to have language, art, mathematics, and all the other tools necessary for civilization.

Considering all of this, it seems like we can reach the following conclusions.  Microscopic single celled life is likely fairly pervasive in the universe.  A substantial subset of this life probably uses some form of photosynthesis.  But complex life is probably rare.  How rare we can’t really say with our sample of one, but much rarer than photosynthesis.

And intelligent life capable of symbolic thought, of building civilizations?  I think the data is telling us that this type of life is probably profoundly rare.  So rare that there’s likely not another example in our galaxy, possibly not even in the local group, or conceivably not even in the local Laniakea supercluster.  The nearest other civilization may be hundreds of millions of light years away.

Alternatively, it’s possible that our sample size of one is utterly misleading us and there actually are hundreds or even thousands of civilizations in the galaxy.  If so, then given the fact that they’re not here, interstellar exploration, even using robots, may be impossible, or so monstrously difficult that hardly anyone bothers.  This is actually the scenario that SETI is banking on to a large extent.  If true, our best bet is to continue searching with SETI, since electromagnetic communication may be the only method we’ll ever have to interact with them.

What do you think?  Is there another scenario I’m missing here?

Posted in Space | Tagged , , , , , , , | 19 Comments

The layers of emotion creation

Image credit: Toddatkins via Wikipedia

What are emotions?  Where do they come from?  Are they something innate or something we learn?  The classic view is that they’re precognitive impulses that happen to us.  If so, this would imply that they have specific neural signatures.

Early in her career, psychologist Lisa Feldman Barrett attempted to isolate the neural basis of emotions using brain scans, but discovered no consistent patterns for any one emotion.  For example, the neurons involved in anger were not consistent in every brain, or even in every instance of anger in a particular individual.  No individual neuron consistently fired in every case.

This evidence seemed to line up with what she had already found when attempting to identify the behavioral signatures of emotions.  After discovering problems with research that showed the same emotional faces to people from multiple cultures, she eventually concluded that reading real emotions via facial expressions has little or no objective basis.  It only seems to work when the other person is from a similar culture; objectively reading emotions from the faces of people from other cultures proved to be impossible.

Each emotion, she found, was more a category of neural activity rather than a discrete process.  Their diffuse nature led her to conclude that emotions are mental concepts.  Similar to other types of concepts such as chairs, dogs, or hiking, they are things we learn, sensory predictions our brains develop over a lifetime.

Somewhat in opposition to her understanding is the one championed by the recently deceased neuroscientist Jaak Panksepp.  Focusing on animal studies, Panksepp identified what he referred to as seven primary emotions, labeling them in all caps as: RAGE, FEAR, SEEKING, LUST, CARE, PANIC/GRIEF, and PLAY.  Panksepp located these primary impulses in sub-cortical and mostly sub-cerebral circuits arising from the brainstem and mid-brain regions, deep regions that are difficult to reach with brain scans.

So who is right?  As is often the case, binary thinking here can mislead us.  I think it helps to review the details on how they agree and disagree.

Barrett admits in her book, How Emotions Are Made, that most animals have circuits for what are commonly called the “Four Fs”: fighting, fleeing, feeding, and mating.  It doesn’t seem like much of a stretch to link Panksepp’s RAGE to fighting, FEAR to fleeing, LUST to mating, and to see feeding as a result of a type of SEEKING.  In other words, Barrett’s admission of the Four Fs gets her halfway to Panksepp’s version of innate impulses.  It’s not hard to imagine that mammals and birds have innate impulses for CARE and PANIC/GRIEF.

However, there is a big difference in how Barrett and Panksepp label these circuits.  Barrett makes a sharp distinction between emotions and affects.  She regards affects as more primal states and characterizes them in a fairly minimalist manner, sticking to the traditional description with two dimensions: valence and arousal.  Valence is an assessment of how good or bad a stimulus is.  And arousal is the level of excitement the stimulus causes.  Barrett either is unaware of or disagrees with the idea put forth by other scientists that affects also include an action motivation.  She sees affects as an early pre-emotional stage of processing.

Panksepp in contrast, in his own book, The Archaeology of the Mind, seemed to view the words “emotion” and “affect” as almost synonymous, or perhaps saw emotions as merely a type of affect.  Panksepp’s view was that the primal drives he identified were feeling states, that they include some form of lower level consciousness in the sub-cerebral regions he discussed.

In this sentiment, Panksepp stood in opposition to Barrett’s view that these are best described as “survival circuits”, non-conscious reflexes.  From what I’ve read, the vast majority of neurobiologists would agree with Barrett on this point, that consciousness is a cortical phenomenon, at least in mammals, and that sub-cortical, or at least sub-cerebral structures are primarily reflexive in nature.

Interestingly, both Barrett and Panksepp push back against the popular notion that emotions, particularly fear, originate from the amygdala.  In Panksepp’s view, the amygdala seems like more of a linkage system, linking cerebral memories to the primal impulses he identifies as coming from lower level structures.  Both acknowledge that patients with destroyed amygdalae can still experience fear, although generally only in response to a subset of the stimuli that cause it in healthy people.

And Panksepp admits early in his book that social learning has a major role to play in the final experience of felt emotions in the neocortex, which seems like a major concession to Barrett’s views.

One significant difference between them, which may come down to a definitional dispute, is whether animals have emotions.  Panksepp, using his understanding of emotions, unequivocally considered animals to have them.  Barrett, while admitting some great apes may come close, doesn’t think animals have much beyond what she considers affects.  In her view, most species lack the neural machinery to learn emotions the way a human does.

My own reaction to all of this is similar to how I view consciousness, as a multi-level phenomenon.  It seems pointless to argue about how and where emotions arise unless we can agree on definitions.  I perceive most, though not all, of the disagreement between Barrett and Panksepp to amount to a difference in definition, in what is meant by the word “emotion”.

Regardless of how we label them, it seems like what we experience as emotions start as lower level reflexes in sub-cerebral regions, which reach cortical regions as primal feelings.  Over a lifetime, the interpretation of these primal feelings changes and evolves into what we commonly refer to as emotional feelings.

It seems like there’s a major divide here between interoception, sensory perceptions from the body, and exteroception, perceptions from distance senses such as sight, hearing, and smell.  We probably have strong innate primal responses to certain interoceptive sensations, but mostly have to learn which exteroceptive ones to link to those primal responses.

In other words, we don’t have to learn that a stinging sensation on our hand is bad.  We likely know that innately.  But we do have to learn (as I once did as a small child) that sticking our hand in an anthill will result in the bad stinging sensation.  Once we’ve done it the first time, our amygdala will link the memory of an anthill to the negative valence of a stinging hand.

So emotions, it seems to me, have an innate core and a learned response.  Crucially, introspectively separating the learned response from the innate primal feeling is extremely difficult, perhaps impossible.  This is why anthropologists can often say that emotions are not universal across all cultures.  In their final learned form, we shouldn’t expect them to be.

What do you think?  Do you see any issues with either Barrett’s or Panksepp’s view?  Or my own syncretized one?  Am I missing anything?

Posted in Mind and AI | Tagged , , , , , , , | 40 Comments

Politics is about self-interest

I’ve read a lot of history, including American history of the 18th and 19th centuries.  It’s interesting to read about the politics of these periods.  From a distance across generations and centuries, you can see the distinction between the self-interested stances people took and the rhetoric that was used to justify those stances.

An example from the 18th century was the controversy about the new federal government assuming the Revolutionary War debt from the states.  Both sides of the controversy had philosophical reasons for their position, such as concern about federal power versus the benefits of establishing faith and credit for the United States.  But in general, the states that favored the idea (called “assumption”) still had a lot of war debt, while the states that were against it had paid most or all of their debt already.

This also holds for what was the most controversial issue in early America: slavery.  People’s stance on this issue seemed to be heavily influenced by the economy of their state.  In northern industrial states, slavery was becoming less economically viable and dying out, and was therefore seen as barbaric.  However, in the largely agricultural southern states, slavery remained a major part of the economic system, and was therefore seen as a vital institution.

It’s much more difficult for us to separate the stories we tell ourselves today from the self-interested realities.  This is probably why some political scientists argue that people aren’t motivated by self-interest when they vote.  But that idea simply isn’t backed by history or psychology.

In their book, The Hidden Agenda of the Political Mind: How Self-Interest Shapes Our Opinions and Why We Won’t Admit It, Jason Weeden and Robert Kurzban argue that self-interest figures heavily into our political positions.

This isn’t something we generally do consciously.  Citing psychology research that shows we often don’t understand our own motivations, they argue that our unconscious mind settles on stances that reflect our inclusive personal interests, with “inclusive” meaning that it includes the interests of our friends and family.

We tell ourselves a high-minded story, one that we consciously believe, but like the public relations spokesperson for a large corporation, our consciousness is often uninformed about the actual reasons why the Board of Directors of our mind adopts a stance.  In other words, our self-interested positions feel like the morally right ones to have, and people opposed to our positions seem evil or stupid.

Working from this premise, and using data from the United States GSS (General Social Survey), Weeden and Kurzban proceed to show correlations between political positions and various demographic, lifestyle, and financial income factors.  They also periodically glance at broader international data and, although the specific issues and populations vary, find that the general principle holds.

They identify some broad factors that have large effects on our political positions, including things such as sexual lifestyle, membership in traditionally dominant or subordinate groups (religion, race, sexual orientation, etc), the amount of human capital we have, and financial income.

The first factor, sexual lifestyle, generally affects your attitude on a number of social issues such as abortion, birth control, pornography, and marijuana legalization.  Weeden and Kurzban break people into two broad groups: Ring-bearers and Freewheelers.

Ring-bearers tend to have fewer sexual partners across their life, generally making a commitment to one partner, marrying them, and having a family with a higher number of children.  They often strongly value their commitments (which is why they’re called “Ring-bearers”).  A major concern for Ring-bearers is the possibility of being tempted away from those commitments, having their spouse be tempted away, or their kids being tempted away from leading a similar lifestyle.

This concern often makes them want to reduce the prevalence of lifestyles that lead to such temptation, such as sexual promiscuity.  As a result, Ring-bearers tend to favor policies that make promiscuous lifestyles more costly, which is why they’re generally pro-life, oppose birth control and sexual education, and oppose things like marijuana legalization, which is perceived as facilitating promiscuity.

Of course the reasons they put forward for their stances (and consciously believe) don’t reflect this.  For the abortion stance, they’ll often argue that they’re most concerned about protecting unborn children.  But the fact that they’re usually willing to make exceptions in cases of rape or incest, where the woman’s sexual lifestyle usually isn’t a causal factor, shows their true hand.

On the other side are the Freewheelers.  Freewheelers generally lead a more active sexual lifestyle, or aspire to, or want to keep their options open for that lifestyle.  They’re less likely to marry, more likely to divorce if they do, and generally have fewer kids.

Freewheelers generally don’t want their lifestyle options curtailed, and don’t want to face moral condemnation for that lifestyle.  This generally makes them pro-choice, in favor of birth control and family planning, and in favor of things like marijuana legalization.

Like Ring-bearers, Freewheelers usually don’t admit to themselves that preserving their lifestyle options is the motivating factor behind their social stances.  Again focusing on abortion, Freewheelers usually say, and believe, that their stance is motivated by protecting women’s reproductive freedom.  But the fact that pro-choice people are often comfortable with other laws that restrict personal freedoms, such as seat belt laws or mandatory health insurance, suggests that personal freedom isn’t the real issue.

Freewheelers also often lack the private support networks that Ring-bearers typically enjoy, such as church communities, which Weeden and Kurzban largely characterize as child-rearing support groups for Ring-bearers.  This tends to make Freewheelers more supportive of public social safety net programs than Ring-bearers.

The next factor is membership in traditionally dominant or subservient groups.  “Groups” here refers to race, gender, religion, sexual orientation, immigrant status, etc.  In the US, traditionally dominant groups include whites, Christians, males, heterosexuals, and citizens, while traditionally subservient groups include blacks, Hispanics, Jews, Muslims, nonbelievers, females, gays, transsexuals, and immigrants.  It’s not surprising that which group you fall into affects your views on the fairness of group barriers (discrimination) or set-asides (such as affirmative action).

But there’s a complicating factor, and that is the amount of human capital you have.  Human capital is the amount of education you’ve attained and/or how good you are at taking tests.  Having high human capital makes you more competitive, reducing the probability that increased competition will negatively affect you.  People with high levels of human capital are more likely to favor a meritocracy.  On the other hand, having low human capital tends to make getting particular jobs or getting into desirable schools more uncertain, so increased competition from any source tends to be against your interests.

For people with high human capital in a dominant group, group barriers mean little, so people in this category tend to be about evenly split on the fairness of those barriers.  But people with low human capital in a dominant group tend to be more affected by increased competition when group barriers are reduced, making them more likely to favor retaining those barriers.

People in subservient groups tend to oppose any group barriers, or at least barriers affecting their particular group.  People in subservient groups with high human capital, once barriers have been removed, tend to favor a meritocracy and to be less supportive of specific group set-asides.  But people in subservient groups with low human capital tend to favor the set-asides.

All of which is to say, more educated people tend to be less affected by group dynamics unless they’re being discriminated against, but less educated people are more affected by those dynamics.  Less educated people discriminate more, not because they’re uneducated, but because their interests are more directly impacted by the presence or absence of that discrimination.

And finally, Weeden and Kurzban look at financial income.  It probably won’t surprise anyone that people with higher incomes are less supportive of social safety net programs, which essentially redistribute income from higher income populations to lower income ones, but that people with lower incomes are usually in favor of these programs.

Most people fall in some complex combination of these groups.  Weeden and Kurzban recognize at least 31 unique combinations in the book.  Which particular combination a person is in will define their political perspective.

For example, I’m a Freewheeler (relatively speaking), mostly in dominant groups except in terms of religion, where I’m in a subservient group (a nonbeliever), have moderately high human capital (a Master’s degree), and above average income.  Weeden and Kurzban predict that these factors would tend to make me socially liberal, modestly supportive of social safety nets, opposed to religious discrimination, in favor of meritocracy, and economically centrist.  This isn’t completely on the mark, but it’s uncomfortably close.

But since people fall into all kinds of different combinations, their views often don’t fall cleanly on the conservative-liberal political spectrum.  Why then do politics in the US coalesce into two major parties?  I covered that in another post last year, but it has to do with the way our government is structured.  The TL;DR is that the checks and balances in our system force broad, long-lasting coalitions in order to get things done, which tend to coalesce into an in-power coalition and an opposition one.

In other words, the Republican and Democratic parties are not philosophical schools of thought, but messy, constantly shifting coalitions of interests.  Republicans are currently a coalition of Ring-bearers, traditionally dominant groups, and high income people.  Democrats are a coalition of Freewheelers, traditionally subservient groups, and low income people.  There may be a realignment underway between people with low human capital in dominant groups (the white working class) and those with high human capital, but it’s too early to tell how durable it will be.

But it’s also worth remembering that 38% of the US population struggles to consistently align with either party.  A low income Freewheeler in traditionally dominant groups, or a high income Ringbearer in a traditionally subservient group, might struggle with the overall platform of either party.

So what does all this mean?  First, there’s a lot of nuance and detail I’m glossing over in this post (which is already too long).

Weeden and Kurzban admit that their framework doesn’t fully determine people’s positions and doesn’t work for all issues.  For example, they concede that people’s stances on military spending and environmental issues don’t track closely with identifiable interests, except for the small slices of the population in closely related industries.

The authors’ final takeaway is pretty dark: political persuasion is mostly futile.  The best anyone can hope to do is sway people on the margins.  The political operatives are right: electoral victory is all about turning out your own partisans, not convincing people from the other side, at least unless you’re prepared to change your own positions to cater to their interests.

My own takeaway is a little less stark.  Yes, the above may be true, but to me, when we understand the real reasons for people’s positions, finding compromise becomes more achievable if we’re flexible and creative.  For instance, as a Freewheeler, ideas like content ratings and restricting nightclubs to red-light districts suddenly seem like decent compromises, ones that don’t significantly curtail my freedom but address Ring-bearers’ concern with keeping those influences away from themselves and their families.

And understanding that the attitude of low human capital Americans toward illegal immigrants is shaped by concern for their own livelihood, rather than just simple bigotry, makes me look at that issue a bit differently.  I still think Trump is a nightmare and his proposed solutions asinine, but this puts his supporters in a new light.  Most politicians tend to be high human capital people and probably fail to adequately grasp the concerns of low human capital voters.  In the age of globalization, should we be surprised that this group has a long simmering anger toward the establishment?

In the end, I think it’s good that we mostly vote our self interest.  We typically understand our own interests, but generally don’t understand the interests of others as well as we might think.  This is probably particularly true when we assume people voting differently than us are acting against their own interests.

Everyone voting their own interests forces at least some portion of the political class to take those interests into account.  And that’s the whole point of democracy.  Admittedly, it’s very hard to remember that when elections don’t go the way you hoped they would.

Posted in Society | Tagged , , , | 64 Comments

Adding imagination to AI

As we’ve discussed in recent posts on consciousness, I think imagination has a crucial role to play in animal consciousness.  It’s part of a hierarchy I currently use to keep the broad aspects of cognition straight in my mind.

  1. Reflexes, instinctive or conditioned responses to stimuli
  2. Perception, which increases the scope of what the reflexes are reacting to
  3. Attention, which prioritizes what the reflexes are reacting to
  4. Imagination, action scenario simulations, the results of which determine which reflexes to allow or inhibit
  5. Metacognition, introspection, self reflection, and symbolic thought

Generally, most vertebrate animals are at level 4, although with widely varying degrees of sophistication.  The imagination of a typical fish is likely very limited compared to that of a dog, or a rhesus monkey.
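As a toy sketch of how these levels might fit together (all the stimuli, priorities, and payoff numbers here are invented for illustration, not from any actual model), imagination can be thought of as a veto over the reflex that attention selects:

```python
# Toy model of levels 1-4; stimuli, priorities, and payoffs are hypothetical.
REFLEXES = {                  # level 1: hardwired stimulus -> response mappings
    "shadow_overhead": "flee",
    "food_scent": "approach",
}

def perceive(raw_signals):
    # level 2: perception widens what the reflexes can react to
    return [s for s in raw_signals if s in REFLEXES]

def attend(stimuli):
    # level 3: attention prioritizes among competing stimuli
    priority = {"shadow_overhead": 2, "food_scent": 1}
    return max(stimuli, key=lambda s: priority.get(s, 0)) if stimuli else None

def imagine(action):
    # level 4: simulate the action's outcome, then allow or inhibit the reflex
    simulated_payoff = {"flee": 1, "approach": -1}  # a predator lurks near the food
    return action if simulated_payoff.get(action, 0) > 0 else None

focus = attend(perceive(["shadow_overhead", "food_scent", "noise"]))
action = imagine(REFLEXES[focus]) if focus else None
print(action)  # flee
```

The point of the sketch is just the layering: each level filters or overrides the one below it, and level 4 is the first one that evaluates an action before committing to it.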

Computers have traditionally been at level 1, in the sense that they receive inputs and generate outputs.  The algorithms between the inputs and outputs can get enormously complicated, but garden variety computer systems haven’t gotten much beyond this point.

However, newer cutting-edge autonomous systems are beginning to achieve level 2 and, depending on how you interpret their systems, level 3.  For example, self-driving cars build models of their environment as a guide to action.  These models are still relatively primitive, and still hitched to rules-based engines, essentially to reflexes, but it’s looking like that may eventually be enough to allow us to read or sleep during our morning commutes.

But what about level 4?  As it turns out, the Google DeepMind people are now trying to add imagination to their systems.

Researchers have started developing artificial intelligence with imagination – AI that can reason through decisions and make plans for the future, without being bound by human instructions.

Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.

The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven’t been specifically programmed for. Insert your usual fears of a robot uprising here.

I don’t doubt the truth of that last sentence, that this is probably going to freak some people out, such as perhaps a certain billionaire (<cough>Elon Musk</cough>).  But it’s worth keeping in mind how primitive this capability will be for the foreseeable future.

Despite the success of DeepMind’s testing, it’s still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it’s a promising start in developing AI that won’t put a glass of water on a table if it’s likely to spill over, plus all kinds of other, more useful scenarios.

I’ve argued before why I think the robot uprising concerns are vastly overstated.  But putting those reasons aside, we’re a long way from achieving human level imagination or overall intelligence.  People like Nick Bostrom point out that, taking a broad view, there’s not much difference between the intelligence of the village idiot and, say, Stephen Hawking, and that systems approaching human level intelligence may shoot through the entire human range into superhuman intelligence very quickly.

But there are multiple orders of magnitude difference between the intelligence of a village idiot and a mouse, and multiple orders of magnitude difference between the mouse and a fruit fly.  Our systems aren’t at fruit fly level yet.  (Think how much better a Roomba might be if it was as intelligent as a cockroach.)

Still, adding imagination, albeit a very limited one, may be a very significant step.  I think it’s imagination that provides the mechanism for inhibiting or allowing reflexive reactions, essentially turning a reflex into an affect, a feeling.  These machine affect states may be utterly different from what any living system experiences, so different that we may not be able to recognize them as feelings, but the mechanism will be similar.

It may force us to reconsider our intuitive sense of what feelings are.  Would it ever be productive to refer to machine affect states as “feelings”?  Can a system whose instincts are radically different from a living system’s have feelings?  Is there a fact of the matter answer to this?

Posted in Mind and AI | Tagged , , , | 21 Comments

The difficulty of interstellar travel for humans

Futurism.com has an article reviewing the results of a survey they conducted with their readers asking when the first human might leave the solar system.  The leading answer was after the year 2100, which makes sense given our current struggles just getting humans back beyond low Earth orbit.  But I think human interstellar travel is far more difficult than most people realize.

First, when pondering this question, we can forget about the traditional sci-fi answers of hyperspace, warp drive, or similar notions.  These concepts are fictional plot devices with little or no basis in science.  Even concepts that scientists have seriously considered, such as wormholes and Alcubierre warp drives, are extremely speculative, requiring unproven aspects of physics to be true.

Getting anything like these types of technology will require a new physics.  Undoubtedly, we will learn new things about physics in the decades and centuries to come, but the probability that what we learn will enable interstellar travel to function like sea travel (the preoccupation of most space opera stories) is an infinitesimal slice of the possibilities.

The only way to speculate scientifically on this kind of thing is to take the science that we currently have and try to extrapolate from it.  When we do that, the obstacles to human interstellar flight seem pretty daunting.

Worrying about the speed of light limit, which current physics tells us is the ultimate speed limit in the universe, is sour grapes to a large extent.  Even getting to an appreciable percentage of the speed of light turns out to require astounding amounts of energy.  Doing it with our current chemical rocket technology is a lost cause.  According to Paul Gilster, in his book Centauri Dreams: Imagining and Planning Interstellar Exploration (which I recommend for anyone interested in this subject, as well as his blog), it would take more mass than exists in the visible universe to propel a chemical rocket to a substantial fraction of the speed of light.
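To see why, a quick back-of-envelope calculation with the Tsiolkovsky rocket equation helps.  (The ~4.5 km/s exhaust velocity is a typical chemical-rocket figure I’m assuming here, not a number from the book.)

```python
import math

c = 299_792_458.0      # speed of light, m/s
v_e = 4500.0           # assumed chemical exhaust velocity, m/s
delta_v = 0.1 * c      # target: 10% of light speed

# Tsiolkovsky: delta_v = v_e * ln(m0/m1)  =>  m0/m1 = exp(delta_v / v_e)
# exp() of ~6662 overflows a float, so work in log10 instead.
log10_ratio = (delta_v / v_e) / math.log(10)
print(f"mass ratio ~ 10^{log10_ratio:.0f}")  # ~10^2893
# For comparison, the observable universe holds very roughly 10^53 kg.
```

The exponential in the rocket equation is the whole problem: every extra increment of delta-v multiplies the required fuel mass, which is why no refinement of chemical propulsion can close a gap this large.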

An artist’s concept of nuclear pulse propulsion.
Image credit: NASA

Of course, there are better, more efficient propulsion options that might eventually be available.  For purposes of this post, let’s leap to the most efficient and plausible near term option, nuclear pulse propulsion.  This is a refined version of an original idea that involved lobbing nuclear bombs behind a spacecraft to push it forward.

Gilster, in his book, notes that a nuclear pulse propulsion spacecraft would need a mass ratio of 100:1 to reach 10% of light speed.  This means that for every kilogram of spacecraft and payload, you’d need 100 kilograms of fuel.  Initially, that doesn’t sound too bad, since the Apollo missions had an overall mass ratio of 600:1.  But that was for the entire mission, and all we’ve considered so far is the mass ratio to accelerate to 10% of light speed.  We haven’t talked about slowing down at the destination.

In space, given inertia and no air friction, slowing down takes just as much energy as speeding up.  And the kicker is that you have to accelerate the fuel you’ll later need to decelerate.  So slowing down doesn’t just double the mass ratio from 100:1 to 200:1.  The deceleration fuel has to be on the “1” side in the initial acceleration ratio.  That means the ratio for the overall mission (and we’re only talking about a one way mission here), has to be squared, taking it from 100:1 to 10,000:1.

Traveling at 10% lightspeed gets us to Proxima Centauri, the nearest star to the sun, in about 43 years.  When you consider what kind of living space a human crew would need for that time span, and multiply it out by 10,000, an interstellar mission starts to look like the most expensive thing human civilization might ever attempt.  It gets worse if we try to shorten the time span.  Increasing the speed to 20% of light speed raises the ratio to 100,000,000:1.
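The arithmetic above is easy to check.  (The 100:1 per-leg figure is Gilster’s; the ~4.24 light-year distance to Proxima Centauri is a standard value I’m supplying.)

```python
ratio_01c = 100                 # fuel:payload per leg at 0.1c (Gilster's figure)
mission_01c = ratio_01c ** 2    # decel fuel must itself be accelerated first
ratio_02c = ratio_01c ** 2      # doubling delta-v squares an exponential mass ratio
mission_02c = ratio_02c ** 2    # and the decel leg squares it again

print(mission_01c)              # 10000
print(mission_02c)              # 100000000
print(4.24 / 0.1)               # ~42.4 years to Proxima at 0.1c, coasting
```

The key design fact is that mass ratios compound multiplicatively, not additively: each stage of the mission multiplies the total, which is why a one-way trip at 0.2c balloons to a hundred-million-to-one ratio.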

Antimatter technology might improve the mass ratio substantially, but it adds new difficulties.  Producing antimatter takes tremendous amounts of energy.  It would have to be manufactured and stored in deep space, since any containment failure of a substantial amount would likely result in a gigaton-level explosion.  We might save on the mass ratio of the spacecraft, but only at the expense of vast resources dedicated to generating and storing the fuel.  And human crews would likely have to be heavily shielded from the gamma rays generated by antimatter reactions, increasing mass throughout the craft.

No discussion of this type is complete without at least mentioning the Bussard ramjet, the idea of a spacecraft with an immense ram scoop that takes in interstellar hydrogen to use as fuel.  There was a lot of excitement for this concept in the 60s and 70s, but further study has shown that the interstellar medium isn’t nearly as dense as the initial design hoped, and many think the ram scoop would generate as much drag as thrust.

Other options are to forego rocketry altogether and go for something like light sails.  Robert Forward, decades ago, put forth a design where a gigantic laser on Mercury would send out a beam to an interstellar light sail spacecraft, steadily accelerating it.  At some point, the craft would separate its sail into two components, one of which would be hit by the laser and reflect it back to the remaining component attached to the craft, decelerating it.  Forward’s design is ingenious, but it would still require titanic amounts of energy, and precise coordination across centuries and light years to work.

Things get a lot easier if we just think about sending uncrewed probes.  That’s the current direction of the Breakthrough Starshot initiative.  The idea is to propel a small, perhaps gram-sized probe to 20% of light speed using Earth-based lasers.  The probes would reach Proxima Centauri in about 22 years, taking pictures and readings as they fly through the system and transmitting the information back to Earth.  There are still major technological hurdles to overcome with this idea, but they all seem achievable within reasonable time periods and with reasonable amounts of energy.
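A rough timeline under these assumptions (the ~4.24 light-year distance is a standard figure; the laser acceleration phase is negligible, a matter of minutes):

```python
distance_ly = 4.24     # Proxima Centauri, light-years
cruise = 0.2           # cruise speed as a fraction of light speed

travel_years = distance_ly / cruise   # ~21.2 years in transit
signal_years = distance_ly            # the data comes home at light speed
print(round(travel_years, 1))                 # 21.2
print(round(travel_years + signal_years, 1))  # 25.4 years from launch to data
```

So even in the optimistic uncrewed case, a full mission cycle, from launch to receiving results, spans a quarter century, which is worth keeping in mind when comparing it to the crewed alternatives above.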

The big drawback to the Starshot design is that it has no way to slow the probe down, so everything would have to be learned in the few hours available as it sped through the target system.  An alternate design has been proposed, using the unique geometry of the Alpha Centauri / Proxima Centauri system to slow the probe down, but at the cost of increasing the travel time to over a century.

But once we give up the idea of crewed missions, the rocket solutions actually become more plausible.  A 10,000:1 ratio doesn’t seem problematic if the ultimate payload is a one gram probe.  Even the 100,000,000:1 ratio associated with a 20% light speed mission starts to look conceivably manageable.

And when we consider the ongoing improvements in artificial intelligence and the idea of probes building their own daughter probes to explore the destination system, and perhaps even to eventually launch toward systems further out, the possibilities start to look endless.

All of which is to say that it’s much easier to conduct interstellar exploration with robots, particularly very small ones, than with humans.  It seems likely that we’ll be exploring the stars with robots for a long time before humans get there, if they ever do.

Unless of course I’m missing something?

Posted in Space | Tagged , , , | 24 Comments