Frans de Waal on animal consciousness

Frans de Waal is a well-known proponent of the view that animals are much more like us than many people are comfortable admitting.  In this short two-minute video, he gives his reasons for concluding that at least some non-human animals are conscious.  (Note: there’s also a transcript.)

de Waal is largely equating imagination and planning with consciousness, which I’ve done myself on numerous occasions.  It’s a valid viewpoint, although some people will quibble with it since it doesn’t necessarily include metacognitive self awareness.  In other words, it doesn’t have the full human package.  Still, the insight that many non-humans have imagination, whether we want to include it in consciousness or not, is an important point.

As I’ve noted many times before, I think the right way to look at this is as a hierarchy or progression of capabilities.  In my mind, this usually has five layers:

  1. Survival circuit reflexes
  2. Perception: predictive sensory models of the environment, expanding the scope of what the reflexes can react to
  3. Attention: prioritizing what the reflexes react to
  4. Imagination / sentience: action scenario simulations to decide which reflexes to allow or inhibit, decoupling the reflexes into feelings, expanding the scope of what the reflexes can react to in time as well as space
  5. Metacognition: theory-of-mind self awareness, symbolic thought

There’s nothing crucial about this exact grouping.  Imagination in particular could probably be split into numerous capabilities.  And I’m generally ignoring habitual decisions in this sketch.  The main point is that our feelings of consciousness come from layered capabilities, and sharp distinctions between what is or isn’t conscious probably aren’t meaningful.
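
For the programmers out there, here’s a toy sketch of the layering in Python.  It’s purely illustrative, my own invention rather than a model of any real nervous system: each layer takes the output of the one below it and widens, or vets, what the base reflexes end up reacting to.

```python
def layer1_reflex(stimulus: str) -> str:
    """Survival circuit: a fixed stimulus -> response mapping."""
    return {"looming-shadow": "flee", "food-odor": "approach"}.get(stimulus, "ignore")

def layer2_perception(raw: str) -> str:
    """Predictive sensory model: classify raw input into a stimulus the reflexes know."""
    return "looming-shadow" if "shadow" in raw else "food-odor"

def layer3_attention(raw_inputs: list[str]) -> str:
    """Prioritization: pick the most urgent input to hand down the stack."""
    return min(raw_inputs, key=lambda r: 0 if "shadow" in r else 1)

def layer4_imagination(stimulus: str) -> str:
    """Simulate the reflexive action before committing; allow or inhibit it."""
    action = layer1_reflex(stimulus)
    return action if action != "flee" else "freeze"  # imagined scenario: hiding beats running

# A creature with layers 1-3 reacts directly to the prioritized percept;
# one with layer 4 can veto the reflex based on a simulated outcome.
percept = layer2_perception(layer3_attention(["faint food scent", "moving shadow"]))
print(layer1_reflex(percept), "->", layer4_imagination(percept))  # flee -> freeze
```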

It’s also worth noting that there are many layers to self awareness in particular.  A creature with only layers 1-3 will still have some form of bodily self awareness.  One with 4 may also have attention and affect awareness, each arguably another layer of self awareness.  Only with 5 do we get full-bore mental self awareness.

It seems like de Waal’s point about observing the capabilities in animal behavior to determine whether they’re conscious will also eventually apply to machines.  But although machines will have their own reflexes (their programming), those reflexes won’t necessarily be oriented toward their survival, which may prevent us from intuitively seeing them as conscious.  Lately I’ve been wondering if “agency” might be a better word for these types of systems, ones that might have models of themselves and their environment, but don’t have animal sentience.

Of course, the notion that comes up in opposition to this type of assessment is the philosophical zombie, specifically the behavioral variety, a system that can mimic consciousness but has no inner experience.  But if consciousness evolved, for it to have been naturally selected, it would have had to produce beneficial effects, to be part of the causal structure that produces behavior.  The idea that we can have its outputs without some version of it strikes me as very unlikely.

So in general, I think de Waal is right.  Our best evidence for animal consciousness lies in the capabilities they display.  This views consciousness as a type of intelligence, which I personally think is accurate, although I know that’s far from a universal sentiment.

But is this view right?  Or is consciousness something above and beyond the functionality of the system?  If it is, then what is its role?  How widespread would it be in the animal kingdom?  And could a machine ever have it?


Recommendation: Edges (Inverted Frontier Book 1)

A few weeks ago I recommended Linda Nagata’s novel Vast, the final book of her Nanotech Succession series.  Edges is both a sequel to that book and the first installment in a new series, Inverted Frontier.

As in Vast, this is a future where mind uploading and copying are possible, where multiple copies of someone’s mind can run in computer systems, and where new bodies can be grown on demand, so that everyone is, in principle, immortal.  This is handy, because FTL (faster-than-light) travel is impossible, and interstellar travel takes decades or centuries.

I said immortal “in principle” because this is a dangerous universe and survival is far from guaranteed.  As humans spread throughout interstellar space, the systems closest to the interior of human-occupied space build vast Dyson swarms, called cordons, that effectively wall themselves off from the rest of the galaxy.

The frontier worlds on the outskirts watch through telescopes as the inner systems disappear behind the cordons.  Then, centuries later, the stars, one by one, reappear, apparently indicating that the cordons have collapsed.  The frontier worlds do not know if anyone survives in these regions.

During this time, the frontier worlds have had their own problems.  A fleet of automated alien ships, called the Chenzeme, appears and begins destroying the human colonies.  By the time of Vast, most of humanity has been wiped out.  Vast is the story of characters setting out to discover the source of the Chenzeme.

In Edges, two characters from Vast, or more accurately, two copies of characters from that earlier story, reunite and decide to explore in the direction of the inner worlds, to learn what has happened in those ancient systems, now known as the Hallowed Vasties.  They recruit a crew and begin a centuries-long journey to discover what remains in those systems, and whether anyone besides the people of their own world on the edge of human space still survives.

As in Vast, there’s a lot going on in this novel.  The concept of characters spreading themselves out among several copies, and syncing memories as needed, is explored in detail.  One of the characters, Urban, has split himself into several specialty copies in order to take over and control one of the Chenzeme coursers.  Another, Clemantine, has a copy of herself that is forced to become less human, and so refuses to share her memories with her other selves to avoid contaminating them.

As in the earlier book, nanotechnology features heavily.  Ships and systems in these stories are living things, growing and adapting as needed for different situations.  And battles often happen at the microscopic level between legions of nanotech entities.

A substantial part of the book is spent exploring the society that ends up forming in the ship heading into the inner systems.  A little too much for my tastes.  But unlike Vast, this book has a healthy dose of action, including an explicit villain.

On the edges of the inner systems, the ship encounters a powerful being, shattered from an earlier defeat and exiled, but hungry for revenge and eager to find a ship to take him back.  Naturally our protagonists aren’t willing, which results in a contest that forms the central conflict of the book.

As advanced as the technology is in the story, we get the definite impression that the shattered being is far more advanced, and powerful, even in his compromised state.  And we get glimpses of technology far in advance of the culture that the main characters come from.  In this first book, they are only glimpses, with the promise of more to come.

If philosophical and mind-bending space opera is your cup of tea, then I highly recommend Nagata’s books.  As I noted before, I think she is an underappreciated talent.  The imagination shown in her work is as sublime as that of anyone else I’ve read.


The brain is a computer, but what is a computer?

Kevin Lande has an article up at Aeon which is one of the best discussions of the brain as a computational system that I’ve seen in a while.  For an idea of the spirit of the piece:

The claim that the brain is a computer is not merely a metaphor – but it is not quite a concrete hypothesis. It is a theoretical framework that guides fruitful research. The claim offers to us the general formula ‘The brain computes X’ as a key to understanding certain aspects of our astonishing mental capacities. By filling in the X, we get mathematically precise, testable and, in a number of cases, highly supported hypotheses about how certain mental capacities work. So, the defining claim of the theoretical framework, that the brain computes, is almost certainly true.

Though we are in a position to say that it is likely true that the brain is a computer, we do not yet have any settled idea of what it means for the brain to be a computer. Like a metaphor, it really is unclear what the literal message of the claim is. But, unlike a metaphor, the claim is intended to be a true one, and we do seek to uncover its literal message.

Even if you agree to the literal claim that the brain computes since, after all, our best theories hold that it computes X, Y and Z, you might be unsure or disagree about what it is for something to be a computer. In fact, it looks like a lot of people disagree about the meaning of something that they all agree is true. The points of disagreement do not spoil the point of agreement. You can have a compelling reason to say that a claim is true even before you light upon a clear understanding of what that claim means about the world.

The overall article is pretty long, but if you have strong opinions on this, I encourage you to read the whole thing.

I’m pretty firmly convinced that the brain is a computational system.  When I read about the operations of neurons and synapses, and the circuits they form, the idea that I’m looking at computation is extremely compelling.  It resonates with what I see in computer engineering with the logic gates formed by transistor circuits, or with the vacuum tubes and mechanical switches of earlier designs.
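
That resonance isn’t new.  McCulloch and Pitts pointed out back in the 1940s that simple threshold neurons can implement Boolean logic.  Here’s a minimal illustration in Python, a toy rather than a claim about how real neurons are wired:

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: fire (1) if the weighted sum of
    inputs reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The same threshold unit acts as different logic gates,
# depending only on its weights and threshold:
AND = lambda a, b: neuron([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    neuron([a],    weights=[-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```

Real neurons are far messier than this, of course, but the point stands: threshold devices, whether cells, vacuum tubes, or transistors, can implement the same logical operations.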

Apparently I’m not the only one, because neuroscientists talk freely and regularly about computation in neural circuits.  It’s not really a controversial idea in neuroscience.  The computational paradigm is simply too fruitful scientifically, and no one has really found a viable alternative theoretical framework.

Of course, many neuroscientists are careful to stipulate that the brain is not like a commercial digital computer, and I do think this is an important point.  Brains aren’t Turing machines, and they certainly don’t have the von Neumann architecture that most modern commercial computers resemble.  But as the article discusses, these systems are a small slice of possible computational systems.

If we restrict the word “computation” to only being about what these devices do, then we need to come up with another word to describe the other complex causal systems that sure appear to be doing something like computation.  I’ve used the word “causalation” before to describe these systems.  But commercial computers are also causalation machines, so this just brings us right back to the bone of contention.  Both commercial computers and brains appear to be causal nexuses, and computation could be described as concentrated causality.

The stumbling block for many, of course, is that the brain doesn’t follow any particular abstract computational design.  It’s a system that evolved from the bottom up.  So like any biological system, it’s a messy and opportunistic mish-mash.  Still, scientists are gradually making sense of this jumble, and computationalism is an important tool for that.

Anyway, one of the things I’ve learned in the last few years is that a lot of people really hate the idea of the brain as an organic computer.  It’s yet another challenge to human exceptionalism.  So I anticipate a fresh wave of anti-computational responses to this piece.

What do you think?  Are there reasons I’m not seeing to doubt the brain is a computational system?  And if so, is there another paradigm worth considering?


Is superintelligence possible?

Daniel Dennett and David Chalmers sat down to “debate” the possibility of superintelligence.  I put “debate” in quotes because this was a pretty congenial discussion.

(Note: there’s a transcript of this video on the Edge site, which might be more time efficient for some than watching a one hour video.)

Usually for these types of discussions, I agree more with Dennett, and that was true to some extent this time, although not as much as I expected.  Both Chalmers and Dennett made very intelligent remarks.  I found things to agree and disagree with in what each of them said.

I found Chalmers a little too credulous of the superintelligence idea.  Here I agreed more with Dennett.  It’s possible in principle but may not be practical.  In general, I think we don’t know all the optimization trade offs that might be necessary to scale up an intelligence.

For example, it’s possible that achieving the massive parallel processing of the human brain at the power levels it consumes (~20 watts) may inevitably require slower processing and water-cooled operation.  I think it’s extremely unlikely that human minds are the most intelligent minds possible, but the idea that an AI can be thousands of times more intelligent strikes me as a proposition that deserves scrutiny.  The physical realities may put limits on that.
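
Some rough arithmetic shows why that 20 watts matters.  Every number below is an order-of-magnitude assumption on my part, not a measurement, but it gives a feel for the trade-off:

```python
# Back-of-envelope energy budget (all figures are rough assumptions):
brain_watts = 20          # commonly cited power consumption of the human brain
brain_events = 1e14       # assumed synaptic events/sec (~10^14 synapses, ~1 Hz average)

gpu_watts = 300           # a typical high-end accelerator board
gpu_flops = 1e14          # rough throughput of such a board, in FLOP/sec

print(f"brain: ~{brain_watts / brain_events:.0e} J/event")  # ~2e-13 J
print(f"GPU:   ~{gpu_watts / gpu_flops:.0e} J/FLOP")        # ~3e-12 J
```

On these crude figures the brain comes out roughly an order of magnitude more efficient per event, at a tiny fraction of the total power draw.  Whether an engineered system can match that profile while also scaling far beyond it is exactly the kind of physical question I think deserves scrutiny.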

And I agree more with Dennett on how AI is likely to be used: more as tools than as colleagues.  I’m not sure Chalmers completely grasped this point, since the dichotomy he described doesn’t match what I took to be Dennett’s point, which is that we can have tools that are nevertheless autonomous.

That said, I’m often surprised how much I agree with Chalmers when he discusses AI.  There was a discussion on AI consciousness, where he made this statement:

There’s some great psychological data on this, on when people are inclined to say a system is conscious and has subjective experience. You show them many cases and you vary, say, the body—whether it’s a metal body or a biological body—the one factor that tracks this better than anything else is the presence of eyes. If a system has eyes, it’s conscious. If the system doesn’t have eyes, well, all bets are off. The moment we build our AIs and put them in bodies with eyes, it’s going to be nearly irresistible to say they’re conscious, but not to say that AI systems which are not in body do not have consciousness.

I’m reminded of Todd Feinberg and Jon Mallatt’s thesis that consciousness in animals began with the evolution of eyes.  Eyes imply a worldview, some sort of intentionality, exteroceptive awareness.  Of course, you can put eyes on a machine that doesn’t have that internal modeling, but then it won’t respond in other ways we’d expect from a conscious entity.

There was also a discussion about mind uploading in which both of them made remarks I largely agreed with.  Dennett cautioned that the brain is enormously complex and that this shouldn’t be overlooked, and neither philosopher saw it happening anytime soon, certainly not in the next 20 years.  In other words, neither buys into the Singularity narrative.  All of which fits with my own views.


SMBC on what separates humans from machines

Source: Saturday Morning Breakfast Cereal (Click through for the full-sized version and the red button caption.)

My own take on this is that what separates humans from machines is our survival instinct.  We intensely desire to survive and procreate.  Machines, by and large, don’t.  At least they won’t unless we design them to.  If we ever did, we would effectively be creating a race of slaves.  But it’s much more productive to create tools whose desires are to do what we design them to do than to design survival machines and then force them to do what we want.

Many people may say that the difference is more about sentience.  But sentience, the ability to feel, is simply how our biological programming manifests itself in our affective awareness.  A machine may have a type of sentience, but one calibrated for its designed purposes rather than for the gene preservation that evolution calibrates ours toward.

I do like that the strip uses the term “humanness” rather than “consciousness”, although both terms are inescapably tangled up with morality, particularly in what makes a particular system a subject of moral concern.

It’s interesting to ponder that what separates us from non-human animals may be what we have, or will have, in common with artificial intelligence, but what separates us from machines is what we have in common with other animals.  Humans may be the intersection between the age of organic life and the age of machine life.

Of course, eventually machine engineering and bioengineering may merge into one field.  In that sense, maybe it’s more accurate to describe modern humans as the link between evolved and engineered life.


David Chalmers on the meta-problem of consciousness

David Chalmers is famous as the philosopher who coined the term “the hard problem of consciousness” for the question of how and why phenomenal experience arises from a physical system, an issue he regards as intractably difficult.  He contrasts the hard problem with what he calls “easy problems”, such as discriminating between environmental stimuli, integrating information, and reporting on mental states.

Recently, Chalmers has been discussing another problem, the meta-problem of consciousness.  In essence, it’s the problem of why so many people think there is a hard problem.  I give him credit for addressing this, but it’s an issue that has been raised a lot over the years, mostly by people in the illusionist camp who question whether the hard problem is really a problem.  Crucially, Chalmers admits that the meta-problem, at least in principle, falls into his easy problem category.

This talk is about an hour and 10 minutes.  I recommend sticking around for the Q&A.  The quality of the questions from the Google staff makes it pretty interesting.

One of the things I found interesting in the talk was the multiple references to the idea of consciousness being irreducible.  I’ve pushed back against that idea multiple times on this blog.  I find it strange that anyone familiar with neurological case studies can argue that consciousness can’t be present in lesser or greater quantities, or that aspects of it can’t be missing.

However, what I found notable is the idea that panpsychism involves an irreducible notion of consciousness.  When you push panpsychists on whether things like a single neuron, a protein, a molecule, an atom, or an electron are conscious, what you usually get back is an assertion that the consciousness in these things isn’t anything like the consciousness we’re familiar with.  It’s a building block of sorts.  Which seems to me like a reduction of our manifest image of consciousness to these more primitive building blocks.

One prominent panpsychist recently equated quantum spin with those building blocks.  This just brings me back to the observation that the more naturalistic versions of panpsychism seem ontologically equivalent to the starkest forms of illusionism, with the differences between them simply coming down to preferred language.

Anyway, those of you who’ve known me for a while will know that my sympathies in this discussion are largely with the illusionists.  I think their explanations about what is going on are the most productive.

Except for one big caveat.  I don’t care for the word “illusion” in this context.  I do have sympathy with the assertion that if phenomenal experience is an illusion, then the illusion is the experience.  It seems more productive to describe experience as something that is constructed.  We have introspective access to the final constructed show, but not to the backstage mechanisms.  That lack of access makes the show look miraculous, when in reality we’re just not seeing how the magician does the trick.

Chalmers’ main point in discussing the meta-problem seems to be an effort not to cede this discussion to the illusionists.  He points out that there may be solutions to the meta-problem that leave the hard problem intact.

Perhaps, but it seems to me that the most plausible solutions leave the hard problem more as a psychological one, a difficulty accepting that the data provide no support for substance dualism, for any ghost in the machine.  To reconcile with that data, we have to override our intuitions, but that is often true in science.

Unless of course I’m missing something?


Recommendation: Tiamat’s Wrath (The Expanse Book 8)

Tiamat’s Wrath is the eighth book of The Expanse series.  This is definitely a series you want to read in order, so if you’re just starting, I’d recommend beginning with the first book, Leviathan Wakes.

This is the penultimate book of the series, so it shouldn’t surprise anyone that things are seriously heating up.  We learned earlier in the series that the ancient civilization that produced the protomolecule and the gate network was destroyed by some force.  In this book, an authoritarian group of military types decides that maybe it’s time to poke the bear, to play a tit-for-tat game with that force.  I’m not spoiling anything by saying there is a response, and it is a pretty serious game changer.

This book continues the old-time space opera feel of the series.  It does this by keeping the space logistics at an interplanetary level, even when the characters are in a different solar system, and by largely eschewing any role for artificial intelligence.  (One of the authors, Ty Franck, points out that AI exists in the books, just not with its own personality.)  As a result, it’s not the hardest science fiction around, but it is a very human story, more so than a lot of other fiction I review here.

The Expanse series has been called Game of Thrones in space, but that isn’t really accurate.  Among the differences: these books have avoided the sprawl of George R.R. Martin’s A Song of Ice and Fire books, they reliably come out once a year, they run around 500 or so pages apiece, and they are generally self-contained stories, although some of them do end on an overall cliffhanger.

And the main characters all get along pretty well.  (They don’t always in the TV show, but then, although I enjoy the show immensely, I actually find it darker and edgier than the books.)  Indeed, I found myself totally engrossed in this book, and I think the reason, aside from the writing acumen of the authors, is the likable characters.

So if you’re looking for classic adventure in space, with occasional bouts of philosophical pondering, I continue to recommend this series.


Recommendation: Vast

I have a bad habit of buying ebooks and then letting them sit in my Kindle account unread, sometimes for years.  I’m sorry to say that the book that was in this state the longest was Linda Nagata’s Vast.  I picked it up back in 2011 based on Alastair Reynolds’ glowing recommendation.  However, when I started reading it, I discovered it was the third book in a series, The Nanotech Succession.  So I picked up the first book in the series and started reading it instead, but for some reason I never finished it, and so never made it back to Vast.

Recently, Nagata announced that she was returning to the universe of Vast.  I read the preview for her new book, Edges, the first installment of the new sequel series, Inverted Frontier, and quickly decided to pre-order it.  I then decided it was time to rectify never having read Vast.

This is a novel which explores a huge range of concepts.  It’s set in a future where mind copying is possible.  Characters can create “ghosts” of themselves who can operate in computer systems, can grow new bodies on demand, can integrate memories from their ghosts as needed, can spawn new copies of themselves to explore or work on things, and generally don’t have to worry about death from old age.

Many of the characters have bodies that can withstand the vacuum of space, including tough scaled skin and an additional organ, called a kisheer, that allows them to breathe when outside a spaceship.  Other characters have additional posthuman modifications.

But this isn’t a utopian future by any stretch of the imagination.  In the backstory to the book (and perhaps covered in the earlier books in the series), human society expanded into interstellar space while building Dyson swarms around the solar system and other central systems.  These vast structures and the societies within them were referred to as the Hallowed Vasties.

By the time of Vast, the Hallowed Vasties are in the distant past, destroyed as society apparently crumbled under a plague called the “cult virus”, an infection that on the surface seems benign, causing those infected to join in communal cults of love and fellowship, but apparently destroying the initiative and motivation that make a civilization work.

In addition, humanity has come under attack from a fleet of alien ships, given the name Chenzeme by humans.  These automated ships have attacked frontier worlds and devastated humanity, with only a few pockets of survivors remaining.

One of those pockets is on a world called Deception Well.  Deception Well exists in a nebula with ancient and alien nanotech that protects against the Chenzeme.  However, it apparently does not protect against the cult virus.

As the book opens, four characters are on a ship called the Null Boundary, having set out from Deception Well on a quest to learn the origin of the Chenzeme ships.  They are heading in the direction of the interstellar cloud that the ships appear to come from.  However, they have picked up a pursuer, a Chenzeme attack ship.

This is relatively hard science fiction, so there is no faster-than-light travel.  The Null Boundary travels at 40% of the speed of light, and the pursuit by the Chenzeme vessel has already lasted for decades.  The story stretches out across centuries in a chase through interstellar space.
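
At 0.4c, relativistic effects are real but modest, which fits the centuries-spanning feel of the story.  A quick back-of-envelope (my own numbers, not the book’s; the 100 light year trip is purely illustrative):

```python
import math

v = 0.4                                # cruise speed as a fraction of light speed
gamma = 1 / math.sqrt(1 - v**2)        # Lorentz factor, ~1.09 at 0.4c

distance_ly = 100                      # hypothetical trip length in light years
outside_years = distance_ly / v        # 250 years as measured by a stay-at-home observer
onboard_years = outside_years / gamma  # ~229 years experienced by the crew

print(f"gamma = {gamma:.3f}: {outside_years:.0f} yr outside, {onboard_years:.0f} yr aboard")
```

Time dilation shaves off only about 8 percent at this speed, so the characters genuinely live through those centuries, which is part of why the immortality tech matters so much to the premise.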

Some of the characters have children, grown from constructed embryos, who go on to become adults during the decades and centuries of the story.  One character rarely exists physically, preferring to live as a ghost.  Another routinely alters his cognition, editing his emotions, banning fear or other unpleasant feelings.

But the most haunting character is Lot.  Lot is a carrier of the cult virus.  Indeed, he is actually a genetically engineered weapon designed to distribute the cult virus, an alien modification of a classic human.  The engineering in his body makes it impossible for him to create ghosts.  Alone among the characters, he is stuck in one body.  He yearns to achieve communion with others, but in the story is often consigned to be alone.  Lot’s burdens become central to the plot.

Obviously there’s a lot going on in this book, and I’m just scratching the surface.  True to the title of the series, nanotech features heavily in the story, with a lot of the action happening on the microscopic scale, with battles that often feel like competing infections.

As I noted, this isn’t the first book in the series, and you definitely get the idea that the characters have a lot of history before the book opens.  But the books in the series are described as being stand-alone, so it’s not strictly necessary to read them in order.  I didn’t feel like skipping the others detracted from my experience of this story.  That said, I do hope to go back and read the earlier books at some point.

Although she had some limited success in traditional publishing (this book was originally published in the 1990s), Nagata had largely given up on writing until recent years.  I find her struggles puzzling, since her vision is as compelling as that of many better-known authors.  Reynolds, in his recommendation, cites her as an influence on his own Revelation Space books.  She seems an underappreciated talent, exactly the type of author who has benefited from the self-publishing revolution.

This is not an action-filled tale.  Much of the conflict in the story is psychological.  It’s a haunting and thought-filled exploration of this posthuman and alien world.  But if you like books filled with mind-bending concepts, I highly recommend it.


Malcolm MacIver on imagination and consciousness

Sean Carroll’s latest episode of his podcast, Mindscape, features an interview with neuroscientist Malcolm MacIver, one that is well worth checking out for anyone interested in consciousness.

Consciousness has many aspects, from experience to wakefulness to self-awareness. One aspect is imagination: our minds can conjure up multiple hypothetical futures to help us decide which choices we should make. Where did that ability come from? Today’s guest, Malcolm MacIver, pinpoints an important transition in the evolution of consciousness to when fish first climbed on to land, and could suddenly see much farther, which in turn made it advantageous to plan further in advance. If this idea is true, it might help us understand some of the abilities and limitations of our cognitive capacities, with potentially important ramifications for our future as a species.

The episode is about 80 minutes long.  If your time is limited, there’s a transcript at the linked page.

MacIver largely equates imagination, the ability to plan, to think, to remember episodic memories and to simulate possible courses of action, with consciousness.  I can see where he’s coming from.  I’ve toyed with that idea myself.   (I don’t use the word “imagination” in the linked post, but that’s what’s being discussed.)

But while I think imagination is an important component of consciousness, covering a lot of the attributes many of us intuitively associate with it, it doesn’t appear to be the whole show.  This is one reason why I often talk about a hierarchy of consciousness:

  1. Reflexes: survival circuits, primal instinctive reactions to stimuli
  2. Perception: predictive models of the environment based on sensory input, increasing the scope of what the reflexes react to
  3. Attention: prioritization of what the reflexes react to
  4. Imagination / sentience: simulations of possible courses of action based on reflexive reactions, decoupling the reflexes so that they become affective feelings
  5. Metacognitive self awareness / symbolic thought

The consciousness of a healthy mature human contains this entire hierarchy.  Most vertebrates have 1-4, although as MacIver discusses, the imagination of fish is very limited, usually only providing a second or two of advance planning.  Land animals have more, although most can only plan a few minutes into their future.  The more intelligent mammals and birds can plan further.  But to plan weeks, months, or years in the future seems to require the volitional symbolic thought that only humans seem to possess.

But many of us, if presented with an animal who only has 1-3, will still regard it as conscious to at least some degree.  This is particularly true with humans who, due to brain pathologies, may lose 4 and 5.  The fact that they are still aware of their environment and can respond habitually or reflexively to things still triggers most people’s intuition of consciousness.

Which view is right?  Which layers must be present for consciousness?  I don’t think there’s a fact of the matter answer.  Unless of course I’m missing something?

h/t James of Seattle


Big societies came before big gods

Some years ago I reviewed a book by Ara Norenzayan called Big Gods: How Religion Transformed Cooperation and Conflict.  Norenzayan’s thesis was that it was a belief in big gods, specifically cosmic gods that cared about human morality, that enabled the creation of large scale human societies.

In small societies, reputation serves as an effective mechanism to keep anti-social behavior to a minimum.  If your entire world is a village with a few hundred people, and it gets around that you shirk duties, stiff friends out of their share of things, or generally are just an immoral person, you’ll eventually be ostracized, or worse, face vengeance from aggrieved parties.

However, as the size of society scales up, reputation increasingly loses its effectiveness.  If I can move between villages, towns, and settlements while scamming people, reputation may never have a chance to catch up.  New mechanisms are needed for cooperation in large scale societies.

Norenzayan’s theory is that one of those mechanisms was big gods, that is, deities worshipped by the overall society, deities that cared about how humans behaved toward one another.  These big gods are in contrast to the relatively small-scale amoral spirits that hunter-gatherers typically worship.  The chances that I’ll act in a prosocial manner toward people in other towns are higher if I think there’s a supernatural cop looking over my shoulder, one who will punish me for my immoral ways.

This theory, which puts religion in a crucial role in the formation of civilization, is somewhat at odds with the views of aggressive atheists such as Richard Dawkins, who see supernatural belief as largely a cognitive misfiring, a parasitic meme built on an adaptive over-interpretation of agency in the world, an intuition that once ensured we erred on the side of assuming the rustling in the brush is a predator instead of the wind.

Norenzayan’s conception of moralizing gods also contradicted the scholarly consensus that most gods in ancient religions did not in fact care about human behavior, other than receiving the correct libations.  This view, built largely on the lack of moral themes in ancient Greek and Middle Eastern mythologies, was that moralizing gods were a late addition that only arose during the Axial Age, the period around 800-300 BC.

The Seshat Project is an effort to add some rigor to these types of discussions by building a database of what is known about early societies.  The database tracks societies in various historical periods, noting such things as whether there was a central state, the population size, whether writing existed yet, and the presence of science, common measurement standards, markets, soldiers, a bureaucracy, and worship of moralizing high gods.

Using the database, a recent study seems to show that big gods come after a society has scaled up to at least a million people, not before.

We analysed standardized Seshat data on social structure and religion for hundreds of societies throughout world history to test the relationship between moralizing gods and social complexity. We coded records for 414 societies spanning the past 10,000 years from 30 regions around the world, based on 51 measures of social complexity and 4 measures of supernatural enforcement of morality. We found that belief in moralizing gods usually followed the rise of social complexity and tended to appear after the emergence of ‘megasocieties’, which correspond to populations greater than around one million people. We argue that a belief in moralizing gods was not a prerequisite for the expansion of complex human societies but may represent a cultural adaptation that is necessary to maintain cooperation in societies once they have exceeded a certain size. This may result from the need to subject diverse populations in multi-ethnic empires to a common higher-level power.
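
To get a feel for the shape of that analysis, here’s a toy sketch of the comparison involved.  The miniature table is entirely invented; it just stands in for the kind of per-society, per-period records Seshat encodes:

```python
import pandas as pd

# Invented stand-in for Seshat-style records: one row per society/period,
# with a population estimate and whether moralizing gods are attested.
df = pd.DataFrame({
    "society":         ["A", "A", "A", "B", "B", "B"],
    "century":         [-20, -10, -5, -15, -5, 0],
    "population":      [2e5, 1.5e6, 2e6, 8e5, 3e6, 3.5e6],
    "moralizing_gods": [False, False, True, False, False, True],
})

# For each society: when did it first exceed ~1 million people, and when
# do moralizing gods first appear in the record?
first_mega = df[df.population > 1e6].groupby("society").century.min()
first_gods = df[df.moralizing_gods].groupby("society").century.min()

# Positive values mean the gods are first attested *after* the megasociety
# threshold, which is the pattern the study reports.
print((first_gods - first_mega).rename("gods_minus_megasociety"))
```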

My take on this is that while Norenzayan wasn’t entirely correct, since moralizing gods were not necessary for civilization to develop, he appears to have been right that they are prevalent in developed societies, in contradiction of the long-standing scholarly consensus.

That said, I think some cautions are in order.  The Seshat database is undoubtedly a good thing, and it will be a major source of information for studying how societies developed.  But it’s worth noting that much of the information in the database comes down to the subjective judgment of historians, archaeologists, and anthropologists.  To its credit, the project does everything it can to minimize this, but it can’t be eliminated entirely.

There’s also the oft-quoted maxim that absence of evidence is not necessarily evidence of absence.  The study authors do address this:

Is it possible that moralizing gods actually caused the initial expansion of complexity but you just couldn’t capture that until societies became complex enough to develop writing?

Although we cannot completely rule out this possibility, the fact that written records preceded the development of moralizing gods in the majority of the regions we analysed (by an average period of 400 years)—combined with the fact that evidence for moralizing gods is lacking in the majority of non-literate societies— suggests that such beliefs were not widespread before the invention of writing.

Their position would be stronger if there were writing showing that small-scale spirits were still being worshiped during the scale up.  The difficulty here is that no society seems to have written down its mythology in the first few centuries after developing writing.  Early writing seems focused on accounting and overall record keeping.

What we do seem able to say for sure is that the scaling up seemed to require the existence of those accounting and record keeping capabilities.  In other words, writing itself seems to have been far more crucial than big gods.

And it could be argued that for a society to even conceptualize big gods required a broader view that may not have existed until the society had scaled up to a certain size, when writing had been around long enough for at least an incipient sense of history to have developed, and for later generations of writers to build on the ideas of earlier ones.

The authors finish with an interesting question:

If the original function of moralizing gods in world history was to hold together fragile, ethnically diverse coalitions, what might declining belief in such deities mean for the future of societies today? Could secularization in Europe, for example, contribute to the unravelling of supranational forms of governance in the region? If beliefs in big gods decline, what will that mean for cooperation across ethnic groups in the face of migration, warfare, or the spread of xenophobia? Or are the functions of moralizing gods simply being replaced by other forms of surveillance?

Put another way, what is the long term future of religion?  Does it have a future?  And what do we mean by “religion”?  Does a scientific view of the world count?  Or our civil traditions and rituals?  What kinds of cultural systems might arise in the future that fulfill the same roles that religion has historically filled?  Might technological developments, such as social media, serve to reinstate the old role of reputation, but now on an expanded scale?
