Big societies came before big gods

Some years ago I reviewed a book by Ara Norenzayan called Big Gods: How Religion Transformed Cooperation and Conflict.  Norenzayan’s thesis was that it was a belief in big gods, specifically cosmic gods that cared about human morality, that enabled the creation of large scale human societies.

In small societies, reputation serves as an effective mechanism to keep anti-social behavior to a minimum.  If your entire world is a village with a few hundred people, and it gets around that you shirk duties, stiff friends out of their share of things, or generally are just an immoral person, you’ll eventually be ostracized, or worse, face vengeance from aggrieved parties.

However, as the size of society scales up, reputation increasingly loses its effectiveness.  If I can move between villages, towns, and settlements while scamming people, reputation may never have a chance to catch up.  New mechanisms are needed for cooperation in large scale societies.

Norenzayan’s theory is that one of those mechanisms was big gods, that is, deities worshipped by the overall society, deities that cared about how humans behaved toward one another.  These big gods are in contrast to the relatively small scale amoral spirits that hunter-gatherers typically worship.  The chances that I might act in a prosocial manner toward people in other towns are higher if I think there’s a supernatural cop looking over my shoulder, who will punish me for my immoral ways.

This theory, which puts religion in a crucial role in the formation of civilization, is somewhat at odds with the views of aggressive atheists such as Richard Dawkins, who see supernatural belief as largely a cognitive misfiring, a parasitic meme built on an adaptive over-interpretation of agency in the world, an intuition that once ensured we erred on the side of assuming the rustling in the brush is a predator instead of the wind.

Norenzayan’s conception of moralizing gods also contradicted the scholarly consensus that most gods in ancient religions did not in fact care about human behavior, at least beyond whether they received the correct libations.  This view, built largely on the lack of moral themes in ancient Greek and Middle Eastern mythologies, was that moralizing gods were a late addition that only arose during the Axial Age period around 800-300 BC.

The Seshat Project is an effort to add some rigor to these types of discussions by building a database of what is known about early societies.  The database tracks societies across various historical periods, noting such things as whether there was a central state, the population size, whether writing existed yet, whether there was science, common measurement standards, markets, soldiers, or a bureaucracy, and whether moralizing high gods were worshiped.

Using the database, a recent study seems to show that big gods come after a society has scaled up to at least a million people, not before.

We analysed standardized Seshat data on social structure and religion for hundreds of societies throughout world history to test the relationship between moralizing gods and social complexity. We coded records for 414 societies spanning the past 10,000 years from 30 regions around the world, based on 51 measures of social complexity and 4 measures of supernatural enforcement of morality. We found that belief in moralizing gods usually followed the rise of social complexity and tended to appear after the emergence of ‘megasocieties’, which correspond to populations greater than around one million people. We argue that a belief in moralizing gods was not a prerequisite for the expansion of complex human societies but may represent a cultural adaptation that is necessary to maintain cooperation in societies once they have exceeded a certain size. This may result from the need to subject diverse populations in multi-ethnic empires to a common higher-level power.
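
To make the shape of that analysis concrete, here is a minimal sketch of the kind of lag calculation being described, in Python with pandas.  The column names and the single one-million-person threshold are my own illustrative assumptions, not the Seshat Project’s actual schema, and the study’s real statistical treatment involved far more variables and checks.

```python
import pandas as pd

# A minimal sketch, assuming a hypothetical table with one row per
# region/time-slice and columns: region, year, population,
# moralizing_gods (boolean). Not the Seshat Project's actual schema.
def god_lag_by_region(df: pd.DataFrame, threshold: float = 1e6) -> pd.Series:
    """For each region, the years between first exceeding the population
    threshold (a 'megasociety') and the first record of moralizing gods."""
    def lag(group: pd.DataFrame) -> float:
        first_mega = group.loc[group["population"] >= threshold, "year"].min()
        first_gods = group.loc[group["moralizing_gods"], "year"].min()
        return first_gods - first_mega  # positive => gods appear afterwards
    return df.groupby("region").apply(lag)
```

A positive lag in most regions would correspond to the “gods follow megasocieties” pattern the authors report.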

My take on this is that while Norenzayan’s thesis wasn’t entirely correct, since moralizing gods were not necessary for civilization to develop, he appears to have been right that they are prevalent in developed societies, in contradiction of the long-standing scholarly consensus.

That said, I think some cautions are in order.  The Seshat database is undoubtedly a good thing, and will represent a major source of information for studying how societies developed.  But it’s worth noting that much of the information in the database comes down to the subjective judgment of historians, archaeologists, and anthropologists.  To the credit of the project, it does everything it can to minimize this, but it can’t eliminate it entirely.

There’s also the oft-quoted maxim that absence of evidence is not necessarily evidence of absence.  The study authors do address this:

Is it possible that moralizing gods actually caused the initial expansion of complexity but you just couldn’t capture that until societies became complex enough to develop writing?

Although we cannot completely rule out this possibility, the fact that written records preceded the development of moralizing gods in the majority of the regions we analysed (by an average period of 400 years)—combined with the fact that evidence for moralizing gods is lacking in the majority of non-literate societies— suggests that such beliefs were not widespread before the invention of writing.

Their position would be stronger if there were writing showing that small-scale spirits were still being worshiped during the scale-up.  The difficulty here is that no society seems to have written down its mythology in the first few centuries after developing writing.  Early writing seems focused on accounting and overall record keeping.

What we do seem able to say is that scaling up required those accounting and record-keeping capabilities.  In other words, writing itself seems to have been far more crucial than big gods.

And it could be argued that even conceptualizing big gods required a broader view that may not have existed until a society had scaled up to a certain size, until writing had been around long enough for at least an incipient sense of history to develop, and for later generations of writers to build on the ideas of earlier ones.

The authors finish with an interesting question:

If the original function of moralizing gods in world history was to hold together fragile, ethnically diverse coalitions, what might declining belief in such deities mean for the future of societies today? Could secularization in Europe, for example, contribute to the unravelling of supranational forms of governance in the region? If beliefs in big gods decline, what will that mean for cooperation across ethnic groups in the face of migration, warfare, or the spread of xenophobia? Or are the functions of moralizing gods simply being replaced by other forms of surveillance?

Put another way, what is the long term future of religion?  Does it have a future?  And what do we mean by “religion”?  Does a scientific view of the world count?  Or our civil traditions and rituals?  What kinds of cultural systems might arise in the future that fulfill the same roles that religion has historically filled?  Might technological developments, such as social media, serve to reinstate the old role of reputation, but now on an expanded scale?


Kurzgesagt on the origin of consciousness

This video by Kurzgesagt is pretty interesting.  A word of warning: it’s funded by Templeton, which I know will bother some of you, but I found the content to be reasonably solid from a scientific perspective.

The only real issues I might have are the mysterian overtones at the beginning, and the assertion that consciousness and intelligence are different things, although that second issue might be ameliorated in the next video in the series.

I tend to think of consciousness as a type of intelligence, or more accurately a collection of intelligence capabilities.  Strictly speaking that does make them different, although I’m not sure this is the difference Kurzgesagt has in mind.

I do very much like the way the video describes consciousness as a series of increasingly sophisticated capabilities.  There is no bright line between conscious and non-conscious systems, just points where various people will interpret the system as either conscious or pre-conscious, depending on which definition of “consciousness” they prefer.

Consciousness remains in the eye of the beholder.


Is cosmic inflation settled science?

Ethan Siegel at Starts With a Bang has a post up arguing that the multiverse must exist.  His reasoning has to do with cosmic inflation.  Inflation is the theory that the universe expanded at an exponential rate in the first billionth of a trillionth of a trillionth of a second of the big bang timeline.  After that, space started expanding at a slower rate along the lines we see today.
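
For a sense of what “exponential rate” means here (standard textbook cosmology, not anything specific to Siegel’s post): during inflation the scale factor of the universe grows exponentially with a roughly constant Hubble parameter $H$, whereas afterwards it grows only as a power law of time:

$$a(t) \propto e^{Ht} \ \text{(during inflation)}, \qquad a(t) \propto t^{1/2} \ \text{(radiation era)}, \qquad a(t) \propto t^{2/3} \ \text{(matter era)}.$$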

Inflation was originally developed as a theory to explain why the universe seemed flat (in terms of spacetime), why the temperature throughout the early universe was so uniform, and a few other things such as the absence of magnetic monopoles.  These motivations seem important, and I think they should be remembered when evidence for the theory is being discussed.  In other words, can we really cite the problems a theory was invented to solve as evidence for it?

Anyway, Siegel notes that, due to quantum uncertainty, the physics that brought inflation to an end would not have been strictly deterministic.  Therefore inflation may not have ended everywhere in space at the same time.  Our universe may be a bubble of non-inflation in a sea of inflating space.  And if there is one bubble, there are likely others: other universes, the multiverse.

But all of this seems to hinge on the idea of cosmic inflation, specifically a variant of it called eternal inflation.  And here’s my issue.  The majority of physicists do seem to accept that the theory of inflation is true, but not all of them.  And the evidence for it seems to be a combination of the original problems the theory was developed to solve and circumstantial observations.  Some notable physicists, including Paul Steinhardt, one of the theory’s early architects, seem to think it actually creates more questions than it answers.

So, should we consider inflation settled science?  This seems like an example of an unsettling trend in physics in recent years of accepting theories that can’t be empirically tested.  Given that a huge swath of similar theories in particle physics were reportedly invalidated by the LHC results (or rather, non-results), it seems like a very questionable strategy, an abandonment of a key aspect of scientific investigation that has been successful for centuries.

Science has credibility for a reason.  That reason is the value it puts on testing every proposition.  Talking about untestable theories as though they’ve been validated seems to put that credibility in jeopardy.  There’s a danger that the public will start to see theoretical physics as metaphysical navel-gazing.

There is also a danger, identified by Jim Baggott some years ago, that many scientists may simply not look at alternative theories because they think inflation has solved the issue, or that they may eschew some speculative theories just because they’re not compatible with inflation.  But if inflation is still really just a speculative theory, then they’re giving up on one speculative theory because it’s not compatible with another speculative theory, perhaps cutting off a fruitful line of inquiry.

It may turn out that inflation does eventually pass some test we simply haven’t thought of yet.  Or we may eventually figure out a way to test the idea of bubble universes.  But until we do, talking as though these are settled issues makes my skeptic meter jump through the roof.

Physics has a reputation for being a very hard science.  Sometimes I wonder how warranted that reputation really is, at least in the theoretical branches.

Unless of course I’m missing something?


Recommendation: Prador Moon

I’ve recommended Neal Asher’s books before.  This one is pretty much cut from the same cloth: superhuman AIs, fearsome aliens, exotic future technologies, and epic space battles covered in detail.  In terms of the chronology of his Polity future universe, Prador Moon is the earliest story, although it was written after several other books and stories in that universe.

This book covers the first encounter between humans and the prador, the fierce crab-like species that is featured in so many other stories.  This first encounter immediately leads to war.

The prador have no empathy, either for each other or for other intelligent species.  They have no problem eating each other, and discover in the story that they like the taste of human flesh.  They also have no problem experimenting on their own children and, of course, on humans.  They’re pretty much the epitome of evil space opera aliens.

(Their depiction in later books I’ve read and reviewed is a bit more sympathetic, but only because the specific prador in these books are “modified” from the natural versions portrayed in this book.)

My only real disappointment with the book is that it’s short in comparison with Asher’s other books. (Although as of this post, it’s priced lower than most of his stuff.)  It felt like the story of one of the chief protagonists, Jebel Krong, could have been explored in far more detail.  He is referenced in other books, many of which I may not have read, so readers more familiar with Asher’s work may already have an impression of his overall biography.

So if you like space opera, this one might be worth checking out.


A neuroscience showdown on consciousness?

Apparently the Templeton Foundation is interested in seeing progress on consciousness science, and so is contemplating funding studies to test various theories.  The stated idea is to at least winnow the field through “structured adversarial collaborations”.  The first two theories proposed to be tested are Global Workspace Theory (GWT) and Integrated Information Theory (IIT).

GWT posits that consciousness arises from a global workspace, a functional area holding information that numerous brain centers can access.  I’ve seen various versions of this theory, but the one that the study organizers propose to focus on holds that the workspace resides in, or at least is coordinated by, the prefrontal cortex at the front of the brain, although consciousness overall is associated with activation of both the prefrontal cortex and the superior parietal lobe at the back of the brain.

IIT posits that consciousness is the integration of differentiated information.  It’s notable for its mathematical nature, including the value Φ (phi); higher values denote more consciousness and lower values less consciousness.  According to the theory, to have experience is to integrate information.  Christof Koch, one of the proponents of this theory, is quoted in the linked article as saying he thinks this integration primarily happens in the back of the brain, and that he would bet that the front of the brain has a low Φ.
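
For a sense of the mathematical flavor, here is a rough schematic only; the actual IIT formalism is considerably more elaborate, so take the symbols as illustrative rather than definitive.  The idea is that Φ measures how much the behavior of the whole system exceeds what its parts, taken independently, can account for, minimized over ways of cutting the system:

$$\Phi(S) \;\approx\; \min_{P}\; D\!\left[\, p(S) \;\Big\|\; \textstyle\prod_{k} p\!\left(S_k^{P}\right) \right],$$

where $D$ is a divergence between the intact system’s behavior and that of the system partitioned into parts $S_k$ under partition $P$.  A system where some cut changes nothing has Φ near zero; a highly integrated system has high Φ.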

While I think this is going to be an interesting process to watch, I doubt it’s going to provide definitive conclusions one way or the other.  Of course, the purpose of the collaboration isn’t to find a definitive solution to consciousness, but to winnow the field.  But even that I think is going to be problematic.

The issue is that, as some of the scientists quoted in the article note, GWT and IIT make different fundamental assumptions about what consciousness is.  Given those different assumptions, I suspect there will be empirical support for both theories.  (They are both supposed to be based on empirical observations in the first place.)

From the article, it sounds like the tests are going to focus on the differences in brain locations, testing whether the prefrontal cortex in the front of the brain is really necessary for consciousness, or whether, as Koch proposes, the parietal lobe in the back is sufficient.  However, even here, philosophical distinctions matter.

By “consciousness” do we only mean sensory consciousness, that is, awareness of the information provided by the senses, both exteroceptive (of the outer world) and interoceptive (of the insides of the body)?  If so, then the parietal lobe probably would be sufficient, provided subcortical structures like the brainstem, thalamus, basal ganglia, and others are functional and providing their support roles.

Or do we mean both sensory and motor consciousness?  Motor consciousness here refers to being aware of what can be done and having preferences about various outcomes, that is, having volition and affective feelings (sentience).

If by “consciousness” we only mean the sensory variety, then Koch will likely be right that only the back of the brain is needed.  But for anyone who considers both sensory and motor consciousness essential, an empirical accounting of the sensorium will not be satisfying.

What complicates a discussion like this is that our intuitions of consciousness are not consistent.  We have intuitions about what subjective experience entails, and that usually includes feelings and volition.  But if we see a patient who’s had a prefrontal lobotomy, who is still able to navigate around their world, and respond reflexively and habitually to stimuli, even if they’ve lost the ability to emotionally feel or plan their actions, we’ll still tend to think they’re at least somewhat conscious.

Which brings me to my own personal attitude toward these theories.  I find GWT more grounded and plausible, but as I’ve progressively learned more about the brain, I’ve increasingly come to see most of these theories as fundamentally giving too much credence to the idea of consciousness as some kind of objective force.

Many of these theories seem focused on a concept like the old vital force that biologists once hunted for to explain what animates living matter.  Today we know there is no vital force.  Vitalism is false.  There is only organic chemistry in motion.  The only “vital force” is the structural organization of molecular systems and their associated processes, and the extremely complex interactions between them.

I currently suspect that we’re eventually going to come to the same conclusion for consciousness, that our subjective experience arises through the complex interactions of cognitive systems in the brain.  Cognitive neuroscience is making steady progress on identifying and describing these systems.  As with molecular biology, we may find that there’s no one simple theory that explains it all, that we have little choice but to get down to the hard work of understanding all of these interactions.

Still, maybe I’m wrong and these “structured adversarial collaborations” will show compelling results.  As Giulio Tononi mentions in a quote in the article, the tests may well teach us useful things about the brain.

What do you think?  Am I too hasty in dismissing consciousness as some kind of objective force?  If so, why?  Are there things about GWT or IIT that make one more likely than the other, or more likely than other theories such as HOT (Higher Order Theory)?


Captain Marvel

Last night I did something I rarely do anymore and saw a movie in the theater right when it was released: Captain Marvel.  As Marvel movies go, it was typical: lots of action, special effects, heartwarming moments, and laughs.  Marvel / Disney really seems to have the formula for general entertainment down.  You don’t come out of these movies with any deep insight into the human condition, but you usually do come out emotionally satisfied (at least as long as you don’t scrutinize the details too carefully).

One of the things I found interesting is how much the Captain Marvel story has changed from when I was a kid reading comics.  In my day, Mar-Vell was an alien guy, a Kree military officer who ended up exiled on Earth.  This original character died in a much publicized story in the early 80s.  I had heard about the character “Ms. Marvel” but I think she came along after my comic reading days.

The movie retains elements of this original story, although the title character is, of course, now a woman, and we learn almost immediately that she is in fact originally from Earth.  But her overall history remains somewhat muddled for a good part of the movie, although eventually the pieces fall into place in a satisfying manner.  A version of the original Mar-Vell character even ends up being worked in.

As with all Marvel movies, this one has tie-ins with the others.  Nick Fury, played by a Samuel L. Jackson made young with CG, is a major supporting character.  It even has a young Agent Coulson.  We also have a villain show up from another series, but discussing that connection crosses into spoilers.  And, of course, in the now thoroughly expected end-credit scenes, it ties in heavily with the upcoming Avengers: Endgame, as anyone who’s seen the end of Avengers: Infinity War would expect.

So if you enjoy Marvel movies in general, I think you’ll enjoy this one.  I could quibble endlessly with the scientific inaccuracies, but that’s really not the frame of mind to be in to enjoy these movies.  They’re fantasy pure and simple and should be taken in as such.

As a side note: it’s interesting that DC is about to come out with a Shazam movie, featuring the character who, back in the 1940s, was the first superhero named “Captain Marvel.”  You have to wonder about the timing of this release.


Why we’ll know AI is conscious before it will

At Nautilus, Joel Frohlich considers how we’ll know when an AI is conscious.  He starts off by accepting David Chalmers’ concept of a philosophical zombie, but then makes this statement.

But I have a slight problem with Chalmers’ zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have.

He then goes on to describe what I’d call a Turing test for consciousness.

This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we won’t know if it’s ethical to unplug such an AI without knowing if it’s conscious, we better start listening for such questions now.

This seems to include a couple of major assumptions.

First is the idea that we’ll accidentally make an AI conscious.  I think that is profoundly unlikely.  We’re having a hard enough time making AIs that can successfully navigate around houses or road systems, not to mention ones that can simulate the consequences of real world physical actions.  None of these capabilities are coming without a lot of engineering involved.

The second assumption is that consciousness, like some kind of soul, is a quality a system either has or doesn’t have.  We already have systems that, to some degree, take in information about the world and navigate around in it (self driving cars, Mars rovers, etc).  This amounts to a basic form of exteroceptive awareness.  To the extent such systems have internal sensors, they have a primitive form of interoceptive awareness.  In the language of the previous post, these systems already have a sensorium more sophisticated than many organisms.

But their motorium, their ability to perform actions, remains largely rule-based, that is, reflexive.  They don’t yet have the capability to simulate multiple courses of action (imagination) and assess the desirability of those courses, although the DeepMind people are working on it.

The abilities above provide a level of functionality that some might consider conscious, although it’s still missing aspects that others will insist are crucial.  So it might be better described as “proto-conscious.”

For a system to be conscious in the way animals are, it would also have to have a model of self, and care about that self.  This self concern comes naturally to us because having such a concern increases our chances of survival and reproduction.  Organisms that don’t have that instinctive concern tend to quickly be selected out of the gene pool.

But for the AI to ask about its own consciousness, its model of self would need to include another model to monitor aspects of its own internal processing.  In other words, it would need metacognition, introspection, self-reflection.  Only once that is in place will it be capable of pondering its own consciousness, and be motivated to do so.
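
As a way of summarizing that capability ladder, here’s a toy sketch in Python.  The class and method names are hypothetical, purely for illustration, and not a claim about how DeepMind or anyone else actually builds these systems.  The point is just that unprompted questions about experience would sit on top of a stack of separately engineered capabilities.

```python
# A toy illustration of the capability ladder (hypothetical names only).

class ProtoConsciousAgent:
    """Roughly the stage current systems are approaching: awareness plus
    the beginnings of imagination and preference."""
    def exteroceptive_state(self) -> dict:   # outer-world sensors
        return {}
    def interoceptive_state(self) -> dict:   # internal sensors
        return {}
    def imagine(self, action) -> dict:       # simulate a course of action
        return {"action": action, "predicted": None}
    def evaluate(self, outcome) -> float:    # desirability of that outcome
        return 0.0

class SelfReflectiveAgent(ProtoConsciousAgent):
    """Adds the pieces argued to be needed before an AI could wonder about
    its own experience: a self model, plus metacognition that monitors
    aspects of the agent's own processing."""
    def self_model(self) -> dict:
        return {"goals": [], "capabilities": []}
    def introspect(self) -> dict:            # metacognition / self-reflection
        return {"self": self.self_model(),
                "current_state": self.interoceptive_state()}
```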

These are not capabilities that are going to come easily or by accident.  There will likely be numerous prototype failures that are near but not quite there.  This means that we’re likely to see more and more sophisticated systems over time that increasingly trigger our intuition of consciousness.  We’ll suspect these systems of being conscious long before they have the capability to wonder about their own consciousness, and we’ll be watching for signs of this kind of self awareness as we try to instill it, like a parent watching for their child’s first successful utterance of a word (or depending on your attitude, Frankenstein looking for the first signs of life in his creation).

Although it’s also worth wondering how prevalent systems with a sense of self will be.  Certainly they will be created in labs, but most of us won’t want cars or robots that care about themselves, at least beyond their usefulness to their owners.  And given all the ethical concerns with full consciousness and the difficulties in accomplishing it, I think the proto-conscious stage is as far as we’ll bring common everyday AI systems, a stage that makes them powerful tools, but keeps them as tools, rather than slaves.

Unless of course I’m missing something?


The sensorium, the motorium, and the planner

I’ve been reading Gerhard Roth’s The Long Evolution of Brains and Minds. This is a technical and, unfortunately, expensive book, not one aimed at general audiences, but it has a lot of interesting concepts.  A couple that Roth mentions are the terms “sensorium” and “motorium.”

The sensorium refers to the sum total of an organism’s perceptions, to its ability to take in information about its environment and itself.  The motorium, on the other hand, is the sum total of an organism’s abilities to produce action and behavior, to affect both itself and its environment.

What’s interesting about this is that sensoriums and motoriums are ancient, very ancient.  They predate nervous systems, and exist in unicellular organisms.  Often these organisms, such as bacteria, have motoriums that include movement via flagella, tiny motors that propel them through their environment.

Their sensoriums often include mechanoreception, the ability to sense when an obstacle has been encountered, which triggers the flagella to briefly reverse direction, followed by a programmed change in direction, and then forward motion again.  This is typically repeated until the organism has cleared the obstacle.

These organisms also often have chemoreception, the ability to sense whether the environment contains noxious or nutritious chemicals, which again can cause a change in motion until the noxious chemicals are decreasing or the nutritious ones increasing.  Some unicellular organisms even have light sensors, which can cause them to turn toward or away from light, depending on which is more adaptive for that species.
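
As a concrete picture of how simple that sense-to-act loop can be, here’s a toy sketch (illustrative only; the angles and logic are made up for the example, not measured bacterial behavior):

```python
import random

def bacterium_step(obstacle_hit: bool, nutrient_trend: float, heading: float) -> float:
    """One-way sensorium -> motorium: sensing directly triggers action."""
    if obstacle_hit:                          # mechanoreception
        heading += 180.0                      # briefly reverse direction ...
        heading += random.uniform(-90, 90)    # ... then a programmed change of course
    elif nutrient_trend < 0:                  # chemoreception: conditions worsening
        heading += random.uniform(-90, 90)    # tumble to a new direction
    return heading % 360.0                    # otherwise just keep running forward
```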

Reading about these abilities, which in many ways seem as sophisticated as those of simple multicellular animals, and given the evolutionary success of these organisms, you have to wonder why complex life evolved.  (There are many theories on why it did.)  But it’s interesting that the earliest multicellular organisms, such as sponges, actually seem less responsive to their environment overall than these individual unicellular life forms.

It’s also interesting to consider what is different between the sensoriums of these unicellular and simpler multicellular organisms, and those of more complex animals such as amphibians, mammals, birds, arthropods, and the like.  Obviously the addition of distance senses dramatically increases the size of the sensorium, allowing an organism to react not just to what it directly encounters, but also to what it can see, hear, or smell.

When we remember that some unicellular organisms have light sensors, and that the evolution in animals from light sensor to camera-like eye is a very gradual thing, with no sharp break between merely detecting light, having multiple light sensors to detect the direction of light, and forming visual images, then the addition of distance senses starts to look like a quantitative rather than qualitative difference.

Of course, more complex animals also have far more complex motoriums enabling a larger repertoire of behavior.  A fish can do more than a worm, a lizard more than a fish, a rat more than a lizard, and a primate more than a rat.  This increased repertoire requires more sophisticated motor machinery in the brain.

But that leads to what is probably the most significant difference, the communication between the motorium and the sensorium.  In unicellular organisms, the communication between them seems to be one way.  The sensorium senses and sends signals to the motorium which acts.  This also seems like the pattern for simple animals.

But distance senses and complex behaviors require interaction between the motorium and sensorium.  In essence, this involves higher order functionality in the motorium interrogating the sensorium for both past perceptions and future scenarios.  A good name for this higher order functionality could be “the planner”.  (I considered “imaginarium”, but that sounds too much like an amusement park attraction.)

The motorium planner interacts with both the sensorium and the lower level motorium.  It constantly queries the sensorium for perceptual information related to possible movement scenarios, and the lower level motorium for reflexive responses to each scenario.  Sometimes it does this while directing the lower level motorium in real time, but often it is considering alternatives while the lower motorium engages in habitual action.
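
Here’s a minimal sketch of that interaction in Python.  The names and interfaces are hypothetical and it isn’t a model of actual neural circuitry, just a way of picturing the planner querying the sensorium for imagined outcomes and the lower motorium for reflexive appraisals.

```python
# A minimal sketch of the planner loop described above (hypothetical names).

class Sensorium:
    def imagine(self, action):          # predicted percept for a candidate action
        return {"action": action}

class LowerMotorium:
    def appraise(self, predicted):      # reflexive/habitual valuation of a percept
        return 0.0

class Planner:
    def __init__(self, sensorium, lower_motorium):
        self.sensorium = sensorium
        self.lower_motorium = lower_motorium

    def choose(self, candidate_actions):
        best, best_value = None, float("-inf")
        for action in candidate_actions:
            predicted = self.sensorium.imagine(action)       # query the sensorium
            value = self.lower_motorium.appraise(predicted)  # query the lower motorium
            if value > best_value:
                best, best_value = action, value
        return best   # handed back to the lower motorium to execute

planner = Planner(Sensorium(), LowerMotorium())
planner.choose(["approach", "retreat", "wait"])
```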

[Image: Lobes of the brain.  Image credit: BruceBlaus via Wikipedia]

In the human brain, the sensorium seems to primarily exist at the back of the brain.  It includes the temporal, occipital, and parietal lobes.  The top part of the parietal lobe is a region sometimes called the posterior association cortex.  This is the center of the human sensorium.  (There is also a simple sensorium in the midbrain, but it appears to operate somewhat separately from the thalamo-cortical one.)

The motorium exists at multiple levels.  The lower levels are in the brainstem and basal ganglia.  Together these handle reflexive and habitual movement respectively.  The higher order motorium, the planner, is in the frontal lobe cortices, including the prefrontal cortex and premotor cortex.

Neuroscientists often say that consciousness requires activation of both the frontal lobes and posterior association cortex.  (This is sometimes referred to as the “fronto-parietal network.”)  The reason for this is the communication between the motorium planner and the sensorium.  It may be that full phenomenal consciousness requires this interaction, with qualia in effect being the flow of information from the sensorium to the motorium planner.

But there is some controversy over whether the motorium is required for phenomenal awareness.  Many neuroscientists argue that the sensorium by itself constitutes consciousness.  The problem is that a patient with pathologies in their motorium usually can’t communicate their conscious state to anyone, making determining whether they’re conscious somewhat like attempting to see whether the refrigerator light stays on when the door is closed.

Neuroscientist Christof Koch points out that patients with frontal lobe pathologies who later recovered reported having awareness and perception when their frontal lobes were non-functional, but simply not having any will to respond.  But even this leads to the question: were they fully conscious when they laid down those memories?  Or is “consciousness” just a post hoc categorization we’re attempting to apply to a complex state on the border?

So we have the sensorium and the motorium, which predate nervous systems, going back to unicellular life.  What seems to distinguish more advanced animals is the communication between the sensorium and motorium, particularly the higher level motorium planner.  And this might converge with the view that full consciousness requires both the frontal lobes and parietal regions.

Unless of course, I’m missing something?


Probability is relative

At Aeon, Nevin Climenhaga makes some interesting points about probability.  After describing different interpretations of probability, one involving the frequency with which an event will occur, another involving its propensity to occur, and a third involving our confidence it will occur, he describes how, given a set of identical facts, each of these interpretations can lead to different numbers for the probability.  He also describes how each interpretation has its problems.

He then proposes what he calls the “degree of support” interpretation.  This recognizes that probabilities are relative to the information we consider.  That is, when we express a probability of X, we are expressing that probability in relation to some set of data.  If we take away or add new data, the probability will change.

This largely matches my own intuition of probability, that it is always (or almost always) relative to a certain perspective, to a particular vantage point.  If I ask what the probability of rain tomorrow is, you can give an answer based on what you know at that moment, before ever looking up the weather report.  It might not be a particularly precise probability, but one can still be made based on where you live and your experience of how often it typically rains there.  Of course, once you look at the weather report, you’ll likely adopt the probabilities it provides (unless the forecast where you live has historically been unreliable).
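
The weather example can be made concrete with a quick Bayesian update (all the numbers here are made up purely for illustration):

```python
# Illustrative Bayesian update: probability of rain before and after a forecast.
prior_rain = 0.20             # base rate where you live, before any forecast
p_forecast_given_rain = 0.90  # how often a "rain" forecast precedes actual rain
p_forecast_given_dry = 0.15   # false-alarm rate

p_forecast = (p_forecast_given_rain * prior_rain
              + p_forecast_given_dry * (1 - prior_rain))
posterior_rain = p_forecast_given_rain * prior_rain / p_forecast
print(round(posterior_rain, 2))  # 0.6: same event, different data, different probability
```

Neither 0.20 nor 0.60 is “the” probability of rain; each is the probability relative to a particular body of information.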

(One possible exception to probabilities being relative is quantum physics.  Depending on which interpretation you favor, quantum probabilities may be objective or they may be relative.  In non-deterministic interpretations, they might be objective (although that depends on your interpretation of the interpretation 🙂 ).  But in the deterministic interpretations, they would still be relative to our perspective.)

Every so often I do a post discussing the probability of something, such as the probability of other intelligent aliens in our galaxy.  It’s not unusual for someone to comment that we don’t know enough to estimate any probabilities and that the whole exercise is then pointless.  But if probabilities are relative, this position is wrong.

Of course, my estimated probabilities may be wrong, but if so the correct way to address it is in relation to the data that is being considered.  Or to offer additional data that may change the probability.  Or point out why some (or all) of the data should not be considered when making the estimate.

But if we have a perspective, then we have the ability to estimate probabilities from that perspective.  If our perspective is one of complete ignorance, the probability should reflect it.  Maybe we can only say the probability of something being true is 50%, that is, it has an equal chance of being true or false.  Or if the proposition is one of ten possible outcomes, then it might be more along the lines of 10% probable.

But it doesn’t take much knowledge to shift a probability.  In 1600, a natural philosopher could probably rationally argue that, based on what was then known, the probability of the heliocentric model of the solar system being true was only 50%.  But after Galileo’s blurry telescopic observations a few years later, along with confirmations by other observers, the probability shifted dramatically, so much so that by Newton’s time in the latter part of that century, the probability had shot up much higher.

Does that mean the natural philosopher in 1600 was wrong in his probabilities?  No, because relative to his perspective at the time, those were the probabilities.  He would only have been wrong if he hadn’t used the data available to him in making his estimate, or hadn’t used it correctly, or had insisted due to ideological commitments that the probability was zero.

So we’re always in a position to estimate probabilities.  We may not be in a position to do so precisely, since that usually requires a lot of data, but the argument that we should never try strikes me as invalid.  The only valid question is whether or not we’re doing it correctly based on what is then known.

Unless of course I’m missing something?


AI and creativity

Someone asked for my thoughts on an argument by Sean Dorrance Kelly at MIT Technology Review that AI (artificial intelligence) cannot be creative, that creativity will always be a human endeavor.  Kelly’s main contention appears to be that creativity lies in the eye of the beholder and that humans are unlikely to recognize AI accomplishments as creative.

Now, I think it’s true that AI suffers from a major disadvantage when it comes to artistic creativity.  Art’s value lies in the emotions it can engender in its audience.  Often generating those emotions requires an insight from the artist into the human condition, an insight that draws heavily on our shared experiences as human beings.  This is one reason why young artists often struggle: their experiences are as yet too limited to produce those insights, or at least too limited to impress older consumers of their art.

Of course an AI has none of these experiences, nor the human drives that make those experiences meaningful in the way they are to us.  AI may be able to exploit correlations between features of other works and how popular those works are, but it is simply not equipped to find a genuine insight into the human condition, at least not for a long time.  In that sense, I agree with Kelly, although his use of the word “always” has an absolutist ring to it I can’t endorse.

But it’s in the realm of games and mathematics that I think Kelly oversells his thesis.  These are areas where insights into the human condition are not necessarily an advantage, although in the case of games they can be.

Much has been written about the achievements of deep-learning systems that are now the best Go players in the world. AlphaGo and its variants have strong claims to having created a whole new way of playing the game. They have taught human experts that opening moves long thought to be ill-conceived can lead to victory. The program plays in a style that experts describe as strange and alien. “They’re how I imagine games from far in the future,” Shi Yue, a top Go player, said of AlphaGo’s play. The algorithm seems to be genuinely creative.

In some important sense it is. Game-playing, though, is different from composing music or writing a novel: in games there is an objective measure of success. We know we have something to learn from AlphaGo because we see it win.

I can’t say I understand this point.  Because AlphaGo’s success is objective, we can’t count what it does in achieving that win as creative?  The fact is that AlphaGo found strategies that humans missed.  In some ways, this reminds me of the way evolution often finds solutions to problems that, in retrospect, look awfully creative.

In the realm of mathematics, Kelly asserts that, so far, mathematical proofs by AI have not been particularly creative.  Fair enough, although by his own standard that’s a subjective judgment.  But he then focuses on proofs an AI might come up with that humans couldn’t understand, noting that a proof isn’t a proof if you can’t convince a community of mathematicians that it’s correct.

Kelly doesn’t seem to consider the possibility that an AI might develop a proof incomprehensible to humans that nevertheless convinces a community of other AIs, which could demonstrate its correctness by using it to solve problems.  Or the possibility that the “not particularly creative” AIs of today might advance considerably in the years to come and produce groundbreaking proofs that human mathematicians can understand and appreciate.  Mathematics is one area where I could see AI eventually having insights a human might never have.

But I think the biggest weakness in Kelly’s thesis lies at its heart: his admission that creativity, like beauty, is in the eye of the beholder, that it only exists subjectively.  In other words, it’s culturally specific, and our conception of what is creative might change in the future, particularly as we become more accustomed to intelligent machines.

This leads him to this line of reasoning:

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.

In other words, machines can’t be creative because we humans won’t recognize them as such, and if humans do start to consider them creative, then we will have denigrated ourselves.  This is just a rationalized bias toward human exceptionalism, a self-reinforcing loop that closes off any possibility of considering counter-evidence.

So, in sum, will AI ever be creative?  I think that’s a meaningless question (similar to the question of whether it will ever be conscious).  The real question is will we ever regard them as creative?  The answer is we already do in some contexts (see the AlphaGo quote above), but in others, notably in artistic achievement, it may be a long time before we do.  But asserting we never will seems more like a statement of faith than a reasoned conclusion.  Who knows what AIs in the 22nd century will be capable of?

What do you think?  Is creativity something only humans are capable of?  Is there any fact of the matter on this question?
