The prospects for a scientific understanding of consciousness

Michael Shermer has an article up at Scientific American asking if science will ever understand consciousness, free will, or God.

Shermer:

I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, relating to how the concepts are conceived in language.

On consciousness in particular, I did a post a few years ago which, on the face of it, seems to take the opposite position.  However, in that post, I made clear that I wasn’t talking about the hard problem of consciousness, which is what Shermer addresses in his article.  Just to recap, the “hard problem of consciousness” was a phrase originally coined by philosopher David Chalmers, although it expressed a sentiment that has troubled philosophers for centuries.

Chalmers:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does…The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.

Broadly speaking, I agree with Shermer on the hard problem, but with an important caveat.  In my view, it isn’t so much that the hard problem is hopelessly unsolvable as that there is no scientific explanation which will be accepted by those who are troubled by it.  In truth, while I don’t think the hard problem has necessarily been solved, I think there are many plausible solutions to it.  The issue is that none of them are accepted by the people who talk about it.  In other words, for me, this seems more of a sociological problem than a metaphysical one.

What are these plausible solutions?  I’ve written about some of them, such as that experience is the brain constructing models of its environment and itself, that it is communication between the perceiving and reflexive centers of the brain and its movement planning centers, or that it’s a model of aspects of its processing as a feedback mechanism.

Usually when I’ve put these forward, I’m told that I’m missing the point.  One person told me I was talking about explanations of intelligence or cognition rather than consciousness.  But when I ask for elaboration, I generally get a repeat of language similar to Chalmers’, or that of other philosophers with similar views, such as Thomas Nagel or Frank Jackson.

The general sentiment seems to be that our phenomenological experience simply can’t come from processes in the brain.  This is a notion that has long struck me as a conceit, that our minds just can’t be another physical system in the universe.  It’s a privileging of the way we process information, an insistence that there must be something fundamentally special and different about it.  (Many people broaden the privilege to include non-human animals, but the conceit remains the same.)

It’s also a rejection of the lessons from Copernicus and Darwin, that we are part of nature, not something fundamentally above or separate from it.  Just as our old intuitions about Earth being the center of the universe, or about us being separate and apart from other animals, are not to be trusted, our intuitions formed from introspection, a source of information that psychology studies have repeatedly shown to be unreliable, should not necessarily be taken as data that need to be explained.

Indeed, Chalmers himself has recently admitted to the existence of a separate problem from the hard one, what he calls “the meta-problem of consciousness”.  This is the question of why we think there is a hard problem.  I think it’s a crucial question, and I give Chalmers a lot of credit for exploring it, particularly since in my mind, the existence of the meta-problem and its most straightforward answers make the answer to the hard problem seem obvious: it’s an illusion, a false problem.

It implies that neither the hard problem, nor the version of consciousness it is concerned with, the one that remains once all the “easy” problems have been answered, exists.  They are apparitions arising from a data model we build in our brains, an internal model of how our minds work.  But that model, though adaptive for many everyday situations, is wrong when it comes to providing accurate information about the architecture of the mind and consciousness.

Incidentally, this isn’t because of any defect in the model.  It serves its purpose.  But it doesn’t have access to the lower level mechanisms, to the actual mechanics of the construction of experience.  This lack of access places an uncrossable gap between subjective experience and objective knowledge about the brain.  But there’s no reason to think this gap is ontological, just epistemic, that is, it’s not about what is, but what we can know, a limitation of the direct modeling a system can do on itself.

Once we’ve accounted for capabilities such as reflexive affects, perception (including a sense of self), attention, imagination, memory, emotional feeling, introspection, and perhaps a few others, essentially all the “easy” problems, we will have an accounting of consciousness.  To be sure, it won’t feel like we have an accounting, but then we don’t require other scientific theories to validate our intuitions.  (See quantum mechanics or general relativity.)  We shouldn’t require it for theories of consciousness.

This means that asking whether other animals or machines are conscious, as though consciousness is a quality they either have or don’t have,  is a somewhat meaningless question.  It’s really a question of how similar their information processing and primal drives are to ours.  In many ways, it’s a question of how human these other systems are, how much we should consider them subjects of moral worth.

Indeed, rewording the questions about animal and machine consciousness as questions about their humanness makes the answers somewhat obvious.  A chimpanzee obviously has much more humanness than a mouse, which itself has more than a fish.  And any organism with a brain currently has far more than any technological system, although that may change in time.

But none have the full package, because they’re not human.  We make a fundamental mistake when we project the full range of our experience on these other systems, when the truth is that while some have substantial overlaps and similarities with how we process information, none do it with the same calibration of senses or the combination of resolution, depth, and primal drives that we have.

So getting back to the original question, I think we can have a scientific understanding of consciousness, but only of the version that actually exists, the one that refers to the suite and hierarchy of capabilities that exist in the human brain.  The version which is supposed to exist outside of that, the version where “consciousness” is essentially a code word for an immaterial soul, we will never have an understanding of, in the same manner that we can’t have a scientific understanding of centaurs or unicorns, because they don’t exist.  The best we can do is study our perceptions of these things.

Unless of course, I’m missing something.  Am I being too hasty in dismissing the hard-problem version of consciousness?  If so, why?  What about subjective experience implies anything non-physical?


Recommendation: Children of Time

The Fermi Paradox is the observation that if intelligent life is pervasive in the universe, it should have arrived on Earth ages ago, but there is no evidence it ever did.  The solutions to the paradox include the possibilities that interstellar travel is impossible (or so appallingly difficult that no one bothers), that there is something out there suppressing the development of intelligent life, or that intelligence capable of symbolic thought and technological development is profoundly rare in the universe.

There’s a new model which lends credence to the last possibility.  Looking at chemical and genetic pathways, this model predicts that we may be the only technological civilization in the observable universe.  I’ve written before on the reasons to think that our nearest intelligent neighbors may be cosmically distant.  Life appears to have started early in Earth’s history, but intelligent life took billions of years to emerge, and did so only once, at least so far.  And its development seems to have been highly dependent on a large number of low probability events throughout Earth’s evolutionary history.

So the common science fiction trope of us encountering an alien intelligence may never happen.  If we meet other intelligent entities, it seems far more likely they’ll be ones we created rather than ones who evolved independently.  These might be technological AI (artificial intelligence) or they might be animal species we “uplifted”, that is, genetically modified to be more intelligent than evolution made them.

Cover of Children of Time

At the beginning of Adrian Tchaikovsky’s Children of Time, humans have begun terraforming a number of exoplanets for eventual habitation.  On one of these planets, the egotistical and misanthropic Doctor Avrana Kern plans to uplift a species of monkey.  The idea is to release the monkeys on a terraformed planet with lots of Earth life (including most animal species) along with a virus to guide the monkeys’ rapid evolution into sapience.  Eventually they would be guided by a scientist left behind in orbit, all in order for them to be ready to help human colonists when they eventually arrive.

But the scientist recruited for the guidance role turns out to be a religious fanatic who destroys the scientific mission, killing all the scientists and the monkeys.  Only the arrogant Kern herself survives, trapped in orbit, stuck in the guidance role she envisioned for another.  The uplift virus does get delivered into the planet’s biosphere, but without the monkeys it will have unforeseen consequences.  Shortly afterward, Kern loses all contact with humanity, receiving just enough clues to deduce that human civilization has destroyed itself in war.

Across the millennia, the virus, engineered to avoid working in mammalian species other than the monkeys it was programmed for, begins working in arthropod species, particularly spiders.  Gradually the spiders evolve into an intelligent species and begin building a civilization.  Eventually they realize that there is a Messenger in the Sky (in actuality the trapped scientist Avrana Kern, who has by now effectively merged with the AI managing her craft) and begin worshiping it.

Meanwhile on Earth, after several thousand years, civilization has slowly begun to recover, but the planet remains damaged, poisoned by biological weapons left over from the old wars.  It gradually becomes apparent to this new civilization, still a technological shadow of the old one, that the poisoning has doomed them to eventual extinction.  In desperation, they send out colonization sleeper ships to the terraformed worlds.

One of these ships, the Gilgamesh, has Kern’s world as its destination.  But when they arrive, they encounter a now deranged Kern who successfully chases them away with superior technology.  But not before they’re able to see a world that has been successfully terraformed, albeit one inhabited by “monsters” (the spiders).  They leave, but both they and Kern know their options are limited, and that they’ll be back.

Two story threads then run in parallel, one of the spiders and their climb into a civilization that may be able to protect their world from invaders, and another of humans desperately attempting to survive in space, preparing for what they hope will be a successful return to claim Kern’s world for their own.

This book explores a lot of philosophical concepts, such as humanity’s apparent need to always be at war with itself, the nature of religion, and the plausibility of uploading minds.  It also explores how an arachnid species might see the world, and how this worldview might affect the technologies and civilization it develops.  Attempted communications between the species show just how alien an intelligent arachnid species might be, even when it’s descended from the same biosphere as humanity.

The book is also a gripping story.  On the human side, the story is told from the point of view of Holsten Mason, a classicist on the Gilgamesh familiar with the history of the old destroyed civilization, who is often able to compare it with what he sees happening with his own society.  The Gilgamesh has received no signals from the other colony ships.  They appear to be the only surviving one, effectively making them humanity’s last chance, raising the stakes to survival of the species.

My only real gripe with this side of the story is that fuel, and the related logistics, are never mentioned.  The interstellar sleeper ship (and eventually generation ship), the Gilgamesh, is apparently limited to a cruising velocity of 10% light speed.  The reasons for a limit like that in space always come down to fuel efficiency.  Yet there is never any discussion of finding fuel for the ship when it’s in a solar system.  But this is really just a pet peeve on my part, a reaction to a common failing in space opera.  In fairness, Tchaikovsky mostly steers clear of discussing the drive technologies on the ships (except toward the end), so he does leave conceptual room for these issues to be handled behind the scenes.

On the spider side, Tchaikovsky, using an omniscient narrator, provides human names to the spiders, such as “Portia”, “Bianca”, and “Fabian”, while making it clear that these are placeholders for a species that communicates with vibrations made by their legs on web strands, for actual names that would be utterly alien to humans.  The human names are reused for various spiders with similar personalities throughout their civilization’s history.  The effect is that it provides a semblance of continuity in the narrative that actually works fairly well.

The book won the British Arthur C. Clarke Award for Best Novel, and has received wide acclaim.  Lionsgate is reportedly working on a film adaptation, although I wonder how they plan to handle the spiders.  The alien nature of the spiders works in the book because the story is told somewhat from their perspective, but it seems like a movie would have to make significant compromises.  It will be interesting to see how it’s handled.

Anyway, if you like hard(ish) space opera, and don’t mind reading a story with significant portions told from an alien perspective, I highly recommend this book.  In some ways, it reminded me of Vernor Vinge’s A Deepness in the Sky, which also has a spider civilization in it (although Vinge’s spiders are true aliens), and of the movie Battle for Terra, which has a conflict between desperate humans and a native species on their target world.  Children has enough original material that it never feels derivative, but if you enjoyed those other stories, you’ll probably enjoy this one.


The soul of the Roman Empire

The Roman Empire at its greatest extent
Image credit: Tataryn via Wikipedia

According to tradition, in the early days of ancient Rome, King Numa Pompilius established a religious institution: the Vestal Virgins.  The Vestal Virgins were chaste priestesses of Vesta, the goddess of home and hearth.  Their duty was to maintain the sacred flame in the temple of Vesta.  The Romans believed that as long as the sacred flame was maintained, Rome would prosper, but that if it should ever be untended, it would lead to Rome’s destruction.  The rites of the Vestal Virgins were faithfully maintained throughout the centuries.

Until 394 CE, when, on the order of the Christian emperor Theodosius, those rites were discontinued and the virgin priestesses dismissed.  For many pagans living at the time, this abandonment of an ancient order, along with the associated forsaking of the ancient gods of Rome, was why the western empire fell on hard times in the 400s CE, eventually collapsing completely by 476.  Rome itself was sacked in 410.  (Don’t get too spooked by this.  Rome was also sacked much earlier in its history, in 387 BCE, presumably despite the efforts of the Vestal Virgins.)

Since then, there have been many theories about why the western empire fell.  Of course, one plausible narrative is that the empire’s external enemies simply became strong enough to collectively overwhelm it.  Under this narrative, the invasion of the Huns in Europe was pivotal, along with the pressure the Eastern Empire was under from the Sasanian Empire that prevented it from sending aid to the west.  In this view, the fall of the Western Empire is more about the ferocity of the Huns than anything else.

But there’s always been a sense that the empire rotted from within.  Much of the speculation about the fall looks at events in the empire going as far back as its founding in 27 BCE.  But looking at developments in the first, second, or third centuries, when the empire continued to function for centuries afterward, has long struck me as little more than an excuse to indulge in moral grandstanding.  Often this type of speculation says more about those doing it than anything about ancient history.

The fact is that the Roman Empire was never an ideal state.  From the beginning it was a military dictatorship that lurched from one succession crisis to another, often resulting in civil war.  On many occasions the empire fragmented among multiple leaders proclaimed emperor by their local legions.

The real question isn’t what caused the empire to fall, but how it managed to hold together through all of these crises.  Civic virtue doesn’t appear to be what held it together.  Some other quality was important.  Something that must have changed in the later part of its history.  (At least in the western half.  The external enemy narrative seems more compelling for the Eastern Empire’s later fall in 1453 CE.)

I think through much of its history, there was a sense in the empire that it was civilization, while the rest of the world was alien and barbaric, or maybe that the Greco-Roman civilization was the only one worth having.  Perhaps another way of saying this is that the culture of the empire was very different from the neighboring regions.  This distinction between the empire and its neighbors, this shared identity, may have been enough, despite frequent upheavals, for the people of the empire to repeatedly piece it back together again.

Until the century from 376 to 476 CE.  During that period, the Western Empire lost control of its borders and countryside, and long before its “official” fall in 476, ceased to be an effective state.  What changed?

I think the answer is Christianity.  Now, this isn’t a polemic against the Christian faith.  Contrary to what Edward Gibbon and others might have thought, I doubt there was anything in Christian doctrine*, beliefs in the afterlife, or ethics that, in and of itself, weakened the empire.  To counter that suggestion, we only have to look at all the successful Christian societies that have existed since then, including, for many centuries, the Eastern Empire (aka the Byzantine Empire).

But I think there are a couple of important aspects of Christianity worth considering.  The first is that it was exclusive.  To be a Christian was to forsake all other gods and religions.  (Early Christians were often called “atheists” by their pagan contemporaries because they denied all but one of the gods.)  That was unusual in the ancient world.  Most pagan religions didn’t really care whether you worshiped other gods.  The only other known religion that did was Judaism.

But Judaism didn’t share the second important aspect of Christianity, its evangelistic nature.  Christianity was both exclusive and expansionist, a faith that encouraged adherents to find converts, one that saw its mission as bringing as many people into the fold as possible.  This combination meant that every time Christianity gained a convert, paganism lost an adherent.  In other words, these aspects of Christianity are what made pagan religions easy pickings for it.

Contrary to the traditional Christian narrative, there was nothing miraculous in the spread of Christianity.  Bart Ehrman, in his book The Triumph of Christianity, points out that it only took a modest growth rate per year, compounded across centuries, for Christianity to become a substantial minority of the empire’s population by the end of the third century, with millions of adherents.
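
To get a feel for how little growth that takes, here’s a quick back-of-the-envelope sketch.  The starting size and annual rate below are illustrative assumptions on my part, not Ehrman’s exact figures; the point is just that modest compounding over two and a half centuries gets you from thousands to millions.

```python
# Illustrative compound growth; the starting size and rate are
# assumptions for the sake of the example, not Ehrman's figures.
adherents = 1_000   # assumed community size around 40 CE
rate = 0.035        # assumed annual growth rate (~3.5%)
years = 260         # roughly 40 CE to 300 CE

final = adherents * (1 + rate) ** years
print(f"{final:,.0f}")  # on the order of millions
```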

Prior to that period, the empire had never persecuted Christians in any consistent, sustained manner, but that changed in the closing decades of that century and the opening years of the next.  Until then, Christianity had been an oddball fringe cult, but its growth was bringing it increasingly into mainstream society, along with a perception of its threat to the traditional cults.  The persecution begun under the emperor Diocletian in 303 CE would become known as the Great Persecution.

Students of history know that the persecutions ended in 313 CE under the first Christian emperor, Constantine.  For a few decades, the empire was more or less tolerant of multiple faiths.  But in the latter part of the fourth century that started to change, perhaps as a reaction to the last pagan emperor who tried to reverse Christianity’s ascent.  As the number of Christians increased, the empire became increasingly intolerant of paganism, culminating in emperor Theodosius declaring Christianity to be the only legitimate religion in 380 CE.

The roles from a century earlier were now reversed.  Paganism found itself persecuted, and engaged in a losing cultural battle.  According to Ehrman, by 400, half the empire was Christian.  But the other half remained pagan.  Consider what this must have done to the society, what the consequences must have been for its social cohesion.

And Ehrman points out that the western half of the empire was actually behind the east in its conversion rates.  In other words, there were more pagans in the west, indicating that proclamations of Christianity as the only legitimate religion were probably far more traumatic there than in the east.

Finally, consider that Christianity had already spread across the borders into the Germanic tribes.  This meant that when those (Christian) tribes were fleeing the Huns, looking for refuge inside the borders of the empire, many Christians within the empire may have felt more affinity with them than with Roman pagans.  The cultural distinction between the empire and its neighbors, along with the shared identity, at least in the west, was no more.  (There remained a strong cultural distinction in the east, particularly after the rise of Islam, which may be why it endured for another millennium.)

Of course, this is admittedly speculation on my part.  One of the problems with talking about the cause of a civilization’s collapse is that a society’s decline and fall is usually not a time when detailed and careful records are kept.  Ultimately, we may never know the real reason.  Indeed, it’s almost certainly wrong to talk in terms of any one reason when there were probably a multitude of causal factors.  But this explanation strikes me as more plausible than most I’ve heard.

Or we could just blame the Huns.

* Edited wording to address Steve Ruis’ point in the comments.


Altered Carbon

Several years ago, I read Richard K. Morgan’s Takeshi Kovacs novels about a future where people’s minds are recorded in a device (called a “stack”) implanted just below the brain stem, essentially providing a form of mind uploading, and allowing people to survive the death of their body.  Kovacs, the protagonist of the series, is an ex-soldier, mercenary, criminal, and all around bitter human being, who is used to inhabiting many different bodies.  The novels follow his grim adventures, first on Earth, and then on other planets.

This weekend, I binge watched Netflix’s adaptation of the first novel, Altered Carbon.

It’s been several years since I read the books, but from what I can remember, the series broadly follows the first book, although it adds new content and new characters to fill out the 10 episode arc, and borrows content from the other two books, which in the process makes the story more interesting.  But it does preserve the issues Morgan explores in the novel, about what having a society of people who can transfer to new bodies might look like, particularly one that retains sharp divisions between rich and poor.

The show does seem to moderate Morgan’s intense anti-religious sentiment.  I recall Kovacs being a staunch atheist in the books, and with the story told from his point of view, that outlook permeates.  But the show seems to take a more even handed approach, showing the issues devout Catholics in this future have with the idea of being technologically resurrected while still showing them as sympathetic characters.

As in the book, the reluctance of Catholics to be resurrected makes them uniquely vulnerable targets.  A non-Catholic citizen who is murdered can potentially be revived to testify against their murderer.  But with the Catholic stipulation that they are not to be revived, it means that when they are murdered, they are dead.  There is a debate that takes place in the background of the story on whether a law should be passed to allow law enforcement to revive anyone who is a victim of a crime, regardless of their religious preferences.

What the show doesn’t moderate, however, is the book’s grim noir character.  Kovacs seems more sympathetic than I recall him being in the books, more human and approachable, but the overall story’s very dark take on humanity remains.  This vision is dystopian, in a way that any Blade Runner fan should love.

Morgan’s version of mind uploading, where minds are recorded on the stack implanted in the body, preserves the jeopardy of the characters in the story.  Characters can be killed if their stack is destroyed (called “real death” in the story), and the stacks are frequently targeted in fight scenes.  Only the very rich are able to have themselves periodically backed up so that destruction of their stack doesn’t result in their real death.  (Although they have other vulnerabilities that get mentioned.)

The story also makes it clear that, although the technology is available, double-sleeving (being in two or more bodies at the same time) is illegal in this society, with the penalty being real death, although the reasons for this restriction are never discussed.  It seems to be one of many aspects of a repressive society.  (In the third book, I recall the suggestion that the Protectorate, the overall interstellar civilization in the books, is essentially holding back humanity by not allowing society to fully evolve with the technology.)

The show doesn’t explicitly address this, nor do I recall the books doing so, but the idea of the stack holding a person’s mind so that when it’s implanted between the brain and the spinal cord it takes control of the body, is actually very dark when you think about it.  It leaves open the disturbing possibility that the body’s original consciousness is still in there somewhere, but totally cut off from its body.

I think there are also some serious scientific issues with whether the technology as described would work.  For a device to really function the way the stacks are described, it would need to intercept all sensory input, much of which isn’t routed through the brain stem.  Vision, for example, at least detailed vision, goes straight to the thalamus and then the occipital lobe.  That said, the details are never discussed, so there’s enough room to imagine that the stacks are part of an overall technology harness that reaches deep into the brain.

Anyway, if you’re interested in seeing what a human society with mind uploading might look like, and don’t mind a lot of violence, language, and sexual content, then you might want to check it out.


What is knowledge?

In the discussion on the last post on measurement, the definition of knowledge came up a few times.  That dredged up long-standing thoughts I have about knowledge, which I’ve discussed with some of you before, but which I don’t think I’ve ever actually put in a post.

The ancient classic definition of knowledge is justified true belief.  This definition is simple and feels intuitively right, but it’s not without issues.  I think the effectiveness of a definition is in how well it enables us to distinguish between things that meet it or violate it.  In the case of “justified true belief”, its effectiveness hinges on how we define “justified”, “true”, and “belief”.

How do we justify a particular proposition?  Of course, this is a vast subject, with the entire field of epistemology dedicated to arguing about it.  But it seems like the consensus arrived at in the last 500 years, at least in scientific circles, is that both empiricism and rationalism are necessary, but that neither by themselves are sufficient.  Naive interpretations of observations can lead to erroneous conclusions.  And rationalizing from your armchair is impotent if you’re not informed on the latest observations.  So justification seems to require both observation and reason, measurement and logic.

The meaning of truth depends on which theory of truth you favor.  The one most people jump to is correspondence theory, that what is true is what corresponds with reality.  The problem with this outlook is that it only works from an omniscient viewpoint, which we never have.  In the case of defining knowledge, it sets up a loop: we know whether a belief is knowledge by knowing whether the belief is true or false, which we know by knowing whether the belief about that belief is true or false, which we know by…  Hopefully you get the picture.

We could dispense with the truth requirement and simply define knowledge as justified belief, but that doesn’t seem right.  Prior to Copernicus, most natural philosophers were justified in saying they knew that the sun and planets orbit the earth.  Today we say that belief was not knowledge.  Why?  Because it wasn’t true.  How do we know that?  Well, we have better information.  You could say that our current beliefs about the solar system are more justified than the beliefs of 15th century natural philosophers.

So maybe we could replace “justified true belief” with “currently justified belief” or perhaps “belief that is justified and not subsequently overturned with greater justification.”  Admittedly, these aren’t nearly as catchy as the original.  And they seem to imply that knowledge is a relative thing, which some people don’t like.

The last word, “belief”, is used in a few different ways in everyday language.  We often say “we believe” something when we really mean we hope it is true, or we assume it’s true.  We also often say we “believe in” something or someone when what we really mean is we have confidence in it or them.  In some ways, this usage is an admission that the proposition we’re discussing isn’t very justified, but we want to sell it anyway.

But in the case of “justified true belief”, I think we’re talking about a version that says our mental model of the proposition is that it is true.  In this version, if we believe it, if we really believe it, then don’t we think it’s knowledge, even if it isn’t?

Personally, I think the best way to look at this is as a spectrum.  All knowledge is belief, but not all belief is knowledge, and it isn’t a binary thing.  A belief can have varying levels of justification.  The more justified it is, the more it’s appropriate to call it knowledge.  But at any time, new observations might contradict it, and it would then retroactively cease to have ever been knowledge.

Someone could quibble here, making a distinction between ontology and epistemology, between what reality is and what we can know about reality.  Ontologically, it could be argued that a particular belief is or isn’t knowledge regardless of whether we know it’s knowledge.  But we can only ever have theories about ontology, theories that are always subject to being overturned.  And a rigid adherence to a definition that requires omniscience to ever know whether a belief fits the bill effectively makes it impossible for us to know whether that belief is knowledge.

Seeing the distinction between speculative belief and knowledge as a spectrum pragmatically steps around this issue.  But again, this means accepting that what we label as knowledge is, pragmatically, something relative to our current level of information.  In essence, it makes knowledge belief that we currently have good reason to feel confident about.

What do you think?  Is there a way to avoid the relative outlook?  Is there an objective threshold where we can authoritatively say a particular belief is knowledge?  Is there an alternative definition of knowledge that avoids these issues?


Are there things that are knowable but not measurable?

It’s a mantra for many scientists, not to mention many business managers, that if you can’t measure it, it’s not real.  On the other hand, I’ve been told by a lot of people, mostly non-scientists, and occasionally humanistic scholars including philosophers, that not everything knowable is measurable.

But what exactly is a measurement?  My intuitive understanding of the term fits, more or less, with this Wikipedia definition:

Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events.

There’s a sense that measurement is a precise thing, usually done with standard units, such as kilograms, meters, or currency denominations.  But Doug Hubbard argues in an interview with Julia Galef, as well as in his book How to Measure Anything, that measurement should be thought of as a reduction in uncertainty.  More precisely, he defines measurement as:

A quantitatively expressed reduction of uncertainty based on one or more observations.

Hubbard, Douglas W. How to Measure Anything: Finding the Value of Intangibles in Business (p. 31). Wiley. Kindle Edition.

The observation part is crucial.  Hubbard argues that, for anything we care about, there is a difference between what we’ll observe if that thing happens and what we’ll observe if it doesn’t.  Figure out this difference, define it carefully, and you have the basis to measure anything, at least anything knowable in this world.  The more the differences can be defined with observable intermediate stages, the more precise the measurement can be.

One caveat: just because it’s possible to measure anything knowable doesn’t mean it’s always practical, that it’s cost effective to do so.  Hubbard spends a lot of time in the early parts of his book discussing how to figure out the value of information, to decide whether the cost of measuring something is worth it.

In many cases, precise measurement may not be practical, but not all measurements must be precise in order to be useful.  Precision is always a matter of degree since we never get 100% accurate measurements, not even in the most sophisticated scientific experiments.  There’s always a margin of error.

Measuring some things may only be practical in a very coarse-grained manner, but if it reduces uncertainty, then it’s still a measurement.  If we have no idea what’s currently happening with something, then any observations which reduce that uncertainty count as measurements.  For example, if we have no idea what the life expectancy is in a certain locale, and we make observations which reduce the range to, say, 65-75 years, we may not have a very precise measurement, but we still have more than what we started with.

Even in scenarios where only one observation is possible, the notorious sample of one, Hubbard points out that the probability of that one sample being representative of the population as a whole is 75%.  (This actually matches my intuitive sense of things, and will make me a little more confident next time I talk about extrapolating possible things about extraterrestrial life using only Earth life as a guide.)
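
For anyone skeptical of that 75% figure, here’s a minimal simulation of one way to read the claim: if the population proportion is completely unknown (uniformly distributed), a single random draw will fall in the majority group about 75% of the time.  This is my own toy sketch of the idea, not code from Hubbard’s book.

```python
import random

# Toy check of the single-sample claim: with a completely unknown
# population proportion, how often does one random draw belong to
# the majority group?
trials = 1_000_000
hits = 0
for _ in range(trials):
    p = random.random()                     # unknown fraction with some trait
    sample_has_trait = random.random() < p  # one observation
    majority_has_trait = p > 0.5            # which group is actually the majority
    if sample_has_trait == majority_has_trait:
        hits += 1

print(hits / trials)  # converges to about 0.75
```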

So, is Hubbard right?  Is everything measurable?  Or are there knowable things that can’t be measured?

One example I’ve often heard over the years is love.  You can’t measure, supposedly, whether person A loves person B.  But using Hubbard’s guidelines, is this true?  If A does love B, wouldn’t we expect their behavior toward B to be significantly different than if they didn’t?  Wouldn’t we expect A to want to spend a lot of time with B, to do them favors, to take care of them, etc?  Wouldn’t that behavior enable us to reduce the uncertainty from 50/50 (completely unknown) to knowing the answer with, say, an 80% probability?

(When probabilities are mentioned in these types of discussions, there’s almost always somebody who says that the probabilities here can’t be scientifically ascertained.  This implies that probabilities are objective things.  But, while admitting that philosophies on this vary, Hubbard argues that probabilities are from the perspective of an observer.  Something that I might only be able to know with a 75% chance of being right, you may be able to know with 90% if you have access to more information than I do.)

Granted, it’s conceivable for A to love B without showing any external signs of it.  We can never know for sure what’s in A’s mind.  But remember that we’re talking about knowable things.  If A loves B and never gives any behavioral indication of it (including discussing it), is their love for B knowable by anybody but A?

Another example that’s often put forward is the value of experience for a typical job.  But if experience does add value, people with it should perform better than those without it in some observable manner.  If there are quantifiable measurements of how well someone is doing in a job (productivity, sales numbers, etc), the value of their experience should show up somewhere.

But what other examples might there be?  Are there ones that actually are impossible to find a conceivable measurement for?  Or are we only talking about measurements that are hopelessly impractical?  If so, does allowing for very imprecise measurement make it more approachable?


Could a neuroscientist understand a microprocessor? Is that a relevant question?

A while back, Julia Galef on Rationally Speaking interviewed Eric Jonas, one of the authors of a study that attempted to use neuroscience techniques on a simple computer processor.

The field of neuroscience has been collecting more and more data, and developing increasingly advanced technological tools in its race to understand how the brain works. But can those data and tools ever yield true understanding? This episode features neuroscientist and computer scientist Eric Jonas, discussing his provocative paper titled “Could a Neuroscientist Understand a Microprocessor?” in which he applied state-of-the-art neuroscience tools, like lesion analysis, to a computer chip. By applying neuroscience’s tools to a system that humans fully understand (because we built it from scratch), he was able to reveal how surprisingly uninformative those tools actually are.

More specifically, Jonas looked at how selectively removing one transistor at a time (effectively creating a one-transistor-sized lesion) affected the behavior of three video games: Space Invaders, Donkey Kong, and Pitfall.  The idea was to see how informative correlating a lesion with a change in behavior, a technique often used in neuroscience, would be in understanding how the chip generated game behavior.

As it turned out, not very informative.  From the transcript:

But we can then look on the other side and say: which transistors were necessary for the playing of Donkey Kong? And when we do this, we go through and we find that about half the transistors actually are necessary for any game at all. If you break that, then just no game is played. And half the transistors if you get rid of them, it doesn’t appear to have any impact on the game at all.

There’s just this very small set, let’s say 10% or so, that are … less than that, 3% or so … that are kind of video game specific. So there’s this group of transistors that if you break them, you only lose the ability to play Donkey Kong. And if you were a neuroscientist you’d say, “Yes! These are the Donkey Kong transistors. This is the one that results in Mario having this aggression type impulse to fight with this ape.”
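
To make the logic of the experiment concrete, here’s a toy sketch of lesion-and-correlate analysis.  The components, counts, and game dependencies below are entirely made up by me, not taken from the paper; the point is just to show how this kind of analysis ends up labeling a handful of parts “Donkey Kong transistors” while saying nothing about how the games actually work.

```python
# Made-up "chip": every game needs a shared core, plus a few game-specific parts.
shared_core = {f"t{i}" for i in range(50)}    # breaking any of these breaks every game
game_specific = {
    "Donkey Kong":    {"t50", "t51", "t52"},
    "Space Invaders": {"t53", "t54"},
    "Pitfall":        {"t55", "t56"},
}
unused = {f"t{i}" for i in range(57, 100)}    # no observable effect on any game
all_transistors = shared_core | unused | set().union(*game_specific.values())

def broken_games(lesioned):
    """Which games fail after removing this one transistor?"""
    if lesioned in shared_core:
        return set(game_specific)
    return {game for game, parts in game_specific.items() if lesioned in parts}

# Lesion each transistor in turn and correlate the lesion with behavior.
for t in sorted(all_transistors):
    broken = broken_games(t)
    if len(broken) == 1:
        # The tempting (and misleading) conclusion: a "Donkey Kong transistor".
        print(f"{t}: only {broken.pop()} fails -> looks game specific")
```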

While I think Jonas makes an important point, one that just about any reputable neuroscientist would agree with, that neuroscience is far from having a comprehensive understanding of how brains generate behavior, and while his actual views are quite nuanced, I think many people are overselling the results of this experiment.  There’s a sentiment that all the neuroscience work currently being done is worthless, which I think is wrong.

The issue, which Jonas accepts but then largely dismisses, is in the differences we think we know about how brains work versus how computer chips work, specifically the hardware / software divide.  When we run software on a computer, we’re actually using layered machinery.  On one level is the hardware, but on another level, often just as sophisticated, if not more so, is the software.

To illustrate this, consider the two images below.  The first is the architecture of the old Intel 80386DX processor.  The second is the architecture of one of the most complicated software systems ever built: Windows NT.  (Click on either image to see them in more detail, but don’t worry about understanding the actual architectures.  I’m not going down the computer science rabbit hole here.)

Architecture of the Intel 80386DX processor.
Image credit: Appaloosa via Wikipedia

Architecture of Windows NT
Image credit: Grn wmr via Wikipedia

The thing to understand is that the second system is built completely on the first.  If it occurred in nature, we’d probably consider the second system to be emergent from the first.  In other words, the second system is entirely a category of actions of the first system.  The second system is what the first system does (or more accurately, a subset of what it can do).

This works because the first system is a general purpose computing machine.  Windows is just one example of vast ephemeral machines built on top of general computing ones.  Implementing these vast software machines is possible because the general computing machine is very fast, roughly a million times faster than biological nervous systems.  This is why virtually all artificial neural networks, until recently, were implemented as software, not in hardware (as they are in living systems).

However, a performance optimization that always exists for engineers who control both the hardware and software of a system is to implement functionality in hardware.  Doing so often improves performance substantially, since it moves that functionality down to a more primal layer.  This is why researchers are now starting to implement neural networks at the hardware level.  (We don’t implement everything in hardware because doing so would require a lot more hardware.)

Now, imagine that the only hardware an engineer had was a million times slower than current commercial systems.  The engineer, tasked with creating the same overall systems, would be forced to optimize heavily by moving substantial functionality into the hardware.  Much more of the system’s behavior would then be modules in the actual hardware, rather than modules in a higher level of abstraction.

In other words, we would expect that more of a brain’s functionality would be in its physical substrate, rather than in some higher abstraction of its behavior.  As it turns out, that’s what the empirical evidence of the last century and a half of neurological case studies shows.  (The current wave of fMRI studies is only confirming this, and doing so with more granularity.)

Jonas argues that we can’t be sure that the brain isn’t implementing some vast software layer.  Strictly speaking, he’s right.  But the evidence we have from neuroscience doesn’t match the evidence he obtained by lesioning a 6502 processor.  In the case of brains, lesioning a specific region very often leads to specific function loss.  If the brain were a general purpose computing system, we would expect results similar to those with the 6502, but we don’t get them.

Incidentally, lesioning a 6502 to see the effect it has on, say, Donkey Kong, is a mismatch between abstraction layers.  Doing so seems more equivalent to lesioning my brain to see what effect it has on my ability to play Donkey Kong, rather than my overall mental capabilities.  I suspect half the lesions might completely destroy my ability to play any video games, and many others would have no effect at all, similar to the results Jonas got.

Lesioning the 6502 to see what deficits arise in its general computing functionality would be a much more relevant study.  This recognizes that the 6502 is a general computing machine, and should be tested as one, just as testing for brain lesions recognizes that a brain is ultimately a movement decision machine, not a general purpose computing one.  (The brain is still a computational system, just not a general purpose one designed to load arbitrary software.)

All of which is to say, while I think Jonas’ point about neuroscience being very far from a full understanding of the brain is definitely true, that doesn’t mean the more limited levels of understanding it is currently garnering are useless.  There’s a danger in being too rigid or binary in our use of the word “understanding”.  Pointing out how limited that understanding is may have some cautionary value, but it ultimately does little to move the science forward.

What do you think?  Am I just rationalizing the difference between brains and computer chips (as some proponents of this experiment argue)?  Is there evidence for a vast software layer in the brain?  Or is there some other aspect of this that I’m missing?
