The problem with philosophical thought experiments

James Wilson has an article up at Aeon, looking at the trolley problem and other ethical and philosophical thought experiments.  One of the things he discusses is the notion, held by many philosophers and many fans of particular thought experiments, that they’re sort of like a scientific experiment.  It’s not that unusual for someone philosophically inclined to tell me that X is true, and cite a thought experiment as evidence.

Many of you already know that I have serious issues with this view of thought experiments.  I don’t think a philosophical thought experiment tells us anything about the external world or reality overall, and the notion that they do is fairly pernicious.  It gives people misplaced confidence in a notion based on nothing but a concurring opinion from the author of the thought experiment.

In many ways, thought experiments demonstrate the power of narrative.  If you want to sell people on an idea, tell them a story where the idea is true.  A thought experiment does this.  The most memorable ones can even have characters with names in them.  Would Mary’s Room or the Euthyphro Dilemma have the same punch if the key players weren’t named?

Now, some may make a comparison with all the Alice and Bob type descriptions used in physics.  But these narratives are almost always used in a pedagogical fashion, to get across a concept that has already been worked out mathematically, and may have empirical evidence backing it up.  In these cases, the narrative isn’t itself the main argument; it’s just a vehicle to get a concept across in a non-technical fashion.

There have been famous thought experiments in science that were used as arguments.  Schrödinger’s Cat comes to mind.  Its original use was as a reductio ad absurdum, similar to Einstein’s “spooky action at a distance” argument.  But reality turned out to be absurd.

Anyway, philosophical thought experiments typically only have their narrative.  Does that mean they’re useless?  I don’t think so.  But we should understand their limitations.  All they can do, really, is clarify people’s existing intuitions.  That can be pretty useful, fulfilling the role of what Daniel Dennett calls “intuition pumps.”  But that’s basically it.

So ethical thought experiments may tell us about people’s ethical intuitions (although even here, check out Wilson’s piece for many of the issues), but they don’t fundamentally tell us what those ethics should be.  Likewise, the Chinese Room, Mary’s Room, and philosophical zombies don’t tell us anything about their subject matter.  They only flush out people’s intuitions about those subjects.

Unless of course I’m missing something.

Pain is information, but what is information?

From an evolutionary standpoint, why does pain exist?  The first naive answer most people reach for is that pain exists to make us take action to prevent damage.  If we touch a hot stove, pain makes us pull our hand back.

But that’s not right.  When we touch a hot surface, nociceptors in our hand send signals to the spinal cord, which often responds with a reflexive reaction, such as a withdrawal reflex.  When the signal makes it to the brain, further automatic survival action patterns may be triggered, such as reflexively scrambling to get away.

But all of this can happen before, or independent of, the conscious experience of pain.  So why then do we have the experience itself?  It isn’t necessarily to motivate immediate action.  The reflexes and survival circuitry often take care of that.

I think the reason we feel pain is to motivate future action.  Feeling pain dramatically increases the probability that we’ll remember what happens when we touch a hot stove, that we’ll learn it’s a bad move.  If the pain continues, it also signals a damaged state which needs to be taken into account in planning future moves.

Pain, then, is information: information communicated to the reasoning parts of the brain, serving as part of the motivation to learn or to engage in certain types of planning.

People often dislike the conclusion that pain, or any other mental quality, is information.  It seems like it should be something more.  This dislike is often bundled with an overall notion that consciousness can’t be just information processing.  What’s needed, say people like John Searle and Christof Koch, are the brain’s causal powers.

But I think this reaction comes from an unproductive conception of information.

I’ve often resisted defining “information” here on the blog.  Like “energy”, it’s a very useful concept that is devilishly hard to define in a manner that addresses all the ways we use it.

Many people reach for the definition from Claude Shannon’s information theory: information is reduction in uncertainty.  That definition is powerful when the focus is on the transmission of information.  (Which, of course, is what Shannon was interested in.)  But when I think about something like DNA, I wonder what uncertainty is being reduced for the proteins that transcribe it into RNA?  Or the ones that replicate it during cell division?
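
To make Shannon’s definition concrete, here’s a minimal sketch in Python, using a toy eight-outcome source I’ve made up for illustration.  Uncertainty is entropy in bits, and an observation that narrows eight equally likely possibilities down to two conveys exactly two bits.

```python
import math

def entropy(probs):
    """Shannon entropy (uncertainty) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

before = [1/8] * 8   # eight equally likely possibilities: 3 bits of uncertainty
after = [1/2, 1/2]   # an observation rules out all but two: 1 bit remains

print(entropy(before) - entropy(after))  # information gained: 2.0 bits
```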

Historically, when pressed for my own definition, I’ve offered something like: patterns that, due to their causal history, can have effects in a system.  While serviceable, it’s a bit awkward and not something I was ever thrilled with.

Not that long ago, in a conversation about information in the brain, philosopher Eric Schwitzgebel argued simply that information is causation.  The more I think about this statement, the more I like it.  It seems to effectively capture a lot of the above in a very simple statement.  It also seems to capture the way the word is used from physics, to Shannon information, to complex IT systems.

Information is causation.

This actually fits with something many neuroscientists like to say: that information is a difference that makes a difference.

This means an information processing system is effectively a system of concentrated causality, a causal nexus.  The brain in particular could be thought of as a system designed to concentrate causal forces for the benefit of the organism.  It also means that saying it’s the causal powers that matter rather than the information is a distinction without a difference.

The nice thing about this definition is, instead of saying pain is information, we can say that pain is causation.  Maybe that’s easier to swallow?

What do you think?  Is there something I’m missing that distinguishes pain from information?  Or information from causation?  If so, what?

Is the ultimate nature of reality mental?

Philosopher Wilfrid Sellars had a term for the world as it appears, the “manifest image.”  This is the world as we perceive it.  In it, an apple is an apple, something red or green with a certain shape, a range of sizes, a thing that we can eat, or throw.

The manifest image can be contrasted with the scientific image of the world.  Where the manifest image has colors, the scientific one has electromagnetic radiation of certain wavelengths.  Where the manifest image has solid objects, like apples, the scientific image has mostly empty space, with clusters of elementary particles, held together in configurations due to a small number of fundamental interactions.

The scientific image is often radically different from the manifest image, although how different it is depends on what level of organization is being examined.  For many purposes, including scientific ones, the manifest image, which is itself a predictive theory of the world at a certain level of organization, works just fine.  For example, an ethologist, someone who studies animal behavior, can generally do so without having to concern themselves about quantum fields and their interactions.

But if the manifest image of the world is how it appears to us, how do we develop the scientific ones?  After all, we only ever have access to our own subjective experience.  We never get direct access to anything else.

The answer is that we start with those conscious experiences, sensory experiences of the world, and we work to develop models, theories, of how those experiences relate to each other.  (We sometimes forget that “empiricism” is just another word for “experience.”  One comes from Greek, the other Latin.)  We judge these theories by how accurately they’re able to predict future experiences.  It’s the only real measure of a theory, or any kind of knowledge, we ever get.

But often developing these theories, these models, requires that we posit aspects of reality that we can’t perceive.  For example, no one has ever seen an electron.  We take electrons to exist because they’re crucial to many theories.  But they’re most definitely not part of the manifest image.

So the theories give us a radically different picture of the world from what we perceive.  Often those theories force us to conclude that our senses, our actual conscious experience, isn’t showing us reality.  The only reason we take such theories seriously, and give them precedence over our direct sensory experience, is because they accurately predict future conscious experiences.

Of course, there are serious issues with many of these theories.  Two of the most successful, quantum mechanics and general relativity, aren’t compatible with each other.  And there’s the measurement problem in quantum mechanics, the fact that everything we observe tells us that there is a quantum wave, until we measure it, then everything tells us there’s just a localized particle.

These are truly hard problems, and solving them is forcing scientists to consider theories that posit a reality even more removed from the manifest image.  It’s why we get things like brane theory, the many worlds interpretation of quantum physics, or the mathematical universe hypothesis.  If any of these models are true, then the ultimate nature of reality is utterly different from the manifest image.

But as stark as the distinctions between the manifest and scientific images are or could be, it’s not enough for some.  Donald Hoffman is a psychologist and philosopher whose views I’ve discussed before.  Hoffman has a new book that he’s promoting, and it’s putting his views back into the public square.  This week I listened to a podcast interview he did with Michael Shermer.

Hoffman’s main point is that evolution doesn’t prepare us to accurately perceive reality, and that reality therefore can be very different from what our perceptions tell us.  But Hoffman is going much further than the typical manifest / scientific image distinction.  He contends that there isn’t even a physical reality out there.  There are only minds.  Our perception of reality is a “user interface” that enables access to something utterly alien in nature.  Even the various scientific images don’t reflect reality.  These are just more user interfaces.

What then is the ultimate reality?  Hoffman appears to believe it’s consciousness all the way down.  In my last post on Hoffman, I labeled him an idealist, in the sense of thinking that the primary reality is mental rather than physical, and I still think that’s the right description.

In the Shermer interview, though, he says he does think there is an objective reality.  Based on what I’ve heard, he sees this objective reality existing because there’s a universal mind of some sort outside of our minds thinking about it, a view that seems similar to the subjective idealism (and theology) of George Berkeley, where objective things exist because God is thinking about them.

How does Hoffman reach this conclusion?  He starts with the fact that natural selection doesn’t seem to favor an accurate perception of reality, just an effectively adaptive one.  He tests this using mathematical simulations which reportedly tell him that there’s zero probability of natural selection selecting for accuracy.

Here we come to my issues with this idea.  Hoffman is using an empirical theory (natural selection) along with empirically observed results of simulations, to conclude that empirical observations aren’t telling us about reality.  But if all of reality is an illusion, then how can he trust his own observations?  In the interview, he assures Shermer that he avoids this undercutting trap, but if so, it doesn’t seem evident to me.

The second issue is that Hoffman is taking this insight and apparently making a major logical leap to conclude that it leads to much more than the manifest vs scientific image distinction.  The established scientific images exist because they’re part of predictive models.  Extending these images to another level requires additional models and evidence, and those models must explain the successes of the previous ones.  Hoffman owns up to this requirement, but admits it hasn’t been met yet.

My third issue is that Hoffman’s stated motivation for positing this idealism is to solve the hard problem of consciousness.  Per the hard problem, there’s no way to relate physics to consciousness, so maybe the solution is to do away with all physics.

But there is an easier solution to the hard problem, one that doesn’t require radically overturning our view of reality.  That solution is to recognize what many psychological studies tell us, that introspection is unreliable, including our introspection of experience.

This too is a sharp distinction between the manifest image and the scientific view.  The problem, of course, is that this version isn’t emotionally comforting.  Like Copernicanism, natural selection, relativity, and quantum physics, it takes us ever further from any central role in reality.

Which brings me to my fourth issue with Hoffman’s view.  It’s a radical view that’s emotionally comforting, seemingly positing that it’s all about us after all.  Of course, just because it’s comforting doesn’t mean it’s wrong, but it does mean we need to be more on guard than usual against fooling ourselves.

I’m a scientific instrumentalist.  While I generally think our scientific theories are telling us about reality, I think to “tell us about reality” is to be a useful prediction instrument.  They are one and the same.  There is no understanding of reality which is not such an instrument.

We can’t rule out idealism.  We can only note that any feasible version of it has to meet all the predictive successes of physicalism.  Once it does, it has to then justify any additional assumptions it makes.  It’s not clear to me that we then have anything other than physicalism by another name, or perhaps a type of neutral monism that amounts to the same thing.

But maybe I’m missing something?

Platonism and the non-physical

On occasion, I’ve been accused of being closed-minded.  (Shocking, I know.)  Frequently the reason is not seriously considering non-physical propositions, a perception of rigid physicalism.  However, as I’ve noted before, I’m actually not entirely comfortable with the “physicalist” label (or “materialist”, or other synonyms or near synonyms).  While it’s fairly accurate as to my working assumptions, it actually doesn’t represent a fundamental commitment.

My actual commitment is empiricism.  By “empiricism” here, I don’t necessarily mean physical measurement, but conscious experience, specifically reproducible or verifiable experience, and inferred theories that can predict future experiences, with an accuracy better than alternate theories, or at least better than random chance.  I do generally assume physicalism is true, mainly because many physical propositions seem able to meet this standard, where non-physical ones seem to struggle with it.

But that raises a question.  Are there any non-physical propositions that do meet the standard?  It depends on what we’re willing to consider non-physical.   In the Chalmers post a few weeks ago, I noted that we could interpret his views in a platonic or abstract fashion, in which case the differences between him and a functionalist might collapse into differences in terminology.  Although as I also noted, neither Chalmers nor Dennett would agree.

And this bridge between the views depends on your attitude toward platonism.  Note that “platonism” with a small ‘p’ doesn’t really refer to the philosophy of Plato, but to a modern outlook that regards abstract concepts as real.  This is sometimes described as real in a separate platonic realm, which many misinterpret as meaning a physical existence in a parallel universe or something.

But in modern platonism, abstract objects are held to have no spatio-temporal properties, and to be causally inert.  If they have an existence, it is one completely separate from time and space.  It’s not even right to say they’re “outside” of time and space, because that implies a physical location, something they don’t have.

What are examples of these abstract objects?  Numbers, mathematical relations, properties such as redness, structures, patterns, etc.  Under platonism, these things are held to have a non-physical existence.  For the Chalmers outlook, the property one is important since he often refers to his view as property dualism.

But is platonism true?  One of the strongest arguments for it appears to be the way we talk about abstract objects.  We refer to concepts like “7” as though they have an existence separate and apart from a pattern of seven objects.  We refer to structures and properties in much the same way.  The fact that we can discuss “redness” coherently seems to imply we accept that property as having an independent existence.

But this assumes that analyzing language tells us anything meaningful about what’s real.  At best, it might just show our intuitions, intuitions we might not even endorse on reflection.  For instance, we refer to things like the sun “rising” and “setting” all the time without seriously thinking that the sun is moving around us (at least since Copernicus and Galileo).  It might be that all this usage should be viewed as metaphorical, and abstract objects as “useful fictions”.

But the dividing line between a useful fiction and a real concept seems like a blurry one.  The more useful a concept is, particularly one useful in an epistemic fashion, the harder it seems to dismiss as a fiction.  We reach a point where we have to invest a lot of energy in explaining why it’s not real.

That said, a strong case against platonism is also an epistemic one.  If minds exist in this universe, and abstract objects exist without any spatio-temporal aspects, and are causally inert, how can we know about them?  We could say the mind is capable of accessing abstract objects, but this implies something super-physical about it.  The relevant physics appear to be causally closed, and this proposition wouldn’t meet the empiricism criteria above.

The more usual defense is that we infer the existence of abstract objects by what we observe in the physical world, by the patterns and relations we see there.  But if that’s how we come to know about abstract objects, why do we actually need the separate abstract objects themselves?  Why can’t we just get by with the models in our mind and the physical patterns they’re based on?

This last point has long been what makes me leery of platonism.  A ruthless application of Occam’s razor seems to make it disappear in a flash of parsimony.  It doesn’t seem necessary.  And given how far some people have tried to run with it, this seems important.

All that said, this is a case where I’m not confident in my conclusion, at least not yet.  I still wonder if its pragmatic value might not imply ontology.  Everything in physics above the level of fundamental forces and quantum fields seems to exist as structure and function, a pattern of lower level constituents.

In many cases, these structures and functions, such as wings or the shape of fish, seem convergent.  These convergences could be seen as implying that the converged structure has an independent reality.  Of course, these are optimal energy structures that emerge from the laws of physics, but then do the laws themselves have an independent reality aside from the physical patterns and regularities?  Are they themselves abstract entities?

And the fact that large portions of the mathematics profession are mathematical platonists gives me pause.  Mathematicians seem convinced that they’re discovering something, not developing tools in some nominalist sense, although the dividing line between invention and discovery itself seems pretty blurry.

If platonism is true, then we have a non-physical reality, and properties such as consciousness (the property of being conscious) could be said to exist non-physically in a platonic sense.  To be sure, this is a far more limited sense of non-physical than many advocates of dualism envision.

Interestingly, Chalmers himself does not appear to be a platonist, but appears to consider the question of the existence of abstract objects to have no fact of the matter answer, espousing a view called ontological anti-realism.  Given my own instrumentalist leanings, I may have to investigate this view.  But it also implies my attempt at steel-manning his argument is probably fruitless.

What do you think?  Do you see other arguments for platonism?  Or against?  Or is the whole thing just hopeless navel-gazing?

Inflate and explode, or deflate and preserve?

Philosopher Eric Schwitzgebel has an interesting post up criticizing the arguments of illusionists, those who have concluded that phenomenal consciousness is an illusion.

Here’s a way to deny the existence of things of Type X. Assume that things of Type X must have Property A, and then argue that nothing has Property A.

If that assumption is wrong — if things of Type X needn’t necessarily have Property A — then you’ve given what I’ll pejoratively call an inflate-and-explode argument. This is what I think is going on in eliminativism and “illusionism” about (phenomenal) consciousness. The eliminativist or illusionist wrongly treats one or another dubious property as essential to “consciousness” (or “qualia” or “what-it’s-like-ness” or…), argues perhaps rightly that nothing in fact has that dubious property, and then falsely concludes that consciousness does not exist or is an illusion.

Schwitzgebel is talking about philosophers like Keith Frankish, Patricia Churchland, and Daniel Dennett.  I did a post a while back discussing Frankish’s illusionism and the debate he had arranged in the Journal of Consciousness Studies about that outlook.

As I noted back then, I largely agree with the illusionists that the idea of a form of consciousness separate and apart from the information processing in the brain is a mistaken one, but I remain uncomfortable saying something like, “Phenomenal consciousness doesn’t exist.”   I have some sympathy with the argument that if it is an illusion, then the illusion is the experience.  I much prefer pointing out that introspection is unreliable, particularly in trying to understand consciousness.

But as some of you know from conversation on the previous post, I have to admit that I’m occasionally tempted to just declare that the whole consciousness concept is an unproductive one, and that we should just move on without it.  But I also have to admit that, when I’m thinking that way, I’m holding what Schwitzgebel calls “the inflated” version of consciousness in my mind.  When I think about the more modest concept, I continue to see it as useful.

But this leads to a question.  Arguably when having these discussions, we should use words in the manner that matches the common understandings of them.  If we don’t do that, clarity demands that we frequently remind our conversation partners which version of the concept we’re referring to.  The question is, which version of consciousness matches most people’s intuitive sense of what the word means?  The one that refers to the suite of capabilities such as responsiveness, perception, emotion, memory, attention, and introspection?  Or the version with dubious properties such as infallible access to our thoughts, or being irreducible to physical processes?

I think consciousness is one of those terms where most people’s intuitions about it are inconsistent.  In most day to day pragmatic usage, the uninflated version dominates.  And that’s the version described in dictionary definitions.  But actually start a conversation specifically about consciousness, and the second version tends to creep in.

(I’ve noticed a similar phenomenon with the concept of “free will.”  In everyday language, it’s often taken as a synonym for “volition”, but talk specifically about the concept itself and the theological or libertarian version of free will tends to arise.)

So, are Frankish and company really “inflating” the concept of phenomenal consciousness when they call it an illusion?  It depends on your perspective.

But thinking about the practice Schwitzgebel is criticizing, I think we also have to be cognizant of another one that can happen in the opposite direction: deflate and preserve.  In other words, people sometimes deflate a concept until it is more defensible and easier to retain.

Atheists often accuse religious naturalists of doing this with the concept of God, accusing them of deflating it to something banal such as “the ground of being” or a synonym for the laws of nature.  And hard determinists often accuse compatibilists of doing it with “free will.”  I’ve often accused naturalistic panpsychists of using an excessively deflated concept of consciousness.  And I could see illusionists accusing Schwitzgebel of doing it with phenomenal consciousness.

Which is to say, whether a concept is being inflated or deflated is a matter of perspective and definition.  And definitions are ultimately relative, which makes arguing about them unproductive.  Our only anchor seems to be common intuitions, but those are often inconsistent, often even in the same person.

I come back to the requirements for clarity.  For example, in the previous post, I didn’t say consciousness as a whole doesn’t exist, but was clear that I was talking about a specific version of it.  For me, that still seems like the best approach, but I recognize it will always be a judgment call.

Unless of course I’m missing something?

What is knowledge?

In the discussion on the last post on measurement, the definition of knowledge came up a few times.  That dredged up long-standing thoughts I have about knowledge, which I’ve discussed with some of you before, but that I don’t think I’ve ever actually put in a post.

The ancient classic definition of knowledge is justified true belief.  This definition is simple and feels intuitively right, but it’s not without issues.  I think the effectiveness of a definition is in how well it enables us to distinguish between things that meet it or violate it.  In the case of “justified true belief”, its effectiveness hinges on how we define “justified”, “true”, and “belief”.

How do we justify a particular proposition?  Of course, this is a vast subject, with the entire field of epistemology dedicated to arguing about it.  But it seems like the consensus arrived at in the last 500 years, at least in scientific circles, is that both empiricism and rationalism are necessary, but that neither by itself is sufficient.  Naive interpretations of observations can lead to erroneous conclusions.  And rationalizing from your armchair is impotent if you’re not informed on the latest observations.  So justification seems to require both observation and reason, measurement and logic.

The meaning of truth depends on which theory of truth you favor.  The one most people jump to is correspondence theory, that what is true is what corresponds with reality.  The problem with this outlook is that it only works from an omniscient viewpoint, which we never have.  In the case of defining knowledge, it sets up a loop: we know whether a belief is knowledge by knowing whether the belief is true or false, which we know by knowing whether the belief about that belief is true or false, which we know by…  Hopefully you get the picture.

We could dispense with the truth requirement and simply define knowledge as justified belief, but that doesn’t seem right.  Prior to Copernicus, most natural philosophers were justified in saying they knew that the sun and planets orbit the earth.  Today we say that that belief was not knowledge.  Why?  Because it wasn’t true.  How do we know that?  Well, we have better information.  You could say that our current beliefs about the solar system are more justified than the beliefs of 15th century natural philosophers.

So maybe we could replace “justified true belief” with “currently justified belief” or perhaps “belief that is justified and not subsequently overturned with greater justification.”  Admittedly, these aren’t nearly as catchy as the original.  And they seem to imply that knowledge is a relative thing, which some people don’t like.

The last word, “belief”, is used in a few different ways in everyday language.  We often say “we believe” something when we really mean we hope it is true, or we assume it’s true.  We also often say we “believe in” something or someone when what we really mean is we have confidence in it or them.  In some ways, this usage is an admission that the proposition we’re discussing isn’t very justified, but we want to sell it anyway.

But in the case of “justified true belief”, I think we’re talking about a version that says our mental model of the proposition is that it is true.  In this version, if we believe it, if we really believe it, then don’t we think it’s knowledge, even if it isn’t?

Personally, I think the best way to look at this is as a spectrum.  All knowledge is belief, but not all belief is knowledge, and it isn’t a binary thing.  A belief can have varying levels of justification.  The more justified it is, the more it’s appropriate to call it knowledge.  But at any time, new observations might contradict it, and it would then retroactively cease to have ever been knowledge.

Someone could quibble here, making a distinction between ontology and epistemology, between what is reality, and what we can know about reality.  Ontologically, it could be argued that a particular belief is or isn’t knowledge regardless of whether we know it’s knowledge.  But we can only ever have theories about ontology, theories that are always subject to being overturned.  And a rigid adherence to a definition that requires omniscience to ever know whether a belief fits the bill, effectively makes it impossible for us to know whether that belief is knowledge.

Seeing the distinction between speculative belief and knowledge as a spectrum pragmatically steps around this issue.  But again, this means accepting that what we label as knowledge is, pragmatically, something relative to our current level of information.  In essence, it makes knowledge belief that we currently have good reason to feel confident about.

What do you think?  Is there a way to avoid the relative outlook?  Is there an objective threshold where we can authoritatively say a particular belief is knowledge?  Is there an alternative definition of knowledge that avoids these issues?

Are there things that are knowable but not measurable?

It’s a mantra for many scientists, not to mention many business managers, that if you can’t measure it, it’s not real.  On the other hand, I’ve been told by a lot of people, mostly non-scientists, and occasionally humanistic scholars including philosophers, that not everything knowable is measurable.

But what exactly is a measurement?  My intuitive understanding of the term fits, more or less, with this Wikipedia definition:

Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events.

There’s a sense that measurement is a precise thing, usually done with standard units, such as kilograms, meters, or currency denominations.  But Doug Hubbard argues in an interview with Julia Galef, as well as in his book How to Measure Anything, that measurement should be thought of as a reduction in uncertainty.  More precisely, he defines measurement as:

A quantitatively expressed reduction of uncertainty based on one or more observations.

Hubbard, Douglas W., How to Measure Anything: Finding the Value of Intangibles in Business (p. 31). Wiley, Kindle Edition.

The observation part is crucial.  Hubbard argues that, for anything we care about, there is a difference between what we’ll observe if that thing happens and what we’ll observe if it doesn’t.  Figure out this difference, define it carefully, and you have the basis to measure anything, at least anything knowable in this world.  The more the differences can be defined with observable intermediate stages, the more precise the measurement can be.

One caveat: just because it’s possible to measure anything knowable doesn’t mean it’s always practical, that it is cost-effective to do so.  Hubbard spends a lot of time in the early parts of his book discussing how to figure out the value of information, to decide if the cost of measuring something is worth it.

In many cases, precise measurement may not be practical, but not all measurements must be precise in order to be useful.  Precision is always a matter of degree since we never get 100% accurate measurements, not even in the most sophisticated scientific experiments.  There’s always a margin of error.

Measuring some things may only be practical in a very coarse grained manner, but if it reduces uncertainty, then it’s still a measurement.  If we have no idea what’s currently happening with something, then any observations which reduce that uncertainty count as measurements.  For example, if we have no idea what the life expectancy is in a certain locale, and we make observations which reduce the range to, say, 65-75 years, we may not have a very precise measurement, but we still have more than what we started with.
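
As a toy illustration of measurement as uncertainty reduction, here’s a minimal Bayesian sketch in Python.  All the numbers are hypothetical: I assume the locale’s mean lifespan is really 71 years with a spread of 12, and that we start out only knowing it’s somewhere between 40 and 90.  Each observed lifespan re-weights the candidate means, and the 90% credible interval narrows from the full fifty-year range toward a roughly ten-year one.

```python
import math, random

random.seed(1)
TRUE_MEAN, SPREAD = 71.0, 12.0              # hypothetical ground truth
grid = [40 + i * 0.1 for i in range(501)]   # candidate mean lifespans

def likelihood(obs, mean):
    """Relative probability of one observed lifespan given a candidate mean."""
    z = (obs - mean) / SPREAD
    return math.exp(-0.5 * z * z)

def interval_90(weights):
    """Central 90% credible interval over the grid of candidate means."""
    total, cum, lo, hi = sum(weights), 0.0, None, None
    for m, w in zip(grid, weights):
        cum += w / total
        if lo is None and cum >= 0.05:
            lo = m
        if hi is None and cum >= 0.95:
            hi = m
    return lo, hi

weights = [1.0] * len(grid)                 # flat prior: "no idea"
for n in range(1, 21):
    obs = random.gauss(TRUE_MEAN, SPREAD)   # one more observation
    weights = [w * likelihood(obs, m) for w, m in zip(weights, grid)]
    if n in (1, 5, 20):
        print(n, interval_90(weights))      # the interval shrinks with n
```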

Even in scenarios where only one observation is possible, the notorious sample of one, Hubbard points out that the probability of that one sample being representative of the population as a whole is 75%.  (This actually matches my intuitive sense of things, and will make me a little more confident next time I talk about extrapolating possible things about extraterrestrial life using only Earth life as a guide.)
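
If I understand him correctly, this is Hubbard’s “single sample majority rule”: draw one member at random from a population whose majority split is completely unknown, and there’s a 75% chance it belongs to the majority.  A quick Monte Carlo sketch (my framing, not Hubbard’s code) bears the number out.

```python
import random

random.seed(42)
trials, hits = 100_000, 0

for _ in range(trials):
    # A population whose split is completely unknown: the share
    # holding trait X is uniform between 0 and 1.
    share = random.random()
    sample_has_x = random.random() < share
    majority_has_x = share > 0.5
    # The lone sample is "representative" when it matches the majority.
    hits += (sample_has_x == majority_has_x)

print(hits / trials)  # ~0.75; the exact answer works out to 3/4
```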

So, is Hubbard right?  Is everything measurable?  Or are there knowable things that can’t be measured?

One example I’ve often heard over the years is love.  You can’t measure, supposedly, whether person A loves person B.  But using Hubbard’s guidelines, is this true?  If A does love B, wouldn’t we expect their behavior toward B to be significantly different than if they didn’t?  Wouldn’t we expect A to want to spend a lot of time with B, to do them favors, to take care of them, etc?  Wouldn’t that behavior enable us to reduce the uncertainty from 50/50 (completely unknown) to knowing the answer with, say, an 80% probability?

(When probabilities are mentioned in these types of discussions, there’s almost always somebody who says that the probabilities here can’t be scientifically ascertained.  This implies that probabilities are objective things.  But, while admitting that philosophies on this vary, Hubbard argues that probabilities are from the perspective of an observer.  Something that I might only be able to know with a 75% chance of being right, you may be able to know with 90% if you have access to more information than I do.)

Granted, it’s conceivable for A to love B without showing any external signs of it.  We can never know for sure what’s in A’s mind.  But remember that we’re talking about knowable things.  If A loves B and never gives any behavioral indication of it (including discussing it), is their love for B knowable by anybody but A?

Another example that’s often put forward is the value of experience for a typical job.  But if experience does add value, people with it should perform better than those without it in some observable manner.  If there are quantifiable measurements of how well someone is doing in a job (productivity, sales numbers, etc), the value of their experience should show up somewhere.

But what other examples might there be?  Are there ones that actually are impossible to find a conceivable measurement for?  Or are we only talking about measurements that are hopelessly impractical?  If so, does allowing for very imprecise measurement make it more approachable?

What do scientific theories actually tell us about the world?

One of the things that’s exciting about learning new things is that often a new understanding in one area sheds light on what might seem like a completely separate topic.  For me, information about how the brain works appears to have shed new light on a question in the philosophy of science, where there has long been a debate about the epistemic nature of scientific theories.

Spacetime lattice.  Image credit: mysid via Wikipedia

One camp holds that scientific theories reflect reality, at least to some level of approximation.  So when we talk about space being warped in general relativity, or the behavior of fermions and bosons, there is actually something “out there” that corresponds to those concepts.  There is something actually being warped, and there actually are tiny particles and/or waves that are being described in particle physics.  This camp is scientific realism.

The opposing camp believes that scientific theories are only frameworks we build to predict observations.  The stories we tell ourselves associated with those predictive frameworks may or may not correspond to any underlying reality.  All we can know is whether the theory successfully makes its predictions.  This camp is instrumentalism.

The vast majority of scientists are realists.  This makes sense when you consider the motivation needed to spend hours of your life in a lab doing experiments, or to endure the discomforts and hazards of field work.  It’s pretty hard for geologists to visit the Antarctic for samples, or for biologists to crawl through the mud for specimens, if they don’t see themselves in some way as being in pursuit of truth.

But the instrumentalists tend to point out all the successful scientific theories that could accurately predict observations, at least for a time, but were eventually shown to be wrong.

The prime example is Ptolemy’s ancient theory of the universe, a precise mathematical model of the Aristotelian view of geocentrism, the idea that the Earth is the center of the universe with everything revolving around it.  For centuries, Ptolemy’s model accurately predicted naked eye observations of the heavens.

But we know today that it is completely wrong.  As Copernicus pointed out in the 1500s, the Earth orbits around the sun.  Interestingly, many science historians have pointed out that Copernicus’ model actually wasn’t any better at making predictions than Ptolemy’s, at least until Galileo started making observations through a telescope.  Indeed, the first printing of Copernicus’ theory had a preface from someone (likely the theologian Andreas Osiander), probably hoping to head off controversy, saying the ideas presented might only be a predictive framework unrelated to actual reality.

For a long time, I was agnostic between realism and instrumentalism.  Emotionally, scientific realism is hard to shake.  Without it, science seems little more than an endeavor to lay the groundwork for technology, for practical applications of its findings.  Many instrumentalists are happy to see it in that light.  A lot of instrumentalists tend to be philosophers, theologians, and others who may be less than thrilled with the implications of scientific findings.

However I do think it’s important for scientists, and anyone assessing scientific theories, to be able to put on the instrumentalist cap from time to time, to conservatively assess which parts of a theory are actually predictive, and which may just be speculative baggage.

But here’s the thing.  Often what we’re really talking about is the difference between the raw mathematics of a theory, and its language description, including the metaphors and analogies we use to understand it.  The idea is that the mathematics might be right, but the rest wrong.

But the language part of a theory is a description of a mental understanding of what’s happening.  That understanding is a model we build in our brains, a neural firing pattern that may or may not be isomorphic with patterns in the world.  And as I’ve discussed in my consciousness posts, the model building mechanism evolved for an adaptive purpose: to make predictions.

In other words, the language description of a theory is itself a predictive model.  Its predictions may not be as precise as the mathematical portions, they may not be currently testable in the same manner as the mathematics (assuming those mathematics are actually testable; I’m looking at you, string theorists), but it will still make predictions.

Using the Ptolemy example above, the language model did make predictions.  It’s just that many of its predictions couldn’t be tested until the availability of telescopes.  Once they could, the Ptolemy model quickly fell from favor.  (At least it was quick on historical time scales.  It wasn’t quick enough to avoid making Galileo’s final years miserable.)  As many have pointed out, it wasn’t that Copernicus’ model made precisely right predictions, but it was far less wrong than Ptolemy’s.

When you think about it, any mental model we hold makes predictions.  The predictions might not be testable, currently or ever, but they’re still there.  Even religious or metaphysical beliefs make predictions, such as whether we’ll wake up in an afterlife after we die.  They’re just predictions we may never be able to test in this world.

This means that the distinction between scientific realism and instrumentalism is an artificial one.  It’s really just a distinction between aspects of a theory that can be tested, and the currently untestable aspects.  Often the divide is between the mathematical portions and the language portions, but the only real difference there is that the mathematical predictions are precise, whereas the language ones are less precise, to varying degrees.

Of course, I’m basing this insight on a scientific theory about how the brain works.  If that theory eventually ends up failing in its predictions, it might have implications for the epistemic point I’m making here, for the revision to our model of scientific knowledge I think is warranted.

And idealists might note that I’m also making the assumption that brains exist, that along with the rest of the external world they aren’t an illusion.  I have to concede that’s true, and even if this understanding makes accurate useful predictions, within idealism, it still wouldn’t be mapping to actual reality.  But given that I’m also assuming that all you other minds exist out there, it’s a stipulation I’m comfortable with.

As always, it might be that I’m missing something.  If so, I hope you’ll set me straight in the comments.

Libertarian free will is incoherent, and that’s good for responsibility

For a while, I’d considered myself done debating free will, having expressed everything about it I had to say.  However, with this Crash Course video, and in light of the discussion on physicality we had earlier this summer, I realized I do have some additional thoughts on it.

Just a quick reminder: I’m a compatibilist.  I’m convinced that the mind is a system that fully exists in this universe and operates according to the laws of physics.  However, I think responsibility remains a coherent and pragmatically useful social concept.

Even if the laws of physics are fully deterministic, the knowledge that we may be held responsible is one of the many causal influences on our choices, and holding people accountable for their decisions is productive for society.  In that sense, I regard free will as the ability of a competent person to act on their own desires, even if those desires ultimately have external causes.

For me, free will is something that exists at a sociological, psychological, and legal level.  Like democracy, the color white, or the rules of baseball, you’ll look in vain for it in physics.  At the physics layer, nothing exists except space, elementary particles, and their interactions, and even they may be patterns of even smaller phenomena.  Insisting that anything we can’t find at this layer doesn’t exist strikes me as unproductive; to be consistent, we’d have to dismiss the things I listed above, along with most of everyday reality.

Anyway, this post isn’t about compatibilism, but old-fashioned libertarian free will, that is, the type of free will that many people do think exists, the one that says that even if the laws of physics are mostly or fully deterministic, there is something about the human mind that makes its actions not fully determined by those physics.  It’s an assertion that each human mind is essentially its own uncaused cause.  But is this a coherent concept?

It seems to me that there are two broad approaches to libertarianism.  One is substance dualism, the idea that there are two types of substances in reality: the physical, and the mental.  With substance dualism, free will is possible because the actions of the mind are affected by its non-physical mental components.  Therefore our decisions can’t be fully accounted for by physical causes.  We must bring in mental causation to complete that accounting.

Another approach is to posit (per the Penrose / Hameroff crowd) that quantum indeterminacy is a significant factor in mental processing.  In many ways, this seems like a modern version of the Epicurean swerve proposition, that there is something inherently random about mental processing.  This makes it impossible for us to predict decisions with physics, although in this case it should be possible, in principle, to account for them.  (Whether quantum randomness is real depends on the interpretation of quantum mechanics you prefer.)

The problem I see with both of these approaches is that even if the mind is not part of the normal physical causal framework, it must operate according to some kind of principles.  These principles might be forever beyond our understanding, but minds have to operate in some manner.  That means, in the case of substance dualism, we haven’t so much avoided or escaped the causal framework as expanded it.  Everything is still determined, it’s just that the causal framework now includes mental substance.

You might argue that perhaps mental dynamics aren’t deterministic, that they have an inherent unpredictability, which would make it similar to the case of quantum consciousness.  In both of these scenarios, it means we’ve added a randomizing element to the causal framework.

But in all cases, the question we have to ask is, what is added to make an action praiseworthy or blameworthy if it wasn’t before?  If physics fully determined our choices before and that made our choices free of responsibility, how does adding mental causation add responsibility back?  Or how does adding randomness do it?

It seems like there is an appeal to ignorance here, to the unknowable.  Somehow, if we can’t predict a person’s actions, then they can be held accountable for them.  Of course, for all practical purposes, we can’t predict a person’s actions even if they are fully determined by physics, which seems to nullify any advantage from the unknowable aspects of the other scenarios.  Chaotic dynamics might make prediction of a physical mind’s actions just as unreachable as quantum mechanics or ghostly dualism.

Chaos theory is all about the fact that no measurement is infinitely accurate.  There is always a margin of error.  In a complex dynamic system, the margins of error quickly snowball, making the system unpredictable, even in principle.  This is why weather may never be 100% predictable: too many factors that can’t be measured with absolute precision.  And unless synapse strengths turn out to exist in discrete steps (a possibility), brains may be an excellent candidate for a complex dynamic system.
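
As a toy illustration of that snowballing, here’s the textbook logistic map (nothing brain-specific): two runs that start a hair’s breadth apart, with a “measurement error” of one part in ten billion, diverge until they’re no more alike than two random states.

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.2, 0.2 + 1e-10  # identical except for a tiny measurement error

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))
# The gap roughly doubles each step; by around step 35 it's of order 1,
# the full range of the system.
```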

In other words, introducing non-physical phenomena or new physics doesn’t evade the central issue: does it make sense to hold people accountable for their decisions or not?  It seems like the issue basically remains the same.

If the mind is strictly physical, people do talk about someday altering a convicted criminal’s brain so that they wouldn’t have immoral impulses, or at least would have the will to resist those impulses.  This is usually presented as a more humane approach than traditional punishment, and it might well turn out to be so.  The problem is that we’re still a long way from being able to do that, and even when it is possible, something tells me that people will regard having their mind, their core self, forcibly altered, to be just as nightmarish as many other punishments.

Personally, my own feeling on this is that the mind’s operations are substantially, if not fully, deterministic.  It’s possible that quantum indeterminacy has some role in the brain’s processing, but if so, it seems like it would be an extremely nuanced one.  The brain evolved for animals to make movement decisions, presumably to maximize access to food and reproduction while minimizing exposure to predators.  A rampantly indeterminate brain doesn’t seem like it would be very adaptive.  (One reader did point out to me how a slight indeterminism might be adaptive, although it seemed to me that the unpredictable causal factors in the environment would accomplish much the same thing.)

Myself, I certainly hope that the mind is mostly deterministic.  I know the kind of decisions I want to make, and the idea of some random element affecting those decisions is not one that I’d personally find comforting.  I want learning, practice, and deliberation, and the other things I’ve done to become the person I am, to causally determine my decisions.  It seems like randomness would actually undermine responsibility, rather than justify it.

Unless of course, I’m missing something?

By the way, Crash Course followed up the above video with one on compatibilism.  Here it is:

Don’t trust your emotions. They will betray you.

Image credit: Toddatkins via Wikipedia

I’ve mentioned before that my views have changed dramatically over the years.  But thinking about that the other day, it occurred to me that most of that change happened in a fairly narrow period.  At the beginning of 2004, I was still a nominal Catholic, often voted Republican, was suspicious of gays and other non-traditional groups, and generally considered the United States to be on the cutting edge of democratic and economic innovation.

By the end of 2005, I was a liberal progressive and committed Democrat, was painfully aware of the undemocratic aspects of my country, along with the fact that many other developed countries had been doing things with social safety nets for generations that were considered hopelessly experimental and academic in the US, and my religious beliefs were more or less history.

What happened?  Well, I read a book in the summer of 2004.  It was not a book on politics, economics, religion, or philosophy.  It was a self-help book on emotional intelligence.  Well, sort of.  The term “emotional intelligence” didn’t show up anywhere in it, but I had already read a couple of other books on that subject and been irritated by how theoretical, how disconnected they were from practical solutions.

But Sheenah Hankin’s ‘Complete Confidence: A Handbook’ ended up being pretty much what I was looking for.  I wasn’t necessarily looking for a confidence boost (although it certainly wouldn’t have hurt) but a practical guide on keeping my emotions reined in and making better decisions.  I’m not sure what drew my attention to this book, but it would eventually have a profound effect on my thinking.

Hankin’s chief thesis pushes back on the idea that there is something virtuous or enlightened about following our feelings, about giving precedence to our emotions.  As I’ve discussed before, we are emotional beings.  Reason is a tool of emotion.  But we have a wide variety of emotions, many of which are often in conflict with each other.  Notably, short term emotional needs are often in conflict with longer term emotional needs.

Reason is a tool to allow us to choose which emotional impulses we should indulge in.  That capability has become increasingly crucial in a world radically different from the one we evolved in.  Our emotions are often tuned for life on the African savanna, not for surviving workplace politics or succeeding in online discussions.

Of course, this is often much easier said than done.  Emotions are powerful things.  Often anger, fear, and sorrow overwhelm the small voice of reason pointing out what the better course of action may be.  Hankin provides a relatively simple framework to deal with this, which she refers to as “The Winning Hand of Comfort”, mainly because she counts off the steps on the fingers of one hand: calm, clarify, challenge, comfort, confidence.

Calm

This is the first and most crucial stage.  It’s also the most difficult, particularly when you’re emotionally upset.  The trick is to recognize when you’re in that state, realize that you need to calmly assess the situation, and take steps to do so.  Hankin talks about taking deep breaths, which of course is almost a cliche at this point.  But I think the key thing is to recognize that you’re not calm, and take steps to reach a calmer place.  It may involve separating yourself from the situation, which depending on that situation, could be difficult.  But in most cases (immediate life safety emergencies aside) it’s worth the effort.

Often, when you do attempt this, there may be people who don’t want to give you that opportunity.  They may have an agenda and are hoping to pressure you into a decision that benefits them.  Pushy salespeople come to mind, but as a manager I’ve often had this come from customers, colleagues, employees, and from many other directions.  In most cases (again, life and death emergencies aside), little or nothing is lost taking a break to calm down, and often there is much to be gained.

The good news is that this gets much easier over time and with practice, easier to recognize when you need to do it, and easier to actually do it.  I’ve reached the point where it’s more or less a reflex now.  If I’m upset, I reflexively try to calm down.  Of course, I’m human and it doesn’t always work, but compared to early 2004, I’m practically a zen master now.

Clarify

Once you’ve managed to calm down, the next step is to assess the situation.  Why are you upset?  It’s crucial to be honest with yourself at this stage.  If you’re getting upset over something minor and trivial, it’s still something that you need to understand.  We can’t control our immediate emotions, we can only control our reaction to them.  There’s nothing dishonorable about having emotions we may not be proud of, although there might be something dishonorable in giving in to them.

One thing I was surprised to discover when I started doing this was how often there wasn’t really anything major I was getting upset about.  Often it was because I was jacked on caffeine.  (I drank about ten cups of coffee a day back in 2004.)  Eventually this realization led me to drastically cut back on my caffeine intake.

Challenge

Once you’re in a calm state and understand why you were upset, the next step is to challenge that notion.  How upset should I really be over that guy cutting me off in traffic?  Did that person in the meeting really mean to insult me?  Is my significant other mad at me or just in a bad mood?  More often than not, the resulting conclusion will be that there really isn’t any reason to be upset.

Of course, sometimes the conclusion will go the other way.  The good news is, by the time you reach this point, you’ll be in a much better state of mind to deal with it rationally, rather than simply meeting what will often be an emotional display with another emotional display.

Comfort

Hankin talks about the importance of self coaching, of comforting ourselves.  I thought this was pretty strange when I first read it, but all of my reading about the mind and brain since then has convinced me that there is a lot of insight here.  We are not one unified whole, but rather a loose collection of impulses and desires.  Often, the more primal aspects of our minds can be soothed and comforted by the conscious rationalist aspect of ourselves, in a way that simply doesn’t seem to happen by just holding that comforting knowledge.

Hankin recommends having a ready phrase to use, such as “It’s no big deal,” or “Don’t overreact.”  I personally find coming up with a tailored phrase for the situation more helpful, though it does require more thought in the moment than having a ready phrase.

Confidence

This is the final phase in Hankin’s framework.  It’s really more of a desired result than a phase you work on.  I’m not sure how much personal confidence this sequence really provides, except perhaps the confidence of feeling like you’re making a more considered and careful decision.  Still, I have to say that I’m often in a much better state of mind at this stage than I was before the Calm stage.


Is this sequence the be-all and end-all?  Will it solve every personal issue?  Not at all.  But it did help me in my professional interactions in the summer of 2004.  And then it began to have more far-ranging effects.  It inspired me to calm down, clarify, and challenge my intuitive position on many matters, personal, professional, and philosophical, leading to the changes I mentioned above.  (As well as many other personal and professional changes I haven’t mentioned.)

It may be that Hankin’s book just happened to catch me at a particular point in my life where I was already subconsciously questioning many things, and using her framework allowed me to bring it up into my consciousness.

But it was a significant enough influence on me that it’s one of three books I often recommend when a discussion comes up about books on leadership or career management.  It’s the third, after Dale Carnegie’s classic ‘How to Win Friends and Influence People’, and Sun Tzu’s ‘The Art of War’.  But only Hankin’s book led to wholesale changes in my worldview.

What about you?  Are there any books that had effects on your thinking far beyond their initial scope?