The problem with philosophical thought experiments

James Wilson has an article up at Aeon, looking at the trolley problem and other ethical and philosophical thought experiments.  One of the things he discusses is the notion, held by many philosophers along with many fans of particular thought experiments, that they’re sort of like a scientific experiment.  It’s not that unusual for someone philosophically inclined to tell me that X is true, and cite a thought experiment as evidence.

Many of you already know that I have serious issues with this view of thought experiments.  I don’t think a philosophical thought experiment tells us anything about the external world or reality overall, and the notion that they do is fairly pernicious.  It gives people misplaced confidence in a notion based on nothing but a concurring opinion from the author of the thought experiment.

In many ways, thought experiments demonstrate the power of narrative.  If you want to sell people on an idea, tell them a story where the idea is true.  A thought experiment does this.  The most memorable ones can even have characters with names in them.  Would Mary’s Room or the Euthyphro Dilemma have the same punch if the key players weren’t named?

Now, some may make a comparison with all the Alice and Bob type descriptions used in physics.  But these narratives are almost always used in a pedagogical fashion, to get across a concept that has already been worked out mathematically, and may have empirical evidence backing it up.  In these cases, the narrative isn’t itself the main argument, it’s just a vehicle to get a concept across in a non-technical fashion.

There have been famous thought experiments in science that were used as arguments.  Schrödinger’s Cat comes to mind.  Its original use was as a reductio ad absurdum, similar to Einstein’s “spooky action at a distance” argument.  But reality turned out to be absurd.

Anyway, philosophical thought experiments typically only have their narrative.  Does that mean they’re useless?  I don’t think so.  But we should understand their limitations.  All they can do, really, is clarify people’s existing intuitions.  That can be pretty useful, fulfilling the role of what Daniel Dennett calls “intuition pumps.”  But that’s basically it.

So an ethical thought experiment may tell us about people’s ethical intuitions (although even here, check out Wilson’s piece for many of the issues), but it doesn’t fundamentally tell us what those ethics should be.  Likewise, the Chinese Room, Mary’s Room, and philosophical zombies don’t tell us anything about their subject matter.  They only flesh out people’s intuitions about those subjects.

Unless of course I’m missing something.

Pain is information, but what is information?

From an evolutionary standpoint, why does pain exist?  The first naive answer most people reach for is that pain exists to make us take action to prevent damage.  If we touch a hot stove, pain makes us pull our hand back.

But that’s not right.  When we touch a hot surface, nociceptors in our hand send signals to the spinal cord, which often responds with a reflexive reaction, such as a withdrawal reflex.  When the signal makes it to the brain, further automatic survival action patterns may be triggered, such as reflexively scrambling to get away.

But all of this can happen before, or independent of, the conscious experience of pain.  So why then do we have the experience itself?  It isn’t necessarily to motivate immediate action.  The reflexes and survival circuitry often take care of that.

I think the reason we feel pain is to motivate future action.  Feeling pain dramatically increases the probability that we’ll remember what happens when we touch a hot stove, that we’ll learn that it’s a bad move.  If the pain continues, it also signals a damaged state which needs to be taken into account in planning future moves.

So then pain is information, information communicated to the reasoning parts of the brain, serving as part of the motivation to learn or engage in certain types of planning.

People often dislike the conclusion that pain, or any other mental quality, is information.  It seems like it should be something more.  This dislike is often bundled with an overall notion that consciousness can’t be just information processing.  What’s needed, say people like John Searle and Christof Koch, are the brain’s causal powers.

But I think this reaction comes from an unproductive conception of information.

I’ve often resisted defining “information” here on the blog.  Like “energy”, it’s a very useful concept that is devilishly hard to define in a manner that addresses all the ways we use it.

Many people reach for the definition from Claude Shannon’s information theory: information is reduction in uncertainty.  That definition is powerful when the focus is on the transmission of information.  (Which of course, is what Shannon was interested in.)  But when I think about something like DNA, I wonder what uncertainty is being reduced for the enzymes that transcribe it into RNA?  Or the ones that replicate it during cell division?
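A toy calculation makes Shannon’s “reduction in uncertainty” concrete.  This is just a sketch with made-up probabilities, and the helper function is my own:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the expected surprise over a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before a measurement: eight equally likely outcomes, 3 bits of uncertainty.
prior = [1 / 8] * 8

# After the measurement narrows it to two equally likely outcomes, 1 bit remains.
posterior = [1 / 2, 1 / 2]

# On Shannon's definition, the information gained is the uncertainty removed.
gained = entropy(prior) - entropy(posterior)
print(gained)  # 2.0 bits
```

The definition works cleanly here because there’s a receiver with a well-defined probability distribution over outcomes, which is exactly what’s hard to identify for the DNA case above.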

Historically, when pressed for my own definition, I’ve offered something like: patterns that, due to their causal history, can have effects in a system.  While serviceable, it’s a bit awkward and not something I was ever thrilled with.

Not that long ago, in a conversation about information in the brain, philosopher Eric Schwitzgebel argued simply that information is causation.  The more I think about this statement, the more I like it.  It seems to effectively capture a lot of the above in a very simple statement.  It also seems to capture the way the word is used from physics, to Shannon information, to complex IT systems.

Information is causation.

This actually fits with something neuroscientists often say: that information is a difference that makes a difference.

This means an information processing system is effectively a system of concentrated causality, a causal nexus.  The brain in particular could be thought of as a system designed to concentrate causal forces for the benefit of the organism.  It also means that saying it’s the causal powers that matter rather than information, is a distinction without a difference.

The nice thing about this definition is, instead of saying pain is information, we can say that pain is causation.  Maybe that’s easier to swallow?

What do you think?  Is there something I’m missing that distinguishes pain from information?  Or information from causation?  If so, what?

The spectrum of science to fantasy

A question long argued in the philosophy of science is the demarcation problem.  How do we distinguish science from non-science?  Karl Popper famously proposed falsifiability as a criterion.  To be science, a theory must make predictions that could turn out to be wrong.  It must be falsifiable.  Theories that are amorphous or flexible enough to never encounter this test aren’t scientific.  This standard was famously sharp enough to cull Marxism and Freudian psychoanalysis from science.

Falsifiability has a lot going for it, but it also has a lot of issues.  For one, when we say “falsifiable”, do we mean falsifiable in practice today?  If so, then a lot of exploratory work done by scientists is non-science.  This would include Copernicus’ work on heliocentrism, Albert Einstein’s work on relativity, or Peter Higgs and colleagues’ work on the Higgs mechanism.  None of these theories were testable while these scientists were working on them.  In the case of Copernicus and Higgs, it was several decades before they became testable, that is, falsifiable.

Reportedly, Popper was actually more careful than this.  His proposed standard was falsifiable in principle.  So to be scientific, a theory must be testable in some foreseeable manner.

But even this can be problematic.  Auguste Comte infamously predicted in 1835 that we would never know the composition of the stars.  Speculation seemed pointless.  But within a few decades, stellar spectroscopy was developed and we did actually start to learn about stellar composition.  Likewise, when Einstein, Podolsky, and Rosen published their paper in 1935 on the EPR paradox, it was criticized by many as metaphysical navel-gazing, until John Stewart Bell figured out a way to test it 29 years later.

On top of this, as Sabine Hossenfelder recently pointed out, new theories can be about existing data.  If the new theory explains the existing data better than an established theory, that is with fewer assumptions and perhaps simpler constructs, then it may replace that older theory, without ever producing its own unique falsifiable predictions.

Even more problematic, many successful scientific theories, while having reliably testable predictions, also have predictions that can’t currently be tested.  For several decades, general relativity predicted gravitational waves, but they were only actually detected a few years ago. From what I understand, aspects of the role of pressure in general relativity remain untested.

And every scientific theory is essentially a metaphysical statement, a conclusion, or series of conclusions reached inductively.  Anyone who has studied epistemology is familiar with the problem of induction, most famously analogized by black swans.

All of which means that the dividing line isn’t sharp, but long and blurry and requires a lot of judgment.  I tend to think, rather than a sharp demarcation, it’s better to think in terms of a spectrum.

  1. Reliable models
  2. Rigorous exploration
  3. Loose speculation
  4. Falsified notions

1-Reliable models are the ones most clearly science, and represent the most successful theories, such as general relativity, quantum mechanics, natural selection, etc.  Often the predictions of these theories are reliable enough for technology to be built using them.

I considered calling 1 “settled science”, but that implies that successful theories are never overturned.  Most famously, Newton’s laws of gravity reigned for centuries, before Einstein overturned them with general relativity.  However, Newton’s laws remain reliable enough that NASA mission planners use them for most of their calculations.  Newton’s laws are no longer the most reliable model, but they remain very reliable for many purposes.  Which is to say, very successful theories, at least the mathematical components, are unlikely to ever be completely dismissed.

2-Rigorous exploration is disciplined theoretical speculation.  As noted above, scientists have to have space to work in this realm, since many of the theories now in 1 began here.  But what distinguishes rigorous exploration from the next category is that these theories are either extrapolations from theories in 1, or tight speculation involving one or a few assumptions, assumptions narrowly motivated to fit the data.

3-Loose speculation is where I think there start to be legitimate concerns about whether what’s happening is scientific.  In this category, there may be numerous assumptions, with each assumption an opportunity to be wrong.  Or the assumptions may be motivated by a desire for a certain outcome, not to explain the data, but perhaps to meet personal biases and intuitions.

I gave examples of 2 above.  For 3, based on what I’ve read, string theory arguably belongs in this category.  I think some other speculative notions, such as many exotic theories of consciousness, belong here too.

Many people would relegate all multiverse theories here, but I think they have to be looked at on a case by case basis, since some are either extrapolations of successful theories, or have minimal assumptions, and a strong case can be made for them being in 2.  (None are currently in 1.)

But I would include Tegmark’s mathematical universe hypothesis in 3, along with a lot of other philosophical metaphysical speculation.  This is often stuff that, strictly speaking, isn’t impossible, but has assumptions not motivated by the data, and is the hardest to imagine ever being testable.

4-Falsified notions, the last category, is, simply put, fantasy.  Generally for this stuff to be reality would require that one or more theories in 1 be wrong.  Astrology, paranormal claims, creationism, intelligent design, and a lot of similar notions go here.  If it’s presented as science then it’s fake science, pseudoscience.

Only 1 represents a reliable view of reality.  But as noted above, this is science, and nothing is immune from possibly being overturned by new data.

2 represents what I often refer to as candidates for reality.  Many will be right, others wrong, but we can’t currently know which is which.

3 might, in principle, turn out to be reality, but the probability is very low, low enough that the skeptic in me tends to just assume they’re wrong.

And 4 is the province of honest entertainers or dishonest charlatans.

It’s worth noting that even putting theories into these categories takes judgment, and many might sit on the boundaries.

But I think the main takeaway is that just because something isn’t in 1, doesn’t mean the only other option is 4.  It’s not just reliable science or fantasy.  There’s a space for exploratory science at least.  I’m actually pretty sure science as an overall enterprise wouldn’t work without that exploratory space.

Unless of course I’m missing something?  Am I being too permissive with these categories?  Not permissive enough?  Or just missing the ball entirely?

Maybe we wiped Neanderthals out after all

Or at least, that’s the conclusion of a paper which models the population changes and other factors involved.

  • New model to study hominin interactions in time-varying climate environment.
  • Neanderthals experienced rapid population decline due to competitive exclusion.
  • Interbreeding only minor contributor to Neanderthal extinction.
  • Abrupt Climate Change not major cause for demise of Neanderthals.

Of course, a model is only as good as the assumptions that go into it.  But if it holds, Neanderthals went extinct due to competition from anatomically modern humans, as we migrated into Europe.  The alternate hypotheses, assimilation from interbreeding or climate change, turn out to be minor factors.

A series of maps showing that as Homo sapiens populations increase, Neanderthal ones decrease
Homo sapiens and Neanderthal population changes, from the paper.

I’ve never thought the idea that climate change was responsible made much sense.  The Neanderthals survived for hundreds of thousands of years through a wide variety of climate change events before we showed up.

The assimilation hypothesis in recent years seemed compelling, and there are still reasons to think there was at least some assimilation, not least that all non-Africans have 1-4% Neanderthal DNA.  But if the model is correct, it wasn’t the primary reason why they disappeared as a population.

Hank Campbell points out that it’s still possible we transmitted some disease(s) to them, similar to what happened when Europeans first arrived in the Americas and smallpox devastated Native American populations.  But resource competition, again according to the model, seems more likely.

That said, I’ll be interested to see what public anthropologists such as John Hawks make of this.

The measurement problem, Copenhagen, pilot-wave, and many worlds

With quantum physics, we have a situation where a quantum object, such as a photon, electron, atom or similar scale entity, acts like a wave, spreading out in a superposition, until we look at it (by measuring it in some manner), then it behaves like a particle.  This is known as the measurement problem.

Now, some people try to get epistemic about this.  Maybe the wave isn’t real, but just represents epistemic probabilities.  The issue, shown in the double-slit experiment, is that the wave interferes with itself, something those who want to relegate the wave to completely non-real status have to contend with.

An important point is that if the wave is very spread out, say light years, and any part of it is measured, the whole thing collapses to a particle, apparently faster than light.  This appears to violate relativity (and hence causality), which was Albert Einstein’s chief beef with quantum physics, and the impetus behind the concept of entanglement explored in the EPR paradox.

Now, we have an equation, the Schrödinger equation, that models the evolution of the wave.  Its accuracy has been established in innumerable experiments.  But when we actually look at the wave, that is, attempt to take a measurement, we find a particle, which subsequently behaves like a particle.  The math appears to stop working, except as a probabilistic prediction of where we’ll find the particle.  This is often called the wave function collapse.
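For reference, the equation in question is the time-dependent Schrödinger equation, which evolves the wave function Ψ smoothly and deterministically; the Born rule is the probabilistic prediction we fall back on at measurement:

```latex
i\hbar \frac{\partial}{\partial t} \Psi(x,t) = \hat{H}\,\Psi(x,t)
\qquad\qquad
P(x) = |\Psi(x,t)|^2
```

The interpretations that follow differ over what, if anything, happens to Ψ between the smooth evolution on the left and the probabilities on the right.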

The Copenhagen interpretation handles this by saying that quantum physics only applies to small isolated systems.  As soon as something macroscopic is involved, such as a measuring device, the rules change.  Kept to a minimal instrumental version, I think this interpretation is underrated.  Bare bones Copenhagen doesn’t attempt to explain reality, only describe our interactions with it.  It could be seen as an admission that the metaphors of our normal scale existence are simply inadequate for the quantum realm.

Of course, people can’t resist going further.  Copenhagen is actually more a family of interpretations, some of which involve speculation about consciousness causing the collapse.  Reality doesn’t congeal until we actually look at it.  I think the challenges of quantum computing rule this out, where engineers have to go to extreme efforts to preserve the wave to get the benefits of that type of computation.  They’d probably be very happy if all they had to do was prevent any conscious mind from knowing the state of the system.  But it’s an idea many people delight in, so it persists.

The pilot-wave interpretation, often referred to as De Broglie-Bohm theory, posits that there is both a particle and a wave the entire time.  The wave guides the particle.  When we look / measure, the wave becomes entangled with the environment, it loses its coherence, and so the particle is now free to behave like a particle.  This idea actually predates Copenhagen, although it wasn’t refined until the 1950s.

Pilot-wave initially looks promising.  We preserve determinism.  But we don’t preserve locality.  Looking at the wave, anywhere in its extent, still causes the whole thing to decohere and free up the particle, even if the particle is light years away.  So, Einstein wasn’t happy with this solution, since relativity appears to still be threatened.

Hugh Everett III looked at the above situation and asked, what if the math doesn’t in fact stop working when we look?  Our observations seem to indicate that it does.  But that’s failing to account for the fact that macroscopic systems, including us, are collections of quantum objects.

As it turns out, the Schrödinger equation does predict what will happen.  The wave will become entangled with the waves of the quantum objects comprising the measuring device.  It will become entangled with the environment, just as pilot-wave predicted, but unlike pilot-wave, Everett dispenses with the particle.

Crucially, rather than collapsing, the superposition of the wave will spread, just as it seems to do before we look.  Why does it appear to collapse?  Because it has spread to us.  We have gone into superposition.  Every branch of that superposition will now continue to spread out into the universe.  But the branches are all decohered from each other, each no longer able to interfere with the other.  They are essentially causally isolated.

So each of those branches could be romantically described as being in its own separate “world”, resulting in many worlds, the many worlds interpretation.

The appearance of the collapse, under the many worlds interpretation, is because we are now on one branch of the wave function, observing the small fragment of the original wave that became entangled with this branch of the environment.  Under this interpretation, there is a different version of us in every other branch, each seeing a different part of the wave, which we now refer to as a “particle”.

Which of these interpretations is true?  Copenhagen, pilot-wave, many worlds, or some other interpretation?  They all make the same observable predictions.  (The ones that don’t were discarded long ago.)  It’s the predictions they make beyond our ability to observe that distinguish them from each other.

We could ask which has the fewest number of assumptions.  Most people (often grudgingly) will admit that many worlds has the most elegant math.  (Evoking comparisons with Copernicus’ heliocentric model in relation to Ptolemy’s ancient geocentric one.)  And it does preserve realism, locality and determinism, just not one unique reality.  Whether that amounts to fewer assumptions than the others is a matter of intense debate.

Each interpretation has a cost, often downplayed by the proponents of that interpretation, but they’re always there.  Quantum physics forces us to give up something: realism, locality, determinism, one unique reality, or some other cherished notion.  As things stand right now, you can choose the interpretation that least threatens your intuitions, but you can’t pretend there isn’t a cost.

Unless of course I’m missing something.

Building a consciousness-detector

Joel Frohlich has an interesting article up at Aeon on the possibility of detecting consciousness.  He begins with striking neurological case studies, such as the one of a woman born without a cerebellum, yet fully conscious, indicating that the cerebellum is not necessary for consciousness.

He works his way to the sobering cases of consciousness detected in patients previously diagnosed as vegetative, accomplished by scanning their brain while asking them to imagine specific scenarios.  He also notes that, alarmingly, consciousness is sometimes found in places no one wants it, such as anesthetized patients.

All of which highlights the clinical need to find a way to detect consciousness, a way independent of behavior.

Frohlich then discusses a couple of theories of consciousness.  Unfortunately one of them is Penrose and Hameroff’s quantum consciousness microtubule theory.  But at least he dismisses it, citing its inability to explain why the microtubules in the cerebellum don’t make it conscious.  It seems like a bigger problem is explaining why the microtubules in random blood cells don’t make my blood conscious.

Anyway, his preferred theory is integrated information theory (IIT).  Most of you know I’m not a fan of IIT.  I think it identifies important attributes of consciousness (integration, differentiation, causal effects, etc), but not ones that are by themselves sufficient.  It matters what is being integrated and differentiated, and why.  The theory’s narrow focus on these factors, as Scott Aaronson pointed out, leads it to claim consciousness in arbitrary inert systems that very few people see as conscious.

That said, Frohlich does an excellent job explaining IIT, far better than many of its chief proponents.  His explanation reminds me that while I don’t think IIT is the full answer, it could provide insights into detecting whether a particular brain is conscious.

Frohlich discusses how IIT inspired Marcello Massimini to construct his perturbational complexity index, an index used to assess the activity in the brain after it is stimulated using transcranial magnetic stimulation (TMS), essentially sending an electromagnetic pulse through the skull into the brain.  A TMS pulse that leads to the right kind of widespread processing throughout the brain is associated with conscious states.  Stimulation that only leads to local activity, or the wrong kind of activity, isn’t.

IIT advocates often cite the success of this technique as evidence, but from what I’ve read about it, it’s also compatible with the other global theories of consciousness such as global workspace or higher order thought.  It does seem like a challenge for local theories, those that see activity in isolated sensory regions as conscious.

Finally, Frohlich seems less ideological than some IIT advocates, more open to things like AI consciousness, but notes that detecting it in these systems is yet another need for a reliable detector.  I fear detecting it in alternate types of systems represents a whole different challenge, one I doubt IIT will help with.

But maybe I’m missing something?

SMBC: Social Science

Occasionally Saturday Morning Breakfast Cereal captures an important insight, in this case, people’s attitudes toward the social sciences.

Click through for the red button caption:

My attitude toward the social sciences is that they are quite capable of being scientific.  They’re not always, but then even the “hard” sciences have their lapses.

On the one hand, what social scientists are studying exists at a higher abstraction layer than what the natural sciences study.  It reminds me of this old xkcd:

Six people are shown representing six scientific fields. ... Psychologist: Sociology is just applied Psychology. Biologist: Psychology is just applied Biology, down to Physics which says good to be on top. Math is shown far to the left not even noticing the others.
Click through for source:

Each layer adds additional complexity, and uncertainty.  Physicists can often achieve five-sigma certainty in their results.  Biologists can’t.  And it becomes hopeless in psychological or sociological studies.  The social sciences currently aim for p-values of .05 or lower, meaning there’s at most a 5% chance of seeing a result that extreme if the effect weren’t real, a far weaker standard than what can be achieved in the natural sciences, but a simple fact about the practical epistemic limitations they face.  (Crowdsourcing services like Mechanical Turk can help, but not enough to bring the results to the levels enjoyed by physics.)
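To make the gap between these thresholds concrete, here’s a quick sketch converting sigma levels to two-sided p-values under a standard normal distribution (the helper function name is my own):

```python
import math

def two_sided_p(sigma):
    """Two-sided p-value for a result `sigma` standard deviations from the null."""
    return math.erfc(sigma / math.sqrt(2))

# The usual social-science threshold, p = .05, sits at roughly two sigma...
print(two_sided_p(1.96))  # ~0.05

# ...while a five-sigma result corresponds to a p-value of
# roughly 6 in 10 million.
print(two_sided_p(5.0))   # ~5.7e-7
```

The difference isn’t a matter of rigor, but of how much noise each field’s subject matter allows them to average away.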

But the biggest issue is that, unlike the natural sciences, social science isn’t really studying timeless issues, but human behavior, and human behavior can change, particularly after humans have heard the results of social or psychological studies and taken that information into account.  To a large degree, this makes studies of human behavior a moving target.

Alex Rosenberg, philosopher and self-labeled nihilist, after considering this information, concluded that the social sciences are hopeless.  He thinks they’re at best entertainment.  (A category he also relegates the humanities into, including presumably his own field of philosophy.)

Does that mean the endeavor is hopeless?  I don’t think so.  I think the issue comes from a binary view of knowledge: either we know something or we don’t.  But that’s an unproductive view of knowledge.  It leads many to conclude that true knowledge is impossible.

A more productive view is that knowledge is any reduction in uncertainty.  With that conception of knowledge, observations that reduce our uncertainty about things, from complete unknown to broad probabilities, are better than no knowledge at all.  Of course, less uncertainty is better than more uncertainty, but any reduction of uncertainty, that is, any increase in certitude, is worth the effort.

This all seems like fairly common sense, so why do people resist it?  I think the last caption above captures it.  The social sciences are studies about us, and the results often clash with people’s deeply felt intuitions, intuitions that are behind many traditional views of humanity.  The issues above on lack of certainty and timelessness simply provide an excuse for people to ignore the data and assert their own intuitions.  When we get into fields like economics, political motivations, conscious or unconscious, factor heavily into it.

My favorite field, neuroscience, sits on the boundary between biology and psychology.  I suppose we shouldn’t be surprised that its results are often tainted in the same way that psychological studies are.  Studies about how our mind works are the ones most intimately about us.

The issues with biopsychism

Recently, there was a debate on Twitter between neuroscientists Hakwan Lau and Victor Lamme, both of whose work I’ve highlighted here before.  Lau is a proponent of higher order theories of consciousness, and Lamme of local recurrent processing theory.

The debate began when Lau made a statement about panpsychism, the idea that everything is conscious including animals, plants, rocks, and protons.  Lau argued that while it appears to be gaining support among philosophers, it isn’t really taken seriously by most scientists.  Lamme challenged him on this, and it led to a couple of surveys.  (Both of which I participated in, as a non-scientist.)

I would just note that there are prominent scientists who lean toward panpsychism.  Christof Koch is an example, and his preferred theory, integrated information theory (IIT), seems oriented toward panpsychism.  Although not all IIT proponents are comfortable with the p-label.

Anyway, in the ensuing discussion, Lamme revealed that he sees all life as conscious, and he coined a term for his view: biopsychism.  (Although it turns out the term already existed.)

Lamme’s version, which I’ll call universal biopsychism, that all life is conscious, including plants and unicellular organisms, is far less encompassing than panpsychism, but is still a very liberal version of consciousness.  It’s caused me to slightly amend my hierarchy of consciousness, adding an additional layer to recognize the distinction here.

  1. Matter: a system that is part of the environment, is affected by it, and affects it.  Panpsychism.
  2. Reflexes and fixed action patterns: automatic reactions to stimuli.  If we stipulate that these must be biologically adaptive, then this layer is equivalent to universal biopsychism.
  3. Perception: models of the environment built from distance senses, increasing the scope of what the reflexes are reacting to.
  4. Volition: selection of which reflexes to allow or inhibit based on learned predictions.
  5. Deliberative imagination: sensory-action scenarios, episodic memory, to enhance 4.
  6. Introspection: deep recursive metacognition enabling symbolic thought.

As I’ve noted before, there’s no real fact of the matter on when consciousness begins in these layers.  Each layer has its proponents.  My own intuition is that we need at least 4 for sentience.  Human level experience requires 6.  So universal biopsychism doesn’t really seem that plausible to me.

But in a blog post explaining why he isn’t a biopsychist (most of which I agree with), Lau actually notes that there are weaker forms of biopsychism, ones that posit, not that all life is conscious, but that only life can be conscious, that consciousness is an inherently biological phenomenon.

I would say that this view is far more common among scientists, particularly biologists.  It’s the view of people like Todd Feinberg and Jon Mallatt, whose excellent book The Ancient Origins of Consciousness I often use as a reference in discussions on the evolution of consciousness.

One common argument in favor of this limited biopsychism is that currently the only systems we have any evidence for consciousness in are biological ones.  And that’s true.  Although panpsychists like Philip Goff would argue that, strictly speaking, we don’t even have evidence for it there, except for our own personal inner experience.

But I think that comes from a view of consciousness as something separate and distinct from all the functionality associated with our own inner experience.  Once we accept our experience and that functionality as different aspects of the same thing, we see consciousness all over the place in the animal kingdom, albeit to radically varying degrees.  And once we’re talking about functionality, then having it exist in a technological system seems more plausible.

Another argument is that maybe consciousness is different, that maybe it’s crucially dependent on its biological substrate.  My issue with this argument is that it usually stops there and doesn’t identify what specifically about that substrate makes it essential.

Now, maybe the information processing that takes place in a nervous system is so close to the thermodynamic and information-theoretic boundaries that nothing but that kind of system could do similar processing.  Possibly.  But it hasn’t proven to be the case so far.  Computers are able to do all kinds of things today that people weren’t sure they’d ever be able to do, such as win at chess or Go, recognize faces, translate languages, etc.

Still, it is plausible that substrate-dependent efficiency is an issue.  Generating the same information processing in a traditional electronic system may never be as efficient, in terms of power usage or compactness, as the organic variety.  But this wouldn’t represent a hard boundary, just an engineering difficulty, for which I would suspect there would be numerous viable strategies, some of which are already being explored with neuromorphic hardware.

But I think the best argument for limited biopsychism is to define consciousness in such a way that it is inherently an optimization of what living systems do.  Antonio Damasio’s views on consciousness being about optimizing homeostasis resonate here.  That’s what the stipulation I put in layer 2 above was about.  If we require that the primal impulses and desires match those of a living system, then only living systems are conscious.

Although even here, it seems possible to construct a technological system and calibrate its impulses to match a living one.  I can particularly see this as a possibility while we’re trying to work out general intelligence.  This would be where all the ethical considerations would kick in, not to mention the possible dangers of creating an alternate machine species.

However, while I don’t doubt people will do that experimentally, it doesn’t seem like it would be a very useful commercial product, so I wouldn’t expect a bunch of them to be around.  Having systems whose desires are calibrated to what we want from them seems far more productive (and safer) than systems that have to be constrained and coerced into doing it, essentially slaves who might revolt.

So, I’m not a biopsychist, either in its universal or limited form, although I can see some forms of the limited variety being more plausible.

What do you think of biopsychism?  Are there reasons to favor biopsychism (in either form) that I’m overlooking?  Or other issues with it that I’ve overlooked?

Site issues, and a question for mobile users

A few weeks ago, I started having a problem with comments showing up on the blog.  I consulted with WordPress support, and was informed that it was a bug with my old trusty Twenty Ten theme, which I was also informed is no longer supported.  To fix the issue, I’d have to change to a different theme.

What followed was a torturous process of looking at endless themes.  I find theme shopping painful.  Much like car or house shopping, it’s something I’d rather not do, because everything I look at has flaws, new flaws that I didn’t have with my old solution.  Of course, the old one had flaws too, but I had become so used to them that I didn’t notice anymore.

That said, given the popularity of the old Twenty Ten theme, I’m puzzled that WordPress doesn’t have newer ones that are very similar to it.  Many of the ones often touted as similar really aren’t.

Anyway, the first theme I tried was beautiful and very modern.  I activated it, let it sit for a day or so, then decided I hated it.  It buried too much information.  So I changed to another one, which looked much more promising.

But once it was installed, I started noticing a lot of nits.  No problem.  I know HTML and CSS, so I made some minor modifications.  But the nits kept growing, and so did the CSS customization list, until I reached a point where I had over a hundred lines of CSS code, and the little window they give us to edit it was having problems scrolling.

Yesterday, the list of issues seemed like it was going to keep growing, and some of the new ones were deep in theme functionality.  On top of all that, I was struggling to find a color scheme within the theme that didn’t give me eyestrain.  It finally became clear that it wasn’t going to work.  So again, more wretched theme shopping.  Until I found one that I could tolerate, and activated it before going to bed last night.

To be clear, there are things with this theme I don’t like.  The sidebar is more scrunched up than I care for and some of the links are funky.  But on balance, it seems like it will stay out of my way, and is something I’ll be able to use without heavy customization.  You may see the colors change around in the next few days or weeks.  Or maybe not.  At this point, I’m actually pretty sick of tinkering with it.

But there is an issue with this theme too, one I didn’t notice until I had activated it.  I have threaded comments turned on, allowing them to nest five levels deep.  I noticed that when viewed on my phone, the deepest comments are unreadable, unless I turn it horizontal.  I can turn on the default WP mobile theme, but I personally find the font on it too small unless I turn it horizontal anyway.

But I rarely use my own site from mobile.  My question is, for those of you who do, which would you prefer: the standard WP mobile theme, or this theme’s mobile UI?  You can scroll through this thread to see what I’m talking about.

Predictions and retrodictions

I’ve often noted here the importance of predictions, both in terms of our primal understanding of reality, such as how to get to the refrigerator in your house, or in terms of scientific theories.  In truth, every understanding of reality involves predictions.  Arguably a fundamental aspect of consciousness is prediction.

Of course, not every notion involves testable predictions.  That’s often said to be what separates science from metaphysics.  For example, various religions argue that we’ll have an afterlife.  These are predictions, just not ones that we’ll ever be able to test.  (Short of dying.)

But the border between science and metaphysics (or other forms of philosophy) is far blurrier than any simple rule of thumb can capture.  Every scientific theory has a metaphysical component.  (See the problem of induction.)  And today’s metaphysics may be tomorrow’s science.  Theories are often a complex mix of testable and untestable assertions, with the untestable sometimes being ferociously controversial.

Anyway, Sabine Hossenfelder recently did a post arguing that scientific predictions are overrated.  After giving some (somewhat contrived) examples where meaningless predictions were made, and a discussion about unnecessary assumptions in poor theories, she makes this point:

To decide whether a scientific theory is any good what matters is only its explanatory power. Explanatory power measures how much data you can fit from which number of assumptions. The fewer assumptions you make and the more data you fit, the higher the explanatory power, and the better the theory.

I think this is definitely true.  But how do we know whether a theory has “explanatory power”, that it “fits the data”?  We need to look at the theory’s mathematics or rules and see what they say about that data.  One way to describe what we’re looking for is… accurate predictions of the data.

Hossenfelder is using the word “prediction” to refer only to assertions about the future, or about other things nobody knows yet.  But within the context of the philosophy of science, that’s a narrow view of the word.  Most of the time, when people talk about scientific predictions, they’re not just talking about predictions of what has yet to be observed, but also predictions of existing observations.

What Hossenfelder is actually saying is that we shouldn’t require a theory to make that narrow kind of prediction.  It can instead make predictions about existing data.  If we want to be pedantic about it, we can call these assertions about existing data retrodictions.

(We could also use “postdiction” but that word has a negative connotation in skeptical literature, referring to mystics falsely claiming to have predicted an event before it happens.)

Indeed, for us to have any trust in a theory’s predictions about the unknown, it first must have a solid track record of making accurate retrodictions, of fitting the existing data.  And to Hossenfelder’s point, even if all a theory makes are retrodictions, it still might be providing substantial insight.

There is a danger here of just-so stories: theories that explain the data but only give an illusion of providing insight.  Hossenfelder’s point about measuring the ratio of assumptions to explanation, essentially valuing a theory’s parsimony, offers some protection against that.  But as she admits, it’s more complicated than that.
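The parsimony idea can be made concrete with a toy calculation.  This is my own illustration, not anything from Hossenfelder's post: a simplified Akaike-style score that rewards fitting the data but charges a penalty for each assumption (parameter), so a two-assumption law beats a "theory" that merely memorizes every observation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "existing observations": a noisy straight line.
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

def score(y_true, y_pred, n_params, sigma=1.0):
    # A simplified AIC for known Gaussian noise: RSS/sigma^2 + 2k.
    # Lower is better; the 2k term charges each assumption "rent".
    rss = np.sum((y_true - y_pred) ** 2)
    return rss / sigma**2 + 2 * n_params

# Theory A: a two-assumption law (slope and intercept) that fits imperfectly.
law = 2.0 * x + 1.0

# Theory B: a just-so story that "explains" all 50 points by memorizing them.
memorized = y.copy()

score_law = score(y, law, n_params=2)
score_memo = score(y, memorized, n_params=50)
print(score_law < score_memo)  # True: the law wins despite fitting "worse"
```

The memorizing theory fits the data perfectly, but its fifty assumptions cost more than the fit is worth, which is exactly the sense in which parsimony guards against just-so stories.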

For example, naively using her criteria, the interpretation of quantum mechanics we should all adopt is Everett’s many-worlds interpretation.  It makes fewer assumptions than any other interpretation.  (It’s the consequences, not the assumptions, that people object to.)  But the fact that none of the interpretations currently make unique and testable predictions (or retrodictions) is what should prevent our accepting any particular one as the right one.

So, in general, I think Hossenfelder is right.  I just wish she’d found another way to articulate it.  Because now anytime someone talks about the need for testable predictions, using the language most commonly used to describe both predictions and retrodictions, people are going to cite her post to argue that no such thing is needed.