The specificity problem

Henry Shevlin has an interesting paper from 2021 in Mind & Language that just went open access: Non-human consciousness and the specificity problem: A modest theoretical proposal. Shevlin discusses the problem of applying cognitive theories of consciousness, developed within the context of human psychology, to non-human systems, such as animals or artificial systems.

For example, under global workspace theory (GWT), how much of what constitutes a human workspace is necessary in a system that has global sharing dynamics? How simple can those dynamics be and still meet GWT’s definition of consciousness? Are there particular human modules that are necessary, such as episodic memory coordination, self-awareness, or planning?
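To see why the question bites, consider a minimal sketch of “global sharing dynamics” (the module names and salience rule here are made up purely for illustration; they’re not from Shevlin’s paper or any particular GWT model). Structurally it has specialist processes competing for a workspace whose winning content is broadcast back to all of them, yet it’s obviously trivial:

```python
# A deliberately trivial "global workspace": specialist modules compete to
# post content, and the winning content is broadcast back to every module.
class Module:
    def __init__(self, name):
        self.name = name
        self.last_broadcast = None

    def propose(self, signal):
        # Propose content with a salience score (higher wins access).
        salience = 0.0 if signal is None else abs(signal)
        return {"source": self.name, "content": signal, "salience": salience}

    def receive(self, broadcast):
        # Every module sees whatever won the competition.
        self.last_broadcast = broadcast


class Workspace:
    def __init__(self, modules):
        self.modules = modules

    def cycle(self, inputs):
        proposals = [m.propose(inputs.get(m.name)) for m in self.modules]
        winner = max(proposals, key=lambda p: p["salience"])  # competition
        for m in self.modules:
            m.receive(winner)                                  # global broadcast
        return winner


ws = Workspace([Module("vision"), Module("audition"), Module("touch")])
print(ws.cycle({"vision": 0.2, "audition": 0.9}))  # audition wins and is broadcast
```

How much more than this does GWT actually require before consciousness is present? That’s the specificity problem in miniature.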

Or in higher order theory (HOT), how sophisticated do the meta-representations need to be? Is being aware of which sensory inputs result from internal versus external changes sufficient? What about the capability to assess confidence in one’s beliefs? Or is a more comprehensive theory of mind type capability required? (And what constitutes that?)

Shevlin points out that these issues apply to any cognitive theory of consciousness. (He lists GWT, HOT, working memory theories, Prinz’s attended intermediate representations theory, and Graziano’s attention schema theory as examples.) He calls this the specificity problem.

The specificity problem seems similar to an issue other theorists have raised: the small network problem. For just about any information processing theory of consciousness, it’s possible to implement the structure of that theory in a trivially small network, perhaps down to a few neurons. The idea is that this has panpsychist-like implications for these theories.

Some theories, like integrated information theory (IIT), embrace these implications. Most of the others don’t. The argument is that this problem shows them to be incomplete. Proponents of these theories typically respond that their models are more context dependent than that. But it’s exactly the boundaries of that context that are at issue for both the small network and specificity problems.

Anyway, Shevlin identifies four common solutions to the specificity problem: conservatism, liberalism, incrementalism, and rejectionism.

Conservatism requires that most, if not all, of the characteristics the theory ascribes to humans be present. The result is that only humans, and maybe a few other species, count as conscious. Shevlin notes that few people today are enthusiastic about this solution.

Liberalism is pretty much the opposite. It accepts as conscious any system that has dynamics similar to the theory in its most general sense. Many are enthusiastic about this approach for living systems (biopsychism), but less so for engineered ones. Shevlin points out that a network of computers running the network time protocol (NTP) implements an extremely simple global sharing system, but most people dismiss the idea of that network being conscious. The biggest criticism of this solution is that it admits too many systems into club-consciousness.

Incrementalism tries to steer a path between conservatism and liberalism. It views consciousness as something which can be present in degrees. So while animals don’t have full human consciousness, they are conscious to an extent, depending on the species. And artificial systems can have aspects of consciousness without the full animal or human package. But this solution violates many people’s binary intuition about consciousness being either present or absent.

Finally, rejectionism just rejects the question of animal consciousness as ill-posed, with no fact of the matter answer. This view is advocated by Peter Carruthers, whose position I discussed a while back. Shevlin rejects this solution, but admits he doesn’t have an adequate response to it in the paper.

My own view here is that incrementalism and rejectionism are the same solution, just with different definitions. Incrementalism has a more liberal terminology, and rejectionism a more conservative one. Given that Carruthers moved to rejectionism from the conservative camp, it’s not surprising he prefers that terminology.

These solutions all fall under what Shevlin calls the theory-heavy approach. He also discusses the theory-light approach, advocated by Jonathan Birch, which takes reliable markers of consciousness in humans (such as trace conditioning, rapid reversal learning, and multisensory learning) and looks for them in other creatures. Using this approach, Birch finds some evidence for consciousness in bees.

Shevlin sees three issues with the theory-light approach. One is that just because something is a marker for consciousness in humans doesn’t mean it’s one in a radically different species. Second is that failure to find such markers doesn’t necessarily indicate lack of consciousness. There’s a danger of false negatives. And finally, there is a danger of a false positive in artificial systems “gerrymandered” to show the relevant markers without the necessary underlying framework.

Shevlin’s solution is a modest theoretical approach, essentially a hybrid of theory-heavy and theory-light. It involves using the theory-light markers to identify candidates for conscious systems, but then scrutinizing these systems in a theory-heavy manner.

He uses the example of a lancelet, a pretty simple animal most people do not consider conscious. He notes that if it did show some markers for consciousness, our next approach would be to study its nervous system more carefully to see if any of the structures and dynamics predicted by theories of consciousness are present.

He also cites Searle’s Chinese Room as an example of a system that might show markers, which we could then examine more carefully to see if its dynamics actually matched a particular theory. I’m sure if this paper had been written more recently, he would have used Google’s LaMDA system as another example.

The paper covers some interesting ground and I recommend it for anyone interested. My chief criticism of Birch’s theory-light proposal was the binary thinking, that consciousness is something either fully present or completely absent. Shevlin’s recognition of the incremental solution largely addresses that concern.

My only real criticism of Shevlin’s approach is that it seems to assume consciousness can’t be implemented in an architecture different from the human one. Given that evolution often solves the same problem in different ways, this assumption seems unwarranted. The only way I know of around this possibility is to study the system’s capabilities, and decide based on those whether it’s conscious. (Of course, the question is: which capabilities matter?)

The specificity and small network problems are interesting. But I wonder if they’re really better seen as converging predictions of multiple theories, predictions we shouldn’t be hasty in dismissing as “problems”. I don’t really see the implications as panpsychist; it’s hard to see workspaces, metacognition, making predictions, etc., as meaningfully present in protons, rocks, or stars. But it does appear to indicate that the boundary between conscious and non-conscious systems is much blurrier, much more a matter of interpretation, than most people want to admit.

Unless of course I’m missing something?


15 thoughts on “The specificity problem”

  1. Thanks for that, Mike. This issue has bothered me for a long time now — largely because of reading too much SF. 🙂 Stanislaw Lem in particular is brilliant in undercutting any notion of consciousness being easily recognisable and/or human-like to whatever degree. I’ll need to find time to read that paper of Shevlin’s.

BTW, I guess he does not realise that the “modest proposal” in the title bears a distinctly satirical historical vibe.


1. Thanks Mike. I think the difficulty is that the only consciousness we ever have access to from the inside is our own. That makes it an inherently anthropocentric concept, whether we want to admit it or not. When we ask if a particular system is conscious, I think what we’re really asking is does it process information like us. The “like us” view makes the interpretive nature obvious. But I’m definitely in the minority with that opinion.

      Yeah, not sure if he’s aware of it. It definitely has a sort of ironic feel to it, at least until you get into the paper and realize what the title refers to.


2. I find your posts to be rather like quality poetry. One must take the time to read and interpret each word and its relation to the words around it in order to fully grasp the intent. That’s why I rarely read quality poetry: it takes a concerted effort to isolate my mind in a distraction free zone while I consume the thoughts presented.
    It’s so much easier to Fool-Scroll Instagram-of-coke…

    Is it even possible to avoid an anthropocentric bias, ever? I have a reader/friend who complains about my infrequent stories of aliens, or at least alien minds. “What aliens?” I ask him. Every story is about humanity, written by a human, in the context of human emotion with no possible way to create a truly alien scenario.


    1. Quality poetry? Thanks…I think. I’m not trying to be cognitively demanding or opaque with these posts, but summarizing a paper like this in a manageable blog post does involve assuming the reader has some prior knowledge, or at least hoping they can pick up the gist of related concepts through context or quick searches. Maybe I need to work harder though to keep it approachable.

      Yeah, I’m not sure it’s possible to ever completely escape our anthropocentric biases, except by the reality checks science forces on us. And if science can’t currently adjudicate it, we’re stuck trying to make sense of the world the best way we can from the perspective of hairless apes who escaped the savanna.

      Definitely the vast majority of sci-fi aliens are just different aspects of us, futuristic elves, dwarves, orcs, and dragons. Realistic extraterrestrials in fiction are rare because they’re so…alien.


  3. Thanks for bumping that article. [I think I bookmarked it, or something, but never got back to it.]

    If nothing else, it highlights the pitfalls of using the top-down approach (using human consciousness as the paradigm case). It also helps me classify my own bottom-up theory, which is liberalist and incrementalist, but with a definite cut-off at the bottom. Molecularist, then.

    *


    1. If you’re like me, that bookmark list is getting pretty deep. Usually if I don’t attend to it in a few days, it scrolls down into oblivion.

      I’ve long known you were pretty liberal with the pervasiveness of consciousness, but this is the first I can recall you expressing a cut-off at the bottom. Does it include or exclude unicellular organisms? Sounds like it might include them.


      1. In an above comment you said “… I think what we’re really asking is does it process information like us.” I think what we’re really asking is pretty much “does it process information”. I think all of the current theories are simply looking at various ways of processing information, and none is going to be able to say “you need this specific kind of information process”, except for the one at the bottom.

So at bottom is a particular process, specifically, a particular information process, the Psychule. If a unicellular organism uses such a process, it has some consciousness. Ditto for computers, robots, etc.

        And it’s not just any information process. It’s a combination of two processes: 1. creating a representation, and 2. interpreting that representation.

        *
        [reasons available on request]


        1. Your first two paragraphs seem to imply a naturalistic type of panpsychism, but the final paragraph narrows it substantially. We’re not just talking about any information processing, but processing involving representations and interpretation. That still seems to fit within my “like us” standard. You’re just very liberal with the “like”.

          The question, of course, is what counts as a representation? Or an interpretation? Does the temperature reading in a thermostat and its response count? Based on our previous conversations, I suspect you’ll say yes. But what about the atmospheric dynamics on a planet changing depending on how much it’s tilted toward its star? Could we consider the temperature on the planet a representation of the current relation between planet and star? And the change in dynamics an interpretation?

          [yes please!]


          1. [Into the weeds it is. You will have seen some of this before. Hope the repetition is helpful.]

            In order to explain representation as used here you need to invoke some metaphysics, namely, causation, information, (teleonomic) goals, and patterns.

            At the core of Psychule theory is process. Like Philip Goff likes to say, we can’t know what matter is, only what matter does. Every process looks like
            input(variables)->[mechanism/system]->output(variables)
            We say: the mechanism *causes* the output when presented with the input. (The mechanism can be an extended system. Natural selection is such a mechanism. Note: I will use system and mechanism interchangeably.)

            The psychule theory is in fact panprotopsychic, the protopsychic property being correlation. At the quantum level you would call the property entanglement. All matter is entangled/correlated to some extent, and the measure of the entanglement/correlation between any two things is referred to as the mutual information. We say: Causation creates mutual information.

This mutual information can become an affordance for something with a teleonomic goal. A system/mechanism with a teleonomic goal is a system that moves the environment toward a specific non-equilibrium state. [Just gonna say “goal” from here on out.] There are many natural systems with goals: tornadoes, rivers, etc. (autopoietic stuff). Some systems can create subsystems which function to achieve or at least aid the goal. Natural selection is such a system, creating life. The function of the subsystem becomes the goal of that subsystem. The function of life is the perpetuation of life. Natural selection also creates subsystems within life. These subsystems have particular functions, like move toward food, because those functions move the larger system toward its goal state (self-perpetuation).

            So now we get to information processing. A system with a goal could create a subsystem whose sole function is to create an output that has a specific correlation (mutual information) w/ something in the world. Some would call this output a representation (and a good semiotician would call it a sign vehicle). But that alone doesn’t do much for you (or the system). The system also needs to create a subsystem which responds to the sign vehicle, this response furthering the goal of the higher system. This response is called an interpretation. In fact, the system could create another subsystem which responds to the sign vehicle in a different way. This would be a different interpretation.

            So to sum up, a psychule looks like this:

            Input(variables)->[mech1]->Output(mech2,mech3)
            and
Input(variables)->[mech2]->Output(sign vehicle)->[mech3]->Output(valuable response for mech1)

            Note: mech 1 here need not be extant. It’s just part of the explanation for getting mechs 2 and 3.

            If you can find mech2 and mech3 in the thermostat, and explain the relation to mech1, I’ll call that a psychule. Don’t think that’ll work for the planet dynamics.
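To make the schema concrete, here’s a toy rendering of the double process in code (the thermostat flavoring and the function names are purely illustrative; this isn’t a claim that an actual thermostat instantiates a psychule):

```python
# Toy rendering of the psychule's double process. Names are illustrative only.
# mech2 produces a sign vehicle correlated with something in the world (a
# representation); mech3 responds to that sign vehicle, not to the world
# directly, in a way that serves the larger system's function (an interpretation).

def mech2_sense(world_temperature):
    # Output whose sole function is to carry mutual information about the world:
    # a coarse sign vehicle standing in for the actual temperature.
    return "cold" if world_temperature < 18.0 else "warm"

def mech3_interpret(sign_vehicle):
    # Respond to the sign vehicle in a way that furthers the higher-level
    # function (keep the room in a comfortable, non-equilibrium state).
    return {"cold": "heater_on", "warm": "heater_off"}[sign_vehicle]

# The psychule: a representation followed by an interpretation of it.
for temp in (12.0, 25.0):
    sign = mech2_sense(temp)
    print(temp, "->", sign, "->", mech3_interpret(sign))
```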

            *
            [easy peasy, right? hmmm … forgot to do patterns. Available on request.]


          2. Thanks for the description!

            Let me summarize in my own terminology and you can tell me if I’m off. The necessary components of a conscious system are goals, representations, and interpretation of those representations in service of the goals. For the goals, you mentioned they were teleonomic, indicating you recognize there’s an interpretational aspect to this, which I think is good. The representations can be relatively simple. Sound about right?

            Since I don’t think there’s any strict fact of the matter, I can’t say this is wrong. I will say that my own intuition of a conscious system seems to require a bit more. I need the representations to be models with some degree of detail (perceptions). (Maybe what you’d say about patterns would speak to that.) And there needs to be automatic reactions in relation to the goals (affects), for those reactions to be part of the modeling (feelings), and for them to be overridable (volition), at least to some extent.

            I can’t defend those extra requirements except to say they feel necessary. But if everyone accepted your view, like any value rule society creates, I’d have to adapt to it.

            [As easy peasy as this stuff can be. Thanks again! I wouldn’t mind hearing about the patterns, but only if you feel up to it.]


3. A good summary. The one caveat I would make is that the goals don’t have to be currently extant. The goals only explain why the two mechanisms of the key process (er, double process) came to exist. Thermostats don’t have goals, just functions (and only some use psychules).

            *
            [will do patterns tomorrow]


4. It used to be thought that various chemical elements bonded because they had “an elective affinity” for each other, a rather emotionally loaded concept. Then we discovered that it all had to do with electrons and outer shells.
    Dan Dennett argues that “mystery gives way to mechanism,” but this does not mean that “mystery” will ever completely be dispensed with.

I agree with the “no fact of the matter” position, within living things. I doubt we will ever again find it USEFUL to think of inanimate things as like “persons” or “us” or “agents” or having some rudimentary consciousness. That seems to be a convenient place to draw the line, at living things. So no facts will ever be discovered for consciousness, beyond what it takes to physiologically be “alive.” Rudimentary consciousness starts with simple self-interest and self-preservation and reproduction. So the lancelet is conscious in a very simple way, and Dennett contends infants and mentally impaired humans are conscious, in a simpler and “honorary” way, more or less like the lancelet.

I like that you recognize “an inside” to consciousness. It is a very anthropomorphic term, very subjective. If a thinking person does not recognize it (an inside) here, in us and in consciousness, there will be no place for it anywhere. It seems your personal experience of yellow is basically (workably) like mine, at least we can hope and argue for this—very much ‘outsidedly’ apparent evidence suggests so.

    Yes, ambiguity is a big part of these issues. We are within “an experience” that has elements both seemingly undeniably “outside” (objective) and “inside” (subjective). Only a giant roadmap around and between all these elements will lessen the ambiguity of any one or two of them. That is why these cognitive experiments on different animals are merely suggestive but not in themselves conclusive.

    Thanks for your efforts on these wayward matters!


    1. Thanks Greg. Appreciate seeing your thoughts on this. Drawing the line at life is often called “biopsychism” these days, although that term is sometimes used for the position that only life can be conscious rather than that all life is. It sounds like you see consciousness as a matter of interpretation within the scope of life.

My interpretive scope is broader. I’m open to animate non-living things (robots, etc.) being conscious. But ultimately my interpretive stance is that there’s no clear fact of the matter, just how we decide to regard those systems, whether they’re like us enough to deserve our moral consideration. Few people are tempted by the current systems, and really, unless we go out of our way to make them lifelike, I’m not sure that’ll change. But it’s not hard to see reasons why we might make some of them lifelike (artificial companions, etc.).

      Thanks for commenting!


  5. Incrementalism is our solution to a great many “problems” like this. For example, clean vs dirty dishes. If you have a stainless steel pot in your kitchen that contains only steel, air, and 12 molecules of rancid fat, that is an astoundingly clean pot and you should thank the person who scrubbed it. Given the ubiquity of incrementalism, it is only reasonable to extend it here.

I was interested in Prinz’s theory because it sounded a lot like a hypothesis of mine, so I looked it up here. Sure enough, he intends it as a theory in which “Consciousness, he says, is defined by reference to the having of phenomenal qualities.” It is precisely one interesting class of phenomenal qualities – the ones that in my view make the hardest “problem”, or should we say hardest fact – that I agree with Prinz about. Well, I would say “attendable intermediate representation” where Prinz says “attended”, but I don’t think anything in philosophy of mind hangs on that. (In ethics, it does.)


    1. Good point about incrementalism. “Clean” of course has a degree of interpretation to it, although it’s tied to a pragmatic goal, not getting sick from any residue left in the pot. But getting sick is a probability, and exactly how low we need to make that probability isn’t a strict fact of the matter. And our standards might vary, from one level at a camp site, to another for a restaurant kitchen.

It’s been a while since I read about Prinz’s theory. He had a chapter on it in The Blackwell Companion to Consciousness. Skimming that link, it seems similar. I don’t recall it being a complete account, but mostly an observation about what kind of information makes it into conscious awareness. One of these days, I might read his book, but it’s pricey and the price will need to come down a bit first.

