How much can we change the causality of the brain and keep consciousness?

James of Seattle clued me in to a thought experiment described by Dr. Anna Schapiro in a Twitter thread.

It’s very similar to one discussed in a new preprint paper: Do action potentials cause consciousness? Like all good thought experiments, it exercises and challenges our intuitions. In this case, it forces us to contemplate how we think consciousness actually works. The paper authors claim it challenges major theories of consciousness. I don’t think it does, but you might feel differently.

  1. Let’s say we’re able to record the voltage spikes from every neuron in your brain for 30 seconds. (This can actually be done for small numbers of neurons, but let’s assume we can somehow do it for all 86 billion.) During the 30 seconds, you wave to yourself in a mirror.
  2. Now, let’s further assume we can use the recording to induce the same spikes in those neurons in the same order. We do this using voltage clamps on each neuron. (Again, voltage clamping is currently doable for small numbers of neurons (fascinating example), but let’s assume we can do it for all 86 billion.) So we run the recording, inducing the spikes in your neurons in the same order; a toy sketch of this record-and-replay setup appears just after this list. During the sequence, are you seeing yourself wave in the mirror? (In Schapiro’s poll, 70% thought they would be.)
  3. Okay, let’s suppose we have the technology to block the neurotransmitters in all of your synapses, and we do so. We’re now going to run the recording, inducing the spikes in the same neurons in the same order. Still seeing yourself waving? (Only 28% of Schapiro’s respondents thought so.)
  4. Now we physically distribute your neurons to labs throughout the world. We send instructions on exactly when, in a precisely timed operation, they’re supposed to induce spikes in the neurons they receive. When those instructions are followed in all the labs, are you still conscious of yourself waving in the mirror? (24%)
  5. Finally, it turns out someone screwed up on the time zones, so the ordering in 4 got messed up. Still seeing yourself waving? (11%)
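
To make the record-and-replay structure of steps 1 and 2 concrete, here’s a minimal toy sketch. Nothing in it comes from Schapiro or the paper; the neuron IDs, spike times, and the stub “clamp” are invented purely for illustration. The point is just that, during replay, every spike is read off the recording rather than computed from the neuron’s inputs.

```python
# Toy illustration of the thought experiment's record/replay structure.
# Neuron IDs and spike times are made up; the only point is that during
# replay the spikes come from the recording, not from the neurons' inputs.

from collections import defaultdict

def record_spikes(live_activity):
    """Step 1: store, for each neuron, the times at which it spiked."""
    recording = defaultdict(list)
    for t, neuron_id in live_activity:
        recording[neuron_id].append(t)
    return recording

def replay_spikes(recording, clamp):
    """Steps 2 and 3: drive each neuron from the recording, ignoring its inputs."""
    events = sorted((t, nid) for nid, times in recording.items() for t in times)
    for t, neuron_id in events:
        clamp(neuron_id, t)  # force a spike at the recorded time

# Example: three spikes observed "live", then replayed through a stub clamp.
live = [(0.01, 42), (0.02, 7), (0.03, 42)]
rec = record_spikes(live)
replay_spikes(rec, clamp=lambda nid, t: print(f"force neuron {nid} to spike at t={t}s"))
```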

So when in this sequence do you think consciousness disappears, if ever?

According to the paper authors, many of the major theories of consciousness, such as global workspace, higher order thought, or recurrent processing, are committed to you being conscious through all the steps, although I suspect the proponents of those theories would argue that their theories are more context specific than that. The paper authors also assert that integrated information theory, with its causal structure requirements, and their own Dendritic Information Theory, will have bailed by step 3.

When considering this, I think it’s important to ask Daniel Dennett’s hard question: And then what happens? To be conscious of something, I think, involves more than just the immediate processing. It’s being in the moment with your memories, recognizing with your learned associations, and then remembering it afterward. It’s having the episode be part of your episodic stream of consciousness.

With that in mind, I think there is a chance of still being conscious in step 2. But step 3 is where we unquestionably mess with the causal dynamics. At that point, consciousness in the sense described above seems absent. But as usual it comes down to exactly how we define consciousness.

In step 1, you’re presumably fully aware of where you are and what you’re doing. In step 2, you might be confused about exactly what’s happening, but with your synapses still working, you should remember that you’ve already done this once, and might remember the second time after it’s done.

But in step 3, when we block the synapses, you no longer have the ability to remember the previous attempts. (Remember, memories are thought to be stored in the varying strengths of synaptic connections.) If it makes sense to say you know anything, you think this is the first time you’ve done the waving. And most importantly, assuming we unblock your synapses afterward, you won’t have any memory of the run in step 3. From your perspective, you will have blacked out during that period, maybe similar to the experience of an epileptic patient having a seizure. If we call that state “conscious”, it seems completely separate from your overall autobiographical consciousness.

But isn’t it reasonable to assume that some kind of experience takes place in step 3? What exactly do we mean by “experience”? If we can’t be changed by the sequence, in what manner is it really an experience?

Perhaps most importantly, what test could we ever do to establish there was still a consciousness present in step 3? We couldn’t trust any self report, behavioral indicators, or brain scan patterns, because those would just be from the recording playing back. And again, you’d have no memory of it afterward, so couldn’t self report your experience after the fact. We’d have an epiphenomenal version of consciousness, something with no causal effects.

If step 3 isn’t conscious, then steps 4 and 5 are moot. But if we do retain consciousness in step 3, the paper authors note that electromagnetic field theories of consciousness should bail at step 4, when all the neurons are now too distant to affect each other purely through the field. If we still think consciousness is present at step 4, it’s hard for me to see the rationale for deciding it’s absent in step 5.

What do you think? Where do you draw the line? And why? And do you see any issues with the one I drew?

33 thoughts on “How much can we change the causality of the brain and keep consciousness?”

  1. Thanks for writing this. Made me rethink it and changed my mind. [Second time in as many weeks?] I originally had consciousness all the way to the end, but now I don’t think it gets past 1.

    As you are wont to say: there is no fact of the matter, because it depends on what you require to call it consciousness. For myself, I want to identify the smallest unit, the Psychule (the H2O), whereas you want to require a sophisticated arrangement/coordination of psychules, suggesting some arrangements are insufficient (clouds, steam, snow?) for “real” consciousness (streams, rain, puddles).

    For me, everything after 1 is playing back a recording of consciousness, like playing a record of someone playing music. Is playing the record the playing of music?

    The big question for me is: what is the state of the neurons at the end? Have they changed or not? If they have changed, for example such that they hold a new memory, then that counts as consciousness. If there is some other change, not necessarily a memory, such as a change in hormone levels, that would also count. This is Dennett’s “and then what happens”.

    * [this is my current gut feeling, subject to change by analysis]


    1. Already want to change this. Wanna take back the part about counting if the neurons/hormones changed. I think that would be more like implanting a false memory. I don’t think the implanting procedure counts as experience.

      * [still thinking]


    2. Thanks for alerting me to it. I had seen Victor Lamme’s urging for everyone to read the paper, but he didn’t give any strong indication of its nature the way you did.

      I have to admit I struggled with this one yesterday. It took a good amount of thought. Right off the bat, messing with the causal dynamics disturbed me. I felt like something was wrong there. But I went through a period of wondering whether this bullet would really have to be bitten and consciousness in the distributed mind accepted. It took a while for me to settle down and consider why the causal flow mattered (Dennett’s hard question). Once I did, the view in the post kind of fell out.

      I agree with the point about it just being a playback of consciousness rather than consciousness itself, at least from step 3 on. But in step 2, the synapses still working makes me think consciousness could still be mixed in. And that video example Schapiro linked to is worth checking out. It’s only stimulating the FSF region, but it gives you an idea there can be stimulation mixed in with the normal dynamics.

      That said, I’ll admit to still being nervous about my conclusion. I’m wondering if anyone will make a convincing argument that 3 or later are conscious. (Or at least one difficult to dismiss.)

      In the end though, I think you hit the nail on the head. There’s ultimately no fact of the matter. Our strong intuition that there must be is just our natural born dualism firing off. This is the kind of exercise that forces us to confront the fact that the boundary may be much more blurry than we’re comfortable with.


  2. Mike,
    It seems to me that in situation 2, one general problem with getting the same experience of waving to yourself in a mirror, when a second power source is added that causes the same neurons to fire, is that the endogenous power source should still exist as well. Thus here I’d expect chaos, both in terms of extra firing and extra inhibiting of firing, rather than a coherent image.

    Thus it actually seems to me that situation 3 could potentially work (given cemi), since without synaptic connections the neurons might indeed fire over those 30 seconds as before, thus creating an EM field where you perceive waving to yourself in a mirror. (Of course you wouldn’t actually be waving, unlike before, so that’s an important difference that might screw things up too.)

    That clip you added where they were directly causing certain neurons in a patient to fire, in which case he’d see facial distortions (though interestingly not distortions in text) seems to support this double power source distortion idea. It reminds me of the way that I propose McFadden’s cemi be tested. For that we’d add fake neurons in someone’s skull that aren’t hooked up to the brain, and see if the EM radiation that we create through various patterns of synchronous firing could distort someone’s conscious experiences. If so then it seems to me that McFadden’s theory would be hard for scientists to refute.


    1. Eric,
      Good point about the endogenous activity. It seems like it would definitely be an issue with the synapses still working. And it wouldn’t just be endogenous. An exogenous nose itch might trigger an impulse to scratch instead of wave. Not sure how that might play out. Or what the experience of it might end up being.

      Schapiro somewhat covers it by stipulating it doesn’t need to be exactly the same experience, just a similar one. (Sorry, forgot to include that.) Depending on what else the brain is doing when the playback begins, it could conceivably not be similar at all, but a confusing mishmash. What might ameliorate that is that the playback would be affecting neurons throughout the entire brain, including inhibitory ones. So it might shrink the variance enough for the experience to still be similar. Maybe.

      Depending on exactly how cemi works, I could see it still positing consciousness at step 3. Although I think it would have the same issues I noted in the post: no access to previous memories, and not being remembered after the playback. At least unless the field somehow encodes memories that persist long enough to affect later experiences.


      1. Mike,
        It’s interesting to me that so many people read this thought experiment and yet it slips their minds that existing brain function should be laid out on top of technology-based exogenous firing. I’d think that even the status quo should have a problem with that.

        In any case, this thought experiment doesn’t suggest anything funky about McFadden’s cemi. The answer is no to 2 given all the expected conflicting function, though maybe somewhat to 3. That such an experience would not be remembered, given no synaptic connections to strengthen, is irrelevant. It’s merely a thought experiment. Apparently the worry here is from the “information without physics-based instantiation” status quo. I have better ways of illustrating how popular theorists continue to fail science here. I’ll be doing an in-depth post on this some time soonish.

        (It looks like I double pasted in that last comment so you might want to clean that up.)


        1. Eric,
          It didn’t really slip my mind. I just ran through the thoughts I noted above, but didn’t get around to mentioning them in the post. (Trust me, if I went through everything I thought about this, it would have been a very long post.) And I think Schapiro’s stipulation meant it occurred to her as well.

          (Fixed. Sorry, meant to ask but forgot.)


          1. I was actually referring to the people polled, Mike, not you specifically. 70% said that if you were to wire up all 86 billion neurons in the head and exogenously spiked them to fire as they did when you were waving to yourself in a mirror, then you’d perceive yourself doing so once again. But no, I think what should actually result is chaos, given that one’s neurons in general should be altered in inappropriate ways over the course of those 30 seconds. Neuron function associated with your existing sight should exist endogenously, and also exogenous neuron spikes given the past visual image situation. Neuron function associated with your existing heart beats should exist endogenously, as well as exogenous spikes given the past heart beat situation. With such tampering across the board I’d expect chaotic involuntary muscle spasms and unintelligible conscious experience which bears little potential for coherent thought. You might also get to the same assessment from that video you shared under “fascinating example” — that is, if you consider the percentage of the patient’s neurons that were tampered with versus wiring up all 86 billion.

            If the endogenous neuron fuel source were eliminated however, it seems to me that this might actually work, or at least for an EM field theory. But wouldn’t it also work for global neuronal workspace theory? McFadden is actually partial to this theory initially, though not when it posits no need for any consciousness actualization mechanisms (like EM fields or any other falsifiable proposal). That’s where GNWT gets spooky. And if consciousness (by which I mean Schwitzgebel’s innocent version) happens to exclusively be about brain neurons firing properly, though not mechanizing anything by means of that firing, then why would it matter where each of these neurons happens to be? If firing correctly is all that matters, then why couldn’t they create a given conscious experience if properly fired while spread across the earth?


          2. Eric,
            Dehaene talks about GNWT as a theory situated within the cognitive neuroscience framework, focused on how consciousness happens in a brain. He does think it provides insight into possible AI consciousness, but it requires that AI to have a lot of other stuff from neuroscience, and he’s very cautious about extending it that way. I’m pretty sure he’d say messing with the reasons why the neurons fire would be outside the context of the theory. I know Baars would for GWT overall. (Baars has already made similar statements to that effect in interviews.) In other words, they don’t make the universality type claims IIT advocates are notorious for making.


          3. Interesting Mike. From what you’re saying here it sounds like the “global” theorists are safe from not only this thought experiment, but funky thought experiments in general such as Searle’s, Block’s, Schwitzgebel’s, and mine. Well good! Let me try to break things down here as simply as I can.

            Apparently in a natural world “machine information” cannot exist as such independently, but rather only in respect to something that it “informs”. Thus a note written in English is not inherently information, except in respect to something that can in some way interpret it. Otherwise it’s just “stuff”. The same goes for encoded tapes, genetic information, and certainly the information associated with computer function. Without a machine that effectively uses something, “machine information” cannot exist. I’m quite sure that you grasp this in a general sense. This ought to be considered the first rule of engineering.

            So what does this rule mandate about phenomenal experience? (Or qualia, or innocent consciousness, and so on?) In a natural world it mandates that no machine, brain or otherwise, can create this without associated instantiation mechanisms. It’s the instantiation mechanisms that would render any signals here to exist as “information”. Thus when your thumb gets whacked and you experience this as pain, in a natural world you can be sure that information about the event was animated by a mechanism of some sort in order for you to have that pain. The only reasonable account I know of that follows this rule is McFadden’s cemi. Here whacked thumb signals become “information” in the sense that they cause neurons to produce a kind of electromagnetic radiation which itself exists as thumb pain. Furthermore because this fundamental rule of engineering is followed, his proposal is immune from all of the silly implications which various thought experiments demonstrate.

            From what you’ve just said, apparently the “global” people are also fine. They must still be on the fence about what mechanism their global workspace effectively animates to create phenomenal existence. The theorists who aren’t fine, however, are the ones who posit that the brain processes whacked-thumb information into other information, and that this processing itself exists as thumb pain. That’s where all sorts of silly implications result. That’s where things get supernatural.


  3. I have several immediate responses, in the following order:

    (a) “And then what happens?” But that’s sort-of covered by the stipulation of the period of 30 seconds, which is long enough for some “then” to take place.

    (b) As stated, I would bail out at step 2, but that’s easily enough overcome by stipulating that every neuron in all your body is being recorded/manipulated, not just the brain ones.

    (c) The thought experiment is implicitly predicated on the assumption that the central nervous system (CNS) is sufficiently digital-like for the replay to be exact. That’s simply unrealistic (with no prejudice to the question whether digital systems can be conscious :-)) — we do not know what degree of tolerance may be involved. E.g. if Penrose is right and quantum effects matter, the thought experiment fails. But, OK, let’s imagine a conscious digital system then. E.g. a computer. Mutatis mutandis, the problem still applies.

    (d) I have a gut feeling that the whole issue is intimately connected on the one hand to the old notion of the “Boltzmann brain” and on the other to Greg Egan’s “dust hypothesis” in _Permutation City_. So let’s explore a radical re-interpretation vaguely on those lines. Given the number of humans (and primates? and mammals?…) on the planet and the number of neurons each one has, it is not implausible to think that at least now and then a situation may arise in which each neuron of your body is firing exactly in step with a particular neuron in some other body. Does the set of all such neurons also generate consciousness at that time? (But… then what happens?)

    (e) Going back to (c) above. Does a computer running a complex simulation still do that if its bits are still flipping on and off even though disconnected and scattered across space and time, exactly as they would in some given computer? I’d say no. So by analogy I say no to case 5 and case 4. That leaves case 3, where I am inclined to think that the upshot of the thought experiment is, indeed, that there is no fact of the matter.

    Having said all that (rather more than I thought I would :-)), I reserve the right to change my mind after further pondering the matter. Which may take a while. What’s your policy on revisiting past topics?


    1. (a) Good point. I was looking at the connection of those 30 seconds to what came before and what happens afterward. The question is how productive it is to talk about consciousness in completely isolated systems.

      (c) I left out an important stipulation from Schapiro, that you wouldn’t have the exact same experience, but one similar to it. Although Eric and I discuss some complexities with that. Not sure how it might all shake out.

      (d) You might find this post interesting.

      Are rocks conscious?

      (e) I think it depends on whether it’s still able to produce the right outputs. Consider the distributed nature of the internet. Or something like SETI@home. But that still presumes that the overall system moves from one state to another for the right reasons, that the right causal structure is preserved. If the transitions are because of a recording being played back, that’s different from it moving due to an algorithm.

      No deadline on revisiting old topics. I don’t cut off comments on old posts. Although if your comment goes to the spam folder on an old post, I’m less likely to catch it. So if you post a comment that instantly disappears, drop me an email (About page) or ping me on Twitter.


      1. Re “I left out an important stipulation from Schapiro, that you wouldn’t have the exact same experience, but one similar to it.”

        No, no… That simply would not do! Whence the assumption that the physical/mental mapping is such that small changes in one correspond to only small changes in the other? I’d say we have good reasons to believe it is false!


        1. We do? I can certainly see how a small change could make a major difference in recognizing an object, or triggering a feeling. But remember, any cascades might not have much room to work with since we’re also firing a lot of inhibitory neurons.


          1. That just makes explicit the same huge assumption in respect to inhibitory neurons — small changes have small effects. A biological CNS features a complex interplay between analogue chemical processes and digital-like neural firings, and we have zilch data on the tolerances involved — on how much variation would alter one’s mental processes. As AI researchers realised back in the 1960s/70s, variability in responses to near-identical situations is evolutionarily advantageous as a way of avoiding an organism getting stuck in small local maxima of behavioural fitness. Variability is, of course, also an asset in an evolutionary arms race. (And then there is also the issue of token-token identity of mental/physical not entailing type-type identity, as discussed with Eric in the previous consciousness topic — which suggests that discontinuities are likely to arise.)

            In any case, while in principle we could record neural firings of all neurons in the body, there are serious problems with the notion of a replay, even if one buys into the above assumption. The YouTube video you cite shows stimulation of a region, not of making individual neurons fire or not fire to order — not at all the same thing. (Nor is that a new observation, of course — I recall reports of similarly stimulated olfactory sensations some 50 years ago.)

            The problem with a replay is again down to the analogue-to-digital character of neural activity. Slight infelicities in replay would force a neuron to do something slightly different from what it would naturally do, which in itself would have an effect on its complex bio-chemical state, potentially making subsequent slight infelicities have further cumulative effects, requiring a bigger and bigger “correction” on the part of the replay (e.g. making some neurons fire before they are ready to do so or even just after they did fire — um… how exactly?). So chances are, instead of a smooth evolution of the overall CNS state, there would be an increasingly jerky one, and at worst the whole thing could go off the rails altogether, possibly to the point of causing neural damage.

            I suspect the whole idea comes under the heading of possible in principle, but in principle impossible in practice. Much better to stick with arguing about the effect of this thought experiment on the notion of conscious computers, where it does have a much stronger bite because such complications do not arise (though they might if async architectures ever actually happen).


          2. There is considerable variability in the response to chemical synapses. So I’m onboard with your observations in that regard. However, the thought experiment somewhat obviates that by bypassing the synapses. It just leaves the synapses there in step 2 to do their thing. But the clamps control what the neurons will do.

            And voltage clamps are real. https://en.wikipedia.org/wiki/Voltage_clamp
            Although obviously doing 86 billion will probably never be possible. This is very much a thought experiment only. (On the other hand, for a civilization that sufficiently masters nanotechnology, who knows what might eventually be possible. I’m pretty cautious with the i-word.)


  4. Yes, I read this thought experiment and it served its purpose well in making me think I had missed something in assuming the conscious self to exist in the mind, represented by firings of neurons. Thinking of the example as a replay – like replaying a movie, which is not itself the real-world scene, but just a capture of aspects of it in frames of pixels – seems to help. Similarly it helps to think about a conventional computer running a word processing program – if you spread that representation over space and time it would still have the same patterns and make the same steps, but it would not connect back into the real-world input and output coherently enough to do useful word processing.

    Therefore I concluded that the key is to think about consciousness as characterising interactions over a block of time and space (in order to achieve a unified decision-making process at the whole-organism level), and as embodied in the real world. As also informed by recently listening to Mark Solms, I think that puts at the heart of the matter the following:

    – a decision-making cycle that is at the whole organism level, with a duration of a few tenths of a second, to give time for integration of all necessary inputs and coordination of all resulting outputs (I act as a single thing-as-a-whole)
    – valence (positive or negative affect) in service of survival (homeostasis) and reproduction as the basis for that organism-wide decision (it is ‘for me’ and I have good reason to care about the outcome)
    – a structured representation of self, world beyond self, valence, sensory data and candidate actions, and the relationships between these things, in service of the decision (I am aware, I know, I understand)
    – the ability to read and write that structured representation by reading it as if it were sensory data, and writing like motor action; this lets me compute, albeit a little slowly (I know that I know)
    – the representation captures past, present and potential future cycles, providing time continuity. (I am a single thing over time, but capable of change).

    That decision-making cycle could be thought of as the heartbeat of the mind, and it supports consciousness (but is not all that it is), just as the physical heartbeat supports life, but is not life. Valence is crucial – like a life force – and is what we add to what is otherwise just physics. In effect the brain sets up a very carefully preserved, highly nonlinear, single-point (but distributed) decision-making system in which we (the bit of the universe we identify as ourselves) control outcomes, mentally and physically.

    The structured model of self (what am I, what can I see, what do I know, what can I do, how do I feel, how will I feel) and how it relates to the world is fundamental here in what we refer to as consciousness. It is notable that religions and philosophies for life, such as Buddhism, deal with these core elements of the relationship between ourselves and the world, and what good looks like for us, and religious conversion could be seen as a fundamental up-ending of this core model.

    One final analogy is to think of the mind as the satnav for the body. We only know of ourselves and our relationship to the world beyond through being represented on the satnav…but we can also put questions into the satnav and get answers out that influence what we do and become next. In a sense the body is continually throwing forward a coherent representation of itself, and catching it again at the next decision cycle.

    Concisely, it’s all interactions…and it’s different from physics because it only makes sense to characterise those interactions at the space and time scales of the whole organism.


    1. Thanks Peter. A lot of good stuff here I may need to think about and provide additional thoughts on later.

      I had an analogy similar to your word processor one in my post, but had to cut it for space. (And to avoid the inevitable “Brains are NOT computers!” reaction.)

      Imagine we have a video-game of chess, and set the computer to play itself. (Some versions have historically allowed this, just to watch what happens.) But we do this in a virtual machine, and record all the machine states while the game is playing out. Afterward, we replay all the machine states again. But during the replay, is there really a game happening? The reasons why the machine moves from one state to another have been lost in the recording. It’s no longer the same causal structure.

      But someone could still insist that chess is happening. The difficulty is that there’s no real fact of the matter. It’s all in what we choose to label a “game”. It’s easy enough to accept that for something like a game of chess, but much harder for a mind. Here, our innate intuitive dualism makes us think there simply must be an answer. But there’s no good reason to think it’s any more true for a mind than it is for chess.
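
      Here’s a minimal toy sketch of that distinction (my own illustration, not from the paper): the “game” is just a counter and the “engine” blindly takes the first legal move, but it shows where the next state comes from in each case.

```python
# Toy contrast between running a game and replaying its recorded states.
# The "engine" is a stand-in (it just takes the first legal move); the point
# is only where each successive state comes from, not chess strength.

def live_game(initial_state, legal_moves, apply_move, max_plies=10):
    """Each state is produced from the previous one by the move-selection logic."""
    state, log = initial_state, [initial_state]
    for _ in range(max_plies):
        moves = legal_moves(state)
        if not moves:
            break
        state = apply_move(state, moves[0])  # causal step: the rules choose the move
        log.append(state)
    return log

def replay(log):
    """The same states appear, but each one is simply read back from the recording."""
    for state in log:
        yield state  # no move logic is consulted here

# A trivial "game": the state is a counter, and the only legal move adds one.
log = live_game(0, legal_moves=lambda s: [1] if s < 5 else [], apply_move=lambda s, m: s + m)
print(log)                # produced by the rules: [0, 1, 2, 3, 4, 5]
print(list(replay(log)))  # identical sequence, produced by playback
```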


  5. Thanks for your reply. I believe that a difference in the chess example is that you invoke an external observer deciding if there is a game happening, but that in consciousness, it is its own subjective observer. That does still raise the question of whether, in a replay situation, it feels something. I guess what would be missing is the possibility of a different outcome, which is pretty fundamental. It would be ‘locked in’ not only in being unable to take any external action outside the mental game, but also in its own decisions being unable to change the result.

    I have been trying to summarise my view of how consciousness works in 1 page (mostly what I put in my earlier comment) and 2 pictures, but sadly cannot share pictures here. Instead I have put them as a picture in a comment on your Twitter feed about the same article and would welcome any additional thoughts when looking at them alongside my earlier comment.


    1. Hmmm. One thing we might consider is a VM running Windows where the resource manager is running. Now we have a system monitoring itself. If we record all its machine states, then later play them back, is resource monitoring happening in any meaningful sense? But I like your points too.

      One thing you can do is link to images, and also tweets with images. Put on a line by themselves, WordPress expands them. (This is often a frustrating feature, but when it’s handy, it’s handy.) Hope you’re okay with me sharing yours.


  6. There are so many problems with this “thought” experiment I’m not sure it is even worth considering.

    After step 1, the neurons are no longer in the same state as they were before step 1. Among other things, they have encoded the memory of the 30 seconds. This probably explains in part why brain scans of people performing the same task are not consistent over time.

    In step 2, are we supposed to imagine that the neurons are somehow forced to fire as they did during step 1? Is this a brain in a vat? What about the sensory input that occurs during step 2? Are we to imagine it is simply not present? We could, I suppose, expand the neurons to include every neuron in the body, not just the ones in the brain, then move the body and brain to a vat somewhat like in The Matrix.

    If we stop the neurotransmitters in step 3, wouldn’t we most likely die? The brain does more than consciousness; it controls vital functions too. I doubt there is some hard line between what is and isn’t related to vital functions.

    I imagine the experiment is trying to get at whether or not consciousness is simply some neurons firing but it has too much extra baggage to be useful.


    1. It is a thought experiment, so we have to give it a lot of leeway on the logistics. (Honestly, it’s not nearly as silly as a lot of the ones from the philosophical literature.) The only question is whether those logistics alter the intuitions. (As I think they do for the Chinese Room.)

      After some discussion with others in this thread, and after refreshing my understanding of how the clamps work (they control the voltage level of the neural membrane at all times), I actually think the clamping would prevent signals from the peripheral nervous system from propagating into the brain during the playback. The clamps would completely control the voltage state of each neuron. In step 2, the output would be the same as in the original experience. If the body were in a similar state, that might be fine. If it’s in one where different responses are required, it might not.

      But stopping the synapses in step 3 could well have severe consequences. It’s like an anesthetic that goes too far. We could imagine them keeping the synapses on the borders between the CNS and PNS working. Although again, we have to assume the experimenters provide whatever life support is necessary to keep the body running (or at least keep the brain running). And it’s not at all clear the body is still an issue in steps 4 and 5.

      I think overall the thought experiment is trying to get at our (or neuroscientists’) intuitions about the causality of consciousness.


      1. In step 2, I am trying to figure out how you induce the spike in the neurons. Neurons spike based on neurotransmitters, input from other neurons, possibly from an external EM field, state of ion concentrations in surrounding tissue, and importantly their own state. So even if all other stuff could be replicated in step 2, the difference in neuron state could generate a different behavior and the behavior over time could drift increasingly from the original experience. The problem is that neurons are not passive chips that always behave the same way. The way they behave changes with their state.

        So you could simplify the entire experiment.

        Record the voltage spikes in step 1.

        Create a web of 86 billion tiny chips that can fire on command.

        Replay the voltage spikes on the chips.

        Are the chips experiencing anything?


        1. As I understand it, the voltage clamp is typically inserted into the membrane, usually along the axon, and measures the current voltage in relation to a desired voltage at that point; it then applies whatever current is necessary to bring the membrane to the desired level. So if the firing level is desired at that time, it brings it to that level. If the inhibited one is desired, it brings it to that one. So it really takes control of the overall output of that neuron.
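
          Here’s a rough toy sketch of that feedback loop, just to make it concrete. It’s my own illustration, not from the Wikipedia article or the paper: the membrane model, gain, and numbers are invented, and real clamp circuitry is analogue and much faster, but the loop (measure the membrane voltage, compare it to the commanded value, inject whatever current closes the gap) is the gist.

```python
# Toy sketch of the voltage-clamp feedback idea: measure the membrane voltage,
# compare it to a commanded target, and inject current to close the gap.
# The membrane model, gain, and voltages are made up for illustration only.

def clamp_current(v_m, v_cmd, gain=50.0):
    """Feedback current the clamp injects to push v_m toward the command v_cmd."""
    return gain * (v_cmd - v_m)

def run_clamp(v_init, command_waveform, capacitance=1.0, leak=0.1, dt=0.01):
    """Force the membrane to follow a recorded command waveform, step by step."""
    v_m, trace = v_init, []
    for v_cmd in command_waveform:
        i_inject = clamp_current(v_m, v_cmd)  # what the clamp supplies
        i_leak = -leak * v_m                  # the cell's own (toy) leak current
        v_m += dt * (i_inject + i_leak) / capacitance
        trace.append(round(v_m, 2))
    return trace

# Replaying a recorded spike as the command: the clamp drags the membrane along
# regardless of what synaptic input the neuron is (or isn't) receiving.
recorded_spike = [-70, -70, -55, 30, -80, -70, -70]  # toy voltages in mV
print(run_clamp(-70.0, recorded_spike))
```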

          Of course, this ignores the details of what might be happening in the dendrites and synapses, which normally alters the state of the neurons for subsequent processing. It also ignores the other factors you mention. In step 2, since that processing can still happen, it might mean we remember the playback.

          But in step 3, blocking the synapses seems to ensure nothing happens at that level. As I said, you’d be unable to remember anything except whatever memories might have happened during the original 30 second interval, and would be unable to remember anything about the playback afterward.

          The paper acknowledges the existence of other thought experiments like what you’re describing. Their goal is to change the causality without changing the substrate. Although arguably the playback device and clamps have become part of the substrate, so they’re really only partially succeeding at that. But I think their overall goal is to promote their own theory of consciousness, which focuses on dendrites.


          1. “it then applies what’s necessary to bring the membrane to the desired level. So if the firing level is desired at that time, it brings it to that level”

            “Their goal is to change the causality without changing the substrate”

            Without any consideration of the ions, the dendrites, or anything else? How would that be different from the simple chip in my example? Essentially you have changed the substrate. Isolated neurons with voltage clamps are hardly the same substrate as networked neurons in assemblies.

            This is another of those thought experiments where you keep dropping or replacing pieces of a brain that are normally essential and then ask whether we’re still conscious. All of these, of course, are completely impractical, because as soon as you drop out ions, dendrites, or neural assemblies, the subject would become unconscious or likely die. We can, however, cut out large pieces of brain structure and still have consciousness. But whatever is left over has, for the most part, all of the critical elements of organized and networked neurons if the person remains alive and conscious.


  7. This thought experiment seems to be working with the idea that the firing of the neuron is the critical thing for consciousness. Of course, real neurons not only fire but also receive impulses from other neurons, as well as do some internal processing that determines whether or how to fire. So consciousness could be more related to either the reception of impulses or the internal processing rather than the firing.

    Do you have some reason to think that firing by itself alone is what creates consciousness?

    My own thought is that it’s the “awareness” of each neuron, possibly through the EM field, of other neurons that is critical for consciousness.


    1. My view is that it’s a mistake to focus on any one aspect of the brain’s operations and say, “This causes consciousness.” It’s a bit like looking at the way transistors, or disk drives, or networking cabling works in a computer cluster implementing a corporate accounting system, and saying that one thing is responsible for the accounting system, when in reality the running system is a holistic thing. That doesn’t mean every last component has to be there; a resilient system should be able to recover from the individual absence of many components, just that the system as a whole can’t be localized to any one aspect.

      So I wouldn’t say nerve spikes, in isolation from everything else, causes consciousness. Or dendritic integration (as the paper authors push). Or thalamic loops. Or any other one aspect of the brain’s processing. It’s the operations of the brain in relation to memories and being in a body and environment.

      We can remove aspects and argue about whether or not there’s still consciousness happening, but there will be a broad hazy border with many edge cases, where there won’t be a fact of the matter answer. Our strong intuition that there must be a definite answer comes from the instinctive dualism we all have, and constantly have to be on guard against being misled by.


      1. If it’s a mistake to focus on one thing, then it would be a mistake to assume consciousness is replayed simply by triggering neurons as in step 2, wouldn’t it?

        And based upon what you write it would seem you agree that step 2 would not be likely to produce conscious experience. Aren’t nerve spikes in isolation exactly what step 2 is?

        “So I wouldn’t say nerve spikes, in isolation from everything else, causes consciousness”.


        1. There are plenty of accounts of neurons in parts of the brain being stimulated in a way that alters people’s conscious perception. And in step 2, the synapses are still functional, so they have the possibility of growing or weakening in the process, which in turn implies they’d affect subsequent neural processing after the playback, so it’s possible you’d remember the sequence afterward, and that it was the second time you’d experienced it.

          But I characterized step 2 in the post as there being a chance of still being conscious. With that kind of heavy handed manipulation of every neuron in the brain, it seems hard to be confident of any predictions.


          1. “With that kind of heavy handed manipulation of every neuron”

            Which leads back to my original point about there being so many problems with the experiment that it probably isn’t even worth much consideration.


  8. I like the main ideas behind your response, Mike. But isn’t there also a causality interruption at step 2? Perhaps not entirely, though, because we could view it as a delayed causality, from the first experience, through the voltage recording devices, then down to the downstream neurons.


    1. In step 2, the voltage clamps do seem to take away the causal effects of the synaptic and dendritic states on whether the neuron will fire. But it doesn’t take away the causal effects of the neuron firing on the downstream synapses and dendrites. How exactly would that manifest in our experience? I don’t know that anyone can say for sure, but it seems possible, after the replay is done, we can remember it as the replay, that is, as the second time we’ve done it. But it might be we perceive just as much of a blackout as in step 3, or maybe some bewildering confusion we can’t make sense of.

