The Great Consciousness Debate: ASSC 25

This is a long video. The first hour or so features presentations on the Global Neuronal Workspace Theory by Stanislas Dehaene, Recurrent Processing Theory by Victor Lamme, Higher Order Thought Theory by Steve Fleming, and Integrated Information Theory by Melanie Boly. It also has some brief recorded remarks from Anil Seth on Predictive Coding. (Fleming actually mixes in some predictive coding ideas in his presentation as well.)

But the real meat of the session begins at the 1:12 mark when it switches into debate mode. There is some excellent back and forth between the theory proponents. Overall, the debate runs almost three hours. I highly recommend it for anyone interested in scientific theories of consciousness. There’s a lot of good discussion here.

The Great Consciousness Debate at ASSC 25 in Amsterdam

One of the things that resonated with my own views was that the differences between these theories largely amount to varying definitions of consciousness, which affects the evidence they look for. Each of the theories seems to focus on neural processing that definitely happens. The question is whether that processing amounts to a conscious state.

Here my sympathies largely lie with Dehaene and Fleming and their functionalist stance. Lamme has an interesting take, but his focus on early sensory processing, and the idea of us being conscious of things we can’t know about or remember, just doesn’t strike me as getting at what most people mean by “consciousness”. At one point there’s an interesting discussion on what evidence for multiple consciousnesses in the brain might look like, an implication of Lamme’s theory.

Dehaene makes some interesting points. One is that the global neuronal workspace is not something that happens in any central location (a frequent misconception), but is distributed in the prefrontal and other associational cortices. Another is that it largely tracks with attention. Toward the end, he’s largely dismissive of qualia (in the philosopher’s sense) but not phenomenology overall.

Fleming attempts to reconcile the prediction error processing in his model with the global ignition of Dehaene’s. Here I really felt Anil Seth’s absence. (He couldn’t attend due to illness.) It would have been interesting to hear his thoughts on this.

Boly does as good a job at a quick introduction of IIT (Integrated Information Theory) as any I’ve seen. But like most descriptions, I found it somewhere between baffling and dubious. All of the individual words make sense, but the overall description just doesn’t click. While there are some aspects of the theory that might provide insights, it remains hard for me to see it overall as a productive model.

But as Kevin Mitchell noted when he shared this video on Twitter, it seems like each of these theories does have something to offer. I agree with Anil Seth that consciousness is complex in the same manner as biological life. There’s no one theory of life, just a vast collection of interlocking theories that explain various aspects. I’ll be surprised if consciousness is any different.

Have you watched it? If so, what did you think?

46 thoughts on “The Great Consciousness Debate: ASSC 25”

  1. I thought the first part, where they each gave summaries, was useful. And I agree that a major issue is that there is not a generally accepted concept of consciousness. However, I’m more sympathetic to Lamme’s take, I guess because it comes closer to my bias. I thought the point on multiple consciousnesses was key, and the examples of split brains and blindsight did not get enough discussion. Both show examples of alternative “report” systems.

    I think ‘what most people mean by “consciousness”’ refers to the subsystem referred to as the autobiographical self, and I can appreciate the intuitive attraction, but that intuition fails to explain the behavior of split brains and blindsight. A smaller concept of consciousness can be used to explain those behaviors as well as the higher order behaviors which get organized into attention and higher order thought, etc., just as a concept of living things made of cells was not intuitive but highly explanatory.

    1. I thought you might be sympathetic to Lamme’s view. If he’s right, the question for me is what is not conscious, particularly if he were to shift to even seeing feedforward processing as conscious? He admits panpsychic leanings in the debate, so his answer is maybe nothing.

      I actually don’t see blindsight as supportive of that view as many do. We have to remember just how limited blindsight abilities are. Every case is a little different, but subjects can typically indicate whether an object is present or not; they can’t discriminate what type of object it is or any of its attributes, nor act on that information. To me blindsight is just an indication of what’s possible unconsciously. You could argue that “unconscious” is only relative to a particular consciousness, but if so, it’s relative to our consciousness.

      Split brains are more interesting. But the fact that neither side sees itself as an independent entity, that they confabulate explanations for the actions of the other side, seems to indicate we’re talking more about a divided consciousness than two separate ones. The biggest lesson for me is that we’re not as unified as we think we are.

      That said, as you know, I don’t see any strict fact of the matter here. This seems like just another example of nature frustrating our little categories with edge and oddball cases. “Consciousness” is a category we use for our own purposes within certain domains of applicability. We should be okay with it not necessarily being a natural kind.

      1. I understand that you don’t see a fact of the matter. But do you see what every theory has in common? Do you see the common denominator? Information processing, strictly understood. Which is not panpsychic, but is panprotopsychic.

        1. Information processing certainly seems like a common denominator in physicalist theories. But to the point I think you’re making with the panprotopsychism reference, information processing alone isn’t sufficient. It just gets us proto-consciousness rather than consciousness. The question remains how we can get from proto-consciousness to consciousness. That’s where I don’t think there’s any strict fact of the matter, where the answer depends on what we think is necessary and sufficient to be in club consciousness.

          And whenever I’m tempted to use the word “panprotopsychism” in this manner, I remember this point by Chalmers in his paper discussing it.

          One might worry that any non-panpsychist materialism will be a form of panprotopsychism. After all, non-panpsychist materialism entails that microphysical properties are not phenomenal properties and that they collectively constitute phenomenal properties. This is an undesirable result. The thought behind panprotopsychism is that protophenomenal properties are special properties with an especially close connection to phenomenal properties. To handle this, one can unpack the appeal to specialness in the definition by requiring that (i) protophenomenal properties are distinct from structural properties and (ii) that there is an a priori entailment from truths about protophenomenal properties (perhaps along with structural properties) to truths about the phenomenal properties that they constitute. This excludes ordinary type-A materialism (which grounds phenomenal properties in structural properties) and type-B materialism (which invokes an a posteriori necessary connection).

          source: https://consc.net/papers/panpsychism.pdf p. 15

          1. The panprotopsychic property is information. To be more exact, every physical process has an informational description which is determined by the physics but is multirealizable. This doesn’t mean that every physical process is or has consciousness. But the common denominator of the consciousness theories is the *use* of this information as information, i.e., as useful because it is correlated with something else (has mutual information). The high level theories are mostly just highlighting specific ways that the most complicated and integrated system in humans ( the autobiographical system) are doing this processing, and focusing on different parts of the elephant in the process (attention, broadcast, higher order thought). But theories like Lamme’s are pointing out that smaller systems are essentially doing the same basic thing, even as sub-parts of the larger system.
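
            To make “mutual information” concrete, here’s a toy sketch of my own (nothing from the debate; the names are just for the example): a noisy sensor carries information about the world because its states correlate with the world’s states, while pure noise carries none.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(0)
world = [random.randint(0, 1) for _ in range(100_000)]
# A sensor that tracks the world 90% of the time is "useful because it
# is correlated with something else"; a coin flip is not.
sensor = [w if random.random() < 0.9 else 1 - w for w in world]
noise = [random.randint(0, 1) for _ in world]

print(f"I(world; sensor) ~ {mutual_information(list(zip(world, sensor))):.2f} bits")  # ~0.53
print(f"I(world; noise)  ~ {mutual_information(list(zip(world, noise))):.2f} bits")   # ~0.00
```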

          2. This is the small network problem, the idea that no matter what architecture we focus on, there are typically smaller systems doing something similar. I don’t know that there’s any sharp objective answer to it. It comes down to the context of the system, and how similar it is to us. Which is why I often say that a better phrase than “something it is like” or “what it’s like” is how like us the system is.

  2. My take on the IIT presentation. She said that they don’t know the substrate of consciousness, though they begin from phenomenal existence and then try to work their way back to grasp the nature of a given consciousness substrate. Apparently the extendedness of space is thought to be the least difficult question here. Then it was decided that pyramidal grids would be the right kind of substrate to create the feeling of the extendedness of space. Was that essentially her presentation?

    1. That was part of it. The grid topology is seen as promising because it results in a higher phi than other regions. Pyramidal cells exist throughout the cortex, so I think she just meant them in combination with grid layouts. It’s why IIT focuses on the “posterior hotspot” at the back of the brain.
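
      For a flavor of what phi is getting at, here’s a toy of my own (emphatically not the actual IIT algorithm): a crude “integration” score for a two-node boolean network, measuring how much each node’s next state depends on the node on the other side of the partition. Nodes that drive each other score high; nodes that ignore each other score zero.

```python
import math
from collections import Counter
from itertools import product

def mi(pairs):
    """I(X;Y) in bits over an exhaustive list of equally likely (x, y) outcomes."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(p[0] for p in pairs)
    py = Counter(p[1] for p in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def toy_integration(update):
    """Crude integration proxy: information flowing across the partition
    {A},{B} of a two-node network, assuming uniform initial states."""
    states = list(product([0, 1], repeat=2))
    nexts = [update(a, b) for a, b in states]
    b_to_a = mi([(b, na) for (a, b), (na, nb) in zip(states, nexts)])
    a_to_b = mi([(a, nb) for (a, b), (na, nb) in zip(states, nexts)])
    return b_to_a + a_to_b

print(toy_integration(lambda a, b: (b, a)))  # nodes copy each other: 2.0 bits
print(toy_integration(lambda a, b: (a, b)))  # nodes ignore each other: 0.0 bits
```

      The real phi computation searches over all possible partitions and takes the minimum, which is part of why it’s intractable to compute for anything brain-sized.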

      The difficult question is how to test that proposition when any report has to involve the frontal regions. Even no-report paradigms have to eventually tie back to self report. Although per Dennett, even asking that question assumes there’s some fact of the matter, that there’s a consciousness finish line somewhere, where the order of arrival equals the order of presentation. If not, then the question is meaningless.

      Still, apparently the GNW and IIT folks agreed to experiments that make that determination under the Templeton project. Maybe we’ll get an answer next year. Although I wonder how determinative it can really be.

      1. These pyramidal cells, particularly L5, have been invoked by a number of people as the possible place where consciousness happens. I think they connect from the thalamus to the top of the cortex in columns. They’re also found in other places in the brain. So that also aligns with some assertions that we should look at the corticothalamic circuit for consciousness.

        1. From everything I’ve read, they definitely play a crucial role, not just with the thalamus but many subcortical structures, as well as directly connecting cortical regions to each other. But it seems like that applies to non-conscious as much as conscious processing.

    2. I was ready to comment that I was unlikely to watch the video but your comment about “extendedness of space” and “pyramidal grids” caught my attention. Can you elaborate a little more or point me to the minute in the video where it is discussed? I think it likely that external spacetime mirrors itself in some strange way in internal grid structures, which supports my thoughts about the physical relations of neurons mapping (again in some strange way) to the external reality.

        1. Thanks. I actually did listen to that segment and it wasn’t exactly what I hoped it would be. Also, I came away again with the impression of how inadequate IIT is to explain what it is trying to explain. Some of the questions could be answered as easily by simply pointing to brain wave patterns without trying to invoke any phi metric and the complicated equation. The equation seems somewhat like the Drake equation, which may be entirely correct but unfortunately nobody knows what any of the values are with any great accuracy.

          I’m starting to think the brain is like a kinetic device that resonates to its input like this sculpture resonates to the wind.

          1. Every time I read or hear about IIT’s axioms and postulates, I feel like there must be something I’m missing. After all these years I’m fairly well read on both the philosophical and scientific side of consciousness studies. At some point these descriptions should start making some sense. But they don’t. I’m trying to resist the word salad conclusion, but it’s not easy.

          2. IIT may be “word salad”? I consider that to generally be the case in academia regarding theory of how the brain creates consciousness. To refresh my memory on IIT however I typed the title into the search bar over at The Splintered Mind, (which is easy to miss but at the very top). https://schwitzsplinters.blogspot.com/search?q=Integrated+information+theory

            Relevant posts come up and they tend to link to other such posts. Thus I was reminded that IIT seems quite arbitrary and invented, though cloaked quite well with themes which merely suggest “science”. The Templeton foundation funding head to head experiments pitting IIT against GNWT? Whatever…

            If it’s true that IIT actually is little more than an arbitrary invention, shouldn’t its massive prominence give us pause regarding the theories that it stands shoulder to shoulder with today? In general I consider it unwise to consider the often wordy material that an author provides about their consciousness proposal exclusively. We should also consider reductions of that theory by others to help us grasp what it effectively implies. Thus for example my thumb pain thought experiment regarding those who tell us that consciousness exists by means of “information processing alone”. In a causal world information should not exist independently, whether in the form of a CD, marked paper, neural firing patterns, or anything else. In a causal world information should only exist given what it informs, or an associated substrate.

            In any case I was interested how it was claimed that IIT supporters were looking for a consciousness substrate. That’s a useful perspective I think. Such a substrate would be animated by neural firing (or marks on paper, or whatever), to exist as a phenomenal experiencer by means of now instantiated and thus causal information.

          3. I’m always cautious with the word salad accusation, because to someone not schooled in the relevant material, a lot of valid stuff will look like that. But as I noted above, I’m pretty well read in this subject. If someone like me struggles to understand what they’re talking about, they may have a communication problem.

            I can’t see the logic that because IIT has issues, every other theory that “stands shoulder to shoulder” with it does as well. I can understand the reasoning behind the other theories, even if I don’t agree with most of them. To read Dehaene’s book is to learn about a wide range of experiments and data. That’s also true for Lamme and Fleming. These are theories grounded in, yes, particular philosophies, but also scientific data. You may reject the philosophy, but the data is what it is.

            In terms of substrate, the point is that IIT begins with the phenomenology and then looks for what might satisfy it. It’s an approach that puts a lot of faith in our introspective impressions. And it’s exactly its reasoning from phenomenology to substrate constraints that I struggle to understand.

          4. I think the problem with all of these theories is they don’t ever finish tying the knot to even get us to the solution to the Chalmers easy problem. They have principles, equations, predictions, evidence, and so on but they have an explanatory gap in how exactly phenomenal experience arises.

          5. As I understand it, the problem of phenomenality is the hard problem, or explanatory gap in Joseph Levine’s language. The easy problems are all about functionality, which are scientifically tractable. All of these theories, aside from IIT, focus on functionality. (The proponents either actively dismiss qualia, as Dehaene does, or redefine it to something structural and relational.) IIT focuses directly on phenomenality. How well it succeeds depends, I think, on your attitude about the axiom and postulate language.

          6. The problem, I think, is that Chalmers’ easy problems are entwined with the hard problem.

            The easy problems, per Chalmers, are:

            • the ability to discriminate, categorize, and react to environmental stimuli;
            • the integration of information by a cognitive system;
            • the reportability of mental states;
            • the ability of a system to access its own internal states;
            • the focus of attention;
            • the deliberate control of behavior;
            • the difference between wakefulness and sleep.

            Some key words in Chalmers list: discriminate, categorize, cognitive, internal states, attention, deliberate, wakefulness

            Do you or Chalmers think that any of these things can be explained without any reference to experience or phenomenality?

          7. Chalmers certainly does. That’s what the philosophical zombie thought experiment is all about. (As well as inverted qualia and Mary’s room.) That we can have all that functionality without any phenomenal experience.

            Myself, it depends on what we mean with the words “experience” or “phenomenality”. I’m an eliminativist toward Chalmers’ conception of these words, of something fundamental and inaccessible to science. I do think the easy problem solutions collectively solve the more grounded version of experience (at least once we include affects in them).

          8. I am a harsh critic regarding the softest areas of science, and believe that they should only progress markedly by means of radical changes (like the ones that I propose often enough). Conversely you’re far more accepting of the status quo Mike. So our diverging themes continue…

            I’m pretty sure that you understand the logic whereby groups which stand shoulder to shoulder with less reputable groups, should then be considered less reputable as well. The association is quite generic. Instead it should be that you’re happy with the positions of certain people in this conference, and so would rather disregard that they happen to be standing shoulder to shoulder with people for whom you have far less respect. We each have a different take on what brings these people together however. Does just a bit of word salad exist, or rather is it word salad all around? It seems to me that in these fields scientific data tends to be one of the most popular ingredients from which to create a highly effective word salad.

            On IIT, Schwitzgebel’s reduction is that the more something integrates information, the more conscious it will be. I think he spoke of a website which tells how conscious various things happen to be on the basis of the phi algorithm. Furthermore there’s an exclusion principle which prevents nested consciousness as well. Whatever has the greatest phi renders various other elements not conscious. So he says that a complex enough US election should integrate enough information to render all Americans non-conscious, since the whole would integrate more information than a single person. Without these sorts of practical observations people should tend to just feast on the word salad rather than consider effective implications of what’s being said.

            This is also why I’d like people in general to consider whether information should technically be said to exist on a lone CD for example, or rather only in respect to what the CD animates, such as a CD player. If causal information only exists in respect to what it causally animates, then consciousness should not exist when the right lone information is properly converted into more lone information, as Dehaene and many others posit. Instead consciousness should exist by means of some sort of causal substrate which associated information animates. It’s a question which seems unknown in science today, though I’d like it generally assessed.

          9. Eric, I’m certainly aware of the impulse to assign guilt by association. It doesn’t seem like sound reasoning, more like a dangerous assumption. The people working on GNW, HOT, PPT, etc, have no control over what the IIT proponents say or do. A lot of them regard it as pseudoscience.

            I’ll say it once for the record in this thread. That point about information is not an accurate reflection of Dehaene’s view. Or mine. Or for most people focused on information processing accounts. Note Dennett’s hard question which focuses on the causal chain issues you’re discussing: https://selfawarepatterns.com/2020/01/05/daniel-dennett-on-consciousness-and-the-hard-question/
            It’s the idea of intrinsic phenomenality that doesn’t fit.

          10. I didn’t exactly mean “guilt by association” there Mike. That would of course be a fallacious way to implicate a non offender. What I meant was that the state of the field today is so poor that the world’s most prominent theorists stand shoulder to shoulder even though their positions are all over the map. That’s not a positive indicator for the work of any such theorist. Contrast this for example with the field of medicine. You and I would dismiss any medical conference which included speakers advocating homeopathy. Regardless let’s get into the details here since you seem to be agreeing with me in a way that I consider quite significant.

            Apparently we agree that a CD doesn’t inherently harbor information regarding recorded material. Instead it should only have this in respect to what it causally animates, such as a CD player. Good. Of course in general we nominally speak as if a CD stores information inherently, though technically we only mean this in reference to the sort of thing which unlocks such material.

            If that’s the case however then wouldn’t brain produced consciousness need to follow this rule as well? Wouldn’t brain information need to animate the proper kind of substrate for there to be “thumb pain”, “fear”, “blue”, and so on? At least IIT does speak of substrate in this capacity and so doesn’t inherently violate the rule (and even if its “phi” parameter could only have rectally emerged 😜).

            As I understand it however, the defining element of “computationalism” is the denial that a phenomenal experiencer will exist by means of brain information animated substrate. Here it’s theorized that information can exist beyond animated substrate, with consciousness proposed to emerge by the right processing of such substrate-less information. Thus if the right markings on paper were converted to the right other markings on paper, then an entity without substrate would emerge to experience what you do when your thumb gets whacked. Conversely I insist that information cannot exist in a causal world without substrate, and as you implied above, I hope that you insist on this as well. This is where McFadden’s substrate based proposal might come in to convert what should only ever remain soft science, into hard science.

          11. Eric,
            You know my views on information. There is physical information and semantic information. Information processing only needs physical information. Information processing is causation, and physical information is a snapshot of that causation. Semantic information is relative to an observer who understands, at some level, the causal relationships of the patterns.

            Insisting that only semantic information is real information leaves us needing a name for patterns that exist before their significance is understood. Whatever that term is, we can then talk about processing in that term. If all else fails, we can talk in terms of causal processing. Ultimately it makes no difference. (A lot of people like to gerrymander their definitions for particular ontological claims. I’ve never found that a productive approach.)

            On the substrateless charge, maybe someday this reply will stick: https://selfawarepatterns.com/2022/05/29/susan-blackmores-illusionism/comment-page-1/#comment-157868

          12. You think my computer sees any meaning in what you are writing?

            Who knew? I always thought it was just bit shifting and switching based on the rules of its programming.

            A room of air with a heater in one corner could develop an organized pattern of flow which we could describe as information. Could it be sentient? Or, does the particular pattern matter? Maybe only patterns that sense meaning?

          13. It depends on what you mean by “meaning”. Commands entered into a command-line prompt have meaning in that they affect the system’s actions, as does my face in my laptop’s login webcam. If you mean fitting into an overall worldview, an umwelt, then few devices currently have even rudimentary forms of that. Although self driving cars and similar systems seem to be establishing a beachhead.

            Bit shifting and switching, compared with selective propagation of action potentials (or opening and closing of ion channels and other protein functions if we want to go lower). The higher level organization, yes the patterns, matter.
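
            To make “selective propagation” concrete, here’s a minimal leaky integrate-and-fire sketch (a toy of mine, not anything from the video): the unit accumulates input, leaks between steps, and passes a spike along only when the total crosses a threshold, so the temporal pattern of the same inputs changes the output.

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire unit: accumulate input,
    leak between steps, and fire (propagate) only above threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.0, 0.3, 0.0, 0.3, 0.0]))  # spread out: [0, 0, 0, 0, 0, 0]
print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.0, 0.0]))  # clustered: [0, 0, 1, 0, 0, 0]
```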

            As far as I can see, sentience is in the eye of the beholder, whether the system in question triggers our intuition of a fellow being. We can talk about the capacity for joy and suffering, but then we have to deal with what those terms mean.

          14. I’m not sure what gave you the impression I was insisting that only semantic information be real information Mike. For example in the causal relationship between a CD and a CD player, I do not posit a need for something that exists semantically. It should make no difference whether I or anything understand the mechanics by which such a relationship works, but only the causality itself. Semantic information should exist as a tiny subset of causal information, and of course would follow those parameters as well. Similarly “battery powered airplanes” should be considered a tiny subset of “airplanes” which not only follows the general parameters of airplanes, but ones that are also battery powered. Rest assured that I’m not addressing a small subset of information but rather an effective definition in full. It is with this basic conception of causal information that I observe a non-causal element for a standard belief which you and many others hold.

            Regarding my charge that computationalism violates causality because it posits substrateless information, let’s check that. Even good arguments can ultimately get clumsy if enough care isn’t taken. I certainly don’t mean that I believe marked paper itself harbors no substrate! Paper should obviously be considered to have substrate (plant based I think), with the stuff which is used to mark it also existing through associated substrate. Even an idealist might say that marked paper exists by means of mental substrate (not that either of us are that). The issue we’re having is different. To hopefully illustrate what’s going on I’ll go back to the simple example of a computer and computer screen.

            Here the input information to a computer from a pressed letter on a keyboard will be algorithmically processed into new information that may go on to do all sorts of things, and so perhaps display that letter on the screen. Similarly the neural information from a whacked thumb informs the brain that then goes on to algorithmically do things on the basis of this input information. One thing that we know the brain tends to do here is create an experiencer of thumb pain. Furthermore if it overcomes a famous “hard problem” to animate a substrate of some sort which causally exists as that experiencer of pain, then we know that we’re at least following the causal model of the computer animating a computer screen. So here it would behoove us to look for appropriate brain based causal dynamics that might serve as a phenomenal experiencer substrate.

            While I don’t know how anyone could effectively challenge this particular causal model, others and yourself posit that brain information needn’t animate a causal substrate for an experiencer of thumb pain to exist. Instead it could convert whacked thumb information into other information which doesn’t animate a consciousness substrate. It’s thought that converting the input information into output information of some kind should be sufficient — the processing from one to the other would itself yield an experiencer of thumb pain.

            What this model mandates is that if paper with marks on it that correlate well enough with whacked thumb information were converted into another set of paper with marks on it that correlate well enough with the brain’s response, then something here would experience what you do when your thumb gets whacked. But observe that this experiencing entity should reside without substrate and thus causality, even given the substrate of marked paper. The marked paper should not be considered informational in the intended sense if it doesn’t effectively animate or inform an appropriate substrate.

            Apparently in my last comment I erred when I said “with consciousness proposed to emerge by the right processing of such substrate-less information”. Instead I should have switched around the last two words to read “information-less substrate”. The point is that in a causal world there should be no information, processed or otherwise, without the right kind of substrate for it to then animate. This is displayed by the information encoded on a CD, that a computer sends its screen, and all other examples of information that I know of. To potentially justify the causality of computationalism I suppose you could try to think of a case where this isn’t true and so argue that consciousness instead functions causally like that. Or of course we could go on as before. I do feel that my argument is tightening up though.

          15. I think sentience is about semantic information. It is especially knowledge about what is important to the organism and how it acts in the world.

            I don’t think there is a need to be mystical about it but a theory of knowledge and meaning might have to be a part of any theory of consciousness. I don’t actually think that should be difficult. Fundamentally I think meaning represents a sync between the internal and external representations of the world that arises when neurons and brain circuits resonate to some stimulus.

          16. I do of course see a connection between sentience and semantic information James. The ordering in which we phrase this connection however should matter. Saying that sentience is about semantic information suggests to me the case being made by computationalists — all we need to do is build chatbots that seem more and more like they understand us, and then bam, that apparent understanding will demonstrate that these creations are also sentient. Clearly evolution didn’t create us by means of creatures that merely seemed like they grasped natural languages until they actually did understand.

            When we instead posit semantic information to be about sentience however, then things begin to make sense. Originally there should have been an entirely non-conscious type of creature, though with an associated epiphenomenal sentient component for evolution to potentially incorporate functionally. Apparently with enough iterations this component did become functional and so was given more and more informational, processing, and output resources. Would it be effective to say that one of these incredibly primitive examples of sentience would have “understood” anything? That seems like a stretch, and even if full human consciousness ultimately evolved from such primitive examples of sentience.

          17. “all we need to do is build chatbots that seem more and more like they understand us”

            That isn’t what I mean but I can see how it might be interpreted that way. I’m saying meaning is something in the resonating neurons – a form of experience itself like pain or pleasure. It is a coalescing of our model, or some part of our model, of reality around a representation.

          18. Okay James, I guess I’m generally good with the representational perspective. For example there is this comment (though in the next one I displayed an openness to dispositionalism as well). https://schwitzsplinters.blogspot.com/2022/06/dispositionalism-vs-representationalism.html?showComment=1656265410179#c7374992288272411445

            In truth however I find abstract discussions very challenging. In them I often feel that I must guess at what someone means.

          19. Regarding abstract discussions – part of my “modelism” implies that each one of us has somewhat idiosyncratic views of the world like fingerprints. That makes any discussion difficult but especially abstract ones.

          20. Eric,
            You talked about the CD not having information if there wasn’t anything to make use of it. But the CD always has physical information. If I understand what you mean by “causal information” that’s always there as well. Something caused the patterns on the CD to exist. That their further downstream causal effects require additional causal forces (the player and someone to put it in the player) doesn’t change this. Consider a logical AND gate. It only signals on if both its inputs signal on. Does that mean only one input signaling on isn’t information? By itself, it can’t have downstream causal effects, just like the CD data without being put in a player.
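
            To put numbers on the AND gate point (a toy calculation of mine): with uniform random inputs, a single input already carries measurable information about the output on its own, just less than both inputs together. The lone signal is informational before the gate combines it with anything.

```python
import math
from collections import Counter
from itertools import product

def mutual_information(pairs):
    """I(X;Y) in bits over an exhaustive list of equally likely (x, y) outcomes."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(p[0] for p in pairs)
    py = Counter(p[1] for p in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

inputs = list(product([0, 1], repeat=2))  # four equally likely input states
outputs = [a & b for a, b in inputs]      # the AND gate

one_input = mutual_information([(a, out) for (a, _), out in zip(inputs, outputs)])
both_inputs = mutual_information(list(zip(inputs, outputs)))
print(f"one input alone: {one_input:.3f} bits")   # ~0.311
print(f"both inputs:     {both_inputs:.3f} bits") # ~0.811
```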

            On the substrate business, so your issue isn’t with multi-realizability, but with the fact that we don’t posit a second substrate generated from the first? If so, then your summary of this issue needs a lot of work.

            On the paper marks thing, your way of describing this scenario downplays the actual processing, or the fact that it would have to include vast numbers of marks on paper at every stage of the processing, likely hundreds of billions to cover the experience you discuss. I’ve covered these issues in my criticism of Searle’s Chinese Room. Like all philosophical thought experiments, it’s just an intuition pump. I don’t share your intuition, so it has no effect on me. And I think most people’s intuitions are affected if we adjust the scenario properly.

            In other words, I find the second substrate redundant. To channel Laplace: I have no need of that hypothesis. Give me specific reasons other than just your strong conviction, and I’ll consider them.

          21. Mike,
            If there is only one input signaling “on” for an AND gate, is it informational given that the condition would not be met? Yes I’d consider this informational since such causal function would thus occur. It’s the same if the condition were met as well. My point is that this input signal should not be considered informational inherently, but rather only informational (in the intended sense) in respect to the gate itself. The gate would be an instantiation mechanism which renders this signal informational. Otherwise (which is to say in a non intended capacity) this would just be “stuff”.

            At this point in the discussion I’m able to assert a very simple conclusion. There exists no inherent information, whether encoded on a CD or in any other capacity, but rather only potential information. This is because in a causal world information can only exist in respect to what’s informed. So the potential information encoded on a CD should only be considered to exist as information in respect to what animates it, such as a CD player.

            The problem that this simple observation presents for you is that it renders as magical a position that you’ve been indoctrinated into. If the processed brain information associated with the sensation of a whacked thumb does not exist as such except in relation to what animates it (just as the potential information of a CD and all else requires animation to exist as such), then consciousness should only exist by means of an associated substrate which unlocks that information. That’s where McFadden could come in to save the day for naturalism by demonstrating an empirically implied consciousness substrate for brain information to animate.

            On my thumb pain thought experiment, I certainly don’t want to downplay the amount of processing that might be required for something to exist which has such a phenomenal experience. I suppose I could say something like “If markings on less than an infinite number of pages were correlated with the information that your thumb sends your brain given that it’s been whacked…”. Would that help level the playing field?

          22. Eric,
            As I noted above, gerrymandering the definitions makes no ontological difference. Nothing physical changes with the patterns once someone or something begins reacting to them (unless of course that reaction involves altering the patterns themselves). If someone buys your definition of “information”, all they’re obliged to do is change labels. If you want to call it “potential-information processing” rather than “information processing”, you’re free to do that, but it makes no difference. So nothing to rescue.

            On the paper markings, so you’re saying it would require an infinite number of pages? If so, how is that not a magical position? My point was that you’d need at least as many marks as there are synaptic state changes in a pain experience, and that’s a very large number, but far from infinite.

  3. Still only at the introduction, but it seems like Liad Mudrik was making the same point, more politely, that you are making with the analogy to life. I strongly agree that both life and consciousness are multi-dimensional and that it pays to study all the dimensions, and to allow from the beginning that they might come apart in some cases.

    1. I think there is a sort of élan vital and it actually is exactly what Galvani found a few hundred years ago. It is electricity or more precisely the spontaneous electrical low-frequency oscillations (SELFOs) that appear in single cell life and extend to the specialized oscillations involved with the brain. When they cease, the brain is dead, consciousness is gone, the organism is dead. The oscillations provide the organization and dynamism that allows complex life to exist.

  4. Maybe somebody else has had this thought but I think there could even be a way we are missing something big in how we are researching the topic, especially with fMRI related research. This problem would even apply to McFadden, I think.

    We believe that only a fraction of the available information about the world actually makes it from the senses to the upper levels of the brain. In some cases, I think there are even more downward connections than upward connections. This makes sense because we could probably burn up our calories for the day in a few minutes if we tried to process everything. However, this would imply that consciousness is mainly generated from information already in the brain with periodic updates to align the internal model with the external world (Is that Dennett?). So most of the heavy duty activity we see from fMRI or other means of measuring activity probably is not consciousness per se but instead the updating activity, only a part of consciousness. Given that consciousness also must have in its model not only what is happening now but also what happened recently, there may be no apparent representation at all on brain scans for the corresponding phenomenal experience. The model must persist for some time (seconds to minutes) for us to have continuity in our experience. I think this is beyond what is normally encompassed in the term short-term memory.

    I went back and looked at my SELFOs post while commenting to Paul. That post was primarily based on this article:

    https://royalsocietypublishing.org/doi/10.1098/rstb.2019.0763

    The author calls attention to the default mode network as master integrator of information and writes:

    “Another way to think about the potential role of the DMN in human self-construction is as the top layer of the hierarchical predictive coding ‘self-model’ as put forth by Friston [74,75]. Like Buszáki’s theory, which predicts the need for an ultimate brain integrator or ‘reader’ (i.e. a ‘self’), a hierarchical predictive coding model also implies the need for an ultimate brain integrator or ‘predictor’ (also a ‘self’) at the top of the hierarchy. According to predictive coding brain models, prediction error is passed up the hierarchy from the low-level primary, unimodal sensory areas to the ultimate, multi-modal ‘predictor’ at the top of the hierarchy, which contains a high-level abstract representation (of the ‘self’) that then passes predictions back down to the lower levels [74,75]. In this way, the DMN, oscillating at the lowest frequency in the brain, might act as the brain’s ultimate information integrator, receiving input from all the lower-level, otherwise isolated units (oscillating at higher frequencies), and passing on one unified ‘self’ prediction back down to generate coherent, adaptive behaviour”.

    The DMN would meet the requirement of building a model of the world while using little energy. And the very slow waves – like a pilot wave? – would conceivably allow a persistence of seconds to minutes.
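
    As a minimal sketch of the hierarchical arrangement that quote describes (my own toy predictive coding loop, not the paper’s model; the numbers are arbitrary): the top level sends a prediction down, only the prediction error comes back up, and the top level nudges its belief by a fraction of that error. A small learning rate makes the top level change on a slower timescale than the input it predicts, which fits the low-frequency picture.

```python
import random

def predictive_coding_demo(steps=2000, learning_rate=0.05, seed=1):
    """Toy predictive coding loop: the top level predicts the sensory
    input, the lower level passes up only the prediction error, and the
    top level updates its belief by a fraction of that error."""
    random.seed(seed)
    world = 0.7   # hidden quantity the senses sample
    belief = 0.0  # the top-level "predictor"
    for _ in range(steps):
        sample = world + random.gauss(0, 0.2)  # noisy sensory evidence
        error = sample - belief                # what gets passed up
        belief += learning_rate * error        # prediction update
    return belief

print(predictive_coding_demo())  # settles near 0.7
```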

    Buzsáki’s theories usually get left out of this discussion probably because he can write entire books barely mentioning “consciousness”.

    1. Dennett’s view is really more a variant of global workspace, but deemphasizing the idea of content breaking into consciousness at any single point. A lot of what you describe here fits with the predictive coding view. Of course these views are compatible with each other. As Fleming discusses in the video, we can view the global ignition of GNW as one and the same as the error correction signal from lower order sensory regions to higher order processing regions.

      How does this view of the DMN accord with the finding a few years ago that DMN and dorsal attention network activation seem to be mutually exclusive? Activation of the DAT seems to inhibit the DMN.

      On Buzsáki, I actually like the sound of a book that barely mentions consciousness. Many of the hardcore neuroscience books I’ve read either don’t mention it at all, or discuss it in a sidebar somewhere in the attention chapter.

      1. DMN and DAT are almost defined as opposites, but the evidence is from fMRI, right?

        Any activity requiring attention is going to light up regions not usually defined as part of the DMN. But I think the view of the DMN as master integrator would mean that it is active all of the time we are conscious, but probably not as active as focused attention areas. The whole idea is that the DMN is responsible for maintenance and continuity of consciousness, so it will always need to be a low-level, low energy process that might barely show up in fMRI.

        I think it is also possible that the concept of the DMN is misleading or even wrong.

        This is interesting:

        The surprising role of the default mode network in naturalistic perception

        “Network coactivation was selectively correlated with the state of surprise across movie events, compared to all other cognitive states (e.g. emotion, vividness). The effect was exhibited in the DMN, but not dorsal attention or visual networks. Furthermore, surprise was found to mediate DMN coactivations with hippocampus and nucleus accumbens. These unexpected findings point to the DMN as a major hub in high-level prediction-error representations”.

        Also (note the “so-called”):

        “The default mode network (DMN) is a group of high-order brain regions, so-called for its decreased activation during tasks of high attentional demand, relative to the high baseline activation of the DMN at rest1,2,3. Much research has been conducted in the pursuit of the enigmatic role of this network, consistently pointing to DMN activity during internal processes such as mind wondering, mental time travel, and perspective shifting4,5,6. However, recent neuroimaging studies suggest that the DMN is important not only for internally-driven processes, but remarkably, for long-time scale naturalistic processing of real-life events7,8,9,10,11, making it central to understanding how our brain tackles incoming information during everyday life. This discovery was enabled by computational advancements in the analysis of neuroimaging signals, which now allow us to track the dynamics of continuous naturalistic processing in healthy human brains, noninvasively”.

        https://www.nature.com/articles/s42003-020-01602-z
