A neuroscience showdown on consciousness?

Apparently the Templeton Foundation is interested in seeing progress on consciousness science, and so is contemplating funding studies to test various theories.  The stated idea is to at least winnow the field through “structured adversarial collaborations”.  The first two theories proposed to be tested are Global Workspace Theory (GWT) and Integrated Information Theory (IIT).

GWT posits that consciousness is a global workspace, an area holding information that numerous brain centers can access.  I’ve seen various versions of this theory, but the one that the study organizers propose to focus on holds that the workspace is held in, or at least coordinated by, the prefrontal cortex at the front of the brain, although consciousness overall is associated with activation of both the prefrontal cortex and the superior parietal lobe at the back of the brain.

IIT posits that consciousness is the integration of differentiated information.  It’s notable for its mathematical nature, including the value Φ (phi), which quantifies the degree of integration; higher values denote more consciousness, lower values less.  According to the theory, to have experience is to integrate information.  Christof Koch, one of the proponents of this theory, is quoted in the linked article as saying he thinks this integration primarily happens in the back of the brain, and that he would bet that the front of the brain has a low Φ.
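
To give a crude flavor of the kind of quantity involved, here is a toy sketch in Python.  This is entirely my own illustration, not the actual IIT calculation (real Φ is defined over cause-effect structures and minimum-information partitions); it just scores how much two halves of a system carry information jointly rather than separately:

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits, ignoring zero-probability entries.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def toy_integration(joint):
    """joint: 2-D array of P(part A state, part B state)."""
    p_a = joint.sum(axis=1)   # marginal distribution of part A
    p_b = joint.sum(axis=0)   # marginal distribution of part B
    # Mutual information: how much the whole carries beyond the parts alone.
    return entropy(p_a) + entropy(p_b) - entropy(joint.ravel())

# A system whose halves are tightly coupled scores higher than one whose
# halves are statistically independent.
coupled = np.array([[0.45, 0.05], [0.05, 0.45]])
independent = np.outer([0.5, 0.5], [0.5, 0.5])
print(round(toy_integration(coupled), 2))      # ~0.53 bits
print(round(toy_integration(independent), 2))  # 0.0 bits
```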

While I think this is going to be an interesting process to watch, I doubt it’s going to provide definitive conclusions one way or the other.  Of course, the purpose of the collaboration isn’t to find a definitive solution to consciousness, but to winnow the field.  But even that I think is going to be problematic.

The issue is that, as some of the scientists quoted in the article note, GWT and IIT make different fundamental assumptions about what consciousness is.  Given those different assumptions, I suspect there will be empirical support for both theories.  (They are both supposed to be based on empirical observations in the first place.)

From the article, it sounds like the tests are going to focus on the differences in brain locations, testing whether the prefrontal cortex in the front of the brain is really necessary for consciousness, or whether, as Koch proposes, the parietal lobe in the back is sufficient.  However, even here, philosophical distinctions matter.

By “consciousness” do we only mean sensory consciousness, that is awareness of the information provided by the senses, both exteroceptive (of the outer world) and interoceptive (of the insides of the body)?  If so, then the parietal lobe probably would be sufficient, provided subcortical structures like the brainstem, thalamus, basal ganglia, and others are functional and providing their support roles.

Or do we mean both sensory and motor consciousness?  Motor consciousness here refers to being aware of what can be done and having preferences about various outcomes, that is, having volition and affective feelings (sentience).

If by “consciousness” we only mean the sensory variety, then Koch will likely be right that only the back of the brain is needed.  But for anyone who considers both sensory and motor consciousness essential, an empirical accounting of the sensorium will not be satisfying.

What complicates a discussion like this is that our intuitions of consciousness are not consistent.  We have intuitions about what subjective experience entails, and that usually includes feelings and volition.  But if we see a patient who’s had a prefrontal lobotomy, who is still able to navigate around their world, and respond reflexively and habitually to stimuli, even if they’ve lost the ability to emotionally feel or plan their actions, we’ll still tend to think they’re at least somewhat conscious.

Which brings me to my own personal attitude toward these theories.  I find GWT more grounded and plausible, but as I’ve progressively learned more about the brain, I’ve increasingly come to see most of these theories as fundamentally giving too much credence to the idea of consciousness as some kind of objective force.

Many of these theories seem focused on a concept that is like the old vital force that biologists used to hunt for to explain the animation of life.  Today we know there is no vital force.  Vitalism is false.  There is only organic chemistry in motion.  The only “vital force” is the structural organization of molecular systems and the associated processes, and the extremely complex interactions between them.

I currently suspect that we’re eventually going to come to the same conclusion for consciousness, that our subjective experience arises through the complex interactions of cognitive systems in the brain.  Cognitive neuroscience is making steady progress on identifying and describing these systems.  As with molecular biology, we may find that there’s no one simple theory that explains it all, that we have little choice but to get down to the hard work of understanding all of these interactions.

Still, maybe I’m wrong and these “structured adversarial collaborations” will show compelling results.  As Giulio Tononi mentions in a quote in the article, the tests may well teach us useful things about the brain.

What do you think?  Am I too hasty in dismissing consciousness as some kind of objective force?  If so, why?  Are there things about GWT or IIT that make one more likely than the other, or more likely than other theories such as HOT (Higher Order Theory)?

40 thoughts on “A neuroscience showdown on consciousness?”

    1. Yeah, they once funded intelligent design advocates and organizations, and they’ve historically been a little too eager to mix science and religion. So their motivations are always a bit suspect. On the other hand, a lot of scientists in recent years have taken money from them and claimed that there was no pressure for any particular outcome. But it’s always worth considering the organization’s background when these kinds of things come up.

  1. What do I think? I think GWT and IIT are both mostly right and a little bit wrong. I think they are both correct and compatible in their basic statements, and both are wrong as to where the pertinent structures and mechanisms reside.

    From the IIT 3.0 paper:
    an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space) [emphasis added]

    I believe this “qualia space” is the “global workspace”. And I think both are efficiently described by Chris Eliasmith’s Semantic Pointers. Here is some explication of Eliasmith’s model from this paper:

    In previous work (Stewart, Choo, & Eliasmith, 2010a), we developed a model of action selection that conformed to the anatomy of the basal ganglia, thalamus, and cortex. Groups of neurons in the cortex represent state information, and connections between the cortex and basal ganglia compute the similarity between the current state and the ideal state for each action available to the agent (as in Figure 2). The role of the basal ganglia is to find the maximum of these values, and its output to the thalamus should inhibit all actions except for the one action whose ideal state is closest to the current state. Connections from thalamus to cortex implement actions in two ways. Direct actions involve sending a particular vector to a cortical area […]. Routing actions indicate that information should be sent from one cortical area to another. These are implemented by having a neural group that takes input from the first group and passes it on to the second (both connections computed using Eq. 3 where M is the identity matrix). We then add a group of neurons in the thalamus which inhibit all of the neurons in this middle group (ω_ij = -1), causing this communication channel to do nothing most of the time. The thalamus can now implement the action of passing the information between the cortical areas by inhibiting these inhibitory neurons. [emphasis added]

    The reference to “a particular vector” highlighted above is essentially a vector in IIT’s qualia space, or the global workspace. The “global workspace” is the space through which information from one (or more) area of cortex can be “integrated” and “routed” (as per above) to another area of the cortex, potentially any other area of cortex, i.e., “globally”. Note: according to Eliasmith’s model, control of this workspace is largely managed by (prefrontal?) cortex via basal ganglia, although I would wager that there are mechanisms (managing the four F’s) which can alter/override this control.
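
    Here is a rough, self-contained sketch of the gating idea as I read it, in Python. It’s my own toy illustration rather than Eliasmith’s actual spiking-neuron model; the neuron-level details (and Eq. 3) are abstracted away entirely:

    ```python
    import numpy as np

    # Toy sketch of the action-selection/routing idea in the excerpt (my own
    # illustration, not Eliasmith's model). "Cortex" holds state vectors, a
    # basal-ganglia-like step scores each candidate action by similarity to
    # the current state, and the winning action opens a normally-inhibited
    # channel that routes a vector from one cortical area to another.

    def select_action(state, ideal_states):
        # Similarity of the current state to each action's ideal state; the
        # maximum wins and every other action stays inhibited.
        scores = ideal_states @ state
        return int(np.argmax(scores))

    def route(source, gate_open):
        # Thalamus-like gating: the channel passes the source vector only
        # when its inhibitory neurons are themselves inhibited.
        return source if gate_open else np.zeros_like(source)

    rng = np.random.default_rng(0)
    dim = 16
    current_state = rng.normal(size=dim)      # state held in one cortical area
    ideal_states = rng.normal(size=(3, dim))  # ideal state for three actions
    ideal_states[2] = current_state           # action 2 matches the current state

    winner = select_action(current_state, ideal_states)
    routed = route(current_state, gate_open=(winner == 2))
    print(winner, np.allclose(routed, current_state))   # 2 True
    ```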

    So if I’m right, both groups will not find what they are looking for as long as they’re only looking at the cortex.

    *

    1. I think both groups expect the subcortical structures to play a role, although probably not the role you describe or the one described in the excerpt. I say “probably” here because I have to admit to not understanding exactly what is being described. (Sorry, I’m really short on sleep today, so maybe it’ll make more sense if I read it tomorrow.)

      I can see the argument that the two theories are describing different aspects of the same system. I can also see HOT being worked in. I guess my take on it is, do these aspects amount to what the theories claim about them? Certainly sensory information is integrated, and we have plenty of empirical evidence for working memory (although the global workspace is usually not considered to be the same thing). But as comprehensive theories, they all seem inadequate.

      I still haven’t read Eliasmith’s book. I think the reason I’ve held off is that he isn’t cited by any of the neuroscientists I’ve read (Damasio, Koch, Gazzaniga, Feinberg, Mallatt, Roth, Sapolsky, Panksepp, or others), so he appears to be well outside of the mainstream. (These authors do cite people who disagree with them, such as Bjorn Merker.) The reason might be his focus on architecting something rather than focusing on the actual brain.

      1. Mike, instead of waiting to read Eliasmith’s book, you might want to skim the paper I linked above to get a feel for his model. Eliasmith doesn’t seem too interested in pursuing the “theory of consciousness” angle of his work. For what it’s worth, here is my limited (and possibly mistaken) understanding of Semantic Pointers:

        The model uses simulated neurons, as opposed to standard neural nets. Any given neuron can potentially have multiple firing rates, let’s say from low to high, with some number of intermediate values. It is possible to organize a (large) set of such neurons such that they take input from other neurons and establish a pattern of firing which will persist barring further input. This pattern of firing constitutes a vector in an n-dimensional space (maybe one dimension for each neuron?). So it’s possible to have an input to these neurons which sets up a pattern, say pattern “A”. It’s also possible to “add” an input (using neurons) such that there is a new pattern “A+B”. Likewise, it’s possible to “subtract” patterns, such that, if the current pattern is “A+B”, and the subtraction operation uses “A”, the remaining pattern will be “B”. Thus, a “Semantic Pointer” is such a set of neurons firing in a specific pattern.
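
        A bare-bones way to picture the add/subtract part (my own toy code, with the neurons abstracted away entirely; the actual model represents these vectors in the activity of spiking neurons):

        ```python
        import numpy as np

        # A "pattern" here is just a vector; superposing or removing patterns
        # is plain vector addition and subtraction. Toy illustration only.
        rng = np.random.default_rng(1)
        dim = 64
        A = rng.normal(size=dim)   # pattern "A"
        B = rng.normal(size=dim)   # pattern "B"

        held = A + B               # the neuron group now holds "A+B"
        remainder = held - A       # subtracting "A" leaves "B"

        print(np.allclose(remainder, B))   # True
        ```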

        Note: while I assume it’s possible that this set of neurons could be spread throughout the cortex, I’m going to guess that their functionality for this purpose prefers or requires that they be dedicated for this purpose and that they be located together, and not so spread out.

        Now, to me, this set of neurons exhibits all the requirements of a global workspace as well as the maximally information-integrated qualia space referenced by IIT.

        While Eliasmith doesn’t seem especially interested in the theory of consciousness angle, Paul Thagard is a philosopher at the same institution (University of Waterloo) who has used the model for a theory of consciousness. Peter Hankins (Conscious Entities) blogged about it here, referencing this paper by Thagard & Stewart (Stewart being a neuroscientist who co-authors with Eliasmith, I think). The problem with Thagard’s theory is that, like you point out for IIT and GWT, it’s not so much a comprehensive theory of consciousness as it is a theory for how consciousness works in the human brain. If you look at Table 1 in Thagard’s paper, they explicitly state that consciousness requires neurons.

        Again, as you say, none of these theories are comprehensive. What they lack is an explanation of the fundamental unit of consciousness, i.e., the psychule.

        *
        [too blatant?]

        1. Thanks for the tip. I’ll bookmark that paper for later reference.

          Based on my understanding of the semantic pointer concept, I would tend to think they exist in numerous regions in the brain. For example, the amygdala would have a semantic pointer to information in the brainstem for a particular reaction, which it uses to map to another semantic pointer it has to a memory in the frontal cortex, which itself may contain pointers to sensory information in the parietal and temporal lobes.

          I see I participated in the discussion on Peter’s post. It sounds like I must have read the referenced paper back then, although I’m drawing a complete blank now on its contents. But skimming Peter’s piece, it seems like there may be resonance between the semantic pointer concept and the higher order representations in HOT. In the discussion on Peter’s post, I linked them as a mechanism for Michael Graziano’s attention schema. Or am I misunderstanding how they work?

          Too blatant? Not for me. I prefer clarity. It saves time. 🙂

          1. Mike, I’m not sure you quite have the correct idea of the Semantic Pointer. Per Eliasmith, a Semantic Pointer is analogous to a software pointer. Where a software pointer points to an address in memory, and is treated as a substitute for whatever value is at that address in memory, a Semantic Pointer works as a substitute for a Semantic value, i.e., a concept. So the pointer is not the set of neurons. The pointer is the value (a vector) which those neurons are currently representing. [It may be possible that the analogy is tighter. It might be possible that the pointer can activate the neurons in the cortex that were the input for generating the pointer in the first place. I haven’t studied this stuff enough to say that for sure.]

            I can believe that there may be similar pointer-generating sets of neurons, which I’m going to start calling workspaces, in other places. But having multiple workspaces is completely compatible with multiple conscious agents within the brain. Possibly the more complex ones are associated with Damasio’s selves. But I will wager that there is exactly one which is responsible for the autobiographical self, the one via which arbitrary concepts can be created, and remembered, and reported. Consider the number of concepts that have to be individually available as inputs to this workspace. That’s everything you might be conscious of.

            As for attention, I think it is essentially the gateway into the workspace. According to my currently simplified thinking, there are vast numbers of potential inputs into the workspace. There are almost certainly various mechanisms that suppress most of the inputs so that just a few, or maybe even one at a time, win out.

            And yes, I think HOT theories are similarly compatible.

            *
            [you did get that “blatant” referred to my reference to the psychule, which is my baby, right?]

          2. James,
            I actually did understand that the term “semantic pointer” is meant to be analogous to computer science pointers. However, those pointers are dependent on a particular type of architecture. (There are hardware registers which hold memory addresses for op codes to act on, and a bus architecture to facilitate direct memory access.) But when considering how they might work in a nervous system, I think we have to be cognizant that they would be different, because the architecture is radically different.

            My impression was that Eliasmith’s qualifying the term with “semantic” is a concession to this fact. But I’ll admit I’m jumping to this conclusion having read only second-hand summaries of his stuff.

            I guess the question might be, are the neural firing patterns that ultimately trigger firing patterns in a far away region of the brain effectively a semantic pointer? But I wonder if the brain distinguishes between these types of “data access” and “function calls” where actual action is initiated in those remote regions. It might more effectively be modeled like method calls against remote objects, although how parameters are passed would be an interesting question. (This is really just me thinking out loud, so I’m not considering my language carefully here.)

            On the multiple workspaces, I think it’s possible that only one has access to the language generation centers, although it probably has innumerable connections to the others, that is, it is fed distilled information from the others.

            [yes I did get that 🙂 ]

  2. “While I think this is going to be an interesting process to watch, I doubt it’s going to provide definitive conclusions one way or the other.”

    For me, your second clause (which I completely agree with) invalidates the first. I’m not sure what possible value this can have. It’s just going to be people talking at each other, isn’t it?

    And then, the Templeton Foundation… I mistrust their intentions.

    1. You might be right, although as long as real science is going on, we might learn something. And I hope it’s real science, and not just gimmicky stunts to mix science and religion. But yeah, you never know for sure with Templeton.

    2. I think the innovation here is that the two groups are expected to collaborate to find a test which would differentiate one from the other. If they find such a test, the results should influence one group to concede to the other.

      *

  3. Hi Mike,

    I agree with your take more than with either GWT or IIT. I think most of us are a little confused about what consciousness is, and I don’t think we have any satisfactory objective definition, so trying to find objective criteria for consciousness is doomed to fail.

    My position is that to be conscious is just to be in a functional state akin to that of a typical alert adult human. That doesn’t mean that babies/dogs/aliens/robots can’t be conscious, but it means that the word “conscious” is more or less appropriate depending on how akin the functional states are to those of an alert human. Asking whether a fly is really conscious is to me a bit like asking if a snail can really walk. The answer is “Sort of”, depending on how much you want to stretch the definition.

    I’m a bit more hostile to IIT than GWT. While I suspect that integrated information is indeed a feature of human consciousness (and so a prerequisite of anything I would care to call consciousness at all), it’s not sufficient. It seems to me that it would be possible to build systems which integrate information just as Tononi demands without them being conscious.

    Furthermore, my understanding (which could be wrong) is that IIT emphasises relatively direct physical relationships between the system and the information being organised. So a brain with its many complex interconnected neurons is conscious, but systems with indirect relationships where there are layers of abstraction between the information and the physical substrate are not conscious. My understanding is that Tononi would not deem a complex simulation of a brain on a relatively simple substrate such as a Von Neumann architecture computer to be conscious because the physical substrate would not meet his physical postulates of integration and exclusion. Also, this expectation of a direct relationship between the physical and the mental seems to preclude one physical substrate hosting multiple minds (Simulation Hypothesis scenario) or one mind simulating and hosting another (in a Chinese Room Scenario) or many minds cooperating to simulate one mind (in a China Brain scenario). I would deem all these simulations to be conscious in their own right; Tononi would not. I don’t see how any experiment could ever settle this question. As far as I can see, it’s not open to empirical evidence. That means that both of our positions are unfalsifiable and unscientific, as much as Tononi would hate to admit this.

    GWT, from what I know of it, strikes me as plausible as far as it goes. Again, what it describes strikes me as a likely feature of human information processing, so having such a global workspace is probably a prerequisite for consciousness. But it’s presumably not enough, and I don’t think it begins to address the Hard Problem, so it doesn’t really give much of a grounding to opine on what kinds of entities can be conscious or answer thought experiments like those mentioned above.

    On whether only the back of the brain is needed for consciousness, again I don’t think there’s any clear answer (because there is no satisfactory definition of consciousness), but going by my definition, I’d say no, because you need more than the back of the brain to be in a functional state akin to that of a typical alert adult human.

    1. Hi DM,
      It sounds like we’re mostly on the same page here. I definitely agree that GWT is the more plausible of the two. I’m a little leery of the idea of the workspace being in one physical location though. I tend to think of it more like a farm of computers, each with their own API services that make information available to the other computers in the farm. So the “workspace” ends up being a virtual one. But one where, if any one node, or even a portion of the nodes, is knocked out, the system continues to function.

      I agree completely on IIT. Integration is crucial, but not sufficient. What’s interesting about this is that there are numerous integration regions in the brain. Most of them aren’t part of our conscious experience, although many of them might contribute to it.

      The physical relationship requirement strikes me as wrong too. If the information flows, why should the ultimate physicality make a difference? Koch described it as a causal framework rather than an information one, but that’s using some arbitrary version of “information.” For me, information consists of patterns that are useful due to the causal history that produced them. So saying that a system is causal rather than informational is just insisting that it be a particular type of causality, which is one of the many things about IIT I find arbitrary.

      On the front and back of the brain, I think you’re exactly right, for the conception of consciousness you described, and probably for the most common intuitive sense of subjective experience. Of course, which parts of that experience are crucial is not a fact of the matter, so if someone insists that sensory awareness without volition or most emotional feeling is adequate, then the back meets their particular view of consciousness.

        1. Consider tree rings. What makes them informative? The causal history of how they’re generated, with the tree growing a new outer layer in pulses depending on the seasons.

          Or the words in this comment. They’re informative about my mental state because you can assume I typed them, that in doing so I went through mental states, including language centers, and then the physical process of actually typing them. Without that causal history, the letters and words you’re looking at would be meaningless.

          I don’t know if this conception covers all instances of information, but it seems to cover most of the cases I can think of. I’ve used the term “causalation” before to refer to it when considering information processing.

      1. Mike, you said:

        I tend to think of it more like a farm of computers, each with their own API services that make information available to the other computers in the farm.

        I suggest your concept here of sharing information among separate computers might require digital information, and I don’t think that’s how the brain works. Would love to hear the explanation if I’m wrong.

        *

        1. Hi James,

          As far as I can see it only requires digital information because the computers we are familiar with use digital information. I don’t see any reason why analog computers could not also share analog information. I mean, why not? What’s to stop one analog computer from passing an analog message to another?

          This isn’t much of an explanation, I just don’t see the problem.

          1. [Be warned, the following has not been thoroughly worked out]
            Sending analog information from one machine to the next is not a problem as long as the receiving machine has a dedicated information channel for that information, i.e., a hardwired line, like a neuron.

            So Mike was talking about passing information around a farm of computers, which I understand as a farm of essentially equivalent computers, as opposed to a farm with a central hub and a bunch of special purpose satellite computers. I can see how you can pass a concept from one computer to the next (A -> B) via a set of neural connections (see Semantic Pointers discussion), but the question is how you could pass that same information, that same concept, from B to C. I don’t think you can have a duplicate set of neurons firing in a duplicate pattern. I think to do B -> C you would have to do B->A->C. But this pretty much means you need dedicated wires (neurons) from every computer to every other computer and from every source of concept information to every computer. I think having a lot of such computers would get “expensive” quickly, and I’m not sure what utility you gain.
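
            To put a rough number on the “expensive” intuition (just my own back-of-envelope sketch, nothing from Eliasmith):

            ```python
            # Wiring needed for n equivalent "concept computers": a dedicated
            # channel between every pair versus one channel per node to a hub.
            def point_to_point_channels(n):
                return n * (n - 1) // 2   # every node wired to every other node

            def hub_channels(n):
                return n                  # every node wired only to the hub

            for n in (10, 100, 1000):
                print(n, point_to_point_channels(n), hub_channels(n))
            # 10 -> 45 vs 10;  100 -> 4950 vs 100;  1000 -> 499500 vs 1000
            ```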

            I guess it seems to me that a central hub (workspace) for concept information is going to be the better architecture. Time will tell.

            *

        2. My reaction is very similar to DM’s. I also wouldn’t get too hung up on the details of “API services”. It was an analogy meant to quickly convey a concept. Obviously the detailed way it works in nervous systems would be very different. As a concept, it also seems compatible with the semantic pointer concept we’re discussing. These could effectively be what the semantic pointers point to.

  4. I’m always wary when I hear Templeton Foundation. On the one hand, I do celebrate the fact that they are funding research into areas where it’s impossible to find ordinary grants, but their ulterior motive (as stated by John Templeton) inspires distrust.

    1. I know what you mean. It does seem like the foundation, after John Templeton died, moderated its stance on things and became more careful about what it funded and how it funded it. For example, building guaranteed publication into the process is a nice step. You always have to wonder how they’ll react if the results of a study are not spiritually affirming.

      1. Have you ever read through the grants they’ve awarded? I have. I approached them a few years ago for research funds to continue *study* into The Owner of All Infernal Names. I was buoyed by the sheer number of purely religious-philosophy papers. Of course, TOOAIN is a joke, a parody, but I never make that public and submitted a mountain of data coming from the first book justifying the areas I wanted to *study* further. Naturally, I was following their stated goal of exploring “spiritual realities.” On paper it’s hard to see how they could possibly turn such a proposal down. As expected, an Omnimalevolent Creator is not exactly the “spiritual realities” they want to throw money at. Odd, I had a similar response from The Discovery Institute 😉

  5. “The issue is that, as some of the scientists quoted in the article note, GWT and IIT make different fundamental assumptions about what consciousness is. Given those different assumptions, I suspect there will be empirical support for both theories. (They are both supposed to be based on empirical observations in the first place.)”

    Yeah, right. If science can’t agree on a definition, is “consciousness” even a useful scientific term?

    1. I’ve gradually come to the conclusion that it isn’t. Being scientific requires that we qualify what we’re talking about, such as “sensory consciousness”, “motor consciousness”, “wakefulness”, “self consciousness”, etc. But used by itself, it’s about as precise as terms like “beauty”, “love”, “art”, or “virtuous”.

      1. Qualify or quantify?

        I’m not sure I see how adding “sensory” in front of it changes the picture. We are dealing with something that we seem to feel or know intuitively but it is subjective. For scientific purposes we can ask humans for verbal accounts and deal with the unreliability of that or devise behavioral measures. That leaves us more limited for non-humans.

        I’ve long realized that the definition of consciousness is an issue and source of confusion in many discussions. But until we can define “consciousness” in a measurable way, the bigger questions might need to stay in the realm of metaphysics. We can still do plenty of neuroscience using the things we can measure in the meantime.

        1. I actually meant “qualify”, as in being specific or precise. IIT is known for its mathematical nature and its attempt to quantify the degree of integration. But the value of what it’s measuring is the question. It might be useful for determining whether a particular human is currently conscious, but that doesn’t mean the same quantities in other systems are meaningful.

          Totally agreed that most of neuroscience is making plenty of progress without concerning itself with consciousness per se.

  6. @ James Cross,
    “If science can’t agree on a definition, is “consciousness” even a useful scientific term?”

    I concur counselor…. We may all be mesmerized by our own first person phenomenal “experience” of consciousness, but labeling that “experience” as consciousness, and then grounding that experience as the reference point from which to understand that experience, is misdirected. First and foremost, consciousness needs to be understood according to its grounding underlying form. Unbiased metaphysics is the only field of exploration that is capable of making such a discovery. Unbiased means scrapping every model that we inherited from the Greeks, beginning with subject/object metaphysics. These are the models of our current paradigm: Idealism is nothing more than a religion, the science of materialism is also faith based and hopelessly lost, substance dualism is a sect of idealism, and property dualism is a branch of science.

    The only viable option on the table for a metaphysical definition of consciousness that can move us forward lies in a revolutionary “revision” of Kant’s transcendental idealism, a revision that results in an architecture that is freed from the built-in paradox of the ontological primitive, the “thing-in-itself”, being “unknowable”.

    1. “Objective force” admittedly isn’t a very precise term. It’s meant to cover a putative fundamental force, but also a composite one. It generally refers to the belief that the label “consciousness” refers to some objective corporeal thing, such as a field of some type, or a form of ghostly ectoplasm.

  7. “I currently suspect that we’re eventually going to come to the same conclusion for consciousness, that our subjective experience arises through the complex interactions of cognitive systems in the brain.”

    You are still thinking that consciousness originates in the brain. What if the soul actually exists, only the soul has the consciousness property, and it controls our brain? That is, the most important problem is: which one is the cause and which one is the effect? Is the brain the cause, or is consciousness the cause?

    If we are all guided by destiny, which can be precisely predicted and has been predicted millions of times all over the world, with the predictions all documented, then we are like robots, and our destiny is the controller. Destiny is created by the simultaneous action and reaction of all souls of all objects in the universe.

    Any corporation is an example of destiny. There I know exactly what I will do tomorrow morning when I go in for my work. This is so because we have jointly made a project plan and a detailed schedule. What if this corporation is a miniature version of the global destiny of the universe? Take a look at
    https://www.academia.edu/38590496/A_COMPARISON_OF_MODERN_SCIENCE_WITH_VEDIC_SCIENCE

    1. Hi there,
      Thanks for commenting! I can’t say I’m too familiar with Vedic conceptions of consciousness. It sounds like a variant of substance dualism, similar to the view from most religious traditions.

      My view is pretty relentlessly monist and functionalist. I actually don’t think consciousness “originates” in the brain as though it’s something that floats above or around it, but that it’s what the brain does. We can say the soul exists, but only in terms of an information processing system, not a separate substance.
