Back in March, I did a post on a proposed Templeton Foundation project to test major scientific theories of consciousness. The idea was to start with a head-to-head competition between integrated information theory (IIT) and global workspace theory (GWT). Apparently that project got funded and, according to a Science Magazine article, there are now active plans to move forward with it.
The first two contenders are the global workspace theory (GWT), championed by Stanislas Dehaene of the Collège de France in Paris, and the integrated information theory (IIT), proposed by Giulio Tononi of the University of Wisconsin in Madison. The GWT says the brain’s prefrontal cortex, which controls higher order cognitive processes like decision-making, acts as a central computer that collects and prioritizes information from sensory input. It then broadcasts the information to other parts of the brain that carry out tasks. Dehaene thinks this selection process is what we perceive as consciousness. By contrast, the IIT proposes that consciousness arises from the interconnectedness of brain networks. The more neurons interact with one another, the more a being feels conscious—even without sensory input. IIT proponents suspect this process occurs in the back of the brain, where neurons connect in a gridlike structure.
To test the schemes, six labs will run experiments with a total of more than 500 participants, costing the foundation $5 million. The labs, in the United States, Germany, the United Kingdom, and China, will use three techniques to record brain activity as volunteers perform consciousness-related tasks: functional magnetic resonance imaging, electroencephalography, and electrocorticography (a form of EEG done during brain surgery, in which electrodes are placed directly on the brain). In one experiment, researchers will measure the brain’s response when a person becomes aware of an image. The GWT predicts the front of the brain will suddenly become active, whereas the IIT says the back of the brain will be consistently active.
Tononi and Dehaene have agreed to parameters for the experiments and have registered their predictions. To avoid conflicts of interest, the scientists will neither collect nor interpret the data. If the results appear to disprove one theory, each has agreed to admit he was wrong—at least to some extent.
The whole thing has a bit of a publicity stunt feel to it. As I noted back in March, these two theories make differing philosophical assumptions about what consciousness fundamentally is, and the authors of both used the empirical data available at the time when formulating their theories. So I’m not expecting the results to be overwhelmingly conclusive. (Although it’d be good to be proven wrong on this.)
What might be interesting is the front of the brain vs back of the brain thing. I’ve noted this debate before. Some scientists, notably people like Tononi and Christof Koch, see consciousness as concentrated in the back part of the brain, in the sensory processing regions including the temporal and parietal lobes. Others, such as Dehaene, Joseph LeDoux, and Hakwan Lau, think we don’t become conscious of something until it reaches the prefrontal cortex.
This also bears on the distinction between first order and higher order theories, that is, between theories that hold that the representations and processing in sensory regions are themselves conscious, and theories that hold that further “higher order” processing in the prefrontal cortex is necessary for us to be conscious of them.
Part of the difficulty is that scientists depend on subjects’ self-report to know when those subjects are conscious of something. However, self-report requires the frontal lobes. There are protocols to minimize the confounding role of self-report, such as comparing brain scans of people who see something and report being conscious of it with scans of people who see the same thing without the requirement to report it. But a first order advocate can always insist that any remaining frontal activations are superfluous, that all that’s needed for actual consciousness is the posterior activity.
My own money is on the frontal regions being important, perhaps crucial. But this is complicated. It’s possible for sensory information from the back part of the brain to trigger sub-cortical activity, such as habitual or reflexive action, without frontal lobe involvement. It’s even possible to remember what happened during that behavior and consciously retrieve it later, giving us the impression we were conscious of the event during the event, even if we weren’t.
But if we insist that consciousness must include emotional feelings, then I think the frontal lobes become unavoidable. The survival circuit activations that drive these feelings happen in subcortical regions in the front part of the brain, which have excitatory connections to the prefrontal cortex. Of course, you could insist that the felt emotions lie in those subcortical circuits rather than the cortex, but severing the connections between those circuits and the prefrontal cortex (like what reportedly used to happen with lobotomies) typically results in deadened emotions.
And all of this is aside from the fact that the introspection machinery is in the very front part of the prefrontal cortex (the frontal poles). Are we conscious of it if we can’t introspect it?
As I said, complicated. A lot of this will depend on the assumptions and definitions the experimenters are using.
Still, I’m curious about exactly what they plan to do to test the back versus front paradigms. If they do figure out a way to conclusively isolate conscious perception to one or the other, it might answer a lot of questions. And if they do plan to eventually move on to testing theories like local recurrent processing or higher order thought theories, this work might provide a head start.
What do you think? Am I being too pessimistic on whether these experiments will validate or falsify IIT or GWT? Or are these theories all hopelessly underdetermined, and we’ll still be arguing over them months after the experimental results are published?
51 thoughts on “A competition between integration and workspace”
How have they defined consciousness? Slime mold (Physarum polycephalum) can remember beneficial behaviour for up to a year. Couldn’t we call that some kind of rudimentary consciousness?
Good question. IIT and GWT are as much different definitions of consciousness as they are theories about it. I haven’t read anything to indicate slime mold would qualify under either. I can’t see that it does much integration of information, or has anything like a global workspace.
But the slime mold question is why I usually talk about a hierarchy:
1. Reflexes, automatic responses to stimuli. (Including reflexive learning.)
2. Sensory representations of the environment.
3. Prediction of immediate cause and effect enabling goal directed behavior.
4. Prediction of alternate and novel action-sensory scenarios.
5. Introspection. Predictions of 1-4.
Would you see slime mold as making it past 1?
I think you’re spot on in relation to hierarchy. To me it seems an emergent phenomenon.
Regarding definition, how can they possibly agree on something if they don’t even share the same definition of what it is they’re looking for?
That is the difficulty. The article notes that there are a lot of neuroscientists who don’t even think it makes sense to compare IIT and GWT, since they see IIT as more philosophy than science. Of course, the IIT advocates, who are themselves neuroscientists, disagree. On the other hand, GWT seems incomplete to me.
But I doubt any one of these theories, by itself, will explain consciousness. It will require a number of theories. We should be most skeptical when theory proponents say, “consciousness is (whatever mechanism they’re describing)”, as opposed to something like, “(mechanism) is correlated with phenomenal aspect X of consciousness.”
First, I don’t see how anything in this competition will come close to answering how perception and experience come about.
That said, I think there is a problem distinguishing between consciousness and the contents of consciousness that I believe relates back to the paper I’ve discussed and linked to on my blog recently. Since the frontal regions are a big part of the human brain, they will of course be very active in human consciousness. It would be almost impossible to imagine that a conscious human brain would need to shut down a major part of itself to be conscious. But are those parts of the brain directly required for or responsible for consciousness, or are they simply where the contents of consciousness are being processed in humans?
While I don’t like IIT for a variety of reasons, I think its emphasis on circuits connecting various regions is more likely on track, with deeper brain regions, like the thalamus, critical to coordinating brain activity.
On the distinction between the contents of consciousness and consciousness, it seems like there are multiple concepts here.
1. The sites where the sensory contents are generated. In terms of at least first order sensory representations, I don’t think anyone disputes that the back parts of the brain are involved.
2. The sites where felt emotions happen. Everyone seems to agree that subcortical structures initiate the impulses that are eventually felt as emotion, but where the feeling itself happens seems to be disputed.
3. The sites where the contents, both sensory and emotional, are consumed, that is, utilized for planning and decision making. The frontal lobes seem heavily involved in this, although simpler habitual or reflexive actions seem to be handled by subcortical circuits.
4. The regions that control the level of consciousness, the level of arousal. This seems to rise up through and from the reticular formation to the basal ganglia and thalamus and into the cortex. It’s more of a pace setting system.
Do any of these capture what you mean by consciousness as distinct from contents? If not, then how would you describe it?
On the thalamo-cortical loop theory in that paper, I think there’s room for the grounded version to be true at a different level of organization from the other theories.
I’m uneasy about IIT for the same reason I’m uneasy about the more inflated conception of the thalamo-cortical loop theory presented in some of the commentary. They seem to be looking for the ghost in the machine, an enterprise I see as fruitless.
Consciousness as distinct from contents – what I am getting at here is that a bat, a dolphin, a dog, a cat, and a human can all be conscious, but the contents of their consciousness would be different. The bat and dolphin world would be much more auditory, the cat and dog world more olfactory, the human world more visual. Yet there is something in common that makes them conscious. You might dispute the bat, but I added it to the list because it goes back to the most basic definition of consciousness, something it is like to be something: Nagel’s bat.
It seems to me there is something apart from the contents of consciousness required for consciousness but that probably consciousness cannot exist without content, although some Eastern philosophies have a concept of a pure consciousness seemingly without object. Evan Thompson has some speculations regarding whether there is some element of consciousness in deep sleep.
If the original role of consciousness was integrating senses and actions, we know all of that passes through some evolutionarily old layers of the brain. It would make sense that the older layers would maintain a critical role as new structures evolved to refine and improve sensory and decision processing.
The issue, I think, with the “what it’s like” concept is that it’s vague. Many people do take it to be a kind of primal, foundational, ineffable essence that exists in addition to all the content and dispositional stuff. My take is that it refers to the overall package, which only comes into being with most or all of that stuff present, but I might feel differently if I had your intuition of something in addition to the other stuff being necessary.
On old layers of the brain, be careful not to fall into the triune brain model, which is outdated. In its most primitive incarnations, vertebrates have hindbrains, midbrains, and forebrains (including a diencephalon (thalamus) and telencephalon with a pallium, the precursor to the cortex). You have to go back to pre-vertebrate cephalochordates, which are pretty primitive, to get before that hindbrain / midbrain / forebrain model.
I was thinking more of the thalamus, through which in humans all sensory input passes except some olfactory input. It appeared in some form a couple hundred million years ago.
Ah, ok. I think I see what you’re getting at. The thalamus existed prior to mammals, but the sensory distribution role seems to have come in with the mammals (and later the birds). It represented a shift in sensory processing from the midbrain to the forebrain, and coincided with a tenfold increase in brain size relative to body size.
The fact that sensory information, other than olfaction, doesn’t make it to fish and amphibian forebrains, which seem to have always been where instrumental learning takes place, makes it seem like whatever consciousness they might possess is utterly strange and alien by our standards.
Yes but I’ll take your word on fish and amphibians.
I was basing my thoughts more off the paper I’ve been discussing as well as some reading on the thalamus that the paper prompted me to read. The fact that almost all sensory input goes through the thalamus must have a good explanation. If consciousness is for integrating sensory information with actions, then the fact sensory information passes through it would seem to put it in the critical path to be involved in some way.
The fact that it is also closely tied to the reticular formation, which controls sleep and alertness, seems to tie it back to some general capability for consciousness independent of content, although how that actually works is hazy to me at this point.
I think the role of the thalamus is mostly like a network switch, a relay hub of communication between various cortical and subcortical regions. I say “mostly” because this is biology and nothing is ever absolute. There are indications that it may be involved in some signal modification. Evolutionarily, the diencephalon (thalamus and hypothalamus) has always had a similar role in relation to the telencephalon (the rest of the forebrain).
The close relationship between the thalamus and reticular formation was called into question by a study last year, which identified the basal forebrain and hypothalamus as the actual arousal pathway.
The thalamus is still thought to be involved in attention and awareness.
James, I haven’t finished reading that paper yet, (I read the abstract), but I just wanted to point something out. You noted the “fact that almost all sensory input goes through the thalamus must have a good explanation” and that this fact “would seem to put [the thalamus] in the critical path to be involved in some way.” It may have been that, evolutionarily, sensory signals went to the thalamic-type midbrain for processing, but when the forebrain came into being, those signals got passed to it. However, those L5p cells mentioned in the paper are (I think) involved in sending signals back to the thalamus after cortical processing. I’m guessing it’s these signals that bear the content of consciousness.
By my reading, you are correct that it is not a one-way circuit through the thalamus to the cortex. It seems stuff goes back and forth, with the L5p neurons being the exchange point, according to the theory.
The theory is trying to reconcile thalamo-cortical theories with cortico-cortical theories by identifying a common exchange point, so I don’t think it would be in conflict with GWT.
Caption from a diagram in paper.
“Cortical layer 5 pyramidal (L5p) neurons (the black-colored neurons on the image) play a central role in both cortico-cortical and thalamo-cortical loops. By being central to both loops, they effectively couple them, functionally coupling the state and contents of consciousness.”
“It may have been that, evolutionarily, sensory signals went to the thalamic-type midbrain for processing, but when the forebrain came into being, those signals got passed to it.”
A couple of, perhaps pedantic (sorry), points.
1. The thalamus and midbrain are different structures. The thalamus sits above the midbrain. Their differentiation appears to go back to early vertebrates.
2. The forebrain existed before (non-olfactory) sensory signals were sent to it, at least in the raw topologically mapped form they are in mammals.
Hilarious! I can’t define it and I can’t discriminate it from anything else I might discover but, given sufficient funding, I can test for the existence of my theory about it.
How do we get in on this kind of easy money? I have a plan to experimentally test for fraggibertiousness which, as you know, is either a “selection process” or some unspecified level of “interconnectivity” among fraggiberteons although it’s conceivable that I’ll only locate a single fraggibert or the contents of a cluster of fraggiberts.
We’ll all feel a lot better afterwards …
How dare you imply that fraggibertiousness doesn’t exist! Isn’t it the most obvious thing ever? You must be making a desperate lunge to explain your inability to account for fraggibertiousness. Be brave Stephen, and embrace the mystery of fraggibertiousness!
Meh. Wake me when someone comes up with some hard data that actually means something. I’ve gotten thoroughly weary of all the hand-waving and hypothesizing.
But Wyrd, what will we do with ourselves with no hand-waving or hypothesizing?
Pretty sure we need some hand waving in baseball. 🙂
[imagining someone stealing second without pumping those hands]
And we need to give signs, so clearly not all hand-waving is bad! (We certainly want to wave hello and goodbye.)
I think the difference has to do with hand-waving while hypothesizing (HWWH)… it can be like being in front of a hot air fan… 😉
Touche* Batter up!
Pancake or waffle? That’s a debate we can sink our teeth into!
You’re so right.
BTW, these are not “scientific theories of consciousness” … they neither define consciousness nor hypothesize the neuroanatomy/biology by which it’s produced, nor do they propose any experiment to confirm the location, existence and operation of that precise neuroanatomy/biology.
These two theories essentially say “neural activity happens here and/or there and, Shazam!, consciousness!”
IIT and GWT (and the rest, like panpsychism, neutral monism and etc.) are philosophical theories regardless of the specialties and academic degrees of those who support them. As far as I can tell, there are currently no scientific theories of consciousness at all.
What would you consider the minimum and sufficient qualities for a theory of consciousness to be scientific?
“What would you consider the minimum and sufficient qualities for a theory of consciousness to be scientific?”
Certainly the first requirement would be a definition of consciousness in biological terms. All known and confidently inferred instances of consciousness are biological. Biology seems to be largely ignored in all of the philosophical theories where, instead, we have ‘workspaces’ and ‘information’ and ‘representations’ and the like, all of which are abstract logical conceptions. Philosophical theories have no apparent evolutionary proposals for those ‘logical’ entities either. One might say that “a global workspace that contributes to survival to reproductive age” satisfies the evolutionary requirement but it doesn’t.
I find the suggestion that physics is somehow directly implicated in consciousness to be baffling. I have no idea where that idea comes from—we don’t look for our understanding of digestion or respiration in particles and waves. Perhaps some think that quantum behaviors are mysterious and consciousness is mysterious so the two are connected, but that’s patently shaky logic.
Following a credible biological definition, like Damasio’s for instance, a scientific hypothesis would specify the neuroanatomy, i.e. brain structures, and the neurobiology, i.e., tissues and biochemistry that give rise to consciousness. As an example, a hypothesis incorporating Damasio’s “consciousness is a feeling” definition might locate a feeling of touch in the brainstem’s reticular formation, specifically in such-and-such a type of neural cluster and propose that a specific touch feeling can be rendered inoperative via some precisely identified molecular level intervention. That, I believe, would constitute a scientific hypothesis.
It seems obvious from that example that we are very far from such a hypothesis because our investigatory tools are crude. I suspect nanotechnological probes will be necessary. Additionally, we’re likely looking in all the wrong places, being primarily misled by philosophical theories like IIT and GWT that are committed without any evidence to a faith-based belief in cortical consciousness.
Stephen, your criteria for a scientific theory sound suspiciously like, “Must conform to Stephen’s preconceived notions of consciousness.”
Personally, all I require of a theory to be scientific is that it be grounded in empirical observation and make testable predictions, or at least lay the groundwork for such predictions. Such theories may not answer the consciousness question once and for all. They may only move the epistemic needle forward a bit. But I personally doubt there will ever be one theory that settles the matter. Like the old vitalism of biology, I think the concept will eventually be replaced by a galaxy of concepts, each with their own theories.
In that spirit, while I do have issues with IIT (it seems to have a lot of philosophical axioms and postulates), I don’t see the problems you do with global workspace, higher order thought, or other relatively grounded theories.
I will note that studying consciousness seems to inevitably require taking a philosophical position, and those who don’t take the same position won’t accept the scientific findings. It’s why I think consciousness is in the eye of the beholder.
Aha* I have learned to WAIT upon your commentary, after the first of your entertaining (less instructive) forays (however well exercised) into Consciousness, as it ever appears your conclusions resound with a similar tone, ie. “It’s why I think consciousness is in the eye of the beholder.” Indeed…This is Philosophy, NOT Science; though I’m neither sorry to say so, nor displeased. Thanks.
I agree. I made the same point.
If you stay within materialistic confines, I would like to see something at the level of physics to start with, but then you could use biology, chemistry, etc to show how it happens in living organisms.
One approach would be actual new particles or waves or some assemblage of existing particles and waves. I’m dubious that will work out but I don’t think it can be ruled out. If there are more dimensions than the 3+1, the particle/waves could exist in an extra dimension(s) which would explain why consciousness doesn’t seem to have an actual location and seems without mass.
A different approach might derive consciousness from some sort of Universal Darwinism that could involve some fundamental theory about how representations of the world arise in entities and lead to actions that improve survival. Certainly there are aspects of consciousness that are similar to recording. Memory obviously comes to mind. I tried to argue in one of my posts that consciousness itself may be much like memory except working on millisecond time frames. Certainly some of the views of the brain/consciousness as a prediction engine could weave into this sort of theory.
If there is no physical theory, I can’t see how consciousness could be instantiated in something other than wet brains. Just saying it magically appears after enough computing power gets running isn’t convincing to me. But it might be that it only can happen in biological organisms.
Well, starting with David Deutsch’s Constructor Theory, you can say that everything that happens in the universe can be expressed as Input—>Mechanism(or system, or physical context)—>Output. This is also compatible with Relational Quantum Mechanics.
To start on this path you need to explain how you get (teleonomic) (archeo)purpose/meaning. Carlo Rovelli talks about this in his essay (https://fqxi.org/data/essay-contest-files/Rovelli_Meaning.pdf). To put it into the Input—>[Mech.]—>Output paradigm, Natural Selection leads to mechanisms which have a “mutual information” relation to an Input. This is an information theory concept which essentially says there is some correlation between the form of the mech. and the form of the input. Another way to say this might be that the mechanism exists because it responds in a particular way to the input.
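As a side illustration (mine, not Rovelli’s), that “mutual information” relation can be made concrete with a toy calculation: a mechanism whose responses reliably track its inputs shares bits of information with them, while one that responds at random shares none. The food/toxin example here is purely illustrative.

```python
from collections import Counter
from math import log2

def mutual_information(samples):
    """Estimate I(X;Y) in bits from a list of (input, response) samples."""
    n = len(samples)
    pxy = Counter(samples)                    # joint counts
    px = Counter(x for x, _ in samples)       # input marginal
    py = Counter(y for _, y in samples)       # response marginal
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A mechanism shaped by selection responds differently to different inputs:
correlated = [("food", "approach"), ("toxin", "avoid")] * 50
print(mutual_information(correlated))    # 1.0 bit: response fully tracks input

# A mechanism with no relation to its input shares no information with it:
uncorrelated = [("food", "approach"), ("food", "avoid"),
                ("toxin", "approach"), ("toxin", "avoid")] * 25
print(mutual_information(uncorrelated))  # 0.0 bits
```

In this toy case the selected mechanism carries one full bit of information about its input, which is one way to cash out “the mechanism exists because it responds in a particular way to the input.”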
This is enough to get you to Mike’s first level of consciousness: reflex, but it’s not enough to get you to representation, yet. You start on the path to representation when you need to have the place and time of the input separated from the place and time of the output. One way to do this is to have two mechanisms in series. The first mechanism takes the input and generates a messenger as output. The (teleonomic) purpose of this messenger is to represent the input event. The messenger then can move across some distance and become the input for a second mechanism which then produces the desired output. Note: this system is still at the level of reflex.
The next level involves combining two such systems, input1–>mech1–>output1 and input2–>mech2–>output2, where outputs 1 and 2 are respective messengers and combine as inputs to a third mechanism which creates output3, which acts as a messenger of the combination. I think this is the level which Mike would call perception, but I’m not sure it is enough “representation” for most of those referring to consciousness. This is the level of the frog shooting its tongue at a dark moving spot.
So the next level is where the output of mech3 is not just a messenger but instead is a new mechanism, mech4, such that the output of mech4 is a messenger (output4) which “means” the same concept as the output3 messenger, i.e., the concept of “input1 + input2”. Mech4 could have multiple possible inputs, but the output is always output4. This is the level at which you can have memory as well as imagination. Any input that triggers mech4 produces the messenger that “means” the concept.
So there you go, and no wet brains required.
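The levels described above can be put into a toy program. This is only a sketch of the scheme as I read it (mechanisms as functions, messengers as tagged values); all the names are illustrative, not any published model.

```python
# Toy sketch of the input -> mechanism -> output levels described above.

def mech1(stimulus):
    """Serial-mechanism level: convert a raw input into a messenger
    whose (teleonomic) purpose is to represent that input event."""
    return ("msg", stimulus)

def mech2(stimulus):
    return ("msg", stimulus)

def mech3(msg_a, msg_b):
    """Combine two messengers into a messenger for the combined event
    (the level of the frog striking at a dark moving spot)."""
    return ("msg", (msg_a[1], msg_b[1]))

def make_mech4(concept):
    """Highest level: build a mechanism whose output always 'means' the
    stored concept, whatever trigger activates it (memory, imagination)."""
    def mech4(trigger):
        return ("msg", concept)
    return mech4

# Chain the levels: two inputs -> messengers -> combined messenger.
combined = mech3(mech1("dark spot"), mech2("movement"))

# mech4 re-evokes the same concept from any later trigger.
recall = make_mech4(combined[1])
print(recall("partial cue") == recall("different cue"))  # True
```

The key move is the last one: mech4 decouples the concept from any particular input, so different cues all produce the messenger that “means” the stored combination.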
I think your Universal Darwinism suggestion is at the core of Ruth Millikan’s biosemantics. That’s not a theory of (all of) consciousness, but a theory of an important subdomain of consciousness: representations. I think she’s on the right track.
Not familiar with Millikan but am taking a look. From what I gather, it does seem to be in the vicinity of my hazy and incomplete ideas.
Mike, I’m responding to your “Stephen, your criteria for a …” comment at root level for its readable column width. And—advance notice—I’m expressing some frustration in this comment. It’s not emotional. It’s an intellectual frustration. I’ve numbered the paragraphs for your easy reference should you reply.
1. Are my “preconceived notions” of consciousness unscientific? Is your use of ‘notions’ referring to my considered analysis of evidence or to what you believe is a groundless leap from unfounded assumptions to insupportable conclusions?
2. Your two requirements for a theory to be scientific are: a) grounded in empirical observation and b) make (or support the making of) testable predictions. Once again, Mike, you don’t include a definition of consciousness as a prerequisite. You may not realize how frequently you dismiss the very idea of a definition with remarks like “everybody has their own definition” and “consciousness is in the eye of the beholder.” How can any conversation, let alone a scientifically grounded one, proceed meaningfully without the parties to the conversation having a common understanding of the terminology being used? Please explain for all of us and—thanks in advance Mike!
3. And please identify what specifically is “grounded in empirical observation” about GWT and IIT. Both propose configurations of neural activity in cortical tissue that, through either the integration of information or global workspacery—Shazam!—produce an undefined result called ‘consciousness’, excuse me, called whatever-it-is-ness. Regarding your “testable predictions” of both theories, how will it be determined that any follow-on cortical neural activity, even if observed as predicted, is not the (unspecified) neural activity associated with the resolution of the contents of consciousness as opposed to the (also unspecified) neural activity that might be consciousness itself? Uh, excuse me again—that’s the contents of whatever-it-is-ness as opposed to whatever-it-is-ness.
4. If you recall, at your request, I provided a thorough description/definition of consciousness as ‘feeling’, progressing from rudimentary bodily feelings (by example) through feelings that are less obviously physical, like thought, that comprise our extended consciousness. I explained that the entire spectrum of those feelings are rooted in embodiment. I offered the BRASH proposal which is fully conformant with actual experimental and observational evidence and is as scientific as currently possible while we await further research. Yet you completely ignored the definitions, explanation and the entirety of the BRASH proposal without comment. You contributed neither well-reasoned objections nor falsifying evidence. Additionally, you have never objected at all, let alone with meaningful specificity, to Damasio’s definition of consciousness as a feeling composed of multiple sub-tracks of feelings corresponding to sensory inputs. You clearly disagree both with me and Damasio but never identify any evidentiary contradiction or illogic. Why is that?
5. At the same time, you confidently promote a hierarchy of whatever-it-is-ness that lacks any specification for what qualifies an item to be included, nor do you provide any information about the levels in the hierarchy—what does it mean to be first, second, or fifth? You include biological functionality that is irrelevant to consciousness, or whatever-it-is-ness, like ‘reflex’. You place repeated importance on ‘prediction’ as if cortical functionality produced predictions. But to ‘predict’ is to say what will happen in the future. In contrast, to ‘expect’ is to resolve what is happening now. Cortical functionality resolves ‘expectations’—it compares an accumulating incoming sensory story with remembered stories and promotes the likely expectation to consciousness, based on the remembered outcomes of matching stories. (All of that cortical processing, of course, is influenced by neurochemicals and inputs from other brain subsystems.)
6. And why do you claim there must be multiple explanations of whatever-it-is-ness? Are there three distinct explanations of digestion, two of respiration or five explanations of electromagnetism? And “… a galaxy of concepts, each with their own theories”?
Your “studying consciousness seems to inevitably require taking a philosophical position” reminds me of your refusal to distinguish physics (QM) from philosophy of physics (QM Interpretations). Same scarecrow, different clothes. Philosophy of consciousness is a morass, as Hacker observes. Your perspective appears to resolve to “There is a something-it-is-like-ness to a whatever-it-is-ness.”
1. My point was that your criteria are biased toward your preferred theory. You’re accusing every theory you happen to disagree with of being unscientific, apparently because you disagree with them.
2. There isn’t a consensus on a definition of consciousness. (Other than vague synonyms like “subjective experience” or “what it’s like”.) You can argue for your preferred definition, just like everyone else. There are more and less productive definitions, but the fact remains, we disagree on what aspects of our experience are necessary and sufficient for the label “consciousness”. I explained this more fully in a blog post: https://selfawarepatterns.com/2019/01/27/consciousness-lies-in-the-eye-of-the-beholder/
As you noted, the philosophy of consciousness is a morass.
A case can be made that we should regard the various definitions as theories in themselves, and test them scientifically, at least to the extent they are testable.
3. You’re ignoring what I said about IIT. You have a tendency to do that: ignore what your conversation partner actually says, often in an obviously intentional manner. I’ll tell you now, this actually cuts short many of our conversations.
On GWT: https://selfawarepatterns.com/2019/06/23/dehaenes-global-neuronal-workspace-theory/
4. I actually do disagree with Damasio on some things, but having read him at length, I perceive you also disagree with him on certain things, some of which you yourself have mentioned, such as whether the cortex is conscious. On your theory, I did note that I had issues with you overloading the term “feeling” to mean everything mental.
Here’s something I came to understand that everyone should learn: no one is as much into our ideas as we ourselves are. Very few people catch all the nuance and detail I put in my posts and comments. It’s just a fact of life I learned to accept a long time ago. My advice: break your ideas into smaller chunks and present them that way. You’re much more likely to get the feedback you want. That, and actually be open to feedback.
5. On the hierarchies, you’re applying your own definitions again. The whole point of the hierarchy is to relate multiple definitions to each other. And yes, some people do define “consciousness” as any interaction with the environment, which includes reflexes.
I can’t see that a debate about “predict” vs “expect” is going to be productive.
6. From what I can see, what we commonly call “consciousness” is a collection of capabilities, many of which get mentioned in the hierarchy: perception, attention, imagination, introspection, etc. I suspect each of those will need their own particular theories. Indeed, each of those items can themselves be broken into different capabilities which will probably each need a theory. Vitalism got replaced by the complex intersection between molecular biology and organic chemistry. I see something similar happening with consciousness, although more in terms of neural circuitry.
And THIS is how PhDs are born… “which will probably EACH need a theory.” I have to wonder if Philosophy will EVER be separated from Science on the subject of Consciousness, and FEEL like a good prediction may conclude, from the evidence: Never. Highly entertaining, guys. Thanks.
Mike, sorry for the delay in responding but life often unexpectedly intervenes. I’ve broken my response into six individual comments to assist in following and clearly understanding each. Please don’t rush replies … better a thoughtful and well-considered one-a-day or so. We’ve got all the time in the world, don’t we? … 😉
1. “You’re accusing every theory you happen to disagree with of being unscientific…”
My “preconceived notions,” as you (somewhat dismissively, again) characterize them, are derived from repeatedly confirmed experimental evidence from many sources, such as this evidence (from Merker) which directly supports elements of BRASH:
Penfield and Jasper, as reported in 1954(!), “… routinely removed sizeable sectors of cortex in conscious patients for the control of intractable epilepsy. By performing the surgery under local anesthesia only, the authors ensured that their patients remained conscious, cooperative, and capable of self-report throughout the operation. … They then proceeded to remove cortical tissue while continuing to communicate with the patient. They were impressed by the fact that the removal of sizeable sectors of cortex … never interrupted the patient’s continuity of consciousness even while the tissue was being surgically removed. Penfield and Jasper note that a cortical removal even as radical as hemispherectomy does not deprive a patient of consciousness, but rather of certain forms of information, discriminative capacities, or abilities, but not of consciousness itself.”
That’s clear evidence that cortical operations resolve the contents of consciousness but do not produce consciousness. What is GWT’s and IIT’s scientific explanation for Penfield and Jasper’s findings? I can find no indication that either theory accounts for them in any way. I don’t “happen to disagree with” those theories—I find them grounded in evidence-free suppositions and thus insupportable.
The only other interpretation of the Penfield-Jasper results that I can imagine is one supporting the cortical production and ‘display’ of many different kinds of consciousness, a multiple-consciousnesses theory you appear to believe in that’s weak in several respects. See Item 6 for details.
2. Regardless of the raft of meaningless-to-imprecise definitions of consciousness that abound, it must be the case that the proponents of any theory of consciousness are responsible for defining the term as they intend to use it before proceeding with the details of their hypothesis. That’s where panpsychism, IIT, GWT and the others fail. How are we to even understand a hypothesis that fails to clearly define its core terminology, Mike?
In BRASH I define consciousness per Damasio. Rather than ‘overloading’ the term ‘feeling’ to mean “everything mental” as you suggest (see item 4), I use it specifically to apply to the entire spectrum of consciousness contents and, as well, to the ‘meta-feeling’ that is consciousness itself, so the term is precisely inclusive of those contents, not ‘overloaded’. If you disagree with the definition, Mike, it’s your responsibility to identify any consciousness content that cannot be characterized as conforming to the definition, so please provide your list.
3. What, specifically, is scientific about IIT and GWT? Please be specific. What foundational scientific evidence exists to support the proposition that the cortex produces conscious images as opposed to resolving the contents of consciousness? The Penfield-Jasper evidence I noted above contradicts the assumptions of both IIT and GWT.
Dehaene’s “conscious avalanche” merely associates neural activity with an eventual conscious perception, from which he concludes without justification that “consciousness is just brain-wide information sharing.” Perhaps you’re dazzled by the neuroanatomical buzz he relates, but he fails to causally connect A to B either logically or with scientifically credible evidence.
4. You still haven’t registered a single objection to the BRASH proposal nor any evidence that falsifies it. You don’t have to be “into it”—if you believe it’s flawed and not worthy of your own concurrence, what is the nature of the flaw(s)? I am open to feedback, Mike. I keep requesting it and you keep not providing it, let alone with specificity.
Damasio hedges on subcortical consciousness by noting that some cortical conscious images have been identified, although his remark lacks specificity as to which images and where in the cortex they’re produced. He may be referring to the feelings associated with direct cortical stimulation. Regardless, it’s very important to either avoid or successfully address the overall (big-B) Binding problem—how do conscious images generated asynchronously by widely separated regions end up in a single synchronous conscious stream? BRASH suggests the images Damasio refers to are pre-conscious images transmitted to the subcortical subsystem that ‘displays’ a single integrated consciousness. The BRASH solution also explains Libet’s repeatedly reproduced experimental timing results. Neither IIT nor GWT addresses the Binding problem or can explain Libet’s timings.
So, what specifically do you see as a falsifying flaw in BRASH?
5. It appears you’re the one providing your own definitions, specifically the definition of ‘hierarchy’ which is unquestionably defined as a ranking based on clearly specified criteria. I’m somewhat uncomfortable being the one to point out to you that you have no ordered ranking (1, 2 … n) and you have specified no criteria for any ranking. “Relating multiple definitions to each other” is not a hierarchy. At best, it’s an unordered collection of co-equal pseudo-definitions with no specified relationship of any kind (other than ‘relating’?) between the members.
And a grab-bag collection of multiple definitions of consciousness that includes elements that are obviously non-conscious and unconscious, such as reflex and prediction respectively, is additionally meaningless. Perhaps you selected the word ‘hierarchy’ to lend some organizational credibility to your unordered collection of whatever-this-is elements, but your “Hierarchy of Consciousness” is not a hierarchy, Mike.
Further, I’m not debating ‘predict’ vs ‘expect’ … I simply provided definitions and noted that ‘predict’ is not what the brain’s unconscious processing is doing. Nothing about consciousness is prescient. Should you continue to disagree, please specify your reasoning.
6. Truly, Mike, your goal appears to be obfuscation through complexity, which is at odds with my quest for clarity and simplicity in explanations. Once again, you appear to assert the existence of “types of consciousness” like your “affect consciousness” and “secondary consciousness.” You propose “perceptive consciousness,” “attentive consciousness,” “imaginative consciousness,” “introspective consciousness” and so on to uncountability.
It would seem that, based on your interpretation of lesion studies, neuroanatomical subsystems and the like, you believe there are distinct entities like “visual consciousness” and “auditory consciousness” and so on, each produced by a distinct cortical subsystem. Yet this conception of distributed consciousness continues to be plagued by all the aforementioned deficiencies—Binding, Libet’s timings and conformance with the Penfield-Jasper evidence. Can you address those point-by-point vis-à-vis your apparent multiple-consciousnesses theory?
Good grief, Stephen. Sorry, I’m not parsing through all that and responding to all of it. Most of it would be responses you’ve already seen anyway. And frankly, I don’t have the time to do the research some of the answers you want would require.
I will say this. Your views seem formed from reading a tiny subset of the neuroscience literature. I wish you’d read more generally in it.
You might start by reading Damasio at length. I haven’t read his latest book, but ‘Self Comes to Mind’ is excellent, and far more nuanced than you portray him. Another excellent book would be a used edition of Mark Bear’s textbook, ‘Neuroscience: Exploring the Brain’, or John Dowling’s book, ‘Understanding the Brain’. ‘Neuroscience for Dummies’ is also excellent, although the author tends to just relay what’s known without getting into the empirical research that led to us knowing it, something I’m pretty sure you’d insist on.
If you want to get into the evolution of brains, you’ve seen me mention Feinberg and Mallatt a lot. Other sources are Gerhard Roth’s ‘The Long Evolution of Brains and Minds’ and Gerald Schneider’s ‘Brain Structure and Its Origins’.
Much of this material doesn’t directly address consciousness, but it gives you a background that makes it easier to assess the theories that are out there.
Good grief, Michael! Although I suggested a one-a-day plan for addressing my list of questions, you chose instead to ignore every question I asked. So you did provide “a response I’ve already seen,” since this is what you do every time I pose a substantive question.
Since you reject parsing—reading and understanding—and responding to “all that,” here’s the distilled essence of my requests, Mike:
1. How do IIT and GWT account for the Penfield-Jasper evidence?
The answer, easily typed, is: They don’t. Neither theory considers credible and well-established evidence about consciousness in any way, because they’re philosophy and not science.
2. After I observed that a clear definition of fundamental concepts is a necessary requirement before theorizing anything, a proposition your aversion to clarity finds unacceptable, I responded to your statement that I’ve defined ‘feeling’ to be “anything mental” with the question:
What consciousness content cannot be characterized as ‘feeling’ per my extensive definition?
The answer, again easily typed, is None.
3. What, specifically, is scientific about IIT and GWT?
The answer, once again easily typed, is Nothing.
4. What specifically do you see as a falsifying flaw in BRASH?
Your answer, apparently, is once again ‘Nothing’, otherwise you would have provided that information weeks ago when I first provided a detailed explanation of BRASH.
5. Why are you using the word ‘hierarchy’ for an unordered collection that’s not a hierarchy?
Once again, your dismissive attitude towards the clear definition of the words we use is in play and it’s clear that the answer is You don’t know what the definition of hierarchy is.
6. How does your apparent “multiple consciousnesses” theory account for the Libet and Penfield-Jasper evidence and explain the unified presentation of consciousness?
The answer is, of course, It doesn’t, otherwise you would have cheerfully provided substantiating information and, we could only hope, clearly enunciated your mysterious multiple-consciousnesses proposal—the one you continuously allude to.
Here is a summary of these six responses:
They don’t. None. Nothing. Nothing. You don’t know. It doesn’t.
Pretty easy actually.
As to your presumptuous assumption that my views are formed from my “reading a tiny subset of the neuroscience literature,” I would suggest that your logically confused and scientifically insupportable views are formed from your reading way too much “neuroscience literature” with no organizational insight or critical review on your part. As a for-instance, your often-cited Feinberg and Mallatt pulled a switcheroo on you by relocating consciousness in humans to the cortex without even a hint of evidence or rationale as to how and why it moved from subcortical structures. You didn’t notice that magic trick, Mike, because you’re thoroughly wrapped in confirmation bias and never question your own ill-formed conclusions, constantly finding supportive company with those who, like yourself, gullibly believe in evidence-free proposals based on words without definitions. And you repeatedly ignore evidence that threatens your “preconceived notions,” if I might borrow your own phrase.
My focus for years has been the study of consciousness, not neuroscience. You may not have noticed, but those are two different things, Mike. Along the way I’ve read the neuroscience that credible scientists cite as relevant to the production of consciousness. But the focus of neuroscience (or neurobiology) is the anatomy, architecture and functionality of the nervous system and the brain—the focus of neuroscience is NOT consciousness.
Neuroscience for Dummies? Indeed! As a certified Dummy then, my most heartfelt recommendation for you Mike is to learn to use a dictionary. Look up ‘hierarchy’ for starters—it’s under ‘H’. If there’s a Basic Logic for Dummies and Evaluating Evidence and Hypotheses for Dummies you’d be well served to carefully read and learn from those as well, speaking as one Dummy to another of course. 😉
Neuroscience and the study of consciousness are definitely not the same. I agree with you there, Stephen.
Tit for tat: the Philosopher who would be a Scientist collides with the Scientist who shuns Philosophy. I can’t imagine that this contest ends with a (good-feeling) hug of mutual admiration. But such is the nature of Open Dialogue, if it could (and should) be maintained… if indeed Science and Philosophy are at play, and not ego or misplaced or hasty conceits.