Joel Frohlich has an interesting article up at Aeon on the possibility of detecting consciousness. He begins with striking neurological case studies, such as that of a woman born without a cerebellum yet fully conscious, indicating that the cerebellum is not necessary for consciousness.
He works his way to the sobering cases of consciousness detected in patients previously diagnosed as vegetative, accomplished by scanning their brains while asking them to imagine specific scenarios. He also notes that, alarmingly, consciousness is sometimes found in places no one wants it, such as in anesthetized patients.
All of which highlights the clinical need for a way to detect consciousness, one independent of behavior.
Frohlich then discusses a couple of theories of consciousness. Unfortunately one of them is Penrose and Hameroff’s quantum consciousness microtubule theory. But at least he dismisses it, citing its inability to explain why the microtubules in the cerebellum don’t make it conscious. It seems like a bigger problem is explaining why the microtubules in random blood cells don’t make my blood conscious.
Anyway, his preferred theory is integrated information theory (IIT). Most of you know I’m not a fan of IIT. I think it identifies important attributes of consciousness (integration, differentiation, causal effects, etc), but not ones that are by themselves sufficient. It matters what is being integrated and differentiated, and why. The theory’s narrow focus on these factors, as Scott Aaronson pointed out, leads it to claim consciousness in arbitrary inert systems that very few people see as conscious.
That said, Frohlich does an excellent job explaining IIT, far better than many of its chief proponents. His explanation reminds me that while I don’t think IIT is the full answer, it could provide insights into detecting whether a particular brain is conscious.
Frohlich discusses how IIT inspired Marcello Massimini to construct his perturbational complexity index, an index used to assess activity in the brain after it is stimulated using transcranial magnetic stimulation (TMS), essentially sending an electromagnetic pulse through the skull into the brain. A TMS pulse that leads to the right kind of widespread processing throughout the brain is associated with conscious states. Stimulation that only leads to local activity, or the wrong kind of activity, isn’t.
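For the curious, here’s a minimal sketch in Python of the idea behind such an index. The published PCI (Casali, Massimini and colleagues, 2013) binarizes the pattern of significant TMS-evoked cortical activity and measures its Lempel-Ziv compressibility. Everything below, from the LZ78-style phrase count to the threshold, array shapes, and normalization, is my own simplified stand-in, not their pipeline:

```python
import numpy as np

def lz_phrase_count(bits):
    """Number of phrases in a simple LZ78-style parse of a bit string.
    (The published PCI uses a specific LZ76 algorithm; this is a crude
    stand-in with the same flavor: repetitive input -> few phrases.)"""
    s = "".join(map(str, bits))
    seen, count, i = set(), 0, 0
    while i < len(s):
        j = i + 1
        # Grow the phrase until it's one we haven't seen before.
        while j <= len(s) and s[i:j] in seen:
            j += 1
        seen.add(s[i:j])
        count += 1
        i = j
    return count

def pci_like(evoked, threshold=2.0):
    """evoked: channels x time array of post-pulse activity (e.g. z-scored
    source currents). Binarize into 'significantly active or not', then
    score how incompressible the spatiotemporal pattern is. Real PCI
    normalizes by the source entropy; this normalization is cruder."""
    bits = (np.abs(evoked) > threshold).astype(int).flatten()
    n = len(bits)
    return lz_phrase_count(bits) * np.log2(n) / n

# Toy comparison: a widespread, differentiated response vs. a local,
# stereotyped burst of activity.
rng = np.random.default_rng(0)
widespread = rng.normal(scale=2.0, size=(32, 300))
localized = np.zeros((32, 300))
localized[:4, :50] = 3.0
print(pci_like(widespread), pci_like(localized))  # widespread scores higher
```

The intuition carries over from the real thing: a widespread, differentiated response produces a pattern that resists compression and scores high, while a stereotyped local burst compresses easily and scores low.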
IIT advocates often cite the success of this technique as evidence, but from what I’ve read about it, it’s also compatible with the other global theories of consciousness such as global workspace or higher order thought. It does seem like a challenge for local theories, those that see activity in isolated sensory regions as conscious.
Finally, Frohlich seems less ideological than some IIT advocates, more open to things like AI consciousness, but he notes that detecting consciousness in such systems is yet another reason we need a reliable detector. I fear that detecting it in alternate types of systems represents a whole different challenge, one I doubt IIT will help with.
But maybe I’m missing something?
Hmmm. I didn’t get anything new out of the Frohlich article, in contrast to this other paper from 2015 ( https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4574706/pdf/pcbi.1004286.pdf ) by Michael Cerullo, which someone tweeted today and which I tweeted separately, kinda hoping you would pick it up, look at it, and write it up … 🙂
What I got from the Cerullo paper (and tweeted) was the idea of cataloguing 4 kinds of consciousness:
1. Panprotopsychism (Chalmers’s version, wherein quarks have something which makes them protoconscious)
2. Noncognitive consciousness (or noncognitive proto-consciousness), which includes IIT
3. Phenomenal cognitive consciousness (Block’s version, consciousness associated with the senses)
4. Access cognitive consciousness (Block’s version, consciousness associated with represented content)
Cerullo points out that IIT says certain operations have consciousness regardless of whether they are serving a function of any kind. I think that is the intuition that most, like Aaronson, get hung up on.
What I also got from that paper was the use of the label “computational functionalism”. I didn’t know you were allowed to stick those together. By extension, that would make me a computational functional representationalist.
I didn’t follow my Twitter feeds well today, so I missed your tweet until this comment.
On the paper, I certainly agree with much of the abstract. And calling IIT a theory of proto-consciousness fits with my own assessment of it, that it identifies some aspects of consciousness, crucial aspects even, but not all the necessary and sufficient ones.
Applying even the term “proto-consciousness” to quarks strikes me as unproductive. It’s a bit like saying quarks are “proto-Minecraft”. There’s a way to think about it that makes it true, since quarks could be seen as proto-anything from atoms on up, but I’m not sure what actual work it does. I know they got that notion from Chalmers and his panprotopsychism.
On IIT and certain operations, from what I understand, it’s worse than that. IIT says certain structures are conscious, even if they’re not doing anything. I could maybe see saying they’re capable of being conscious, but it’s hard to see how saying they are conscious makes any sense.
I actually see computationalism and representationalism as subsets of functionalism, so to me they’ve always been linked. That said, I don’t see generic functionality, computation, or representation as sufficient for consciousness. Like Minecraft, it takes a certain collection of functional capabilities.
I jumped on the “4 kinds” because it maps to my thoughts exactly. The protopanpsychic property is information, mutual information to be precise. I didn’t get that IIT says structures, without action, are conscious. I would assume, as you say, they simply have the potential of consciousness.
I think representation is a particular kind of computational function, and it is the basic building block of consciousness. I think human consciousness will best be described in terms of particular kinds of representations.
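Since I’m leaning on mutual information, here’s a minimal worked example in Python, just to pin down the quantity I mean. The joint distributions are arbitrary illustrations, nothing derived from IIT:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information, in bits, between two discrete variables given
    their joint distribution as a 2D array of p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)  # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = joint > 0                         # skip log(0) terms
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # perfectly coupled: 1.0 bit
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # independent: 0.0 bits
```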
[working on it]
On the structure aspect, the proponents often emphasize structure in their narratives, but not to the extent of saying a static one is conscious. From what I understand, it falls out of the mathematics. Can’t remember where I read it. Maybe in one of the papers criticizing IIT.
I think representations are important but not sufficient. (You’re probably getting tired of seeing that word sequence from me.)
I agree that IIT identifies important attributes of consciousness, and that they’re not sufficient. I especially like the idea behind the “Exclusion” principle in the Wiki article on IIT, which draws the “boundary” of consciousness according to maximum integration of information. Maybe that needs to be tweaked, but I expect something similar should be correct.
The exclusion principle is probably why neural systems have such high phi while standard computer systems don’t. Neural systems store their data throughout the network, which means that at any one time only a subset of the network is in use.
Standard computing segregates data from execution. Only a minute part of the data is being accessed at any one time, and the execution part of the system has far lower rates of exclusivity.
But that’s one of the reasons I think IIT wouldn’t be that useful for assessing machine consciousness. It doesn’t take into account the fact that there are always many ways to skin a cat.
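To make the phi talk concrete, here’s a toy sketch in Python. To be clear, this is not IIT’s actual phi (the IIT 3.0 algorithm is far more elaborate, and exact phi is notoriously expensive to compute). It’s only a minimum-information-partition style measure over a tiny deterministic network; the update rule, the cut-with-noise scheme, and all the names are my own illustrative choices:

```python
import itertools
import numpy as np

def step(state, W, cut_edges=(), noise=()):
    """One update of a deterministic threshold network. W[i][j] is the
    weight from node j to node i. Inputs along edges in cut_edges are
    replaced by the supplied noise bits."""
    noise_map = dict(zip(cut_edges, noise))
    return tuple(
        int(sum(W[i][j] * noise_map.get((j, i), state[j])
                for j in range(len(state))) > 0.5)
        for i in range(len(state)))

def joint_dist(W, n, cut_edges=()):
    """Joint distribution over (current, next) state pairs, with a uniform
    prior over current states and uniform noise on any cut edges."""
    dist = {}
    states = list(itertools.product([0, 1], repeat=n))
    noises = list(itertools.product([0, 1], repeat=len(cut_edges)))
    p = 1.0 / (len(states) * len(noises))
    for s in states:
        for nz in noises:
            key = (s, step(s, W, cut_edges, nz))
            dist[key] = dist.get(key, 0.0) + p
    return dist

def kl_bits(p, q):
    # q's support contains p's by construction (the noise can reproduce
    # the actual cross-partition inputs), so q[k] is always defined here.
    return sum(pv * np.log2(pv / q[k]) for k, pv in p.items() if pv > 0)

def phi_toy(W):
    """Minimum, over bipartitions, of the information lost by cutting all
    connections across the partition and replacing them with noise."""
    n = len(W)
    intact = joint_dist(W, n)
    best = float("inf")
    for r in range(1, n // 2 + 1):
        for A in itertools.combinations(range(n), r):
            B = [i for i in range(n) if i not in A]
            cut = tuple((j, i) for j in A for i in B) + \
                  tuple((j, i) for j in B for i in A)
            best = min(best, kl_bits(intact, joint_dist(W, n, cut)))
    return best

loop = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]      # ring: each node copies a neighbor
isolated = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # each node only copies itself
print(phi_toy(loop), phi_toy(isolated))       # ring > 0, isolated == 0
```

The point of the toy: cutting the ring destroys information about its joint dynamics, while cutting the isolated nodes destroys nothing, which is the intuition behind integration.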
I think the exclusion principle is the biggest liability for IIT. It leads to intuitively unacceptable results. For example, you can get enough people and organize them in such a way that the whole group is a consciousness, but that would mean that none of the people are. And then if one goes on lunch break, all the people become conscious again, until that person comes back. Stuff like that.
Just realized that my previous response wasn’t actually talking about IIT’s exclusion principle, but the one about specific information. Sorry, my bad.
Understandable, as you were referring to what Cerullo called “the principle of information exclusion”.
May as well use a dowsing stick. 😉
Well, hopefully we can do better than that.
As we have talked about before, I think consciousness is fundamentally “loud” and that it attests to itself. But it may require another consciousness to evaluate that.
Consider this: If computationalism is right, there may be a Halting problem of sorts in determining if any given computation or system is “conscious” or not. Both Gödel and Turing demonstrated that a mechanized or algorithmic approach simply cannot solve all questions about complex systems.
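To sketch why in Python: the classic diagonal construction shows that any claimed decider for halting (and, by Rice’s theorem, for any non-trivial behavioral property, which a “consciousness” predicate of computations would presumably be) can be defeated by a program built from the decider itself. The names here are mine, and the decider is a deliberately broken stand-in:

```python
# The diagonal construction behind the halting problem, in miniature.
# `paradox` takes any claimed halting decider and builds a program g
# that does the opposite of whatever the decider predicts for g.

def paradox(halts):
    def g():
        if halts(g, None):   # the decider says g halts...
            while True:      # ...so loop forever,
                pass
        return "halted"      # otherwise, halt immediately.
    return g

def naive_halts(program, arg):
    # A (necessarily wrong) stand-in decider: it always answers False.
    return False

g = paradox(naive_halts)
print(g())  # prints "halted": naive_halts was wrong about g.
# The same construction defeats ANY decider, however clever, which is
# Turing's point: no algorithm can get every such question right.
```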
On being “loud”, I agree, in a normal functioning system. Consciousness evolved for reasons, and those reasons involve enabling adaptive behavior in novel situations.
But the article talks about brain-injured patients who have been diagnosed as vegetative, but through brain scans, determined to have a “covert” consciousness. (Although obviously not covert by choice.) In that case, it’s not going to be loud.
Gödel applies to the system knowing itself. I think we’ve well established that our knowledge of our own minds falls far short of whatever limitations his theorems might impose. Turing himself wasn’t concerned about these issues.
I’ve said the same! Consciousness may turn out to be messy and coarse-grained, and hence non-deterministic. Only Turing’s infallible machine is fully determined. Wrong answers must be unpredictable.
Which makes what I said all the more true. If fully determined computation is beyond our ability to fully analyze, what does that say about fallible computation?
As for the small number of cases of trying to determine if a former consciousness is still functioning in a locked-in situation, yeah, that would be a good problem to solve. I’d try taking as much fMRI data as I could get from known functioning conscious brains in myriad states, from fully awake to drugged to everything in between. Then I’d feed that into a neural network as training data. Next I’d try it on cases where there was before-and-after fMRI data from questioned patients who did wake up, and likewise on cases where the patient didn’t, to see if I could spot a difference.
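Something like this minimal sketch, in Python with scikit-learn, using synthetic stand-in data (real fMRI would need serious preprocessing, and every feature, shape, and number below is an assumption for illustration, not a real dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in features: one row per scan, e.g. regional activations or
# functional-connectivity values. Label 1 = scan from a known-conscious
# state (awake, dreaming, lightly sedated...), 0 = known-unconscious
# (deep anesthesia, dreamless sleep...).
n_scans, n_features = 400, 64
X = rng.normal(size=(n_scans, n_features))
y = rng.integers(0, 2, size=n_scans)
X[y == 1] += 0.5  # give the synthetic classes some separation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# For a patient whose status is in question, the model gives a
# probability rather than a verdict; validating it against the
# before-and-after cases would be the hard part.
patient_scan = rng.normal(size=(1, n_features))
print("P(conscious-like activity):", clf.predict_proba(patient_scan)[0, 1])
```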
Unlikely that we can do better than dowsing sticks in this still primitive age of inquiry, however intrepid the modern aspirants of explanation. But always good stuff on this blog, notwithstanding the SelfAware’s late cat-skinning comment. Oh, she took notice, but disdains to respond in kind, and is currently cooking up something actually substantive to say about consciousness.
Thanks Jeff.
I just want the cat overlords to know I have nothing against cats.
Regarding the fMRI tests with seemingly unconscious people being asked to imagine playing tennis or walking through their home:
Both activities (at least for people who have actually played tennis) could involve a significant amount of brain activity that is unconscious. And since we can’t be sure whether some portion of language comprehension might also be unconscious, we could find brain activity similar to that of playing tennis without any actual consciousness on the part of the person.
Angelman syndrome is something I hadn’t heard of, and it would seem to be a significant challenge to most theories of consciousness, including the EM field ones.
On imagining activities, I could see that for simple prompts, but when they’re able to effectively have conversations with the patient, with the patient imagining different things for YES and NO, it seems pretty hard to doubt there’s some consciousness there.
Agreed on Angelman, but I tend to agree with Frohlich that it’s likely a measurement issue.
Five patients showed brain activity on prompts but only one of them could have a “conversation”. So if the one was conscious, what was the brain activity in the others showing? That brain activity on fMRI may not be a reliable indicator of consciousness?
Ah, I didn’t catch that only one could have the conversation. Excellent point.
Ultimately, detecting consciousness comes down to report. We can use activity associated with report in healthy subjects to look for it in non-responsive patients, but apparently the data is never easy to interpret.
You have to go to the actual study to understand that only one could have a “conversation”.
Even with reports, there could still be an issue untangling detected brain activity that directly corresponds to conscious experience from brain activity that plays a subsidiary or supportive role in conscious experience. You know, the necessary/sufficient thing. 🙂 Brain activity that is necessary for consciousness but not sufficient. In this case, four of the five generated activity that their analysis showed matched the brain activity of awake people imagining various motor activities, yet they couldn’t have a “conversation”, so we would presume they were not conscious.
On the other hand, the whole method of analysis might be flawed and it could be we can’t really draw any conclusions from it.
Ah, good, glad I wasn’t just being sloppy on my reading of the article. Thanks!
We’ve talked about the no-report paradigms. But for them to work, they still need stimuli that had previously been established as conscious through report, or with a report control group, or done in such a manner where the report activity can be filtered out. But I’m increasingly starting to see these efforts as misguided. I think the version of consciousness they’re trying to isolate wouldn’t be real consciousness. It’s too disconnected from the reasons consciousness evolved.
On the four who couldn’t converse, I think we have to be careful viewing consciousness as something that’s either completely there or completely absent. “Minimally conscious” is a recognized medical state. So it could be that these people have a degraded consciousness. But since they’re unable to report, either verbally or behaviorally, we can’t know to which extent. I actually see it as a relief if they’re not actively aware in a hell of boredom.
About the four, you’re right, it could be about them being unable to report.
Almost all of the research tying brain activity to consciousness is necessarily conducted with conscious individuals. It could be the brain activity in these cases does serve as a useful proxy for conscious experience and any conclusion derived might be valid. However, it could also be that the brain activity measured is not the actual brain activity responsible for the conscious experience. That might be either some subtlety in the brain activity measured or something else happening in the brain that we aren’t measuring.
“Actively aware in a hell of boredom”? Hey, you’ve got some literary talent. The sentiment, though, kind of reminds me of the year I spent working at Disney World. Circa 1983.
I’ve had jobs that seemed that way too, like you’re in a lonely solitude of perditious boredom. In the case of a patient, I suspect they’d eventually become acclimated to it, but I wonder if their consciousness would degrade over time, just due to lack of stimulation.
No doubt there’d be an atrophy of feeling one way or another in long-term clinical conditions. Cases of akinetic mutism, I guess, is what we’re talking about. Degradation of rationality would surely follow. But what would it actually be like to feel or think neither one way nor the other? A kind of free-floating, unmoored awareness of simply being? I don’t know about you, but that’s a condition I could possibly stand, as long as the morphine drip kept mercifully flowing. On second thought, I would also want this: a continuous loop of The Adolescents’ “Escape From Planet Fuck” (punk band from the early eighties).
I don’t know if those patients have akinetic mutism, but based on what I know about it, it doesn’t sound like an unpleasant state to be in. On the flip side, it’s also not a pleasant one. It’s just…neutral, non-feeling, a state of utter indifference. From the outside, it seems like a state to be avoided, a zombie-like existence. But from the inside, well, you have no cares, which I think would be far preferable to being aware with feelings about it, but locked in.
Like the old joke about the search for intelligent life elsewhere in the universe (“Hey, we’re still looking for it on Earth!”), are we looking for consciousness … in all the wrong places?
Building a consciousness detector is essentially impossible … until we can define exactly what consciousness is. I think the way forward is expanded studies of unconscious behaviors. We have learned more about these in the past ten years than in all of previous history (IMHO, of course). If we can list all of the autonomic processes that occur in our bodies and all of the unconscious processes, what’s left, according to Sherlock Holmes, might be some truth about what consciousness really is.
The problem with looking at unconscious processing is that anything in the brain can be unconscious. There’s nothing about any processing that intrinsically makes it conscious. It’s a bit like looking for fame by looking at each person in the population individually. None of them will have the fame attribute. Discovering that requires looking at the person in relation to the overall population.