When writing about the mind and consciousness, and how they exist in material systems, many of us resort to functional hierarchies. (Mine typically start with physical interaction and work all the way up to self reflection.) Ogi Ogas and Sai Gaddam have a similar idea, and have written a whole book on it: Journey of the Mind: How Thinking Emerged from Chaos. Although they take their hierarchy all the way up into culture.
Ogas and Gaddam start off by noting that a mind is not defined by any type of substance, but by activity. Their frequent analogy is a basketball game, which only exists when players are on the court and engaging in specific activities. Their definition of a mind is a physical system that converts sensations into action, taking input from the environment and then altering that environment for its own purposes. This is a pretty liberal definition of “mind”, largely equivalent, I think, to agency.
Using it, they consider the simplest minds to be molecule minds, such as the sensory-motor circuits that exist in unicellular organisms like archaea and bacteria. They then go up the chain to neuron minds, such as the ones found in jellyfish and worms. They consider insects such as flies particularly sophisticated versions of neuron minds.
But it’s with what they call module minds that they reach a stage where most of us would be more comfortable with the “mind” label. This starts with fish and goes up, with increasing sophistication, to chimpanzees. They give the modules simple labels: a How module for motor coordination, a Where module for the location of things, a What module for identification and discrimination, a When module for time-sequenced understanding, a Why module for understanding causal relations, etc. They stay away from neuroanatomy, which makes the book a much easier read for anyone not interested in the neuroscience weeds.
But while O&G define “mind” as starting at the unicellular level, they’re much more selective with “consciousness”, adopting Stephen Grossberg’s adaptive resonance theory (ART) of consciousness.
The main thrust of ART is the idea of different representations resonating (as in neural oscillations) with each other, such as the representation of a top-down expectation resonating with a representation from bottom-up sensory input, or representations in the What module resonating with ones in the Where module. Consciousness happens when there is resonance among representations across a number of modules, which O&G call the “Consciousness Cartel”.
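For readers who like to see mechanisms spelled out, here is a minimal sketch of that matching idea, loosely patterned on Grossberg’s ART-1 cycle. The vectors, the vigilance value, and the function names are my own illustrative assumptions, not anything from the book:

```python
# Toy ART-style matching: resonance between a bottom-up input and a
# top-down expectation, gated by a "vigilance" threshold (after ART-1).

def match_score(bottom_up, top_down):
    """Fraction of the input's active features confirmed by the expectation."""
    overlap = sum(b and t for b, t in zip(bottom_up, top_down))
    active = sum(bottom_up)
    return overlap / active if active else 0.0

def resonates(bottom_up, top_down, vigilance=0.8):
    """Resonance holds when input and expectation agree closely enough;
    a failed match would instead trigger a reset and a search for
    (or learning of) another category."""
    return match_score(bottom_up, top_down) >= vigilance

sensory = [1, 1, 0, 1]   # bottom-up sensory representation
expected = [1, 1, 0, 1]  # top-down expectation from a learned category
print(resonates(sensory, expected))      # True: the representations resonate
print(resonates(sensory, [0, 1, 0, 0]))  # False: mismatch, so reset, not resonance
```

The point of the sketch is just that, on this account, consciousness corresponds to the resonant state itself, not to either representation alone.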
Grossberg isn’t as well known as many consciousness researchers, although he’s better known within neuroscience circles. He has a reputation for having pioneered many types of research long before others independently reached similar conclusions. The result is that ART seems to have a lot of similarities with global workspace, recurrent processing, and predictive coding theories, but with a different terminology. Although Grossberg gets into details of learning that I’m not sure the other theories touch on. He has his own book out, which I own but haven’t read yet.
For O&G, the next rung up on the mind ladder is superminds, minds composed of module minds. Basically a supermind is a culture, a group of minds that interact to form a functional system, like a tribe or nation. Which means that superminds compete with each other. Superminds, according to O&G, are enabled by language, which they characterize as “qualia sharing”, so they only see them existing among humans.
And superminds can have their own forms of resonance, and so their own form of consciousness. They contemplate that a supermind could be considered a type of god, but one that we’re all part of. (Indeed, one thing we can say is true about historical gods is that they were cultural forces, even if not cosmic ones.)
But O&G see the supermind having profound meaning for us. Human self awareness, they assert, only exists through interaction with the supermind, that is, through language. This has some resonance (no pun intended) with theories which take our model of self to be our model of other minds turned inward. But it seems like a much stronger thesis.
It’s a theory of human consciousness that reserves a special role for language. It puts language and culture at the center of human self awareness. O&G relate a common story of a ruler (Akbar, the Mughal emperor, in this version) who has children raised without any exposure to language. When brought to court after twelve years, the children display no signs of humanity, but act like beasts, and seem incapable of learning to be human.
O&G also discuss the famous mirror experiment to test self awareness in animals. Here they assert that a chimpanzee is self aware, but only while it’s looking at itself in the mirror. Remove the mirror, and it loses self awareness. So what then is the mirror humans use to retain self awareness? The supermind, enabled through language, reflects a sense of self back at us.
This is an interesting book, and I recommend it for anyone who finds these topics interesting. It’s a relatively easy read.
But I felt like the use of “mind” for unicellular agency was misleading. I think “agent” would have been a more accurate term. The supermind discussion was interesting, but it seems like the authors ignore clear signs of superminds (or at least proto-superminds) in other social animals, such as wolf packs, monkey troops, lion prides, ant colonies, and many other examples.
Certainly language enables far vaster and more intelligent superminds. And language, and more broadly symbolic thought, not to mention culture, are crucial aspects of human cognition. But again, this seems to ignore the continuity that exists between human self awareness and the simpler forms in other animals. Human self awareness is built on the self awareness shown in social mammals, which in turn is built on the simpler self awareness of other species. Characterizing it as something that only comes into existence with language is, I think, ignoring too much of what is known about animal cognition.
That said, the book is a fairly gentle introduction to Grossberg’s theory, and reading it reminded me that I still need to read his book. (Although I fear it won’t be nearly as easy a read.)
What do you think of the ideas of molecule minds, superminds, and language centered self awareness? Am I too dismissive of them? Or too accepting of the supermind concept?
[yea! Yet another book to add to my stack. And yes, Grossberg’s book is always at the bottom of that stack. LIFO.]
Sounds like this book mirrors my understanding of mind. If it’s an easy read, I might actually get to it. And seems like I’m with them in that agency implies mind. For me, a mind is something that acts based on information, and an agent is something that makes choices based on information, so must have at least two actions available. [Not sure if I should consider no-action a choice of action. Maybe …]
This also works for hierarchies, but I’m curious whether the authors see a hierarchy within the human mind. I can understand modules, but I anticipate that what most people call the self, and Damasio calls the autobiographical self, is a module of modules. I think this is what language did, requiring a module that assigns words to other modules. Once you have that, lots of possibilities are opened up, but especially culture, which is more modules of modules.
I thought about you when I was reading about the molecule minds. Their take is very similar to your input-process-output concept.
On action selection, the very first one considered has a sensor on each side of the organism connected to a flagella motor on the same side. The flagella is simulated to whip when the sensor detects shadow, a lack of light, which causes the organism to turn toward the light. Later versions have the wires crossed, so that detected food on the left side stimulates the right side motor, causing the organism to move toward the food. Even later versions get into the tumbling reflex, which I think we’ve talked about before. There’s also the habituation factor, which allows for responses to gradients.
I didn’t take them to see a hierarchy within the mind. They seem to take the standard GWT that there’s no control center anywhere, just interacting modules. They do talk about a “language stack”, but I take it to be more of a recursive processing thing.
Anyway, the book is pretty easy reading. I think you’d enjoy it. I do think they have a tendency to talk like some speculative concepts are established truth, which might give the wrong idea in places. And they seem very concerned that the reader not get the impression they’re talking about the operations of a typical digital computer, and so eschew the word “compute”, which is striking given their liberal use of “mind”. Although they’re forced to back off of that in the endnotes when they quote scientists who don’t have that hang-up.
Not a lot that’s new, as far as I can see, in this book. Of course, we can always quibble over terminology.
I don’t think cultures are super-minds. Single bacterial cells communicating with other bacterial cells is to an autonomous organism as individual humans communicating with each other is to a human super-mind. The Borg might be a super-mind.
Maybe too much emphasis on language. A lot of learning of all sorts takes place from 1-5 years old so trying to extract language as the critical element might be oversimplifying.
I’m increasingly beginning to think of the self as a model of the organism and its relationship with its environment. So it doesn’t require language.
It seems like a large proportion of philosophical debates are terminological ones, often without the debaters realizing it.
I think whether cultures are superminds is a philosophical issue, one where there’s no strict fact of the matter. We often talk about things rising in our societal consciousness. And words like “groupthink” seem to convey an intuitive sense we have about these things. Someone can insist this is all metaphorical language. But sometimes our metaphors are revealing.
Agreed on language. I think they overinterpret the role it plays. Which isn’t to say its role isn’t a major one in human cognition.
I definitely think the self is a model, and there can be wide variances in how sophisticated it is. A social animal’s model is likely more sophisticated than a non-social one. The human autobiographical version is probably far more sophisticated than anything else out there, but it’s continuous with the ones in other great apes, primates, mammals, etc.
I don’t believe there is sufficient integration of the parts to create a human super-mind. There may be beginnings of something in human culture and our technology might inadvertently end up turning us into a super-mind. But we’re not there yet. My thought anyway.
I’m on board with some of the aspects of the “supermind” idea. Human self-awareness is profoundly affected by our social interactions; I could see how someone might even stipulate that without this, a Homo sapiens wouldn’t count as “human”. Also, since the very content of our thoughts is thoroughly dependent on the world around us, and since the social world is such a vital part of the human world, what we think depends partly on social context. Especially when we think in words, but probably not only then. Some of our thoughts are in words, and those inform other thoughts.
I think that’s a good assessment. The authors oversell the case, but they’re not entirely wrong about the crucial aspects of culture and language to human cognition. There are plenty of cases of children raised in isolation to show how crucial it is to have early exposure to other people and language. (Although separating real cases from hoaxes seems like a challenge.)
Using Damasio’s framework, I don’t think our core self depends on language or culture. (Although to your point, it probably does depend on at least some exposure to an environment.) But our full autobiographical self probably does depend on language and social dynamics, and more broadly symbolic thought.
The supermind is an interesting idea. When you say that Ogas and Gaddam believe superminds can have their own form of resonance, and thus their own form of consciousness, what does that mean? It sounds as though either a) they don’t view consciousness as an internal experience, or b) they are positing the notion that groups of humans in this resonance produce a consciousness that has this internal experience?
The story of the children raised to twelve years of age without language is an interesting one as well. I’m just thinking that some things may exist that are not dependent on language, but which cannot easily be teased apart from the whole. For instance, when I try to imagine how hard it would be to raise children without language in a human society, it strikes me that they would have to be isolated, and that their natural tendency for language use would have to be “starved.” Related, perhaps other needs humans possess as social creatures would not have been met either, so that effectively it may not be possible to raise humans with all the emotional and social “nutrients” they require for healthy development, EXCEPT for language. It may take too much, in other words, to withhold language, and this is reflected in the “beastly” behavior of humans raised in such deprivations. In this case it wouldn’t be a strong recommendation for the theory that language is essential to identity, but I am probably missing something, as they would surely see this as well. I do think language can play a profound role in shaping our perceptions, so I’m not wholly opposed to this notion, but the idea that children raised without language turn out poorly in some way is for me a weak bit of evidence (though of course there could be more to it).
On the chimpanzee mirror test, it’s interesting to wonder what the authors really mean by self-awareness. Certainly when I look into a mirror I’m aware of myself in a way that I’m not when I’m lost in thought while making the morning commute and listening to music, for instance. Or when I’m having a meal with friends. Or when I’m working on a complex problem and am “lost in thought.” Without more context it seems that the authors are focused on a self-model that is only the sort of self-consciousness held in consciousness, in active processes of awareness. To say that chimpanzees lose self-awareness when not looking in a mirror seems incorrect to me without knowing more, and I’m guessing it relates to what they mean by self-awareness.
If these are the sorts of things meant by consciousness, then it’s really confusing to understand what they mean by the consciousness of a supermind!
O&G pretty much go all in on supermind consciousness. To understand their reasoning, remember that the idea of information resonating in various modules within a mind is pretty much Daniel Dennett’s fame-in-the-brain analogy. (O&G use the example of George Floyd’s murder entering our collective consciousness as the media reported and magnified the effects of that situation.) The idea is that this same dynamic is what makes something conscious within a module mind.
But of course, the fame-in-the-brain analogy is based on societal dynamics. I think this example in their supermind chapter shows their overall take.
On children raised without language, that’s a good point. How much of the social dynamics depend on language vs just interpersonal dynamics? In the Akbar story, the children were raised by mute wet-nurses, and so would have had some interpersonal dynamics. The problem is that this story may be a legend. Actual documented cases are usually of abused children held in isolation, typically dark rooms. Separating out the language deprivation from the overall social deprivation seems difficult. It would require “forbidden” monstrously unethical experiments. It has been done with monkeys, but obviously that only gets at the socialization, not the language issue.
I don’t recall the authors clarifying exactly what they mean by self awareness, which is unfortunate. If they had stipulated that they were talking about a social-self awareness, that is, awareness of oneself within an overall group dynamics, then I think they would have been on stronger ground. But that social-self awareness, I think, is built on a preexisting combination of mind-self and body-self awareness.
I actually don’t think language is possible without mind-self awareness. So it might be that mind-self awareness and language end up correlating, but I think the direction of causality is the reverse of what they’re imagining. Or maybe it’s bidirectional, with evolution strengthening the relationship over time. Whichever is the case, I don’t think mind-self awareness is absent in someone who’s suffered social deprivation, but social-self awareness obviously is. But maybe there will eventually be data that shows I’m wrong.
Thanks for taking the time to copy some of that text directly, Mike. I cannot escape the notion there is some sort of categorical error going on here, or at least the possibility of one that is smoothed over based on O&G having a different notion of things than I do. For instance, the notion of an “American” consciousness or an “Indian” one is predicated on how, within a nation-state, the media and/or social media processes create a collective awareness of events, which generates a response by the people of the nation. I wonder if this works, however, if the human beings in America or India didn’t already consciously identify as “Americans” or “Indians.”
Is the idea that the processes in a human brain are categorically identical to the processes described in the quote? Or just similar? For instance, if people have to know they are American to contribute to the American supermind, does this imply or require that the neurons and cells of a human being would “know” they are part of the same organism in order to work? My thought would be that, generally, the neuroscientists and philosophers who advocate this sort of model would say no. I don’t honestly know if that’s true, but if cells did have to be conscious of the whole to function as they do, it would lead to a sort of “consciousness all the way down” model.
If the idea is neurons and cells in the brain do not have consciousness but some sort of human scale consciousness results, then applied to a supermind it would suggest we could be oblivious or unconscious of our supermind affiliations. We don’t know and it doesn’t matter that we know, but somehow our individual interactions give rise to the supermind. We become, in a sense, the unconscious cells.
Do you follow the question? Are they suggesting the supermind of America has a subjective experience of what it’s like to be America, and that we, like neurons and cells, produce that without any real awareness or knowledge of being American?
Michael
On quoting the book, no problem Michael. I have the ebook version, so it’s fairly easy. And I had that passage highlighted.
On the category error, remember that O&G see a mind defined as dynamic activity that takes in sensory input and produces output that modifies the environment. Here’s what they say early in the discussion on superminds.
I do think you zero in on one of the issues, that the boundaries between superminds aren’t very distinct. On top of that, in modern society, it seems like each of us would be in multiple superminds at the same time: a nation we live in, a company we work for, maybe a faith or ideological movement, or even just a sub-culture interested in talking about certain topics, like consciousness. 🙂 And modern technology enabling communication across the world seems to be blurring the old distinctions for some of these superminds. The globalization issue would be seen as tension between those who want to be in an overall world supermind vs those who want to stay in their local national one.
All of these seem like different dynamics than what happens inside a single skull. I suspect O&G would say that yes, the dynamics aren’t all the same. But the dynamics were different in the move from molecule minds to neuron minds, and then again from neuron minds to module minds. So we shouldn’t expect all the dynamics to be the same with superminds. Remember, their definition of a mind is a system that takes sensory input and transforms it into action on the environment. So any group of people working together is going to fit. (And as I noted in the post, that actually applies to any group of social animals as well, which they seem to ignore.)
They also discuss scenarios where language allows a module in one module-mind to work with another module in a different module-mind. Imagine someone watching as a driver backs their car in, giving them directions based on what they’re seeing. They argue it can be seen as their visual system communicating with the driver’s output systems.
I don’t think anyone in a supermind has to be aware of what they’re part of, although obviously if you buy the concept, then the fact we’re discussing it means it’s possible. But a neuron obviously has no idea that it’s part of a brain, or a brain module, just as the proteins in a neuron have no idea they’re part of a neuron, or the sensory and motor proteins in a bacterium that they’re part of a molecule mind. Not sure if any module in a module mind would ever know, by itself, whether it was in an overall mind. Probably not.
As to consciousness, well, you know my stance. It’s in the eye of the beholder. There is no fact of the matter, just what each of us is prepared to accept. I think it’s fair to say that if America or India are conscious, it’s a very different form of consciousness than the one in each of our heads. But news articles do regularly talk about when something entered our group consciousness. You can say that’s a metaphor, but given that, under GWT and similar theories, we understand the consciousness in our heads with metaphors in the other direction, sometimes the metaphors are revealing.
I do think O&G are saying the supermind of America has a subjective experience of being America. Just before the quote above, they discuss the conflict during the American civil war, when the American supermind became divided. Each division had its own understanding of itself and the other, from its own perspective, understandings that were often distorted or wrong.
All that said, I’m not sure myself yet how I feel about the idea of superminds. We can give the same account with talk of cultural forces. It’s not clear what calling these groups “minds” really buys us.
I don’t think I will reach out for the book, because I can’t see what such a functional description can tell us. Do we know more by saying that “consciousness happens when there is resonance among representations in a number of modules”, or that it is “recurrent processing”? Why should a sequence of internal physical states, however complicated, complex, intricate, convoluted, or multilayer-networked, give rise to a subjective experience? The technical jargon, so in fashion nowadays, has only a seemingly explanatory power and adds nothing to a real understanding. Choosing labels such as ‘recursive,’ ‘feedback loops,’ ‘resonances,’ ‘information integration,’ ‘synchrony,’ ‘phase-locking,’ ‘re-entrant circuits,’ ‘self-referential,’ or whatever, doesn’t lead to any new insight. The reason why we are sentient, having qualitative experiences, qualia, self-awareness, etc., remains as mysterious as it was in the times of Descartes.
Each of the terms you list has a functional meaning. If you understand what they mean, and how they fit into overall causal chains, you can get an idea of how sensory information is processed in the brain, and how it leads to memory, behavior, and self report, including self report about subjective experiences. Unfortunately, due to space limitations, I couldn’t get into the details, but the book does.
I do discuss a version of those details in a post on global workspace theory.
But that does entail accepting that qualitative experiences are functionality. If you can’t accept that, then there’s nothing here that addresses an experience separate from activity. It’s not clear to me that anything actually can.
“But that does entail accepting that qualitative experiences are functionality. ”
In fact, accepting always needs a leap of faith. Accepting that qualitative experiences are functionality is, at least for me, the acceptance of magic.
That’s pretty much the opposite of the conclusion I reach. It seems to me that we can provide a functional account for every aspect of subjective experience, and through that, an evolutionary account of how and why it was naturally selected.
Except, that is, for some ineffable something or other that no one can define or articulate. The only reason to suspect this ineffable undefinable thing might exist is introspection. But we have decades of psychological data showing that introspection is an unreliable source of information. So in my view, that acceptance is a natural step, and the only way to avoid magic.
But as I’ve noted to others, if someone insists the ineffable undefinable thing exists, it’s beyond science’s ability to show it’s not there, or there.
The question is not whether introspection is reliable or not. It is why it is associated with a subjective phenomenal experience. One can speak of feedback loops or self-monitoring modules, etc., but that does not explain why a subjective experience arises. Were it not for the introspective first-person subjective dimension, we would not even write about these things here. In principle, science must deny phenomenal consciousness because there is no evidence for it (that’s, after all, what impels eliminativists). If we nevertheless are talking about it, it is only because we have an inner experience that cannot be determined from an exclusively third-person perspective, and we also assume that others have it too. There is no logical or scientific reason for our conversation to exist other than that ineffable, undefinable, “unreliable source of information” that, nevertheless, is damn hard to deny.
The only thing we can say for sure is that we experience something, that there is sentience with its related mental contents. Everything else we think of is derivative from that. When you look at a chair, you don’t see the chair “as it is in itself”, right? It is a useful abstraction necessary to make sense of the world, yet an experience. The same goes for what we call “function”, “functionality”, “process”, etc., which are already mental phenomenal constructs that pretend to explain themselves. Any attempt at a functional account of subjective experience misses the point; it is a non sequitur from the outset. Theories such as IIT or GWT do not furnish much insight when it comes to the philosophical issues related to phenomenal consciousness. At best, they shed some light on what Chalmers called the “easy problem”, but there is a qualitative, conceptual, logical, and almost abysmal hiatus between function and sentience. There is no reason to believe that whatever complicated, self-referential, self-reporting (name it) functions, processes, physical phenomena (name it) should elicit a private introspective phenomenal experience, which we loosely call ‘sentience’. If it does, then I don’t see why it does, other than saying that it is pure magic.
I’m not an eliminativist. I think phenomenal consciousness exists. I just don’t think it exists separately from access consciousness. I think it is access consciousness from the inside. Consider a donut hole. Does it have any existence separate from the donut itself? Or is it something that exists due to the donut’s structure? Or democracy without voters? Or a news media without events and watchers?
Like these other things, I think phenomenal consciousness exists within the structure of information processing and access and utilization of that information. In this view, phenomenal consciousness requires access and utilization of content to be phenomenal. It’s not that phenomenal content exists and is then accessed. The very dynamics of access is what makes it phenomenal.
I actually think what Chalmers calls the “easy” problems (which are far from easy) are the real problems. Although he omits affective evaluative reactions from his functional list. If he had included them, the answer would probably be more obvious: that the hard problem is just the sum total of all the easy problems.
Gilbert Ryle solved the hard problem in 1949 when he pointed out that considering the whole separate from its parts, which is what the hard problem does, is a category mistake. He uses the example of a tourist at Oxford being shown the lecture halls, administrative offices, meeting faculty and students, and then saying, “But where is the university?” Or a child being told that a parade will feature a particular division. The child watches large numbers of soldiers and equipment go by, but then asks, “But where is the division?” In both cases, they’ve seen the trees without realizing they were seeing the forest.
For now, if we can’t agree whether there’s anything more than the “easy” problems, maybe we can agree there’s value in making progress on those problems. If there is something other than those problems, solving them will make it more obvious. If there isn’t, we’ll be done.
Wonderful observation Marco! Even though science has progressed in so many ways since the days of Descartes, regarding “consciousness” things seem to be just as much of a joke today as they were back then. And perhaps even more so given that the popular illusionist band of philosophers seems intent upon overturning his “cogito ergo sum”. There are two things that give me optimism however.
One is Eric Schwitzgebel’s innocent conception of consciousness. How might science effectively grasp something that it can’t effectively define? Apparently not well. If widely adopted I believe that his definition could help set things straight for these scientists and so counter those who profit from endless failure given magnificent demonstrations of their charisma and so forth.
(Linked PDF: DefiningConsciousness-160712.pdf)
Secondly there is Johnjoe McFadden’s proposal that consciousness might exist in the form of certain neuron produced fields of electromagnetic radiation. Unlike standard proposals which theorize consciousness as certain generic information that’s properly converted into other generic information (and so have any number of ridiculous implications that are effectively ignored), his proposal is strange in the sense that it’s actually falsifiable. Furthermore it has no ridiculous implications that I know of.
https://aeon.co/essays/does-consciousness-come-from-the-brains-electromagnetic-field
Let me know if you’d like any specific thoughts on Schwitzgebel and/or McFadden.
Regarding functionalism, is this not a tautology? Yes, functionalism should be true in a natural world to the extent of a given functional reference. If one number cruncher produces the same output as another, then in that limited functional sense they should do the same thing. Finer analysis, however, should demonstrate that they don’t do the same things, and to whatever degree of difference we’d like. Thus the functionalist can always claim they mean whatever level of function you demand, and so have no potential to be wrong.
Regarding consciousness, I’ve noticed functionalists tend to theorize that if a standard computer can speak as well as a human can (as in the Turing test), then it must be conscious. Firstly I think they should be more humble about this, since there is no indication that a standard computer can do so whatsoever. Secondly they seem to presume that their merely theorized human-speaking standard computer demonstrates that algorithm function alone explains how the human brain itself creates the phenomenal element to our existence. If you’re interested, I think that my thumb pain thought experiments demonstrate tremendous problems with this position.
“I’m not an eliminativist. I think phenomenal consciousness exists. I just don’t think it exists separately from access consciousness. I think it is access consciousness from the inside. Consider a donut hole. Does it have any existence separate from the donut itself? Or is it something that exists due to the donut’s structure? Or democracy without voters? Or a news media without events and watchers?
Like these other things, I think phenomenal consciousness exists within the structure of information processing and access and utilization of that information. In this view, phenomenal consciousness requires access and utilization of content to be phenomenal. It’s not that phenomenal content exists and is then accessed. The very dynamics of access is what makes it phenomenal.”
I don’t feel this really saves functionalism from the objections above. Saying that phenomenal consciousness is access consciousness only states something as an axiom, but does not make it more comprehensible. Because the question then is: who or what is doing the accessing? If we say that it is the action of accessing and utilizing an information process (say, the synchronous information retrieval and/or sharing from/between many memory modules at once, à la GWT or IIT, or whatever…), then one wonders why “accessing” is supposed to be phenomenal consciousness. Could access and utilization of content not be what it is, namely a process, without also being at the same time something else entirely new, namely a subjective perception, apparently by magic? Therefore, this sounds to me like a lexical move that hides the very concrete fact that in reality, access and utilization of content, for some mysterious reason, is not but becomes phenomenal consciousness.
As to the analogy, consider that the “donut hole”, “democracy”, or “news media” aren’t phenomena, other than in and of our minds. We do not “access” these “things”; we reify them in us, and only at that stage do they become the most concrete and tangible “phenomena”. This again raises the question: who or what does the reifying?
“Gilbert Ryle solved the hard problem in 1949 when he pointed out that considering the whole separate from its parts, which is what the hard problem does, is a category mistake. He uses the example of a tourist at Oxford being shown the lecture halls, administrative offices, meeting faculty and students, and then saying, “But where is the university?” Or a child being told that a parade will feature a particular division. The child watches large numbers of soldiers and equipment go by, but then asks, “But where is the division?” In both cases, they’ve seen the trees without realizing they were seeing the forest.”
Can your mind think of something which is only a part without it also being a whole? Our mind always perceives and thinks in terms of wholes. Sounds like Ryle wasn’t aware that Fichte already answered this objection. Think of your body. Unless you have some pain or some stimulus that attracts your attention, you feel and think of it as a whole (you don’t particularize into the arms, legs, head, etc.). Then think of your arm: it is apprehended as a whole again (you don’t particularize into the hand and fingers). The same goes for the hand; you can conceptualize and feel it as a whole without necessarily focusing on each finger. Experience shows us that our mind always grasps wholes, and only later separates them into parts, which appear again as wholes. Which leads me to think the other way around: considering the parts separate from their whole is the category mistake.
“For now, if we can’t agree whether there’s anything more than the “easy” problems, maybe we can agree there’s value in making progress on those problems. If there is something other than those problems, solving them will make it more obvious. If there isn’t, we’ll be done.”
Absolutely! 😊
On equating phenomenal consciousness with access consciousness being axiomatic, consider this. You’ve mentioned zombies a few times in this thread. So let’s say we’re not concerning ourselves with a theory about p-consciousness, only one that explains the capabilities and behavior of a philosophical zombie, including the zombie’s ability to discuss their (non-existent) phenomenal experience, in other words, their fake-consciousness, or f-consciousness.
Does a theory of f-consciousness present any particular difficulties of the kind you discuss? Suppose we develop such a theory, are able to validate it empirically, and then build a system according to its principles. In your view, this would be a philosophical zombie, right? But it would be one that insisted it had p-consciousness.
Now, for someone who doesn’t believe in p-zombies, is there any reason not to accept f-consciousness as p-consciousness? And for someone who does believe in p-zombies, by what standard do they decide which entities are conscious and which are zombies?
I’m not sure I understand what you are trying to point out here… Anyway, if p-zombies exist, the question is whether they have the ability to discuss their (non-existent) phenomenal experience in the first place. I doubt that. It is like Mary the neuroscientist, who is color-blind but knows everything about the neurology of color perception and fakes the ability to see colors. Maybe she can to a certain extent (say she has a spectroscope that tells her the wavelength every object emits), but I guess she will sooner or later betray her lack of experience. Sort of how AI automatic translators betray their lack of semantic understanding. So, the only way a p-zombie could decide which entities are conscious and which are zombies is by guessing, or by repeating others’ assessments, or by whatever empirical and logical extrapolation. But it will never do so before it sees others talking about it. That is, it has nothing to introspect. Contrary to a conscious being, which does it on the assumption that others are conscious. That is, one first introspects.
If the zombie can’t describe their (non-existent) experience, then it doesn’t seem to meet the definition of a philosophical zombie. From the SEP article on zombies (emphasis added):
More importantly, their inability to discuss their experience would be a functional difference between them and an actual conscious entity, which puts us back in the position of functional descriptions and theories.
So, if p-zombies are possible, then consciousness is metaphysically epiphenomenal and impossible to establish scientifically, and the best we can ever do is a theory of how a p-zombie works. If p-zombies are not possible (my view), then a theory of how a p-zombie would work is a theory of consciousness (or at least includes such a theory).
Unless of course I’m missing something.
Good point. I agree that p-zombies are not possible, precisely for the argument I made previously: if we suppose, ad absurdum, that they exist, then this leads to a contradiction (though there are many who consider it a flawed argument). But I suspect we have different reasons to think so. My argument is that if an organism or a machine has no experience, it may emulate to a certain degree of perfection the associated behavior of having that experience, but only “asymptotically”. Sooner or later the conscious beings surrounding it would notice a difference. Because there are things one simply can’t describe or relate to without the experience, however smart one might be.
But, again, that inability to discuss an experience doesn’t stem from a functional difference. Because in such a machine, there is no experience to discuss in the first place. It would always have to infer the description of the experience from outer observations of the behavior of others, or from empirical data about the world, and then emulate it asymptotically. Whereas a conscious being has no necessity to do so.
So, I agree p-zombies aren’t possible, but a theory of a p-zombie isn’t possible either for the above reasons. Therefore, one can’t build upon it a theory of consciousness.
I’m actually on the same page about a putative zombie only being able to emulate experience “asymptotically”. It is possible to create a behavioral zombie that can fool a casual observer for a few seconds. But it only takes minutes of interaction for the difficulty to skyrocket, to the point that no program has yet (legitimately) passed the common version of the Turing test, which only requires fooling one third of human interrogators after five minutes of conversation. Fooling more than half of the human interrogators after hours, days, or weeks quickly escalates things to the point that faking it requires astronomical, and eventually cosmological scale resources.
But yeah, we disagree about whether the inability to self report is a functional difference. I suspect we’re getting back to that ineffable stuff, so I won’t loop.
Appreciate the conversation!
“One is Eric Schwitzgebel’s innocent conception of phenomenal consciousness. How might science effectively grasp something that it can’t effectively define?”
The problem with defining consciousness shouldn’t be surprising. Every definition is a statement of the meaning of something. Meaning is already a realization of a semantic and lexical content that presupposes a conscious experience a priori. Otherwise, one could not frame that definition or, at least, could not understand it. To define consciousness we need semantic objects that are already objects of conscious experience. It’s circular logic. There is no way out, not even in principle. That’s why definitions of phenomenal consciousness can only rely on examples of lived experiences, as Schwitzgebel does. A zombie could not make anything out of it.
“Regarding consciousness, I’ve noticed functionalists tend to theorize that if a standard computer can speak as well as a human can (as in the Turing test), then it must be conscious. Firstly I think they should be more humble about this, since there is no indication that a standard computer can do so whatsoever. Secondly they seem to presume that their merely theorized human-speaking standard computer demonstrates that algorithm function alone explains how the human brain itself creates the phenomenal element to our existence.”
In fact, a simulation is not the duplication of a phenomenon. The philosophical zombie thought experiment makes this clear. Moreover, a computer algorithm does not even guarantee semantic knowledge, as Searle’s Chinese room thought experiment makes evident. Computers may well pass a Turing test, but there is nothing that “understands”, has a “perception of meaning”, or has a “semantic experience”.
I can’t dispute any of that Marco. My consciousness exists as me fundamentally. It’s the medium through which I experience existence. It’s the single element of reality that I cannot possibly be wrong about having. Yes, a priori. Nevertheless many tack on all sorts of other ideas to this innocent consciousness conception. (No need to require that consciousness be ineffable Mike). There’s already enough noise in the field for failure even if a simple and effective definition did happen to be agreed upon in general.
I’m also quite aware that a simulation of the weather, for example, will not provide that weather itself. Simulations may effectively be considered models to some level of precision. In this age where people tend to ignore the various silly implications that certain consciousness proposals have, it seems to me that the message on our side needs to become more focused upon the essential problematic element to what they propose. Consider the following attempt:
It’s generally presumed that when my thumb gets whacked, that nerves send signals to my brain about the event. From that point my brain should somehow use this information to create the pain that I thus experience phenomenally. For this reason I presume that whacked thumb information effectively animates some sort of head based phenomenal experience producing mechanisms which harbor the right kind of physics.
Conversely many consciousness theories take a shortcut by presuming that no such informationally animated mechanisms exist. They propose instead that the brain simply takes whacked thumb information and processes it into a proper second set of information. I consider this supernatural in the sense that information only exists as such in respect to the instruments that it informs — no animated mechanisms mandate no processed information. So from their position it would seem that if whacked thumb information on paper were scanned and processed into another set of information that’s correlated with the brain’s response, then something in this paper to paper conversion should thus feel what I do when my thumb gets whacked!
Conversely if the second set of inscribed paper were fed into a machine which could interpret it and was armed with the physics that the brain uses to create phenomenal experience, then something here should essentially feel what I do when my thumb gets whacked. It’s the inclusion of this physics that should convert a supernatural proposal which should thus have all sorts of funky implications, to a natural proposal which does not.
So what might this brain physics be? Observe that every time a neuron fires it effectively creates electromagnetic radiation associated with the event. Thus it could be that phenomenal experience exists in the form of certain parameters of amazingly complex electromagnetic fields. What other element of the brain harbors the fidelity of neuron firing, though in the combined sense of a serial processing experiencer? I can’t think of a second reasonable option.
Furthermore there is mounting experimental evidence supporting this position. For example the following paper discusses a recent experiment where scientists demonstrate that when a monkey correctly remembers to look in the proper direction to get a reward, it’s not the same neurons that are found to fire, but rather the same electromagnetic fields that are created by means of often different neural firing. Thus it could be that the noted similar EM field when successful harbored the proper phenomenal memory within, whereas when unsuccessful the proper memory was absent. https://www.sciencedirect.com/science/article/pii/S1053811922001872
“(No need to require that consciousness be ineffable Mike)”
I agree Eric, as long as we stick to a causal account. That’s what a functional access approach entails. I was referring to the claim that there’s something extra. Of course, maybe that extra thing is what you mean by “consciousness” here. In which case, my challenge to you is to describe it with more than just vague gestures toward it.
(Note that if you use Eric S’s approach of examples, you should be prepared for me to analyze them from a functional evolutionary perspective. Some people get upset when I do that.)
Ultimately I think “consciousness” will need to be defined as some sort of physical property that can be measured. However, that would seem to me to be true for any explanation. Functionalism doesn’t really seem to do this because it makes “spotting the function” into an imaginative exercise.
My hunch for what this might entail includes EM fields, possibly new information theory, and, even more speculatively, additional dimensions where the “space” of consciousness unfolds.
It is becoming apparent to me that the “space” where consciousness unfolds is deep within the gray matter of one’s brain, at the quantum level, most likely in the microtubules. Consciousness is a separate and distinct system that is quantum, one that emerges from and is intrinsically linked to the classical brain. The classical brain animates and brings this system online, a system which in turn has causal power in a feedback loop.
Science has demonstrated that all classical systems are objective, whereas the quantum realm of quantum mechanics and consciousness are the only systems that are subjective. I don’t know if any of you contributors can recognize the correlation here. 🧐
I think microtubules are one option. I still wonder about the role of magnetite in the brain. Certainly if we are into new dimensions, then QM would probably get involved along with possibly string theory if it ever reaches the point where it could predict something or distinguish between its myriad of implementations.
There is fascinating research being done at the level of microtubules. For example: the same chemical compound that is used as a sedative to put a human being to sleep is the same compound that puts a plant to sleep, and the sleep and wakeful states in plants are directly observed in the behavior of their microtubules. In addition, the quantities of the compound used in humans and plants are in the same ratio relative to body weight.
As a footnote: there are those who object to the mind being a quantum system, citing the problems inherent in quantum computers and the need to operate those systems at close to absolute zero. But keep in mind that quantum computers are classical systems modeling a quantum system; they are not quantum systems per se. Therefore, the energy constraints of a “so-called” quantum computer do not apply to a truly low-energy quantum system like the mind.
What leads you to think “consciousness” needs to be defined as a physical property?
The question that always comes up for me, for any proposed property, is: why that property in particular? What about that specific substrate makes it necessary and sufficient for whatever we mean with the “consciousness” label?
For me, identifying a substrate is only part of the puzzle. There still needs to be an explanation of that substrate’s causal role. And why that substrate is the only one that can play that role (if that’s the proposition being made).
How do you propose to study it scientifically if it isn’t a physical property?
I didn’t say anything about a particular substrate in this context, although I have elsewhere. If it becomes understood as something physical and measurable, then the questions about substrate will be answered. I tend to think it prefers biological organisms but I can’t say if it is exclusive to them.
If it is just a bunch of assorted functions, then why those functions in particular? What is the rationale for including or excluding any particular function? Could we in the future come up with new functions or remove old ones?
Can we name one function that only a conscious entity can perform?
All the proposed functions in the lists seem like they could be performed by a variety of devices to some increasingly improving degree.
I guess it depends on what you mean by “physical property”. Does it include physical processes? It seems strange to call a hurricane a “property”, but a “process” seems much more natural. In that sense, I agree with O&G that a mind is activity, a set of physical processes, which seems completely up science’s alley.
I can’t see any strict fact of the matter for which functions are included in consciousness. But like life, there are certain ones that are at least necessary to trigger most people’s intuition of a fellow consciousness, such as taking in information from a body and the environment (at least at some point in the system’s history), memory, attention, a model of at least a bodily self, and having quick evaluative reactions (feelings) which can be overridden after further sensory-action simulations, used for learning, etc.
Any one of the above in isolation seems unlikely to trigger our intuition of a consciousness. It takes the package. Some people might see taking in information from the environment as sufficient, but balk when that’s applied to something like a self driving car.
All of which is why I conclude that consciousness lies in the eye of the beholder.
Mike,
I do use Eric S’s example approach to define consciousness. And no, I don’t mean anything “extra”. To do so would sacrifice the innocence of his conception. I look forward to your analysis of this example-based conception from a functional evolutionary perspective. As I recall, he presented positive examples, such as remembering various things, that display consciousness as we see it. He also presented various negative examples, such as not being conscious of cell lipid absorption. Then he even presented certain questionable examples that he decided would not be helpful to judge one way or the other. In any case, I suspect that you know what we mean by “consciousness”, and you’ve mentioned that you do as well. Thus if you’d like to use some examples of your own to assess from a functional evolutionary perspective, that should also be fine.
I’m not sure why a functional evolutionary perspective of innocent/wonderful consciousness would or should upset me. Actually I’ve developed such a perspective of my own. Surely you must recall it somewhat. Here’s a refresher:
Theoretically, brains evolved as non-conscious computers to help operate certain forms of life in a macro sense, which is to say in terms of joined cells as a whole. But apparently brains couldn’t be algorithmically programmed well enough to meet various “open environment” circumstances, which is to say the kind where variables can be quite diverse. (Conversely, the game of chess lies in a relatively closed environment that’s quite appropriate for algorithm-based function alone.) Thus for a while there should have been biological robots that were held back by the inadequacies of their standardly computational brains. I presume that the best of them functioned far more autonomously than any of our robots today are able to, and yet still could not advance further by means of such algorithmic instruction alone.
At some point a primitive phenomenal experiencer must have emerged in some versions, whether by means of certain neural EM fields or some other kind of brain physics. Thus consciousness should have technically emerged, though epiphenomenally at that point. Eventually certain iterations must have been given opportunities to affect organism output function however, and some of these agents must have done well enough for evolution to give them more and more resources from which to function in a more full cognitive capacity, ultimately leading to us.
Eric,
My advice, for any philosophical discussion, is to be cautious in assuming the other person knows what you mean by a term. I’ve written extensively about how amorphous and protean the “consciousness” term is. People can mean anything from physical interactions to self reflection by it.
Based on our past conversations, I generally take you to be referencing a system that has affective feelings. (Although I think it’s been a while since you articulated that. So my understanding could be dated.) Affects are built on reflex arcs, which evolved because they were adaptive. And affects themselves basically provide the valuation functionality used in deliberation and learning. That’s about the only functional analysis I have at the moment.
I’m broadly onboard with a lot of your evolutionary account, but as we’ve discussed before, I think phenomenal experiencing evolved much more gradually than your take, and I doubt it was ever epiphenomenal. At least we can agree, I think, that it is most definitely not epiphenomenal today.
Technically I didn’t say that I assumed you understood my conception of consciousness, Mike, but rather that I suspected you understood. I do actually presume quite often, though rarely assume. This was merely a suspicion however. Regardless, if you had provided an example of consciousness that I didn’t agree with, I should then have been able to correct you. As it happens you didn’t provide any, so I guess you haven’t yet demonstrated that you understand innocent consciousness. I still suspect that you do, however.
On your problem with innocent consciousness ever being epiphenomenal, I presume this is given your functionalism — no function, no existence. Unfortunately however this position seems at odds with how life is thought to evolve, which is to say, serendipitously. Thus certain mutations might give something a trait that isn’t functional, though it merely exists and propagates by chance. For example let’s say that four hand digits become five in a given gene line, though the new digit is small and essentially useless. That would be somewhat like an epiphenomenal consciousness existing when the right neurally produced EM fields create it. And just as that initially useless fifth digit might with enough iterations become functional, and eventually even the famous opposable thumb, that initially epiphenomenal consciousness might eventually think and write words such as these.
Eric,
Mutations and associated traits are random. But unless the trait is completely energy neutral, it always has a benefit or cost. It could be a spandrel, a side effect of another beneficial trait, but might persist for a number of generations as long as its energy cost or other costs don’t exceed the benefits of the selected trait. But spandrels are controversial, and if they exist, they tend to be small and limited in scope. So any window for it to have been epiphenomenal (in the functional rather than metaphysical sense) would have been very small.
Myself, I think consciousness is built on prediction, and the earliest predictions would have been exceedingly simple, barely noticeable, but something that made a slight difference in selection. That’s all evolution would have needed to get started.
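For readers who want this idea in concrete form, here is a minimal Python sketch, purely illustrative and not anyone’s actual model: a bare reflex arc fires only on the current stimulus, while the “predictive” version keeps a leaky trace of recent stimuli, letting it respond to a weak but persistent signal that the bare reflex never would. All thresholds and values are invented.

```python
def reflex(stimulus: float, threshold: float = 1.0) -> bool:
    """A bare reflex arc: respond only to the current stimulus."""
    return stimulus >= threshold

class PredictiveReflex:
    """A reflex arc with the simplest possible anticipatory twist."""
    def __init__(self, decay: float = 0.8, threshold: float = 1.0):
        self.trace = 0.0          # crude expectation built from recent input
        self.decay = decay
        self.threshold = threshold

    def step(self, stimulus: float) -> bool:
        # Blend the decaying memory of past input with the present stimulus.
        self.trace = self.decay * self.trace + stimulus
        return self.trace >= self.threshold

stimuli = [0.5, 0.5, 0.5]              # weak but persistent signal
print([reflex(s) for s in stimuli])    # [False, False, False]
arc = PredictiveReflex()
print([arc.step(s) for s in stimuli])  # [False, False, True]
```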
Mike,
Clearly there are things which exist because other things cause them to. The heart’s adaptive purpose is to pump blood, even if doing so also creates heat by means of such movement. I see from Wikipedia that they’re saying the human chin might be a spandrel. Whatever on that, though you needn’t question whether or not spandrels exist. Of course they do. Your energy concern is a real one, though it might be overcome with probability over time. It’s also not just one spandrel to be concerned about, but many that can come and go. Given enough iterations, sometimes the right ones should all be present at once to produce adaptive function.
I wonder if you could use a word other than “prediction” for what you’re talking about? To me that term seems too phenomenal from the start. I predict by means of my conscious function (the innocent kind that I suspect you grasp). Conversely as I understand it, my computer never predicts anything. It doesn’t “predict” that I actually meant to capitalize the first word of this sentence for example, but rather capitalizes the word by means of effective algorithms.
Beyond finding a better term for “predict”, I’d like you to try to provide a sensible argument that a given machine might accept some variety of information, and then process it into other information that itself exists as phenomenal experience. I’d like you to justify how processed information could exist that itself is the product, which is to say needn’t animate anything else that creates that product by means of the processed information. This seems impossible in a natural world.
In any case observe that my own account for the emergence of functional consciousness from non-functional consciousness would be very low energy, and so would have tremendous evolutionary potential. It wouldn’t initially require any dedicated cells — just a series of the right synchronous neural firing that thus feels like something to a functionless experiencer. This is a natural account which is thus unburdened by all manner of silly implications.
Eric,
You should think about why so many scientists like the word “prediction” here. You can use other words, like “expectation”, “conclusion”, or “inference” (and I do occasionally use all of them), but “prediction” I think best gets at the adaptive value this activity provides. Consider: if I have to determine whether an object in the distance is food or a predator, what is the adaptive value of reaching a conclusion about that? Ask Dennett’s hard question here, “And then what happens?” That conclusion feeds into an affordance based assessment of what will happen if I approach the object vs avoid it.
“I’d like you to justify how processed information could exist that itself is the product, which is to say needn’t animate anything else that creates that product by means of the processed information.”
Sorry, but I’ve already explained why I’m not going to discuss this anymore: https://selfawarepatterns.com/2021/09/06/clarifying-agnosticism/comment-page-1/#comment-144810
Okay Mike, I won’t ask you that question anymore if you’d rather I not. Maybe evidence will turn the tide some day soon and you’ll change your mind regarding this matter. Regardless I suspect that my argument is pretty good so it’s something that I enjoy discussing. After Eric Schwitzgebel’s next book is published I’ve promised to get him the best version of the argument I can manage. That’s one reason that I’d like feedback in general.
On the prediction term, instead of yourself assessing whether something in the distance is food or a predator, what would you say about one of our robots analyzing camera information about something in the distance? I’m uncomfortable calling anything there “a prediction” given the consciousness association, even if the information it detects suggests something that will come and beat it to bits. I’d say it would just be following its algorithms regardless of what it does, and the same for a non-conscious organism armed with light based information senses. But if that information were to also cause something to have phenomenal fear, for example, and this fear were to cause it to run various evasive action algorithms (perhaps incited by certain forms of electromagnetic ephaptic coupling), then I could see using the “prediction” term. Any problem with that assessment?
Eric,
I’m not clear why you’d be uncomfortable with the idea of a robot making a prediction. Every time I open my laptop, it scans my face to recognize whether I’m the logged in user (using a neural network which initially has to be trained). That’s essentially a prediction, a probabilistic conclusion, one that, if it gets it wrong, may provide access to the wrong person. (Or deny me access, which is the more likely scenario, particularly if I open it wearing a facemask.)
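To make “prediction as probabilistic conclusion” concrete, here’s a toy sketch. The classifier score, the threshold, and the unlock logic are all invented for illustration; this is not any real face recognition API.

```python
def unlock_decision(p_match: float, threshold: float = 0.99) -> str:
    """Act on a classifier's score as a prediction about who is present."""
    return "unlock" if p_match >= threshold else "deny"

print(unlock_decision(0.997))  # "unlock": predicted to be the logged in user
print(unlock_decision(0.42))   # "deny": say, the user is wearing a facemask
```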
And why don’t you think your perception is algorithmic? Most visual illusions are demonstrations of the algorithms going wrong. We can use the word “processes” if you prefer, since running algorithms are always physical causal processes. It seems like any physical theory of consciousness is going to reduce to such processes.
Mike,
On the terms that we use, observe that they’ve generally evolved over the centuries to reflect conscious rather than non-conscious function. We can euphemistically say that one of our computers is “thinking”, “happy”, “angry”, “hurting”, and so on, though we obviously don’t mean that it phenomenally experiences its existence. “Prediction” would be another such term. I phenomenally predict various things, though presume that my computer was not built to function that way. And if we happen to be talking about what it would take to build something that actually is conscious (as you and I commonly do), it would seem reasonable to not use words which suggest that a given computer already has some kind of proto-consciousness associated with its function. That seems to presume the truth of a not yet empirically demonstrated belief. You displayed this belief when you said:
“Myself, I think consciousness is built on prediction, and the earliest predictions would have been exceedingly simple, barely noticeable, but something that made a slight difference in selection. That’s all evolution would have needed to get started.”
Thus here you seem to imply that because your computer matches up a current picture of your face with one on file, it’s at least proto-conscious. So yes, I think I’d prefer you to use a term like “process” rather than “recognize” in such situations.
Actually I do consider my own perceptions to be logarithmic, though very differently so from how the brain itself works, as well as the computers that we currently build. My dual computers model of brain function runs like this:
The computers that we build are generally powered by means of electricity. The brain is powered by means of electrochemical dynamics. Furthermore under the right circumstances brain neurons can fire in a way that creates something that feels good/bad, which is to say a phenomenal experiencer that probably exists in the form of associated EM fields.
The standard computer is algorithmic in the sense that electricity forces it to accept input information and process it into other information. The produced information might then go on to animate the function of a computer screen for example.
The brain is algorithmic in the sense that electrochemical dynamics force it to accept input information from the body and process it into other information that might go on to animate an organ such as the heart.
Then finally an EM field that exists as the experiencer of existence is algorithmic in the sense that the phenomenal experience that it receives gets processed, which is to say “thought about”, and its algorithmic output will be to do what it decides will make it feel best from moment to moment. So yes I do consider everything here algorithmic in the end, though this phenomenal variety is quite different from the other.
Then regarding visual illusions, I essentially consider this to be a matter of brain algorithms providing the experiencer with an incorrect perception. This would be a non-conscious algorithmic result that can trick a phenomenal experiencer, either of which should function algorithmically in the end.
Nerd point for anyone who spotted where I said “logarithmic” when I meant “algorithmic”. 🤪
Eric,
Note that I said consciousness is built on prediction, not that it is prediction in the sense that any prediction is conscious. As I’ve discussed in the hierarchy posts, you need a lot more functionality to trigger the intuition of a conscious system in most of us. So saying a machine predicts something isn’t saying it’s predicting it the way our currently evolved nervous system does it. At least not yet. My point though is that evolution probably started with very simple predictions over reflex arcs, with improvements being selected from there.
“So yes I do consider everything here algorithmic in the end, though this phenomenal variety is quite different from the other.”
Thanks for clarifying this. The interesting question for me is which algorithms. That’s what most of the theories I’m interested in deal with.
Okay Mike, you’re not saying that your computer is even proto-conscious given that it can effectively unlock for you but not for someone else. Good. Still your reasoning that it isn’t conscious does seem suspect to me. Should it matter that something can “trigger the intuition of a conscious system in most of us”? Is that what we should consider phenomenal experience (such as the experiences that you have visually) to be constituted by, the results of some human opinion poll? Why should what people in general think determine whether or not something is phenomenally “seen” by a machine such as yourself? It seems to me that opinions should not be relevant to the existence of such an example. Thus many of us consider the “something it is like” heuristic to at least be somewhat helpful, though Schwitzgebel’s “innocent consciousness” paper provides a far more thorough account. It’s a personal value dynamic that somehow exists for entities that experience their existence, and not otherwise.
As for your point that evolution probably started with very simple predictions over reflex arcs, with improvements being selected from there… yes I think that’s right. My model goes that way as well. But logic suggests that things couldn’t have begun in a phenomenal capacity without the existence of a phenomenal experiencer. This is why I think consciousness must have evolved in the same manner that evolution works in general, which is to say that first something emerges which isn’t useful, and then given this existence it has some potential to become implemented and thus useful. Thus a mandated epiphenomenal beginning. I don’t see how innocent consciousness could possibly have become functional until after it existed non-functionally. And surely many iterations must have failed because the right circumstances didn’t also exist in those situations, though eventually things did work out. And it seems to me that even today various instances of consciousness should be non-functional regardless of its general potential usefulness. For example there is the horror of being held captive for torture.
Regarding “which algorithms”, my thought would be the kind which animates the right variety of physics. 🙂
Eric, looks like you double posted. I kept this one and deleted the one below. Let me know if I need to revive it.
Remember, I am the “consciousness is in the eye of the beholder” and “like us” guy. It’s a matter of classification. Whatever happens in a particular system happens. The question is how to categorize it. I agree with Anil Seth that consciousness is more like life than temperature. There isn’t any one simple objective property that we can observe to be there or absent to indicate its presence or absence. I know you disagree. All I’ll say is that evidence would change my mind, or at least a compelling reason why the substrate explored by neuroscience is insufficient. But right now, I think cognitive neuroscience is getting the job done.
I agree on us favoring different definitions Mike, though theoretically if I’m assessing one of your ideas then I need to use your definitions, and if you’re assessing one of my ideas then you need to use mine. Otherwise evaluations can’t effectively work. There are no true/false definitions, but rather only more and less useful ones in a given context.
In any case yes, I’m sure that you’d alter your perspective with good enough contrary evidence. By the way, I’m not sure you’ve ever told me whether you think that my way of testing McFadden’s theory would, if successful, be reasonably conclusive. You must have heard me mention it several times, as I just did below to Marco. If some number of charges were wired into the skull to fire with about the strength and frequency of neurons, and under the right conditions this were ultimately found to tamper with things like the person’s vision, memory, thought, and so on, which would be quite reportable, then do you think you’d begin to suspect that innocent consciousness probably exists in the form of certain neuron produced EM fields that must get tampered with when other EM fields alter them? Marco disagreed. What’s your perspective?
Sorry Eric. While I’ve seen you discuss it, I have to admit I haven’t been keeping up with the latest versions of your testing scenario. The trick, I think, is reproducing the field in the same manner that neurons produce it. Are you going to insert a tiny electrode by every neuron? Anything short of that, such as inserting electrodes every centimeter or so, would change the geometry of the field.
The strength of an EM field falls as the inverse square of the distance. So the field would be much stronger right at the electrode and much weaker half a centimeter away. But neurons are all much closer to each other. So you can have a field at the electrode be at the same strength as emitted by neurons (which is very tiny), but orders of magnitude tinier half a centimeter away. Or you can compensate by having it be stronger at the electrode so that it matches the natural field at the half centimeter mark. But now you have a much stronger field right by the electrode.
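For concreteness, here is that falloff arithmetic in a minimal Python sketch, taking the inverse-square claim above at face value (real neural fields and tissue are messier; the distances are invented for illustration).

```python
def field_at(distance_um: float, strength: float = 1.0,
             reference_um: float = 1.0) -> float:
    """Relative field strength at distance_um, given the strength at reference_um."""
    return strength * (reference_um / distance_um) ** 2

# Calibrated to neuron-scale strength 1 micrometer from the electrode:
print(field_at(1.0))     # 1.0 at 1 um
print(field_at(5000.0))  # 4e-08 at half a centimeter (5000 um)

# Compensating so the half-centimeter mark matches the natural field means
# the field right at the electrode is 25 million times too strong:
print(field_at(1.0, reference_um=5000.0))  # 25000000.0
```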
From a practical perspective, you might be better off focusing on exactly how strong a TMS pulse needs to be to have reportable effects. You can then calculate what that strength is at the affected neurons. If it’s at the strength of the brain’s endogenous field, that might count as success, but failure won’t count as falsification because of the geometry differences. (I’d be surprised if someone like McFadden hasn’t already tried this.)
So you admit to not keeping up with all of my schemes Mike? Well perhaps we can rectify that somewhat for this particular scheme.
Apparently I haven’t specifically illustrated to you what exactly my proposed test happens to be. You make a good point about the strength of a field being inversely proportional to the square of the distance from it. That just means however that certain implanted transmitters might need to be similarly close to each other as the neurons that they’d be meant to emulate. I’m not entirely sure that this would be required for an exogenous test however. The point wouldn’t exactly be for a given transmitter to be affected by another transmitter’s energy, even if that’s sometimes required for actual neurons to get them to fire synchronously, but rather for the right number of transmitters to ultimately fire synchronously in the right way to create the right overall EM field. It seems to me that if we were to wire our transmitters in series, then a single power source should inherently create synchronous firing. Conversely neurons each have independent power sources and therefore may sometimes need field effects to achieve certain synchrony. (Of course they might also achieve such synchrony synaptically, but regardless.)
It’s interesting that you speculated that McFadden has probably already tried TMS experimentation to verify his theory, when apparently he not only has tried, but has been successful. On his site, as the lead up to presenting his seminal 2002 paper, he says this:
“I discuss the role of the brain’s EM field in neurons and brains and provide evidence, for example from Transcranial Magnetic Stimulation (TMS) of the brain, that the brain responds to EM fields of similar structure and magnitude as the brain’s endogenous EM field.” https://johnjoemcfadden.co.uk/popular-science/consciousness/
Anyway my proposal runs like this:
It is theorized that all phenomenal existence for something conscious (and mind you that I mean nothing ineffable or otherwise non-innocent here!) resides in the form of certain electromagnetic fields that are created when various neurons fire with the right synchrony to get into the right zone. Furthermore it is theorized that a given decision of the experiencer to do something will not only exist as an associated part of that total field, but that this part of the field may also have field effects which incite such output function in associated neurons, whether muscular output or another kind. Thus a decision to move in a certain way would exist as an element of the EM field that also causes associated neurons to actually produce such movement by means of ephaptic coupling.
If consciousness exists as an amazingly complex neuron produced EM field however, then it ought to be possible to tamper with that field in ways that would seem quite strange and reportable for a human experiencer. That’s the crucial point to this in the end. McFadden has displayed that his cemi theory is consistent with lots of experimental evidence, though his colleagues collectively yawn, perhaps given their conflicting priorities. But if an EM field that itself exists as the conscious experiencer could be affected by exogenous EM fields that we create (hopefully without otherwise affecting the brain itself, or at least if it were demonstrated that such externalities were irrelevant), and thus the conscious experiencer could tell us things about how expected phenomenal experience has been altered (like image alterations, or even weird feelings), I don’t see how science could continue to ignore this proposal. Continued verification should force scientists to agree that consciousness must exist specifically as certain explorable neuron produced EM fields. A huge door of study should thus open to the field while also closing the door on a vast array of existing proposals. Such a discovery should be on the order of confirming Einstein’s relativity theory. Nevertheless far greater upheaval should occur given far greater conflicting investments today versus the investments of physicists back then. Does this seem about right to you Mike?
Though my thought has been to try to essentially implement what synchronously fired neurons do (and so implant something that might need to be quite substantial into the brain), it could be that this isn’t entirely necessary. It could be that one or a small number of transmitters could get the job done by replicating the field produced by hundreds, thousands, or even millions of synchronously fired charges. That could be tried outside the brain with technological field detection. If so then wonderful, let’s try that. Remember that it’s only the EM field itself that matters here, not the machinery which creates the field. This is experimentally displayed in so called “representational drift” — it’s observed that neurons for a given application change, though seem to continue displaying similar EM function.
Eric,
I’d be curious to see an evaluation of McFadden’s TMS evidence by someone with the relevant expertise. Stimulating neural firing with a TMS pulse, I think, wouldn’t do it, unless it can be clearly demonstrated (again to a relevant expert) that the pulse at the affected neurons is about the same strength as the field generated by the neurons.
Sorry, but your plan still seems infeasible to me. I think you would have to put an electrode by every neuron. We imagined something similar with the thought experiment the other day, but the idea of really doing it seems fraught with issues. You say the equipment doesn’t matter, but I think for the geometry of the field to be right, it does matter.
Mike,
The feasibility of what I propose isn’t actually the question that I was asking you. I only went into some specifics to perhaps help fill in some blanks for you. If you don’t like those specifics then fine, you could instead use your own. So let’s try this as simply as we can.
McFadden theorizes that the brain exists as a non-conscious computer. Furthermore he theorizes that “consciousness” exists within the field of electromagnetic radiation that it produces neuronally, and under certain not yet discovered parameters. (Again I always mean Schwitzgebel’s innocent conception of consciousness.) So how might we conclusively validate his theory if true?
Let’s say that scientists were to create an electromagnetic field inside a person’s skull somehow, and when they did so in certain ways the person would report strange phenomenal experiences. Furthermore let’s say that scientists could confidently rule out that their EM field was altering the brain itself to have such effects. Maybe vision would go in and out. Or maybe a strange hiss would become audible. Let’s also say that with enough practice on various people scientists would figure out how to create fields that make a person feel uncomfortable, or perhaps lethargic, and so on. My thought is that this would be quite conclusive evidence that consciousness must exist as a neuron produced electromagnetic field. Surely here the exogenous field would be tampering with the endogenous field as the two join. But what do you think?
As for Marco, I’m pretty sure that he’d continue to believe in universal consciousness anyway. And if this sort of evidence would not conclusively suggest to you that consciousness exists electromagnetically, then what would your reasoning be? In that case I’m curious how you might salvage your current beliefs.
Eric,
Do me a favor. Stop repeating the theory over and over to me in every response. I get it and don’t need the constant reminders. It’s starting to sound like a religious mantra.
“Furthermore let’s say that scientists could confidently rule out that their EM field was altering the brain itself to have such effects.”
The problem is that in order to have people report about the effects on their experience, you have to alter their brain. Otherwise how can they report on their experience? How are the signals going to get to the motor regions to excite the efferent nerves to generate speech (or typing, button pushing, etc)?
Okay Mike, I agree. Here the brain would be affected by the science created EM field in the sense that there would be an associated report. I didn’t mean to suggest that the brain wouldn’t be affected in a manner that was entirely consistent with McFadden’s theory. Thus I could have included that specific exception. And surely I needn’t again describe how McFadden proposes that the EM field would create such brain function.
Now that it should be clear that report wouldn’t be a problem for his theory, as I see it you have two choices. One would be to admit that if experimentally verified in enough ways then this should be relatively conclusive evidence that consciousness exists as an associated aspect of electromagnetic radiation (I think with far greater human implications than the experimental verification of Einstein’s relativity). Or alternatively you might disagree that such evidence would have this implication and state why.
Eric,
I’m not clear what you’re proposing at this point. But it seems like your real goal is an attempt to nail down what would change my mind. I see two issues.
1. Does the brain use its EM field for substantive communication between neurons?
2. Is that communication consciousness?
I’d change my mind on 1 if a substantial portion of the neuroscience field judged the evidence to be indicative of it. I wouldn’t trust my or your judgment, or the judgment of any small group of scientists in isolation.
2 seems much more difficult. It requires establishing that consciousness is completely correlated and only correlated with that EM field communication. Again, it would take a substantial portion of the field accepting it as at least plausible.
Hope that helps.
Mike,
I wasn’t asking if you’d believe what was accepted by neuroscientists. I was asking whether the sort of evidence that I’ve mentioned would strongly support the idea that consciousness exists as brain produced electromagnetic radiation. It’s something that you might simply agree with. I’d need no explanation since this already seems like a sensible implication to me. Or it could be disagreed with sensibly if you were able to come up with a second plausible explanation for those results. Then I suppose a third way to go would be to decide not to answer the question at all.
We talk about basing our beliefs upon experimental evidence rather than just faith. In that quest I think it’s good to plainly acknowledge the sort of evidence it would take for us to deny some of our most cherished beliefs.
Eric,
Actually I need to amend what I said above. If what I described happened, it would make me consider changing my mind, but whether or not I did would depend on what I judged to be the strength of whatever the evidence was. I wouldn’t be tempted to do it though without at least a substantial portion of the field seeing it as valid. Evaluation of evidence is a complex thing, and amateurs often make basic mistakes, overlook basic methodological errors, etc. That’s not faith. It’s simply accepting that competence matters.
Okay Mike, in this scenario I’ll clarify that the evidence I speak of wouldn’t depend upon just your assessment of it alone. Respected scientists in the associated fields would generally agree that this added cranial EM field would seem not to directly affect brain function to create the reported phenomenal effects. Furthermore, to cement this perception, we can add that over the previous year researchers worldwide would have learned all sorts of things about how to fashion their exogenous fields for various specific and reproducible effects reported by the subject.
Would you consider this strong evidence for EM field consciousness? Or can you instead provide credible reason to not consider this strong evidence for EM field consciousness? Or would you rather not answer this question?
Eric, before I agree to a proposition, I like to know what I’m agreeing to. Your description seems muddled, which could lead to future scenarios where we disagree whether it matches what you’re talking about.
Part of the problem here is I think you’re vastly underestimating how much of the neural processing has to change for someone to report something. Based on what I know about McFadden’s theory, I’m not even sure that’s the right goal anyway, since I think the theory involves the neural processing constantly being changed by the field. You’re better off actually trying to change that neural processing with modifications to the field at the same strength as the naturally generated one. Although that really only establishes 1 above.
Mike,
I don’t think that I’m vastly underestimating how much neural processing has to change for someone to report something. I simply haven’t gotten into that because it’s not an essential element of the question that I’m presenting. I’ll now give you an account of how the EM field would theoretically affect neural function, well beyond what’s needed for report alone. Then I’ll bring things back to my simple question.
Theoretically consciousness exists as an electromagnetic field that certain synchronously firing neurons thus create. But this would be no good evolutionarily if the EM field couldn’t also affect brain function itself. So how might that work under McFadden’s theory?
Let’s consider the case of myself. The light information from your last response animates associated neurons to fire such that the electromagnetic wave that is me becomes informed of this information, gains a reasonable understanding of it, contemplates various ways that I might present an intelligent response, and hopefully does so through motor neurons that animate applicable muscles for me to type that response.
Here I know that you’ll think “Whoa, hold on Eric! How might an EM field do all that?” It wouldn’t do this alone, because in order for an EM field to change, neuron function must change in associated ways. Theoretically the field can and does help change neural function through field based ephaptic coupling in appropriate ways as things progress each moment. This is to say a feedback loop is theorized to exist between the vast parallel processor side (or non-conscious function made up of everything except for the experiencer/thinker itself), and a tiny serial conscious part that is me. The “me” part in this scenario would occasionally be given some chance to participate, based upon its perception of how good to bad various responses would make it feel. Thus I’m saying that everything we are psychologically exists in terms of a progressively changing EM field, and this field can effectively feed back into associated neural function that goes on to propagate the field in associated ways.
So now that I’ve gone a bit into these behind the scenes details, my question remains. If it were extremely well established that scientists could inject an electromagnetic field inside a human brain that does not itself tamper with that brain, though does alter that person’s consciousness for oral report, and perhaps even does so in novel ways in different people, what would your response be? Would you consider this strong evidence that consciousness exists as certain neuron produced electromagnetic radiation that must have thus been tampered with? Or would you be able to present good reason that we should not consider this strong evidence that consciousness exists as such? Or would you rather not answer this question?
Eric,
“I don’t think that I’m vastly underestimating how much neural processing has to change for someone to report something. I simply haven’t gotten into that because it’s not an essential element of the question that I’m presenting.”
(emphasis added)
“If it were extremely well established that scientists could inject an electromagnetic field inside a human brain that does not itself tamper with that brain, though does alter that person’s consciousness for oral report,”
Sorry, but I think that will have to be it for now.
No worries Mike. And though I didn’t admit it, you did take me to task earlier today. Yes, theoretically this EM field would need to constantly alter brain function to exist as a coherently changing conscious entity by means of associated neural firing. Though I’ve had a vague sense that such ephaptic coupling would certainly occur when needed for muscle operation, I don’t think that I quite appreciated that this would need to occur perpetually, at least in some capacity, for effective conscious function. So another tool added to the box.
I think if I could do it again however I’d say that “the injected EM field does not conventionally tamper with the brain, though does alter a person’s consciousness for conventional oral report”. Perhaps another time on that though. I think you’ve done well to avoid stating that you won’t answer my question one way or the other, or even say that you decline to answer it. Perhaps at some point you’ll find a good way to escape this particular quandary that I’ve developed for people of a certain status quo sort of perspective…
“They propose instead that the brain simply takes whacked thumb information and processes it into a proper second set of information.”
The word “information” in physics has a different meaning than what people usually mean, which is semantic information. Physical information is just a number, like entropy or temperature. The difference is that physical information could be compared to the storage capacity of a hard drive, while semantic information is the amount of data that is meaningful. My backup HD has 4TB, which is its physical information capacity. But it could be filled up with a random sequence of ones and zeros, meaning it would still contain zero semantic info. Therefore, “whacked thumb information” is first of all a sequence of bioelectric spikes that, as such, have zero semantic information, unless there is a sentient agent that associates them with a meaning. Before that, the whacked thumb information is a stream of signals like every other stream of electrochemical impulses. Saying that this first information is processed into a proper second set of information is in itself not a very insightful statement, unless someone explains how that information is properly translated into a conscious perception of pain, giving it a mental significance (note that I also distinguish between consciousness and mind, something that, IMO, is a common fallacy… but that might be another topic for later).
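This distinction can be made concrete with Shannon entropy, which measures a byte stream’s information density while staying blind to whether the stream means anything to anyone. A small sketch, with made-up inputs:

```python
import math
import os
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

noise = os.urandom(100_000)                          # meaningless random bytes
text = ("the whacked thumb hurts " * 4000).encode()  # a meaningful sentence

print(entropy_bits_per_byte(noise))  # close to the 8-bit maximum
print(entropy_bits_per_byte(text))   # much lower, yet this stream means something
```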
Moreover, saying that “information is fundamental”, as many seem to believe, sounds to me like too far-fetched a logical extrapolation. Information always needs a carrier (a device such as an HD, a piece of paper, a material object, or otherwise at least an EM wave, which is not material but is still something physical). If information is the ultimate primitive, then what are its carriers? They must be even more fundamental, contradicting the statement.
My impression is that, since we live in an information age, we tend to believe that information is that holy grail, that ultimate quid that explains everything. But, on closer inspection, this is too simplistic an approach to reality. IMO, there must be more.
“Thus it could be that phenomenal experience exists in the form of certain parameters of amazingly complex electromagnetic fields.”
That could well be the case. But even if so, at the end of the story, what does the cemi field theory really explain when it comes to the good old hard problem? Again, it might at best bring us forward in regard to the easy problems, shifting them from neural correlates to EM fields. But why is an EM field (however complicated, amazingly complex, intricate, resonant, synchronous… name it) supposed to lead to an experiencing subject? It leaves me with the same unanswered questions as any other theory does. The only thing I can think of is a sort of modified panpsychism where all EM waves have an elementary consciousness/mind or, rather, are a proto-consciousness. Something reminiscent of an EM idealism. Thus, I don’t see where the “supernatural proposal” has been avoided.
It becomes a question of whether consciousness, although not in itself everything as in idealism, is somewhat like gravity, a property of space-time around matter. It would not necessarily be fundamental; it could be emergent. That is still debated about gravity. Consciousness could be a physical property that emerges when space-time in effect curls back into itself to reflect the external world. EM fields might be the cause, or the mediator essential to its existence.
Again I think we’re on the same page Marco, though let me try to clarify my position in a couple of ways. The easy one is that no, I’m not implying that any amount of empirical validation for cemi would solve the hard problem of consciousness. What such validation should do however is kill an assortment of ridiculous consciousness notions that populate academia today. In that case this hard problem would remain and yet we should be able to study how consciousness works in various effective ways.
Consider gravity. Newton never solved this particular problem, nor Einstein, nor anyone else. What such people have done however is help us understand how gravity effectively works. That should also be all we can hope for regarding consciousness. It should take a person like McFadden to lead us out of this particular dark age. Even if we end up building machines that phenomenally experience their existence, certain elements of reality here should never be grasped beyond “because that’s how things seems to be”.
Then regarding information, I suppose that I meant the physics version more than the semantic version, though not in a fully basic capacity. Apparently physicists say that if you write a note and then burn it, you won’t destroy any of that written information in an ultimate sense because the letters should simply exist in an altered form that we’re unable to decipher. Thus the true primitive here should be causality. We naturalists believe that all of reality reduces back to this in full.
What I was actually referring to however seems more in the vein of your assertion that information requires a carrier. Let’s call the carrier “a machine”, and thus the information that it uses would be “machine information”. If I pass a legible note to my wife and she reads it, the note should not just carry semantic information given her understanding, but machine information in the sense that she’s a human machine. If no machine ever uses that note as input information however, then it should not effectively be “machine information”. In that case what I’ve written should just be “stuff that exists”. It’s the same for the genetic information which can potentially operate a cell, the Bluetooth information which my keyboard sends my computer, the stuff encoded on a VHS tape, and so on. None of them would be machine information in a natural world without a machine that uses it.
The single exception to this rule, for many who like to call themselves naturalists, is consciousness. This is to say that they support proposals where consciousness exists when certain information is properly processed into other information, though no consciousness producing mechanisms are proposed to be animated by that information. Thus they seem to demonstrate a supernatural position, as the weird implications of many thought experiments suggest.
But how might a given consciousness proposal rid itself of such implications, and indeed, stay natural? By either remaining agnostic about the brain physics which creates consciousness when animated by certain information, or, as McFadden does, directly proposing such physics for potential testing. In either case Chinese rooms, China brains, USA consciousness, consciousness by means of functional neurons distributed across our planet, and even consciousness by means of paper encoded with certain characters that is properly converted into other encoded sheets of paper, would all be ruled out.
Agreed on all… except, perhaps this: “But how might a given consciousness proposal rid itself of such implications, and indeed, stay natural? “
Well, then let’s see if it works better by giving a chance to something non-natural. 😉
It’s good to hear that you agree Marco, that is except for taking a supernatural turn at the end. At least your position is coherent. What disturbs me is when people assert that there’s nothing unnatural about what they believe, though do so erroneously. I’ve recently characterized the ideas of Dennett this way here, to wide derision. Admitted supernaturalism should not harm science, though latent supernaturalism should.
Regardless I’d be interested in hearing your thoughts on the perhaps unnatural nature of consciousness, whether now or when Mike puts up other relevant posts. Actually the best of my ideas do not depend upon consciousness being natural, even though I’m as strong a naturalist as you’re likely to meet.
What the “unnatural nature of consciousness” might be, depends also on what we mean by “naturalism” and that something is “natural” or what is “Nature”. Let’s not forget that many things that appeared once to be supernatural, nowadays are explained as “natural phenomena”. I think too often people conflate naturalism with materialism or physicalism.
At any rate, my “supernatural” stance is that consciousness is the fundamental primitive, not matter. While many would label this as supernatural, to me it sounds like the most natural thing and the most natural conclusion, since our subjectivity can’t be explained by a bottom-up construct of the purely subjective entities we call “matter”, “neurons”, “brains”, and “particles”, given that these are already qualia themselves. Thinking the other way around may help. So I’m not a physicalist, but I feel comfortable with “naturalism” nevertheless if taken in the broader sense. However, I’m not a panpsychist, nor just an idealist a la B. Kastrup either. I’m more in line with Eastern philosophies of universal consciousness.
I am relieved to hear that you don’t believe in magic Marco. There’s lots for me to unpack regarding your position itself however, and hopefully you’re still unpacking as well. But I can see how it would be sensible from the presumption that if qualia does not exist bottom up as certain matter/energy, then it must be the other way around. Universal consciousness might be such a solution. I suppose I’m not familiar enough with eastern philosophies. What would be the difference between universal consciousness and panpsychism?
My standard question for panpsychists is to have them explain how human consciousness can be lost by means of anesthesia. The consensus seems to be that the human body remains conscious in an experiential “white noise” sense observed by all of reality, though they can’t say what it is that the brain does to change that white noise to what’s actually experienced by a human. To this I roll my eyes and observe that panpsychism itself is just another element of noise in a field which is populated by it.
Regarding myself, I do actually like to use my Occam’s razor to present “natural function” as a more explicit term for materialism or physicalism. The definition I use for this is system based causality. If we live in a system where things happen given nothing more than causal dynamics from within, then it would all be perfectly natural function that could thus be explored scientifically. Any deviation from such internal causal dynamics, such as by means of a god or fundamental randomness, would be supernatural or magic.
The difference between panpsychism and universal consciousness is, to put it bluntly, that the former takes a reductionist approach (“all particles or all matter possess a proto-consciousness”) while theories of universal consciousness claim that the universe as a whole is a conscious being. Panpsychism is an old doctrine that dates back to Leibniz but has been recently resurrected because of the problems that a physicalist approach encounters in dealing with consciousness. Some of its most notable contemporary supporters are philosophers like Philip Goff or Galen Strawson.
But it doesn’t come without drawbacks either; in particular it is affected by the so-called “combination problem” (“why should a combination of proto-conscious particles lead to a more conscious organism?”). That’s why ideas of universal or cosmic consciousness are more appealing to some people (like me) who posit that the universe as a whole is consciousness. The Western versions you can read in idealist philosophers like Bernardo Kastrup, who posits a “Mind at Large” (term borrowed from Aldous Huxley), or Itay Shani, who coined the term “cosmopsychism”. These are, IMO, essentially reduced and readopted versions of the Eastern Vedanta philosophy, which claims that “all is Brahman, the One without a second”. These metaphysical ontologies lead to the opposite problem, that of the so-called “decombination problem”: if there is only one (super-cosmic-universal-)Consciousness, why do we feel like separate embodied identities? The answer is, again putting it very bluntly, that the separation is an illusion; in reality you, I, and all the others reading here are the very same Being that fragments-projects itself into itself and has the illusory experience of being many, while still maintaining its one-identity.
Well, that was a very superficial down to earth summary, but I hope that it will give you some direction to research further on your own.
Going quickly through your other questions/statements…
Are you sure one “loses” consciousness under anesthesia? On the basis of what do we jump to this conclusion? Do we lose consciousness when we sleep? In deep sleep? What really can we claim to know or not know about that?
As to Occam’s razor, did you notice how people from the most diverse and even opposite philosophical positions resort to it by claiming that their own theoretical framework is the most “parsimonious”? The physicalists say that we shouldn’t multiply entities, and their theory (“there is only matter”) is therefore the simplest one. An idealist, such as Kastrup, will tell you that positing that there is only mind is the most parsimonious conclusion. While a panpsychist such as Goff will tell you that positing conscious matter while maintaining the reductionist approach of science seems the simplest, because it solves the hard problem from the outset. So, at the end of the day, it turns out to be a methodological approach that everyone resorts to in order to justify their own position. Occam’s razor is used to posit one’s ideological paradigm as an axiom, rather than leading to new insights. To the contrary, it has frequently led in the wrong direction in the history of science and favored stagnation rather than progress.
These, at least, are my few cents…
Thanks for that assessment Marco. Though concise it seemed sufficiently thorough for my purposes. My take is essentially that physicalists have a “hard problem”, panpsychists have a “combination problem”, and idealists have a “decombination problem”. Therefore your universal consciousness position seems to be in the clear. One issue this raises for me however is unfalsifiability. This of course does not mean that it’s false (by definition), though to me the explanation does seem a bit contrived. Theism would be a competing unfalsifiable and highly contrived solution, though with far more elaborate circumstances depending upon the religion. Here one or more gods even seem quite evil! Of course the existence of evil gods doesn’t mandate that it’s wrong, though this feature does seem to create havoc with the narrative of many theists. It’s a position which is based upon faith rather than reason.
If we’re all a single conscious entity throughout a universe of existence that fragments itself into you, me, and others, I guess I’d ask if this understanding is important? What would some of the practical implications be? Do you think that you’ve lived your life any differently given your current belief versus how you’d live as a physicalist?
Regardless I hope that you don’t entirely dismiss the possibility that at least some empirical progress will be made on a physics based explanation of how the brain might create a consciousness dynamic. Science is still a young institution that seems not to have yet found its way in certain regards. I mentioned that it seems to have made progress on the problem of gravity. Might you be swayed if McFadden’s cemi were similarly quite verified experimentally?
Consider my proposal for testing it. Let’s say that you were to agree to be compensated so that researchers could then implant millions of tiny electrodes inside your skull, each of which would fire a charge about the strength of a standard neuron. Then let’s say that you were to sit down with these researchers to tell them if anything seemed phenomenally strange while they were playing around with various synchronous charge firings. The point would be to see if they could set up an electromagnetic field that interferes with the theorized electromagnetic field which exists as everything that you feel, think, remember, believe and so on, which is to say all that is “you” at a given moment. If it were established that such exogenous electromagnetic fields alone could tamper with your otherwise normal phenomenal existence, would this sway you? Here I suspect that many would throw in the towel (including Mike) and decide that consciousness must exist in the form of certain neuron produced electromagnetic fields. In that case do you think that you might become a physicalist?
Regarding anesthesia, it seems to me that it is better for someone to have surgery with it rather than without it (such as for implanting millions of electrodes in someone’s skull!). I consider sleep however to exist as an altered state of consciousness, though a natural one rather than drug induced. I wouldn’t suggest surgery while merely being asleep! Surgery shouldn’t hurt quite as much under the influence of certain drugs, though I’d suggest full anesthesia for any otherwise highly painful procedure.
On Occam’s razor / parsimony, yes I know — we’re all self interested products of our circumstances who thus tend to use heuristics like this one to our own advantage. Nevertheless I did love Johnjoe McFadden’s recent book describing this medieval friar’s life. Though a spiritual man, Occam was adamant that the spiritual world must be held separate from the earthly world. And indeed, I do use his principle to make a binary distinction between magical stuff and non magical stuff. Either reality exists by means of system based causal dynamics in full, or magic exists also. But any magical component would thus be impossible to grasp from within because it would depend upon associated non system based dynamics. So I’m a physicalist somewhat out of convenience — the addition of magic would be less parsimonious.
Oh… so many questions that would certainly deserve much more detailed analysis. I can give only a telegraphic (and most certainly unsatisfying) answer.
Most of the inspiring data will come from biology, I guess. There is a consciousness, awareness, and cognition (whatever you might call it) that is becoming increasingly evident in brainless organisms. Down to cells and cellular structure. A consciousness that reveals itself as the basic primitive of reality. This is going to thin the wall between matter and spirit.
Definitely yes. The implication is that everyone is the divinity, and one would have a completely different way of seeing the world and also people (and, yes, also the role of what we call ‘evil’). Let me give a not too mystical example. Say that instead of seeing Nature as a blind, mechanistic, deterministic, soulless clockwork (sort of a la Dawkins’ selfish gene, etc.), you recognize it as the unfoldment and expression of a Divinity that has plunged itself into the night of matter and, from there, evolves out of matter by a self-finding bottom-up process (I’m not trying to convince you this is the case, but let us assume for a moment you see it that way). Would we still behave and perceive the outer world in the same way? And also very practically, how would we tackle environmental problems? I think we would act, also from a very practical point of view, very differently.
But of course. Science has a lot to offer. We will see tons of other insights and progress that will clarify many things we actually don’t understand. But, at the end of the day, its biggest virtue will turn out to be that it will show that the world as we perceive it from the level of the analytic and sensory mind is an illusion.
You see? We are slowly moving towards a “subjective first-person science.”
I don’t see why this implies physicalism more than the neural correlate argument does (interfering with neuronal activity can change your state of consciousness, ergo, neurons produce consciousness). Sounds to me like a correlation-causation fallacy: if something is modified and produces different results, then it must be the source of those results. Or, to put it bluntly, if the sale of ice cream correlates with the sale of sunglasses, then ice cream makes people more willing to buy sunglasses.
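The fallacy named here is easy to demonstrate. In this toy sketch (all numbers invented), a hidden common cause drives two series that end up strongly correlated despite neither causing the other:

```python
import random
random.seed(0)

temps = [random.uniform(0, 35) for _ in range(365)]              # daily highs
ice_cream = [10 + 2.0 * t + random.gauss(0, 5) for t in temps]   # sales
sunglasses = [5 + 1.5 * t + random.gauss(0, 5) for t in temps]   # sales

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(pearson(ice_cream, sunglasses))  # high (~0.9) despite zero causation
```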
Of course… 😊 But that was not my point. What makes you believe that one isn’t conscious during anesthesia? One might well be perfectly unaware of any pain a surgical intervention would cause in the waking state, but that does not imply that one isn’t conscious of anything. I’m not saying this is the case, just that it is an unwarranted logical extrapolation. (BTW, this reminds me how, contrary to common belief, we now know that at least some subjects in a vegetative coma are conscious.)
What is “magic”? What we nowadays consider magic might well be natural in the future. Magic is when we extrapolate from a process to an emergent phenomenon or property without being able to explain how the former leads to the latter. I think this is a temporary distinction dictated by our actual knowledge and… cultural belief system. From the view of universal consciousness, there is no such distinction. Considering consciousness as the basic primitive and matter as its spatio-temporal concentration sounds to me much more parsimonious. So, you see, “parsimony” is a subjective notion that depends on our belief system. That’s why I consider using razors as having not much significance.
A system-based dynamics is fine. But it is only one aspect of the whole. I see it as a marvelous expression of the One which plays within its temporal dimension by multiplying itself in an infinite dynamic multitude. System-based dynamics is fascinating and nowadays a focus of attention because it lends itself to a certain degree to numerical analysis, but I’m a skeptic when it comes to its supposed power of explaining mind, consciousness, and life. How could we know that? Just accept the introspective investigation without taking the third-person perspective as the only possible one. Intuition tells us that this is the case. The science that will follow from that will sooner or later confirm it. If you want to become acquainted with these perspectives I suggest you read something about Bernardo Kastrup’s idealism, or Itay Shani’s cosmopsychism, or the Eastern philosophy of Advaita Vedanta.
At any rate, I don’t want you to become a non-physicalist. I’m only saying that, much more than scientific and logical reasoning, the philosophical side we take in these regards is strongly related to a way of seeing the world that we tend to posit from the outset without questioning it. If we do not question our own way of seeing, we will always see contradictions in others’ ways of seeing, even if there are none.
I’d say that those are some satisfying answers Marco. They don’t change my mind, though they do help me understand your perspective. Furthermore if you haven’t already I think you should meet Ed Gibney, an American philosopher based in England. He doesn’t technically believe in universal consciousness, though in practice you guys seem well aligned. This includes observations of highly advanced brainless function, an extreme fondness for life, general environmentalism, and he’s also a fun guy! https://www.evphil.com/blog
Marco,
I find myself sympathetic to the metaphysical position of idealism, simply because materialism is a naive assumption. However, if one earnestly wants to understand the true nature of reality, I do not feel it is “prudent” to project our own experience of consciousness onto a fundamental reality. The apex of a Zen Buddhist meditative state is the experience of the one engaged in that practice, and in no way a proof and/or confirmation of anything other than the experience itself.
Having many close family members and friends who are in line with Eastern philosophies of universal consciousness, I also realize that such a metaphysical assumption satisfies a deep-rooted psychical need intrinsic to our nature. However, if one chooses a metaphysical position of intellectual honesty over the need to satisfy an intrinsic psychical void, a world of unknown and unforeseen opportunities will present itself. Those are personal choices that we must make for ourselves…..
Good luck and be at peace, my internet friend……
Lee Roetcisoender, I agree that, in principle, we should not project our subjective feelings, perceptions, and emotions onto a fundamental reality. The question is: is it possible to do otherwise? We believe that taking a strictly third-person, data-driven scientific approach is the epitome of selfless “objectivity”. Is it really? How far is, say, the measurement of the position and momentum of a particle removed from a subjective representation of reality in our mind? To keep it simple, just think of looking at a chair. Do you really believe that you can see that object we call a “chair” as it is in itself? Good old Kant had something to say about that, and I believe he was right. So, finally, we can never make any statement based on purely “objective” observations and considerations. When it comes to philosophical questions about the ultimate essence of things (such as these ruminations about the reality or non-reality of consciousness), the scientific perspective is no more a proof and/or confirmation than the Zen Buddhist experience.
It is true that for most religiously minded people, their “metaphysical assumption satisfies a deep-rooted psychical need intrinsic to our nature”. But in what sense is the physicalist exempt from these psychological needs? Naturalists embrace naturalism because they posit analytical reason as the ultimate tool for knowledge. But this too is a modern cultural assumption, no less metaphysical than the mystic’s claim that the mind is a stumbling block to true knowledge. Especially in these rationalized times, there is a habitual and unquestioned inner fear that by silencing the mind one “loses one’s mind” and falls back into the superstitions of the naive metaphysical ideologies that preceded the Age of Enlightenment. We believe (more or less consciously) that silencing the mind means shutting down our cognitive faculties. The opposite is true: silencing our chattering thoughts gives clarity of mind. So, ultimately, rational materialism is also motivated by ideological factors and by deep psychological needs and fears. And there is nothing wrong with that, but we should become aware of how these psychological mechanisms are at work not only in others who think otherwise, but also in ourselves.
Marco,
For the record: I am neither a materialist nor an idealist. Also, I am in full agreement with your first paragraph, specifically: “the scientific perspective is no more a proof and/or confirmation than the Zen Buddhist experience.”
As a species, we are all in the same boat, adrift on the immense wilderness of the unknown. It therefore becomes imperative that we look for another way, and that “way” begins with neither projecting our own experience of consciousness onto a fundamental reality nor putting our trust in the empiricism of science.
You asked a compelling question: “is it possible to do otherwise?” The short answer is yes, it is possible. However, since we live in a deterministic universe where all of the systems that make up that universe have only a limited degree of self-determination, the very properties that make us what we are as a system become the very obstacles that prevent the natural migration of that system to the next evolutionary level of experience. Evolution is a process, and that process trudges on with or without us…..
“Their definition of a mind is a physical system that converts sensations into action, taking input from the environment and then altering that environment for its own purposes”.
“Sensations” are in the definition, but “sensations” already implies mind. It probably needs to say simply that the system reacts to the environment, like a thermostat or a light sensor, which might qualify as primitive minds by their definition.
Since I’ve actually used “sensations” in a low-level manner before, and been called on it, I have some sympathy for their use (misuse?) here. But they actually dig themselves in even deeper with the way they use “thinking”.
As I noted in the post, this feels a bit too cheap for labels like “mind” or “thinking” to me.
It’s difficult to describe consciousness or mind without referring to something that already implies consciousness or mind. If you just talk about things that something with a mind might do (for example, navigate a challenging terrain), then your definition can always be challenged by finding a device or machine that we can pretty much agree is not conscious and can perform the same task. Even if we cannot find such a device in existence today, there would seem to be no barrier to creating one in the future that can perform the same task or function without consciousness. Piling on additional functions or levels of sophistication doesn’t work.
This is, of course, why there is a debate about the role and primacy of consciousness. It can’t be defined without referring to itself in some way. It sits apart from the functions and activities it performs.
I think consciousness can be described without referencing itself, but the problem is people often won’t recognize it as a description of consciousness unless you link it to the terms that do imply it.
To your final point, I think that’s the primary bone of contention. If we remove all the functions and activities, is there anything left? I don’t think so, but many do. As I noted to Marco, it seems like the only path forward is to make as much progress as we can in understanding those functions and activities. Once we get there, to the point of being able to reproduce them, any remaining missing ingredient should be much more obvious.
Why not take a stab at the definition if you think it can be done?
Let me take a guess. It’s your hierarchy of functions.
Is there anything left? Let’s use the basketball game analogy.
Let’s take one of the players and have him dribble the ball in the street. Let’s take two more and have them pass a ball to each other in a living room. Take another on a court in California shooting foul shots. Take every piece of the game, scramble it into a different order, move it to a different location – is it still a basketball game? Keep everything in exactly the same order but move it to different locations – is it still a basketball game?
Yes, both the functional and perceptual hierarchies are the closest thing I have to a definition. (They’re really a hierarchy of definitions.) If pressed for something short, I’d go with a view based on how similar a given system is to the way our brain works, with no bright line on the similarity spectrum for when consciousness is present. (Essentially “like us” instead of Nagel’s “like something”.)
For both the game and a mind, it comes down to how we want to think about those concepts. My take is that the causal factors are important, the reason why the system moves from one state to another. In the basketball game, the reason why a player does something in those scenarios seems disconnected from the reasons other players do things.
Now, in your last version, if they were all wearing virtual headsets and playing on a virtual court with a virtual ball, so that the causal factors were preserved, then we’d have a distributed game going on.
“In the basketball game, the reason why a player does something in those scenarios seems disconnected from the reasons other players do things.”
Yep. It seems like you’re admitting there is something more than functions and activities. The context of the functions and activities? Their relationships to each other?
I could also argue that basketball is definitely played on a court so maybe substrate is important. 🙂
I don’t know, James. Causal relations seem pretty inherent in any concept of functionality I’m familiar with. Sorry, but that seems like a reach.
On basketball being played on a court, it depends on your definitions. Originally the game involved throwing the ball into a basket rather than through a hoop, didn’t allow dribbling, was played with a soccer ball, and could happen in a variety of settings. Who knows what “basketball” might mean after a century of ever-improving virtual technologies?
Player 1 passes to player 2. There isn’t a causal relationship between the actions of player 1 and player 2 unless you stretch your definition of “causal” to include the entire game being causal to the actions that occur in it. Player 2 might not even have been expecting the ball. The actions of the players are based on rules, strategies, and judgments, which are sometimes in sync between the players and sometimes not. The actions of each player and of the refs follow from the rules and understandings of the game.
It’s the rules and strategies that are the extra something that makes the game comprehensible, that explains what is happening. So there is something more than the actions of the players.
I’ll concede there is a difference between functionality and activity. The authors might have been better served by using something a little higher-level than “activity”. But a core aspect of functionalism is the causal roles, relations, and organization of the components. It doesn’t exclude the factors you’re discussing.
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461