One of the things about consciousness I’ve tried to call attention to on this blog is the ambiguity of its most common definitions, such as Thomas Nagel’s definition of it being “like something” for a particular system. The problem is that when people try to get more specific, they come up with a wide variety of answers, and then end up debating past each other with those different definitions.
Externally, consciousness is being responsive to the environment, or it’s goal-directed behavior, or deliberation, or language. Internally it’s the results of self-reflection, or attention, or it’s all perception regardless of whether it’s currently being attended to or reflected upon. Which one sounds right to you has a big impact on which scientific theory of consciousness you might favor, and on your attitude toward how widespread consciousness might be in the animal kingdom, or beyond.
All of which has long led me to conclude that consciousness is in the eye of the beholder. If forced to come up with my own brief definition, I typically replace Nagel’s “like something”, which seems literally meaningless to me, with “like us”, a label we slap on systems with impulses we recognize as similar to ours, and with a similar ability to process information.
So it was with some interest that I read Jacy Reese Anthis’ paper: Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness (Note this link is to the preprint, since the official Springer version is paywalled.)
Anthis’ goal is to step around the typical “intuition jousting” that goes on in these discussions, and come up with a formal argument. His core argument, as I understand it, is that the most common definitions are imprecise, yet typical use of the word “consciousness” implies precision, therefore that most common version doesn’t exist.
It’s worth noting that Anthis makes a distinction between a couple of different notions:
He sees the first as undeniable. It’s the second that he’s claiming doesn’t exist. I actually see the first as meeting my “like us” definition above, or at least in the same neighborhood. But the second is definitely along the lines of Nagel’s definition.
He also spends some time on the word “exist”, providing a specific definition for it.
Existence: A property exists if and only if, given all relevant knowledge and power, we could categorize the vast majority of entities in terms of whether and to what extent, if any, they possess that property.
Overall, the point is that even if we examined a system as an omniscient observer, there would be no fact of the matter on whether consciousness as a property exists within that system. Therefore, this property doesn’t exist, and consciousness, in this sense, doesn’t exist.
As usual, whenever I discuss variants of eliminativism or illusionism, I have to admit I agree with the ontology, but not the language used to describe it. In other words, my difference with eliminativism is what Chalmers calls a “verbal dispute”.
It’s true that I don’t think certain versions of consciousness exist. But then anyone who thinks about consciousness will think that certain versions exist while others don’t. That’s what it means to disagree about the nature of something. The problem is using the phrase “consciousness doesn’t exist” implies that none of them exist. I’ve yet to meet an eliminativist who actually thinks this, which is why I disagree with using that phrase.
I understand the idea of trying to challenge people’s intuitions, but in my experience, it almost always derails the discussion, turning it from what may or may not be the nature of consciousness, to whether the eliminativist is claiming there is no such thing as pain, suffering, joy, etc. Again, I haven’t encountered anyone who actually thinks these things don’t exist, so using that language doesn’t seem productive.
It’s worth noting that Anthis, in the conclusions section of the paper, makes clear he’s not suggesting that we discontinue use of the word “consciousness”. Just that, as a goal in scientific investigation, we’re better off focusing on specific capabilities like sensory discrimination, reportability, affective evaluations, metacognition, etc., essentially Chalmers’ “easy problems”. Focusing on the “hard problem” is unlikely to be productive.
In this view, the concept of consciousness is like the concept of life in biology. Anthis points out that biologists don’t agonize over the distinction between life and non-life, they instead investigate replication, homeostasis, metabolism, and a host of related processes. While it’s an interesting question to ask whether something like a virus is alive, most biologists consider it a philosophical one, not a scientific one. They’re more interested in just studying how viruses work.
This is very similar to Anil Seth’s point that consciousness is more like life than it’s like temperature. Temperature is a relatively simple emergent property measurable as a single number. Life is a complex one that defies simple characterizations. Consciousness seems to be in the same category.
So, in terms of predicting what is likely to be fruitful in scientific research, I completely agree with Anthis, although my stance is less categorical. I’m not a fan of telling scientists it’s a waste of time to study areas they’re interested in. Despite the direction the evidence has been trending for some time, people like Anthis and me could still conceivably turn out to be wrong. If we are, it’s likely to be discovered by someone exploring alternatives we find unpromising.
What do you think about consciousness semanticism? Or my language dispute with eliminativists / illusionists?
139 thoughts on “Consciousness semanticism”
Thanks for the pointer to Anil Seth’s paper. I have started to read it, and I’m finding a lot that I agree with.
I have never liked the Nagel view. I really don’t even know what it is like to be me. The word “like” suggests a comparison, but I have never been anyone else so I cannot make that comparison.
Similarly, I am not much impressed with Chalmers’ “hard problem”. As I see it, the Chalmers “easy problem” is far harder than Chalmers will admit. And it seems to me that solving the easy problem goes a long way toward solving the hard problem.
Thanks. In this case it’s actually Jacy Reese Anthis’ paper. (Seth is actually more conciliatory to Nagel’s view.)
I’ve always had the same reaction to Nagel. Like what? The word “something” just means it’s comparable to some unspecified thing. The phrase is really just a tag that people attach a collection of views to. But the reason I go with “like us” is that it replaces the “something” with the only thing the comparison could be with, our own internal experience.
I’m with you on the hard problem. I actually think it’s just all the easy problems together, with a failure of imagination on how they add up.
“His core argument, as I understand it, is that the most common definitions are imprecise, yet typical use of the word “consciousness” implies precision, therefore that most common version doesn’t exist”.
How does an imprecise definition translate to non-existence?
There are plenty of things that admit varying degrees of precision, but we don’t argue they don’t exist. The color “yellow” is imprecise. Is it a khaki yellow, a key lime yellow, a maize yellow? Mikado Yellow doesn’t even look like yellow to me. Yellow must not exist.
What exactly are the Appalachian Mountains? Apparently definitions vary on the exact boundaries. They must not exist either.
“Anthis points out that biologists don’t agonize over the distinction between life and non-life, they instead investigate replication, homeostasis, metabolism, and a host of related processes”.
Yes, but biologists don’t study rocks, so clearly there are things that are alive and things that are not, even if there are fuzzy boundaries when it comes to things like viruses.
Strangely, in the end he wants to keep the term “consciousness”. He writes:
“For now, I suggest we continue to use the word ‘consciousness’. While vague, the term still fills an important social niche that no other term is currently poised to fill.”
And then writes:
“While we may continue using the term ‘consciousness’, I suggest that we no longer approach consciousness as if it is some potentially discoverable property and that we avoid assumptions that there is a ‘hard problem’, a ‘problem of other minds’, ‘neural correlates of consciousness’, or any other sort of monumental gap between scientific understanding of the mind and the ‘mystery’ of conscious experience.”
All in all his argument seems to be an attempt to sidestep the hard problem.
Sounds like you went through the paper, so I won’t go into the details of his reasoning. As I noted in the post, saying consciousness doesn’t exist isn’t the move I’m inclined to make. I’d be more supportive of saying it isn’t a natural kind, or isn’t anything fundamental.
Good point about colors and mountains. These are basically categories. Like consciousness, they’re not a natural kind, just a category we make, with some judgment involved in what fits in or doesn’t. I do think there’s value in understanding that’s what they are, but it’s a very different point to saying they don’t exist.
I do agree with him that pursuing the hard problem scientifically isn’t likely to go anywhere. But I don’t expect anyone troubled by it to find that answer satisfying.
Perhaps consciousness is best thought of as a spectrum. And where any one entity falls on that spectrum is up to the entity, primarily, and the observer, secondarily.
I think that’s right about the spectrum. You and I are like each other; a chimpanzee is less like us, though still more like us than a dog. And the spectrum of comparison goes down with mice, frogs, fish, worms, plants, and unicellular organisms.
The problem with the spectrum approach is that you will try to put everything in one place on that spectrum. So where does an octopus fit? What about an octopus’s arm? What about something that has great problem solving in some circumstances but not others? See my response re: pattern recognition below.
A single spectrum is certainly not perfect. An octopus is more like us in some ways but less in others. This is where Jonathan Birch’s dimensions of animal consciousness might come in handy, with octopuses being closer to us on some dimensions but farther on others, like maybe unity. But the octopus’ arm might be too far by any standard for most of us to regard it as conscious.
I suspect the main issue with identifying an octopus arm as conscious is that most think of consciousness as necessarily unitary (see postulates of IIT), and so don’t consider the possibility of consciousness existing in hierarchies. Ah well.
It’s hard to say. Does the octopus arm have its own emotions, sense of self, episodic memories, attention, etc? If so, it would probably trigger my intuition of a consciousness.
Am curious. Do you think groups of people have a combined consciousness? If maybe, what would it take?
[“From the mind of Minolta”]
Depends on your definition of consciousness. I would note that my common way of explaining global workspace theory is an analogy of a rowdy meeting, and the attention schema with one of how the news media adjusts a society’s model of itself. Sometimes analogies are revealing.
There’s something that I’d like you to do for me. Now that you’ve written this post I’d like you to go through Eric Schwitzgebel’s innocent/wonderful conception of consciousness, found here for example: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/DefiningConsciousness-160712.htm
I consider the man to be an utter genius in this regard! It seems to me that he goes through and answers each of the concerns that you’ve presented here in a very sensible way. What emerges from his essay is what I personally mean by “consciousness”, and always have as far as I know. This seems not to bear any of the epistemological or metaphysical commitments presented in your post, and indeed, I’m a full realist regarding the resulting idea. I’m willing to say that this variety of “consciousness” cannot possibly not exist to me, and indeed, I consider it to exist as “me”. There is no other element of Reality that I can say this for. I’m as strong a naturalist as it gets, and yet this particular definition doesn’t even presume that element of my beliefs.
Though it’s a ridiculously simple academic paper, I’d actually prefer you to not read it right now, and indeed, not read it at all. I’d prefer for you to get the Speechify app on your phone and listen to it on one of your walks, or while doing the dishes, or some other mundane task which leaves you able to think. It’s a 20 minute listen. This app has improved tremendously over the years; even the free version that I use, with fewer bells and whistles, works well. Then I’d like you to tell me whether the “consciousness” idea that we mean when we use the term does or does not seem like a useful idea to you.
I actually read that paper when it was published in 2016, and skimmed through it again a few years later when Eric S. himself sent me to it. While I think it made some important points in response to the illusionism discussion in the JCS issue it was published in, I don’t think it actually solves the definition issue itself. (I told Eric S. this, although it’s been a while.)
And I actually think we’ve discussed this paper before ourselves. My issue is that it’s still ambiguous. It seems like it dodges the ambiguity by resorting to examples, without identifying what’s supposed to unify them into some kind of coherent concept.
That said, are there particular points in the paper you want to call my attention to? In general, I dislike homework assignments of the type, “just read (or watch) this and you’ll see the light” variety. I think it will be more effective if you discuss how you think it addresses the issue.
I did realize that you’ve read this paper before Mike, though it seems to me that a person couldn’t coherently have written the post you did while also factoring Schwitzgebel’s definition into their thinking. For example you talk about people defining consciousness in ways that add disputable external and internal parameters, though as far as I can tell he added nothing of the sort. I guess you’d have to go through the paper to potentially contradict me on that. I doubt you’d find anything however. As advertised he does seem to have developed an innocent conception. Thus the criticism that you’ve presented in this post for many consciousness definitions seems not to apply to his.
Here you might ask what that innocent definition happens to be? It’s not one that I can directly provide you with however. In order to get it you’d need to go through his positive and negative examples and come to such an understanding on your own. This is actually how we gain understandings for the vast majority of the terms that we use in life. I’d wager that you long ago reached an effective definition for “furniture” this way, which is to say the sorts of things that are and are not meant by this term. Even when a given written definition is provided we should still generally ponder various positive and negative examples in order to gain an effective understanding of it.
I suppose that what I need to do is grasp that you mean “like us” when you use the “consciousness” term. Would this apply to an actual human that appears to be alert, and also something that seems human through special effects and whatnot? Of course in some respects dead people seem quite like us so you might clarify your definition a bit further in certain ways.
Then on the other side I’d like you to grasp that when I use “consciousness” I mean something that phenomenally experiences its existence essentially in the manner that I presume you commonly do, or a dog, or a fish, or various other things which naturally harbor the associated physics, or even supernaturally harbor the right kind of magic. I don’t know how to innocently provide you with this definition however, though you might reach such an understanding yourself in the same manner that most of our definitions are attained, or by means of considering various positive and negative examples. So I guess there’s a bit of a communication issue between us right now regarding this term. Or do you think that you now grasp what Eric and I mean by “consciousness”?
Furniture is actually a good comparison, because it’s also more a category rather than a distinct thing. And it’s a hazy one. So we’d probably agree that chairs, tables, desks, chests of drawers, etc., are furniture. But what about a rug, stereo speaker, or dehumidifier? Personally, I think arguing about these edge cases would be pointless, because there’s no fact of the matter, which is much the same for edge cases of consciousness.
The “like us” term is based on the fact that the only consciousness we ever know in the way you’re talking about it, is our own. We never get access to any other consciousness in that way. So, the only basis we have for comparison with other systems is how closely we can infer their way of taking in information and making decisions is to ours. In that sense, a corpse might be a lot like us in some physical regards, but not in any dynamic sense. (I find it interesting that people who bend over backward to give Nagel interpretational charity on his phrase feel the need to strawman this one.)
I think I grasp your use of the word “consciousness” (and Eric S’s). It strikes me as just doubling down on Nagel’s. It’s fine as far as it goes, but only as far as it goes. If we’re not willing to pierce the veil, to try to dissect this view, then I don’t see how any progress is possible. It’s like marveling at how great pizza is, but then refusing to consider anything about recipes, ingredients, ovens, etc. By excluding all the possible solutions, we create an inevitable sense of mystery, but from my view, it’s one artificially preserved.
Great illustration using furniture. To extend this to something biological (as consciousness has been), I wrote this in my review of Just Deserts about free will:
“Such rigid definitions work well in the precise worlds of mathematics and Newtonian physics, but not in the fuzzy world of biology. In that realm, the ethologist Nikolaas Tinbergen gave us his Four Questions which are now the generally accepted framework of analysis for all biological phenomena. To understand anything there, Tinbergen says you have to understand its function, mechanism, personal history (ontogeny), and evolutionary history (phylogeny). As a very simple example, philosophers could tie themselves in knots trying to define ‘a frog’ such that this or that characteristic is A or not-A, but it’s just so much clearer and more informative to include the stories of tadpole development and the slow historical diversion from salamanders. So, is free will more like a geometry proof or a frog?”
The exact same question goes for consciousness. Is it more like a geometry proof or a frog? Why stop with single in/out definitions when a holistic picture is possible?
Thanks Ed! Why stop indeed? I think the holistic picture is not only possible, but necessary. I’m pretty skeptical that consciousness will ever be reducible to a single measure (like IIT’s phi) or equation. Like biological life, it will involve a galaxy of models.
What if it were empirically demonstrated that one of your naturalistic heroes, and indeed, one of the four horsemen of new atheism, were actually an unwitting agent for a supernatural cause? Yes I’m referring to Daniel Dennett. If this were to empirically come to light, what would you do? Would you stand by him anyway? I don’t think so. I think you’d disavow that position. I hope that he would too. Of course the empirical evidence that I speak of has not yet been achieved, but let me illustrate for you the path that I think this will take.
The status quo in academia today and for the past several decades has harbored the notion that “qualia” (if you’re comfortable with that term, though I could use others) exists when the right information is properly processed into other information, though without any mechanical output of that processed information. This is famously displayed by the thought experiments of Searle’s Chinese room, Block’s China brain, and Schwitzgebel’s USA consciousness as a whole. To them I add my own thumb pain thought experiment. It posits that these theories mandate that if the right inscribed sheets of paper were properly converted into another set of inscribed sheets of paper, then something here would experience what you do when your thumb gets whacked.
The reason that these popular theories have such ridiculous implications, I think, is because each of them takes a supernatural turn. Instead of positing some sort of “hard problem physics” associated with brain function, they presume that information processing alone gets the job done, and even without any mechanical instantiation whatsoever. Thus all sorts of ridiculous implications ensue.
The way around this supernatural fate would be to either remain agnostic about the mechanical nature of qualia, or to indeed propose a physics based solution. And note that such an answer, unlike popular modern answers, would be falsifiable. Johnjoe McFadden proposes such a physics based solution for example, and one that I consider to be a strong possibility. It’s that qualia exist by means of electromagnetic radiation associated with a certain kind of synchronous neuron firing.
In any case given my strong naturalism I believe that science will some day convincingly validate some such proposal and so render the ideas of Dennett and popular theorists in general, “not of this world”. So I’d like you to consider the possibility that a natural reckoning for these theorists may be in the cards.
Then in terms of a consciousness hierarchy, as I just implied with Mike above, I suppose I am able to go this way epistemically somewhat, though not ontologically. Furthermore in a psychological sense it seems to me that my consciousness will always be useful to consider in a binary sense, which is to say either “on” or “off”. If so then this should be the case for all creatures as well. So as I just asked Mike, could you consider consciousness to exist in a discrete way ontologically, even if not yet epistemically? Thus ultimately no hierarchy but rather a defined sort of existence that’s either on or off?
You know how you catch more flies with honey? Well you also catch more shit by being an ass. That’s just an old saying I made up.
Don’t ever imply I or Dan Dennett don’t follow the evidence. You just sound ignorant.
–> “Of course the empirical evidence that I speak of has not yet been achieved, but let me illustrate for you the path that I think this will take.”
In other words, let you guess the metaphysical future???? No thanks.
–> The status quo in academia today and for the past several decades has….
Been nothing at all as you characterised it. You’re an outsider. Don’t try to lump academia into a monolith. It doesn’t sound informed.
–> Schwitzgebel’s USA consciousness as a whole…
Is a terrible piece of philosophy IMHO. I said so in my FAQ’s of Consciousness in Q #18. I believe the whole embodied consciousness movement agrees.
–> The way around this supernatural fate would be to either remain agnostic about the mechanical nature of qualia, or to indeed propose a physics based solution.
Yep. I did that too. But it’s pretty hard to test unless you can rewrite the rules of the universe to create a comparison condition.
–> Johnjoe McFadden proposes such a physics based solution for example, and one that I consider to be a strong possibility. It’s that qualia exist by means of electromagnetic radiation associated with a certain kind of synchronous neuron firing.
This doesn’t make any sense to me because of the aspects of consciousness that exist in life before neurons. Sorry. Unimpressed.
–> I believe that science will some day convincingly validate some such proposal and so render the ideas of Dennett and popular theorists in general, “not of this world”. So I’d like you to consider the possibility that a natural reckoning for these theorists may be in the cards.
Until then, stop being so preachy about it. The rest of us are waiting for evidence.
–> in a psychological sense it seems to me that my consciousness will always be useful to consider in a binary sense, which is to say either “on” or “off”.
In a psychological sense? Hahahahaha. Nice security blanket. Yeah, sure, your life is on or off too. But that doesn’t make life a single binary essence. It’s a complex phenomenon worth teasing apart and actually understanding.
–> could you consider consciousness to exist in a discrete way ontologically, even if not yet epistemically? Thus ultimately no hierarchy but rather a defined sort of existence that’s either on or off?
No. That’s so overly simple as to be useless.
Hey Ed, I know Philosopher Eric was being provocative, and maybe deserved a sharp reply, but using words like “ass” and “ignorant” is going beyond my comfort level for keeping the discussion friendly.
Do me a favor guys, don’t continue this particular interaction.
My apologies to you Mike. I’m happy to follow your recommendation.
That was well handled Mike. I am indeed provocative, though there are both civil and uncivil ways to deal with people with whom we disagree. Civil methods let the ideas themselves do the talking. Uncivil methods stray into the fallacy of using ad hominem to do the work. And yet given internet discussions in general I think it’s good to let less familiar people try to find their way to the civil side when the work of a given hero, for example, happens to be framed unfavorably. Of course you know that I smiled through that entire self righteous monologue. It certainly is better that I don’t respond since it would be difficult to bring the discussion back into the realm of civility. Surely this won’t happen again.
I’ve struggled with how to respond to this, which unfortunately has come with a cost. From what I can see, you set out to provoke an angry response, got one, and forced me to say something to the responder. And here you took some gratuitous swipes with your “non-response”, after I asked that this discussion end. So don’t be thinking you’re guilt free in this interaction.
You’re capable of cogent conversation Eric. Please exercise those muscles rather than picking fights.
Okay Mike, I will try to do better. I don’t feel guilt free and I am sorry about how things went.
I suppose you’re right that I went a bit “strawman” with that “special effects” observation Mike, as well as how your definition might admit a corpse. Well spotted. If I were going to go your way, instead of “like us” I think I’d use “like me”. I merely presume consciousness in others though it necessarily exists for me to me, or for you to you, or for a dog to it, and so on. Actually yes, I’ll agree that things that are conscious are “like me” in this regard. This is not to say like my body, but rather like something that my body seems to often create, or a dynamic phenomenal entity (that might very well exist by means of certain parameters of neuron produced electromagnetic radiation). Here existence can be good/bad and not otherwise, or a value element.
I think I just happened to use the furniture example because Schwitzgebel mentioned it in his paper, not because I (or even he) wanted to illustrate a soft edged idea. Actually it seems to me that my phenomenal experience should be quite hard edged, as in “on” or “off”. Thus it should be somewhat like the discrete existence of a light (or even some variety of neuron produced electromagnetic radiation). Sometimes I should be quite conscious, while other times my consciousness should be degraded heavily and yet still exist in some form. Then conversely if I were perfectly anesthetized, or killed, then “I” should not exist at all given the nonexistence of the associated physics. Schwitzgebel does discuss edge cases, though in order to keep them from affecting his innocent consciousness definition. Ultimately this definition does seem essentially binary however.
I realize that you (and Ed) are instead quite interested in hierarchy consciousness. This is an opposing idea however, or inherently not binary. For me “yes” and “no” will exist ontologically in any specific case given the associated physics, and even if only more and less conviction will exist epistemically for us by means of evidence. But it seems to me that your hierarchy is entirely epistemic. Could you go discrete in an ontological sense, or even then would you rigidly defend a “no fact of the matter” conception? Must Schwitzgebel’s conception be “in the eye of the beholder”, even if taken ontologically?
I am happy to hear that you think you understand innocent consciousness Mike. And if so you should also be able to observe that I do dissect this notion. As you know, many years ago I developed a psychology based model of how this works. Then only a couple of years ago I learned of a model that might describe the physics which underlies this psychology, authored by McFadden. So it seems to me that I do get into the recipes, ingredients, ovens, etc. for such “pizza”. I also enjoy questions on the subject if you have any.
My sense is that if/when McFadden’s proposal does become empirically validated in a clear way, future thinkers should look back and wonder how people today could have been so taken by the various popular “information without mechanical instantiation” theories. Beyond just hindsight however they should have all sorts of benefits that are missing in academia today. One of those benefits should be Schwitzgebel’s innocent conception of consciousness. Imagine the futility of trying to figure out the nature of something that specialists in the field are unable to effectively define.
I should note that you’re far from the only person to be inconsistent in assessing “like something” and “like us”. Eric S. himself, along with one of his guest bloggers, has done it, along with a number of other professional philosophers I’ve interacted with. A lot of them simply judge Nagel’s phrase from a famous paper decades ago by a different standard than they do any new upstart one.
I agree that it actually begins with “like me”. Once we accept that each other are conscious, then it broadens into the “like us”. But it’s all in relation to how we ourselves work.
As I’ve noted before, a key question for me is: is a proposed theory just describing a substrate identity, or is it describing what happens in that substrate, how it generates perception and decision making? I’m much more interested in the latter type of theory.
It might not be that people judge Nagel’s work by a different standard than yours, but rather that he’s saying something that they understand and consider useful, though not so much for “like us” and the resulting hierarchy that you provide. You must admit that you and Nagel end up going very different ways.
Consider this thought. From what you’re saying it sounds like the pinnacle of your consciousness hierarchy would be the “like me” idea, and then the very tip of that could even be restricted to Schwitzgebel’s innocent conception (as displayed by examples of positive and negative cases for personal assessment). Furthermore in that case we would seem to disarm illusionists and eliminativists. If they’re going to complain that consciousness doesn’t exist when extra things are tacked on, then fine, let’s not tack any of those extra things on. An innocent conception should be a productive conception. Furthermore if this would remove motivation for people to say “consciousness doesn’t exist”, that might be beneficial as well.
Observe that an innocent conception would not mandate that panpsychism is wrong, nor the “only biological” camp, nor the “always biological” camp, nor even the “information to information sans instantiation” status quo. Any of them could still be right under such a definition, though now everyone would be using the same basic “consciousness” idea. Indeed, a panpsychist who simply likes to define anything causal to be “conscious”, wouldn’t actually be a panpsychist in that sense. Instead they’d just be using the term differently and so shouldn’t cause quite as much trouble. Wouldn’t it be helpful for people to not talk past each other so often through diverging consciousness definitions?
“As I’ve noted before, a key question for me is, is a proposed theory just describing a substrate identity, or is it describing what happens in that substrate, how it generates perception and decision making. I’m much more interested in the latter type of theory.”
I’ve never been into substrate identity either. I only got into substrate function when I realized how logical it would be for consciousness to exist in the form of the right neuron-produced EM radiation. Why are all of your diverse subjective experiences jammed together into a single complex experience at any given moment? Perhaps because they all exist in the form of a single unified electromagnetic field that’s made up of those components.
Perceptions would be conscious input, like light, pain, itchiness, and so on. I can’t tell you why neurons that produce the right EM radiation would create perceptions of pain, light, itchiness, and so on, but just that this seems to be a strong possibility, and strangely one that’s even falsifiable. I consider perceptions to always be value based however in at least some sense. Pain can be strongly so with less of an information component, and “red” should be weakly so, though it tends to be more informative.
Once you have something with value based interests you should inherently have something that thinks in terms of those valued interests in at least some capacity. It’s theorized that electromagnetic based decisions here then affect the brain through ephaptic coupling to move muscles in desired ways, or even previously learned ways such as typing.
On “like something” vs “like us” and the response of some philosophers, maybe so. But if they’re going to be pedantic about the specific words in one expression, they should be pedantic about the other. If they’re going to give one interpretational charity, they should give the other at least the same degree of charity.
As I noted above, I think the innocent conception is a decent response to illusionism, notably in calling attention to the idea that if consciousness is an illusion, then the illusion is the experience. But it has the drawbacks you mention, in that it doesn’t rule out anything. Strictly speaking, it doesn’t even rule out illusionism, as Frankish pointed out in his response to that paper. And a common definition that rules nothing in or out is, I think, limited in its usefulness.
I agree that it’s uncharitable to be too pedantic with definitions, new or old. My first principle of epistemology implies this as well. But let me further suggest that the main issue here might be that the goals of your heuristic and Nagel’s are very different. If for example you’re talking with a person who has not been exposed to terms such as “qualia” and “phenomenal experience”, then Nagel’s phrase might be helpful to illustrate how they’re commonly understood. Here the person might observe that there is always “something it is like” regarding them and so gain a reasonable understanding. Furthermore the person might wonder if there is “something it is like” to exist as a salamander for example. If so then many ordinary people would say that it’s “conscious”, even if many in academia have gotten themselves so confused by “the trees” that they’ve lost sight of “the forest”.
If you were to instead tell this person that terms such as qualia may be considered “things like us”, then I think you’d need to do a great deal more explaining to get to the same definitional point. And of course you don’t mean for your heuristic to be used to illustrate how such terms are commonly understood. That’s the point. Some of us consider Nagel’s heuristic somewhat useful in the manner that I’ve just displayed. This may be considered the innocent tip to your consciousness hierarchy. Even though your full hierarchy might reference all sorts of ways that people refer to “consciousness” today, going forward I consider this one of several problems to overcome.
“But it has the drawbacks you mention, in that it doesn’t rule out anything.”
Actually I don’t consider an innocent consciousness definition which thus doesn’t rule out anything, to be a drawback. I consider this to be a strength. One minor reason for this is that people of all beliefs — naturalist, supernaturalist, idealist, panpsychist, and whatever else — could potentially accept it. And indeed, philosophers have so far failed scientists in terms of presenting any generally accepted metaphysical principles anyway, so in a sense it would be too restrictive to use a definition which addresses natural ideas only.
The most important reason to confine the “consciousness” term to Schwitzgebel’s innocent conception, I think, is because otherwise associated questions should be too confusing for scientists to grasp. How might you effectively study something when you can’t even conceptualize what you’re studying? I think Schwitzgebel’s definition would help found the work of associated scientists.
Consider the position of Keith Frankish. On one hand he says that consciousness doesn’t exist when you add various pet conceptions, such as “ineffable”. Then on the other hand he says that if you don’t add anything extra then what you’re left with is pointless. Thus his position seems unfalsifiable.
Let me emphasize that I don’t mind this in terms of traditional philosophy where nothing ever gets worked out. In this sense philosophy exists as an art to potentially appreciate throughout the ages. I’m fine with that. Scientists however need to get various things figured out so that humanity might use those understandings. If traditional philosophers happen to be in charge of defining “consciousness”, then this should thus be problematic for scientists who instead need effective answers. Thus a new breed of philosopher might be needed that can indeed figure out how to usefully define “consciousness”, as well as other questions that philosophers perpetually ponder. Perhaps we’d instead call them “meta scientists”.
In any case I consider innocent consciousness useful, not for what it rules out, but rather for what it rules in. It rules in the single element of reality that I can know exists with perfect certainty — all else could be wrong. This “Truth” could also reside as a foundation from which to base a posteriori speculation. Apparently here there are brain dynamics from which that single Truth emerges. Furthermore in a causal world it might be observed that this should not just emerge through the processing of certain information into other information, but a processing that also animates the right kind of physics. It’s this physics which could potentially be measured empirically. Otherwise a given proposal should remain unfalsifiable.
Philosopher Eric, something doesn’t have to sound hot-headed to be uncivil. Something can sound even tempered but still be meant, in an uncivil manner, to elicit a reaction.
The following quotes of yours sound ad hominem to me: “What if it were empirically demonstrated that one of your naturalistic heroes” and “If this were to empirically come to light, what would you do? Would you stand by him anyway?”. No reason to assume such a stance of anyone, nor to bring it into a discussion.
I’ve noticed civility to be in the eye of the beholder, and namely “Our side is civil while their side is not”. Therefore Astronomer Eric, I suppose it is possible that I haven’t been able to see my own incivility here. I suppose objective parties could decide. The way I try to measure this however is to ponder the behavior of one of my own heroes. Have I been more uncivil in my struggles against a failed status quo (failed in my eyes of course), than Mahatma Gandhi was to his status quo? If so then I have indeed failed. Civility brings us far more power than incivility, as demonstrated by his amazing achievements.
It was just some advice, my good sir, that you can choose to take or not. I’ll repeat it once more for good measure: In a social situation, it’s best not to assume something about someone and announce it directly in an interaction with them (we can’t help assuming things, but we can keep them to ourselves). It’s kind of a social taboo, sort of like if you were to have a conversation with an acquaintance, but instead of talking with them at a normal distance, you got right up close to them and whispered everything in their ear. It’s kind of an invasion of personal space and makes people uncomfortable. Assuming someone is someone else’s hero (especially when using that assumption in a conversation that is argumentative in nature) feels uncomfortable. It’s provocative, and not in a good way.
A better way for you to measure your potential levels of incivility might be to see how people react to what you say. But again, this is just some advice that you are welcome to take or not.
Here’s my definition of consciousness which seems coherent with an objective and subjective understanding of what it is and how it may work.
To be conscious of something is to know that we know it.
Therefore to understand what it means to be conscious we need to define what it is to know.
We know something if we have instantiated in our mind a mental structure that enables us to pay attention to significant inputs, discriminate different values of those inputs, and produce useful outputs in order to give us control relative to that thing. The mental structure takes the physical form, in the brain, of a connected set of neurons firing in a particular pattern.
For example, if we drive, we pay attention to the road in order to move the steering wheel and pedals to control the car to keep it safely moving along the road.
When we know that we know, we have instantiated in our mind a second mental structure that enables us to pay attention to the first mental structure, and produce outputs that give us control relative to it.
This amounts to applying the mechanism of knowing not to the external, sensed world, but to the internal representations of our mind.
In the example of driving a car, we deploy this sort of structure when an unusual situation causes us to pay attention to our options.
Simply knowing enables us to act in response to the existence of a stimulus, without knowing that we know about it. This could be termed ‘first order knowing’.
Knowing that we know enables us to respond to and communicate about our own mental structures. We can also externalise them, enabling learning and the storing and sharing of knowledge. This could be considered as ‘second order knowing’.
Something else is implicit in this definition of consciousness. Since I know that I know, I know that there is an ‘I’ (me). I am that which knows; and knows that it knows.
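Purely as a toy illustration of the two-level structure just described (every name and number here is invented for the sketch, not anything from the comment itself), first-order and second-order knowing might be caricatured as one controller attending to the world and a second controller attending to the first:

```python
# Toy caricature of "knowing" vs "knowing that we know".
# All names and numbers are invented for illustration only.

class Knower:
    """Attends to an input source, discriminates its value, produces an output."""
    def __init__(self, source, respond):
        self.source = source      # callable returning the attended-to value
        self.respond = respond    # maps a discriminated value to an output
        self.last_value = None

    def step(self):
        self.last_value = self.source()
        return self.respond(self.last_value)

# First-order knowing: track the road and steer (the driving example above).
road_position = lambda: 0.3                  # how far off-center we've drifted
steer = lambda drift: -drift                 # steer against the drift
driver = Knower(road_position, steer)

# Second-order knowing: a structure attending to the first structure itself.
monitor = Knower(lambda: driver.last_value,
                 lambda v: "unusual!" if v is not None and abs(v) > 0.2 else "ok")

print(driver.step())   # -0.3 (first-order response to the world)
print(monitor.step())  # unusual! (second-order response about the first knower)
```

The point of the sketch is only that the second structure's "input" is the first structure's state, not the external world, matching the driving example where an unusual situation makes us attend to our own options.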
That’s along the lines of the self reflection option I mentioned in the post. It’s most compatible with higher order thought theories. It also resonates with Michael Graziano’s attention schema theory, which itself can be considered a higher order thought theory. (Although HOTT advocates don’t seem keen to adopt AST as one of their own.)
It’s a pretty sparse view of consciousness. It may not leave much room for it to be pervasive in the animal kingdom, since an animal needs to have some ability for introspection. It’s possible some great apes have that capability, but isolating it in other species has been problematic, although weaker, less comprehensive forms of metacognition have been observed, reportedly all the way down to rats.
It also means that someone with brain injuries who has lost the ability to introspect shouldn’t be considered conscious, even if they can navigate their environment successfully. It’s not clear to me that they could meaningfully communicate with language in that state, but they may be able to use language in a habitual sense, just not in a manner that could discuss their own mental states.
This is a view I gave serious consideration to for a while. In the end, it felt too restrictive. But that doesn’t mean it’s wrong. It’s all in what you decide is necessary for the label “conscious”.
As usual I’m pretty much in agreement with your response to Anthis’ paper. I think the paper did a very good job of pointing out the bind of the consciousness discussion: the term “consciousness” is usually defined in vague terms, but then the questions asked (is “X” conscious?) require an exact definition. I think the whole discussion would be better served if it were couched in terms of pattern recognition, a la Dennett’s Real Patterns.
“Consciousness” is a vague term because it, like all categories, is determined by a pattern recognition system [psst … unitracker]. For a given pattern recognition unit there may be a set of properties which contribute to the pattern, say, A,B,C,D, and E, but maybe having any 3 of the 5 is sufficient to trigger the pattern. And some people’s patterns may be different, for example, some saying that “A” is absolutely required in every case.
The “easy” (so, scientific) problems are identifying those properties. The hard problem comes about for other reasons I can get into if desired.
[my project is to determine the minimum properties found in everyone’s pattern for consciousness, the psychule]
I like the point about pattern recognition. Maybe there are X number of factors, with some combinations being more germane than others, so it’s probably more complex than a simple majority present type mechanism. For example, some form of affect may have a bigger role than episodic memory.
But even here, I don’t know that everyone’s factors and weightings will be the same. Just based on the discussions I’ve seen and participated in, some people seem singularly focused on affects, others on perception, attention, or memory, yet others insist on introspection. By what standard do we say anyone is right or wrong? It doesn’t seem like there’s any strict fact of the matter.
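The weighted, observer-relative pattern idea can be made concrete with a toy sketch. The factor names, weights, and the 0.5 threshold below are illustrative assumptions only, not values from any actual theory:

```python
# Toy sketch: "consciousness" as an observer-relative pattern match.
# Factor names, weights, and the threshold are illustrative assumptions.

def seems_conscious(factors, weights, threshold=0.5):
    """Does this observer's weighted pattern trigger on these factors?"""
    score = sum(weights.get(f, 0.0) for f in factors)
    total = sum(weights.values())
    return total > 0 and score / total >= threshold

# Two hypothetical observers with different weightings over the same factors.
affect_focused = {"affect": 0.5, "perception": 0.2, "attention": 0.1,
                  "memory": 0.1, "introspection": 0.1}
introspection_focused = {"affect": 0.1, "perception": 0.1, "attention": 0.1,
                         "memory": 0.1, "introspection": 0.6}

# A creature with affect, perception, and attention but no introspection:
creature = {"affect", "perception", "attention"}
print(seems_conscious(creature, affect_focused))         # True
print(seems_conscious(creature, introspection_focused))  # False
```

The same creature triggers one observer's pattern and not the other's, with no further fact of the matter to appeal to, which is the point being made above.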
I am kind of curious about the reasons you see for the hard problem. It just seems like all the easy problems combined. It seems distinct because it’s not easy to think about all the solutions interacting with each other to produce the whole. At least that’s my current conclusion.
The hard problem results because the properties of consciousness involve information processing, which brings with it the software/hardware, mind/body issue. I think many people don’t understand that “subjectivity” and “feelings” imply the software (physically multiply realizable) perspective. When you say “I see red”, that’s the software perspective talking. Reference to “what it is like” refers to a pattern of unitrackers. Some unitrackers are more “like” others. Understanding this perspective is “hard”.
Not sure I’m following this, so apparently it’s hard for me too. I do think to say “I see red” requires sensory processing and triggering of associations, many of which are affective reactions. Redness is a categorizing conclusion reached by sensory regions which in turn cascade into additional associated conclusions in many other regions, all adding up to the experience of red. Understanding that red is a categorizing conclusion, an overall pattern match in its own right, now that’s hard, requiring a mental shift many aren’t willing to make.
But do you recognize that all of that is substrate independent? I think that’s the hard part. That a robot that uses different hardware to do the same processing, including “affective” reactions, would have the same experience of red.
I certainly recognize that substrate independence is possible, even probable, but are you saying something about this scenario necessitates it?
Doesn’t it require an ability to “feel” red?
I’m becoming more comfortable with the idea of describing consciousness as “feeling” since I began thinking about how different aspects of consciousness get rendered different ways. Too often we fall back on visual examples, almost always color.
Redness isn’t a categorizing conclusion. It is simply a language category for neural processes that became rendered as the feeling of what we call “red”.
That’s where the substrate independence falls apart. Unless the substrate can actually feel, then there isn’t consciousness.
I guess it depends on what you mean by “feel” or “feeling”. If you mean affective reactions, then I’d say the category of red in the brain includes those affective reactions. They come along anytime our early visual system reaches that particular conclusion. So to the question, “Why does it feel like something to experience red?”, the answer is, because our nervous system has associations between the particular stimulus and those affective reactions. It’s ultimately pattern completion.
That said, I know people sometimes mean something else by “feeling”. Often it comes down to being a synonym for conscious experience. In any case, I don’t really see the challenge to substrate independence. But maybe I’m overlooking something?
My meaning is more simple. I’m trying to arrive at a neutral term that includes sensory perception of all sorts (touch, smell, sight, hearing, etc), internal perceptions (hunger, fullness, pain, fear, etc), memories, imagination, reasoning, and intuitions.
-“Why does it feel like something to experience red?”, the answer is, because our nervous system has associations between the particular stimulus and those affective reactions. It’s ultimately pattern completion.-
I know you are bravely trying to convert something felt into a simple logical matching operation to justify your argument that any suitably programmed computer could do it but I don’t think living organisms work that way.
I like “perception” for that role. “Feeling” I think has baggage. But as I noted before, a lot of people do use it to refer to all conscious experience.
How do you think living organisms work if not that way?
Is a memory of red a perception?
“How do you think living organisms work if not that way?”
Not exactly sure what you are asking. The problem I have with pattern matching as an answer is that it doesn’t require consciousness or even what we think of as perception, unless you think, for example, that face recognition software produces consciousness. There is no “feeling” or “perceiving” required. A system can be programmed to select a best fit for something from an inventory of objects but that doesn’t mean that consciousness or perception are involved. Something else is required to produce the subjective experience. Presumably this “something else” is something necessary and required for living organisms.
So, yeah, there is a lot of “pattern completion” going on with conscious organisms and it may be a necessary part of consciousness but it is not by itself consciousness; otherwise, my iPhone would be conscious.
On a memory of red, I guess it depends on how we define “perception”, but it would definitely be a reconstruction of that perception. Of course, no two perceptions are ever identical, so the reconstruction might be on the commonalities, and is itself never complete.
On pattern completion, I think it depends on the pattern being completed. You’re comparing relatively simple patterns. I’m talking about a pattern that includes the sensory conclusions but also all the associated affective reactions, which hopefully sounds closer to what you might think of as “conscious”.
Yes, it would be some sort of mental activity but not exactly perception, as it is normally used, even if based on a previous perception or some imagined red based on many perceptions. That’s why I’m reaching for a more general term for the rendering/representation that would include memory, imagination, perception, and other mental activities (MIPO).
There’s a good chance the components of MIPO can’t be readily disentangled anyway. Buzsaki I think has argued that the categories of activities currently studied by neuroscience derive from vocabularies from the time of William James. So they probably don’t map well to how the brain actually works. The parts of MIPO are not likely distinct processes in the brain but completely woven together and entangled. There may be distinct processes but they may cut across completely different dimensions and we may need years and a paradigm shift to understand how.
On pattern completion, the only difference in your more inclusive pattern completion and the iPhone is its scope. The iPhone can complete the pattern of a face but once done it doesn’t add anything else like a general feeling of being glad to see me. We could code a glad/sad thread? Once it completed a glad pattern, would the iPhone then be conscious?
I don’t think so. That’s the problem. It still wouldn’t feel glad. It still wouldn’t feel.
Mike, not sure what you’re asking. I’d say information processing necessitates the existence of a subjective perspective, but not sure that answers your question.
@James Cross, I think what people refer to as “feeling” is (at least) a two-step process: 1. a pattern recognition, and 2. responses to that recognition. In Dennett’s terms, it requires the “and then what happens.” Responses can include a memory associating that recognition w/ a time and place, a systemic increase in a specific hormone, the initiation of an action plan to announce the recognition (“I see red”), etc. From the subjective perspective of the information processor, these things just happen. If you need to give a name to this constellation of happenings, “feeling” works pretty well.
James of S,
Not sure. I can see that straight information processing does imply substrate independence, but it seems like a subjective perspective requires a certain type of information processing architecture. Tetris is information processing, but not all information processing is Tetris. Same for a subjective perspective I think.
You’re in the pattern recognition camp too, I see.
I think the fact that so much of what the brain does is pattern recognition/response can lead one to the false conclusion that is all that it does.
Mike, there is more to a subjective perspective than just one information process. Here be weeds.
(From my personal theory) you need at least two processes: 1. the pattern recognition (or information integration, to use a different term) generates a sign vehicle, and 2. the sign vehicle is interpreted by linking it to an action which serves a purpose. Some refer to the sign vehicle as “the representation”, but there is no ”representing” without a specific interpretation. This interpretation, with its associated purpose, provides the subjective perspective.
Thanks James. Unfortunately the semiotics language doesn’t usually click with me. I have to refresh my memory on it every time it comes up. But I’m with you on action planning being an integral aspect.
I agree that consciousness is real from the perspective of the ‘software’, as James calls it. It only needs to exist to itself in the same terms that anything else exists to it. From the different perspective of an all-seeing external observer, the need is to describe how the ‘software’ is structured (running on the ‘hardware’) to achieve that: to give a ‘white box’ account of how it works, not to experience it for itself.
Excellent as always Mike. Thanks for sharing this paper with us. I think you are right on with your characterisation of the language game people play here. Don’t forget I’ve got my post up for ammunition for you with many, many of the various definitions of consciousness in history:
And the way I tackle this is to give as broad a definition of consciousness as possible (as I see it) and then try to delineate all the *aspects* of consciousness using a hierarchy. It’s like yours. It’s like Birch’s dimensions. I just think going through Tinbergen’s analysis gave me some new twists that are worth considering. The language and emphases used in this huge space though are seemingly always going to be fraught with misunderstanding.
Thanks Ed! And definitely I keep your post on definitions handy. I’ve sent a few people to it who challenged me on the idea that defining consciousness is something other than straightforward and wanted more citations. (Many people really seem to dislike the idea that it isn’t a simple ontological thing.)
One of these days I need to dive deeper into Tinbergen’s analysis. It didn’t really click for me when you discussed it, but that may be that I’m still in the “not getting it” stage with it. Definitely agreed that the language does seem like it will always be an issue.
Mike, great essay. I sincerely feel your frustration over the ambiguity and confusing uses of words (and their concepts) in this area of philosophy as well as others. The linguistic turn in philosophy created, in my opinion, a refreshing new set of tools for philosophical inquiry. It is difficult to dispute Wittgenstein and Austin’s arguments that our words and so our concepts are rule governed social constructs. In other words, language does not directly map reality. We start by sketching out some rules defining a concept and then most commonly enlist a word that is already in use—a word governed perhaps by a slightly different set or sets of rules. In short, I think that applies to “consciousness.” So, it seems to me that much of our contemporary philosophical disputes are concerned with on-going arguments about the rules governing our words and so the meaning of our concepts. That makes philosophy endlessly interesting to be sure. You refer to similar discussions among biologists. In physics a common sentiment is “shut up and do the math.” That’s one solution. I think when someone asks ‘what is consciousness?’, one initial approach, for a philosopher or a scientist, might be to inquire what purpose the question serves.
Thanks Matti! And excellent points. You reminded me of where the word “conscious” came from. It originally referred to people knowing something together, but later evolved into John Locke’s conception of knowledge of what passes in one’s own mind. But its meaning seemed to morph again many times over the centuries. The word also shares some etymological history with “conscience”, with the two forking from each other at some point.
I like asking the purpose of the question. Another technique might be to ask what Daniel Dennett calls “the hard question”, which is, “And then what happens?” So we might talk about some aspect of consciousness, but then how does that fit into what the organism does next? It situates such questions into how the concept relates to the organism’s survival and affordances.
Mike, I have no acquaintance with Dennett’s technique. But it may relate to what I said.
When I suggested that we ask what purpose the concept (word) serves, I was on a basic level—the actual process of concept formation. I was addressing the problem of semantics, the meaning of meaning as Hilary Putnam would say. (And my comments may be off the topic you really want to address even though “semantic” was in your title.) Nevertheless, we do not form concepts in a vacuum, pluck a word out of thin air, then come up with the rules for its proper use, its meaning. It works the other way around. We first start with a purpose or a need. Then we develop a rational classification scheme from the available information, selecting elements, that serve that purpose. For example, the explanatory and predictive purposes of science guides the formation of scientific concepts. But we have many purposes and needs as human beings. And, hence, there are some concepts, like consciousness, expressed in our language that may serve more than one purpose. My modest suggestion was intended as a basic starting point. I suspect Dennett’s technique may be further down the road from there.
Thanks Matti. I think I see the point you were making. Definitely words start out with a purpose. What’s interesting is that we rarely create a word from scratch, but appropriate one that often fits, sometimes in a metaphorical sense. Over time, that sense can evolve until the word itself is no longer a metaphor, but just the concept itself. The Greek and Latin words for soul (“psyche” and “anima”) originally referred to breath. Over time, they gradually evolved in meaning, forking between multiple concepts in some cases.
This actually makes reading historical texts problematic at times. Sometimes what someone meant by a word in, say, the 1700s, isn’t what we mean by it today, even if they’re writing in the English of that period. It’s why constitutional law isn’t the simple straightforward thing many take it to be.
Biologists may not agonize over the distinction between life and non-life, but astrobiologists certainly do. At some point, we may discover an alien life form and fail to realize that it’s alive. Similarly, we might one day create a conscious artificial intelligence and fail to realize it’s conscious.
I don’t really disagree with what you’re saying, or what it sounds like Anthis is saying. It makes sense to focus our scientific research on easier questions that we can actually answer, at least for now. But it also makes sense to think ahead a little about those harder questions, because what seems like only a philosophical issue now might become a real world problem later.
I know what you mean with the astrobiologists and artificial intelligence. Although I think we need to be careful with assuming there’s necessarily a fact of the matter answer to those questions. As I noted in the post, we can debate endlessly whether viruses (or viroids and prions) are alive. But these things exist and are worth studying regardless of which category we slot them into. It’s kind of like the debate about whether Pluto is a planet. In the end, it’s a category humans care about but is meaningless in nature.
In both the alien “life” and AI consciousness case, I think it will come down to our intuitions. Once we have to struggle with them to regard those things as not being alive or conscious, I think we’ll let them into the club. And we’ll be right to do it, because violating those intuitions too strongly might weaken them for how we treat each other.
Firstly, is the distinction between “inner” and “outer” conceptions of consciousness really any sort of a dichotomy? It seems to me that Davidson’s view is the more useful one: subjective/objective, 1st person/3rd person, mentalist/physicalist standpoints amount to discourses (in the general, philosophical rather than linguistic meaning of the term) addressing the same reality and equally valid in their discourse domains.
Secondly, I am unclear what exactly you mean by the term “eliminativism”. It seems to mean different things to different people. I am guessing that you use it as a shorthand for reductive physicalism. Is that right?
If yes, then is it just reduction of individual specific cases (tokens) or does it include type reductions as well? Personally I find Davidson’s argument plausible: token-level reduction is a necessary feature of physicalism, but physicalism does not entail type reduction. In fact, we have no reason to expect type reducibility between mentalist and physicalist discourses. One does not have to buy into the specifics of his Anomalous Monism to accept the argument on that level.
Finally, consciousness being in the eye of the beholder recalls to me the argument between Dennett and Searle over whether there is such a beast as “original” Brentano-style intentionality or whether all intentionality is, as Dennett argues, “conferred”. For my money Dennett is right, and thus his intentional stance is the only means of assessing the consciousness of entities other than ourselves. I.e. consciousness is not a property of an entity, but a property of its behaviour. And it seems to me that he is sometimes suggesting that our own consciousness is similarly “self-granted”. Which I think it is.
I used “external” and “internal” really as synonyms for “objective” and “subjective” respectively, so I’m good with the move you suggest.
Eliminativism generally refers to variants of the idea that since consciousness isn’t anything fundamental, it doesn’t exist. Many eliminativists and opponents of reductionism do see them as equivalent, but I personally don’t. I consider myself a reductionist but not an eliminativist. I have no issue with emergent phenomena (in a weak sense). So to me, a chair exists despite the fact that it’s really a collection of atoms. Likewise, I have no issue with saying consciousness exists, despite the fact that it isn’t anything fundamental, that we can dissect it and examine its components. I actually think that’s the only real way to understand it.
I’m not familiar with Davidson’s distinctions, so can’t really speak to them. I’m also not familiar with that particular argument between Dennett and Searle, but in general I’m much closer to Dennett’s views than Searle’s.
Sorry, I assumed you would be somewhat aware of Davidson’s contribution. Let me try to summarise my understanding of his seminal essay “Mental Events”.
Davidson makes three posits:
1. Mental goings-on are “anomalous” — they cannot be captured by any kind of strict laws, and thus there cannot be any law-like way of translating between events in mental and physical discourses. (So e.g., as mooted elsewhere in this topic’s discussion, there is no way to account for “perception” in strictly physical terms.)
2. Some mental events are physically causative, i.e. minds are not epiphenomenal.
3. There is nothing non-causal involved in any of this.
While these three posits appear to be mutually incompatible, Davidson argues (to my mind convincingly) that they are reconciled by strict physicalism — mental events and physical events are strictly identical. So we are not talking about some kind of property dualism.
The argument hinges on the fact that no discourse can be based solely on tokens (i.e. particulars) and must largely rely on types (universals). Here e.g. “chair” is a type, but a particular chair is a token instantiating the type “chair”. Thus any law or law-like procedure must be phrased in terms of types, or else consist in a humongous list of special cases; the latter might be possible in principle, but is not realisable in practice.
So a law-like translation between discourses must somehow align (or translate between) the types of the discourses in question. This may be possible in some cases, but in general there is no reason to expect it to be possible. Where it is not possible, we speak of weak emergence (think spaceships in Conway’s Game of Life).
This is a highly compressed summary, which of necessity fails to do justice to the matter. I hope it may serve as a useful pointer to a highly relevant line of thought.
Personally I prefer not to assume some anomalousness of the mental discourse, but to run the argument in reverse, starting from physicalism and concluding that it does not entail reducibility of “the mental” to “the physical” except solely on a token-token basis. As in the Game of Life: explaining any given spaceship in terms of GoL rules is trivial; explaining the notion of a spaceship is not, without constructing meta-concepts (e.g. “distance”) not actually reducible to the bare GoL rules. What is more, GoL spaceships cannot be *predicted* from its rules, because GoL is Turing-complete.
Thanks for summarizing Davidson. At first blush, he sounds like he falls into what David Chalmers called Type-B Materialism, which sees consciousness as not reducible to physics, yet still completely physical. (Sections 4 and 5 at this link: https://consc.net/papers/nature.html )
That said, I disagree with the first point. But I would, falling into what Chalmers labels a Type-A Materialist. I think the mental is completely reducible to the physical, although we still have a lot of work left to do it.
Yes, Davidson is a non-reductive physicalist. To be fair to him, his posit (1) is rhetorical. He notes that in general theories of mind can be classified into monist/dualist and reducible/non-reducible (“anomalous”) and that of the four resulting possibilities none falls into the monist/anomalous quadrant. So in “Mental Events” he sets out to defend the feasibility of that 4th general position. I think he would have done better to start from the physicalism end and show that it does not exclude non-reductive anomalousness, but he didn’t.
The resulting position is classed in philosophy as “predicate dualism” (meaning duality of discourses or, as you may prefer it, of semantic perspectives), as opposed to substance dualism and property dualism. But it is often confused with property dualism — hence for example Kim’s well known causal exclusion objection to Davidson.
For me, Davidson’s is a very deep insight re token identity not entailing type identity, effectively covering not just minds but all weakly emergent phenomena. (Strong emergence is, of course, no more than a wave of a magic wand and not worth taking seriously.)
Mike, I’m still working through the weeds you sent me towards. Although, to give an update, I’m still not finding any notion of an explanation for where any laws of nature that we have discovered, like physics, could explain how perceptions are generated. The closest I can come to what might be an understanding is that perceptions emerge from the various physical brain mechanisms. But that’s not very satisfying for me, as you warned. Would that be a type of strong emergence?
I’ve been thinking about something that I remember being discussed here somewhere; I think it’s a p-zombie. Could you point me to where you might have discussed p-zombies in detail with other folks here? In particular, where there has been a discussion/debate about p-zombies.
Actually, I found your blog from 2016 “The problems with Philosophical Zombies”. So that clears up my questions in that regard. An argument against zombies assumes that consciousness is a part of the entire causality package of behavior, whereas an argument for it assumes that consciousness can be separate from the entire causality package of behavior, no?
Maybe because I studied physics and the such as a younger lad, my issue is that I can’t see how any part of physics that I am aware of could lead to perceptions. We have forces and motion and energy and all the usual stuff, but none of that easily maps to the production of perceptions, which don’t fit nicely into any of those physical categories for me. I guess another way of putting it is that I don’t clearly see what function perceptions have in the chain of causality of behavior. Why I was considering the zombie aspect was that I can envision a well designed automaton to be able to produce all the same behaviors needed to survive that a conscious one could, but I can’t envision what extra function(s) a conscious organism gains that enhance its survivability over an automaton.
AstroEric! How are you?
You’re repeating the hard problem of consciousness here. Where does the feeling of subjectivity arise in nature out of just physics? Chalmers has speculated it could just be fundamental to the universe, i.e. we have to add it to our physics model like we did with electromagnetism. I think that is indeed the most parsimonious answer. Physical forces are *felt* by matter, but subjectivity only arises in a subject, i.e. all living organisms. (Quick aside: artificial life and extraterrestrial life would feel some subjectivity too then, it would just be a very different flavour than ours because the matter matters. cf drug use) I call this pandynamism, and think it nicely splits the difference between panpsychism (C is everywhere) and eliminativism (C is nowhere).
As for zombies, find the 2 papers on “The preposterousness of zombies” and “zimboes”. They cleared things up nicely.
As for automatons vs feelers, Mark Solms identified affect as “the wellspring of consciousness” which he says shows how life must feel its way through existence. I think that is what life does, but maybe it doesn’t have to? However, trace conditioning appears to be a type of learning that requires the subject to be consciously aware of itself before the learning can take place. Presumably a zombie could not do that.
Just last week I finished turning my consciousness blog series into a pdf. You can download it on my homepage now and search through it quickly for all these references to find more links.
(www.EvPhil.com for others who don’t know it)
Hey Ed! Doing well! I’ll elaborate in email.
I’m down for something fundamental to the universe in a pandynamism sense, but are its properties something that can be empirically studied?
Does Solms convincingly elaborate on why life must feel its way through existence? i.e. does he give a convincing function for the affects such that an automaton without affects would be outcompeted in the evolutionary realm? If it’s fundamental then my question is moot and let’s get on with the empirical studying. But if not, doesn’t it need a function such that evolution can select it?
I had never heard of trace conditioning before, so I may be completely missing the point of it, but based on the definition I read where the conditioned and unconditioned stimulus are separated by a time duration, don’t you only need memory to store the mental representation of the conditioned stimulus? I’m not understanding why you would also need to have perceptions.
I’d say no to Solms being convincing about that. It’s impossible to empirically compare. I can imagine how much better I learn by having an emotional experience — the more highly charged, the more impactful the event is to me — but that’s so much further down the line in the evolution of consciousness that it’s hard to project that back to life’s origins. Solms is just clear that affect goes all the way back to the beginning of life and we know what it does to us.
As for trace conditioning, simple summaries of it don’t make this clear. But here’s the relevant bit I quoted in my 19th post on the functions of consciousness:
Robert Clark and Larry Squire published the results of a classical Pavlovian conditioning experiment in humans. Two different test conditions were employed both using the eye-blink response to an air puff applied to the eye but with different temporal intervals between the air puff and a preceding, predictive stimulus (a tone): in one condition the tone remained on until the air puff was presented and both coterminated (“delay conditioning”); in the other a delay (500 or 1000 ms) was used between the offset of the tone and the onset of the air puff (“trace conditioning”). In both conditions experimental subjects were watching a silent movie while the stimuli were applied and questions regarding the contents of the silent movie and test conditions were asked after test completion. In the delay conditioning task, subjects acquired a conditioned response over 6 blocks of 20 trials: as soon as the tone appeared they showed the eye-blink response before the air puff arrived. This is a classical Pavlovian response in which a shift is noted from reaction to action, also known as specific anticipatory behaviour. This shift occurred whether subjects had knowledge of the temporal relationship between tone and air puff or not: both subjects who were aware of the temporal relationship — as judged by their answers to questions regarding this relationship after test completion — and subjects who were unaware of the relationship learned this experimental task. One could say that this type of conditioning occurs automatically, reflex-like, or implicitly. In contrast, the trace conditioning task required that the subjects explicitly knew or realized the temporal relationship between the tone and air puff. Only those subjects knowing this relationship explicitly — as judged by their answers to questions regarding this relationship — succeeded in performing the task; those that were not, failed. 
In other words, subjects had to be explicitly aware or have conscious knowledge of the task at hand in order to bring the shift about, that is, to respond after the tone and before the air puff. This is called explicit or declarative knowledge. … Clark and Squire (1998, p.79) suggested that “awareness is a prerequisite for successful trace conditioning”: (i) when explicitly briefed before trace conditioning about the temporal relationship between tone and air puff, all subjects learned the task, and faster than those without briefing; (ii) when performing an attention-demanding task, subjects did not acquire trace conditioning. (van den Bos)
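The temporal distinction in the quoted passage can be written out explicitly. A minimal sketch: the 500/1000 ms trace intervals come from the quote itself, while the 750 ms tone duration is an arbitrary assumption for illustration, not a figure from the experiment.

```python
# Sketch of the two trial structures from the Clark & Squire experiment
# as described above. Times are in ms, measured from tone onset.

def trial_timeline(condition, tone_ms=750, trace_ms=500):
    """Return (tone_offset, puff_onset) for one trial."""
    if condition == "delay":
        # Tone stays on until the air puff; the two coterminate.
        return tone_ms, tone_ms
    if condition == "trace":
        # Silent gap between tone offset and puff onset; bridging this
        # gap is what reportedly requires explicit awareness.
        return tone_ms, tone_ms + trace_ms
    raise ValueError(condition)

print(trial_timeline("delay"))                 # (750, 750)
print(trial_timeline("trace", trace_ms=1000))  # (750, 1750)
```

The only structural difference between the conditions is that silent interval, which is what makes the awareness finding so striking.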
The van den Box article is here:
“i.e. does he give a convincing function for the affects such that an automaton without affects would be outcompeted in the evolutionary realm”
The problem with this is that nature has not produced such an automaton. Likely nature is not able to produce such an entity because it would be unable to operate with the low energy requirements under which life operates.
I like where this is headed. Sounds functional to me. How might the presence of perceptions lower the energy requirements of an organism?
I don’t think by themselves “perceptions” lower energy requirements. I think they are the evolutionary solution to creating autonomous, intelligent and complex organisms given the limitations of low energy.
But it isn’t simply perceptions. The problem consciousness solves is how to decide and coordinate action in the face of multiple perceptions, large amounts of information that might possibly conflict. When the lion appears on the plain, you can’t simultaneously run, climb a tree, slink off into bushes, and pick up rocks and sticks to fight. You have to choose. One leg can’t run and the other leg climb.
Ed, I’m still working through the van den Box article and will respond to that when finished.
James, I’m not really sure what my definition of consciousness is and what part perceptions/qualia/affects/sensations/etc. play in the wider realm of consciousness (or whether they make up the entire realm of consciousness). But I am focusing specifically on qualia for the time being. The lens that I am looking at all of this through is the following (and I am eager for someone to comment on whether this lens is suitable or not): at the end of the day, qualia must serve some function that is ideally testable, and how it functions should ideally be able to be explained by empirical laws (which we may not know yet if the empirical laws don’t fall within currently known physical laws). If a functional explanation is not suitable enough, then I feel that it can be counter-argued by the zombie/automaton idea. As Mike said, the burden of the functionalist is “find a plausible (ideally testable) functional explanation for any phenomenal properties”.
You said, “I think they (perceptions) are the evolutionary solution to creating autonomous, intelligent and complex organisms given the limitations of low energy.” Why are they the solution and why would an automaton require too much energy to accomplish the same behavior without them?
You also said, “The problem consciousness solves is how to decide and coordinate action in the face of multiple perceptions, large amounts of information that might possibly conflict.” Why can’t this happen unconsciously?
—> qualia must serve some function that is ideally testable, and how it functions should ideally be able to be explained by empirical laws (which we may not know yet if the empirical laws don’t fall within currently known physical laws).
This just isn’t so. Does gravity serve some function that must be testable and explained by empirical laws? It has effects, but not an inherent function. It may be possible that subjectivity just can’t be avoided in this universe based on fundamental laws that haven’t been elucidated yet. That’s one of Chalmers’ hunches and I’m inclined to agree with that for now since that fits the evidence I see of the simplest aspects of consciousness (affect) arising right from the beginning of life and growing in a rather smooth way (through my hierarchy) into the elaborate feelings that you and I have today.
Enjoy the van den Bos article! (autocorrect turned that last name into a box in my post)
Ed, Yes! This is heading towards what I’m craving. You said, “Does gravity serve some function that must be testable and explained by empirical laws? It has effects, but not an inherent function.” I’m thinking of it like this. A hydro-electric dam serves some function using the force of gravity. It uses gravity to give kinetic energy to water which then is able to turn a turbine and transfer that kinetic energy to electrical energy. etc. Gravity itself doesn’t have a function, but something can be designed to have a function by utilizing gravity.
So I guess my wording may have not been the best. But I’m looking for the equivalent functionality explanation with living organisms (hydro-electric dam) and qualia (gravity).
Coming up with the hydro-electric dam analogy makes me see (I think) what might be wrong with a zombie analysis though. I guess it’s like asking about a zombie hydroelectric dam, or a dam that produces electricity without the force of gravity. haha.
So, if we’re going to place gravity and qualia on the same level, then I guess when I ask about how qualia are produced, to me it’s like asking how the force of gravity is produced. I know we aren’t currently completely clear about that either. We have Einstein’s relativity theories, and we have other theories dealing with hypothetical graviton particles and gravity waves, etc.
Ok, I have a ton of new reading on consciousness to do because of my inquires here, so I’m going to get to that for a long while before I continue probing further here. But I’ve been thinking a lot about the hydroelectric dam analogy and I want to expand that out as fully as possible before moving on.
Fundamental Entity in the universe:
1. Gravity
2. Consciousness (qualia)
System that utilizes Fundamental Entity:
1. Hydroelectric Dam
2. Living organism (such as a human)
Function of System that utilizes Fundamental Entity:
1. Gravity accelerates water, converting potential energy into kinetic energy and then into electrical energy through the various mechanisms of the Hydroelectric Dam system.
2. Based on some responses I’ve had here:
2a. Consciousness in the form of pain/pleasure motivates a living organism towards a particular behavior.
2b. Consciousness in the form of a visual image is the myriad of categorizing conclusions obtained from the analysis of an array of photons.
2c. Consciousness allows a living organism to decide and coordinate action in the face of multiple perceptions, large amounts of information that might possibly conflict.
What is the system called if the Fundamental Entity is missing (haha)?
1. A hydroelectric Zombie
2. A philosophical Zombie
What capabilities are lost if the Fundamental Entity is missing from the system?
1. A lack of gravity or other suitable force will not allow the water to accelerate. Thus no kinetic energy is gained that can be converted into electricity.
2a. A lack of consciousness will not allow a living organism to produce motivated behavior (is instinctual behavior still allowed?)
2b. A lack of consciousness will not allow a living organism to form conclusions about the properties of the array of photons entering its eye.
2c. A lack of consciousness will not allow a living organism to make decisions or coordinate actions from a large amount of conflicting information, especially since the organism will also not have multiple perceptions of such information.
Does this jive with folks here? Or is this a totally flawed analysis?
Oh, I forgot:
What can be measured empirically in regards to the system and the Fundamental Entity in the system.
1. The mass of the water can be measured. The acceleration of the water can be measured. The amount of kinetic energy in the water as it hits the turbine can be measured. The amount of electrical energy can be measured and then compared to the amount of kinetic energy, allowing a calculation of energy lost to friction in the system.
2. I have no idea.
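For item 1, the measurement chain can be made concrete with a rough worked example using the standard hydropower relation P = rho x g x Q x h x eta. Every number here (flow rate, head, the 90% efficiency) is a made-up illustrative value, not data from any real dam:

```python
# Illustrative hydroelectric power calculation; all inputs are assumed.
rho = 1000.0  # density of water, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2
Q = 500.0     # volumetric flow rate through the turbines, m^3/s
h = 100.0     # head: the height the water falls, m

# Rate at which gravity converts potential energy into kinetic energy.
kinetic_power = rho * g * Q * h  # watts

# Assumed turbine/generator efficiency; the shortfall corresponds to
# the "energy lost to friction" comparison mentioned above.
eta = 0.90
electrical_power = eta * kinetic_power
friction_losses = kinetic_power - electrical_power

print(kinetic_power, electrical_power, friction_losses)
```

This is the sense in which the dam has a function while gravity itself does not: every term above is measurable, and the design of the dam fixes how they combine.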
I don’t quite agree with the 2b and 2c lines of reasoning. The Unimagined Preposterousness of Zombies lists better things that a zombie would lack IMO.
As to how to measure gravity and consciousness, the force of gravity is well understood. In my series, I laid out the case for “biological forces” which describe how living organisms are motivated to move towards things that sustain life, and away from things that destroy life. So, you could come up with measurements for life-preserving behaviour and note how well or how poorly different structures of life are able to sense and respond to these forces. I know this sounds far out, but something exists if it causes changes in the universe. Once life arose, a new quality entered the universe — things that enhance life and things that destroy life. Sensing this quality causes action. Therefore, this quality has a kind of force. Note that even if the organism is unconscious to these forces, they can still tear life apart or help it flourish. I haven’t teased out all the differences between biological forces and physical forces, but I think there’s something there.
I don’t buy consciousness as fundamental (at least objectively), but I think you’re on the right track with most of the rest.
In terms of empirical support, the only thing we can get empirical support for is behavior, functionality, notably self report. Any other evidence we want to cite ultimately has to have a chain back to self report. Many will say brain scans, but brain scans are only useful once that pattern of activity has already been linked to self report and consciousness inferred.
Of course, if you reject the link between functionality and consciousness, then you don’t even have that, and we’re back to epiphenomenalism.
Mike — Can you say more about not seeing consciousness as fundamental? (And I didn’t follow why you put objectively in parentheses.) To me, we all agree the fundamental properties of electromagnetism arise automatically for certain arrangements and motions of matter. Why not the same for the feeling of subjectivity? (Only for different structures and material obviously.) I’m not saying we’ve ruled out all other possibilities and this is what it must be, but I do think it fits the evidence better than anything else right now. Your evidence may vary. I’d love to hear why.
To me “fundamental” implies irreducible. I think consciousness is reducible. It can be dissected. At least from the objective perspective. Subjectively from the inside, a perception of red can’t be reduced any further. But objectively we can get into the mechanics that lead to the conclusion of redness in that part of the visual field.
For the objective vs subjective reducibility distinction, my favorite example is a software bit. A bit, from within the operations of software, is irreducible. But when viewed from outside that context, looking at the hardware, a bit is typically a transistor, which itself has components.
Interesting objection Mike! I honestly haven’t thought about irreducibility since I agree that our consciousness is reducible to lots of bits and parts. At first reaction, I think it’s the mere fact of subjectivity that is fundamental, but the experience of all that subjectivity changes along with the subject. Kind of like the way gravity changes in the different contexts to proximity of large or small objects, or electromagnetism changes in the context of copper vs. other types of wiring. But I’ll have to think about that some more. Thanks.
Thanks Ed. If by “subjectivity” here, you mean information is always collected from a particular vantage point and in a particular manner, and so always constrained and molded by that perspective, I could see a case for that being fundamental. But if so, it seems like it would apply to any physical system, not just ones we typically consider to have subjective experience. (This is getting close to Carlos Rovelli’s ontology, although he adds some QM specific aspects.)
I never reduce my descriptions to information (it’s an abstraction, not the thing in itself IMO) so that’s not quite right. But I’m gonna step away for a while.
Ed, while reading The Unimagined Preposterousness of Zombies, am I correct in interpreting paragraph 5 in it to be positing the same reasoning as 2c.? I got 2c from James C’s response to me here, so maybe he can verify this???
Mike, if not fundamental, could it at least be emergent from something fundamental? Just as a hurricane and its peculiar behavior is a chaotic emergence (I’m reading Chaos by James Gleick right now) of the countless individual particles that make it up and the forces that cause them to interact with each other, could consciousness also be a chaotic emergence of the individual brain “particles” and the forces that cause them to interact? I say chaotic because, like weather patterns, behaviors of living organisms are impossible to predict accurately, and the best we can seem to do is to notice generalized patterns, like the direction of rotation of a hurricane or that a person is likely to eat when hungry. And just as a “pressure system” pushes a weather pattern in a certain direction, could Ed’s notion of a “biological force” push living organisms toward certain behaviors? I feel as though a pressure system is itself an emergent entity arising from the countless individual forces of all the particles in the weather system, and as such, a “biological force” may also be an emergent entity arising from all the countless individual forces of all the particles in a living system.
And in the case of gravity, I don’t think we actually measure it directly either. We measure the effect it has on the motion of the mass it is interacting with. So in that sense, it makes sense to me that we also couldn’t measure consciousness directly, but instead we can measure the effects of it, such as the behaviors you mentioned.
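The unpredictability being described here can be seen in the logistic map, a one-line chaotic system that features prominently in Gleick's book. This is a generic illustration of sensitive dependence on initial conditions, not anything specific to brains or weather:

```python
# Logistic map x -> r*x*(1-x) in its fully chaotic regime (r = 4).
# Two trajectories starting 1e-10 apart diverge to order-1 differences,
# even though each step is a trivial deterministic rule.

r = 4.0
x, y = 0.2, 0.2 + 1e-10  # nearly identical initial conditions
max_gap = 0.0

for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # the tiny initial difference has blown up
```

So, as with weather, knowing the rule exactly does not buy long-range prediction; at best one notices generalized patterns in the orbit.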
I would say no about 2c. The way you have written it, doesn’t jive to me with the ability of lots of animals to make decisions in the face of large numbers of inputs. They simply have to respond to the biggest signal. Or the largest internal reverberation of some learned response. That could theoretically happen without subjective awareness and making choices. Free will skeptics would argue that it already does.
I’m not sure how you are counting the paragraphs in The UPofZ, but that document is more about showing how our conscious wonderings about the feelings of subjectivity do influence our behaviour and they could not possibly arise in zombies. These exact discussions we are all having about subjectivity could NEVER be discussed by zombies. (So they aren’t indistinguishable from us!) And perhaps our feelings of fear and lust could also not be felt on command by zombies in the way that we feel these things when we consciously think about public speaking or an upcoming date. Without subjectivity, where would the stimulus for these emotional reactions come from?
I think that captures UPofZ. I haven’t read it in a while. I discuss it with links in Q #8 of my FAQ’s of Consciousness.
I’m fine with discussing emergence (in weak sense), but I don’t see it, in and of itself, as an explanation. One of the things I think we have to resist when working to explain consciousness is invoking what Michael Graziano calls “the magic step”. In that sense, I think we can do much better than just saying “emergence”. Of course, doing that means clarifying what we mean by “consciousness”, even when just considering it subjectively.
Haha, Ed, I didn’t get to read your response before I sent mine. But I also agree that I didn’t understand the “objectively” part in parentheses.
Ed, it was this paragraph:
“In creatures as cognitively complex as us (with our roughly inexhaustible capacity for meta-reflections and higher-order competitions between policies, meta-policies, etc.), the ‘blood-is-about-to-be-lost sensors’ and their kin cannot simply be ‘hooked up to the right action paths’ as Flanagan and Polger put it. These ancestral hookups, which serve quite well in simpler creatures, do exist in us as part of our innate wiring, but they can be overridden by a sufficiently potent coalition of competing drafts (in my Multiple Drafts Model). So it is that in us, there is an informationally sensitive tug-of-war, which leaves many traces, provokes many reflections (about the detailed awfulness of it all), permits many protocols (and poignant complaints), adjusts many behavioural propensities (such as desires to reform one’s ways, or seek revenge), lays down many (‘false’) memories about the way the pain felt, etc., etc. Notice that ‘experiential sensitivity’ (whatever that might be) never comes into my account – the ‘conscious feelings themselves’ get distributed around into the powers and dispositional effects of all the extra informational machinery – machinery lacking in the case of simple organisms. My account is entirely in terms of complications that naturally arise in the competitive interaction between elements in the Multiple Drafts Model.”
But maybe I am just interpreting it incorrectly.
Thanks Mike. Is there ever a case where emergence (in the weak sense), in and of itself, is an explanation for anything? Using weather patterns for example, despite saying that a hurricane emerges from whatever is happening to the atmosphere at its location, one could always look into more detail all the way down to the individual atmospheric particles to explain the higher order properties of a hurricane.
I thought it was implied in the concept of emergence (at least in weak emergence, which is what I was referring to in the particular case since I assumed that weather patterns weakly emerge from the constituent atmospheric particles…not the strong emergence I mentioned earlier in a different branch of comments) that there is no “magic leap” from lower order level to higher order level.
I understand what you’re saying, but I think the important thing is to understand how the emergence happens. If we don’t, then it seems we’re in danger of just papering over an explanatory gap. For example, we know thermodynamics is emergent from particle physics. But we understand how it emerges. That doesn’t mean we abandon thermodynamic laws as a tool, since they remain a far more productive model for macroscopic systems than the particle ones.
Many will say this is impossible in the case of consciousness. Maybe, in this view, we’ll just have to be satisfied with emergence as the explanation. I think anyone who thinks this way should read about microbiology. It once seemed utterly incomprehensible how biological life could emerge from chemistry and physics. Surely some kind of vitalistic force was required. But scientists over the years just kept on probing. While there remain plenty of gaps in knowledge, the idea that we can explain biology in terms of chemistry and physics isn’t questioned anymore. Getting into the details of microbiology means getting into organic chemistry.
Of course, the emergence of biology from chemistry is hideously complicated. I fear the emergence of the mind will be at least as complicated. The details may never be graspable without a lot of effort.
I am the wrong Mike, but if I may butt in… Whether weak emergence explains anything depends on what one considers to be the role of an explanation. Let me expand on a specific example, which I’d already mentioned in reply to our host.
Conway’s Game of Life is known to be Turing-complete (see e.g. the Wikipedia entry on it). This has an interesting consequence: given two patterns A and B, there is no algorithmic way of deciding whether or not A will ever evolve to B. This means that GoL’s spaceships (patterns which replicate themselves with a displacement) cannot be predicted from the GoL rules. Once observed, their behaviour can be, of course, easily reduced to a series of steps fully conforming to the rules, but that’s a one-way trip. Knowing the rules (i.e. the complete “physics” of the GoL), spaceships (and indeed, their very existence!) have to be discovered experimentally — there is no other way.
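To make the spaceship example concrete, here is a minimal sketch of the GoL rules in Python (purely illustrative; the coordinate scheme and helper names are my own). The glider is the smallest spaceship: after four applications of the rules it reappears shifted diagonally, even though the rules themselves say nothing about shapes or motion.

```python
# Minimal Game of Life stepper. Live cells are stored as a set of
# (x, y) coordinates on an unbounded grid.
from itertools import product

def step(live):
    """Apply Conway's rules once: a live cell with 2 or 3 live
    neighbours survives; a dead cell with exactly 3 is born."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The classic glider; after 4 steps it is the same shape moved by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(cells == shifted)  # True -- yet "shape" and "motion" appear
                         # nowhere in the rules above
```

Note the reduction really is one-way: given the glider you can verify each step against the rules, but nothing in `step` itself points you to the glider.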
Likewise, the suggestion is that minds have a similar relationship to matter. While there is nothing in the rules of physics suggestive of consciousness (just like there is nothing in the rules of GoL suggestive of spaceships) it is still perfectly feasible for consciousness to arise from pure physics (and to be fully reconcilable with physics, once it does), without being in any way *predictable* from pure physics.
Does that help?
I agree completely. Thanks for linking me to all the extra reading on this topic! I have some work ahead of me, haha.
Thanks Eric. Thinking about it, your best bet may be to start with Dennett’s Consciousness Explained. There are issues there, but for reaching that mental shift we were discussing, his book, particularly chapter 12, may be the best at priming the pump.
Anyway, hope you enjoy the journey as much as I did.
Mike 1290, that’s super interesting. Now I want to play the Game of Life, haha. It sounds like chaos to me. I’m reading a book on chaos right now, so it is on my mind a lot these days. It’s why I used the analogy of weather to help illustrate what I was thinking about. Given the laws of physics, you can’t exactly predict any weather pattern either (or any non-linear system), but given a weather pattern it seems as though you could work backwards to explain how the laws of physics produced such a pattern. Am I on the right track with what you are saying, i.e. does chaos play a role, and is weather another suitable scenario like the Game of Life?
Now that I think about it, I think what fundamentally is bothering me, then, is that I don’t know of (and haven’t come across) any descriptions that I have understood that explain the backward path from consciousness to the laws of physics. I guess my best bet is in the resources Mike (our host) has linked me towards.
The known computational examples of weak emergence also rely on iterative application of a small set of rules, so in that sense they possibly fall into the same category as deterministic chaos, but they are distinct in that they all seem to feature Turing completeness. In their excellent “Figments of Reality” Stewart and Cohen characterise such cases of emergence as “ant country”, in reference to Langton’s Ant (again, see Wikipedia) with its road-building. You may wish to add that book to your reading list. 🙂
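Langton’s Ant is simple enough to try out in a few lines of Python. This is just a rough sketch with my own conventions (which way counts as “left” only mirrors the result): the two rules mention nothing but turning and flipping single cells, yet after roughly ten thousand chaotic steps the ant starts building its famous diagonal “highway” and heads off forever.

```python
# Langton's Ant: on a white cell turn right, on a black cell turn
# left; flip the cell's colour; step forward one cell.
def run_ant(steps):
    black = set()        # cells currently black (all start white)
    x, y = 0, 0          # ant position
    dx, dy = 0, -1       # current heading
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx      # turn one way ("left")
            black.discard((x, y))
        else:
            dx, dy = dy, -dx      # turn the other way ("right")
            black.add((x, y))
        x, y = x + dx, y + dy
    return (x, y)

# Early on the ant mills around near the origin in apparent chaos;
# by ~12,000 steps it is out on the highway, receding steadily.
print(run_ant(12000))
```

As with GoL spaceships, the highway had to be discovered by running the system; nobody has derived it from the rules.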
BTW, I see our host has recommended Dennett’s “Consciousness Explained”. I concur, but just to set expectations: it is a lousy title — don’t expect the book to explain consciousness, which is not to detract from Dennett’s ideas.
Re your concern about a lack of explanations proceeding backwards from consciousness to physics (or at least to neurology). There are various answers to that. The most popular one is “it’s a *very* complex subject, and we’ve only been seriously looking into it for less than a century”. A more philosophical response is: “define consciousness, then we might start looking into explaining it”. Personally, I favour a stronger version of the latter and it goes like this…
Science has taught us that the universe is far stranger than it would appear to us, what with the talk of curved space-time, absence of a universal “now”, and quantum weirdness to boot. And closer to home, it has become clear that our perception of the world is not “direct” as some philosophers perhaps still have it, but a construct — what Anil Seth calls a controlled hallucination. Yet we persist in assuming that consciousness is as it appears to us. So maybe in trying to explain it we are chasing the pot of gold at rainbow’s end. To make progress we may need to ditch that assumption and to accept that our sense of ourselves as some kind of “unitary” conscious beings is also a construct. This is what Dennett (and others) suggests. And I agree with them.
Thanks Mike 1290! This is the most satisfying answer to the questions I’ve been asking so far about consciousness.
I’ll definitely add Figments of Reality to my list! Sounds interesting!
In terms of defining consciousness first before moving backwards from consciousness to physics, I’ve been trying to find a good analogy to help me define consciousness. In these discussions so far, I’ve been using 1. gravity as analogous to consciousness and 2. weather as analogous to either consciousness or the behavior that arises because of consciousness (I’m not sure which is more suitable, if either). Do you have an analogy you tend to use as well that may be suitable in assisting to define consciousness?
Ok! I had some shower thoughts so let me bring those into play. First off, Mike 1290, I think I realized the answer to my own question about what you might use for an analogy. Might it be Game of Life? haha. If so, duh. And I apologize for when I may be slow on the uptake. I am a novice here and I think most others here are much more proficient. I hope I’m not too annoying, but I see myself as a student here, not really a contributor to the limits of what is known about consciousness. Feel free to treat me as a student. 🙂 Another thing that I think contributes to my slowness on the uptake is sort of a deer in the headlights syndrome when tons of new information that I need to consume is presented to me. But please keep that information coming. I’ll get there eventually. Now on to the rest of my thoughts.
So, anyway, I had more thoughts than just a realization that Game of Life might be your analogy. I then started thinking about the implications of that analogy. First off, you said,
–> “Likewise, the suggestion is that minds have a similar relationship to matter. While there is nothing in the rules of physics suggestive of consciousness (just like there is nothing in the rules of GoL suggestive of spaceships) it is still perfectly feasible for consciousness to arise from pure physics (and to be fully reconcilable with physics, once it does), without being in any way *predictable* from pure physics.”
I was thinking about the rules of Game of Life. The rules basically describe what determines whether or not a cell is live or dead. So, despite the fact that we can’t predict things like spaceships, we know that what is produced can only be derivative of a cell being occupied or not. Which means that the only things that can emerge are shapes and shapes in motion based upon which cells are occupied at any tick and over a period of multiple ticks. A spaceship is one such shape in motion.
Likewise but I guess a little more complicated, the rules of Game of Physics basically describe what type of (and whether a) particle of matter will occupy a location in space at any moment. Strangely enough, I think the Game of Physics might be essentially the same thing as the Game of Life, but with different rules and more options for cell occupation than just live or dead. I am not a novice in physics, but I am also not even close to a master, so please fill in any holes I may leave behind. Therefore, in terms of physics, the only things I can see that emerge from it are shapes of matter each moment in time and motion of shapes of matter over a period of time. Thus, a hurricane (or even a Spacex spaceship) is the equivalent in the Game of Physics to spaceships in the Game of Life.
With that in mind then, at this point I’m still not seeing how qualia/perceptions could emerge from the shape of matter or even the motion of the shapes of matter unless qualia themselves are shapes of matter or shapes of matter in motion. That may just be a limitation in my imagination, but I still can’t see it. If they can’t emerge from this, wouldn’t there need to be a different game to describe them than the Game of Physics?
Then my thoughts moved towards the concept of analysis. I feel like I have read Mike (our host), and likely others, discuss this somewhere before. But because I am a novice, I can’t really remember the details of it well. Anyway, I was thinking about the notion that consciousness is an *analysis* of the “motion of shapes” notion of the Game of Physics (ignoring much/all of the rules of quantum physics for this discussion). Take vision for example. It seems to me that vision is all about motion of particles all the way from the object we see to the parts of our brain that analyze that motion. So, if I see a banana, the atoms on the surface of the banana move in such a way and release photons that cause the atoms of air next to the banana skin surface to move in such a way, and so on like dominoes through all the atoms of air to the eye, and then still further into the eye and eventually into the brain through atoms in neurons and such. But that’s still all shapes and shapes in motion. Is the analysis of these “shapes and motion” shapes and motion itself (using the rules of the Game of Physics)? I’m having a hard time seeing qualia as playing by the rules of the Game of Physics. But, as you said, Mike 1290, maybe I am persisting in assuming that consciousness is as it appears to me.
Another thought I had, which is related, is that the computers we build don’t really analyze anything, do they? They just move matter around using physics, and it’s still up to us at the end to analyze the results of how they move matter. Might they not just make it easier for us to analyze things, by changing the motion of particles such that the motion of those particles is easier for us to analyze?
Sorry if my use of the phrase “motion of shapes of particles” etc. was annoying, but I was trying to keep the analogy of the physics to the Game of Life going throughout my discussion.
“I hope I’m not too annoying, but I see myself as a student here, not really a contributor to the limits of what is known about consciousness.”
Hey A Eric, just jumping in here to note that we’re all students here, which very much includes me. You could probably teach me a lot about physics.
On analyzing the motions of particles and whether computers do it, a lot seems to depend on what we mean by “analyze”. My laptop uses face recognition to log me in. For some reason, moving my face during this process helps the process to completion. Would that count as analysis? If not, what’s missing?
Regarding the relationship of mind/body to physics, I’m playing around at the moment with the idea that it may be useful to think more mathematically of mind as a mapping from the 4 dimensions of physics (spacetime) to a different set of dimensions (a few thoughts in the next paragraph of what they might be). That could mean we can take an approach to the mind that is more like physics than philosophy, but applied to those new dimensions, potentially with just as much rigour.
The sort of dimensions I have in mind:
– in place of time: time chopped up into cognitive cycles of a few 100 ms, sufficient for all parts of the brain to have potentially contributed to the outcome (enabling behaviour as a ‘thing as a whole’), and ending each cycle with a new coherent representation; that representation to include partial representations of past, present and future self and world, providing a sense of longer term continuity
– in place of x, y, z positions, discrete spectrums of values parameterising individual labelled patterns, said patterns switching on and off over time; the values could represent tracked position (relative to the subject), frequency of sound, or position in some other feature space; they would be used both for following things with attention at the sensory end, and parameterising outputs at the motor end; this may well be similar to unitrackers that are sometimes mentioned in comments here
– an additional dimension of valence, i.e. a pleasure or pain scale (which underpins feelings and emotions); this provides the added value of a criterion to select between possible actions and a sense of purpose ‘for me’.
I wonder if such dimensions might be enough to explain how we get the sense that our continuity over time, our existence as a single ‘thing as a whole’ and our sense of what is positive and negative for us arises, whilst being mappable back to physics.
What makes these dimensions better for analysing mind than physics itself is the high level of discontinuity and nonlinearity in the brain, which makes an approach to modelling based on partial differential equations rapidly lose predictive power.
(This should identify me as Mike Arnautov again, instead of mla1290!)
Eric, no, I don’t think GoL is all that good as a model for consciousness. But it is good for challenging assumptions, often unconscious ones, which make us baulk at physicalist views of minds. (E.g. Dennett uses it in his “Freedom Evolves” to challenge the notion that determinism is incompatible with our experience of being “free agents”.) In that respect, it still has a bit more for you, I think. You say “the only things that can emerge are shapes and shapes in motion based upon which cells are occupied at any tick and over a period of multiple ticks”. But in the “physics” of GoL there are no such notions as shapes or motion. You have imported those from outside, from your experience of our world. How do you suggest to define them *strictly* in terms of GoL rules? I can’t see a way of doing it which does not involve a number of posits in addition to those rules. The lesson, I think, can be generalised to: you need a mind to “see” a mind.
Let me give you another useful computing example. Our digital computers work by flipping binary bits according to the rules of binary logic. Yet it is trivially simple to write a program which implements ternary logic (or, rather less trivially, any other kind of logic up to and including complex non-monotonic logics). How come? There is nothing ternary (or non-monotonic) going on on the hardware level, yet the software behaves in ternary fashion! The answer starts by realising that the ternary program interprets pairs of bits as an atomic unit, holding values 0, 1 or 2 (the value 3 can never arise in its operation, so is excluded). But, and to me this is the killer question, *who* is doing the interpreting? The programmer? Nope — might be dead for all we know. Nobody does. Interpreting does not require an interpreter (or, as Dennett says, meaning does not entail a meaner).
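Here is one way the bit-pair trick might look, sketched in Python (an illustration of the idea only; the encoding and the choice of Kleene three-valued connectives are mine). Every operation below is ordinary binary manipulation, yet the behaviour that results is ternary:

```python
# Trits 0, 1, 2 encoded as bit pairs (hi, lo); the pattern (1, 1),
# which would mean 3, never arises in operation.
def encode(t):
    """Trit 0/1/2 -> bit pair."""
    return (t >> 1 & 1, t & 1)   # 0->(0,0), 1->(0,1), 2->(1,0)

def decode(pair):
    hi, lo = pair
    return (hi << 1) | lo

# Kleene three-valued logic: 0 = false, 1 = unknown, 2 = true.
def tand(a, b):
    """Ternary AND is the minimum of the two truth values."""
    return encode(min(decode(a), decode(b)))

def tor(a, b):
    """Ternary OR is the maximum."""
    return encode(max(decode(a), decode(b)))

print(tand(encode(2), encode(1)))  # true AND unknown -> unknown, (0, 1)
print(tor(encode(0), encode(1)))   # false OR unknown -> unknown, (0, 1)
```

At the “hardware” level nothing here is ternary; the ternary-ness exists only in the arrangement, which is the point of the killer question above.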
The parallel with your take on GoL seems clear: shapes and their motion are interpretations, brought in by us and yet somehow independent of us doing so. Dennett (again! :-)) explores this strange phenomenon in his paper “Real Patterns” (already mentioned by somebody, I think).
As for my preferred model, I have a few. The jokey/serious one is the notion of consciousness as a notes taking secretary in a large meeting, who’s gone a bit bonkers and thinks that *he* is making meeting’s decisions, rather than just recording them. More seriously, I think Baars’ analogy with a public address system is a good one.
I can almost see you frowning :-): these analogies do not help you to reconcile consciousness and physics. But that’s fine by me. I follow Davidson and count myself a non-reductive physicalist. (See my earlier reply to our host, giving a terse outline of Davidson’s position.) It is perfectly possible that complex mental phenomena (e.g. perception) are not in general reducible to the language of physics, without anything non-physical going on — just like binary bit flipping can amount to a non-binary logic.
–> So color perception ends up being the key to open the lock of ripe fruit, or nectar as the case might be. It allows for a symbiotic relationship between species.
Yes, this is very cool! I started with Chapter 12 in Dennett’s book. Just to get a sense of it before going back to the beginning and reading the whole thing. But I remember the part about color you’re talking about.
Good deal! Dennett can be aggravating to read at times, but he’s really good at shaking us out of our Cartesian-like slumbers.
–> Hey A Eric, just jumping in here to note that we’re all students here, which very much includes me. You could probably teach me a lot about physics.
Thanks Mike S! That makes me feel better. 🙂
–> On analyzing the motions of particles and whether computers do it, a lot seems to depend on what we mean by “analyze”. My laptop uses face recognition to log me in. For some reason, moving my face during this process helps the process to completion. Would that count as analysis? If not, what’s missing?
In terms of what is meant by analysis, I’m not really sure about that. I’m sort of at the phase of learning about consciousness where I basically throw whatever comes to mind at it in hopes that something sticks in the process of me trying to understand it better. I don’t know why the notion of “analysis” in particular came to my mind at that point, but a slight memory of you maybe mentioning it somewhere else accompanied it. So maybe it’s from that??? Regardless, the idea of analysis just seemed to pop in my head from nowhere, unrelated to what I was thinking about just before that.
But I think I am basically trying to differentiate between just natural motions of particles, like water flowing down a stream or photons travelling through a medium and whatever process brings about consciousness. “Analyzing” the motions of the particles flowing through the brain seems like a plausible thing, I guess, to throw at the attempt to differentiate it.
Your question about face recognition made me wonder about a key in a normal door lock. Do the pins in the lock “analyze” the shape of the key in the same way that a computer analyzes a face? I’m not really confident one way or the other at this point.
Is my memory correct that you have discussed analysis in this manner before? If so, is there a particular blog entry you could point me towards?
Ok, now to work on my responses to Peter and Mike A!
I don’t know if I’ve done a post on what analysis entails exactly. With well over a thousand, it wouldn’t surprise me if there’s one somewhere, but if so it doesn’t come up in a quick search.
But your remarks about the key and lock remind me of something from Dennett’s book and his discussion of qualia. (Which our discussion spurred me to reread this week.) He notes that our trichromatic perception of colors probably co-evolved with the color in fruits. Fruits have an evolutionary need to spread their seeds. Primates have an evolutionary need for the calories in ripe fruit. Signaling to primates when the fruit is ready to be consumed is beneficial for both the fruit plant and the primate. So the color isn’t a side effect primates utilize. It’s a co-evolved trait. (It might have begun as a side effect, but selection probably intensified the relationship over thousands of generations.)
A very similar relationship exists between insects and flowers, with flowers needing the insects to carry their pollen for pollination to occur. The fact that this relationship exists alongside our relationship with fruit is why we can experience the range of colors in flowers, albeit in a much more limited fashion than the insects themselves, with their ability to see far more colors.
So color perception ends up being the key to open the lock of ripe fruit, or nectar as the case might be. It allows for a symbiotic relationship between species.
–> Regarding the relationship of mind/body to physics, I’m playing around at the moment with the idea that it may be useful to think more mathematically of mind as a mapping from the 4 dimensions of physics (spacetime) to a different set of dimensions
This is a cool idea! Would they be actual physical/temporal dimensions like those proposed in a theory such as string theory, or more like philosophical dimensions that give the problem of consciousness a physics-like foundation?
–> similar to unitrackers that are sometimes mentioned in comments here
I’ve read this word a couple of times, but up to this point it has been a word from a foreign language to me, and I’ve treated it like that because of overload from too much other information coming in. Is the definition evident from the word itself: are they things that track one specific entity?
–> What makes these dimensions better for analyzing mind than physics itself, is the high level of discontinuity and nonlinearity in the brain, which makes an approach to modelling based on partial differential equations rapidly lose predictive power.
This makes me think of chaos theory. The words “nonlinearity”, “differential equations” and “rapidly lose predictive power” sound very much like the very concepts I am reading about in terms of chaos theory.
In terms of the specific details in the particular dimensions you brought up, I think I’m too much of a novice in this topic to really get the gist of what you are talking about and how they could help relate back to physics. But am I correct in understanding that you think these other dimensions would act sort of as an “interpreter” between physics and consciousness?
Ok, now to respond to Mike A! This may take a while. 😊
–> How do you suggest to define them *strictly* in terms of GoL rules?
Hmm, I’ll give it a shot, but I’m not sure if I am missing the point. Going by the Wikipedia rules:
“The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead (or populated and unpopulated, respectively). Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:”
In here I felt as though I saw a definition of space-time for the world of GOL, with the mention of two dimensions, adjacency and ticks/steps in time. From there I felt like this allowed for motion to adjacent squares over time, and that populated adjacent squares over the grid would form shapes. But as you said, even though it happens that way, the notion of shapes comes about from me, outside of the GOL world. I guess what you might be saying is that for it to be a good analogy, a shape itself in the GOL world would have to be the one to experience the notion of shapes in the GOL world, not me outside of the GOL world? And to take it further, am I correct that you would also say that even in our own world, with the physics as it is here, there really is no notion of shapes of matter and motions of matter except from the notion that arises in our own minds?
–> But, and to me this is the killer question, *who* is doing the interpreting? The programmer? Nope — might be dead for all we know. Nobody does. Interpreting does not require an interperter (or, as Dennett says, meaning does not entail a meaner).
I’m having trouble knowing if I am understanding the implications of this. Let me try though. First off, even though it seems obvious, when you say “nobody does”, you also include the computer/computer program/etc. in the “nobody”? Are you saying that the interpretation happens just by (to grossly simplify the neurobiological process in our brains) the natural flow of electricity around our brains, in the same way that water naturally flows in a riverbed? It just so happens that in the riverbed configuration, no consciousness is a byproduct of that configuration, whereas, in a brain, consciousness is a byproduct of our brain’s configuration? (This, by the way, is the way I prefer to see the world. I’m not sure if this is known as the physicalist view or not, and whether it’s non-reductive or reductive or whatever the alternative is). When you say, “interpreting does not require an interpreter” are you implying that an interpreter is sort of a non-physical entity? This isn’t how I see the world, if so. I don’t see any non-physicality going on (at least I don’t want to see any). I’m just saying that, given the rules of the GOL, with the brain capabilities that I have, I can take what I see emerge in the GOL world and explain how what I see arose using the rules. Given the rules of physics (at least the ones that I am familiar with), with the brain capabilities that I have, I can’t explain what I see emerges in the form of consciousness. Maybe I’m just a couple of hundred years too young to be able to see it yet. Haha.
As an aside, can you judge another analogy I have that I got from thinking about free-will and determinism. I didn’t read the Freedom Evolves that you mentioned, but I read some of Just Deserts and maybe there is some overlap between the two. I came up with the analogy while reading it. Here it is: A rock’s path as it rolls down a hill is fully determined. Given a knowledge of all the variables involved in that scenario, one could theoretically calculate the exact path of the rock. A rock just rolls down the hill as the laws of physics determine it should, with no control over how it rolls down. We also “fall down the hill” as we live our lives and our paths are also fully determined and hence fully calculable, and we “roll down the hill” as the laws of physics determine we should. But unlike a rock, we have the ability to alter the paths in front of us before we arrive there, so that as we are “rolling down the hill of life”, we can alter our trajectories towards different paths than the paths we would have taken had we been as unconscious as a real rock our entire lives. That ability to alter our paths is what gives us our free will, even though we can only alter our trajectories to the extent that we are able to alter the characteristics of the path ahead of us.
Relating this analogy back to where you said, “interpreting doesn’t require an interpreter”, is it possible that the phrase “path altering doesn’t require a path alterer” has the same meaning?
I like the analogy of the secretary taking notes and thinking that he is making the decisions. Can you explain the physical process of how he takes notes? Hahaha, just kidding. I googled Baars’ public address analogy, but only found results like the Theater of Consciousness. Is it similar in nature to the secretary one you gave, or could you explain it a bit?
–> I can almost see you frowning :-): these analogies do not help you to reconcile consciousness and physics.
Haha, that’s why I asked if you could explain how the secretary takes notes. 😊
–> It is perfectly possible that complex mental phenomena (e.g. perception) are not in general reducible to the language of physics, without anything non-physical going on
I’m still not sure I understand how this is possible. I feel like I understand how the shapes in GOL are reducible back to the rules, and I feel like I understand how the non-binary logic in the software reduces back to the binary aspect of the hardware. Just in terms of how I “want” the world to work based off some preference that I can not explain, I want everything to be reducible back to basic empirical laws. Maybe the world doesn’t work this way, and if so I’ll definitely accept that if given a convincing enough argument.
To clarify… In those computational examples I am not giving you any kind of models or analogues of consciousness. Rather, I am trying to provide examples which might challenge your intuition that any feature of a physical system’s behaviour must be completely reducible to the basic rules of that physical system. So the answer to your question of whether I would demand that a GoL shape, rather than you, should experience motion in the GoL world is no. I am merely pointing out that the very concept of motion (or of shape) has no definition in the base rules of GoL, which only talk about individual cells switching on or off.
Similarly, when I point out that pairs of bits are interpreted as atomic values in a ternary logic program, I am not positing any non-physical interpreter — I *am* a physicalist, albeit a non-reductive one! Can the program itself be doing so? But the program is just a collection of binary bits, some static, some flipping according to the binary rules of the computer hardware. How does that binary flipping amount to ternary interpreting, producing ternary behaviour? Technically, there is no problem. Conceptually, it is a puzzler. The answer must be that it is the specific arrangement of those flippings that does it. But there are “infinitely” many possible implementations differing in hardware and/or software, making it impossible to capture on that level the simple concept of a ternary logic program.
Re consciousness as a “public address system”. That’s another take on what’s mostly known as “global workspace” theories of mind. I think it has the advantage of not implying any particular “space” within the brain where consciousness “happens”. I favour the Halligan/Oakley version (https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01924/full) in which consciousness has no executive powers at all, but merely reflects salient aspects of non-conscious mental activity — like that notes-taking secretary in a meeting I mentioned last time! 🙂 To quote H&O:
“The hard problem (Chalmers, 1996) involves two questions: First: “How and why do neurophysiological activities produce the “experience of consciousness”?”. Our account addresses this by concluding that personal awareness is a passive, emergent property of the non-conscious processes that generate the contents of the personal narrative and is not causally or functionally responsible for those psychological contents. The converse question “How can the non-physical experiences of “conscious awareness” control physical processes in the brain?” is consequently no longer relevant. We propose that there are no top-down executive controls exerted by either personal awareness or the personal narrative as both are psychological end-points of non-conscious processes.”
The upshot being that to this line of thinking, philosophers have been working on a completely wrong model of consciousness as some sort of powerful sun-king, ruling over, served by (and sometimes betrayed by) non-conscious aspect of the mind. As the old saying goes: “You can lead a horse to water, but if you can’t make it drink, you might be working with the wrong end of the horse!” 🙂
I think the potential recognition of movement or shape in GoL arises once you look at the patterns of GoL over a finite but distributed set of cells and a finite set of timesteps. And because GoL has memory, and can be configured so as to compute, it can potentially represent and respond to these patterns itself; it does not need an external observer.
Correspondingly consciousness in the brain only arises once you look at the brain operating as a whole, and over a time of a few hundred milliseconds such that potentially all parts of the brain can partake in the outcome.
Sure. My point is that none of that can be stated strictly in the language of GoL “physics” (a.k.a. Conway’s rules). If anybody disagrees, they are welcome to try. 🙂
Mike A: Yes agree that the higher level view cannot be expressed directly in the ‘physics’ of GoL rules. However my take on it would be that once some sort of computing engine has been built inside GoL and works consistently, you can dispense with GoL low level rules and instead work with the concepts known to the computing engine you built in GoL, with just as much rigour as you would have had working in GoL low level rules. Further, and the key point, is that in this constrained situation, the higher level view has more predictive power (more accurately predicts outcome with less information) than GoL rules. Same as any computing engine, brain or machine with respect to everyday physics – we build the machine, or it emerges in living things, precisely in order to behave consistently enough that a high level description is adequate to specify what is going on to a certain level of fidelity. That gives it more ability to control real world outcomes by making sufficiently reliable predictions.
Patermartin: We are in violent agreement. 🙂 I am just trying to prod Astronomy Eric into realising that it is possible for something purely physical to be non-reducible to the language of physics.
When you say that once computation is bootstrapped in GoL one can dispense with the GoL low level rules, that is what physicists mean by an effective theory (fluid dynamics being a well known example). Special sciences generally operate in this manner, by providing consistent causal explanations at their own level, instead of referring to the underlying physics. All such theories are tied to physics, of course, in that any specific case must have a specific physical explanation. But that does not mean that their general concepts are statable in the language of physics. And since causal explanations necessarily feature general concepts, that’s where weak emergence comes in.
Thanks for taking the time to try to help me understand this Mike A! This is definitely challenging for me, but I’m willing to keep trying to understand your point. I think the first part that I’m having trouble reconciling is what you first said to me with what you are saying now. I don’t know if it’s something subtle I’m missing or something obvious, but maybe different words might clear it up.
In your first comment to me you said:
“This means that GoL’s spaceships (patterns which replicate themselves with a displacement) cannot be predicted from the GoL rules. Once observed, their behaviour can be, of course, easily reduced to a series of steps fully conforming to the rules, but that’s a one-way trip….Likewise, the suggestion is that minds have a similar relationship to matter. While there is nothing in the rules of physics suggestive of consciousness (just like there is nothing in the rules of GoL suggestive of spaceships) it is still perfectly feasible for consciousness to arise from pure physics (and to be fully reconcilable with physics, once it does), without being in any way *predictable* from pure physics.”
I originally felt like I understood the one-way trip aspect to this. I even assigned a direction to it to keep it clear, where predicting patterns from rules is in the forward direction and reducing the patterns to the rules is in the backward direction. But maybe the following is where I’m missing something: is there a difference between “easily reduced to a series of steps fully conforming to the rules” and “reduced to the rules”?
And then most recently you stated:
“Rather, I am trying to provide examples which might challenge your intuition that any feature of a physical system’s behaviour must be completely reducible to the basic rules of that physical system. ”
First off, I picture what you are saying here as the “backward direction” that I mentioned just before this. Can you explain the difference between “fully reconcilable” and “completely reducible”? Maybe if I can wrap my mind around this, the rest will start to fall into place.
A Eric: The difference I am trying to point out is simply this… Looking at a GoL glider, it is obvious how its behaviour follows from (a.k.a. is completely reducible to, a.k.a. is fully reconcilable with) GoL rules. However, because GoL is Turing complete, the more general concept of a spaceship cannot be fully characterised in GoL terms. The difference is that between a specific instance (token, in philosophy-speak) and a general concept (type, in philosophy-speak) of which the specific instance is an example. GoL rules are strictly local, whereas the concept of a spaceship requires a non-local view of the proceedings (as Patermartin pointed out in his contribution).
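To make the token side concrete, here is a minimal sketch in Python (my own toy code, nothing canonical) that encodes Conway's rules directly and checks that the standard glider comes back as a copy of itself shifted by (1, 1) after its four-step period:

```python
from collections import Counter

def step(cells):
    """One Game of Life generation; cells is a set of live (row, col) pairs."""
    # Count live neighbours of every cell adjacent to at least one live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The standard glider token
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

g = glider
for _ in range(4):
    g = step(g)

# After one period (4 generations) the glider is itself, shifted by (1, 1)
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

The assert passes: this specific instance reduces cleanly, step by step, to nothing but the local rules.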
The difference certainly has the flavour of unidirectionality, but that’s a bit misleading. Is the difficulty in proceeding from rules of GoL to the concept of spaceships (upward direction) or from the concept of spaceships to specific instantiations within rules of GoL? Kind of both, depending on your point of view.
The basic issue lies in the token/type (specific/general) distinction. Given a particular GoL spaceship (e.g. a glider), one can easily move in either direction — up or down. Downstairs, GoL is flipping its cells on and off in harmony with the upstairs pattern of the spaceship moving through GoL space. But when it comes to the class of GoL spaceships, the harmony is broken. Here’s a complex GoL pattern — is it a spaceship? The only way to answer that is to watch it… for how long? Turing completeness tells us that there is no answer to that.
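To put the “watch it… for how long?” point in code: the best a checker can do is pick an arbitrary bound and give up past it. Here is a hedged sketch (Python, with `watch` and its return format entirely my own invention) of such a bounded watcher:

```python
from collections import Counter

def step(cells):
    """One Game of Life generation; cells is a set of live (row, col) pairs."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

def watch(cells, max_steps=100):
    """Certify a token as a spaceship if it recurs as a pure translation of
    itself within max_steps generations; otherwise just shrug. Because GoL
    is Turing complete, no finite bound settles the question for every
    pattern."""
    start = frozenset(cells)
    g = set(start)
    for t in range(1, max_steps + 1):
        g = step(g)
        if start and len(g) == len(start):
            # Translation preserves lexicographic order, so compare minima
            dx = min(g)[0] - min(start)[0]
            dy = min(g)[1] - min(start)[1]
            if {(x + dx, y + dy) for (x, y) in start} == g:
                kind = "spaceship" if (dx, dy) != (0, 0) else "oscillator"
                return (kind, t, (dx, dy))
    return ("unknown", None, None)

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
```

Here `watch(glider)` comes back as `("spaceship", 4, (1, 1))`, but for an arbitrary pattern no choice of `max_steps` is guaranteed to settle the question, which is exactly the type-level breakdown.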
So GoL enthusiasts operate on the conceptual level above that of basic GoL rules and take for granted entities such as spaceships, glider guns, puffer trains, gardens of Eden, memory loops, glider construction algorithms etc, etc, without bothering with re-phrasing all of that into GoL rules — because doing so is actually not feasible.
The crucial point here is that no language, and indeed, no conceptualisation can rely entirely on specifics (tokens). Try imagining a language in which every single object has a separate name, with no concepts to agglomerate them. Such a language would be singularly useless. (Try formulating any of Newton’s laws without the concept of an object! :-))
However, physicalism only demands that there is obvious correspondence between tokens (specific instances) of underlying rules and a higher level discourse. It says nothing about any alignment of concepts (agglomerations of instances) on the two respective levels. If the concepts cannot be aligned (except as a vast list of special cases), there is no possibility of translating between the levels, even though nothing “non-physical” is going on.
I hope this is clearer. All this stuff is deeply counter-intuitive and even just talking about it, it is easy to slide quite unintentionally into apparently dualist talk. We are thoroughly pre-disposed towards dualist views.
Hi Mike A, Thanks again for your response. I’ve given myself a few days to stew over this hoping something would click, but no luck so far. I have some clarification questions, but at this point I fully understand if you have had your fill. Don’t feel obligated to keep at it with me. I may just never get it.
My first clarification question though is: is a thorough understanding of Turing Completeness essential to understanding the general point you are trying to convey to me? I looked up Turing Completeness on Wikipedia and while I get the general gist of what it is, I don’t feel like I really understand what it is (and that I would need to spend a large amount of time getting to that point that I’m not sure I can devote at this time).
–> “The basic issue lies in the token/type (specific/general) distinction. Given a particular GoL spaceship (e.g. a glider), one can easily move in either direction — up or down. Downstairs, GoL is flipping its cells on and off in harmony with the upstairs pattern of the spaceship moving through GoL space. But when it comes to the class of GoL spaceships, the harmony is broken. Here’s a complex GoL pattern — is it a spaceship? The only way to answer that is to watch it… for how long? Turing completeness tells us that there is no answer to that.”
I think I understand this, but then shouldn’t we be able to devote time to studying tokens of consciousness and thus eventually trace each token example of consciousness downstairs back to the laws of physics like we can for glider tokens downstairs to the rules of GoL?
–> “However, physicalism only demands that there is obvious correspondence between tokens (specific instances) of underlying rules and a higher level discourse. It says nothing about any alignment of concepts (agglomerations of instances) on the two respective levels. If the concepts cannot be aligned (except as a vast list of special cases), there is no possibility of translating between the levels, even though nothing “non-physical” is going on.”
This isn’t clicking with me. Do you remember your Eureka moment when this concept finally clicked? Was there a specific thing that set it off? What’s the relationship between what you said about physicalism in the quote I copied/pasted and how it opposes dualistic thinking? Just to be clear, when you speak of dualism, are you implying mind/body dualism? Or something more general?
OK, thanks. I wasn’t aware of clamping on that level. But if that’s what’s posited, then why not just replace neurons with devices which fire at will? I guess only because it would demolish this particular intuition pump.
Any physicalist position commits one to accepting that if the complete sequence is replayed somehow down to the base physical level, corresponding consciousness will result. The real issue clearly is what features of that physical playback can be removed without affecting the consciousness outcome. The upshot of this thought experiment, as far as I can see, is that causal connections cannot be removed. If it were possible to remove them, then the whole shebang could be scattered across space and time or even across logical interpretations of patterns, as in your rock consciousness post. (Dennett presents a similar problem in his playful “Where Am I?” in _Brainstorms_)
I suppose one possible position is to take the bull by its horns and say that yes, all those scattered and/or overlapping consciousnesses can and do exist, (which is what Egan’s “dust hypothesis” would imply) but that seems a bit extreme. So causal connections it is.
It leaves me with an itch to be scratched, though. The notion of “causal oomph” in causal connections is a philosophical snake-pit and I doubt whether we can do better than offer causal explanations. That may or may not be sufficient, if one considers minds, as I do (and I seem to recall from some other posts of yours, you do too) as self-interpreting systems. Which kind-of makes me feel that there is no matter of fact even about there being no matter of fact of their “real” existence, if you see what I mean. 🙂
Steganography is an analogy I tend to resort to. The meaning (or meanings) of a text depend on rules for reading the text (e.g. by selecting characters from it according to some list of rules). A text can contain steganographic instructions for its own steganographic reading. In a certain sense that’s what minds do.
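A toy illustration of that dependence on a reading rule (a hypothetical example of my own, not from any steganography literature): the same cover text carries a different message under each rule for selecting characters from it.

```python
def read_plain(text):
    # Reading rule 1: the text means what it literally says
    return text

def read_acrostic(text):
    # Reading rule 2: take the first character of each word
    return "".join(word[0] for word in text.split())

cover = "Hurry, everyone leaves promptly."
# Under rule 1 the cover is a mundane sentence;
# under rule 2, read_acrostic(cover) yields "Help".
```

Which message the text “contains” depends entirely on which reading rule is applied to it, and nothing stops a text from also spelling out, under some rule, the very rule for reading it.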
Looks like we hopped threads. No worries. But for anyone curious, the beginnings of this conversation are under the next post.
In the paper, the authors compare this thought experiment with the one where one neuron at a time is replaced with an artificial neuron (similar to Chalmers’). That’s substituting the substrate while keeping the causality in place. This thought experiment is keeping the substrate in place while altering the causality. So definitely, we could do this by uploading the mind into a computer, but now we’re altering both the substrate and causal structure, and that does stress our intuitions further, but perhaps beyond the bounds of what they’re trying to accomplish with this particular thought experiment.
I agree about there ultimately being no fact of the matter. In the end, it comes down to how we conceive of a conscious experience. This is an easier conclusion if we were talking about a recording of a game. Is the playback still the game? Or just a playback of the original game? Depends on what we mean by “game”. The only reason we’re tempted to think the conscious experience case is different is because of the instinctive dualism we’re all born with. The beauty of this thought experiment is it forces us to confront that intuition.
Looks like you spurred an interesting discussion with Ed and James. Definitely any functionalist explanation is going to be counter-intuitive. But I see you commented on the zombies post, so I’ll take a look and maybe respond in more detail there.
But in general, I think the burden of the functionalist is to find a plausible (ideally testable) functional explanation for any phenomenal properties. The burden of the non-functionalist is to identify properties for which that isn’t possible. The problem is what to do with ineffable properties.
Re “Overall, the point is that even if we examined a system as an omniscient observer, there would be no fact of the matter on whether consciousness as a property exists within that system. Therefore, this property doesn’t exist, and consciousness, in this sense, doesn’t exist.”
I understand that “observer” likely limits our ability to question the beings we observe, but aren’t consciousness tests going to happen naturally? Beings observe their image in a pool’s reflective surface, observe a spot of mud on their face, and then wipe it off. Isn’t this a Mirror Test? Are there not similar tests that we could observe as “natural experiments” occurring in this place?
There are. (Although exactly what the mirror test shows is a matter of debate.) But I’m with you on seeing functional capabilities as evidence for consciousness. But then I’m a stone cold functionalist who doesn’t buy the concept of philosophical zombies. If someone else asserts that none of that functionality proves the animal or system in question is conscious, by what standard do we say they’re wrong?
These interesting comments from the new contributors are a welcome invasion…..
In agreement with Mike, I do not buy consciousness as fundamental either because consciousness is synonymous with mind and mind is a relatively recent physical system that is emergent. Albeit, I will be bold enough to assert that this physical system we know and recognize as mind is a quantum system that emerges from the classical brain.
As far as Mike’s second rebuttal: “In terms of empirical support, the only thing we can get empirical support for is behavior, functionality, notably self report. Any other evidence we want to cite ultimately has to have a chain back to self report. Many will say brain scans, but brain scans are only useful once that pattern of activity has already been linked to self report and consciousness inferred.”
This is a straw-man argument because ultimately, “every-thing” that we experience be it gravity, etc. or our own conscious experience reduces back to a self report; so in the end, we are forced to come up with another stratagem to establish a “proof” of any kind; and that stratagem is sound reasoning, reasoning that conforms to the rules of logical consistency, non-contradiction and universality.
“In my series, I laid out the case for “biological forces” which describe how living organisms are motivated to move towards things that sustain life, and away from things that destroy life. So, you could come up with measurements for life-preserving behaviour and note how well or how poorly different structures of life are able to sense and respond to these forces. I know this sounds far out, but something exists if it causes changes in the universe. Once life arose, a new quality entered the universe — things that enhance life and things that destroy life. Sensing this quality causes action. Therefore, this quality has a kind of force. Note that even if the organism is unconscious to these forces, they can still tear life apart or help it flourish. I haven’t teased out all the differences between biological forces and physical forces, but I think there’s something there.”
I liked your brief assessment that I highlighted here; and your use of the word “sense” implicitly implies or otherwise asserts that sentience may be a fundamental property of matter, and that fundamental property would therefore be universal for both organic and inorganic life. Surely, the electromagnetic charges of the electron and proton are also “sensed”, because if they were not “sensed” by those particles there would be no change or movement, yes?
“Sensed”, as I think you are implying? Yes. Felt, as a feeling of subjectivity? No. That requires the right structure, and inorganic matter doesn’t appear to have that. This is the difference between panpsychism and what I am calling pandynamism. In other words, forces (dynami) are felt everywhere, but the subjectivity of minds (psyche) only arises in subjects.
Ah, sorry, I see now you said inorganic life (not matter). Yes, in my theory, artificial life and extraterrestrial life would seem very likely to me to have subjectivity, but their flavour would be very different from our own. Just look what drug use does to our consciousness and imagine what a different substrate might cause. Perhaps that difference in feeling would make a difference to the operation, perhaps not. Perhaps silicon “pleasure” would just draw it to a flame that destroys it. Perhaps it’s unsuited to sensing biological forces. That’s wild speculation, but just something to keep in mind. (As it were.) We’ve only seen carbon-based life arising so far, so maybe there’s a reason for that. Maybe not. Maybe silicon or something else would be even better once it got going with the right structure. We’re obviously still waiting for evidence on that.
Good seeing you. Totally agreed on new contributors.
I’m not sure if I follow your concern on my point about empirical support. It seems like our own personal experience, at least for ourselves, doesn’t necessarily require self report. But I certainly agree that any objective recording of that experience will involve it. And I definitely agree that reasoning is always required.
Mike’s comment is a good example of subjectivity as we’ve come to know it:
“On a memory of red, I guess it depends on how we define “perception”, but it would definitely be a reconstruction of that perception. Of course, no two perceptions are ever identical, so the reconstruction might be on the commonalities, and is itself never complete.”
Subjectivity is the derivative of a system we know as mind (psyche). This is due to the fact that mind is an indirect or representational experience; whereas objectivity is the derivative of a direct experience. All physical systems such as organic and/or inorganic systems with the exclusion of mind have a direct experience that is grounded in sentience or sensing, what you refer to as “pandynamism”.
In contrast, the system of mind is a representational experience. A representational experience is one step removed from the objective experience of sensing without the need to rationalize (psyche), a process itself that gives rise to subjectivity. I also agree with your intended use of the term pandynamism in your model because in essence it corresponds to my theory of pansentientism. I think that your “dynamic” can be further reduced to sentience, where what feels good versus what feels bad to any given system is what drives (the dynamic) the evolutionary process towards complexity.
“In this view, the concept of consciousness is like the concept of life in biology. ”
But consciousness is not a “concept”. It is at the basis of every first-person subjective experience. Sentience is pre-conceptual and pre-semantic. Semantics follows experience. When we look at a chair we first must have the sensorial experience of the chair and only then arises a more or less coherent semantic object we call ‘chair’. Not the other way around.
It can certainly seem like our own consciousness is pre-conceptual. But what about anyone else’s consciousness? Your consciousness seems like a concept to me. And I think mine should seem like one to you.
But I actually don’t think anything in our consciousness is truly pre-conceptual. It is true that some concepts are pre-conscious, and surface in our consciousness at the boundary of what can be introspected, so that subjectively they are irreducible. But that’s a limitation of introspective access, not anything ontological. Our nervous system does a lot of pre-conscious work to produce those concepts, which makes them objectively reducible.
Hope that makes sense. It’s not an easy idea to get across.
I think I agree with you Marco, though I’m not sure this hinges upon the existence of “concept” or not. I consider myself in a conceptual sense for example. Furthermore might a dog grasp what we’re talking about right now? Not very well without semantic conceptualization I think. And of course dogs do conceptualize things in some sense, like “food” and “pain” and such, but surely not something this specific without terms from which to represent the subtle ideas that you and I grasp.
If life does not offer a good analogy for consciousness however, then why? I think this gets to what you’re implying. You are not “life” any more than you are “non-life”. Neither should essentially be “you”. Those terms seem to represent relatively arbitrary elements of continuous physical processes that we find useful to distinguish by means of rough biology. The subjective dynamics associated with “you” however should indeed be something quite special and discrete, or a value element of brain function that addresses associated sentient existence. When your brain stops creating that, as in the case of perfect anesthesia, then “you” should cease to exist. It’s surely the same for a dog as well — not body but mind produced by body. Therefore what’s usefully meant by “life” to us might inherently be far more fuzzy in a conceptual sense than what’s usefully meant by “consciousness” to us.
For anyone interested in a way to effectively grasp an innocent and I think quite useful conception of this term, let me once again suggest the work of Eric Schwitzgebel: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/DefiningConsciousness-160712.htm
I’m late to the party, as always.
Anthis’s test fails in that it would say that the property of being witch-crafted exists. I can categorize absolutely all entities as to whether they have this property: they don’t. Therefore, it exists?
We could modify the definition to a two-part one like:
(a) (as above), and
(b) For at least one entity, we can categorize it as having the property, at least to some extent.
But if we have part (b), why do we need part (a)?
On the overall argument, Mike, I think you nailed it. Consciousness is multi-dimensional and thus the word “consciousness” is ambiguous insofar as people will emphasize different parts. As for example: petermartin678 demands that a being must know that it knows something, whereas I am happy to include all perception whether currently attended to or reflected upon or not.
Thanks Paul. Good catch. He probably should have covered the case of nothing having that property.
On all perception, here’s a question for you, and a case I didn’t adequately specify. Does it matter if the perception has no chance of ever being attended to? I’m thinking of the experiments where an image is flashed to a subject for well less than 50 ms. This is as opposed to something flashed for longer, that could be attended to, but isn’t due to the demands of some other task. Just curious.
I wouldn’t bother to come up with an answer, because nothing depends on it. Unless the flashed image somehow still causes psychological trauma, it doesn’t matter. And if it does cause trauma, it’s the trauma experiences that matter, not the 50 ms. To me, and putting it in your terms, it’s the relation to affect that makes consciousness interesting.
From what I recall, there can be emotional effects from subliminal stimuli (which is what such brief displays amount to), but they’re pretty subtle. Although if repeated enough, I’d imagine someone could get agitated (or happy, sexually aroused, etc) without knowing why.
Mike S: Not sure whether you are aware that, at least at some past time, Chalmers appeared to agree that consciousness was “merely” a matter of semantics. It is no longer on Chalmers’ website, but Dennett quotes from it in “Sweet Dreams” (p. 48): Chalmers’ reply to Searle to the effect that yes, p-zombies must have the same beliefs as we do (re being conscious and experiencing qualia) in order to be behaviourally indistinguishable from us, but the difference is that their beliefs are false, while ours are true, validated as such by our “direct experience” (whatever that might mean). That really sounds to me like a semantic distinction, which makes absolutely no difference in actual reality.
I note, BTW, that you refer to Dennett’s zimboes. Have you met Brown’s zoombies? They are beings identical to humans in all *non*-physical aspects, but lacking phenomenal consciousness. Such beings are imaginable, therefore (a la p-zombies) they are possible. Therefore physical aspects of human beings are what generates phenomenal consciousness. It’s a lovely mirror image of the p-zombie argument, just as valid or as nonsensical — take your pick. It really should have been enough to kill p-zombies stone dead, but zombies being zombies, they march on.
I’ve wondered the same thing about Chalmers’ stance, that it might amount to a sort of platonism about non-physical properties, and his differences with Dennett a verbal dispute. However, Chalmers’ attitude toward platonic objects is a stance he calls ontological anti-realism, essentially that there’s no fact of the matter on whether they exist. But his description of consciousness always seems to double down on the non-physical ontology. That said, his version of that is very modest, almost more of an extra metaphysical glaze over physical accounts. It remains compatible with things like conscious AI.
I think Ed was the one who actually referenced Dennett’s zimboes. I did read Brown on zoombies, or actually skimmed it, since I actually have never found zombies a tempting concept. The classic version only makes sense under dualism, and even then only for an epiphenomenal version of consciousness. (I did my own post on zombies several years ago. https://selfawarepatterns.com/2016/10/03/the-problems-with-philosophical-zombies/ ) As you noted, they’ve been refuted many times, yet people keep invoking them, indicating how badly those people want the purported implications to be true.