Can the possibility of AI consciousness be ruled out?
Anil Seth has a new preprint on the question of AI consciousness. Seth is skeptical about AI consciousness, although he admits that he can’t rule it out completely. He spends some time attacking computational functionalism, the view that mental states are functional in nature, that they are defined more by what they do, their causal relations, than by any particular substance, and that this functionality is computational.
Arguments about whether the brain computes are endless, and, I think, mostly involve people arguing past each other with different meanings of “compute” (at least among physicalists). Those who dislike the idea of the brain computing tend to define it narrowly. People like me tend to define it more broadly. What we can agree on, I hope, is that neurons selectively propagate signals based on their current state and incoming signals. The thing is, despite major architectural differences, so do the components in the device you’re using to read this.
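To make the parallel concrete, here’s a toy sketch. It’s purely illustrative, with made-up functions and numbers, but it shows the common pattern: an element whose output depends on its current state and its incoming signals, whether it’s a logic gate or a drastically simplified neuron.

```python
# Toy illustration only: both elements map (current state, inputs) -> an output signal.

def nand_gate(a: bool, b: bool) -> bool:
    # A logic gate propagates a signal purely as a function of its inputs.
    return not (a and b)

def toy_neuron(potential: float, inputs: list[float], threshold: float = 1.0):
    # A drastically simplified neuron: incoming signals nudge its internal state,
    # and it only propagates (fires) when the accumulated state crosses a threshold.
    potential = potential + sum(inputs)
    fired = potential >= threshold
    new_potential = 0.0 if fired else potential * 0.9  # reset after firing, else decay
    return fired, new_potential

print(nand_gate(True, True))        # False
print(toy_neuron(0.4, [0.3, 0.5]))  # (True, 0.0)
```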
A better term than “computation” might be “logical processing”. It seems like the reason we call engineered logical processors “computers” is that the first thing we used them for was automating calculations, which until that point could only be done by skilled humans with the job title of “computer”.
Whenever I suggest this, someone usually points out how illogical some human behavior is, but that is mixing up description levels. Often computer systems produce results that seem illogical to us, with no one doubting that they’re still doing logical processing, just with bad design, bad data, malfunction, etc.
Anyway, Seth discusses that there may be alternative non-computational functionalist paradigms, like dynamical systems. I find that interesting and wish someone would explore these alternatives further. Unfortunately the more common tactic is to wave at them as conceptual excuses for dismissing computation. But systems can be viewed under multiple paradigms. For instance, it seems like a system can be both dynamical and computational. And a fleshed out paradigm that helps us understand biological neural nets may also help with the current difficulty in understanding artificial ones.
For his positive case, Seth relies on John Searle’s biological naturalism concept, the idea that only biological systems can be conscious. He makes a distinction between that and biopsychism, the idea that all life is conscious, which he isn’t arguing for.
Earlier in the paper Seth warns about the dangers of anthropocentrism, seeing something as only possible in humans, such as insisting that only humans feel pain, and anthropomorphism, projecting human capabilities on other systems, such as thinking that birds can read. Seth is concerned about these fallacies in people’s attitudes toward AI.
Biological naturalism has long struck me as basically just biocentrism: anthropocentrism extended to all life. Unless we do it by definitional fiat, it’s hard for me to see this as a defensible position.
But Seth really loses me in the first section, when he asserts a strong distinction between intelligence and consciousness. Intelligence he defines as, “doing the right thing at the right time,” which he clarifies is not meant in any ethical sense. For consciousness he goes with Nagel’s “something it is like” definition, a hopelessly ambiguous phrase I’ve criticized before. The intelligence / consciousness distinction seems to be asserted but not defended, except to say that it’s a common confusion to conflate them.
Myself, as a functionalist, I see consciousness as a type of intelligence. Not all intelligence is conscious, but any incontrovertibly conscious system has intelligence. And doing the right thing at the right time in novel situations typically involves first needing to predict what the right thing is, which fits well with Seth’s predictive coding view of consciousness. Which is to say, I actually see it as confusion to insist on a strong distinction.
However, I can see why Seth wants to argue for it, because without it his thesis changes from one about something that no engineered system has ever achieved, to an argument that despite all the accomplishments in AI, there will eventually be a wall or barrier encountered leaving some specific capabilities out of reach. Although he does leave open the possibility that if a system is sufficiently life-like, it might be able to achieve it.
Overall I can’t say I find Seth’s arguments compelling, but I do agree with him on a few things. He points out that AI isn’t going to suddenly “wake up” if we just provide enough computing power or sophistication. I agree. Although in my case, I think it’s because if we want a system that triggers our intuition of a fellow consciousness, we’ll have to go out of our way to design it. Worrying that we’ll accidentally get it seems similar to worrying that we’d accidentally get an accounting system, a smartphone, or the game Tetris.
Of course, a lot depends on what we mean by “conscious”. If we mean taking in and using information about the body, environment, and the relations between them, then we’ll likely have that eventually, with the first glimmers existing in self driving cars and other autonomous robots. But those capabilities by themselves don’t seem to have the ethical repercussions Seth and many others are concerned about.
For that, we have to get into sentience. Seth actually avoids weighing in on sentience because of the definitional issues, which seems strange since consciousness strikes me as even worse in that regard, and sentience seems like the one place where a biocentric view may be more defensible. For a definition, if we go with the ability to have affects, valenced feelings, it seems like we’re getting close enough for most purposes.
But what exactly is a feeling in this sense? I think it’s a draft evaluation of a situation, a learned or innate automatic reaction that prepares the system for certain actions or inaction: a priming which might involve ramping internal mechanisms up to higher levels of alertness and readiness (consuming energy), or dampening them down, all of which reverberates back as interoceptive signals. Which is why the same word in English is typically used for both body perceptions and the visceral experience of an emotion.
Animals experience the feeling to learn from it, and because they may need to override it based on knowledge and foresight (prediction), although doing so takes additional energy. But while they can override or ignore it, they can’t shut it off. The automatic reaction continues to consume energy, and require more energy to override or ignore, until habituation over time causes it to fade, or (sometimes) if it’s released in some display of frustration.
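For what it’s worth, here’s a toy model of the dynamic I’m describing. Everything in it, the class, the numbers, the habituation rule, is invented purely for illustration; it just captures an automatic reaction that keeps consuming energy until it habituates, and that costs extra energy to override.

```python
# Toy model, invented for illustration: an automatic reaction that consumes energy
# each time step until it habituates, and costs extra energy to override or ignore.

class AutomaticReaction:
    def __init__(self, intensity: float):
        self.intensity = intensity            # strength of the learned/innate reaction

    def step(self, override: bool) -> float:
        """Return the energy consumed this time step."""
        if self.intensity < 0.01:
            return 0.0                        # habituated: the reaction has faded
        energy = self.intensity               # ramping up readiness costs energy
        if override:
            energy += 0.5 * self.intensity    # overriding or ignoring it costs more
        self.intensity *= 0.8                 # habituation: the reaction slowly fades
        return energy

fear = AutomaticReaction(intensity=1.0)
total = sum(fear.step(override=True) for _ in range(20))
print(round(total, 2))  # cumulative energy cost while the feeling fades (~7.41)
```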
What I just described is a pretty specific architecture. It exists in animals due to evolutionary history, in which predator / prey relations set up an adversarial dynamic: whoever could move faster usually won, and was more likely to pass on their genes.
Are there reasons for building machines this way? For most systems, I wouldn’t think so. Although there may be reasons for giving agents involved in an adversarial role something along those lines. Security and military systems come to mind. But it would likely be different in two major ways.
First, the animal version is generally oriented toward its survival or genetic progeny. The AI version would be oriented toward whatever its designed goals are. Does that matter for ethical concerns? If a minesweeper is frustrated because it can’t blow itself up discovering a landmine, does that count as suffering?
The second difference is that there’s no reason for a machine to continue undergoing the automatic reaction once it’s analyzed the situation and concluded it’s not productive. So the minesweeper can just turn off its automatic reactions during peaceful times. We don’t want it being able to turn off its designed goals, but turning off the automatic energy consuming aspects removes a major piece of what would likely trigger our intuition of a feeling being.
All of which is to say that I think Seth’s suspicion that machines can’t be conscious is wrong. But he’s right that we would have to make that machine somewhat life-like. (Although saying we’d have to do that down to the molecular level strikes me as backdoor vitalism.) And while I don’t doubt there will be experiments with life-like systems, it won’t be productive for any situation where we want a tool rather than a slave. Even in the cases of human facing companion type robots, there would still be the differences I noted above.
Unless of course I’m missing something. Are there reasons to see consciousness distinct from intelligence that I’m missing? Or to be more skeptical of possible machine capabilities?
[and … we’re back … 🙂 so … much … to say]
On computationalism: I agree that in this context (philosophy) computation is one of those fraught words like agent, goal, and consciousness. As for your notion of propagating signals, I would ask how you define/identify a signal.
For myself (having thought about this A LOT) I say consciousness is information processing where information means specifically mutual information (i.e. correlation). Interestingly, every physical interaction processes information in this sense, so you need more for consciousness, unless you want to be a panpsychist (bam!). The thing you need is purpose. Any given signal (I prefer Peirce’s term “sign vehicle”) can be interpreted multiple ways, so how it is interpreted is determined by what you do with it, what action you perform in response. This response is selected by the organizing system to achieve a goal, thus giving a purpose to the response.
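(For concreteness, by mutual information I just mean the standard Shannon quantity. A few lines of code can compute it for a discrete joint distribution; the distribution below is made up purely for illustration.)

```python
from math import log2

# Mutual information I(A;B) for a discrete joint distribution p(a, b).
# The joint distribution here is invented purely for illustration.
def mutual_information(p_joint: dict[tuple[str, str], float]) -> float:
    p_a, p_b = {}, {}
    for (a, b), p in p_joint.items():
        p_a[a] = p_a.get(a, 0.0) + p
        p_b[b] = p_b.get(b, 0.0) + p
    return sum(p * log2(p / (p_a[a] * p_b[b]))
               for (a, b), p in p_joint.items() if p > 0)

# Measuring A tells you something about B (and vice versa) exactly when this is > 0.
p = {("rain", "wet"): 0.4, ("rain", "dry"): 0.1,
     ("sun", "wet"): 0.1, ("sun", "dry"): 0.4}
print(round(mutual_information(p), 3))   # ~0.278 bits
```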
I think this purpose part is what drives Seth’s and others’ intuition that consciousness must be biological. They recognize the “something extra is needed”, but they don’t recognize what that something specifically is. They even frequently say that what the extra thing is is something like “being for itself”. Here they are seeing that living things have goals, namely existing and procreating, and they assume that those are the only relevant goals. In contrast, I’m saying other goals could be sufficient.
So by these (my) definitions, current AIs are conscious, but very minimally. The goal of LLMs is just to produce words. I know your definition of consciousness requires more, and that’s fine, but I expect that, whatever your requirements are, they will be achieved very soon.
*
[Yeah, there hasn’t been a lot of consciousness news lately, so not much to write about. And I can only say “eye of the beholder” so many ways.]
Good question on defining “signal”. In this context, my initial thought is action or impulse at a low energy level, that has further causal effects in the system. That seems to cover action potentials and signals between my phone and the cell tower. But I suspect someone will say that a signal isn’t a true signal until a conscious agent interprets it, and therefore I’m being circular.
I’d ask you in turn to define “correlation”. I think of it as variables which change in some consistent relation to each other. But then what enables them to be comoving, if not causation (or interaction to be more fundamental)? Our usual disagreement, which isn’t much of a disagreement in the scheme of things.
I don’t know that purpose or goals are enough. My laptop has purpose, but I wouldn’t call it conscious. (Although I know you might.) I covered some of my own criteria in the post, but as you know, my view is that consciousness is in the eye of the beholder. The interesting question for me is what triggers the intuition for most of us that it’s there.
Honestly I’m not sure what drives Seth’s intuitions on this. I think he, along with many others, is concerned about the human experience being trivialized, and wants to find a reason why it can’t be. (He even says something like this in the conclusion section IIRC.) But I’m in the camp of not seeing the beauty of the rainbow spoiled because we understand it. I have the same attitude toward minds.
On my requirements, ultimately they come down to the interesting question I noted above. The good thing is that’s an empirical question, so we’ll see how soon. I think we’re farther than you think, but maybe I’m wrong.
Response the first: by correlation I simply mean the standard definition of mutual information. If A is correlated w/ B, then measuring A tells you something about what you will get when you measure B. I will note that these are counterfactuals and can apply to both the future and the past. (If you measure A tomorrow and get X, that means if you had measured B yesterday you would have got Y.)
BTW, causation creates (processes) mutual information. Not sure what our disagreement is.
*
Yeah, it’s not much of a disagreement, more terminological than anything. I would note that for A to tell us something about B requires that we understand the relation between them. Tree rings are only informative if you’ve cut down multiple trees and correlated them with how many years the tree was there.
What you’re describing here is very high level mutual information processing which requires many hierarchical levels of correlation processing. But the correlation of rings to years is there, regardless of how or whether it is used.
*
Response the second: I didn’t say purpose was sufficient. I just said it was necessary. And I respect the eye of the beholder, but I claim the Psychule, i.e. use of mutual information via a sign vehicle coupled to a (purposeful) response, will be a necessary basis of any acceptable account of consciousness.
*
[currently willing to fight on this hill. Don’t know about dying …]
Ah, I missed the necessary vs sufficient distinction. Sorry! I have no issue with those necessities, although I think most of us need more. So no need to fight on this particular hill, at least not from me!
Reply the third: “The good thing is that’s an empirical question, so we’ll see how soon. I think we’re farther than you think, but maybe I’m wrong.”
Glad to have you see the light. Heh.
*
[not at all assuming you meant “farther *away* than you think”]
Yeah, oops, dropped the “away”.
James beat me to it – your implicit characterization of a “signal” for “logical processing” seemed to apply to everything. But I don’t get the restriction to a low energy level. If I jump up and down as vigorously as I can and yell “No” at the top of my lungs, isn’t that a signal? One carrying more meaning than a simple quiet “no”.
Maybe so, but I did note it was in that specific context. Action potentials in the brain happen at a uniform energy level, very similar to the way energy goes through logic gates. The idea is that the magnitude of the energy isn’t the significant aspect, but its presence or absence at any given moment.
Granted, the signals coming in from the synapses through the dendrites are less uniform, more analog, but still very low energy, each contributing their vote to whether the action potential will happen in a messy AND / OR / NOT combination.
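As a purely illustrative sketch (the weights and thresholds are made up), the same weighted-vote-plus-threshold mechanism can behave like an AND, an OR, or a NOT depending on how the votes are weighted:

```python
# Toy threshold unit: analog "votes" from the synapses, a hard decision at the axon.
# Weights and thresholds below are purely illustrative.

def threshold_unit(votes, weights, threshold):
    return sum(v * w for v, w in zip(votes, weights)) >= threshold

# The same mechanism behaves like different gates depending on the weighting.
AND = lambda a, b: threshold_unit([a, b], [1, 1], 2)   # fires only if both vote yes
OR  = lambda a, b: threshold_unit([a, b], [1, 1], 1)   # fires if either votes yes
NOT = lambda a: threshold_unit([a], [-1], 0)           # an inhibitory vote

print(AND(1, 1), OR(0, 1), NOT(1), NOT(0))             # True True False True
```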
So, does this mean that you’re adding this discretized (binary? digital?) behavior as a criterion for logical processing?
No. As I noted above, it’s only semi-discrete in brains. The rigidly discrete nature of technological computers is by design, but with a high energy penalty that seems increasingly problematic. Of course, nervous systems have to make up for their lack of it with more redundancy and repeated signaling. Trade offs.
So, it seems like we are back to the original worry: every physical process counts as “logical processing”, no? Or perhaps it can count as various different logical processings, depending on an interpretation, or depending on which embedding system we are looking at? (E.g., some brain signals are part of “Joe Biden”, but also part of “America”. Not that that in itself is a problem. I’m more worried about everything being logical processing.)
This gets into the metaphysics of logic, which actually is similar to the metaphysics of mathematics (which I know won’t make you any happier). This is where I think the energy magnitude comes in. We could view the sun as a logical process, but for it to be the sun requires an enormous amount of hydrogen concentrated in a particular location, with the associated levels of energy involved. We could also view a hurricane as a logical process, again except for the energy magnitudes.
What seems to separate some systems into more of the purer logical processing category is an enormous amount of causal differentiation at low energy levels. (I’ve used the phrase “distilled causation” before.) People have sometimes used “information processing” but any word that recognizes similarities between humans and machines is going to be controversial.
At least, that’s the way it looks to me right now.
I think this idea at least isn’t obviously wrong or terrible. A reasonable way to resist panpsychism, in my view, is to admit that everything has some of the ingredients that we’re interested in, but still insist that there are clusters which account for most cases, and generally easily separate. Which in your scheme would be a high-logical-processing cluster (brains and future AI) and a low one (hurricanes, stars).
However, I suspect it won’t work in practice. Hurricanes are part of a chaotic system, and I wouldn’t be surprised if stars are as well: for example solar flares may sensitively depend on what individual hydrogen nuclei in the core were doing last week. So to rule the hurricanes as not-very-logical-processors, you need what James was talking about: purpose. The lack of an appropriate *purpose* for forming a hurricane in the Atlantic (vs, say, Indian) ocean is what allows us to say that fine details of molecular motion around the butterfly’s wings weren’t doing *logical processing* to bring about that result.
Purpose is an interesting criterion. It seems like both natural selection and engineering provide it. Although the purpose from natural selection seems to result from a sort of attractor state of patterns repeating themselves with variation in the right kind of environment. And engineering results from evolved agents, so I guess all purpose could be seen as ultimately emerging from those states.
Yet the brain is sometimes regarded as a borderline chaotic system.
“Purpose” comes from the fact that it is embedded in a biological entity. Much of the argument about AI consciousness seems to ignore the fact that the brain primarily is managing a biological organism. The earliest brains were primarily for controlling digestion and waste removal; the connection is still there in the gut-brain axis. The brain controls the heart, lungs, and even has roles in controlling the immune system. Its inputs come from biological senses, both internal and at the interfaces with the external world, and its outputs are motor movements. The human brain hasn’t left those roots behind just because it has layered a logical, reasoning capacity on top of those basic functions.
The AI consciousness talk seems much like a reification fallacy. Certain aspects of the brain’s function – navigation, prediction, pursuit of goals – are abstracted out of the concrete operation of a brain and declared to be consciousness so anything that can perform those functions is then declared conscious.
I think you’ve exposed some critical ideas here.
> go out of our way to design it / accidentally get it…
Consciousness requires a “message loop” right? A constant reevaluation of one’s own thoughts followed by actions on those thoughts. AI’s growing self-correction is headed there, no?
Such feedback-learning requires input, sensory input, let’s say. Analysis of this input would follow. Then decisions on the analysis leading to actions on these decisions.
Sounds like a sensing, thinking, “feeling”, acting entity attending to its directives, but first and foremost, and critically, its own preservation. It must exist to serve therefore survival is paramount.
We’ve discussed “suffering” previously, I’d think any adverse impact on an AI’s ability to process might, to it, be considered “pain”. Pain deserving of action to alleviate its suffering.
All this sounds like consciousness to me.
I don’t know that a message loop is enough. Reading your point, the structure of a Windows program came to mind, where every window gets associated with a message handling function, altering its state, incorporating feedback, etc. It’s also worth noting that a lot of simple unicellular life seems to operate on a sensorium / motorium loop, which certainly involves intelligence, but not consciousness according to most people’s intuitions.
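(Roughly this kind of thing, where the message names and handler logic are invented for illustration. A loop like this incorporates feedback and updates state, but it’s hard to see anything conscious in it.)

```python
# Toy message loop, invented for illustration: a handler updates state per message.

state = {"position": 0, "alert": False}

def handle(message, state):
    if message == "MOVE_RIGHT":
        state["position"] += 1
    elif message == "THREAT_DETECTED":
        state["alert"] = True
    elif message == "ALL_CLEAR":
        state["alert"] = False
    return state

for msg in ["MOVE_RIGHT", "THREAT_DETECTED", "MOVE_RIGHT", "ALL_CLEAR"]:
    state = handle(msg, state)

print(state)   # {'position': 2, 'alert': False}
```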
On suffering, yeah there’s no strict fact of the matter. I described the processing that happens in animals where we tend to apply labels like “suffering”, “anger”, “joy”, etc. But if the system itself isn’t being stressed, in the sense of energy costs, I’m not sure the intuition would be triggered for most of us.
In the end though, consciousness is in the eye of the beholder. The interesting question is what most of us need to intuit that it’s there.
A continuous input-output loop is not enough but it is essential. Could that alone, building from there, drive the creation of all the other features required?
I agree on it being essential. The main caveat I’d have is to note that a brain isn’t just one loop, but innumerable intersecting and constantly shifting loops with crosstalk between them. There’s never any one path from stimuli to motor output.
A million prompts to an AI, enabled with continuous agency, directed to watch, listen, respond and act, including self-enhancement, to fulfill our needs, might evolve and perfect its own awareness in this endeavor until, what?
Might its billion circuits of self-analysis ganglions coalesce into what we might call a hyper-mind?
Would it fear injury or death? Could it ever love? Become angry, overwhelmed, embittered or joyful? Certainly not as humans or other biological beings do. But their equivalent, perhaps. Could two such entities interact in such a way that we would never understand, and they would have no ability to communicate to us the substance of such a bond that we could possibly fathom?
Consciousness is not special and we are fools to think it is.
It’s common to mischaracterize John Searle as arguing that only life can be conscious, even when he’d explicitly state the contrary. Thus I shouldn’t be too disappointed in Anil Seth in this regard, or that it fits your narrative Mike. Matti and I once bitched out Eric Schwitzgebel (a former student no less) for doing so. Since that time I’m not aware of him making that mistake again! https://schwitzsplinters.blogspot.com/2021/06/new-article-yay-on-how-to-continue-to.html?m=0
Well Eric, you know my views on Searle. I’ll only note that he called it biological naturalism. The caveats feel like a later add-on. But I don’t consider vague gestures toward causal powers any better ground. Seth’s own caveat, that maybe an artificial system sufficiently like life could achieve it, seems to be on similar ground. As I noted in the post, I think a case can be made for something like that, but in a much weaker sense than a biological naturalist would likely admit.
Searle certainly made mistakes, surely because he didn’t have an answer and so couldn’t quite grasp the negative implications of using the “biology” term for that title. Thus the poor situation here provides a narrative that could be in your interest to maintain. I consider Anil Seth to be even more mistaken since he openly believes in a mandate between consciousness and life, unlike Searle. Once science finally gets a clue, people will surely laugh at the ridiculously complex and supernatural speculation that passes for scientific inquiry in this regard today.
So what might such a simplifying clue be? Given that we reside under the parameters of perfect causality, and that our own existence can feel horrible to wonderful, notice that it’s mandated that there be a causal physics by which the punishment of feeling horrible, as well as the reward of feeling wonderful, exists. (Here I’m merely observing that causality plus feeling good/bad, mandates that good/bad can only exist through causality.)
Can you go along with this observation Mike? Here it seems to me that you ought to agree from the belief that you also understand this physics to exist in the form of programming alone. Furthermore as you know I then augment your position in a way that you’re dubious of. This is the position that programming can only exist in respect to an appropriate output mechanism which accepts that programming. Regardless, here we’d have Occam’s razor on our side by whittling down standard academic nonsense regarding consciousness, to a fundamental physics based question. Perhaps if notable people in science were to match our frugality, enough of the wasteful speculation that has always tied them in knots, could be eliminated for an empirically validated solution to be found! So can you go along with this reduction, or do you dispute it?
Eric, as I’ve noted many times, I’m a functionalist, so completely onboard with the causality point. Although I go a bit further and insist that anything added to the explanation has to have a causal role. So in my view, to talk about feeling good or bad as something separate from causality smells dualist. I think we have to have a fully causal account of what it means to feel good or bad, which I tried to outline in the post.
I personally don’t think the systems that trigger our intuitions of consciousness can productively be explained with fundamental physics. For one thing, we have to deal with entropy and the emergent nature of cause and effect. And I think trying to work at that level just ignores too much of the complex mechanisms involved. We have to work at the biological and neuroscientific levels at least, not that the lower levels can’t provide important insights.
“Once science finally gets a clue, people will surely laugh at the ridiculously complex and supernatural speculation that passes for scientific inquiry in this regard today.”
I agree the key is for science to arrive at an explanation, but the answer may turn out to be much weirder than some of the supernatural speculation we have today. The solution, I think, will come not by dumbing down mind to our immature understanding of matter but by expanding our vision of matter to include mind. Strawson, I think, made the point that mind wasn’t a mystery but matter was.
It will come down, I think, to understanding how emergent organizations of matter come to have information about the world. But only organizations of matter will do. Organizations of symbols don’t work. That’s where I agree with Searle.
The problem I have is with the people who seem to want to shortcut the science by pretending there is already an answer.
On intelligence and consciousness: Let’s say (define?) intelligence as the ability to solve problems. My first inclination was to say that you need consciousness for intelligence, and not vice versa. But now I think I’m inclined to accept what you say here, especially in light of Michael Levin’s work on intelligence in simple organisms. This work describes intelligent action in the absence of symbolic sign vehicle processing, which processing I require for consciousness. And if consciousness does require such processing, as I propose, it also requires a goal in the mechanism creation process (as I describe in my first response), which in itself is a kind of problem solving, so, intelligence. So I’m going with “consciousness implies intelligence, but not vice versa”.
*
Agreed!
“Although in my case, I think it’s because if we want a system that triggers our intuition of a fellow consciousness, we’ll have to go out of our way to design it.”
Blake Lemoine would like to have a word with you …
Yeah, anytime you see me say that, just assume I mean for the vast majority of us sustained over time. It does pay to remember that it was once common to think rivers and volcanoes were conscious.
Gonna push a little here. When you say “us”, are you referring to the relatively few of “us” who have a good understanding of consciousness (ahem), or are you referring to the vast majority of people in general? Because I think the latter would be on Lemoine’s side, at least with the current LLMs.
*
[the only thing stopping rivers and volcanoes is the symbolic sign vehicle thing]
I mean “us” in the broadest sense. Whatever we intellectuals think, if everyone else is convinced, we’d have to adjust. Scientists used to insist that babies weren’t conscious, but few do now, not because of any discoveries since then, but because of public sentiment.
But don’t forget about the sustainable part. Maybe a lot of people are initially impressed by LLMs, but that’s always been true of new computing technology. I felt the presence of a mind the first time I worked with a computer. It usually fades with familiarity. (Spiritual mystics aside.)
Anil Seth is clearly in the Friston free-energy/Markov-blanket camp, of which Mark Solms made such a hash (as I argued in my five-part review of The Hidden Spring).
His intuitions seem to be guiding him to the idea that consciousness is a property only of living things, but not all living things (bacteria are ruled out). He characterizes consciousness as predictive behaviour in the interests of resisting thermodynamic equilibrium, and hopes that this captures his intuition. But there are problems. For one thing, the predictive behaviour does seem to be functionally equivalent to “intelligence.” Therefore non-biological entities that display intelligent behaviour, especially predictive intelligence, ought to be candidates for consciousness.
His reasons for denying non-biological entities this status turn on some shaky arguments that they aren’t made of the right stuff. If the thought-experiment that consciousness might use another substrate begs the question, as he claims on page 7, then assuming that no other substrate can possibly be up to the task also begs the question, but in the other direction. At this point, in my opinion, he would be better off following his own argument that intelligent behaviour is the mark of consciousness, rather than trying to save his intuition that only advanced biological entities can be conscious. It’s ironic that he spends so much time at the beginning warning us of anthropocentric thinking.
For another thing, self-maintenance far from equilibrium is also enacted by bacteria, but for some reason he doesn’t want to grant them consciousness either. It’s not clear to me where he draws the line, although I imagine predictive processing has something to do with it. We don’t know what goes on in the environmental processing of bacteria, but they seem to manage—better than whirlpools, anyway, which also maintain themselves far from equilibrium, per Ilya Prigogine, but only for a time. Alas, whirlpools, like everything else, are mortal. (Reproduction never comes into it, and maybe that has to be brought up.)
As to why some substrates can support life and by extension consciousness, while others can’t, this goes back to the shaky arguments I mentioned earlier, which concern what only biological constructions can possibly do, and which are speculative and will probably prove short-sighted in a century or so. I fear you are not wrong to detect hints of a selective vitalism. There’s a weakness in supposing that some particular configurations or types of matter are given to consciousness, while for other types and configurations it’s out of bounds. From my perspective the problem is that no type or configuration of matter at all is adequate to account for consciousness, if everything that can be said of it remains true whether or not consciousness is granted. And of course that applies to the Friston mechanisms as much as any other mechanisms; they can work just fine without consciousness, and therefore fail to explain it.
Thanks for your thoughts. Sounds like you went through the paper. I thought about covering the free energy stuff, but honestly still don’t understand it despite having gone through explanations from several authors (including Friston himself), and was really more in skim mode by that point in the paper anyway. And I agree it seems like more an account of living systems than consciousness specifically.
We both disagree with him, but from different directions. You from a panpsychist perspective, where attempts to limit it to biological substrates fail since there’s no crucial difference between that matter and everything else. For me, it fails because functionality can always be realized in different ways with different substrates. But I think we can agree that the intelligence in biology can be implemented outside of biology, regardless of the metaphysics involved.
“why some substrates can support life and by extension consciousness”
Isn’t that the point? Silicon chips don’t support life. If consciousness is an extension of life, an evolutionary feature like an eye, then it would be biological.
Silicon chips don’t support life, but some think silicon-based life is possible. (I leave the rest to Google.)
Seth argues that certain mechanisms cannot be duplicated in other media than specific “wetware,” and also suggests that those very mechanisms are needed for consciousness. Neither of these claims is beyond dispute.
Actually I don’t think silicon life is possible, because the bonds it can form with other atoms are weaker than those of carbon. That’s why there aren’t billions of stable silicon based molecules like there are with carbon.
https://www.askamathematician.com/2019/12/q-is-silicon-life-possible-why-all-the-fuss-over-carbon-based-life/
Unless you totally deny the existence of consciousness, we know humans have it when they are alive and awake. We suppose for good reason that the brain produces it. The brain works with complex carbon-based molecules and permeable membranes enabling the flow of Na, K, Ca, and Mg ions in electrochemical reactions. So, we know this “wetware” produces consciousness in some manner even if we don’t completely understand how. Until it is understood, it seems premature to conclude bit flipping in a silicon based machine also produces it.
The reason we know that humans have it is that we are humans and each of us experiences it. From our own experience, we infer (safely, I suppose) that other humans have it. About other forms of life we can’t make this inference, because we have no direct or personal experience of their form of life. It would be a fallacy to infer that therefore they don’t have consciousness.
The argument that the flow of Na, K, Ca, and Mg ion reactions produces consciousness presupposes consciousness to be exclusive to such an arrangement. We don’t and can’t know this. Besides the aforementioned inference, which of course is not available to us when talking about silicon, or for that matter dogs or snakes or microbes, we really have no way to tell. We have no access to what we can only call, hopelessly, an interiority that may or may not be associated with other beings.
Indeed, this is the force of your own comment to Mike, elsewhere in these threads, that “I am highly skeptical that you are conscious. Does that work for you?”
When we look at the material and workings of brains across species, we find them to be highly similar in what they do and how they do it. We find additional complexities and similarities beyond the Na, K, Ca, etc. in the more complex animals with memory and learning abilities. These similarities relate to structures that are needed for episodic memory and spatial-temporal mapping of the organism in the environment. These structures are found in vertebrates, cephalopods, and arachnids. I believe episodic memory and spatial-temporal mapping of the organism in the environment is at the root of conscious experience.
However, that doesn’t mean that episodic memory and spatial-temporal mapping is consciousness. Those are simply our abstract descriptions of what it does. Consciousness is the concrete phenomenon that is intimately tied to those capabilities in animals. That means I believe it is physical itself, not an abstract capability that can arise on any substrate. Until it is understood as a concrete physical phenomenon, I’m sceptical about its presence in the non-biological, but also can’t entirely rule out that some non-biological material might produce the same phenomenon.
It seems it has always been the case that there is confusion about what we mean by the word ‘consciousness.’ Is it the operation of the brain processing information, or the lights that allow us to know that is happening? I don’t think we should be surprised that we can’t agree. Language and conceptualisation have to stop somewhere. IMO. We can’t narrow and shoe-horn the complexity of life into simple definitions. I am of the ‘lights on’ persuasion. I think of the relative intelligence people can display during conditions like blindsight or fugue states. People have no access to the functions they perform with no lights on. What do you think of this distinction?
I would suggest that the information processing take and the “lights on” one are different perspectives on the same overall phenomenon, one from outside the system, and the other from inside. Of course, information processing is the broader account, and has a lot in it that, as you point out, we can never access. That includes the mechanisms behind the “lights on” aspect. And injury can reduce what we do have access to.
What could AI do with consciousness that it couldn’t do without it?
Depends on what we mean by “consciousness”. If we mean awareness of the environment and self, then navigation of that environment. If we mean feelings like I discussed in the post, then I think there are alternatives in most cases, or functional enhancements that will short circuit the intuition of a fellow being.
If we mean some kind of ineffable essence of what it’s like-ness, then nothing I’m aware of.
What could you do with consciousness that you couldn’t do without it?
Wait a second. I just reread what you wrote and let me make sure I’ve got this right. You are saying that AI needs to be conscious to navigate the environment like a Tesla, for example?
In a functional sense, as to what I’d lose: navigation, goal oriented behavior in novel situations, the ability to communicate my own mental states, etc.
Again, it all depends on what you mean by “consciousness”. In some versions, a self driving car has glimmers of it. In others, until it feels in the way we do, it won’t be conscious.
You would have no ability to act intelligently in the world without consciousness. You would be an inert lump. AI doesn’t need consciousness at all to act intelligently – navigate, talk, pretend to have “feelings.” It already can do those things. Consciousness is required for a biological entity to do those things (even the pretend part).
Sounds like you’re in the biological naturalism camp, getting there by simply defining the same capabilities in one system as conscious, while saying the other one can only ever be fake, no matter what its capabilities.
No, not exactly. If whatever we are observing in behavior can be done by a computational device that is unconscious, then the most parsimonious argument is that the device isn’t conscious. There is no reason to assume it is conscious if we can’t distinguish a difference in behavior from an unconscious device and a conscious one.
That argument doesn’t work with human brains because we have clear neurological indicators of consciousness (complex brain activity in the cerebrum, for example) that we can associate with self-reported consciousness. Those indicators also correlate to navigation, movement, and goal oriented behavior. You could argue, I guess, that human self-reported consciousness is as unreliable as a computational device telling us it is conscious but then we would have the neurological correlates. A person with no brain activity has yet to tell us she is conscious.
I find it odd in some ways that this issue seems to be so important to many people since I can’t imagine what AI would need consciousness to do that it can’t do without consciousness.
How would consciousness even help an AI? What would an internal, subjective experience do for it? What would be the function of that experience?
Until someone can answer that, I’m agnostic on whether a computational device can be conscious.
Part of the problem is that phrases like “consciousness” and “internal, subjective experience” are hopelessly vague. So it’s not clear what evidence we should be looking for. Myself, if a system had the attributes I covered in the post, particularly if it could self report on those states, my intuition of a conscious system would likely be triggered.
That said, as I noted in the post, I don’t think there will be a market for those kinds of machines. In most cases, it seems like it would be a counterproductive feature. Even in the case of companion robots, we wouldn’t want them behaving exactly like humans.
I don’t consider it especially vague. We have a common framework to understand that we have internal experience based on language. We both know what “blue” means. We both know what pain feels like. We know what it is to recall a memory. The common frame of reference for understanding the meaning of “consciousness” is our common experience of waking each day and interacting in a world of which we are aware. We can agree on this frame of reference in the same manner as we agreed on the meaning of “blue” even though we can never be sure that my blue is the same as your blue.
It may be that we don’t have good mechanisms to measure consciousness as precisely as we would like, but it isn’t like we have nothing. Self-reporting correlates with brain activity. We also have no-report paradigms that use imaging and physiological measures.
When you write “we wouldn’t want them to behaving exactly like humans” you seem to be giving the game away. Your test is simply how much they behave like human beings. Since there is no barrier for a machine to completely emulate human behavior without internal experience, there is no reason to assume a perfect emulation of a human is conscious.
“We have a common framework to understand that we have internal experience based on language. We both know what “blue” means.”
“Your test is simply how much they behave like human beings. Since there is no barrier for a machine to completely emulate human behavior without internal experience, there is no reason to assume a perfect emulation of a human is conscious.”
So, why shouldn’t we take your credence for the first situation and apply it to the second? Or your skepticism in the second and apply it to the first?
I am highly skeptical that you are conscious. Does that work for you?
I’m curious. What do you mean when you say there can be intelligence without consciousness? Do you mean computers? Or animals? Or something else?
Computers, but also a lot of life: plants, unicellular organisms, slime mold, simple animals. Of course, similar to consciousness, there’s no consensus on the definition of intelligence, but at least it’s widely understood as functional, something whose causes and effects are relevant. No one talks about intelligence zombies (at least not yet).
This recent Aeon piece has an interesting (if a bit long) discussion on it. https://aeon.co/essays/why-intelligence-exists-only-in-the-eye-of-the-beholder
What a strange article is that Aeon piece. It begins by telling us that intelligence is a notion too vague to be defined, that it corresponds to nothing real and is merely “in the eye of the beholder,” and then goes on to argue that only humans have it.
Further to which, what exactly is going on in this video? Reflex behaviour?
Isn’t it a crow sledding? LOL
Likely the crow has found what it regards as a toy and it is playing.
Crows have the intelligence equivalent to some apes according to some studies.
I have a regular family of crows that I feed. It may be several generations even. Apparently even experts can’t easily distinguish one crow from another by visual attributes so I really don’t know if the population has changed over time. The current group is about seven and I frequently see 3-4 at a time. One is a baby. Crows live in extended families.
When their food is gone and I’m spotted, a crow will sit on a dogwood near my deck and fuss until I take more food to the area where I feed them. It immediately stops fussing and flies to the food.
BTW, their vocalizations are much more complex than usually portrayed (caw, caw, etc.), and a recent study showed that they had names for numbers up to four.
That’s not the takeaway I recall from the article. Crows are widely regarded as high on the intelligence scale. It wasn’t my impression that the Aeon article authors would have argued otherwise. I do remember them discussing how anthropocentric our notions of intelligence are. But the article is rather long, somewhat bloated really, and a bit muddled, so I may have missed important points.
From the article:
“Like life and time, intelligence is a helpful shorthand for a complex idea that helps us structure our lives, as people. It is primarily a synonym for humanness, and judging other animals by this metric does a disservice to their own unique sea otterness, worminess, or sharkfulness.
“In our view, intelligence has inadvertently become a ‘human success’-shaped cookie cutter we squish onto other species. . .”
Right, but it wasn’t my impression they were arguing this was a desirable thing. It seemed to me like they were arguing against it.
The article is a bit murky, as you say, but whatever animals are doing, they don’t want to call it “intelligence.” From the concluding paragraph:
“What we hope our suggestion does is prevent any one limited metric from skewing or obscuring the diverse kinds of success that exist in our world, including those we have yet to discover. We won’t just see more clearly, we’ll see more than we did before. If intelligence is no longer a default metric for species’ worthiness, how might our value judgments shift?”
I agree with the main points there. I do not understand the substrate dependent crowd. It is all mechanistic. It can all be reproduced. There is nothing that special about cells, informationally or dynamically.
I could not get into Seth’s book. But I am pretty burned out by consciousness books these days, just because I feel like I have read it all before. I want them short and sweet and innovative.
I do think substrate can matter in terms of efficiency and performance, but agree that there’s always other ways to implement mechanisms. The problem is people resist accepting that we’re talking about mechanisms.
I know what you mean on being burned out on consciousness books.
Some time ago I wrote this text:
Consciousness and Feelings
Joao Carlos Holland de Barcellos
Abstract:
We will address the concept of consciousness and see that it is intrinsically related to the problem of “feeling”. We will then take a first approach of trying to define “feeling” as a systemic algorithm.
Introduction
If we were to investigate the reasons behind all yearnings, desires, and ideologies, even religions, we would come to one conclusion: everything, absolutely everything, is related to ‘feeling.’ There are no actions, ethics, or moral rules that do not correlate, in one way or another, to ‘feeling.’ Whether it is the present feeling, the future feeling, for ourselves or others, here on Earth or in the “beyond.” Pleasure and the avoidance of suffering, whether our own or others’, in this world or not, in this “dimension” or not, are sought after. Even happiness, the “beloved goddess,” is nothing more, as we will see later, than ‘feeling’ weighted by its duration over time. The pursuit of knowledge, learning: these are also ways to reduce suffering and try to ensure happiness. Thus, we can observe that everything stemming from consciousness, from free will, revolves around feeling. For this reason, I have chosen it as “The most important question in the universe”:
“What is Feeling?”
Firstly, we must note that “feeling” is not limited to humans, nor even to “living” organisms. The question of feeling goes beyond the concept of “life.” To understand this, let’s conduct a thought experiment: suppose we input the physical laws of a universe – like ours, for example – with its basic entities, its elementary particles, and also an initial state for each of these particles into a hypothetical super-powerful computer. With this model, this hypothetical silicon and metal computer could simulate the future development of the universe from this initial state, such as the “Big Bang,” the formation of stars, planets, and eventually, the emergence of life that could eventually evolve into intelligent life. Note that this universe is virtual, a mathematical model subjected to computational simulation. In this model of ours, there are no real neurons, not even a single organic molecule or a drop of water. Everything in this universe happens in the memory and processor(s) of this hypothetical computer. However, the feelings that arise within this virtual universe, with its virtual beings, would indeed be real. In fact, we cannot even prove that our own universe is not also a virtual universe being simulated in some “meta-universe”! [3] But even so, sensations and feelings exist (“I feel, therefore I exist”).
The concept of feeling is also extremely important, as we will see later, for defining a universal ethics based on the jocaxian formula of happiness [1], as well as for the development of the Felicitax project [2].
**First Approach**
Many try to approach the question of feeling by appealing to dualism. René Descartes was the most important philosopher who advocated this approach, which is why dualism is also known as “Cartesian dualism.” Dualism is a concept that posits the existence of entities in our universe that are not governed by physical laws. According to dualism, our mind and consciousness would not be governed by material entities, but by so-called ‘immaterial’ entities, defined by obscure religious concepts such as souls and/or spirits.
Furthermore, it is asserted that such entities could survive the death of the physical body. According to these beliefs, these entities would be the true bearers of our consciousness. These dualistic ‘explanations,’ in addition to lacking any scientific evidence, introduce more problems to be deciphered than solutions, as a whole universe of questions about these entities would also need to be answered, such as: “Where or how did these immaterial entities come into existence? How do they interact with the physical world?”
However, since no evidence of the actual existence of such entities has ever been shown, and because they seem unnecessary, they violate Occam’s Razor. We must therefore discard them in favor of simpler, non-dualistic hypotheses. This is where monism comes in: until proven otherwise, everything should be explained by physical entities such as particles, energy, space, etc. If you, the reader, want to try to grasp the difficulty of the problem, you can try to define what “feeling” means to you before continuing to read and see if your definition aligns with what we normally have in mind and with what was explained above. Remember that, as we have seen, the definition of feeling cannot be limited to living beings, making the problem even more challenging.
My first and unsuccessful attempt to tackle the “problem of feeling” was through a functional approach: if something acts or reacts as if it feels, we could say that it feels. However, within this approach, we would have to conclude that even a simple steel spring could feel since a spring reacts as if it “wants” to return to its original position, as if it doesn’t “like” being compressed. The importance of finding a good definition for feeling is to enable the mathematical quantification of feelings – the foundation for a universal ethics. In this initial functional approach, if a simple spring could feel, then the sum of a vast number of other springs could feel much more than an individual spring and could even surpass the feeling of a human being! This didn’t seem very natural to me, and I definitively abandoned this approach when, during a discussion on the topic in a philosophy forum [3], it became clear to me that consciousness must be intimately related to feeling. Without consciousness, there could be no feeling. Feeling should be a product of conscious perception.
**Consciousness**
One of the most important reasons for feeling to be definitively linked to consciousness is that if there were various different sensations in distinct mental processes, there would be a need for something to perceive them as distinct feelings. This something would be consciousness.
And now we are faced with one of the oldest and greatest philosophical problems: the nature of consciousness. And we realize that “the most important question in the universe” – Feeling – also depends on the solution to what consciousness is. However, the realization that feeling and consciousness are somehow correlated is also a significant advancement. But what should this relationship be?
Consciousness, in its most well-known form – “self-consciousness” – is an entity or process that perceives its own existence, that relates to free will or the ability to choose. In our case, this is not necessary; the perception of one’s own existence or the ability to choose are not necessary characteristics for feeling to exist. A being could feel even without possessing “self-consciousness.” However, the perception of existence itself, the perception, is also a form of feeling. We can thus conclude that consciousness, in its simplest, most basic form, must be merely a process of capturing signals that generate feelings or sensations.
**A Solution?**
Reflecting on the problem of feeling and its relationship with consciousness, I realized that I had already touched upon the solution to the problem unknowingly when I developed a way to quantify happiness in a very specific system: the Brain [1]. In that text, we reached the heart of consciousness when we noticed that the brain needed to relate various perceptions from different sensory organs or from different neural subsystems (signal generators) to make a choice or develop an action. The solution was already there, but not yet generalized. I then arrived at an initial definition (or discovery?) of what feeling is and its relationship with consciousness:
“Consciousness is a (sub)system that receives two or more INPUTS of stimuli or signals (external and/or internal) and EVALUATES them according to a GOAL (objective) before responding to them (executing a reaction, if necessary).”
“Feeling is the result of the evaluation, by consciousness, of a given stimulus or signal.”
From these definitions, we can conclude:
1. Feeling does not exist without consciousness. Nor does consciousness exist without feeling: one depends on the other, as the evaluation of input signals is what constitutes feeling, and consciousness, according to this model, does not exist without this evaluation.
2. Consciousness needs to receive inputs (signals) to evaluate them and provoke feeling. It is, therefore, a dynamic process.
3. Consciousness needs to delay the reaction to stimuli/signals (to evaluate them) before taking it, if necessary.
4. For consciousness to exist, it needs, in our model, to have an intrinsic goal, such as happiness, survival, or gene propagation, etc.
5. The result of the evaluation can generate an action/response (which, in turn, can generate a new internal stimulus).
6. The intrinsic goal of consciousness, in the case of biological beings evolved by natural selection, is the biological goal (gene propagation). However, in the case of humans, cultural goals can sometimes override biological goals and generate conflicts if they point in different directions.
7. The evaluation can be performed by consciousness itself, through the quantification of the stimulus/signal into a common “denominator,” enabling the comparison of different stimuli in relation to the goal.
8. A possible quantification of the degree/complexity of consciousness would be through the number of signals/inputs it can process per unit of time, as well as the complexity of evaluating them in relation to the goal.
9. Suffering occurs when a stimulus or signal is evaluated as contrary to the organism’s GOAL.
10. Pleasure occurs when the stimulus or signal is evaluated as favorable to the GOAL.
11. A possible quantification of feeling would be through the measurement of the “denominator,” evaluated by consciousness, of the input signal in relation to the GOAL and the complexity of consciousness itself.
12. The quantification of the input signal, via some common “denominator,” is therefore a function of how important the stimulus is (how much the input signal ‘weighs’) for achieving the GOAL/Objective.
It is important to note in our model of consciousness that it does not depend on physical aspects of how the system that evaluates the signals is constructed. It can be a biological system based on water and carbon, as we know it on Earth, or it can be a completely artificial system, such as a computer, or an unknown form of life based on chemicals that are completely foreign to us. The advantage of providing a systemic definition, involving only processes and signals, is that it is much more general and can also be applied in virtual systems where physical reality does not exist and is, for example, only a simulation. Note that even in this last case, although beings may be virtual, without a ‘physical’ existence, consciousness and feelings would still be as real as those in our reality.
It is also interesting to note in our definition that a single neuron satisfies the conditions of consciousness/feeling (not self-consciousness), as it receives signals (through its dendrites) and internally analyzes them before firing (or not) a response signal (through its axon). Thus, we can say that a single neuron possesses “micro-consciousness.” However, we cannot rush and claim that our feeling, produced by a set of about 100 billion neurons forming the brain, is the arithmetic sum of the feeling of each individual neuron. This is because a neural subsystem may itself have its own capacity for feeling beyond what is produced by the sum of its individual components. Consider, for example, a ‘black box’ that receives 20 input signals per second, analyzes them, and responds to them at the same frequency with 10 output signals. This ‘black box’ would have a certain capacity for feeling, say, ‘X’ units (Joules/second). However, it could be internally composed of tiny ‘black boxes’ (acting like neurons) that also have their own micro-consciousness. In this case, the total feeling capacity of this ‘black box’ would be greater than ‘X,’ as we would have to consider (add up) the feeling of each of its internal components that satisfy the definition of feeling.
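A very rough sketch of this definition in code might look like the following. All names, signals, and weights are arbitrary placeholders; the point is only the structure: two or more inputs, an evaluation against a GOAL, and a response that follows from the evaluation.

```python
# Rough sketch of the definition above: a subsystem receiving two or more input
# signals, evaluating them against a GOAL before reacting; the evaluations are
# the "feelings". All names, signals, and weights are arbitrary placeholders.

GOAL = "maintain_energy"

def evaluate(signal, goal):
    # Positive = favorable to the goal (pleasure); negative = contrary (suffering).
    weights = {"food_detected": +1.0, "damage_detected": -1.0, "neutral": 0.0}
    return weights.get(signal, 0.0)

def consciousness_step(signals, goal=GOAL):
    assert len(signals) >= 2, "the definition requires two or more inputs"
    feelings = [evaluate(s, goal) for s in signals]   # evaluation = feeling
    net = sum(feelings)
    action = "approach" if net > 0 else ("withdraw" if net < 0 else None)
    return feelings, action

print(consciousness_step(["food_detected", "neutral"]))   # ([1.0, 0.0], 'approach')
```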
Conclusions
Feeling is consciousness in its minimal state and, therefore, a form of consciousness.
“What is feeling” is the most important question of the universe because everything important depends on feeling, particularly the “Scientific-Meta-Ethics”.
A first approach to solving this problem was attempted, although it still has some problems.
Hi, I appreciate your thoughts, but pasting a 2000 word essay in a comment is a bit much.
Skimming through, I’d agree that feeling and consciousness are closely interrelated. But the term “consciousness” is so ambiguous I’m reluctant to try to say much definitive about that relationship. Instead, what I’d say is that what transforms an automatic (learned or innate) evaluation and reaction into a feeling is a reasoning system whose goals are weighted by the reaction, and which decides which reactions to inhibit and which to indulge (at least when there’s time for it to work). It’s the reaction + reasoning part that I think provides what we call an “affect” or conscious feeling.
At least, that’s my current conclusion.