David Chalmers, in his book Reality+: Virtual Worlds and the Problems of Philosophy, eventually gets around to addressing the 800-pound gorilla in the room for any discussion of the simulation hypothesis. Can consciousness itself be simulated, and if so, would the resulting entity be conscious?
This issue, I think, is what makes many react with far sharper incredulity to this hypothesis than they might for other speculative technologies like interstellar travel, nanotechnology, or flying cars. It’s one thing to imagine biological humans wired into a simulation like in The Matrix. It’s a whole other matter to imagine simulating the humans themselves.
Myself, I’m pretty much a stone cold functionalist. As far as I can see, if a machine can reproduce the functionality of a conscious system, then it will have reproduced consciousness. Which isn’t to say that reproducing that functionality is in any way trivial. We’re probably decades away, or maybe even centuries, from being able to do it. But I can’t see anything in principle to prevent it. (Obviously new discoveries could reveal blockers at any time.)
Chalmers, on the other hand, is the philosopher who coined the “hard problem of consciousness” and popularized the philosophical zombie thought experiment. He’s argued extensively that no physical explanation for consciousness is possible. For him, it can only be explained by either invoking non-physical forces, or by expanding our concept of what physics is. In this view, a theory of consciousness would be similar to a fundamental theory of physics, composed of brute irreducible facts that we simply have to accept.
All of which seems like it would make him skeptical of machine consciousness. But Chalmers’ stance has always been nuanced. He’s a dualist, but a property dualist rather than a substance dualist. He doesn’t equate consciousness only with functionality, but he does see it as something that “coheres” with the right functionality and organization. It’s a view that reconciles with science. So he’s long been open to the possibility that this non-physical or physics-expanding ontology can be present in a machine.
This nuanced view operationally seems like a combination of functionalism and identity theory. A straight functionalist can be open to functionality implemented with alternative strategies. Chalmers’ extra metaphysical commitment makes him care more about the specific organizational structure of the brain, and it seems to make the question of mind uploading harder to postpone.
He notes that the philosophical problem of other minds means we can never know for sure whether a machine or uploaded entity is actually conscious. Being uploaded, he muses, might just mean the creation of a philosophical zombie, even if it produces an entity with similar behavior that talks about its own consciousness.
In an attempt to work through this, Chalmers goes through a thought experiment (which some of you have already been discussing in the comments) where we replace our brain one neuron at a time with an artificial technological one. He asks, what happens to our consciousness during this process?
It seems implausible that it disappears on the first neuron being replaced, or on the last, or on the replacement of any one neuron in particular. Maybe it gradually fades away, but if our behavior and capabilities are preserved, then it’s a situation where we’re not aware of it fading away, where we are in fact massively out of touch with our own experience. Chalmers also finds this implausible. In his view, the most likely scenario is that our consciousness continues the entire time.
Interestingly, a functionalist might be open to a more aggressive version of this scenario. Imagine having cybernetic implants installed to reproduce functionality lost from strokes or other injuries. So if someone’s visual cortex is damaged, maybe we replace it with an implant that provides similar functionality. If later their amygdala is destroyed, we also replace it with an implant. Over time, every part of the brain gets replaced with something providing the same functionality as the lost part. This might be part of an overall process happening all over the body.
At what point, if any, does the person’s consciousness end? I wonder how Chalmers’ intuitions would change with this version, since it wouldn’t preserve the fine-grained organizational structure.
In either case, rather than an abrupt copy, we evolve the mind from one substrate to the other. It’s easy to see that there would be changes along the way, so that the final resulting mind has differences from the original. But I have differences from the me of ten years ago. I regard myself as the same person because of the continuity between us, even though most of the atoms of the ten-years-ago me aren’t present anymore. It seems natural to take the same stance toward the gradual replacement.
Of course, these thought experiments can’t provide any kind of authoritative answer to the question of whether consciousness can be produced in a machine. Like any philosophical thought experiment, all they can do is exercise our intuitions for the scenarios where the functionality, and possibly organizational structure, are successfully reproduced. Many will simply reject that these scenarios are possible.
The common sentiment here is that we only have evidence for consciousness in organic brains, and assuming it can exist anywhere else is hasty, if not hopelessly misguided. But it’s worth noting that, strictly speaking, for a non-functionalist, we only have direct evidence for our own consciousness. Consciousness anywhere else has to be inferred from the behavior and functionality typically associated with it. Which behavior and functionality in particular is a controversial question.
For whatever functionality we decide is sufficient, the question remains whether it can be reproduced in technology. Here it’s also worth noting all the things that used to only be possible with natural brains, such as calculating ballistic tables, recognizing faces, playing chess, or any of a wide and ever increasing set of capabilities. Maybe we’ll eventually hit an insurmountable obstacle on what functionality can be reproduced, but there doesn’t seem to be any current reason to assume it. For any specific functionality at least, it’s eventually going to be an empirical question.
Unless of course I’m missing something?
Right now, I think it’s science fiction but, judging from some of our technology, we may be closer than we think.
Thanks. I do think we have a ways to go, not just technologically but scientifically. But you never know how fast things might happen.
//we only have direct evidence for our own consciousness. Consciousness anywhere else has to be inferred from the behavior and functionality typically associated with it.//
If it’s the case that it is possible that other people are not actually conscious, then we are saying it’s possible (even though incredibly unlikely) that they are philosophical zombies. But materialists assert, and indeed their position entails, that p-zombies are metaphysically impossible.
Thanks for commenting!
In that snippet, I was speaking from the non-functionalist’s viewpoint. I made a minor edit to make that more apparent.
As I noted in the post, I’m a functionalist. I think the original philosophical zombie concept is only coherent if we assume dualism, which is problematic since that’s what they’re supposed to demonstrate. Behavioral zombies are a bit more plausible, but they don’t have the same implications as the classic version. I did a post on this a while back.
https://selfawarepatterns.com/2016/10/03/the-problems-with-philosophical-zombies/
Just like the related isomorphism argument, the claims of functionalism are hollow without details about what the functions are. What if the function of consciousness is to modify and control neuron firings by incorporating information from senses and memory in an integrated manner that enhances the survival of the organism?
Defined like that, maybe I could be a functionalist.
There is a tendency to define consciousness in terms of the external behavior of a conscious organism, like defining a car by its ability to move and transport people from one place to another. My argument is that consciousness is more like the steering system of the car. Its function is to be an enabler of the external behavior of organisms, but it is several steps up the causal chain, and the causal chain includes neurons. We could envision multiple types of steering for a car and multiple types of steering systems for robots, but the steering system for an organism would necessarily have to work through a nervous system.
As things stand, there are different combinations of functionality to choose from. As I noted in the post, which combination we call “consciousness” isn’t really a strict fact of the matter. The versions you list, integrating sensory info to enhance survival, and a behavior enabling system, are both plausible (and not exclusive of each other). But there are others. It’s why I pull out hierarchies so often.
Enhancing survival is an interesting stipulation. With it, we’re not likely to regard any machine with a specialized purpose, or set of such purposes, as conscious, since there aren’t many practical roles for machines primarily interested in their own survival. On the other hand, a simulated living animal seems like it could fit the bill.
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
It doesn’t look like much progress has been made on that theory since it came out in 1987. It is interesting but seems somewhat mammal-, primate-, and human-centric. I think if progress is to be made, I would look at smaller organisms like spiders. At any rate, the theory is sufficiently interesting to me to buy Edelman’s book. I don’t think creating machines that mimic humans or even smart animals convinces me that the creations are conscious.
Well said James. The overwhelming majority of the consciousness industry, from philosopher to scientist to engineer, has been aligned with the notion that mimicking phenomenal experience should produce phenomenal experience. The origins seem to reside in Alan Turing’s imitation game. Edelman was simply one of countless in this race, his ploy being that tinkering with actual machines should be more promising than the merely theorized variety that most provide. I can’t say that he’s failed any more than the rest have. Sometimes even ideas that happen to be true need good marketing ploys to gain acceptance.
Given the tremendous investments which have now been made in the premise of mimicry here (and I’ve noticed that young people tend to go this way as well, given its dominance), I kind of doubt that reason alone can prevail. Instead reason may need to be supported with biological evidence of what the brain essentially does to create something that phenomenally experiences its existence. Then once that physics becomes verified in enough ways, in hindsight it should become understood that the status quo had been founded upon a supernatural premise. Until then people like us should be considered jerks for raising such objections. And though I do present a negative picture of the status quo, I see no malice in either their position or mine. I consider us all to be behaving sincerely here.
Actually most aren’t even claiming to mimic phenomenal experience. The argument is that the mimicry of some subset of external behavior either demonstrates the presence of consciousness or renders the question of consciousness irrelevant. As soon as the idea that consciousness is biological is abandoned, then consciousness becomes woo-woo that materialists believe in.
I suppose you’re right James. They figure that if the behavior is right, then either the phenomenal experience will be mandated by that behavior, or perhaps none will exist, though here the missing phenomenal experience would then be irrelevant given the (theorized) performed behavior.
What I never see however is equal treatment for both “if” possibilities — the “no” as well as the “yes”. Instead the explicit “If a machine will function like it has phenomenal experience…” seems to be taken as an implicit “Because we will build such machines”. I’d like to see someone on that side at least acknowledge the possibility that it may not be possible to build something that functions like it’s in pain without the pain existing in itself, or being independently caused by means of some sort of pain creating process that the brain clearly has. This is to say that the pain might incite the function (as should be expected in a causal world) rather than the presumption that the function will incite the pain in an after the fact way.
Because the status quo is powerful enough to ignore the logical possibility it will be more difficult for causal explanations like McFadden’s to be experimentally tested. Difficult… though science should find a way regardless. Then once the brain physics of phenomenal experience becomes verified in enough ways, things should finally begin to make sense. Thus no Chinese rooms, China brains, USA consciousness, or thumb pain when the right inscribed sheets of paper are properly converted into another set of inscribed sheets of paper.
Assumption #1: Consciousness is, in large part, a function of capacity.
A #2: By collecting and wiring together all of the world’s computing power, we would have enough capacity to replicate at least one human mind.
Question #1: Would there need to be some sort of metamorphosis required to transition this digital mind to a conscious one? That is, I’ve read that human babies are not considered conscious and only become so after a year or so of existence and interaction. Would this Digi-Mind need to undergo some similar discovery?
Q #2: We might assume that a human infant’s sensory inputs contribute to or trigger this shift into consciousness. Denied all input, would a human child develop consciousness if kept alive through supplemental means?
Q #3: The corollary being, would this D-Mind require similar, artificial inputs to allow it to recognize its place in the Universe?
A lot depends on how we define “consciousness” and what we think is really necessary to meet that definition.
For infant consciousness, is perceiving the environment and bodily-self sufficient? If so, then newborns seem conscious. Or do we need some degree of imaginative deliberation? If so, that seems really limited in newborns, but becomes far more developed even a month or two in. Or do we need full introspection? That would get us to that 12-18 month threshold.
But I’m not sure how much of that relates to sensory experiences vs just delayed brain development. Human babies, by the standards of the animal kingdom, are all born premature. On the other hand, biological development always involves interaction with the environment, even in the womb.
If a human baby was somehow kept isolated in a sensory deprived manner from the moment of conception, would it ever be conscious in any way we can imagine? If so, it seems like it would be a desolate and desperately impoverished consciousness.
In terms of reproducing consciousness, is fidelity to the organization of the brain down to the protein level necessary? The molecular level? Atomic? If so, it’s not clear to me that we have that computing capacity yet. Although nature does it, so in principle we should be able to someday as well.
On the other hand, if just the functionality is sufficient, then some neuroscientists think it might be possible to do it with a lot of the supercomputing clusters available today. The trick is understanding that functionality well enough to reproduce it, an understanding we’re still a long way from.
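Just to make the scale concrete, here’s a very rough back-of-envelope sketch of what a functional-level (spiking neuron) estimate might look like. Every number in it is a ballpark assumption on my part (neuron and synapse counts, firing rates, cost per synaptic event), not an established requirement, so treat it as nothing more than an order-of-magnitude exercise.

```python
# Rough, assumption-laden estimate of the compute for a functional-level
# (spiking neuron) brain simulation. All figures are ballpark assumptions.
neurons = 86e9                 # ~86 billion neurons (commonly cited figure)
synapses_per_neuron = 7e3      # ~7,000 synapses per neuron on average (assumed)
avg_firing_rate_hz = 10        # assumed average spike rate
flops_per_synaptic_event = 10  # assumed cost to update one synapse per spike

required_flops = (neurons * synapses_per_neuron *
                  avg_firing_rate_hz * flops_per_synaptic_event)
exascale_flops = 1e18          # rough scale of today's largest supercomputers

print(f"Estimated requirement: {required_flops:.1e} FLOPS")
print(f"Fraction of an exascale machine: {required_flops / exascale_flops:.1%}")
```

On those (very contestable) assumptions it works out to a few percent of an exascale machine, which is why functional-level simulation at least looks conceivably tractable, while molecular or atomic fidelity would blow the estimate up by many orders of magnitude.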
Yo momma!
No, literally. When you say, even if only on behalf of the dualist, that “we only have direct evidence for our own consciousness,” you’re forgetting that language is communal. We learn the meanings of “pain”, “feel”, “hungry” etc etc at our mothers’ knees. The meanings of these words cannot help but be communal.
Of course, you’re free to invent your own concepts that only apply to one person, such as yourself. But why bother?
On the main topic, I’m a pretty standard identity theorist about pain, joy, hunger, etc. But not the generic word “consciousness” because that seems to group so many disparate things together. The *only* plausible candidate to bring all those things under an umbrella seems to be a functional property. Roughly, conscious processes are those that are available to the rational System 2 executive planner of a being to whom some things matter.
Note that standard identity theory in philosophy of mind usually (definitely in my case) comes from an identity-theory in philosophy of language and/or philosophy of science. To wit, if you want to find out what “gold” refers to, look for a property that explains all the classic properties of most or all of the strongly-suspected samples of “gold”. Hey look, it’s atomic number 79!
This approach gets you an answer to the “how microscopically deep do you go” question, possibly subject to some vagueness. The answer is, deep enough to explain what we know about pain, joy, hunger, or whatever it is we are trying to explain. And if there is residual vagueness remaining, embrace it.
On language being communal, I may not be catching your point. If you mean that people refer to their conscious states and feelings with language, and that counts as evidence, I agree. But I’m a functionalist, so for me, evidence of the right functionality is evidence of consciousness. On the other hand, for someone who accepts p-zombies as a valid concept, it doesn’t seem to be sufficient.
I’m actually also a functionalist about pain, joy, hunger, etc, for the same reason you land on functionalism for overall consciousness. Pain in particular is a vast array of disparate processes we group under the category “pain”. I like the tie-in to Kahneman’s System 2, if that’s what you meant.
I do agree identity theory can work if we can establish the right equivalence, where the two sides of the identity can be shown to be interchangeable. So anywhere we can refer to “gold”, we can also refer to element 79. The same relation exists between “water” and H2O.
But it seems to me that affects like pain and hunger are far more complex. There I think the corresponding identity has to be complex processes involving interoceptive and exteroceptive impressions, along with memory and overall bodily states. I might be wrong, but this seems more involved than what most identity theorists have in mind.
So for me, the answer to the “how deep” question is resolved by what level is necessary to reproduce the functionality, the causal structures and their relations to the environment. Although I don’t think there will be simple one-size-fits-all answers. Some processes can probably be modeled at higher levels than others. And some that might involve processes down to molecular interactions in biology might be able to be effectively replaced with simpler mechanisms in technology. But it all requires a much greater understanding than we currently have.
I’m good with accepting vagueness, as long as we admit that’s what we’re doing, and stay open to future clarifications.
When you say hunger is complex, do you mean on the everyday-experience side, or the scientific side? It isn’t necessarily a problem if multiple scientific categories underlie an everyday category. There are two kinds of jade. That’s not a problem. It does not mean that there is no such thing as jade, or that jade is a functional concept, or that only one substance is the real jade and the other is a fool’s mistake (like fool’s gold).
I think “consciousness” is multi-valent in everyday experience. That’s why it’s not antecedently plausible to expect one underlying cause.
I was thinking of the scientific side. Certainly subjectively it seems pretty straightforward. We’re either hungry or not hungry. But the feeling of hunger seems like the conscious awareness of the result of a complex pre-conscious evaluation performed by the brain based on things like blood sugar level, signals from the digestive tract, circadian rhythms, habits, etc. It seems like a causal nexus in and of itself, but one that also overlaps with the one for consciousness.
I forgot that you need two returns to separate a paragraph.
Like the paragraph that should begin with “On the main topic”. And the one with “Note that”.
It actually looks right to me, but let me know if you want it adjusted.
Never mind, it was just WordPress’s notification-sidebar confusing me.
If ingesting Lithium affects our consciousness, why would replacing carbon with silicon not also affect consciousness? Chalmers’ thought experiment has never been very persuasive to me. As you know from my long series on consciousness, it seems obvious to me that the materials matter as to the exact “flavour” or “feels” of our conscious experience. My definition of consciousness is also functional, though, so if you can actually simulate it in other materials, then the new conscious being matters.
It’s always possible to implement functionality in different ways. Lithium has an effect on the brain’s implementation of that functionality. Whether silicon would have an effect in a silicon-based system would depend on how the functionality was implemented with it. But flavor and feelings are higher level descriptions of functionality, functionality that could be reproduced in another way while still meeting all the functional goals of the original system.
Assuming of course that it actually is all about functionality, and that the brain’s implementation doesn’t depend on some unique low level properties. But silicon systems can already do a lot of things brains only used to be able to do, such as recognize faces. So if the low level properties of the system are relevant, it hasn’t become apparent yet. Of course, that could change any time.
I don’t follow that about different “implementations of functionality.” I thought your functionalism was just concerned with the functions that are performed and the information that is processed. How would those change if, for example, I watched a movie while sober, high, drunk, or on lithium? Isn’t the function of the activity the same, and the information that is processed the same? I thought you would expect consciousness to be the same then, although it clearly doesn’t feel the same. I explain these different feelings using the differences in one of Tinbergen’s 4 questions, ie the mechanisms. Is that what you are calling implementation? Can you explain your position a little clearer for me?
Consider a gas powered car, an electric powered one, and a horse drawn cart. All provide transportation, but through very different mechanisms. Each would be affected by certain things, but the things that might affect the horse (such as perhaps lithium), might affect a battery powered car very differently, not to mention a gas powered one.
This is usually called multiple-realizability in the philosophy of mind, the idea that mental states are functional, and so could be implemented by different underlying mechanisms.
https://plato.stanford.edu/entries/multiple-realizability/
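For anyone who thinks better in code, here’s a toy sketch of the same idea. The classes are invented purely for the car/horse analogy above; the point is just that one functional role can have several very different realizations.

```python
# Toy illustration of multiple realizability: one functional role
# ("provide transportation"), three very different underlying mechanisms.
# All class names here are made up for the analogy.
from abc import ABC, abstractmethod

class Transport(ABC):
    """The functional role: move a passenger from origin to destination."""
    @abstractmethod
    def travel(self, origin: str, destination: str) -> str: ...

class GasCar(Transport):
    def travel(self, origin, destination):
        return f"Combustion engine drives from {origin} to {destination}."

class ElectricCar(Transport):
    def travel(self, origin, destination):
        return f"Battery and motor drive from {origin} to {destination}."

class HorseCart(Transport):
    def travel(self, origin, destination):
        return f"Horse pulls the cart from {origin} to {destination}."

# Anything interacting with a Transport sees only the functional role.
# What lithium does to the horse, or a dead battery to the electric car,
# depends entirely on which realization is underneath.
for vehicle in (GasCar(), ElectricCar(), HorseCart()):
    print(vehicle.travel("home", "work"))
```

The design point is the same one as with mental states: a perturbation that matters to one realization may be irrelevant, or matter in a completely different way, to another.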
Sorry, I get that already. Let me be more specific with my question. Let’s take the definition of consciousness to be the Nagel version “what it is like to be an X.” What I’m saying is that what it is like to be “sober Ed” “drunk Ed” and “medicated Ed” are all different. You said above that Chalmers claims “ the most likely scenario is our consciousness continues the entire time.” But I’ve just demonstrated my consciousness clearly *changes* with changes in its substrate, so who’s to say that change couldn’t lead to oblivion (or hyperclarity I suppose)? If sober Ed can’t remember blacked out Ed, was he conscious the whole time? Chalmers is playing with poor intuitions and the Sorites paradox, which don’t really lead to his conclusion in my opinion.
I agree entirely with you that consciousness in other substrates is possible, but the fact that its flavour is undeniably different means to me that its affect (the wellspring of consciousness according to Solms) could just possibly be so different or even nonexistent as to make consciousness impossible to render there. Who’s to say? But it hasn’t arisen anywhere else yet so there may be a bigger hurdle than Chalmers or you imagine.
Or maybe I’m missing something? ; )
Hey Ed,
Sorry the spam filter is causing you grief. And doubly sorry if you pinged me on Twitter about this one and I missed it. (My notification tab has been flooded recently from someone with a lot more followers tagging me.) You can also hit me on email (see About page). The good news is that usually once I pull someone’s comments out of the spam folder, the local engine learns not to grab future comments from them.
I’m not sure how the new specifications change the picture as I responded to above. It’s worth noting that not all changes are equivalent. The key is whether the changes are functionally relevant. But that’s me as a functionalist saying that.
Is it possible that something about the organic substrate is required for consciousness? Sure. But given that so much of what previously only animals could do is now doable with machines, I’m not seeing the logic of assigning a substantive credence to the mind being different. But maybe I’ll learn different in the future.
No worries Mike! I hadn’t pinged you about this before and hopefully the issue will be sorted now.
(Btw, I once got tagged in a comment from Sam Harris about free will and my notifications were shot for days so I have a suspicion I know what you are talking about. Good luck getting through the wave!)
So, I’m not *sure* the changes I’m describing change the overall picture either. But I think it’s pretty clear they *might* change. Or, to throw Chalmers his own curveball back at him, it’s at least *conceivable* that I’ve undermined his arguments, which is all I’m really trying to do. Once undermined even just a bit, then they aren’t persuasive at all since we’re talking hypotheticals here with observations of n=0. Like you, I’m open to new information and remain willing to learn differently in the future.
[pressing “Post Comment” button with bated breath….]
Thanks Ed. Looks like it worked. I’m also going to start checking the spam folder more regularly. Historically, I haven’t because I get hundreds to thousands of spam comments. But most are aimed at older posts, and I just discovered I can sort that view by post, which makes it more feasible to at least check the more recent posts. (Interestingly, the Russian post is now attracting Russian spam.)
I actually never saw Chalmers’ thought experiment as particularly airtight. It’s just exercising intuitions. That can have value, but we always need to keep in mind it’s only telling us about those intuitions. I actually think the best thought experiments are the ones that call those intuitions into question.