Neuroscientist Kenneth Miller has an interesting post at the New York Times discussing the feasibility of mind uploading:
I am a theoretical neuroscientist. I study models of brain circuits, precisely the sort of models that would be needed to try to reconstruct or emulate a functioning brain from a detailed knowledge of its structure. I don’t in principle see any reason that what I’ve described could not someday, in the very far future, be achieved (though it’s an active field of philosophical debate). But to accomplish this, these future scientists would need to know details of staggering complexity about the brain’s structure, details quite likely far beyond what any method today could preserve in a dead brain.
This fits with what I’ve read from other neuroscientists. It starts with the assumption that the mind is a physical system that exists in this universe, and operates according to the laws of nature. We have good reasons for making this assumption. All of the evidence points toward the mind being entirely in the brain. None of it requires any ghostly aspect of it as an explanation. (Of course, substance dualists will insist that the evidence doesn’t rule it out, which is true, but Occam’s razor does seem to.)
Given this assumption, there doesn’t seem any reason, in principle, that a human mind couldn’t be uploaded, or copied in some other manner. The question is, how likely is it to happen in the near future? Again, Miller gives a response similar to that of most neuroscientists:
Neuroscience is progressing rapidly, but the distance to go in understanding brain function is enormous. It will almost certainly be a very long time before we can hope to preserve a brain in sufficient detail and for sufficient time that some civilization much farther in the future, perhaps thousands or even millions of years from now, might have the technological capacity to “upload” and recreate that individual’s mind.
Most neuroscientists I’ve read aren’t quite that conservative, seeing it as possible that we might accomplish it in a century or two. Thousands of years, or even millions, seems extremely pessimistic. Of course, predictions about the future are rarely worth the ink they’re written in (or, these days, the digital storage they occupy), and if it is possible to achieve in the near future, it’s unlikely to be accomplished by anyone who has ruled out the possibility.
One thing that Miller doesn’t discuss is that we’ll likely need a thorough understanding of natural minds, and how they arise from brains, before we can build an artificial one: a general artificial intelligence. Certainly computing power will continue to increase, and systems will become increasingly sophisticated, but I don’t think they’ll achieve a state where we might be tempted to refer to them as a mind until we understand enough about organic minds to recreate them.
All of which I see as reasons to be skeptical of any kind of hard takeoff singularity event as Kurzweil and similarly minded futurists often predict. Eventually mind uploading or copying may be possible, but it’s likely centuries in the future. And Miller seems to throw cold water on the idea of modern cryonics, having your brain preserved in some manner so that your mind could be uploaded when the technology develops, although I’m sure a cryonics enthusiast would point out that we don’t know enough to categorically say it won’t work.
For me, contemplating mind uploading is like thinking about ways we might use to get to the stars. I have little hope I’ll ever get to do it, but it’s still fun to speculate.
80 thoughts on “The feasibility of mind uploading”
I’m leaning toward the thousands of years. Making a high-fidelity copy of a brain is a long shot, with a hundred billion neurons, a thousand synapses per neuron, and the supporting tissue. Digital hardware as we know it won’t be able to hold up. The brain is asynchronous, insanely parallel, and communicates in analog. Its neurons are plastic and can arrange themselves however best suits them. Much sooner, on the order of 50-100 years I think, will be analog computer brains that are trained rather than programmed.
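The raw scale those round figures imply can be tallied directly. A back-of-envelope sketch; the neuron and synapse counts are rough consensus estimates, not measurements:

```python
# Back-of-envelope tally of the synapse count implied by the
# round figures above (rough estimates, not measurements).
neurons = 100e9            # ~10^11 neurons
synapses_per_neuron = 1e3  # ~10^3 synapses per neuron
total_synapses = neurons * synapses_per_neuron

print(f"{total_synapses:.0e} synapses")  # prints "1e+14 synapses"
```

A hundred trillion synapses, before even counting the supporting tissue, is the copying problem in one number.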
Could be. I lean toward shorter periods because we seem to have a good grasp of the physics. It’s a matter of precision rather than titanic energies.
But a lot depends on just how standard the functionality of each particular neuron and synapse is, versus how unique it might be. If every one of them is a unique snowflake whose unique molecular dynamics have to be reproduced to accurately run a mind, then it will definitely be longer. On the other hand, if there is any ability to abstract the lower layers, as happens in computer technology, then it could be sooner, but “sooner” might still be a century or more away. My sense from neuroscientists is that neurons and synapses of the same type (there are several types) function in a roughly standard way, but that might turn out to be hopelessly naive.
In addition to the trillions of synapses, there is another level of complexity: each single neuron may itself be a system of multiple, perhaps numerous, quantum processors. A neuron appears to be stuffed with microtubules bearing quantum-critical proteins that switch on and off. Since I can’t really understand the complexity involved, I am agnostic about our future success at emulating a brain.
See Hameroff and Penrose.
I’m not familiar with Hameroff, but from what I’ve read about Penrose’s theories, they seem more geared to explain why things are the way he wants them to be rather than what is actually observed. I’m afraid Penrose is one of my chief examples of why we shouldn’t listen to physicists on consciousness. I haven’t read any reputable neuroscientist give credence to exotic quantum theories of consciousness. (Quantum physics does factor into consciousness of course, just as it factors into making a cup of coffee. It’s the exotic quantum physics that are questionable.)
A lot of people have thought Penrose a bit of a kook regarding his “quantum activity in the microtubules of the brain,” but his work with Hameroff turns out to be worth a look. It speaks to the complexity of how neurons actually do work, and down at the level they’re talking about, quantum effects would be part of the picture.
They’re still a little fringe, but it wouldn’t be the first time “kooks” have been right. It’s possible that a working theory of mind does require detail this fine.
Warning: this is going to be me the skeptic talking. 🙂
“Kooks,” defined as people with extraordinary notions that fail to convince a substantial portion of the experts that they are right, are so seldom validated that I feel safe ignoring them. Indeed, the historical examples often thrown out (Galileo, Pasteur, etc.) are misleading, since those historical figures did convince substantial portions of their field within a few years of putting forth their ideas.
That’s a fair point, and I agree other than on “ignoring” them, although you might have meant ‘after taking a look’ which is all I’m suggesting — often a look immediately identifies an idea as, shall we say, incredibly unlikely. 🙂
In this case, it at least gave me an idea of just how complex neuron architecture is (there really are microtubules, and they sound important to cell function). As Miller says in his article, neurons are “one of the most complicated [molecular machines] known in biology.”
I assumed Penrose was just blue-skying; he mentions it as a general idea in his book, The Emperor’s New Mind, back in 1989. The work he’s done with Hameroff takes it much further, and I found it interesting to look into.
Certainly something that operates on the very small time and size scales that microtubules do could account for quantum randomness directly entering brain function. Kind of exciting if it does!
One of the problems with looking into every extraordinary claim is that if I looked into every one that everyone tells me I should, I wouldn’t be able to do anything else. At some point, I have to trust the experts in the relevant field to evaluate a proposition.
I learned this lesson the hard way when I was a teenager reading Chariots of the Gods. It seemed obvious to me that von Däniken had it all figured out and that archaeologists were just being stubborn. Later I finally read some actual archaeology and learned that von Däniken simply didn’t know what he was talking about.
But that was only evident with plenty of reading, which took a lot of time. It’s easier to wait until someone manages to sway a sizable portion of the experts before investing time in their ideas.
Sure, it depends on your interests.
(In my case, I’m very interested in theories of consciousness and particularly interested in ideas about how mind could be non-deterministic, so that’s why I spent a bit of time looking into it.)
I agree with your assessment. The article left off several lines of thought, mainly stemming from the idea that recreating a mind and a self could be a looser affair than trying to recreate exact brain structure. A lot of videos and notes about your self, along with an understanding of general brain processes and AI techniques, may allow for the recreation of a general identity that believes itself to be your self; that is, one that feels like it is you waking up. Also, if you wake up in the morning and generally everything is the same except for some slightly different memories or even dispositions, well, it will not be that different from how we actually are. Though that may go against identity purists or strike discord with some of our intuitions.
Good points. I agree that ultimately, whether or not we accept a copied mind as the same person will be a philosophical decision. I think uploaded minds will be inescapably different from the original organic version, but I’m not sure those differences will be enough to make people reject their grandmother post-upload.
Very interesting stuff. Millions of years seems a bit much.
I too wonder about the identity question that you and Lyndon are talking about. Some people seem to talk about mind uploading as if it is the answer to living eternally. But even if the uploaded brain could exist forever I don’t think it would be the person who was copied experiencing existence – it would be a copy of that person experiencing it.
Hey Howie. Good hearing from you.
Of course, one counter to the copy being the one doing the experiencing is that I am an imperfect copy of myself from 10 years ago. Except for some of my bone structure, every atom has been replaced, particularly in my brain. There is a continuity between the me of ten years ago and the me of today, although it’s interrupted by periods of nonconsciousness (sleep, being anesthetized a few times, etc).
For the uploaded mind, it will subjectively feel like it had continuity with the original mind, with just an interruption in consciousness while it was uploaded. If the original is dead (the first uploads will likely have to be destructive), is the uploaded version any less the original than I am the same me of ten years ago?
Yeah, that’s weird to think about. If there are 10 of you walking around do you then experience all 10 lives at the same time? Talk about sensory overload. 🙂
Actually, I think my conceptualization of “self” is probably all wrong.
The interesting thing to think about is, if there are all those copies of a person running around, who controls the checking account? Who is married to the original’s spouse? What happens if those clones grow apart and start disagreeing about things?
I think “self” is very much a philosophical concept. I’m not sure there is a right or wrong conception of it, just ones with varying degrees of usefulness. At least that’s what my current self thinks.
Hey Mike. Your reply cracked me up – all the attorneys will be ecstatic about new ground to cover in their field. 😉
This topic is a very interesting one to me. My current thoughts relate to people’s desire to live eternally. I personally think living eternally would likely be an awful curse, but there is an aspect of the desire that I can relate to: curiosity about what the future will be like. My thought is that somehow that curiosity won’t get satisfied if the “original” person dies, even if copies are made. This becomes a bit confusing for me to think through, though, because of your point about our present self being a copy of our self from ten years ago. But as you also mentioned, there is some kind of “continuity” of who we are as thinking individuals (perhaps our connectome, or what have you), which I would think a copy of our self doesn’t help to satisfy on this particular question of curiosity. What are your own thoughts on this?
Hey Howie. Obviously if I get uploaded, and the upload isn’t destructive to the original, then the original me never gets to experience the future. I could see the original me being pretty depressed about that.
That depression might be substantially ameliorated if the different versions of me can synchronize memories, so that my sense of self expands to fill both versions. Then I might regard the biological me as just where I am currently, and subjectively feel like I will wake up later in the uploaded instance.
That said, I’m not sure how possible synching memories with an organic brain is going to be. Organic brains don’t come with data ports or subsystems designed to import memories. But if we understand well enough how the mind and memory work, it might be doable with technological implants wired in to the brain’s memory recall systems.
Of course, this issue doesn’t exist if the upload process is destructive. Then there’s no original me around to be depressed, just the uploaded me with memories of having been the original me. If the upload happens near the end of the original me’s natural life, it seems like a winning scenario, with nothing really for the original me to lose.
Assuming you could copy your “consciousness” from your brain, like from computer to computer, it would be a copy. You’d still be there. What would be the point, for you?
Hi henryomand. I think the first uploads will almost certainly have to be destructive, so there’s unlikely to still be an original you there. Subjectively, you’d be in your body before the procedure, then afterward in a virtual reality environment, robot body, cloned body, etc.
But if someone figured out a way for uploads to be nondestructive, then yeah, the original you might feel left behind, unless someone also figured out a way to synch memories between the original and uploaded, so that the original’s sense of self grew to incorporate both versions. But even without that, if the original you were close to death, wouldn’t you like the idea of a copy of you continuing?
Thanks for posting about this, it’s a really thought-provoking thing. The proponents of transhumanism propose that I am a set of data. That has to be true in order for a copy to be made. Thus, I can get an idea of what would happen if their supposition is true by imagining myself as a really big file, or set of files. When I “copy” a file from one place or machine to another, the original is still there, and unchanged.
I think some people are confused or misled by the “move” function on their computers. In reality there are two processes involved: copy followed by delete. They only think that the file was “moved” because they don’t see the delete happening. They imagine that they can “transfer” their consciousness, because they can “transfer” files. The copied file is entirely different, which is observable because the original file is still there. Thus, a copy of my consciousness, assuming such a thing is possible, can’t be me, I will still be there and when I am deleted, or die, I still die.
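The copy-then-delete mechanics described above can be demonstrated directly. A minimal sketch (the file names are purely illustrative; nothing here is specific to minds, of course):

```python
# Demonstrates that a "move" is really copy-then-delete:
# after the copy step, the original file still exists, unchanged.
import os
import shutil
import tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "mind.dat")  # illustrative name
dst = os.path.join(dst_dir, "mind.dat")

with open(src, "w") as f:
    f.write("the original")

shutil.copy2(src, dst)           # step 1: copy
assert os.path.exists(src)       # the original is still there...
assert open(dst).read() == open(src).read()  # ...and the copy is identical

os.remove(src)                   # step 2: delete; only now is it "moved"
assert not os.path.exists(src)
```

This is in fact how `shutil.move` behaves when source and destination are on different filesystems: the "transfer" is a copy followed by a delete, exactly as the comment argues.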
The copy, even if exact, is not me. I would not “wake up” anywhere. This is apparent because until deleted, I would still be where I was before the copy process happened. It’s all interesting stuff, but I think the transhumanists are being deluded into supporting a technology with false promises. Most of those wealthy old men hoping for a new lease of life would not fund it if they really understood how a “move” process works.
I think that those who obviously do understand it (e.g. Kurzweil) are not telling their funders the whole story. When you have a really inquisitive mind, and all the people with loads of money are dull, I guess you have to mislead them a little! (and thus yourself, if you’re any good at it?)
If I was close to death, a copy would be no more relevant to me than any other random entity. What matters most is my family and friends. If I had no friends and all my family hated me, I might be interested in a copy. But then it would make more sense, with the prospect of death as my guide, to use whatever there is of this life to ensure that I’m not in that situation when death is close, rather than trying to cheat death.
Before anything like that could be a viable thing for me to expend any energy, time or money on, I’d need to be convinced that it was a transfer, not a copy. Still it’s all interesting stuff because it brings up questions like “what is consciousness” and so on, and I think that’s what the researchers are really interested in. They just use life extension to get the stupid funders to fund their explorations.
I wish they wouldn’t talk about transferring consciousness before they even know what it is. That just seems very unscientific to me, so even if I was incredibly rich, with no friends and all my family hated me, I’d probably not fund “scientists” who talk about transferring something when they can’t even define it yet. If it was my passion, I’d rather fund the ones who are still trying to figure out what it is, and are willing to admit that they don’t know.
As Vaclav Havel said, “Embrace the company of those that seek the truth, and run from those who have found it”.
Henry, your reaction to this is a common one. All I’ll say is that it’s completely possible to understand everything you said, and disagree. I disagree that a copied version of me wouldn’t be me, but it all comes down to how we define “me”, “self”, etc. These are philosophical issues, which means it’s pointless to talk about which position on them is right or wrong. It’s also a bad habit to assume that people who disagree with you on these matters are deluded or shysters. Some may be, but others may simply disagree philosophically.
That said, if a loved one had just died, and I suddenly found myself communicating with an uploaded version of them, I would accept that uploaded version as them. Could you imagine how that copy might feel if I told them they weren’t real? You might argue that their feelings wouldn’t be real, but they would be real to them.
Can I throw a basic question at you Mike, rather than pretending to have a clue about this wild idea? The discussion seems to centre upon the central processor of the brain organ, as if that were all (ha!) that needed replicating. And yet consciousness is a phenomenon integrated with things outside the brain, being always dependent upon I/O ports and other organs. It also always attends with feelings (see: The Emperor’s New Mind / ‘Ultronic’ computer). So, my question is, wouldn’t a brain replica with no I/O ports and ancillary sensors just be something like a closed interactive system, only capable of juggling old data; in other words, not a very useful or innovative thing?
Hariod, I think that’s an excellent question. A key aspect of uploading is wiring the mind into something to give it inputs and allow it to have outputs. Simply to upload a mind and leave it isolated strikes me as profoundly cruel, and I’m not even sure a mind could function without those inputs and outputs. After all, the mind evolved at the intersection of an organism’s sensory perceptions and action initiation for a reason.
The mind could be wired into a virtual environment, a robot body, or (far down the road) even into a new engineered biological body. Of course, wiring it into anything is going to be enormously complicated. Like all aspects of uploading, we’re going to have to have a thorough understanding of how all of this stuff works and interacts, which is why centuries in the future still seems like the safe bet to me.
Thank you Mike. I wonder, would it be ‘profoundly cruel’ given that this isolated replicant brain had no nervous system and hence no feelings? That was the thing with Penrose’s ‘Ultronic’ machine of course.
It seems to me that what’s really under discussion here is replicating the entire human organism, not just the brain – that would be a nonsensical (in a very real sense) cranialist conception would it not?
Hariod, your first paragraph is an interesting question. If the whole brain is being emulated, that comes with a lot of nervous system components, although the peripheral nervous system wouldn’t be there. I’m not sure anyone knows the answer to this question yet.
I don’t know if I’d say that we’re talking about replicating the entire organism. (Although there are some who posit that our sense of self is so tied to our body that our minds couldn’t function without it. I don’t agree, but again, it’s not something anyone really knows yet.) The simulation of the rest of the body wouldn’t have to represent every cell throughout; it could probably get by with functional equivalents.
Of course, it may also be possible to get by with functional equivalents for the brain, but we’d have to thoroughly understand what those functional equivalents were replacing in order for us to have faith that they result in effectively the same person.
Thank you Mike. I imagine the sense of self would not be critical, and that memory and the representational engram of consciousness alone would suffice in its place – the ‘self’ in any case being no more than a narrative construct sustained in mentation, memory recall and ongoing sensory input – I know many would disagree, but it has no ultimate instantiation or necessity; I think it’s no more than an evolutionary artefact. What would be necessary, I feel sure, is a proprioceptive sense, which for me would be essential in creating the feedback loop between an idea, its expression in motor action, and the subsequent confirmation back to the central processor.
The question of what would be the minimum environment for a human mind is an interesting one. I suspect that, due to the evolutionary history of evolving as a hominid brain, we’d be most comfortable in a humanoid or simulated humanoid body, but it may be that the mind is far more adaptable than we might think.
Copying an entire mind would certainly be a tall order. There are steps that could be taken towards this goal. The first task would be to create synthetic brainlike computing power that could simulate simple nervous systems. The second would be to grow this into an AI. The third would be to wirelessly connect brainlike computing power to an existing brain, so that it could provide additional thinking and memory for a living person. The fourth step, of copying an existing brain, may not actually be necessary, if step 3 is sufficiently advanced.
I would think we could reach step 3 within decades.
I think for the result to be the same person, step 3 would need to reproduce the core functionality of the brain. Instead of upload by copying the exact architecture of the brain, it would be upload by reproducing its functionality.
Now, I personally think that’s possible. It may in fact be the path that uploading will need to take. (Particularly if Moore’s Law peters out too soon.) Although it will likely increase skepticism that it’s actually the same person. Myself, I’m skeptical that we’ll be able to do it without a comprehensive understanding of how the original works. Which I think brings us back to our timeline of understanding the brain.
Just to be clear, my proposed step 3 is to expand and augment the capability of the brain, not to copy any existing functionality.
It sounds like what you’re saying is that our sense of self would expand to include the enhancements, so that maybe we wouldn’t miss the original brain when it’s gone. But I’m not sure how that’s different from removing the primitive reptilian portions of my current brain. I’m much more than those evolutionarily ancient modules, but without them, I’m just enhanced capabilities with no motivating core. Even if I could survive such a removal, my friends and family wouldn’t recognize the same person as being there anymore, at least unless the original functionality were reproduced.
Ah yes, you are right. We are not just data storage and pattern matching. Maybe (pure speculation) those primitive parts are relatively generic, and one person’s reptilian brain is much like another’s? I imagine you could swap your visual cortex for an “off-the-shelf” visual cortex without it significantly altering your identity.
Or, conversely, maybe subtle differences in the reptilian brain drive our personalities and make us what we are? Who knows?
Hard to say about the reptilian brain in particular; how distinct are the personalities of actual reptiles? I agree that it does seem like we could replace the visual cortex with equivalent functionality and still have the person be the same person, and maybe even the brainstem and other lower-level autonomous processing, at least unless we discovered they had some as-yet-undiscovered role in personality.
But for the human brain overall, we do know that we are not born blank slates, that a big chunk of our personalities are innate. If you took my innate personality and gave it your experiences, I would not be you, just as you wouldn’t be me if we reversed it.
Even if brains were generic, I think we’d still have to understand them thoroughly to replace their functionality. And them not being generic means we have to understand the individual person’s brain thoroughly, if we want to replace it.
Yes, yes, yes, yes, YES! But also bummer, ’cause that guy kinda stole my thunder! These math posts I’ve been writing are leading up to a discussion very much along these lines. Specifically, two things: The astonishing complexity and staggering size of the connectome. The likelihood of a software simulation experiencing consciousness (another question that is relevant here).
As Miller points out, we don’t know how much detail is required to model a single synapse usefully enough to model consciousness. If you assume 64 bits (about 18 quintillion states) is enough to model one synapse, you’re talking roughly 8 petabytes to model a mind. It takes another 8 petabytes to model the connectome — the map of neuron interconnections. We’re up to 16 petabytes, and that’s assuming 64 bits is sufficient to model one synapse.
Here’s a fun fact: At 100 gigabytes per second, it takes nearly three hours (about 2 hours 47 minutes) to transfer one petabyte.
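Those figures can be reproduced with quick arithmetic. Note the synapse count used here (10^15, the high end of published estimates; other comments in this thread use 10^14) is an assumption chosen to match the ~8 PB figure above, and the 8 bytes per synapse follows from the 64-bit assumption:

```python
# Rough storage and transfer arithmetic for a synapse-level brain model.
# Assumptions (not settled science): 64 bits (8 bytes) of state per
# synapse, and ~1e15 synapses -- the high end of published estimates,
# which is what yields the ~8 PB figure above.
bytes_per_synapse = 8
synapses = 1e15

state_bytes = bytes_per_synapse * synapses          # 8e15 bytes = 8 PB
connectome_bytes = state_bytes                      # same again for the wiring map
total_pb = (state_bytes + connectome_bytes) / 1e15  # 16.0 petabytes

# Transfer time for one petabyte at 100 GB/s:
seconds = 1e15 / 100e9    # 10,000 seconds
hours = seconds / 3600    # ~2.78 hours, i.e. about 2:47
```

With the lower 10^14 synapse estimate, the totals drop by a factor of ten; either way, the transfer-time bottleneck scales linearly with however many bits a synapse actually needs.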
And here’s the thing: We don’t know if a software model of all this complexity supports consciousness. A software model of a laser, no matter how detailed at any level, cannot generate laser light. Laser light supervenes on specific physical circumstances. What if consciousness is like that?
Note that this is not a dualistic idea, any more than lasers are. Lasers are fully based on understood physical phenomena. But software models of lasers don’t lase.
You could even build a model that looks like it’s lasing. Parts of your model would show states that indicate, ‘If this was real, there would be laser light here!’ But there would be no actual laser light. Your model couldn’t do things lasers do.
What if consciousness is like lasers?
It’s pretty easy to imagine how complex it might be to copy the information in a brain. Many uploading skeptics talk about how much information would be necessary to capture the state of every molecule, or even every atom in the brain. And, of course, if the mind requires a faithful adherence to exotic quantum physics, then all bets are off.
If the mind can be modeled on neurons and synapses, and I think there are good reasons to hope it can, then things begin to look a little more hopeful. Yes, petabytes of information (if that’s what it takes) seem like a lot right now, but it might not be in a few years. At least unless Moore’s Law hits a brick wall in the next few years. Even if it does, the fact that it happens in nature means that, eventually, we should be able to do it. (Although virtual worlds might be ruled out.)
But it might be that once we start understanding how the mind actually emerges from the brain, that we can abstract the lower level processing and build modules that perform the same function as modules in the brain. (Such as the visual cortex, the hippocampus, etc.) If so, then uploading starts to look easier. Of course, the further up the chain this continues, the more it invites skepticism that we still have the original person. But if uploaded-grandma’s behavior is indistinguishable from the original, I suspect her relatives would accept her.
On the laser analogy, I think we have to think about how each of these things manifests in the physical world. A laser beam is generated by laser equipment. You can replace the equipment with different equipment, that could produce an identical laser beam. Consciousness is produced by a physical brain. If consciousness is like a laser, then provided you copy all the relevant data, why can’t you replace the equipment that generates it?
“If consciousness is like a laser, then provided you copy all the relevant data, why can’t you replace the equipment that generates it?”
IF mind is a physical phenomenon then there’s no reason we can’t build an artificial brain machine. We know nature does it, so it’s just an engineering problem.
But it’s not a matter of copying data. It’s a matter of replicating the physical system that generates consciousness (or laser light). That might involve actual physical connectivity (rather than a software model of such). It might involve the timing aspects of how the brain works — neurons use pulse timing as part of brain function. Maybe, as with lasers, size, density, frequency, confinement, or spacing, is crucial.
Replicating an existing mind in a brain machine is another level of complexity, since we’re talking about mapping an existing incredibly complex system onto another. The total connectome that comprises you took many decades to form in real time. Transferring that to another system is a formidable task.
[We’re talking about whether mind is, as I know you believe, information processing (algorithmic), or (as I’m coming to believe) based on crucial physical processes that can be modeled in software, but those models can never achieve consciousness any more than a software model of a laser can lase.]
“Yes, petabytes of information (if that’s what it takes) seem like a lot right now, but it might not be in a few years.”
I agree. (As you’ll see, my estimate is based on conservative assumptions, so the actual requirements are almost certainly higher. Possibly much higher depending on how much is required to model a synapse — that most complicated biological machine.)
The key point is whether a software model can “lase” (so to speak).
“[T]he fact that it happens in nature means that, eventually, we should be able to do it.”
Exactly. Unless some kind of dualism or spirituality turns out to be a real thing.
“But it might be that once we start understanding how the mind actually emerges from the brain, that we can abstract the lower level processing and build modules that perform the same function as modules in the brain.”
Ha! I’m looking forward to buying a Visual Cortex™ from Radio Shack®! 😀
It seems likely to me we’ll build a brain machine, or simulate a connectome in software, on our way to uncovering how the mind works. Much of that current research is exactly about that. I think in building artificial minds we’ll begin to understand how ours work.
Honestly, I think uploading existing minds is the least likely goal of AI to be achieved. It requires mind be algorithmic (a completely open question with most evidence against it). It requires a way to read and transfer an existing mind to a machine. If mind is physical but not algorithmic, then it requires somehow imposing a mind’s connectome on a machine matrix.
All hugely formidable problems that require most of AI plus the challenges of scanning and transfer! “Centuries” (if it’s possible at all) might be an accurate estimate.
I think we agree that uploading would require far more knowledge than we currently have. I’m optimistic over the long term because I see it as evident that the mind exists as a system in this universe, and can therefore be studied, and eventually reproduced. I’m pessimistic over the short term (20 years) because we still have a lot to learn.
“It requires mind be algorithmic (a completely open question with most evidence against it).”
What evidence would you say goes against it? Most of the cognitive neuroscience I’ve read seems to have the computational theory of mind as its underlying assumption.
But it has living brains as its only example of fully functioning minds, and we don’t have any real understanding of how brain generates mind.
Meanwhile, there is no connection whatsoever — no shred of a sign — of “experience” or “consciousness” in what we understand of calculation — an area we’ve studied for centuries.
The laser analogy shows that some real-world phenomena supervene on physical circumstance. Given that mind is (we’ll suppose) a physical process, that it supervenes on something physical seems possible, if not likely.
I would say Occam’s razor supports this. A living cell is not a living cell if modeled in software. Why would a brain be?
I’m not quite sure what you’re trying to say in the first paragraph, but neuroscientists also study dead brains. What, other than brains, should we study to understand brains?
On “no shred of a sign”, it’s what a lot of mystics, spiritualists, and some philosophers want to be true, but neuroscience is steadily gathering new insights. Yes, it’s very early days, but to say we’re completely clueless is just not an accurate statement. It hasn’t been since the late 19th century. I’ve done many posts on those insights and there are many books on neuroscience that discuss them.
On cells, my response is the same as it was for lasers. Cells are physical systems that interact with their environment in certain ways. Someday we may be able to replace them with nanomachines that perform the same physical interaction, just as we can replace a laser to make the same laser light. Difficult? Unquestionably. Impossible? I’m not sure how anyone can say building a machine that does what already happens in nature is impossible.
“…but neuroscientists also study dead brains.”
Indeed. I said “only example of fully functioning minds” which dead brains… not so much. 🙂
“What, other than brains, should we study to understand brains?”
Exactly. We don’t have access to any others, so our understanding of what else might be possible (or not) is limited.
“On ‘no shred of a sign’, it’s what a lot of mystics, spiritualists, and some philosophers want to be true,…”
Mike, I’m going to ask that you stop using that argument in our discussions. Per some of your own posts, bias is true of everyone (yourself included), so it’s a non-argument and ad hominem to boot. We can’t hand-wave away arguments by saying the other party wants their point to be true. Who doesn’t!
I ask that we stick to the merits and points of arguments and forego characterizing them as biased. Let’s just assume that’s true of us all, okay?
“…but neuroscience is steadily gathering new insights.”
Many insights, I agree. I wasn’t talking about brains or neuroscience, though, I was talking about mathematics — calculation. What I said is that our best understanding of calculation shows no sign whatsoever of “experience” or “consciousness” and I stand by that.
But I know you do believe mind can be calculated, and I’m genuinely not trying to debate that with you here. All I’ve said is that uploading a mind to a software version does require that mind be algorithmic. Consciousness may be; it may not be. It is still very much an open question.
(It’s possible we’re in a situation where we’re seeing ways to model the brain’s behavior — just as we model other systems — but that’s something different from algorithmic consciousness. It’s quite possible such models — just like models of cells or lasers — will simulate function but still not experience consciousness.)
“Someday we may be able to replace [cells] with nanomachines that perform the same physical interaction,…”
Absolutely. Nano-machines which would be physical. Exactly as you said, “Cells are physical systems that interact with their environment in certain ways.” A software model of a cell does not and cannot.
What if brains, likewise, have to be physical? What if the actual physical interconnections are critical? What if the timing and periodicity of signals in the physical mind is critical? What if any of the myriad other ways the brain is a physical system that interacts with its environment (take your pick) turns out to be critical?
Actually, software has physical manifestations (transistor states, magnetic storage, etc) and can interact with the environment. Indeed, it must to be of any use. A software model of a cell, in the right nano-hardware, could interact with the original cell’s environment.
(The physical reification of the software is not relevant. You mentioned just two of the myriad ways bits can exist. The physical nature of the bits has nothing to do with this.)
You’re absolutely right: A machine cell, running on software, could interact with living cells around it. But only through physical manifestations functionally sufficiently similar to the living cell!
The software cannot be the cell. It can model a cell. It can run a cell machine. But it cannot be the cell.
Wyrd, the obvious question is, why couldn’t a software model of a mind work through hardware to provide whatever physical manifestations are necessary?
Software is always going to require hardware to interact with the environment, but then a mind in a biological brain requires a functional body to do it as well.
Naturally software needs I/O. This isn’t about the I/O, this is about the software itself.
The question is: What is the distinction — if any — between a software model of consciousness and the actual physical consciousness found in our brains?
It might be the distinction between a plan of a thing and the actual thing, which is generally considerable.
For a software model of consciousness to work, the plan has to work as well as the actual thing.
“What is the distinction — if any — between a software model of consciousness and the actual physical consciousness found in our brains?”
My point above about transistors and magnetic states is that they’re both physical. The question I perceive you to be asking is whether the mind requires the precise physicality, and only that precise physicality, that it’s currently in. While I agree it’s a possibility, I think you see it as far more likely than I do.
But even if that precise physicality is required, I don’t see how that would stop us, in the far future, from reproducing it. It seems to me that only substance dualism, or the mind requiring some kind of exotic and forever unknowable physics would permanently prevent it.
“My point above about transistors and magnetic states is that they’re both physical.”
I know they are, but they aren’t relevant to this.
“The question I perceive you to be asking is whether the mind requires the precise physicality, and only that precise physicality, that it’s currently in.”
No, that’s not the question I’m asking. The question I’m asking is the one I did ask and which you just quoted: “What is the distinction — if any — between a software model of consciousness and the actual physical consciousness found in our brains?”
Not the “precise physicality” but a physicality. I’ve said all along that an artificial physical system was far more likely to succeed than a software model. (The only way a good physical model would fail, as you say, and as I’ve said before, is if some form of dualism or spirituality turns out to be true.)
Here’s the thing: a software model of a thing is, in our experience, not the thing, but a representation of the thing. Why would a software model of our brains be different?
The premise here is that the mind, unlike pretty much anything else in our world, can be represented as abstract information and still be the same sort of thing it is as a physical instance. The premise supposes there is something very special about minds that allows a software version to function the same as a hardware version.
Wyrd, if a physicality is the question, then why isn’t the fact that software is a physicality relevant? Just about any physical system can be replaced by an alternative physical system that performs the same function. (Artificial hearts, bionic limbs, etc.) What would lead us to conclude that the brain is different?
“Just about any physical system can be replaced by an alternative physical system that performs the same function.”
Agreed. 100%. Note the use of “physical” in both clauses.
“Wyrd, if a physicality is the question, then why isn’t the fact that software is a physicality relevant?”
Because software is not a physicality. It’s an abstraction implemented in various physical ways, and those ways are irrelevant to the software.
Any software can be implemented in circuits, or with pen and paper, or with a clever system of ants running around in glass tubes.
Software is as abstract as math, because software is a form of math.
Your presumption is that mind is equally abstract, that it can — like software — be run on any platform. That could turn out to be true, but…
It would be the first thing we’ve run into in the physical world that isn’t actually physical, but abstract and made — literally — of math.
It would mean that mind is pretty special.
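Wyrd’s substrate point, which is what the Church-Turing thesis formalizes, can be illustrated in a few lines (a sketch of my own, not anything from the thread): Euclid’s GCD algorithm written as ordinary code, and again as a table-driven state machine that could just as well be executed with pen and paper or ants in glass tubes. The algorithm is the same abstract object either way.

```python
# Euclid's algorithm, written directly.
def gcd_direct(a, b):
    while b:
        a, b = b, a % b
    return a

# The same algorithm as an explicit state machine: each step consumes a
# (state, a, b) triple and produces the next one. Nothing about the table
# cares whether the steps are run by a CPU, pen and paper, or ants.
def gcd_machine(a, b):
    state = ("test", a, b)
    while state[0] != "done":
        op, a, b = state
        if op == "test":
            state = ("done", a, b) if b == 0 else ("step", a, b)
        else:  # "step"
            state = ("test", b, a % b)
    return state[1]
```

Both functions compute the same mathematical object; only the physical-looking mechanics differ, which is the sense in which software is abstract.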
I think you’re overstating the abstractness of software.
Consider that the labels “software”, “book”, and “gene” all have dual meanings. The first refers to a pattern of information. The second to the physical instantiation of those patterns. But, make no mistake, these things are physical for every instant of their existence. They come into existence when the first physical copy is created, they flourish as copies proliferate, they may degrade or fork if the copying processes aren’t accurate, and when the very last copy is burned, deleted, or selected out, they cease to exist (for any practical version of “exist”).
An alien from Andromeda has a good chance of discovering the Mandelbrot set, but is profoundly unlikely to write Microsoft Windows or Moby Dick.
I think the only assumption I really make is that it’s possible to record and reproduce the patterns that make up the mind. The reproduction would itself be physical in any implementation.
“I think you’re overstating the abstractness of software.”
I’m really not. A fundamental tenet of computer science is the Church-Turing thesis, which is about the equivalence of different physical instances of computation.
“I think the only assumption I really make is that it’s possible to record and reproduce the patterns that make up the mind.”
You’re making a huge assumption: That those physical patterns work the same way in a software model. Nothing else physical I can think of works that way!
Cars can’t drive over a software model of a bridge, software models of airplanes don’t really fly, a software model of fusion doesn’t fuse, nor does a software model of a laser lase. You go down the list, and software models simulate things, but aren’t those things.
“The reproduction would itself be physical in any implementation.”
But the implementation of the software is hugely different than the implementation of the thing it’s modeling.
The book, for example: while the physical instance of the book looks like a book and can be easily read, the data doesn’t look like one and can’t be! You can’t read the software model of a book!
A better example is the difference between physical sound and a stream of numbers that models that sound. The software model of sound isn’t anything like sound.
You are assuming that mind is a program running on brain machine and that you can copy the program into some other extremely different machine that works on extremely different principles and get the same result.
To me that seems like a really huge assumption.
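Wyrd’s sound example is easy to make concrete (my own sketch, with an arbitrary sample rate and pitch): the software “model” of a pure tone is literally just a list of amplitude numbers. Nothing in the list vibrates; hardware (a speaker) has to turn it back into physical sound.

```python
import math

SAMPLE_RATE = 8000   # samples per second (illustrative)
FREQ = 440.0         # pitch in Hz (concert A)

# The digital "model" of the start of a 440 Hz tone: one amplitude
# value per sample instant. This is all the software version of the
# sound is: numbers.
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(8)]
```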
Any response I make will be a repeat of what I’ve already said. It seems clear neither of us is going to convince the other. I think we’ve reached that agree to disagree stage again. 🙂
I’m not trying to convince you of anything so much as explain why a working software model of human consciousness has some significant requirements.
I’m not quite sure what are we disagreeing on here. I know we have different beliefs about the nature of the mind — I’ve been treating that as a given. (I’m not trying to convince you the mind can’t be algorithmic — I don’t know it isn’t!)
If we’re disagreeing about the abstractness of software, all I can say is talk to someone with background in computer science. (Seriously, you work with programmers, right? Get one who was a CS major and ask them about Church-Turing and what it implies about algorithms.)
If we’re disagreeing about the difference between the physical instance of a thing and the physical instance of a software model of that thing, then color me surprised, but okay. Can you name anywhere they are the same?
(The irony is that I’m working right now on posts that address a lot of this. You suggested not too long ago I write a post about my laser analogy — about what we’ve been discussing here — and the posts I’ve done recently plus the upcoming ones are exactly that.)
Wyrd, I am a programmer, and have been for decades. I was promoted several times for being the best programmer on whatever team I was working on at the time, and spent a significant part of my time over the years mentoring other programmers. So, I feel pretty comfortable that I know what software is.
I won’t return the insult by suggesting you consult with computer engineers, because implying that deep knowledge makes a difference on this simple ontological point is a distraction. Both of us are convinced the other is missing something and repeating the same arguments back and forth is unlikely to change that.
I will check out your posts. I’m behind on my blog reading right now. I’ve been slammed at work due to, ironically enough, a hardware failure caused by a software bug.
Mike, look, I’m not trying to be insulting, really I’m not. I’ve known lots of excellent programmers who still don’t have a strong background in computer science. And I’ve known long-time CS grads who really sucked at programming. Those two things just are not related.
Here’s the thing: I’ve talked with you long enough to have some sense of your apparent knowledge of CS. Your responses here and in other threads don’t appear to indicate a strong background in the science of computers to me.
If I’m wrong, I apologize, but then why aren’t we talking about this in CS terms? You apparently see the C-T thesis quite differently than most. Why exactly? And why do you see a software model as being identical with the thing it models? I ask because from a CS point of view, the answers would be really interesting!
“You apparently see the C-T thesis quite differently than most. Why exactly?”
I’m not going to claim to understand the mathematics of the Church-Turing thesis, but from what I do understand of it, it seems orthogonal, perhaps even antithetical to the point you’ve been trying to make. Maybe I’ll see it differently after reading your posts. (I suggest we save the discussion until then.)
“And why do you see a software model as being identical with the thing it models?”
I don’t. You keep ascribing that to me, but I’ve never said that. If you want to ask me about one of the views I’ve actually espoused (above or otherwise), I’ll be happy to discuss.
“[The Church-Turing thesis] seems orthogonal, perhaps even antithetical to the point you’ve been trying to make.”
Seriously? How so? You can’t make a claim like that without backing it, dude!
“You keep ascribing that to me, but I’ve never said that.”
Then you do agree with what I’ve been saying about the difference?
On Church-Turing, I’d really prefer to discuss it on your post, mainly because I don’t have strong views on it and you might change them with the post itself. You’re welcome to link to it from here if you want.
On the other, I’m really not interested in rehashing the same arguments over and over in an endless loop. If you have something new to add, then cool, but otherwise let’s please put this discussion out of its misery.
I think you two are just talking at cross purposes.
For Mike, a software mind is an algorithm instantiated on some computing device. Any such system would be just as physical as a brain or a nano-machine cell.
For Wyrd, a software mind is just an algorithm in the abstract, perhaps never physically implemented, and so it is rather problematic to attribute consciousness to it.
Wyrd, once you acknowledge that Mike is talking about making minds by using software running on a real hardware substrate then the problem should largely go away. The C-T thesis doesn’t really come into it. While any number of physical systems could implement a particular algorithm, to make it conscious (for Mike) it would still be necessary to construct such a physical system.
That’s not quite my view, however, as I think the whole world is a mathematical structure, and so for me the distinction between software implemented on a physical device and software not implemented on a physical device is something of an illusion.
“I think the whole world is a mathematical structure”
Well under that view, everything is math, so obviously mind would also be. And as such very likely could be simulated by an algorithm.
“Wyrd, once you acknowledge that Mike is talking about making minds by using software running on a real hardware substrate then the problem should largely go away.”
No, not at all, for two reasons. Firstly, of course we’re talking about running mind software on a physical machine. That’s a given, but as I’ve said all along: irrelevant.
Secondly, more importantly, this jumps right over the very distinction I keep trying to make: The presumption that mind is an algorithmic process. If that’s true, as I’ve said all along, then implementing mind as software will almost certainly be possible some day.
What I’ve been pointing to consistently is the presumption that mind is an algorithm. And all I’m saying is that it is a presumption. A guess. A hope, even.
And here’s a crucial question: Why would mind be algorithmic? Nothing else physical is.
“The C-T thesis doesn’t really come into it.”
It does in how it says all computing (algorithmic) devices are essentially the same. The presumption that mind is an algorithm that can be run in a computer means this falls under C-T (as all computers do), that there is an implicit necessity that mind is algorithmic.
I’m just pointing out that we don’t know that’s true and that, in some light, it’s actually a strange thing to think. (Unless you believe everything is math, then it’s an obvious thing to think. 🙂 )
If you like, you can check out my current series of posts, since I’m addressing this subject. Here’s a summary of the upcoming next post (mostly written, but needs polish and pictures):
Algorithms are abstract mathematical expressions. Every algorithm has a Turing Machine and a λ-calculus expression that represents it (the abstract expression of the algorithm). As such, any algorithm is math. To believe human consciousness can be modeled by software is to believe it can be modeled by finite mathematics. Which is what algorithms are.
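That claim can be demonstrated in miniature (my own sketch, using Python lambdas as a stand-in for true λ-calculus terms): factorial written as an ordinary loop, and again as a single expression built only from anonymous functions, with recursion supplied by the Z combinator (the applicative-order form of the Y combinator). Under the Church-Turing thesis, both are the same algorithm.

```python
# Ordinary imperative factorial.
def fact(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# The same function as a lambda-calculus style expression: no names,
# no loops; recursion comes from the Z combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
fact_lambda = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

(Python’s numbers and if-expression stand in for Church encodings, which a purist would also reduce to lambdas.)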
Thanks DM. I appreciate the effort and completely agree. The only thing I’d add is that for software to exist in this universe (as opposed to somewhere in the mathematical multiverse), it would have to be physically instantiated somewhere.
@Mike: “[F]or software to exist in this universe […], it would have to be physically instantiated somewhere.”
You continue to have this amazing blind spot for seeing why [A] that is trivially true, and [B] it’s completely irrelevant.
Right. I just wanted to mention that my view is not quite Mike’s even while I try and defend his a little.
I think there’s something of an equivocation at the root of this objection. Mind qua pattern is not physical but abstract, but mind qua a process occurring in the physical world is physical.
Just like software. Software qua algorithms is abstract. Implemented software qua a physical process running on a computer is physical.
So there are aspects of my physical computer system sitting before me that can be described as algorithmic or abstract. The set of qualities that can together be described as an “operating system” is one such aspect. The computational view of consciousness is that the important aspects of mind are such algorithmic or abstract patterns, but that doesn’t mean that an actual mind living in and interacting with the physical world is not also a physical process. It just means that any other physical process which implemented the same algorithm would have the same kind of mind (just as any other computer system which implemented the same algorithm would have the same operating system).
On this view, there is nothing unique about this property of mind. We see the same thing in patterns of all kinds. It’s the dichotomy between form and substance. My view and Mike’s is that mind is a property of form (like, say, squareness or complexity) and not of substance (like mass or charge). But that is not to say that it is not also physical from another point of view, because on Mike’s view a mind will only exist if it is physically implemented.
So I think you need to understand that Mike is saying that a mind is abstract in some respects but physical in others. It’s a nuanced view and I feel you may be missing some of that.
In your own words, that’s a given but irrelevant. I agree with all that, but I don’t think bringing up the C-T thesis has much bearing on the point we are trying to clarify. I think we all agree with the C-T thesis but we disagree on whether the implications are important here.
Agreed and I believe Mike would agree too. If you think he would not agree, then that seems to point to a useful test of whether you actually are talking at cross-purposes. Mike?
Again, very well said DM.
On your last question, I think we have to make a distinction between perfect modelling versus effective modelling. I think it’s possible, probable actually, that we may not be able to model the analog processing of the brain perfectly. Chaos theory dynamics may put it forever out of our reach. Many upload skeptics reach this conclusion, stop here, and declare the entire endeavor doomed.
But I think perfection is a false standard. My mind this morning isn’t a perfect copy of itself from last night, even less so of itself from a year or decade ago. To be effective, a model only needs to be convincing to its friends and family, as well as to itself, that it’s the same person as the original. This may be a tough goal to achieve, but unlike a perfect model, I can’t see any obstacle in nature to it eventually being doable. (Of course, we never know what we don’t know.)
Whether the copied mind is the original will always be a philosophical debate, involving the problem of other minds and whether soulless philosophical zombies are a coherent concept. I’m pretty sure there will be people who never accept copied minds, no matter the implementation or how convincing they might be.
I think the distinction to draw here is between the practical difficulty of figuring out what the model for a given mind is and the idea that such a model exists. Speaking for myself, I’m not so interested in the practical goal of building such a model, only in the philosophical idea that such a model (a perfect one, even) must exist.
I’m good with that distinction.
Mike, you said in response to Wyrd that “consciousness is produced by a physical brain.” Whilst it’s clearly evident that the sensory representations of consciousness are produced with the brain as processor thereof, how can we be certain that an objectless awareness is? If you talk to people who are highly skilled in introspection (yes, I know, ‘unreliable witness’ and all that), they will confirm that there is an objectless awareness accessible to them, one which is not perceptual, not a re-imaging of memory. Strictly speaking, that state is not consciousness, as it isn’t ‘with knowledge’ of anything; it is just lucidity itself, or pure awareness.
Thanks Hariod. You taught me a new phrase by spurring me to google “objectless awareness.” If I understand the concept, it’s being awake but not thinking about anything, similar, if not identical to the state you’re supposed to get in with meditation, the quieted mind.
But it’s not quite clear to me why we would doubt that this form of awareness takes place in the brain.
“But it’s not quite clear to me why we would doubt that this form of awareness takes place in the brain.”
Equally, the contrarian might say it’s not incontrovertibly clear why we ought necessarily presume that it does take place in the brain. In other words, might it be possible that the illumination of consciousness is not brought about by a localised phenomenon, that the data and objects of consciousness are processed locally, but that the illumination (or knowing) of them, is not? Where is your knowing of these words now; is it in your head, on the screen, both, or could it be non-local, a result of a fundamental property of space (or vice versa)? What exactly is the evidential proof that demonstrates the illumination of knowledge is exclusively localised to the cranium? Is inference enough to constitute the same?
Well, the evidence that impresses me is the case studies of brain-damaged patients going back to the 19th century, which demonstrate that no aspect of mind seems to be immune from changes to the brain’s physical state. I’m further impressed by the evidence of mind-altering drugs, which alter a person’s ability to reason and their moral judgments, in addition to their coordination. Again, not much seems to be immune from this.
Of course, a determined substance dualist can insist that this doesn’t close the case on some part of the mind existing outside of the brain. You can always add assumptions that can’t be disproven. But unnecessary assumptions have historically almost always been wrong, which is what Occam’s razor is all about.
Now that I think of it, I remember reading something a year or two ago where someone put meditating subjects in an fMRI and discovered that meditation had a particular brain state (or collection of brain states). So even the quieted mind has an observable brain state.
Continuing as the contrarian Mike:
Your first paragraph deals with the objects of consciousness, as well as qualia, which are reflective (representational) states. I have already conceded that these are not only correlated to the brain, but are produced or ‘take place in the brain’. When you talk about ‘aspects of mind’, these are synonymous with the aforementioned. When you say “not much seems to be immune from this”, you miss the very immediate fact of the knowingness to which I refer, which does persist throughout all you describe.
Our minds can only think and conceive in terms of space, time and causation. We therefore presume that the knowingness of pure awareness – and the objectless awareness I first referred to – occurs at some specific location and is caused by something we already have knowledge of. It may not be a matter of inside ‘here’ or outside ‘there’, and as there is no evidence of any seat of consciousness, then ought we not remain open to such a possibility? If not, then the onus is placed on those who would claim there is a seat of consciousness, who presume it exists, and yet who cannot produce the evidence.
Yes, to posit this contrarian-to-convention idea is perhaps akin to, say, adding a god that cannot be disproven. In this instance though, it seems you are applying Occam’s razor to your own argument, not to the one I as contrarian am presenting, and which may indeed be simpler than the presumption that brain correlates are identical to consciousness, of which latter conception the scientific community has yet to build an unimpeachable bulwark of evidence. No one knows what consciousness is, only certain things about its functionality.
Hariod, I may not be understanding the contrarian point you’re putting forth, so apologies if my response seems…unresponsive.
Knowingness or objectless awareness in this context sounds like any form of minimal awareness. If so, then certainly any awake mind is going to have it. It seems tautological. But some brains receive enough damage to preclude it. There are patients who have lost any form of consciousness, whose brains only function enough to keep their body’s autonomic functions going (heartbeat, breathing, etc). I can’t see any reason to suppose they retain any knowingness.
Now, you can always insist that people who cannot be observed (even in fMRIs) to be aware are actually still aware in some unknown manner. Such a proposition can’t be proven or disproven. But like a deistic god, it’s redundant to observations. The only reason to hold on to it is because it implies something we want to be true.
The contrarian point is as if to say that given we still do not have even a loose consensus on what consciousness is, neither can we (the scientific and philosophical community) agree on whether it exists at all, and if it does, then where it might exist, then perhaps the questions science and philosophy pose about it are being wrongly put. We looked inside the box (the cranium), and nothing was found other than certain functional correlations.
Objectless awareness should not be considered a form of ‘minimal awareness’ Mike, or if it is, then only in the sense of it being the Tabula Rasa, or state of potential, upon which the objects of consciousness later supervene, in the process temporarily obscuring it. On the contrary, upon reflection after the event, it would be considered a state of full awareness which later becomes attenuated by a ‘collapsing’ or coalescing of attention around sense impressions which gives rise to discrete objects of consciousness in a serial stream of representations. One might think of it as awareness knowing itself as itself, not as an image of itself (a percept) or as an image of a sense perception having occurred ‘just then’ (everyday consciousness).
“I can’t see any reason to suppose they [brain damaged subjects] retain any knowingness”. When we are awakened in the morning by the alarm clock, it is because we had been aware and knowing in sleep immediately prior to its ringing – otherwise it would never have awoken us – right? Memory was not running in sleep, as it is when we are awake, so the knowing passes away continuously and absolutely. What we regard as ‘perceiving this now’, is actually a re-presentation of what occurred ‘just then’. It is memory re-cognising a past event, but which we take to be a real-time apprehending. As it is a re-presentation, it gains traction in memory and can be recalled as part of a sequential series later. So, the patients you describe as having lost all consciousness, have not necessarily lost this same awareness without memory, or knowingness. To presuppose that they have would seem counter to what is already demonstrated by the healthy subject who is asleep but whose knowingness apprehends the alarm clock – even though they do not remember their knowingness prior to awakening. This is an account of what you called being “aware in some unknown manner”, only it is known Mike, unless my account as above is incorrect. Is it?
Hariod, I would say not to knock the functional correlations. Slowly but steadily they are building up, and could eventually add up to the whole objective picture. (They will never add up to subjective experience, but that’s not a failing of science, simply an inescapable difference between observing the subject and being the subject. No amount of observing a bear will tell us what it’s like to be a bear.)
On the rest, I think I need to make sure I have handles on objectless awareness and knowingness. Objectless awareness sounds like awareness itself prior to any perceptions or memory recall. Knowingness sounds like maybe the ability or capacity for knowledge. They sound like the raw engines of consciousness. Does it sound like I have the concepts correct?
I must of course agree Mike, the correlations may eventually turn out to be the sole causes of consciousness; or they may not. In the latter case, a hopelessly poor analogy may be as if we were to have looked for the causal origins of our radio program within the circuit boards of our Panasonic. And yes, what is it like to be a bear? Or a bat.
“Raw engine of consciousness”: It doesn’t quite feel right, in that it sounds like something separate from consciousness – a ‘ground state’ comes a little closer. I apologise for the fact that it’s incredibly difficult to convey a sense of this unless it is experienced, but might repeat what I said previously:
“One might think of it as awareness knowing itself as itself, not as an image of itself (a percept), or as an image of a sense perception having occurred ‘just then’ (everyday consciousness)”. So, there is a knowingness to it, but not in the sense of a subject/object relationship. It is entirely unreflective, un-reflexive, un-representational. It is just a knowing lucidity, but without the idea ‘I am being lucid’, or ‘my mind is lucid’, or ‘I am aware of lucidity’. The only knowing is of aware lucidity itself as itself – tricky to grasp, I know.
But we must wrap this up; I’m conscious of sapping your reserves of patience and time my learned friend! One frustration I have in all this chatter about uploading conscious minds and so forth, is that a stream of consciousness is invariably presumed to be either a clump of psychical objects or a clump of physical states, ignoring the fact of its fundamental ground state, and that ground state’s accessibility. It’s treated just as a data stream, or is presumed to be a data stream – because that is how it appears in everyday life: ‘this’, then ‘this’, then ‘this’. All those “this’s” are what we’re interested in uploading of course, but they may have to come along with something we’re not even considering for it to be possible.
Haven’t been commenting or philosophising much recently due to having a baby in the house.
Just wanted to chime in to take issue with you on the point that we can’t hope to replicate biological intelligence without understanding it first.
I lean towards pessimism on us ever really understanding consciousness simply because human consciousness could just be too complicated for a human brain to grasp (perhaps necessarily so). But I am not quite as pessimistic about replicating it.
Because I think it is quite conceivable that we might replicate it without any better understanding of how it works than we have now. Schematically, we might make a machine which can scan a brain in high-def, then simulate all the neural connections and set it running. It just might be that consciousness could come out of an effort like that without us having a clue how that happens or even what consciousness really is.
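To make that schematic concrete, here is a deliberately tiny sketch of the "simulate all the neural connections and set it running" idea, using leaky integrate-and-fire dynamics. Everything in it (the model, the three-neuron chain, every parameter value) is an illustrative assumption, not a claim about how real neurons or a real scanner would work:

```python
def simulate_lif(weights, inputs, steps=100, threshold=1.0, decay=0.9):
    """Toy leaky integrate-and-fire network.

    weights[i][j] is the synapse strength from neuron i to neuron j;
    inputs[j] is a constant external drive into neuron j.
    """
    n = len(weights)
    potential = [0.0] * n   # membrane potentials
    spikes = [0] * n        # which neurons fired on the previous step
    counts = [0] * n        # total spikes per neuron
    for _ in range(steps):
        new_spikes = [0] * n
        for j in range(n):
            # leak a little, add external drive, add input from last step's spikes
            potential[j] = decay * potential[j] + inputs[j] + sum(
                weights[i][j] for i in range(n) if spikes[i]
            )
            if potential[j] >= threshold:   # fire and reset
                new_spikes[j] = 1
                counts[j] += 1
                potential[j] = 0.0
        spikes = new_spikes
    return counts

# Three neurons in a chain: external drive excites neuron 0,
# which drives neuron 1, which drives neuron 2.
w = [[0.0, 1.5, 0.0],
     [0.0, 0.0, 1.5],
     [0.0, 0.0, 0.0]]
counts = simulate_lif(w, inputs=[0.3, 0.0, 0.0], steps=100)
print(counts)  # activity propagates down the chain
```

A real emulation proposal would involve billions of far richer units, but the shape is the same: a connectivity matrix plus local update rules, run forward in time, with no one needing to understand what the whole is doing.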
Another way to get there might be to use a genetic algorithm or training a sufficiently complicated neural network for long enough. It might eventually work just like a brain but it might be far too complex for us to understand what it is doing.
Or we might make an unconscious AI which we do understand which then goes on to design a conscious brain which we don’t understand.
And there may be other paths to get there also. It might even develop by accident somehow without our intending it (after all, that’s more or less what happened in nature!).
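The genetic-algorithm route mentioned above can be shown in miniature. In this toy sketch (the network size, mutation scheme, and the XOR task are all illustrative choices), network weights improve by selection and mutation alone; nothing in the loop "understands" the solution it produces:

```python
import math
import random

random.seed(0)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """Tiny 2-2-1 network; w is a flat list of 9 weights."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    # negative squared error over the XOR table (higher is better)
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=200, sigma=0.3):
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 5]   # keep the fittest 20%
        # next generation: mutated copies of random survivors
        pop = [
            [g + random.gauss(0, sigma) for g in random.choice(survivors)]
            for _ in range(pop_size)
        ]
        pop[0] = survivors[0]              # elitism: the best is kept intact
    return max(pop, key=fitness)

best = evolve()
for x, y in XOR:
    print(x, forward(best, x))  # typically close to the XOR table
```

The resulting nine weights are the "explanation" of how the evolved network computes XOR, and even at this miniature scale they are opaque; a brain-sized version of the same process would be opaque in the extreme.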
Good hearing from you. Congratulations on fatherhood!
It might be that if we gain a good enough understanding of the low level mechanisms in the brain, we'll have enough to emulate the entire network. But for it to work, we'll still need a thorough understanding of how that low level works, particularly how it interacts with the peripheral nervous system; otherwise we might have a mind without any I/O.
My concern with that scenario is that it will take a lot of processing power. Far more than what is in the brain itself. (Consider that we can run programs from old systems in emulation effectively only because the host system is far more powerful than those old systems.) If Moore’s Law peters out before we get there, we may find ourselves unable to take the just-emulate-the-neural-synapse-connectome route.
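A rough back-of-envelope illustrates the worry. Using commonly cited (and contested) order-of-magnitude figures for the human brain, a naive synapse-by-synapse emulation lands in exaFLOP territory before any overhead for I/O, chemistry, or plasticity:

```python
# Back-of-envelope: compute for a naive synapse-level brain emulation.
# All four figures are rough, commonly cited estimates, not measurements.
neurons = 8.6e10           # ~86 billion neurons
synapses_per_neuron = 7e3  # ~7,000 synapses per neuron
update_hz = 1e3            # update each synapse ~1,000 times per second
flops_per_update = 10      # assume ~10 floating-point ops per synaptic update

total_flops = neurons * synapses_per_neuron * update_hz * flops_per_update
print(f"{total_flops:.1e} FLOP/s")  # ~6.0e18, i.e. several exaFLOP/s
```

Plausible alternative choices for any of those numbers shift the total by orders of magnitude in either direction, which is exactly why a petering-out of Moore's Law before we reach that scale would matter.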
If that’s the case, then we might have to understand the higher level architecture of what’s going on in the mind. I’m more optimistic than you are that that’s possible. We probably will need the help of AIs to do it. (The AIs would need to be of the narrow variety since I don’t think we’ll get AGIs until we achieve this.) Although the work of people like Michael Graziano gives me hope it might happen sooner than we imagine.
Now, I suspect when we do figure out consciousness, many people will look at the explanation and reject it, insisting that there’s just no way their consciousness comes from that. The objective explanation will never bring us to that first person experience. I don’t think it’s fair to require that it does. The rejection will probably be like that of evolution, a visceral revulsion against something that appears to trivialize us.
My issue with training neural nets is that it inherently assumes the mind is a blank slate. We have substantial evidence against this: the brain comes with a lot of pre-wired programming. I think that until we understand that programming, all we'll have is a raw computing engine, not a mind or AGI.
Hope to see you around more!
My fear is there isn’t a clean high-level architecture really, and that all the messy detail of a real brain is needed for human intelligence. Nature is not limited by what is feasible with transistors. It may be that we cannot make convenient abstractions without destroying the functionality.
I could be wrong about that. There’s no reason to believe that a complete connectome simulation is the best way of going about creating AGI, but it does demonstrate one way we may do so without a deep understanding of how it works.
Good point about the blank slate of neural nets, but it seems to me that we don't know a blank slate couldn't work, given enough time. There's also the possibility of evolving the gross architecture of the neural net beyond the basic training. Either way it seems conceivable that we could get there without understanding how it works.
I’m not really disagreeing with you that understanding would help, or even that it might prove to be the key. I just think it’s too often assumed that understanding is necessary so I think it’s important to mention that that should not be taken for granted.
Good point on the lack of clean architecture. I do think there will be recognizable architectures. Humans, or animals for that matter, are too consistent for there not to be something there. But it certainly won't be laid out in any way designed to be convenient for humans to grasp. I suspect it will be something like DNA, with a lot of mishmash and junk, with functionality scattered throughout.