Michael Graziano on mind uploading

Michael Graziano has an article at The Guardian, which feels like an excerpt from his new book, exploring what might happen if we can upload minds:

Imagine that a person’s brain could be scanned in great detail and recreated in a computer simulation. The person’s mind and memories, emotions and personality would be duplicated. In effect, a new and equally valid version of that person would now exist, in a potentially immortal, digital form. This futuristic possibility is called mind uploading. The science of the brain and of consciousness increasingly suggests that mind uploading is possible – there are no laws of physics to prevent it. The technology is likely to be far in our future; it may be centuries before the details are fully worked out – and yet given how much interest and effort is already directed towards that goal, mind uploading seems inevitable. Of course we can’t be certain how it might affect our culture but as the technology of simulation and artificial neural networks shapes up, we can guess what that mind uploading future might be like.

Graziano goes on to discuss how this capability might affect society.  He explores an awkward conversation between the original version of a person and their uploaded version, and posits a society that sees those living in the physical world as being in a sort of larval stage, one from which everyone would eventually graduate into the virtual world of their uploaded elders.

Mind uploading is one of those concepts that a lot of people tend to dismiss out of hand.  Responses range from it being too hopelessly complicated for us to ever accomplish, to it being impossible even in principle.  People who have no problem accepting the possibility of faster-than-light travel, time travel, or many other scientifically dubious propositions draw the line at mind uploading, even though the physics of mind uploading is far more plausible than any of those options.

That’s not to say that mind uploading should be taken as a given.  Something may yet turn out to make it impossible.

For example, I’m currently reading Christof Koch’s new book, The Feeling of Life Itself, in which Koch explores the integrated information theory (IIT) of consciousness.  A central claim of IIT is that the physical causal structure of the system is crucial.  As far as IIT is concerned, mind uploading is pointless: even if the information processing is reproduced, if the physical causal structure isn’t, the resulting system won’t be conscious.

I think Koch too quickly dismisses the idea that reproducing the causal structure at a particular level of organization would be sufficient.  But if he’s right, mind uploading becomes far more difficult.  Even in that scenario, though, neuromorphic hardware (computer hardware engineered to be physically similar to a nervous system, including physical neurons, synapses, etc.) may still eventually make it possible.

Even if neuromorphic hardware isn’t required in principle, it might turn out to be required in practice.  With Moore’s Law sputtering, the computing power to simulate a human brain may never be practical with the traditional von Neumann computer architecture.  A whole brain emulation might be conscious on a standard serialized architecture, but unable to run at anything like the speed of an organic brain.  It might take a neuromorphic architecture, or at least a similarly massively parallel one, to make running a mind in realtime feasible.
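
To make the architectural point concrete, here’s a toy sketch (invented constants, nothing like a real brain model) of the same neuron update done one cell at a time, the way a serial von Neumann processor must, versus all at once, the way massively parallel or neuromorphic hardware effectively does:

```python
# Toy sketch only: a leaky-integrator update for N "neurons",
# computed serially versus as a single parallel array operation.
# All constants are invented for illustration.
import numpy as np

N = 100_000                              # hypothetical neuron count
rng = np.random.default_rng(0)
inputs = rng.normal(0.0, 0.01, size=N)   # stand-in for synaptic input
v = np.zeros(N)                          # membrane potentials

def step_serial(v, inputs, dt=0.001, tau=0.02):
    """One timestep, neuron by neuron: work a serial processor
    must grind through in order."""
    out = np.empty_like(v)
    for i in range(len(v)):
        out[i] = v[i] + dt * (-v[i] / tau + inputs[i])
    return out

def step_parallel(v, inputs, dt=0.001, tau=0.02):
    """The same update as one array operation, which parallel
    hardware can spread across many units at once."""
    return v + dt * (-v / tau + inputs)

# Same result either way; only the execution strategy differs.
assert np.allclose(step_serial(v, inputs), step_parallel(v, inputs))
```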

However, all of these considerations strike me as engineering difficulties that can eventually be overcome.  Brains exist in nature, and unless anyone finds something magical about them, there’s no reason in principle their operation won’t eventually be reproducible technologically.

Although this may be several centuries in the future.  I do think there are good reasons to be skeptical of singularity enthusiast / alarmist predictions that it will happen in a few years.  Our knowledge of the brain and mind still has a long way to go before we’ll be able to produce a system with human level intelligence, much less reproduce a particular one.

On the awkward conversation that Graziano envisions between the original and uploaded person, with the original in despair about being the obsolete version, I think the solution would be to simply have mind backups made periodically, but not run until the original person dies.  That should avoid a lot of the existential angst of that conversation.

That’s assuming that there isn’t an ability to share memories between the copies, with maybe the original receiving them through a brain implant of some type.  I think being able to remember being the virtual you would make being the mortal physical version a lot easier to bear.  The architecture of the brain may prevent such sharing from ever being feasible; if so, then the non-executing backups seem the way to go.

I don’t know whether mind uploading will ever be possible, but in a universe ruled by general relativity, not to mention the conservation of energy, it seems like the only plausible way humans may ever be able to go to the stars in person.  If it does turn out for some reason to be impossible, then humanity might be confined to this solar system, with the universe belonging to our AI progeny.

What do you think?  Is mind uploading impossible?  If so, why?  Or is it possible, and I’m just too pessimistic about it happening in our lifetimes?  Are there reasons to think the singularity is near?

204 thoughts on “Michael Graziano on mind uploading”

  1. Hi Mike,

    To what extent do you think mind uploading will require some sort of simulation of the entire body and nervous system? Meaning, in order to be comfortable to us, will mind uploading require a comprehensive simulation of the entire body?

    And second, do you think it will matter if those mind-uploads actually know they are running on some sort of computational architecture? Or would it be better if they were oblivious to the mundane facts of their existence?

    Michael


    1. Hi Michael,
      Good questions.

      I think the embodied cognition people are right, that human minds need a body, although I can’t see any reason why it couldn’t be a virtual body, at least provided the relevant signals could be reproduced. Of course, once uploaded, there’s nothing stopping the mind from being altered to be more compatible with not having a body, but such alterations would move it ever further away from humanity.

      On knowing their situation, I personally think I’d want to know. Wouldn’t you? I could see initially withholding it from a small child, or someone who died suddenly, maybe finding a way to break it to them gradually. But it’s hard to imagine not telling them at all.

      Of course, if we can imagine compelling reasons not to tell them, then we can’t rule out that we’re in just such an environment right now.


  2. I’m with you on the possibility in principle, and the taking a very long time in practice. The one thing I really like about IIT is its view that the physical causal structure of the system is crucial. So that makes it especially difficult.

    I hope that by the time it’s possible, humanity will have wised up to the fuzziness of personal identity. Want to influence the future? Have children, write books, make movies… Want some experiences to look forward to? Make sure that people will be alive in the future, and look forward to their experiences. Future self-regard is just empathy for your future self, and “self” is all in the name. Yes, we can change the laws so that your upload inherits your name, but… A rose by any other name would smell as sweet.


    1. I’ll have more to say about IIT soon. In general, I don’t know anyone who disputes that integration is a crucial component, but IIT itself seems to have a lot of philosophical assumptions.

      On personal identity, no doubt we’d all be happier if we could think that way, but I’m not sure our evolved instincts make that likely. This Woody Allen quote comes to mind:

      I don’t want to achieve immortality through my work; I want to achieve immortality through not dying. I don’t want to live on in the hearts of my countrymen; I want to live on in my apartment.


      1. I find it hard to sustain thinking of time as a frame-dependent axis in spacetime, the way Relativity Theory says it is. But no matter how hard it is to think that way, that’s the way it is.


  3. Like you, I don’t see any physical reason why it wouldn’t be possible. What I don’t imagine will ever happen is that people will make identical copies of themselves and live parallel lives, or wait until they die before allowing them to be “switched on”. A much more interesting and plausible outcome is a mix of natural and synthetic beings, including many different kinds of AI systems all being linked up and operating in tandem. In fact, I think that we will never mind upload in the way most writers imagine. Instead, we will steadily augment our natural brains over time through machine interfaces until we grow into complex human-machine hybrids.


    1. In that view, having the last of your organic parts replaced might eventually be similar to having your wisdom teeth removed. Of course, there will always be people insisting that something is lost when that happens.


  4. Wonderful article. I wholly agree with all that you say. If we are correct in believing that there is no “magic” involved in consciousness then it is entirely logical to assume that it can be reproduced using alternative physical structures. Eventually!


    1. Even if there is magic found, it doesn’t necessarily rule out some scenarios. It depends on the magic. The Ghost in the Shell stories posit an irreducible ghost that can be transferred, but never copied.


        1. Somewhere or other I read that some people are wondering whether consciousness is an entirely separate law of nature. In which case I suppose it would not be “magic” but simply a part of the physical universe. By “magic” I was thinking more along the religious and Cartesian lines of “spirit”…which is presumably by definition outside of the physical universe and caused by something entirely different. I can believe somehow or other in consciousness as a force of nature and part of the physical world but have difficulty with the supernatural. But who knows…..


        1. There are people who do think that physics as we understand it isn’t up to the task. Quantum physics is what is most often reached for. My issue with this kind of thinking is:
          1. There’s nothing in neuroscience pointing in that direction.
          2. Even if quantum physics is significant, the rules of how quantum phenomena work in the overall environment are well known.

          But you’re right. “Magic” probably should be reserved for something we can never understand, even in principle, such as a religious soul or something along those lines.


  5. “… there are no laws of physics to prevent it.”

    Also, no laws that suggest it is possible. Still missing any physical theory.

    But I’ll play along.

    I’m at my desk feeling my breath and heart beating, feeling my fingers on the keypad. Suddenly I am uploaded to the computer at my desk.

    Whoops! Where did my breath go? On what are my fingers typing? I decide to go downstairs for more coffee. Where are the stairs? What happened to my legs?

    Can you upload the mind without creating an entire virtual world for it to exist in? Can you separate the mind from the brain and body that it exists in?


    1. “Also, no laws that suggest it is possible.”

      Current neuroscience works completely in terms of classical physics, electricity, chemistry, etc. I’d say that, until that ceases to be fruitful, the laws as we understand them are suggesting it’s possible. New constraints would have to be discovered to imply that it isn’t.

      I do think the embodied cognition people are right that a human mind would need a body, although it could be a virtual one in a virtual world. That does mean we’d need to understand the signalling coming into and out of the brain quite thoroughly, but hopefully if we’ve learned enough about the brain to do this, we’ll also know enough about the interactions between the peripheral and central nervous systems.

      Once uploaded, it would be possible to alter the mind so that it doesn’t need those things, but I think, at least at first, very few people will want to do that. Although long term, all bets are off.


      1. I just wouldn’t be myself without coffee in the morning.

        If we uploaded a sleeping person, would the computer mind be asleep? How would it wake up? What if it was in a coma? Would it come out of the coma?

        What would it do without a body or a world to take action in?

        If you think a virtual one works, then you have implicitly bought into a full idealism. Why is our present mental reality not a virtual reality? No good argument I can think of, and this is almost the same as what Hoffman and Kastrup are arguing. Hoffman, by the way, is open to the idea that AI could be conscious. It’s odd that an argument seemingly for the possibility of copying a mind to a physical computer actually leads to the possibility that there is no physicality.


        1. I’m pretty worthless myself without my morning caffeine fix. Of course, in an uploaded mind, we could arrange to always have our system caffeinated.

          How would a computer mind wake up? The same way we do. It would either come to the end of its sleep cycle, leading to the reticular formation increasing its firing rate and driving arousal throughout the system, or it would receive a stimulus that reflexively leads to the same thing, just as happens in the physical version.
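
          As a purely illustrative toy (the numbers are invented, and this is not a neuroscience model), that wake-up logic might be sketched as a threshold on a combined internal and external drive:

          ```python
          # Toy sketch of the wake-up logic described above; every number is invented.
          def arousal_drive(sleep_cycle_phase, stimulus):
              """Combine an internal sleep-cycle term (phase runs 0.0 to 1.0)
              with external stimulus strength."""
              internal = max(0.0, sleep_cycle_phase - 0.8)  # ramps up late in the cycle
              return internal + stimulus

          def is_awake(drive, threshold=0.15):
              return drive >= threshold

          print(is_awake(arousal_drive(0.5, 0.0)))   # mid-cycle, quiet: stays asleep
          print(is_awake(arousal_drive(0.97, 0.0)))  # end of cycle: wakes on its own
          print(is_awake(arousal_drive(0.5, 0.4)))   # strong stimulus: wakes reflexively
          ```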

          We can’t rule out the possibility that we’re in a simulation. But it’s not clear to me that assuming it is productive. If we’re in a simulation, it appears to exact painful consequences for not taking it seriously. We seem to have little choice but to play the game, and the game essentially is our reality. Which means that science remains useful.


          1. I assume the simulation of the body and the world would also need to exact the same painful consequences. So can the simulation die if it jumps off a building? What happens to it when it does? Does the program shut down?


          2. The simulation author gets to decide. I would think we wouldn’t want the simulated person to die, maybe just get reset in some manner. Although I could see a simulation providing a mechanism for someone who decided they were ready to end their existence (with plenty of “Are you sure?” steps).


          3. Mike, it’s a simulation of a simulation that’s already seven deep in simulatedness. Or, more likely, it’s an unanswerable, therefore unaskable, question.


  6. As is typical in such debates – particularly for folks like me, who spent too much of their youth immersed in dodgy sci-fi – these questions conflate epistemic issues (e.g. about neural firing patterns, cortex responses to stimuli, etc.) with ontological issues. As Searle famously pointed out in 1980, the simulation of a rainstorm won’t make you wet; the simulation of gold won’t make me rich, no matter how detailed the simulations.

    Wake up and *smell* the coffee 🙂


    1. If the simulation of the uploaded mind is “real”, then the simulation of the body and world of the uploaded mind would be just as “real”. Inadvertently perhaps, we end up with a good argument for the world as mental or simulation.


    2. In the case of both gold and a rainstorm, we can identify what’s missing from the simulation, the specific chemicals with specific reactions and properties. In other words, they wouldn’t be able to behave like the physical version in the physical world. But if those phenomena were in an overall simulation and behaved like their real world counterparts in relation to the rest of the environment in the simulation, for purposes of the simulation, they would be gold and a rainstorm.

      So, the question is, what would be missing from the simulation of a mind that only needs to interact with a virtual body and environment? What would make virtual coffee smell different from physical coffee?


      1. “But if those phenomena were in an overall simulation…”

        Sure, but the point is that they aren’t. (Unless you believe reality itself is a simulation.)

        Even if reality were a simulation, there’s still a gap between what we simulate with numbers and our perceived “real” world. That gap is significant. The rain we simulate with numbers isn’t ever what we consider “wet” (whatever that might actually mean).

        “So, the question is, what would be missing from the simulation…”

        Physical processes. We don’t know that consciousness doesn’t supervene on them.

        The thing about a simulation is that it’s a description of something. The question is whether the description can provide all the same effects as what it describes.

        More precisely, since a description obviously can’t provide all the same effects: is consciousness something that arises from being described (as opposed, and in addition, to being enacted by a brain)?

        Simulated rain isn’t wet; simulated earthquakes don’t knock down buildings; simulated lasers don’t emit photons; and simulated bridges can’t carry traffic across rivers. Looked at from this point of view, it seems a great deal might be missing.


        1. When you say things like “simulated water isn’t wet”, you’re taking the point of view of someone outside the simulation, which is fine. Of course, from the point of view of someone inside the simulation, the water is wet. But for the purpose of the current discussion, we’re interested in the view from outside the simulation.

          And also of course, there are things that can be simulated, and from the perspective outside the simulation, are still those things. I speak of information in a broad sense, not the narrow digital computer sense. A simulation of an information process is an information process. A simulation of an analog calculation is a calculation. A simulation of a representation is a representation. If a mind can be accounted for in terms of these things, a simulated mind is a mind.

          *


          1. “Of course, from the point of view of someone inside the simulation, the water is wet.”

            For a simulated value of “wet,” yes absolutely. But, as you say, we’re all outside of any simulation (as far as we know 😉 ).

            “If a mind can be accounted for in terms of these things, a simulated mind is a mind.”

            Yes, exactly. If!

            It was this line of thought that first made me skeptical of computationalism.

            Being able to simulate a mind requires that minds be simulations in the first place. It requires that minds be the one abstract thing nature has ever produced.

            Because, as you say, the only time an abstraction is effectively the same as the thing it abstracts is when that thing is itself an abstraction.


          2. “It requires that minds be the one abstract thing nature has ever produced.”

            Actually, I don’t think that’s true. Both the original mind and the copied/uploaded one are concrete physical systems. The only thing that is required to be abstract is a description of the original system, one that is detailed enough to be a functional blueprint of another physical system. But at all times, we have concrete physical systems.

            If the mind is abstract, it’s only in the same sense that genes are abstract. But genes are physical patterns that have physical causal effects. Nevertheless, they are routinely copied in biological systems.


          3. “Both the original mind and the copied/uploaded one are concrete physical systems.”

            But (assuming we’re talking computationalism) those systems are very different systems.

            This is the “abstract system is reified physically” argument, and it doesn’t work. I wrote about this recently. The short form is that the morphology of a physical system basically matches the morphology of its primary information content. But those morphologies are completely different in a numerical simulation.

            “The only thing that is required to be abstract is a description of the original system,…”

            Isn’t the uploaded mind running as software also an abstraction?

            “If the mind is abstract, it’s only in the same sense that genes are abstract.”

            I quite agree the mind is not abstract. 😀

            Thus the argument goes: (1) Running a mind as software implies minds are software. (2) Software is an abstraction. (3) Minds are not an abstraction, so minds probably are not software and therefore cannot be run as software.

            At the least, it makes computationalism seem like a big ask to me.


          4. Software is an abstraction in the same way a book is an abstraction. But a particular instantiation of software, like an instantiation of a traditional book, is a physical entity. Software running on a particular device is a concrete physical system with causal effects, both within itself, and with the environment.

            Whether functionality is in hardware or software is just an implementation detail. A CISC processor implements more in hardware than a RISC processor, yet the same software can be compiled for both systems. A brain is far more hardware (wetware) oriented, but the same functionality can be implemented on a system where, due to its different physical details, the ratio is more skewed toward software. In the end, the actual executing system, in both cases, is 100% physical.
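
            A deliberately trivial sketch of that ratio point, with multiplication standing in for the functionality (no real instruction set involved): one substrate provides it as a primitive, the other supplies it in software over a more primitive operation, and the executing systems compute the same function either way.

            ```python
            # Toy sketch: the same functionality with different hardware/software ratios.
            def mul_native(a, b):
                """'CISC-like' substrate: multiplication is a primitive."""
                return a * b

            def mul_from_add(a, b):
                """'RISC-like' substrate: only addition is primitive, so
                multiplication is realized in software (b must be a
                non-negative integer here)."""
                total = 0
                for _ in range(b):
                    total += a
                return total

            assert mul_native(7, 6) == mul_from_add(7, 6) == 42
            ```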


          5. “But a particular instantiation of software, like an instantiation of a traditional book, is a physical entity.”

            As I said before, this is the “abstract system is reified physically” argument, and I don’t think it works. That post I linked discusses why.

            Consider the difference between an original art work and any book (or software). An original art work can be copied, but there is a huge difference between the original and the copies. But copies of a book are essentially the same as the original text. Further, a faithful copy of an art work must use the same materials and dimensions. Copies of a book can be realized in many forms.

            But we’ve been here before. I see physical systems as Yin-Yang to information systems (that is, night-and-day different), whereas you don’t seem to see much difference.

            “Whether functionality is in hardware or software is just an implementation detail.”

            But a hugely significant one. Software rain isn’t wet, etc.

            Surely you agree that software, especially simulation software, is a description of something. So the question has always been whether consciousness can arise from its mere description as well as, per usual, its enactment in the brain.

            Whether an information system is a physical instance has never been the question. It’s whether a vastly different physical instance — one that abstracts the physical causal topology — can accomplish everything that what it simulates does.

            Books are realized many ways because they’re information — abstractions — to begin with. But what if the mind is more like a sculpture or painting and copies always lose something crucial?


          6. Even though I used the term, I’m not sure we are talking about simulation. It seems we are talking more about the idea that mind is software and we are just porting it to a different machine. I am not convinced that will be possible.

            But, if it were, then would the body and world be simulated, or would they have to be “real” too?

            I think one problem is that mind doesn’t and probably can’t exist in a disembodied form. Mind manifests through perception and action in the body. It is driven by instincts and desires that are bodily and related to survival or reproduction.


        2. I think we need to be more rigorous with our terminology. We can talk about a mind as both an abstract pattern and as a specific physical system which realizes that pattern, just like we can talk about a baseball team as an abstraction and as a physical thing that realizes that abstraction.

          So the pure abstraction is the description of the kinds of things that the physical system does. In some cases, such as a baseball team, or rain, that description necessarily involves the interaction of physical things. In other cases, such as a calculator or computer program, the description can be made without reference to specific physical things. But we still make a distinction between a program and a specific machine running that specific program. In this sense, a mind is more like a computer program in that the pertinent description need not reference specific physical things. But there is no mind unless there is some physical system doing those things. Maybe we should refer to the latter as an existing mind?

          *


          1. “In some cases, such as a baseball team, or rain, that description necessarily involves the interaction of physical things.”

            Absolutely. In this case the description is of a physical system, and in such cases the implementation of that description cannot produce the same effects as the physical system does. Simulated rain isn’t wet, etc.

            “In other cases, such as a calculator or computer program, the description can be made without reference to specific physical things.”

            Absolutely again. In this case the description is of an information system, and in such cases the implementation of that description can produce the same effects.

            (FWIW, I wrote about this aspect of modeling recently.)

            “But we still make a distinction between a program and a specific machine running that specific program. In this sense, a mind is more like a computer program in that the pertinent description need not reference specific physical things.”

            I agree with the first sentence, but you’re begging the question in assuming the mind is “more like a computer program” — in assuming it’s an abstraction. The big question here is whether that is true or is it just wishful thinking.

            My point is that the brain is a physical system, like a baseball team, weather, or a laser. So it falls in the first case above — a simulation that cannot produce all the effects the physical system does.

            So logically: Simulated rain isn’t wet. Simulated brain isn’t mindful?


        3. “Whether an information system is a physical instance has never been the question. It’s whether a vastly different physical instance — one that abstracts the physical causal topology — can accomplish everything that what it simulates does.”

          The vastly different physical instance doesn’t have to accomplish *everything* the original system does. It only has to accomplish the things we care about, i.e., the things described in the abstraction. That’s why a machine running Word on an Apple Macintosh and a machine running Word on a system of pulleys and buckets of water are still both machines running Word.

          For a mind, all the things we care about are informational. Except in some cases we also care about the time it takes to do things, but that’s a separate issue.

          *


          1. “The vastly different physical instance doesn’t have to accomplish *everything* the original system does. It only has to accomplish the things we care about, i.e., the things described in the abstraction.”

            That is simply not the case. A simulated rain storm cannot accomplish wet rain, nor can a simulated laser accomplish the emission of coherent photons. (At least in the latter case I’d care a lot about being able to accomplish that!)

            “That’s why a machine running Word…”

            Word™ is software — an abstraction — so of course it can be realized in myriad ways.

            All we can say about mind is that it’s something a brain does. Assuming mind is an abstraction is just that: an assumption. Arguments assuming it beg the question.

            The underlying question is: Why would any natural physical system give rise to an active abstraction, especially an algorithmic one?

            If brains actually do that, they seem pretty special!


          2. “Why would any natural physical system give rise to an active abstraction, especially an algorithmic one?”

            Natural selection, although how algorithmic it is might be a question.


          3. “Natural selection, although how algorithmic it is might be a question.”

            Perhaps, and indeed. Abstract information systems seem so far to be exclusively an invention of intelligent minds. (In fact, I consider it one of the Great Revolutions: Fire, the Electron, Information.)


          4. I disagree with that but I think we’ve had this discussion before.

            Even qualia – the red of the apple – is abstract because it has an arbitrary relationship to a range of wavelengths. There is nothing inherent in the wavelengths that makes it red. Yet likely we developed the ability to distinguish the wavelengths of red to find ripe fruit.

            All perceptions are to a degree abstract. They are selected for by their ability to guide actions that affect survival. Finding more ripe fruit improves one’s chances of living.

            What you are talking about is the ability to manipulate complex symbolic representations – language, mathematics, logical representations. Yes, that originates in human intelligence. An interesting topic would be to what extent this more advanced symbolic capability derives from, or is related to, simpler perceptual abstraction.

            “We present an alternative view, portraying symbolic reasoning as a special kind of embodied reasoning in which arithmetic and logical formulae, externally represented as notations, serve as targets for powerful perceptual and sensorimotor systems. Although symbolic reasoning often conforms to abstract mathematical principles, it is typically implemented by perceptual and sensorimotor engagement with concrete environmental structures.”

            https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4001060/
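
            A toy illustration of that arbitrariness (the nanometer ranges below are the usual approximate visible-light figures): “red” is a label the perceiving system attaches to a range of numbers, not a property of the light itself.

            ```python
            # Toy sketch: the wavelength-to-label mapping is a convention of
            # the perceiving system, not something inherent in the wavelength.
            def color_label(wavelength_nm):
                if 620 <= wavelength_nm <= 750:
                    return "red"     # roughly the ripe-fruit band
                if 495 <= wavelength_nm < 570:
                    return "green"
                return "other"

            print(color_label(650))  # "red" is the label, not a property of 650 nm light
            ```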


          5. “Even qualia – the red of the apple – is abstract because it has an arbitrary relationship to a range of wavelengths.”

            One might look at it that way, although such a conditional relationship (between wavelength and qualia) isn’t what I usually think of as an abstraction. But I’ll go along with representations being a form of abstraction.

            “What you are talking about is the ability to manipulate complex symbolic representations…”

            Yep. Or even fairly simple ones. 🙂


        4. Wyrd, every physical system has an abstract description. The abstract description of Word running on a computer is vastly different from the description of Word running on buckets, but there is a subset of those descriptions that we care about, namely, the subset that makes it Word, as opposed to Excel. That subset is the same with respect to both. When talking about the abstract description of rain vs. a computer simulation of rain, there is a subset of descriptions we care about, like wetness, that cannot be found in the computer simulation. But any information-based subset description would be found in both the physical and simulated. It’s just that we don’t tend to care about the information-based capacity of rain.

          What we care about with regards to a mind is the information-based stuff.

          *


          1. “Wyrd, every physical system has an abstract description.”

            Sure, but a description isn’t the system; they aren’t at all the same thing. (Conflating them is a form of cargo cult thinking — that form grants function.)

            “That subset [of Word] is the same with respect to both.”

            Of course. Again: Word™ is an abstract information system to begin with — that very subset you just mentioned effectively is Word™. The specific versions running are descriptions of implementations.

            “What we care about with regards to a mind is the information-based stuff.”

            That’s an assumption on your part. You’re begging the question to assume that. My point is that the “information-based stuff” isn’t likely to be the whole story. I’m saying I don’t think minds reduce to a defining abstraction that can be implemented on other substrates.


          2. “My point is that the “information-based stuff” isn’t likely to be the whole story.”

            Wyrd, I think I’d find this argument more compelling if someone could identify exactly what beyond information is involved, or what it might be. In the neuroscience I’ve read, I see a lot of evidence for information processing, or biological processes in support of that processing. What evidence is there for something else?


          3. “Wyrd, I think I’d find this argument more compelling if someone could identify exactly what beyond information is involved, or what it might be.”

            It isn’t really about what beyond information is missing but about the nature of that information.

            That distinction I’m making is between a physical system (which processes information) versus an abstract information system (which also processes information). Both systems process information (and both systems have physical instances), but there are major architectural and content differences in how the two work. (As detailed here.)

            You apparently refuse to acknowledge those major differences, which to me seems something you’re missing, because their differences vastly outweigh the similarities.

            “In the neuroscience I’ve read, I see a lot of evidence for information processing, or biological processes in support of that processing.”

            Yes, and every single bit of it speaks to an analog information processing system. A physical system (per the distinction I’m pointing out between physical versus abstract systems).

            Assuming that a simulation in an abstract system produces consciousness might be akin to assuming a simulation of a laser emits light. (And all I’m saying is might be.)


          4. Wyrd, I think both James of Seattle and I have thoroughly addressed the abstract vs physical thing already.

            So, when you said, “the “information-based stuff” isn’t likely to be the whole story,” you meant analog information processing is the missing ingredient? We’ve historically seen the processing of many analog systems reproduced by digital systems. In general, the processing of any continuous system can be reproduced by a discrete system with sufficient capacity, often with better fidelity than the analog system can reproduce its own processing. (And if the fidelity itself is a problem, we can arbitrarily lower it.) I can’t see that it’s the big deal you keep making it out to be.

            I’ll also note that a lot of neuroscience actually considers the brain to be a mixture of both analog and digital processing.
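
            For what it’s worth, a minimal sketch of the analog-to-discrete fidelity claim, using a generic exponential decay rather than anything brain-specific: the discrete update tracks the continuous law more closely as the step size shrinks.

            ```python
            # Minimal sketch: a continuous system (dv/dt = -v/tau) reproduced
            # by discrete steps, with error falling as the step size does.
            import math

            def simulate(tau=1.0, t_end=1.0, dt=0.1, v0=1.0):
                v = v0
                for _ in range(round(t_end / dt)):
                    v += dt * (-v / tau)   # discrete update of the continuous law
                return v

            exact = math.exp(-1.0)         # analytic v(1) for tau=1, v0=1
            for dt in (0.1, 0.01, 0.001):
                print(dt, abs(simulate(dt=dt) - exact))  # error shrinks with dt
            ```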


          5. “I think both James of Seattle and I have thoroughly addressed the abstract vs physical thing already.”

            I don’t see how. I’ve debunked the idea that abstract information systems are physical and therefore “the same” as physical systems.

            “So, when you said, ‘the “information-based stuff” isn’t likely to be the whole story,’ you meant analog information processing is the missing ingredient?”

            Not exactly. JamesOfSeattle asserted that mind was like a computer program and I was saying that is an assumption based on facts not in evidence.

            That said, what argues against the mind being like a computer program is indeed the analog nature of how the brain processes information. The mind no more operates like a computer program than does a transistor radio — but both are analog information processing systems.

            But this involves the perception that the mind is an algorithm and can therefore be implemented on a different substrate. This computationalist notion, in particular, I believe to be almost certainly false.

            “In general, the processing of any continuous system can be reproduced by a discrete system with sufficient capacity, often with better fidelity than the analog system can reproduce its own processing.”

            Right, and we’ve batted this back and forth for years, and this is always the sticking point. (Note that this is the more general computationalist notion of simulating a physical system as opposed to implementing its putative algorithm.)

            Proposition 1: A numerical simulation can describe a physical process to arbitrary levels of precision — often to levels beyond what analog copy methods could achieve.

            Proposition 2: Numerically simulated water isn’t wet. Or, in general, numerical simulations don’t possess the physical properties of what they simulate.

            So it all boils down to whether consciousness exists in its description… or not. The laser analogy works well in that it presents a simple question: Is consciousness like laser light (a physical emergent phenomenon)?


          6. “So it all boils down to whether consciousness exists in its description… or not.”

            I don’t think it does. This is the point both James and I have made. It boils down to whether consciousness can be described, and then a new physical system constructed / configured based on that description to perform the same function. At no point is anyone expecting the description itself to be conscious. Both the original and copied systems are physical.


          7. “It boils down to whether consciousness can be described, and then a new physical system constructed / configured based on that description to perform the same function.”

            The point you seem unwilling to acknowledge is what kind of “new physical system” might be capable of “perform[ing] the same function.”

            The point I’ve made over and over is that an information system cannot perform the same functions as the physical system it simulates.

            We can reverse engineer a car or airplane and build another vehicle that uses those principles, but a simulated car or airplane just isn’t the same thing.

            “At no point is anyone expecting the description itself to be conscious.”

            If you actually meant that, you’d be in my camp, because a software simulation is just a description (another point I’ve made repeatedly).

            Unless (and this is where this sub-thread started) the mind is already software in some sense.

            “Both the original and copied systems are physical.”

            And I’ve also made the point repeatedly that this has absolutely nothing to do with it. It’s an argument that I’ve thoroughly debunked, that means nothing, and I really wish you’d stop using it.

            The causal physical activity of a computer is essentially identical regardless of what software it’s running. Logic gates do their thing, the same thing, regardless of what software is running. It’s only at the highest level of the system that we see the software realized.

            But in a physical system, all the physical causality is in direct accord with the information flow of the system. Very often what the system does on the large scale is replicated at the small scale. The physicality of the two types of systems is completely different.


          8. “It’s an argument that I’ve thoroughly debunked, that means nothing, and I really wish you’d stop using it.”

            Wyrd, obviously I don’t agree that you’ve debunked that argument. And you don’t agree that James or I have debunked your debunking. We disagree. You and many others use arguments all the time I consider to have been debunked long ago. It’s a fact of life in internet discussions. I’ll let what I’ve already said stand, but if you bring up this concept again in a conversation with me, you should be prepared to hear that response again, just as I should be prepared to hear responses from you I feel are already debunked. 🙂


          9. “Wyrd, obviously I don’t agree that you’ve debunked that argument. And you don’t agree that James or I have debunked your debunking.”

            To be clear, the argument in question is: Numerical information systems are physical systems (with physical causes), therefore there is no real difference between a physical system and a numerical system.

            I linked (twice) to the most recent blog post I wrote explaining why that argument doesn’t work, and I’ve pointed out crucial differences in this thread. All of that has been ignored, so I don’t see where it’s debunked. I’m happy to read it if you point it out.

            “…if you bring up this concept again in a conversation with me, you should be prepared to hear that response again,…”

            Which, to be clear, is that there’s no real difference between a physical system and a numerical system because both require physical instances to be meaningful?

            Okay, but I don’t see how you maintain that in the face of all the crucial differences I’ve pointed out in the nature and content of the information, plus how the physical causality of numeric systems is completely unrelated to that information.

            “…just as I should be prepared to hear responses from you I feel are already debunked.”

            Dude, I hope not! If I say something you can prove is wrong, please correct me!


          10. “..what beyond information is involved…”

            Why would the burden of proof be to show something beyond information is involved?

            There is the observation that information is involved in biological processes but zero evidence that information processing by itself can produce consciousness. We have all sorts of processors up and running right now in the world but no evidence that any of them are conscious. There is no program that even has the claim of creating consciousness nor even a theory of how to write one. At best, the notion that information processing can create consciousness is a conjecture.


          11. “Why would the burden of proof be to show something beyond information is involved?”

            The burden of proof is on whoever is making the argument. All I’m saying is that the argument that reproducing the information processing isn’t sufficient would be a lot more compelling if someone could identify what in particular is missing, particularly if they could give reasons, such as evidence, for thinking that thing is required.


          12. Life itself involves information processing. Multiplication, protein synthesis, energy transfer. Assembly and replication of complex molecules.

            Could we upload a cactus to a computer? That would seem superficially to be much easier to duplicate than a mind. Would it convert sunlight to energy, extract minerals from soil, store and circulate water, repair abrasions?

            Nobody would make that argument. Yet mind comes from a brain that exists inside a living organism and the argument that the essence of the mind can be extracted from the brain sounds plausible. This is Descartes all over again even down to a potentially immortal soul.


          13. I think the day will come when we’ll be able to construct an artificial cactus based on our understanding of the natural variety. But if we scanned it at the right level of organization and put it in a virtual environment, it could act like a cactus in that environment, and relative to the environment do what cacti do in the physical world.

            In terms of scanning brains, I’m sure we’ll start simple and climb upward. It’s actually already been done in a partial and primitive manner with c. elegans. https://selfawarepatterns.com/2014/12/16/worm-brain-uploaded-into-robot-which-then-behaves-like-a-worm/
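
            For flavor, here’s a toy of the general idea (emphatically not the OpenWorm code; the wiring matrices are random stand-ins): the robot’s behavior comes entirely from a copied wiring matrix, with no hand-written steering logic anywhere.

            ```python
            # Toy sketch of connectome-driven behavior; the matrices are
            # random stand-ins, not any real organism's wiring.
            import numpy as np

            rng = np.random.default_rng(1)
            W_sensor = rng.normal(size=(10, 2))   # 2 sensors -> 10 "neurons"
            W_motor = rng.normal(size=(2, 10))    # 10 "neurons" -> 2 motors

            def motor_commands(sensors):
                """Map sensor readings to [left, right] motor drive purely
                through the wiring; no steering logic is written anywhere."""
                neurons = np.tanh(W_sensor @ sensors)
                return np.tanh(W_motor @ neurons)

            print(motor_commands(np.array([1.0, 0.0])))  # stimulus on sensor 1
            print(motor_commands(np.array([0.0, 1.0])))  # stimulus on sensor 2
            ```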


          14. An artificial cactus and a virtual cactus are not the same.

            An artificial cactus would be human constructed but would use actual sunlight for energy. A virtual cactus would only use virtual sunlight. We might be able to do either but I doubt if the artificial cactus will be able to extract energy from sunlight by information processing. More likely, it will use solar cells or chlorophyll. You could prick yourself on the artificial cactus but not on the virtual one.

            How similar does an artificial or virtual cactus need to be to an actual cactus to count it as a duplicate of an actual cactus? Would it need to be able to grow and reproduce? Possible in a virtual world but using actual sunlight for energy probably wouldn’t be.

            The worm isn’t an upload of a particular worm but simply a generic worm brain simulation run in a robot. I wouldn’t be surprised if we could create something with a human-like body that acted a lot like a human.

            The idea that we can extract mind from brain seems quaintly Cartesian. Descartes, I understand, was inspired by automatons too.


          15. “The idea that we can extract mind from brain seems quaintly Cartesian.”

            That’s a fairly common criticism, but Cartesian dualism is substance dualism, the idea that the mind’s operations, or at least some portion of them, are not physical. Nowhere in the idea of mind copying or uploading is that true. A physical substrate is always required. Actually, substance dualism might be the one thing that could make copying impossible, even in principle.

            What is required is multi-realizability, which many try to equate with dualism, but I think that’s either confusion or propaganda. At best, it might equate to a sort of platonic dualism, if you’re a platonist.


          16. Does the Pythagorean theorem exist independently from actual triangles?

            Of course, the equation for your mind would be much more complicated but the question seems similar.

            We could imagine a well of souls where a computer spits out algorithms for new human minds that could be used to populate a virtual world. Should be possible. Change a variable here or there, execute a loop an extra time, obfuscate the original identity enough so the mind thinks it is unique.

            We could also create multiple versions of ourselves. Why settle for 1.0 me when I could have 2.0 or 3.0? I’m certain 3.0 will be much happier. But we’d need to be careful about feature overload, and bugs in the new releases.

            Or, how about composite humans? Give me a little dose of Einstein but don’t change my memory of my first kiss.

            Still quaintly Cartesian to me especially since those most interested in uploading themselves seem to be on a quest for immortality.


          17. Greg Egan explores some of the concepts you mentioned. In Diaspora, one of the protagonists, a mind created in a virtual environment, chooses to leave that environment. The central control of the environment notices and disapproves, and notes to itself not to spin up any more minds with the protagonist’s particular attributes.


          18. “What is required is multi-realizability,”

            Okay, what’s your basis for assuming the mind can be multi-realized?

            Generally it is only abstractions that have multiple realizations — the text of a book, for example, or the concept of a full-adder.

            Doesn’t the whole idea of “multi-realizability” imply an abstraction?
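
          To be concrete about what I mean, a minimal full-adder sketch: two different “substrates” (boolean logic versus integer arithmetic) realizing the same abstraction, agreeing on every input.

          ```python
          # The full adder as a multiply-realizable abstraction: two
          # substrates, identical behavior on all eight inputs.
          from itertools import product

          def full_adder_logic(a, b, cin):
              s = a ^ b ^ cin                   # sum bit from XOR gates
              cout = (a & b) | (cin & (a ^ b))  # carry bit from AND/OR gates
              return s, cout

          def full_adder_arith(a, b, cin):
              total = a + b + cin               # same abstraction, arithmetic substrate
              return total % 2, total // 2

          assert all(full_adder_logic(*bits) == full_adder_arith(*bits)
                     for bits in product((0, 1), repeat=3))
          ```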


          19. “Okay, what’s your basis for assuming the mind can be multi-realized?”

            To me, it boils down to the fact that the brain is a physical system, one whose structure and function can plausibly be mapped. And I see lots of evidence for it being an information processing system. Other than supporting systems (nutrients, etc), that’s all I really see. Maybe something will be discovered tomorrow that changes that, but until then, I see no reason why a recorded mapping of such a system couldn’t be used to construct / configure another information processing system to perform the same functions.

            As I’ve noted before, practical issues may limit exactly what kind of physical system could do it. We’d lose a lot of performance going from a physical neural network to a serial one with only a few processors. A massively parallel architecture of some type seems like it would be a practical necessity. I’m open to the possibility that a neuromorphic system might be necessary, although I can’t see any reason why such a system couldn’t be part of the hardware of an overall virtual environment.


          20. “And I see lots of evidence for it being an information processing system.”

            So is every other living thing.

            “Other than supporting systems (nutrients, etc), that’s all I really see.”

            Wow! That’s a simplification. The brain doesn’t have a program running on processors. The processors are the program.


          21. “To me, it boils down to the fact that the brain is a physical system, one whose structure and function can plausibly be mapped.”

            Isn’t that much true for any physical system? Lasers, rain storms, airplanes, etc?

            While everything can be numerically simulated, simulated X isn’t Y, so the general case is that physical systems are not multi-realizable as simulations.

            (Physical systems can be realized in multiple ways physically, though. A house can be built from wood or stone and still function the same.)

            “And I see lots of evidence for it being an information processing system. Other than supporting systems (nutrients, etc), that’s all I really see.”

            But, overall, what kind of “information processing system” — a linear system (like a transistor radio) or a numerical system (like an abacus)?

            A radio can be realized in multiple physical ways, but a numerical simulation can’t pick up radio waves (or short out). An abacus, however, can be realized in myriad ways all functionally identical.

            But isn’t the brain more like the radio than the abacus?

            “I see no reason why a recorded mapping of such as system couldn’t be used to construct / configure another information processing system to perform the same functions.”

            But what kind of system?

            It’s reasonable to consider duplicating a physical system as a physical system (but say with different materials or construction techniques). That’s the Positronic Brain, and unless there’s something very special about biology, it’s reasonable to think it would work.

            But a numeric simulation of a physical system, given the general case that simulated X isn’t Y, assumes facts not in evidence.

            So I don’t think you’ve answered the question.

            But, to be fair, I don’t think there is an answer; it’s an assumption.


          22. “But isn’t the brain more like the radio than the abacus?”

            Specifically, how so? You identified the non-information processing aspects of a radio. What are the non-information processing aspects of the brain? (Aside from support systems like glia.)

            “But what kind of system?”

            As I said above, a mixture of analog and digital. We just disagree on how much of an issue the analog part is.

            Honestly, with your positronic brain point and mine about practical architectures, I don’t know that we’re that far apart. I just see it as a degrees of performance issue instead of some sharp qualitative break.


          23. “Specifically, how so? [Is the brain more like a radio than an abacus.]”

            As we have discussed before, the brain is an analog signal (information) processing system. So is a radio. In contrast, an abacus is a numerical (digital) abstraction.

            “You identified the non-information processing aspects of a radio.”

            No, the radio is an information processing system. But a vastly different information processing system than is an abacus.

            “What are the non-information processing aspects of the brain?”

            We’re not talking about non-information aspects.

            ” We just disagree on how much of an issue the analog part is.”

            I think it’s how much of an issue the digital part is that’s more the bone of contention. I think we basically agree on the analog aspects. For instance,…

            “Honestly, with your positronic brain point and mine about practical architectures, I don’t know that we’re that far apart.”

            To the extent we’re talking physically isomorphic replication, I think there’s no daylight between us. It’s only the idea that a numerical simulation gives rise to consciousness that I’m skeptical of.

            Which is all I’ve ever said: that I’m skeptical of computationalism. (I have to say: I find it weird that you — who are usually so skeptical — seem so opposed to my skepticism here. We’ve been chewing this bone for years now, and you don’t seem to see any validity in what I’m saying. I find that quite disappointing.)


          24. “I have to say: I find it weird that you — who are usually so skeptical — seem so opposed to my skepticism here.”

            My skepticism is of there being anything magical about the mind, or anything so uniquely special and delicate about its operations that it can’t be modeled and reproduced. Like anything I’m skeptical of, I’m prepared to change my views on evidence, but all the evidence I do see points in the opposite direction.


          25. Amigo, you seem to have a blind spot, because it’s never been about the mind being “so uniquely special and delicate about its operations that it can’t be modeled and reproduced.”

            It has always been about the tension between two propositions:

            1. A sufficiently detailed numerical simulation can generate numbers that arbitrarily closely match the numbers we get by measuring the same properties of the real thing. (Which is what you just said.)

            2. Numerical simulations of physical systems cannot possess all the same properties as the physical system. (“Simulated X isn’t Y.”)


        5. [bringing this thread up a little to separate it from another]

          “To be clear, the argument in question is: Numerical information systems are physical systems (with physical causes), therefore there is no real difference between a physical system and a numerical system.”

          This is a straw man argument. Neither Mike nor I would defend it. Replace “numerical information systems” with “apple” and you get “there is no real difference between a physical system and an apple”. Not all physical systems are apples.

          Here’s what I’ll defend: all information systems are physical systems. Two different informational systems can have drastically different physical organization, but if they have the same informational description then they are equivalent with respect to their informational content.

          We can have a system1, which is a physical calculator. We can have a system2, which is a simulation of a physical calculator. There are things that system1 has that system2 does not, namely, a particular mass, size, etc. (System2 has those things, but they are very different.). But those are not the properties we (Mike and I, at least) care about. We care about certain informational properties, like the ability to calculate the square root of 3534. Given the proper inputs, both systems produce the same answer.
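          [A minimal sketch of this equivalence in Python – my own illustration, not anything from the thread; the function names are made up:

            import math

            # Realization 1: whatever algorithm the platform's math library uses.
            def sqrt_library(n):
                return math.sqrt(n)

            # Realization 2: Newton's method -- an algorithmically (and, on real
            # hardware, physically) different process arriving at the same answer.
            def sqrt_newton(n, iterations=60):
                x = float(n)
                for _ in range(iterations):
                    x = (x + n / x) / 2  # Newton update for f(x) = x^2 - n
                return x

            print(sqrt_library(3534))  # 59.447...
            print(sqrt_newton(3534))   # 59.447...

          Drastically different organization, same informational content – which is all the equivalence claim above asserts.]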

          Wyrd, part of the problem is that you refuse to accept the arguer’s definition for the sake of argument. Case in point:

          “At no point is anyone expecting the description itself to be conscious.”

          If you actually meant that, you’d be in my camp, because a software simulation is just a description (another point I’ve made repeatedly).

          When Mike and I say “software simulation” we mean “a physical machine running particular software”, but you do not want to accept that. So instead, every time we would normally say “software simulation” we will have to say “a physical machine running a particular software simulation”. Okay then. We all agree, no software simulation of a brain is conscious. I, at least, say that a physical machine running a software simulation of a brain is conscious, because all of the functions I know about associated with consciousness are information processes. Now where are we?

          *

          1. “This is a straw man argument. Neither Mike nor I would defend it.”

            Mike has been vigorously defending it for years. So have you. You defend it in your reply here!

            “Replace ‘numerical information systems’ with ‘apple’…”

            LOL! What?! Dude, you don’t get to just do random word substitutions in the arguments of others. The words used have meaning.

            “Here’s what I’ll defend: all information systems are physical systems.”

            Yes, obviously. And? See Magnitudes vs Numbers for why this isn’t a workable argument.

            “Two different informational systems can have drastically different physical organization, but if they have the same informational description then they are equivalent with respect to their informational content.”

            Let’s be clear that by “information system” we mean an abstract information system such as, for instance, a “full adder” (which adds two binary digits plus a carry-in).

            And, yes, as we’ve all agreed all along, such information systems can be realized in a variety of ways. But it is begging the question to assume the mind is such a system. There is no factual evidence that it is.
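            [To make that multiple realizability concrete, a small sketch – mine, not the commenter’s – of a full adder realized two ways in Python:

              # Realization 1: boolean gate expressions.
              def full_adder_gates(a, b, cin):
                  s = a ^ b ^ cin
                  cout = (a & b) | (cin & (a ^ b))
                  return s, cout

              # Realization 2: a bare lookup table (built from the gate version
              # here for brevity; imagine it written out by hand instead).
              TABLE = {(a, b, c): full_adder_gates(a, b, c)
                       for a in (0, 1) for b in (0, 1) for c in (0, 1)}

              def full_adder_lookup(a, b, cin):
                  return TABLE[(a, b, cin)]

              # Identical informational behavior, very different mechanism.
              assert all(full_adder_gates(a, b, c) == full_adder_lookup(a, b, c)
                         for a in (0, 1) for b in (0, 1) for c in (0, 1))

            Whether the mind is that kind of system is, as the commenter says, exactly the point in dispute.]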

            “We can have a system1, which is a physical calculator.”

            Yes, because a calculator is an abstract information system, so of course it can be realized in multiple ways that are functionally identical. But it is begging the question to assume the mind is such a system. There is no factual evidence that it is.

            “Wyrd, part of the problem is that you refuse to accept the arguer’s definition for the sake of argument. Case in point:”

            Not so, because:

            “When Mike and I say ‘software simulation’ we mean ‘a physical machine running particular software’, but you do not want to accept that. “

            I mean exactly the same thing by it. (Is it possible you don’t really understand my argument? This response seems to indicate you don’t.)

            “I, at least, say that a physical machine running a software simulation of a brain is conscious, because all of the functions I know about associated with consciousness are information processes. Now where are we?”

            I’m skeptical of that very proposition. For reasons I’ve enumerated for years with you guys, for all the good it’s done me.

            Basically it boils down to: Simulated X isn’t Y. I like the laser analogy:

            ¶ Certain physical materials under certain conditions generate coherent laser light, an emergent physical phenomenon. But a numerical simulation of those materials, no matter how accurate or detailed, can never produce photons.

            ¶ Certain physical materials under certain conditions generate consciousness, a putative emergent physical phenomenon. But a numerical simulation of those materials, no matter how accurate or detailed, can never produce consciousness.

            Until someone can show me why that argument has to be false, I maintain that skepticism regarding computationalism is warranted.

          2. BTW: “…because all of the functions I know about associated with consciousness are information processes.”

            But of course there are huge gaps in that knowledge, right? There is a great deal that we don’t know, yet.

          3. “Two different informational systems can have drastically different physical organization, but if they have the same informational description then they are equivalent with respect to their informational content.”

            If all physical systems are informational systems, then a water molecule would be an informational system. Explain how something other than a water molecule could have the same informational content as a water molecule.

          4. James Cross, information is not intrinsic to the physical structure itself but is associated with correlations to other things. So it’s possible that a single water molecule in a box is a representation (“one if by land, two if by sea”). A different vehicle for the representation would have the same information.

            Also, there may be a correlation with the water molecule’s causal history. The presence of water in a box where there used to be only oxygen and methane is correlated with something that ignited a reaction. In the latter case, the presence of carbon dioxide in the box would have the same informational content as the water, with respect to the cause of ignition.

            *

  7. Maybe quantum physics would get in the way of mind uploading? In the Mass Effect universe, they said something about the uncertainty principle making it impossible to copy someone’s mind. You could still try to make a copy, and you’d end up with a sentient A.I., but there would be so many single bit errors that you could never create a perfect duplicate of yourself.

    1. A perfect copy is probably not possible anyway. Even if only classical physics are involved, I suspect we’d have to settle for a “good enough” copy. The question is how much fidelity we’d demand. If you’re tempted to say 100%, remember that you’re not a 100% perfect copy of yourself from yesterday, even less so from last year, or five years ago.

  8. There could be no dispute between you and an upload, as you would own the upload. As you now own certain rights, even as to the words you speak or write, you would certainly own the rights to any digital copy of something you created. (Yes, a new form of slavery.)

    So, this copy would be your slave! You could force it to do work for you! (Imagine the digital tortures you could invent to deal with recalcitrant copies of yourself. And then, think of the parallel processing capabilities of a multitude of “yous” working together!)

    Such speculations are probably worthless, as all predictions of the future have a vanishingly small probability of coming true. The people who brag about predicting the Great Recession, or whatever, were swamped by the number of people who predicted something else. Consequently, we have no way of sifting through the morass of predictions to find one which might guide us to make good decisions.

    The best thing you could say about such discussions is that they might sell a few books.

    1. “… as you would own the upload”

      Not necessarily. Apparently we don’t own our DNA in our wonderful capitalist system. Probably Google, Amazon, or Facebook would claim the rights because… because they can.

  9. I’m sure you recall from previous discussions that I see mind uploading as the hardest problem because it requires solving all the other problems first.

    We’d need to fully understand the human connectome along with all biological effects in the brain. We’d need to be able to scan a brain at a fine enough resolution to capture everything necessary. We’d also need to either figure out computationalism or, if a software simulation can’t work, figure out how to build programmable “mechanical” brains. In either case, the software or hardware needs to embody all the effects of a brain.

    And there are practical considerations on top of that. The connectome amounts to petabytes of data, which requires a lot of bandwidth and storage. (I suspect, if it’s possible at all, it’ll amount to something like an ultra-high resolution MRI scan of the brain that is processed later to derive the connectome and other structures from the scan data. Such processing could take hours.)
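    [As a rough back-of-envelope check on the petabytes figure – commonly cited round numbers; the per-synapse byte count is a pure assumption:

      neurons = 86e9                     # ~86 billion neurons
      synapses = neurons * 1e4           # ~10,000 synapses per neuron
      bytes_per_synapse = 4              # assume one 32-bit value per synapse

      total_bytes = synapses * bytes_per_synapse
      print(total_bytes / 1e15, "PB")    # ~3.4 petabytes, before any other state

    Any additional per-synapse state (type, dynamics, geometry) multiplies that figure.]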

    When it comes to mind uploading, I’d guess that for most the idea implies computationalism — that the uploaded mind would live in a software-based virtual reality. (As in a lot of Greg Egan’s stories.) But as you touch on in the post, computationalism is an unanswered question.

    As you wrote:

    “I think Koch too quickly dismisses the idea of it being sufficient to reproduce the causal structure at a particular level of organization.”

    Without meaning to revive an old debate, of course the question is whether that causal structure must be physically realized or can be a numerical simulation.

    As you go on to say, if minds require actual physical brains, this makes the uploading proposition much harder. We might be able to construct a working new brain that grows its own connections, but making one that’s a functioning copy of an organic brain seems like an extra level of difficulty.

    Obviously nature creates new brains all the time (like 350,000 per day). But duplicating not just the raw connectivity, but the training of that network, plus any other effects,… that’s a whole other ball of wax.

    I can see us learning to create new brains that need to be trained. And those brains might be easier to copy. They certainly would be if software-based. Hardware-based might depend on how easy it is to “wire” a brain to specification. (Compare stamping out CDs versus having to record tapes.)

    Here’s a scenario: Brain uploading turns out to be possible, but it’s really expensive and remains fairly difficult. But once uploaded, brains can be copied, so we end up with lots of copies of only certain wealthy or famous people. Not sure I’d like that future, but it might make a nice SF story.

    1. Certainly the engineering problems are stark, but I do think they’re engineering problems. A society that has mastered nanotechnology should be able to overcome all of them, even if a particular physical structure is required. (Unless nanotechnology is impossible, but we have the example of proteins to show that, at least at some level, it is possible.)

      Mind uploading being extremely difficult and expensive is actually a pretty common sci-fi scenario. It allows an author to explore the concept without having to deal with all the social implications. The societies depicted are usually fairly dystopian, not ones most of us would care to live in.

      1. “I do think they’re engineering problems.”

        Indeed, but not all engineering problems can be solved. Physical limits can make some things effectively impossible. Nano-machines, for example, have limits involving power and leverage.

        Brain uploading, if possible, will test physical limits, too, in terms of bandwidth and data size. Possibly also resolution if extremely small features of the brain turn out to matter.

        All I’m saying is that brain uploading seems to me the hardest technology to accomplish, if it’s even possible at all.

  10. One interesting question this topic raises is the idea of identity. Some people see mind uploading as some form of immortality, but I think Graziano’s essay manages this well. Why would you want to do this? That uploaded copy is not you. Sure it may be a kind of copy of you, but so is a photograph. Many of the rich try to immortalize themselves by having portraits done, frequently full length. Personally, I don’t see the point. Anyway, by the time we can accomplish mind-uploading we will have cured aging, so what would be the point?

    *

    1. We can only cure aging to a point. We might be able to extend the life of our natural bodies for centuries, but not forever. Eventually entropy always wins. Of course, that also applies to minds copied over and over, although the time span seems like it could extend much further into the future.

      1. Mike, so you don’t think the ship of Theseus could go on forever, just replacing small parts here and there? Centuries, millennia, eons, but not forever. There’s that whole heat death of the universe thing.

        And trust me, entropy is winning the whole time. You generate much more entropy as a live person than as a dead one.

        *

        1. I think when my time comes, I’d be okay with that. I don’t think I’d want to stop aging and physically live until the sun turns into a red giant and burns the earth up.

          1. I probably agree, but my standard response is: ask me again in 1500 years.

            *
            [also, why does everyone think we would just sit around and let the sun turn into a red giant? A billion years is a lot of time to fix things. 🙂 ]

        2. James, I think the ship of Theseus would inevitably have replication errors over time (think hundreds of thousands, millions, or billions of years). Any mechanism we can conceive of to eliminate those errors will be composed of matter, constantly being buffeted by electromagnetic fields and other forms of radiation. Over time, errors will creep into that mechanism. Even if we have a mechanism to police the mechanism, the issue eventually arises.

          If we figure out how to repair any part of the body, or replicate the mind, over millions, or possibly billions of years, variation will creep in. Unless we’re making lots of copies and having them compete, it likely will eventually lead to degradation and non-function. Of course, if we have lots of ships of Theseus being replicated and competing for resources, the variations and scarcity will lead to natural selection, with the long range consequences difficult to predict.

          As you and Linda discussed, immortality, true immortality, doesn’t exist. But even being long lived is going to be far from a cakewalk. If nothing else, the long slow heat death of the universe seems hard to imagine being a pleasant experience.

          1. “If nothing else, the long slow heat death of the universe seems hard to imagine being a pleasant experience.”

            That too. I agree. That’s not really true immortality if you think about it.
            And slow heat death definitely does not sound pleasant at all.

          2. Mike (and for the second part, Linda), I think you’re missing a couple things. First, you don’t replace parts of the ship for the hell of it. You replace them when they are not functioning optimally, and you replace them with parts that are functioning optimally. I don’t see any reason you can’t do that indefinitely. Admittedly there may be drift in structure over long periods of time, but I’m okay with that. I’ve noticed significant drift in my structure over the past ten years. But if we can digitally record the state of the whole body, even the drift you’re thinking about can be eliminated.

            But of course, it doesn’t stop at curing aging. I want my 25 year old body back. Also, when we can do this, things get weird with “enhancements”. Who knows where things will go? Thus, the singularity.

            Second, why would you think the “long, slow” heat death of the universe would be any different from what we are experiencing right now? In any case life is accelerating entropy, and barring catastrophe, will continue to accelerate entropy, meaning the long slow heat death will be significantly shorter and faster than you think.

            *

          3. James, consider where you’re getting the new parts that are functioning optimally. What constructs them? From what design? And what decides whether they’re optimal? Whatever those mechanisms are, what prevents either the information it uses from experiencing corruption, or the mechanism itself? If the reference to correct either is itself corrupted, then how does the system recover? How does it even know there’s a problem?

            Consider electronics in space probes. They have to be hardened against radiation. But even with that hardening, some radiation inevitably gets through and causes errors, and eventually failures. Now take any system and consider it over vast stretches of time.

            I suppose the heat death of the universe could be made as pleasant as we’d like it to be, except that it seems like we’d be aware of the steady decay. What will be our state of mind as the last suns go out and we have to depend on white dwarfs, then the Hawking radiation from black holes, then as even that starts to fade? We could slow our processing down as the energy fades (we’d actually have to), but that just seems like it would rush us to the end.

          4. “James, consider where you’re getting the new parts that are functioning optimally. What constructs them? From what design? And what decides whether they’re optimal? Whatever those mechanisms are, what prevents either the information it uses from experiencing corruption, or the mechanism itself? If the reference to correct either is itself corrupted, then how does the system recover? How does it even know there’s a problem?”

            Is it just me, or should this logic apply to other things we’ve been making and repairing for a long time, like tables, bowls, etc.?

            *

  11. Mind uploads and backups were explored extensively in Down and Out in the Magic Kingdom, but that didn’t include the virtual world, which has been explored in a number of Black Mirror episodes from different angles. One thing none of them have yet included (to my knowledge) is that in a virtual world, there could be more than one “you”. The possibilities there are intriguing.

    1. Virtual worlds and multiple copies are explored extensively in Greg Egan’s books, particularly Diaspora and Permutation City. Although I should warn you that his books are pretty hard-core SF. Another interesting take on multiple copies is Linda Nagata’s books, particularly The Nanotech Succession. I found the first and third books particularly enjoyable. (Note that there is a zeroth book to the series, which I haven’t read.) Another classic take is Charles Stross’ Accelerando.

  12. “Want to influence the future? Have children, write books, make movies… Want some experiences to look forward to?”

    I don’t wanna have kids. If I had kids, being able to travel to different countries (what I’m currently doing) would be impossible. And I won’t be able to sleep in. You don’t mess with me when I’m sleeping in. 😉

  13. Even though I used the term, I’m not sure we are talking about simulation. It seems we are talking more about the idea that mind is software and we are just porting it to a different machine. I am not convinced that will be possible.

    Right, James Cross, the question of whether or not our brains do nothing more than process information in order to create conscious experiences is key here. If there are actually mechanisms in the brain which create conscious experiences, then conscious experience in an entirely informational sense (or consciously existing without those body mechanisms) should not be possible. Here, if your brain information were uploaded to a computer, there shouldn’t be a conscious entity until it is downloaded once again to something with those missing brain mechanisms. So then fear not, sci-fi lovers: there would still be the possibility of getting your brain information shot across space to a suitable robot.

    Is there reason to suspect that conscious experiences are more than information alone? Yes, or at least if you find it implausible that symbol-laden sheets of paper, processed into other symbol-laden sheets of paper, can in themselves produce what we know of as “thumb pain”.

    If information alone can exist as all that we know of as conscious experiences, then what else can exist in such a way? I can’t think of anything that exists by means of information alone, so I think your skepticism here is valid, James.

  14. As a general comment, it is worth noting that the tradition of predicting that “X will never be possible” goes back a long way and has claimed many famous victims. Being a knowledgeable expert offers absolutely no protection against this pitfall.

    Eight years before the Wright brothers made their historic flight, Lord Kelvin stated that “heavier-than-air flying machines are impossible.”

    In 1932, Einstein wrote that, “There is not the slightest indication that [nuclear energy] will ever be obtainable.”

    1. Good point. I don’t think I knew that about Einstein. Interesting.

      I do think it’s rational to point out the difficulty of achieving some things. For example, faster than light travel is a problematic notion, one that if we ever achieve it, will likely be far weirder than we can appreciate right now. But when we make those kinds of statements, we should be very clear what the actual obstacles are, such as particular laws of nature, and realize that most engineering obstacles are temporary.

      1. We could certainly say that faster than light travel is forbidden by the laws of general relativity, but we would then also note that we do not yet have a theory that meshes relativity with quantum mechanics, and that there are numerous phenomena such as dark matter and dark energy that these theories cannot explain.

      2. After reading much Einstein biography, I think it’s very likely that at that time Einstein was deliberately and quite publicly discounting the possibility, in hopes of discouraging any efforts at nuclear weaponization. Several years later, horrified by the prospect that the Nazis would succeed, he encouraged the American development of the ‘unobtainable’.

  15. Since no one proposes scanning the entire brain—the cerebrum, brainstem and cerebellum—it’s obvious that Graziano’s premise is rooted in the evidence-free cortical consciousness hypothesis, which assumes that consciousness will (Shazam!) emerge if the cortical connectome is reproduced computationally. That’s engaging science fiction, as in Greg Egan’s Permutation City, but it is groundless crap science.

    And even if it were possible to create consciousness in such a fashion, note that a disembodied consciousness is the equivalent of a perfectly implemented sensory deprivation tank and would be nothing but an insanity generator. If ‘you’ even had meaning in such a context, what would ‘you’ do? Silently resolve equations to the fifth degree?

    Consciousness IS a simulation of ourselves centered in the external world. We don’t see photons or wavelengths. We don’t hear waves and vibrations: sound is not waves propagating through a medium—it’s a biological feeling. Feelings are biological simulations based on biological sensory input. It’s obviously impossible for a computation to feel anything at all. Feeling is a biological implementation and the only way to implement biology is … biology.

    If you disagree and believe that ‘uploading’ is at all possible, start simple and start now. Compute the hell out of whatever you like using all the supercomputers in existence and let us know when your computed construction can feel any aspect of its internal state.

      1. Thanks Mike … I stand corrected. I’ve just ordered Seung’s Connectome to learn more.

        My immediate Wikipedia investigation reveals, however, that the microscale maps I suspect would be required to understand consciousness are many years away, and perhaps even that is overly optimistic. The data collection itself:

        “… would take years given current technology, machine vision tools to annotate the data remain in their infancy and are inadequate, and neither theory nor algorithms are readily available for the analysis of the resulting brain-graphs.”

        And that’s just speaking of mapping neural connectivity. What about elucidating the functionality of the innumerable arrays of nuclei in the brainstem—a functionality that may be unrelated to connectivity? I think it’s important to understand that in-structure and inter-structure neural connectivity has not been shown to have explanatory value vis-a-vis the production of conscious images, although that connectivity may be related to the resolution of the contents of consciousness. Note that a wiring/circuitry diagram for a TV provides no understanding of the sound and picture produced.

        It remains true, however, that the assumption underlying Graziano’s proposal—that neural connectivity itself produces consciousness—is a groundless assumption for which no evidence exists. Or, at least, none that I’m aware of … if credible sources disagree, please advise.

        1. The difficulties are enormous. And it’s possible that the topology of the neurons as well as their connections, not to mention the surrounding glia and other support mechanisms, and possibly even genetic expressions, need to be accounted for as well. It’s why I suspect this is centuries away.

          I started to read Seung’s book several years ago, but there was something about his writing that made me uneasy. But I can’t remember now what it was.

          “What about elucidating the functionality of the innumerable arrays of nuclei in the brainstem—a functionality that may be unrelated to connectivity?”

          What’s leading you to think that? From everything I’ve read, connectivity is everything for neurons. You’re certainly not going to get sensory-to-motor transformations without it.
          Although the extent of neural dendrites, axon terminals, soma structure, and other dynamics vary with the hundreds (thousands?) of neuron types, not to mention the vast array of synapse types.

          1. Your answer captures “what’s leading me to think that”… it’s, as you said:

            “… the extent of neural dendrites, axon terminals, soma structure, and other dynamics vary with the hundreds (thousands?) of neuron types, not to mention the vast array of synapse types.”

            All that’s ultimately required is for one of those countless configurations to BE a feeling.

  16. [continuing thread]

    ¶ Certain physical materials under certain conditions generate coherent laser light, an emergent physical phenomenon. But a numerical simulation of those materials, no matter how accurate or detailed, can never produce photons.

    ¶ Certain physical materials under certain conditions generate consciousness, a putative emergent physical phenomenon. But a numerical simulation of those materials, no matter how accurate or detailed, can never produce consciousness.

    ¶ Certain physical materials under certain conditions generate Information, an emergent physical phenomenon. And a numerical simulation of those materials, if accurate or detailed, can also produce essentially identical information.

    So please stop with the lasers. We don’t need another way to say simulated water is not wet. The question is whether Consciousness is something physical, like water, or whether it’s something informational, like representations. If it’s the former, you’re right. If it’s the latter, I’m right.

    My argument for why Consciousness is informational is that I can give a (superficially plausible) explanation for all the phenomena I know about which are related to consciousness in informational terms, specifically representational terms, without reference to underlying materials.

    Your argument referenced above suggests that Consciousness is some material thing. How would you measure this material thing? What would you do to show system1 has consciousness but not system2?

    *

    1. “Certain physical materials under certain conditions generate Information, an emergent physical phenomenon.”

      It’s true that what emerges in a physical system has information, but in a physical system that information has physical heft. In an information system it does not. That’s a key distinction.

      There is also the fact that, in a physical system, the information consists of magnitudes of forces directly associated with the information itself, whereas in an information system, information consists of numbers arbitrarily assigned to represent physical magnitudes.

      “And a numerical simulation of those materials, if accurate or detailed, can also produce essentially identical information.”

      But not the emergent physical phenomenon.

      Sorry but the analogy fails. It only demonstrates that, as we all already agree, information systems can be simulated by other information systems. That has never been in dispute. What is in dispute is whether the mind is just an information system.

      “So please stop with the lasers. We don’t need another way to say simulated water is not wet.”

      I know you don’t like the argument, but that’s because it might be right. As you go on to say:

      “The question is whether Consciousness is something physical, like water, or whether it’s something informational, like representations. If it’s the former, you’re right. If it’s the latter, I’m right.”

      Exactly. IF consciousness is akin to laser light, THEN it requires certain physical circumstances to arise. That’s the only argument I’ve ever made; that consciousness might require specific physical circumstances.

      “My argument for why Consciousness is informational is that I can give an (superficially plausible) explanation…”

      Really? What specific physics causes phenomenal experience? You know all answers to that are speculative, so take that as a rhetorical question.

      The point is, as you said, I might be right, or you might be right. I think my view is at least as valid as yours.

      “Your argument referenced above suggests that Consciousness is some material thing. How would you measure this material thing? What would you do to show system1 has consciousness but not system2?”

      It may be that when we truly understand what consciousness is there will be a way to objectively measure the consciousness of a system. In the meantime, all we can do is consider its visible effects. I’ve written about things I think are key aspects.

      The short form: I think consciousness is loud and attests to itself. I think consciousness says “Ow!” and gets pissed off when you kick it.

    2. So if mind is “intrinsic to the physical structure itself”, then the whole informational paradigm falls apart. That would be exactly my suspicion: that mind arises precisely from the actual biological molecules of the brain.

      1. Instead I would say behavior is intrinsic to the physical structure itself. “Mind” is a conceptualization of certain behavior. When we see particular sets of behavior, we say there is a mind. When we see only some of that behavior, like speech recognition, but not other kinds of behavior, like causal reasoning, we’re not sure whether to apply the moniker of “mind”. (Well, some of us are sure that is enough, and some of us are sure that isn’t enough.)

        The behavior generally accepted to encompass “mind” requires informational capabilities. Which particular capabilities are required is debatable and debated.

        *

        1. That mind requires or uses informational capabilities isn’t the same as mind being wholly definable by them. That is where I think the problem comes in. We look at the brain, see information processing, and assume that is what mind is, when it might be only part of what mind is.

          When we look at nature and see things we are pretty sure have minds, we see things with brains that overall look pretty similar. They are various forms of specialized cells made from organic molecules and channels for ion flows.

          When we look at a water molecule we can find a combination of properties that can only be found in water molecules. We can combine hydrogen and oxygen in all sorts of permutations with other elements and not produce the properties of water. Even combining two atoms of hydrogen with two atoms of oxygen produces something different. There is something inherent in the structure of water – two atoms of hydrogen, one of oxygen – that gives its properties.

          In a more complex way, I think that mind arises from the actual physical structure of the nervous system, not just the flow of information in it.

          1. The things we think are conscious have brains, and brains are generally alike, with similar cells, structures, and chemicals. Other things that have information flow – computers, networks – do not appear to be conscious and do not have the same structures as brains. It seems like quite a leap of faith to think information flow by itself explains consciousness.

      2. “It seems like quite a leap of faith to think information flow by itself explains consciousness.”

        I never said information flow by itself explains Consciousness. I don’t even know what “information flow” means, but I will assume it is synonymous with information processes. I said/say Consciousness is/can be/will eventually be explained by reference to certain information processes, without reference to the specifics of the underlying physical mechanisms.

        *

  17. Apologies… My day job entails that I can only dip in and out of this [hugely interesting] debate very occasionally, and as my team has a live production deployment issue atm, only very lightly at that. So apologies if any of the following points have already been dealt with in the above comments, which I have at best skimmed through today.

    1. “Proposition 1: A numerical simulation can describe a physical process to arbitrary levels of precision — often to levels beyond what analog copy methods could achieve.”

    When discussing chaotic systems, I understand that the above only follows when appropriate “shadowing theorems” hold. I.e. when the chaotic system is “well behaved”, a computed trajectory will approximate the real-world [chaotic] system, but this is, of course, not always the case (cf. Peter Smith (1998), Explaining Chaos, p. 59).

    2. So, in the same way that a simulation of a system (when this is possible – see comments on proposition 1 above) is always a simulation (contra duplication) at *some level of abstraction* chosen by the engineer (e.g. classical neuroscience might focus on firing frequencies, inter-spike intervals, etc.; Penrose & Hameroff might focus on the properties of the neural cytoskeleton and microtubules), if it is the case that consciousness supervenes on the brain, in the body, in society (as I contend), then simulation of firing patterns, or neuronal skeletons, will not instantiate it any more than a mere simulation of an electrical generator will generate appropriate magnetic fields and hence electricity; you need the physical embodiment of the system for that.

    3. It seems to me that real computation – as, say, executed by my PC – is fundamentally observer relative (cf. discussion from Searle, Shagrir, Piccinini, Schweizer, myself, Winograd and Flores). For a simple demonstration of this claim, consider a logical AND computation as instantiated in, say, TTL logic (HI – or TRUE – is 5v; LO – or FALSE – is 0v); if I were to [arbitrarily] remap the voltage to logic such that 0v was TRUE and 5v FALSE, then the same piece of [computing] hardware would now perform the OR computation (interestingly, Oron Shagrir develops this position much more deeply via multiple parallel simulation; cf. Shagrir (2001): “Content, Computation and Externalism”, Mind 110: 369-400). Alternatively, to see the social context of computation: in “Understanding Computers and Cognition”, Winograd & Flores describe a situation whereby the same physical computing machine – say the electronics from a 1970s-style toy chess computer – is used (a) to play a game of chess and (b) to control lights in an art gallery. In other words, and to paraphrase Wittgenstein, it seems to me that the *meaning* of a computation is in its use [within human society]. I.e. there is no core computational ‘meaning’ intrinsic to the machine itself (which, at the end of the day, merely shuffles around ones and noughts; manipulates voltage levels (cf. Searle 1980)).

    Indeed, as the ever-widening literature around these fields attests, what precisely constitutes computation, information and data is itself quite tricky to pin down (cf. Floridi on information and data; Searle, Piccinini, Shagrir, Schweizer on computation, for example).

    NB. In a similar vein, binary discussion of hardware and software also seems a little misleading. As Doug Hofstadter foregrounded long ago in GEB, post Turing, a computing machine can be specified as data; and in any real computing machine, software [& data] can be permanently fixed in hardware (ROM). Analogously (as proved by Richard Bird in the early 70s), for any ‘flowchart program’, variables can be trivially encoded into the structure of the flowchart itself (at the cost of increasing program size), with a concomitant increase in execution speed – a result in theoretical computer science with some parallel to Ned Block’s philosophical “Blockhead” argument; wherein lies the hardware, and wherein the software, of a computation?

    Furthermore, if it is seriously claimed that the mere execution of a suitable computer simulation fully instantiates mind, with all its causal powers, then the door is prised open to a vicious form of panpsychism wherein ‘mind’ is found everywhere (cf. Searle, “Is the Brain a Digital Computer?”; Putnam, Representation & Reality; and Bishop, “Dancing with Pixies”), as, post hoc, a mapping can always be established between, say, any suitably large counter [e.g. any open physical system] and the state transitions of an FSA as it executes the computational simulation over known input.
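    [A toy sketch of that post hoc mapping – my own illustration of the Putnam/Bishop-style move, in Python:

      # Record the state trace of an FSA executing over a known input...
      transition = {("A", 0): "A", ("A", 1): "B", ("B", 0): "B", ("B", 1): "A"}
      state, trace = "A", ["A"]
      for symbol in [1, 0, 1, 1]:
          state = transition[(state, symbol)]
          trace.append(state)

      # ...then, after the fact, map a bare counter's values onto that trace.
      # Under this interpretation, the counter "implements" the execution.
      interpretation = {t: s for t, s in enumerate(trace)}
      print(interpretation)  # {0: 'A', 1: 'B', 2: 'B', 3: 'A', 4: 'B'}

    The mapping is only available post hoc, for a bounded run over known input – the point taken up in the replies below.]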

    1. It caught my eye that you quoted my “Proposition 1.” It appears we have similar views about computationalism. I agree with the points you made.

      WRT Chaos: Definitely a consideration for numerical calculation, but we can use precision to arbitrary levels (at the cost of speed and memory). The Mandelbrot set is an example of a chaotic system routinely calculated (or rather, tiny bits of it are) to truly mind-boggling precision levels (10^1000 and beyond). But I agree there is a question of whether that’s good enough, especially over time, for a highly complex dynamic system. It might not be.

      WRT the AND-OR interpretation… I agree with the point — that computation is observer relative — but I find the AND-OR example less than ideal, because those logic functions are tightly coupled through:

      NOT(A) OR NOT(B) == NOT(A AND B)
      NOT(A) AND NOT(B) == NOT(A OR B)

      So naturally reversing the voltage interpretation swaps the logical sense. (See this post for details.) Note that it would be much more difficult to interpret those voltages as an XOR function, let alone something like a half-adder.
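      [A quick sketch of that duality – mine, for illustration; voltage levels are modelled as 0/1 in Python:

        def and_hardware(v1, v2):   # stands in for the physical TTL device
            return v1 & v2

        for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            conv1 = and_hardware(a, b)              # 5v = TRUE: reads as AND
            conv2 = 1 - and_hardware(1 - a, 1 - b)  # 0v = TRUE: reads as OR
            print((a, b), conv1, conv2)

      Under the inverted reading, the very same device yields the OR truth table, exactly as the identities above predict.]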

      WRT Dancing Pixies. One way out is to require the interpretation be minimal — certainly much smaller than the putative computation.

      1. Hi. Thanks for the above, although I am unclear how, sensu stricto, the observation on chaos holds. (I originally posted a link to the wiki [on shadowing theorems] but the blog-system cut it for some reason, so it is not displayed above; in any event, Smith is (for me at least) the easier read.) Because, when shadowing theorems don’t hold, a computational system can diverge *exponentially* from the real physical [chaotic dynamical] system it is simulating; in such cases I can’t see how to obtain “precision to arbitrary levels”.
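        [A small illustration of that exponential divergence – my own sketch, using the logistic map at r=4, a standard chaotic system (requires numpy):

          import numpy as np

          x32, x64 = np.float32(0.2), np.float64(0.2)  # same seed, two precisions
          for step in range(1, 61):
              x32 = np.float32(4.0) * x32 * (np.float32(1.0) - x32)
              x64 = 4.0 * x64 * (1.0 - x64)
              if step % 10 == 0:
                  print(step, float(x32), float(x64), abs(float(x32) - float(x64)))

        The rounding difference grows roughly exponentially, and within a few dozen steps the two trajectories are completely decorrelated; absent a shadowing result, neither is a faithful long-run stand-in for the other.]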

        1. Agreed. Especially over time, divergence is likely. The question is whether short-term calculations, corrected over time, can be close enough to work. (Note that with just ten decimal digits of π, a calculation of the Earth’s circumference is accurate to well under an inch – in fact, to a fraction of a millimetre.)
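          [Checking that parenthetical – my own arithmetic sketch; the diameter is a rounded mean value:

            from math import pi

            earth_diameter_m = 12_742_000   # mean Earth diameter, assumed
            pi_10 = 3.1415926536            # pi to ten decimal places

            error_m = abs(pi - pi_10) * earth_diameter_m
            print(error_m * 1000, "mm")     # ~0.13 mm

          If anything the claim is conservative: the error is a fraction of a millimetre, far inside half an inch.]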

          That said I completely agree calculation has limits. (I’ve posted a lot about this!)

    2. Hey Mark,
      No worries on having to dip in. Obviously my own job has kept me away from the blog most of today. And hopefully I won’t repeat anything that’s already been said. You spurred quite the discussion above, so it’s good to see you weigh back in.

      I agree that computation, ultimately, is observer dependent, although as Wyrd mentioned, a case can be made that when the required interpretation would need to be more sophisticated, and need more energy, than the implementation in question, we can relax about the broader implications.

      But it does mean that if computation is observer dependent, and consciousness is computation, then consciousness is itself observer dependent. Many people see this conclusion as prima facie false, but I don’t. It actually fits with my view that consciousness is in the eye of the beholder. It only exists relative to itself and other conscious entities.

      But as noted above, unlimited pancomputationalism, which brings in the pixies, only works if you set no limit on the size and sophistication of the interpretations. Granted, there’s no objective threshold for when an interpretation crosses the line, but there’s a reason we pay good money for carefully engineered computing devices rather than just walking over to the nearest rock and concentrating real hard on its computations.

      Sorry your link got eaten. I looked at the comment in the admin but there’s no sign of it. WordPress has a tendency to eat html tags it doesn’t like. Usually the safest thing to do is just paste the URL, which it automatically converts to a link.

      1. “But as noted above, unlimited pancomputationalism, which brings in the pixies, only works if you set no limit on the size and sophistication of the interpretations.” Not sure this holds; my incremental change to Putnam’s argument was to specify that the implementation be *bounded* for a finite time period (during which period the engineer claims the system undergoes conscious experience(s) in virtue of its executing the appropriate strong AI software). By bounding the interval, and with knowledge of the input to the system, the exponential increase in states required for an interpretation to realise pancomputation (as suggested by Chalmers) reduces to a linear increase. This is the essential point at the heart of my series of papers on this issue:

        Bishop, J.M. (2009), Why robots can’t feel pain, Minds and Machines: 19(4), pp. 507-516.
        Bishop, J.M. (2009), A Cognitive Computation fallacy? Cognition, computations and panpsychism, Cognitive Computation: 1(3), pp. 221-233.
        Bishop, J.M., (2002), Counterfactuals Can’t Count: a rejoinder to David Chalmers, Consciousness & Cognition: 11(4), pp. 642-652.
        Bishop, J.M., (2002), Dancing With Pixies, in Preston, J. & Bishop, J.M., (eds), Views into the Chinese Room, pp. 360-379, Oxford University Press.

    3. Hey lemarkle. Given that you are well versed in the literature, I’ll be very interested in your takes on these questions.

      1. I’m fairly sure that, in theory, arbitrary levels of precision can be reached by the addition of arbitrary amounts of resources. I think what you are noticing (correct me if I’m wrong) is that for (some?) chaotic systems a linear increase in precision requires an exponential increase in resources. In this situation, you quickly use up all the resources in the universe. That means that some simulations will be, um, impractical. A molecule-level simulation of a brain might well fall into this category.

      2. You have re-stated the “simulated water isn’t wet” observation. The argumentative response is “simulated information is information”. We have advanced this argument to the stage that if consciousness is only about information processing, then a simulation can be conscious. Looking forward to your response.

      3. Computation is definitively observer relative. For myself, I find it easier to think of “computation” in terms of pattern recognition and representation in a mechanistic framework. My framework is Input—>[mechanism]—>Output, with the understanding that “mechanism” includes any physical system. For example, natural selection counts as a mechanism. So (for me) a computation is an event where the Input is a representation, the mechanism recognizes the Input and generates the Output, and the Output constitutes a valuable response relative to the “meaning” of the Input. There definitely needs to be coordination between whatever creates the Input and the mechanism which interprets the Input. The “observer” in the observer relativity is not the mechanism. The observer is the generator of the mechanism, and this “observer” is the one coordinating the meaning of the input with what the mechanism does with that input.

      Here’s the tricky part: none of the players just mentioned, particularly the “observer”, needs to be a human being, or even biological. The “observer” simply needs to be a mechanism that can create mechanisms that serve a purpose. Natural selection is one such mechanism, which created the various mechanisms in the human brain. But notice, and Hofstadter would love this, the human brain is also a mechanism which creates such mechanisms. We call that process learning. We learn to recognize that certain plants cause a serious rash and should be avoided. Such mechanisms are created internal to the brain. But a brain can also create mechanisms outside the brain, for example, in other people’s brains. “One if by land, two if by sea”. And it can create mechanisms in machines, like an AND gate, which could be an OR gate under different circumstances.

      And there is no reason that a brain cannot make a mechanism in a machine which can then make a new mechanism in the same machine, say, by learning the best moves in Go. The machine could even make a mechanism in a different machine, as was the case when two (was it Facebook?) machines invented their own language.

      I appreciate the arguments that say any physical system can be mapped to any single computation, but I don’t see the sense in which those systems can be said to have a purpose, which I would require for the concepts of representation and consciousness.

      Thoughts?

      *

      1. Apologies for brevity..

        1. No: in situations when shadowing theorems do not hold, the computational simulation diverges *exponentially* from reality.
        2. “if consciousness is only about information processing”… Begs the question?
        3. Computation is tricky to pin down. I once organised a workshop for the AISB called “The Scandal of Computation” (see also the 1994 special issue of Minds and Machines for a good intro). If you are interested, my group published our thoughts on a broader framework for computation – based on Gurevich’s Abstract State Machines – a few years back:

        Matthew Spencer, Slawomir Nasuto, Thomas Tanay, Mark Bishop and Etienne Roesch, (2013), Abstract platforms of computation, In Proc of the AISB 2013: Computing and Philosophy symposium,“What is computation?”, Exeter, UK.

        1. Thanks for the reply, if brief.

          1. [not gonna argue this point, but I have seen simulations of neurons which seem to get the job done]
          2. I should have been more clear. I’m not begging the question, I’m stating the question: is Consciousness only about information? If we answer this in the affirmative, the point about simulated water not being wet is moot. All evidence I have seen, and all logic I can muster, answers Yes.
          3. I thought I just pinned it down. What was wrong with my pinning? [looking up your stuff now]

          *

            1. 1. Perhaps then, in the case of neurons, they are well behaved (and shadowing theorems hold); I don’t know either way, and it would need appropriate analysis to establish formally.

            2. Personally, I am not convinced by a digital ontology (nor, perhaps, is the leading writer on the philosophy of information, Luciano Floridi; see chapter 14 of Luciano’s monograph The Philosophy of Information, or, for a shorter read [in which Chryssa and I endorse Floridi’s conclusions, but question his methodology], see Sdrolia, C. & Bishop, J.M., (2014), Rethinking Construction: On Luciano Floridi’s ‘Against Digital Ontology’, Minds and Machines: 24, pp. 89–99).

            3a. Hmmm, it certainly isn’t clear to me (cf. Maturana and Varela) how any allopoietic system (e.g. a computational one) – contra an autopoietic one – can have a purpose; can instantiate teleological behaviour. Such systems (and I build them every day) are engineered to our purpose(s).

            3b. Many have questioned the necessity of representation for behaviour; perhaps the most well known atm, I guess, being Dan Hutto and those endorsing something like his REC (Radically Enactive Cognition) – if you haven’t come across his work, you may be interested in this talk by Dan to see why.

        2. Actually, Floridi was one of my first major influences. I think he has the most useful working definition of semantic information out there: data which is well-formed, meaningful, and true. And I agree with his informational structural ontology, to the extent I understand it. And it is exactly in Floridi’s sense that I suggest consciousness is entirely informational.

          Regarding purpose, I think it is only now that powerful minds are determining how it becomes instantiated. See, for example, Rovelli’s essay: Meaning and Intentionality = Information + Evolution (https://fqxi.org/data/essay-contest-files/Rovelli_Meaning.pdf). Also, see Ruffini’s “An Algorithmic Information Theory of Consciousness” (https://academic.oup.com/nc/article/2017/1/nix019/4470874).

          My abbreviated understanding is that some physical systems tend to act such that the world moves toward a particular state. These systems might be called cybernetic, homeostatic, etc. Under certain circumstances, a system can generate new mechanisms, and select those mechanisms which move the world toward that specified state. The obvious example is a population acted upon by natural selection. I don’t think it is controversial to say natural selection generates homeostatic mechanisms.

          Now what if a mechanism generated generic “knowledge”? It seems plausible that such a mechanism could improve an organism’s homeostatic abilities. Memory would count as a type of knowledge. Memory can also be considered a mechanism which takes a “query” as input and returns an output which represents a past event. Thus, a memory-generating mechanism generates new mechanisms whose purposes are to create representations of past events.

          All of which is to say that mechanisms with purposes can generate mechanisms with sub purposes.

          *
          [looking into Dan Hutto with respect to representation and behavior]

    4. Dancing pixies everywhere would be a feature, not a bug, to some theories of consciousness. Panpsychism in particular, as you mention.

      One difference I note between actual consciousness and the various examples is that, in the various examples, a device with some input moves through a series of states. The device passively receives its input. Actual consciousness, as it takes place in the living world, involves the initiation of actions that modify the input. There is a feedback process in which consciousness, through its physical body, modifies its own input in interaction with an environment and attempts to predict the outcome of the modifications to increase survivability.

    5. Hello John Bishop,
      As a fellow supporter of John Searle I wonder if you’d consider an idea of mine to perhaps practically advance our mutual perspective? It seems to me that one hurdle regarding his Chinese room thought experiment is that the concept of “understanding Chinese” is too vague and conceptual to really hit home for many people. It requires too much speculation about what it means to understand a language in a conscious capacity. I believe that I’m able to largely overcome this issue by substituting a form of conscious function which isn’t at all uncertain to us. Here I speak of “thumb pain”.

      If my brain does nothing more than process information in order to cause me to feel thumb pain when my thumb gets whacked – no other mechanisms involved – then here’s what this seems to imply. Theoretically, if the whacked-thumb information which goes to my brain were symbolically expressed on cards (since information itself can be expressed in many ways), then a lookup table translating that information into theorized thumb-pain information would, when processed into the associated symbol-laden cards, produce an entity which feels what I feel when my thumb gets whacked!
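      [For concreteness, a toy version of the card system being described – entirely my own sketch; the symbols are made up:

        # Input cards in, lookup table, output cards out -- purely mechanical.
        WHACK_TABLE = {
            "whacked_thumb_signal_mild": "thumb_pain_output_mild",
            "whacked_thumb_signal_hard": "thumb_pain_output_severe",
        }

        def process_cards(card):
            return WHACK_TABLE[card]

        print(process_cards("whacked_thumb_signal_hard"))  # thumb_pain_output_severe

      The thought experiment then asks: does running this mapping, at whatever scale and fidelity, feel like anything?]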

      How does that sound for a less ambiguous, and so potentially more powerful version of the man’s classic thought experiment?

      1. 1. Regarding “understanding understanding” in the context of the Chinese room, perhaps you may like to have a look at Terry Winograd’s chapter, “Chapter 4: Understanding, Orientations and Objectivity”, in our collection of essays from leading philosophers and scientists on the CRA: Views into the Chinese Room, Preston, J. and Bishop, J.M. (eds), Oxford University Press.

        2. Yeah, I like your hammer/thumb/pain example (albeit it has a strong resonance with Ned Block’s “Blockhead” thought experiment). For me, the vanilla CRA has particular force, as it (a) targets any version of computationalism and [to me at least] (b) is a little simpler to comprehend for those not versed in theoretical computer science (e.g. on the reduction of the computation of a function to a lookup table).

        1. Thanks John. (And let us know what name you prefer to go by here.)

          I suspect that I would be interested in Terry Winograd’s chapter 4 of your book, not least because you and some of the people you associate with might be natural allies for my own project. (In an ideal world we wouldn’t need to search for people who already see things in similar ways, though unfortunately in this world…)

          I’ve developed a psychology-based model of brain architecture, and thus one that’s neutral regarding neuroanatomy. I’ll briefly describe this “dual computers” perspective to potentially illustrate the concept of “understanding” as I consider the term most usefully defined. Then perhaps you could better judge the value that Terry’s perspective might have for me?

          As life became more mobile, it apparently evolved “brains”, or non-conscious computers with which to process input information for output function. But these organisms should have been limited in the same area where our non-conscious robots are limited: novel situations. While our computers do well in more “closed” environments, such as the game of chess, they tend to fail under circumstances which they weren’t set up to address. (I consider this to be the “Why?” of consciousness, or what David Chalmers considers half of “the hard problem”. Thus my answer to this half is that consciousness brings autonomy through purpose-driven function. Conversely, I have no clue regarding the “How?” side.)

          Evolution seems to have countered this obstacle by somehow producing a punishment/reward dynamic (which lately we’ve been referring to here as “affect”, though of course there are many similar terms, such as “utility”, “qualia”, and so on). Note that the experiencing entity will be primally conscious, since “there is something it is like” to feel bad or good, even if initially epiphenomenal. Apparently evolution then gave this experiencing entity at least a minor capacity to make certain organism decisions, and this succeeded well enough to evolve into the medium by which we humans experience existence. So what do I mean by this?

          I conceive of modern consciousness as a virtual computer which is produced by the brain, though one that should do less than a thousandth of one percent as many calculations as the founding computer by which it exists. While brains are impelled by means of neuron function, and our computers are impelled by means of electricity, consciousness is impelled by means of affects. I consider this to be reality’s most amazing stuff.

          Beyond affects (1), there are informational senses (2), such as “sight”, as well as re-experienced but degraded consciousness, or memory (3). A processor (“thought”) interprets such inputs and constructs scenarios about how to promote its affect-based interests. This is where I consider it most effective to place the “understanding” idea. (The only non-thought output of consciousness under this model is “muscle operation”.)

          From this conception of consciousness a Chinese room cannot “understand” anything, unless the system produces a sentient agent with which to drive a “thought” processor. As mentioned earlier, I have no idea how the brain creates phenomenal experience, or “the true hard problem of consciousness”, though I’d be shocked if symbol-inscribed sheets of paper, translated into other symbol-inscribed sheets of paper, could get this done.

          1. Hi Eric,

            Thanks for the long and thought-provoking post [above] 🙂

            When I used to actively teach and research (Embodied) Cognitive Science [for the last five years I have been Director of a Centre for AI & Analytics; effectively merely making rich men and women even richer *sighs*], in a search to escape the CRA I was incrementally drawn to the embodied, enactive, embedded and ecological theories of cognition: the so-called 4Es [albeit I – controversially – don’t include “Extended” theories in this group, as I contend these trivially reduce to ‘good old fashioned computationalism’].

            Specifically – imho – flavours of ‘Autopoietic Enactivism’ move us a significant way towards genuine symbol grounding and teleological behaviour. We live in exciting times: when I was a grad student [in the early 80s], in thinking about the mind and consciousness [in so far as the C word could be spoken at all] one’s ‘scientific’ options appeared to be limited to some form of functionalism (or a forced move to the ‘philosophical dark side’: idealism/monism/dualism/mysterianism etc). In recent years, however, there has been a wealth of exciting new ideas emerging; for me, particularly from people like: Varela, Maturana, Thompson, Noe, Damasio, Gallagher, Froese, Deacon, Bickhard, O’Regan, Hutto, Floridi …

            Conversely, what this debate around mind-uploading [strong-AI; singularity] reveals to me is nothing less than an underlying fierce, strange, ideological commitment to a very weird form of dualism, wherein the essence of mind can be multiply realised. To me this appears as a bizarre form of quasi-religious thought (which itself is kind of ironic, as most of those [who I have debated these ideas face-to-face with] typically, and quite stridently, identify as atheists): denied a religious heaven by one strand of ‘scientific conviction’, it appears mind-uploaders have simply re-tuned their ‘faith’ towards imagining a silicon heaven, through another [computational] means (iirc Jaron Lanier wrote an interesting essay on this a few years back).

            Conversely, embodied cognitive science seems to me fundamentally rooted in the [scientifically and experimentally] real; for embodied cog scientists, mind can never float free of the body – a 21st-century disembodied ghost in the machine – but is intimately bound up with the body, the world and, perhaps most importantly, society. In this way Embodied [and Dynamic] theories of cognition lend themselves nicely to experiment, such that there is now a fast-expanding field of psychological literature probing them.

            Anyways, that’s just my two-penneth worth, albeit I have a sneaky feeling most reading these pages may feel a tad differently on these issues 🙂

            All the best,
            – mark

          2. “Conversely, what this debate around mind-uploading [strong-AI; singularity] reveals to me is nothing less than an underlying fierce, strange, ideological commitment to a very weird form of dualism,…”

            😀 This whole paragraph really cracked me up! It’s what I’ve been blogging about and arguing for years here! Right on, brah! 😀

            Very much agree with the paragraph that follows, too. I’m definitely on the embodied cog side of things.

          3. If they’re going to throw “stupid money” at you Mark, to decline would be to compound the stupidity. Yes take it!

            It seems to me that the premise of computationalism has been challenged strongly by several of us here, so I wouldn’t say that your message has gone for nought. But if educated people today by and large believe that conscious experiences exist by means of nothing more than “information”, I’m not actually that troubled. My ideas remain psychological rather than neurological, and so can go either way. One simply makes a lot more sense to me than the other.

            So your interests have gone towards Autopoietic Enactivism? I know one blogger here who may be extra interested in that. The autopoietic part is easy, or “A system capable of reproducing and maintaining itself.” The enactivism part seems more tricky. From Wikipedia:

            “Enactivism argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that our environment is one which we selectively create through our capacities to interact with the world. “Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems…participate in the generation of meaning…engaging in transformational and not merely informational interactions: they enact a world.” These authors suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science.”

            Hmm… I don’t yet understand. Regardless, I suspect that your movement seeks to produce a conscious computer, as is standard. My suggestion would be to instead try building a non-conscious computer/machine which also produces a conscious entity. For example, light bulbs aren’t light, though they can produce light under the proper circumstances. Similarly, brains aren’t conscious, though they can produce consciousness under the proper circumstances. As sketched last time, in consciousness I speak of a fundamentally different kind of computer that has three varieties of input, one processor, and one variety of output.

            This proposal tends to freak out our host Mike. We have reasonably objective evidence for the existence of stuff we call “light”, but not for stuff we call “qualia”. So is some kind of “new physics” happening in our heads to cause consciousness? If qualia do objectively exist, then naturalism mandates that physics be responsible in the end. Furthermore, if this physics happens to be entirely based upon “information”, then it would be possible for me to exist countless times at once by means of countless information-bearing mediums. Fine.

          4. Hi Philosopher Eric.

            [Apologies; I am still learning my way around this blog environment and I am not sure if this comment will appear in the right place] but in general it is not correct to say that “I suspect that your movement seeks to produce a conscious computer, as is standard”.

            Enactivism is still very much an early-stage ongoing research programme – a broad church with various competing strands of thought – and at this stage can be characterised by three key approaches: AE: Autopoietic Enactivism (deriving from Varela, Thompson et al), which I find the most persuasive; REC: Radical Enactive (embodied account of) Cognition (from Dan Hutto & Erik Myin); and SE: Sensorimotor Enactivism (deriving from Alva Noe and Kevin O’Regan). Only in the latter do we find a serious attempt to assert that machines [qua computation] can give rise to consciousness, and even there, only in a specific sub-strand of thought pursued by Kevin O’Regan; my understanding being that Alva Noe pulls back from O’Regan on this claim (see our introduction to our edited collection [with A.O. Martin] “Contemporary Sensorimotor Theory” – I am going to try to publish the official link [you can also find a pre-print at my College repository]: https://link.springer.com/chapter/10.1007/978-3-319-05107-9_1). In general, enactivism, and specifically Autopoietic Enactivism, is fundamentally embodied in [autopoietic] systems; the ‘poster child’ example the community tends to use to ground theory being a single-celled bacterium/organism … A great place to start (on AE) is Evan Thompson’s magnum opus, “Mind in Life”.

            Hope this helps ..

            best,
            – mark

          5. Mark,
            It’s good to hear that Enactivism does not concern AI. It’s strange to me how much interest and money goes into attempts to artificially create us, given that we understand ourselves so poorly. How does one build what one has very little grasp of? Only once we’ve developed broad psychological models which become reasonably successful, and used them to better structure supervened-upon fields such as neuroscience, should we even grasp what would need to be built. So in my opinion modern AI endeavors are premature. (Not that I’d refuse to accept “stupid money” though.)

            So you and Martin are promoting the work of O’Regan and Noe, or the perspective that output needs to be considered in a more integrated way with input in modern cognitive science? And you maintain that they’ve even made inroads into why existence feels like something for creatures such as the human? Sounds good. I’ll take your word for it that O’Regan may have gotten a bit ambitious by means of AI interpretations.

            On Evan Thompson, I’ve now checked the web a bit to see if he’d interest me. Perhaps. I’ve found the following short talk a bit long on jargon, not that I can truly fault him for this given that these fields seem to function this way in general. https://mindbrain.ucdavis.edu/featured-videos/thompson-mindfulness

            I like his position on “looping”, or that we construct ideas about the brain, and then, given such constructs, go looking for where they can be located. But just as “parenting” has no absolute location, it may not be useful to look for “attention” as if it exists somewhere in the brain. Or as I mentioned earlier, just as light is produced by the light bulb rather than existing as a bulb, it may be effective to say that consciousness is produced by the brain rather than existing inside of a brain. And indeed, I consider this product useful to call a second form of computer — attention incarnate.

            When you say that the single celled organism would be a “poster child” for the young AE movement, I wonder in what capacity? Is it thought that affective states exist for amoeba? That would make them “primally conscious” as I define the term, which seems strange to me.

            I consider it useful to define four basic varieties of “computer”. Reality’s first would be the kind associated with genetic material. Of course these are the non-conscious mechanisms associated with the production of life itself.

            Once multicellular life began propelling itself around, it seems to me that central organism processors for the whole structure would have started to become helpful, or what I propose as reality’s second form of computer.

            The issue with such function is that it shouldn’t have been possible for evolution to productively account for potentially unlimited contingencies by means of programming alone. What it did, I think, was produce an entity which was punished and rewarded by means of various circumstances (such as pain). Thus the brain would monitor and facilitate the function of the experiencing entity, or a distinctly different (and teleological) form of computer. Just as standard computers produce output which may animate a computer screen, I believe that my brain produces output which animates mechanisms that produce the vast array of affects which I experience.

            Then finally there are the technological computers that we build, or the fourth and final form of computer in this series that I know of. In comparison I consider our machines utterly pathetic.

  18. I know Hameroff and Penrose have been mentioned.
    Perhaps I’ve been exposed to too much YouTube, PsychologyToday, and forums, but I’m confused. Aren’t Penrose and Hameroff considered woo-woos and quantum mystics?

    1. Actually, from my understanding anyway, Penrose and Hameroff are definitely NOT considered woo mystics. They are respected scientists honestly doing their best to approach the question scientifically. Some of us simply think their approach is ill advised, probably motivated by the “both consciousness and quantum mechanics are mysterious (in the “not well-understood” sense) so maybe they’re related” concept. It is easy to fall into such a concept given the prominence of the “consciousness can’t be just information processing so it must have some extra physical component” school of thought. See above.

      *

    2. They haven’t given a good account of how quantum effects in the microtubules of the brain affect cognition, but recent thinking has, for some, given credence to the general idea that quantum effects might play some sort of role.

      For instance, plants use photosynthesis, which is seen as involving quantum effects, and some animals are sensitive to magnetism, which is also a quantum effect. There has been some consideration of a phosphorus compound that seems like it might support entanglement for long periods (many seconds) in hot, wet, messy biological environments.

      Nature has a tendency to leverage everything at Her disposal, so it doesn’t seem unreasonable to think one of the most complicated machines She ever created makes some use of quantum effects. (We just haven’t spotted where or how, yet. 😉 )

    3. On Penrose and Hameroff, my view of them is harsher. I think it’s important to note that their views bear no relation to serious neuroscience. It’s completely evidence free speculation. They’re not themselves woomeisters (well, I’m actually not sure about Hameroff) but they give a lot of aid and comfort to actual woomeisters.

      1. You think consciousness is going to be solved with neuroscience?

        This is an odd view if you also think that consciousness can be instantiated in something other than brains. Almost nothing more wooey than that without any physical theory for consciousness. Evidence free speculation?

        Any good reason we can’t skip neuroscience and go straight to circuits and RAM?

          1. Maybe a little excessive, sorry. But you’re criticizing Penrose and Hameroff as evidence-free speculation in the context of your own speculations about mind uploading?

            I agree with Wyrd that Penrose and Hameroff haven’t really shown how their microtubules theory explains consciousness, but at least it points to some actual structures and possible phenomena in the brain, as well as some extra “juice” beyond vanilla information processing.

            I would be content to let the microtubules theory, and any theory that consciousness is wholly information processing, pass as evidence-free speculation.

          2. Thanks James.

            I’ve been pretty clear about the speculative nature of mind uploading. The problem with Penrose and Hameroff is a lot of people don’t realize just how speculative and disconnected from actual neuroscience research their ideas are.

            For the lowdown on microtubules, I recommend reading a book on cell biology. Their functions are pretty well known. Or the Wikipedia article on them (currently) looks decent: https://en.wikipedia.org/wiki/Microtubule

            On consciousness only being information processing, as far as I can see, the only evidence we’ll ever get for that is a continued lack of evidence for anything else.

          3. I’m not sure that readers of the Graziano article will understand exactly how speculative it is in its basic assumptions either, especially given the regular exposure most of us have to conscious machines and mind uploading as routine occurrences in movies and TV.

            In the meantime, ample evidence has turned up of quantum effects in living material, including detection of quantum states in microtubules.

            “We demonstrate that a single brain-neuron-extracted microtubule is a memory-switching element, whose hysteresis loss is nearly zero. Our study shows how a memory-state forms in the nanowire and how its protein arrangement symmetry is related to the conducting-state written in the device, thus, enabling it to store and process ∼500 distinct bits…”

            https://ui.adsabs.harvard.edu/abs/2013ApPhL.102l3701S/abstract

            Wikipedia has some good information on the general state of the theory, which has had some hits and misses.

            https://en.wikipedia.org/wiki/Orchestrated_objective_reduction

            Regarding evidence, I guess you know absence of evidence is not evidence of absence.

  19. They do seem to help actual woomeisters sleep peacefully at night. Penrose doesn’t seem so bad. But I do take Hameroff with a large grain of salt.
    It’s a shame there are so many pseudoscientists that it would take me at least a week to list them here.
    And I think, as a result of the large number of pseudoscientists and New Agers, when the words ‘quantum mechanics’ and ‘consciousness’ are used (by non-woo scientists) in the same sentence or paragraph, it’s automatically put in the ‘woo’ and ‘self-serving delusions/beliefs’ pile.
    In rare cases, even just the word ‘consciousness’ alone is enough to get some people worked up.
    Again, just my observations. 😔

    Thx for your responses.

    1. To say of Penrose that he is a “woomeister” (whatever precisely that may be) is ad hominem in extremis. For the record, Roger Penrose is Emeritus Rouse Ball Professor of Mathematics at the University of Oxford and an emeritus fellow of Wadham College, Oxford. Ironically – given they ended up with very different views on the nature of mind – Penrose taught Stephen Hawking. Indeed (if you take a look at his wiki), “Penrose has made contributions to the mathematical physics of general relativity and cosmology. He has received several prizes and awards, including the 1988 Wolf Prize for physics, which he shared with Stephen Hawking for the Penrose–Hawking singularity theorems.”

      Roger Penrose kindly wrote “Chapter 12: Consciousness, Computation and the Chinese room” for our edited volume, “Views into the Chinese Room” (Preston & Bishop, Oxford); I have been fortunate to meet with him on a number of occasions, where I have always found him to be an extremely precise and profound thinker/polymath. As you may know, Roger ventured into the AI and consciousness debate with a series of books – The Emperor’s New Mind, Shadows of the Mind, etc. – where the argument he expounds is basically twofold:

      1. He revisits Turing non-computability in the context of mathematical insight, making a persuasive case (imo) that mathematical insight is not a Turing-computable process. This is his ‘negative’ result, if you like, which has come to be widely known as the “Penrose–Lucas” argument.

      2. He makes a case – in work with Stuart Hameroff based on Penrose’s theory of “Orchestrated Objective [quantum] Reduction” (Orch-OR) – that consciousness is a quantum process. I am not qualified to comment on this – for all I know, it may well be nonsense – nonetheless it is published and commented on in regular peer-reviewed journals (e.g. as recently as January 2014, Hameroff and Penrose claimed that “a discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan confirms the hypothesis of Orch-OR theory. A reviewed and updated version of the theory was published along with critical commentary and debate in the March 2014 issue of Physics of Life Reviews”) and – imho – the work merits serious criticism (like any other thesis) via appropriate academic argument.

      I am sure this doesn’t apply to you Linda, but I once engaged in online debate [over a number of months] with an internationally well-known cognitive scientist on the Penrose–Lucas argument, until he finally admitted that he had *never* read any of Penrose’s work; just a few second-hand commentaries on it. In my experience, the vast majority of those with the strongest negative views on Roger’s work [and I have debated and engaged quite a lot of them over the years] have never read any of Roger’s published output.

      1. No one has accused Penrose of being a woomeister. And I’m very aware of his prestige in physics. But most neuroscientists view his ideas on consciousness as nonsense. Penrose is, to me, a cautionary tale. He’s an example of someone with expertise in one field trying to extend it into another field. When someone does that, they’re usually just well educated laypeople. The results are usually unfortunate. Physicists seem especially prone to this type of hubris, and everyone seems willing to do it to weigh in on consciousness.

        The only times this kind of crossover typically works is when the person doing it invests an enormous amount of time and energy learning about the new field, such as what Francis Crick did when he transferred from molecular biology to neuroscience (and even then, it’s a much smaller crossover than from theoretical physics).

        1. “Penrose is, to me, a cautionary tale. He’s an example of someone with expertise in one field trying to extend it into another field. When someone does that, they’re usually just well educated laypeople. The results are usually unfortunate.”

          What does that suggest about all us armchair amateurs who dabble in this field without Penrose’s background, training, or experience?

          1. That those of us putting forth our own radical theories are almost certainly wrong. When we start thinking we see things the experts are missing, we’re probably making first year graduate student mistakes, at best.

          2. Indeed. 😀

            The sometimes confounding thing is that one person’s radical is another person’s obvious.

            Or, as I like it, “One person’s Huh? is another person’s Duh!” 😉

        2. SAP, apologies if you read my comment as directed to you; it wasn’t. It was directed to the following remark: “I know Hameroff and Penrose have been mentioned.
          Perhaps I’ve been exposed to too much YouTube, PsychologyToday, and forums, but I’m confused. Aren’t Penrose and Hameroff considered woo-woos and quantum mystics?”

          In addition, although I am not seeking to endorse his work, it could be argued that Stuart Hameroff, an anesthesiologist, has a ‘special’ *practical* knowledge of [human] consciousness that informs his world view (indeed, I recall attending the second ASSC event (in Bremen 1998), where I was very surprised to learn from him that [at that time at least] there was no machine that could reliably signal human unconsciousness). Indeed, these days Hameroff is the lead organiser of the international “Science of Consciousness” conference (held biennially since 1994), which some might take as evidence that he is not a total novice in the field.

          I would also argue that Penrose is quite well positioned to write on the subject: he has a strong background in (a) logic (he was invited to give the plenary at the 2006 Gödel Centenary conference in Vienna) and Turing non-computability (e.g. his work on aperiodic tiling; cf. Penrose tessellations, which underlie the newly discovered quasicrystals), and (b) quantum physics, which fundamentally is the physics of the observer and, for Penrose, consciousness (vis-à-vis Orch-OR).

          But in any event, TBH, I think consciousness is very much a subject that anyone *can* write *something* on; of course, such writings may be nonsense – in which case we can ignore them unless they get to be popular, when we can deconstruct and criticise as we deem fit. I merely object to ad hominem attacks on people I know a little, who I believe to be ‘quite bright’ and who (at least imho) have had some interesting, albeit contentious, things to say on the science of consciousness studies.

          1. Thanks Mark. No worries. I took it as being directed toward the overall discussion. I certainly understand the urge to defend someone you know.

            Hameroff does coordinate the Tucson conference, but from comments made by neuroscientists in various podcasts, among hard-nosed scientists it’s developing a reputation for going off the rails, particularly since Chalmers pulled out. And based on comments I’ve seen him make in various videos, as an anesthesiologist he does know about neurology, but he uses it more as a springboard for his speculative ideas rather than letting the data drive him.

        3. Thanks very much for your insightful comment Wyrd. Perhaps it will motivate some consideration about motives and methods for all of us who breathe life into this particular blogscape. Like many here, I imagine, I’m cursed with the feverish urge to understand myself and the world—Life, the Universe and Everything of course. Importantly, the key word here is ‘understand’. Knowing with certainty will always be elusive and perhaps impossible but, in my experience, the best we can do to promote understanding is to adhere to the best motivations and methodologies of science, adhere to credible evidence, clearly define our terminology and apply the simple rules of logic to weed out the unlikely to impossible. If you’re on a spiritual quest, you’re traveling a different road.

          Yes, we’re all armchair amateurs in that quest to understand and, as such, must rely on those few who have been fortunate enough to specialize in focused scientific investigation. But, in some sense, that may provide us an advantage of sorts, since we’re not enveloped by the vast details of the trees and can occasionally take a longer view that sees, or at least glimpses, the outlines of the forest.

          In my decade or so of reading and thinking about consciousness I’ve settled on what seems to be the most scientifically reasonable—Biological Naturalism—which is Searle philosophically, but in terms of a scientific understanding of consciousness is most closely represented by Damasio and those in his circle: Parvizi, Panksepp, Merker and others. My understanding of consciousness, which I suspect Mike sees as my “own radical theory,” originates with Damasio and his (as he admits) minority viewpoint. I’m 99% in agreement with his hypotheses although I’ve strayed a bit beyond in order to avoid known conundrums. But I stand on his shoulders to see what I can see.

          Life is short and what is left to each of us grows daily shorter so I believe it’s helpful to have some sort of guide to the territory so that we might not die completely unfulfilled in our quest. In that spirit, I offer a few hard won observations in hopes of shortening or smoothing the path to understanding for others. I believe these are all agreeably logical and sensible and scientifically valid. I suggest that if we truly wish to promote understanding for ourselves and others, perhaps we might grow this list as a community effort.

          1. Without an accompanying precise definition of core terminology—consciousness, in this case—no proposal (for anything, not just consciousness) can be understood. Panpsychism, neutral monism, GWT, IIT and others fail this simple test for comprehensibility.
          2. Any proposal must be conformant with established experimental and observational evidence.
          3. Any viable conception of consciousness must be plausibly rooted in a credible evolutionary sequence.
          4. So-called ‘what-it’s-likeness’ is meaningless obfuscatory verbiage—Hacker’s case is formidable.
          5. Those who believe in The Hard Problem are dualists, and dualism strays from science.
          6. Philosophy of consciousness (and of most other topics as well) is a hopeless morass littered with impossible notions like the philosophical zombie.
          7. Metaphors can spur insight but have limitations—the mind is a computer, as a for-instance—so be very cautious in proposing and extending metaphors.

          Is anything listed obviously whack? Anybody have an 8 through ‘n’?

          1. This in response to your “armchair amateurs” comment Wyrd, in case the indentation confuses.

  20. To me it’s just weird children, without the benefit of actual children, which is that they might change their ideas. Instead, ‘theory advances one funeral at a time’ stops, and we get really ingrained dogmatism, because the idiocy is copied into a permanent form.

    1. That is a possible danger of an immortal society, or even one with everyone living a long time. To maintain equilibrium, the number of children has to be lower. The whole society might become rigidly fixed in its ways, where new ideas have a hard time gaining traction. Although if AIs are doing much of the work, it might not be that much of an overall problem. But it would be a very different world.

      1. For an alternative perspective … 🙂 … [beware, singularity talk is nigh]

        Even if everyone uploaded their brains, they would become an ever-decreasing percentage of “society”. Long before we can do this, it will not only become possible to create new AGIs, it will also be cheap. These will be the new children, generating new ideas, and creating their own “children”.

        *

        1. Given how much bias is an issue for machine learning networks, it seems plausible that AIs would be as subject as anyone to becoming set in their ways. And every new AI requires substrate, meaning resources, so increasing forever seems problematic.

          On the other hand, AIs won’t necessarily care about their own survival, so we might arrange to have a constant rotation of “children” coming up while old ones are being “retired”.

          1. There are a number of people active in the ML community who contend that the moniker “machine learning” is an egregious misnomer. Personally, I argue from a Wittgensteinian perspective that machine learning *is not learning*; and, from a very different perspective, there are some beautiful YouTube talks from folk prominent in the [theoretical] ML community who sift through the [ML] hype from a technical (contra philosophical) point of view. E.g. Carl Henrik Ek (Senior Lecturer in Computer Science at University of Bristol):

            1. “Free Lunch? How We Can Learn From Data with Carl Henrik Ek”.
            2. And Carl’s TEDx talk “Why I do not fear Artificial Intelligence” is, imo, also well worth 15 minutes of anyone’s time.

            [Apologies, I still can’t post URLs on this site… Grrrrrr]

          2. Mark, not sure what’s happening with the URL thing. I’m not seeing anything in the spam folder. And I currently don’t have anything in my moderation or block filters. What are you seeing when you attempt to post them? Does the site just eat the comment, or produce some kind of error?

          3. Mark, are you enclosing the link in any kind of HTML tag? WordPress has a habit of eating the ones it doesn’t like. If you just post the raw URL text, that usually works. If it doesn’t work, let me know and I’ll get with WordPress support and let them know what’s happening.

            Note: if you’re copying the URL as a link from another site, you may be copying enclosing tags. If so, open the linked page and then grab the link text from the browser URL line.

  21. Oh. I forgot to mention: ‘neuroscience’ and ‘consciousness’ can get some people worked up, too. Even though there’s nothing woo-ish about it. 😉

  22. I just read Graziano’s article to see if he mentioned physical replication, but he’s talking strictly virtual — running minds as software, although he doesn’t specify whether as a physical simulation or as a “mind algorithm.” He only says:

    “It captures all your synapses in sufficient detail to recreate your unique mind.”

    Which would be the first step either way. I kinda wonder how he thinks of it: as a simulation of a physical brain or as something more abstract and algorithmic.

    If it required specialized hardware, like neuromorphic chips, then we might end up with server farms of VBrains™ (accept no substitutes!), each running a mind.

    Which would put an interesting limit on proliferation of virtual people. If it’s purely a matter of running software, then one could have hundreds of copies of oneself all collaborating on a task.

    What would it be like to go to a virtual party where everyone was you? (Ever see Being John Malkovich?)

    Here’s one part of Graziano’s article that caught my eye:

    “…and yet given how much interest and effort is already directed towards that goal, mind uploading seems inevitable.”

    “Inevitable”? That, to me, seems the wide-eyed view of a believer.

    That might be like suggesting that FTL is inevitable, while Graziano is treating it like saying breaking the sound barrier was inevitable. The latter was truly just a matter of engineering — we saw examples in nature. But the former, as far as we know, is prohibited by physics.

    Given the prolonged and in-depth way computationalism is debated, as far as our knowledge is concerned, the reality clearly is up for debate. Neither outcome is “inevitable.” Both are rational and fit what data we have.

    That said, computationalism is without precedent or parallel, so it seems some skepticism is warranted merely on those grounds. Graziano says:

    “[T]here are no laws of physics to prevent it.”

    But as James Cross pointed out:

    “Also, no laws that suggest it is possible. Still missing any physical theory.”

    I think the key lies in answering the question: Why phenomenal experience?

    (I do know the answer isn’t 42.)

    1. Ginger Campbell just interviewed Michael Graziano, if you’re interested in his overall views.

      Warning: Graziano hits on a lot of topics that appear to be sore points for many here. He sees the mind as informational, computational, materialistic, and talks of his theory solving the meta-problem. He agrees with the illusionists, but like me doesn’t like the word “illusion”. And he talks about the resonance between his attention schema theory, higher order thought, and global workspace theories, and the discordance of all those theories with IIT.

    2. I think an issue glossed over is the question of what virtual environment this uploaded mind would exist in.

      Assuming the mind isn’t in a sensory deprivation tank, an open question in my view is how much of the world do you need to simulate to have a somewhat normal functioning mind. It may turn out that the challenges of simulating the world are at least as daunting as the challenges of simulating the mind, if not more so. I would think a poor or deficient world simulation would quickly cause the mind simulation to malfunction. If the mind decides to fly to Barbados, will the world accommodate an airplane, airports, sun, and a beach with an ocean of salt water and a coral reef? Or will the mind be doomed to living in the virtual equivalent of Scooby-Doo movies?

      “I think the key lies in answering the question: Why phenomenal experience?”

      That is still the big question for me too. I think phenomenal experience must have come about through natural selection and for the species advantage of more complex and flexible behavior. But that doesn’t answer the question of what phenomenal experience adds to unconscious processing, especially since our brains probably do a lot more unconsciously than they do consciously. I keep coming back to the incredibly low energy consumption of brains compared to electronics and thinking that phenomenal experience might be cheaper than alternatives for something, possibly integration of multiple sensory systems. But the full details of that elude me and I could be wrong.

      1. …how much of the world do you need to simulate to have a somewhat normal functioning mind.

        James,
        We’re on the same side of this debate, though to me it seems imperative to always grant the other side its due. If “thumb pain” does exist by means of information alone, then in a theoretical sense I must grant the potential for some programmer/god to recreate the world from my own perspective to an arbitrary level of complexity. No practical constraints exist here because we’re merely discussing something conceptual. So go ahead and grant them that, and don’t even presume that sensible people here consider anything “normal” about such scenarios. Even that crazy futurist Ray Kurzweil doesn’t consider himself to exist as a simulation (yet). That’s merely the paradise which this prophet offers his vast sci-fi deluded flock.

        From the computationalism premise note that even if information associated with me were uploaded into a computer world (“Scooby-Doo” or whatever), this sentient entity would merely be a copy of me with some level of fidelity. By definition, “copies” create additional entities rather than transfer the existence of old over to new. (So sorry Ray, we’re all going to die even if it’s possible for “thumb pain” to be produced by means of symbol inscribed cards that are processed into other symbol inscribed cards.)

        Regarding the “Why?” question of phenomenal experience, I think you’re right about flexibility. Non-conscious computers fail when they aren’t programmed to deal with a given circumstance. Thus I suspect that the brain evolved to produce the conscious entity (or phenomenal experience) to use what that entity decides given the various punishments and rewards that it experiences. Purpose driven entities (like you and me) can indeed function flexibly enough to deal with more open circumstances. That’s exactly where non-conscious computers tend to fail.

      2. “I think an issue glossed over is the question of what virtual environment this uploaded mind would exist in.”

        It gets talked about a lot in SF (although much of it is hand-waving). I’ve never seen it as that much of a challenge. What we do already with technology simulating virtual environments has a clear path to more advanced systems.

        The new MS Flight Simulator, for example, is supposed to be mind-blowing in its realism. (And, thanks to imagery from Bing, you apparently can fly anywhere in the world.)

        We’re very good at simulating non-living things, especially static ones (like buildings), and we’re getting better and better at living things (at least in terms of appearances). Lots of online games simulate large virtual worlds.

        (In fact, our current ability with virtual environments is one argument in the “It’s all a Simulation!” hypothesis. If we’re this good now, just imagine what might be possible!)

        “But that doesn’t answer the question of what phenomenal experience adds to unconscious processing,…”

        Or how it arises in the first place. I.e. Why any form of information processing should have phenomenal experience. Why is there “something it is like” to be this particular type of information processing system?

        On top of that, yes, there are some very interesting questions about evolutionary value!

        1. I think you are drastically underestimating the difficulty. It isn’t a matter of simulating one set of environments with a canned repertoire and a joystick. I’m not sure those approaches could even be extrapolated. Effectively you have to simulate the entire world as it appears including completely unscripted interactions initiated from the world or from the mind, as well as all of the sensations of the body – heart, breath, indigestion – and all of the body interfaces – touch, sight, sound, taste, and smell. And all of this to a degree that it seems absolutely seamless and real to the mind. When the mind runs, the body’s respiration and heart rate increase, perspiration forms, and eventually the body tires and seeks shade in the park with dozens of other people on a beautiful but warm summer day.

          1. “Effectively you have to simulate the entire world as it appears including completely unscripted interactions…”

            That much is more doable than you may realize. It’s a matter of a physical model of reality — one that contains objects with properties. The simulation uses those properties to control how those objects behave. Such an environment acts in real time — no scripts, no defined behaviors.

            The idea is to simulate general physics at a fine enough level of detail. (The way the CGI Spiderman moves between the first movie and the most recent illustrates the big improvements in simulating physics. The first one was pretty bad!)
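
            To make that concrete, here is a toy sketch (illustrative only, not any particular engine’s API) in which behavior is computed from each object’s properties rather than scripted:

```python
# Toy unscripted physics step: behavior emerges from each object's
# properties, not from pre-authored scripts.
from dataclasses import dataclass

GRAVITY = -9.81  # m/s^2

@dataclass
class Body:
    y: float           # height in meters
    vy: float          # vertical velocity in m/s
    bounciness: float  # fraction of speed kept after a bounce

def step(body: Body, dt: float) -> None:
    """Advance one body by dt seconds using only its own properties."""
    body.vy += GRAVITY * dt
    body.y += body.vy * dt
    if body.y < 0:  # hit the ground
        body.y = 0.0
        body.vy = -body.vy * body.bounciness

# A rubber ball and a lump of clay trace different trajectories purely
# because their properties differ -- nothing here is scripted.
ball, clay = Body(10.0, 0.0, 0.9), Body(10.0, 0.0, 0.1)
for _ in range(1000):
    step(ball, 0.01)
    step(clay, 0.01)
```

            Real engines do this at vastly finer grain, but the principle is the same: the simulation loop consults properties, so novel situations don’t need to be anticipated in advance.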

            The point is, the simulation of physical reality (down to fur on animals!) has advanced a lot in the last decade. (Movies and games account for a lot of it.)

            We have a good handle on how to do the simulations — processing power is a key limit. Some animated movies take months to render in final form.

            “And all of this to a degree that it seems absolutely seamless and real to the mind.”

            I do agree that the more real the virtual is intended to seem the harder the task of computing it.

            On the other hand, who’s to say the virtual environment has to be that real? Perhaps the benefits are great enough to make some compromises in reality acceptable. (This isn’t the Matrix — we’re not trying to fool anyone.)

            In some regards, you want a fantasy environment. Might be fun to live in a Minecraft reality for a while.

            And going so far as to simulate indigestion and perspiration? No thanks! 🙂

        2. James, I’ve already pursued this discussion with Wyrd at length on the “Keith Frankish on the Consciousness Illusion” post:

          https://selfawarepatterns.com/2019/09/26/keith-frankish-on-the-consciousness-illusion/

          You submitted a few comments to that post, so you might have already had a run-through of the discussion, which started with my mentioning that The Matrix is a simulation and not an illusion as Frankish stated. Search for my comment that starts:

          It’s interesting to note that we commenters haven’t yet examined the actual definition of the word ‘illusion’—so lets give Frankish a hand:

          And then read and read and read. It’s a long sub-thread where Wyrd supported the plausibility and I the inconceivable complexity of producing the moving/changing totality of simulated neural impulses of a person in a simulated environment. (And a significant parallel exploration of Wyrd’s unbelief in the implications of relativity physics to boot.) For Wyrd, the production of uncountable simulated nerve impulses representing a simulated being in a simulated world is “an implementation detail.”

          I summed up my position (also possibly yours):

          My major point (I thought) was the unimaginable computational complexity of the simulation. Take visual simulation, for example, in which the minutely detailed data model of the city under specific momentary light conditions from an individual vantage point (an unmoving set of eyeballs) would then be processed to yield frequencies and intensities of photons hitting the eyeball, followed by a computation equivalent to that provided by light rays being lensed onto the retina and activating the retina’s light-sensitive chemistry, then generating nerve impulses processed by the cortical-like neuronal structure at the back of the eyeball. The resultant nerve impulses—those normally sent to the brain via the optic nerve structure—are the ones that would be sent via the plug in the back of the battery person’s neck. Now imagine dealing with moving eyeballs and changing focus. And that’s just vision! How about simulating all of the proprioceptive processes that ultimately yield spinal cord input about the position, motion and internal state of the body? Yes, “‘just implementation detail’ but ‘unimaginably complex’ begins to seem like an understatement.”

          Wyrd compared all of that to “ray tracing”. And now, in this discussion, feeding those simulated neural signals to a simulated brain, rather than a Matrix pod person whose consciousness creates the felt reality, adds enormous additional complexities that we weren’t considering.

          This time, Wyrd sees Microsoft’s Flight Simulator as a simulation when it’s not. It’s instead a comparatively trivial interactive movie, with the fleshly body and consciousness-enhanced brain of the user doing all of the lifting.

          So I invite you to scan through that discussion and realize that Wyrd and you might be about to repeat much of it. Consequently you might both enjoy your Saturday some other way than typing a reproduction of old news and views. And, besides, you might avoid my fate—I don’t think Wyrd likes me anymore [sniff] … 😉

          1. Yes. I agree with your point. You can’t solve the virtual world problem by just multiplying scenarios and alternate paths because the mind (simulated or otherwise) could always go off script and you actually need the world to go off script for it to seem real. I think the VR would almost need a model of the apparent world sufficiently complete to derive any possible variation of it from fundamental principles if the computing and memory requirements are not to be prohibitive.

          2. “I think the VR would almost need a model of the apparent world sufficiently complete to derive any possible variation of it from fundamental principles if the computing and memory requirements are not to be prohibitive.”

            Yes, exactly. That’s exactly how they do work now. (Like MS Flight Sim.) You are free to move around the virtual reality and explore.

            They are crude because of the computing requirements, but that’s the main limit.

            Compare the quality of any VR game with any 3D CGI movie. They both derive from similar models of virtual realities. The movie looks better because it takes hours to render each frame.

            But in both cases a Point Of View is moving through an “apparent world” that is rendered according to what that POV looks at.
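
            A minimal sketch of that POV idea (names and numbers are illustrative only): the full world exists in the model, but rendering cost is paid only for what the point of view can currently see:

```python
# Toy point-of-view culling: the whole world exists in the model, but
# only objects within the viewer's range and field of view are rendered.
import math

# Each entry: (name, distance in meters, angle from gaze direction in radians)
world = [("tree", 5.0, 0.0), ("house", 40.0, 0.5), ("mountain", 900.0, -0.2)]

VIEW_DISTANCE = 100.0
FIELD_OF_VIEW = math.radians(90)  # 45 degrees to either side of the gaze

def visible(distance: float, angle: float) -> bool:
    return distance <= VIEW_DISTANCE and abs(angle) <= FIELD_OF_VIEW / 2

render_list = [name for name, d, a in world if visible(d, a)]
print(render_list)  # ['tree', 'house'] -- the mountain is beyond view range
```

            Moving the POV just changes which entries pass the visibility test; the model itself never needs a script.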

          3. I take it you’ve never used or developed a virtual reality? They’ve acted as I described since at least the 1990s. Back in the MS-DOS 3.33 days I used to play a game called Descent, which featured an unscripted apparent virtual world you could freely explore (once you killed the killer robots). Lots of games work that way.

            MS Flight Sim, which goes way back, has always been such a virtual reality — unscripted and freely explored.

      3. Um … I don’t think an uploaded mind necessarily has to live in any simulated environment. It just needs input and output. Hook it up to a camera or two, and a microphone. Give it some output mechanism like Stephen Hawking had. Good to go.

        *

          1. Linda, that would be true with modern robot bodies. But you have to think in terms of a more advanced robot, one that could have somatosensory and interoceptive mechanisms that would allow you to touch water, eat, drink, etc. Think of the replicant bodies in the Blade Runner movies, the Cylons in Battlestar Galactica, or the hosts in Westworld. With nanotechnology, robotics and biotechnology eventually merge into engineered life.

          2. The brain already simulates the body. It just needs the sensory input to do it. Or, since we’re talking about an uploaded mind, the implementation could choose to bypass much of that and just put the impression of those things in the mind. (A point I realize might make you even more uncomfortable with this concept.)

          3. Not exactly, because the brain does not simply have a passive representation of the body that receives input. The other side of this is the part that takes actions in the world through the body. The actions have to be simulated along with the feedback from the world. Effectively, the problem of simulating the mind is a subset of simulating the apparent world. The mind doesn’t exist except as part of the feedback loop with the world.

      1. I hadn’t realized an MRI field of sufficient intensity to achieve the necessary resolution would fry the brain. We don’t even know what kind of technology is necessary for that kind of a scan.

        1. For non-destructive scanning, nanorobotics may eventually be the best option. Although it’s possible that any scan might necessarily be destructive. (Which would eliminate Graziano’s awkward conversation.)

  23. I understand, and I’m with you on those comments of yours, lemarkle.
    I was just saying that neuroscientists, and many others who are not even scientists, tend to get angry/worked up when other scientists use the words ‘mind/consciousness’ and ‘quantum’ in the same paragraph, even when they’re not remotely trying to sound like or promote New Agery. Whether it’s a book, research paper, blog, etc.
    And I completely understand why there’s anger, because it sounds like a very ‘Chopra-esque’ thing to do: hurling ‘quantum consciousness’ and other sciencey words around.
    https://en.wikipedia.org/wiki/Quantum_mind
    Popular pseudoscientists like Chopra, Robert Lanza, and Sheldrake tend to do a lot of that.
