The brain is a computer, but what is a computer?

Kevin Lande has an article up at Aeon which is one of the best discussions of the brain as a computational system that I’ve seen in a while.  For an idea of the spirit of the piece:

The claim that the brain is a computer is not merely a metaphor – but it is not quite a concrete hypothesis. It is a theoretical framework that guides fruitful research. The claim offers to us the general formula ‘The brain computes X’ as a key to understanding certain aspects of our astonishing mental capacities. By filling in the X, we get mathematically precise, testable and, in a number of cases, highly supported hypotheses about how certain mental capacities work. So, the defining claim of the theoretical framework, that the brain computes, is almost certainly true.

Though we are in a position to say that it is likely true that the brain is a computer, we do not yet have any settled idea of what it means for the brain to be a computer. Like a metaphor, it really is unclear what the literal message of the claim is. But, unlike a metaphor, the claim is intended to be a true one, and we do seek to uncover its literal message.

Even if you agree to the literal claim that the brain computes since, after all, our best theories hold that it computes X, Y and Z, you might be unsure or disagree about what it is for something to be a computer. In fact, it looks like a lot of people disagree about the meaning of something that they all agree is true. The points of disagreement do not spoil the point of agreement. You can have a compelling reason to say that a claim is true even before you light upon a clear understanding of what that claim means about the world.

The overall article is pretty long, but if you have strong opinions on this, I encourage you to read the whole thing.

I’m pretty firmly convinced that the brain is a computational system.  When I read about the operations of neurons and synapses, and the circuits they form, the idea that I’m looking at computation is extremely compelling.  It resonates with what I see in computer engineering with the logic gates formed by transistor circuits, or with the vacuum tubes and mechanical switches of earlier designs.

Apparently I’m not the only one, because neuroscientists talk freely and regularly about computation in neural circuits.  It’s not really a controversial idea in neuroscience.  The computational paradigm is simply too fruitful scientifically, and no one has really found a viable alternative theoretical framework.

Of course, many neuroscientists are careful to stipulate that the brain is not like a commercial digital computer, and I do think this is an important point.  Brains aren’t Turing machines, and they certainly don’t have the Von Neumann architecture that most modern commercial computers resemble.  But as the article discusses, these systems are a small slice of possible computational systems.

If we restrict the word “computation” to only being about what these devices do, then we need to come up with another word to describe the other complex causal systems that sure appear to be doing something like computation.  I’ve used the word “causalation” before to describe these systems.  But commercial computers are also causalation machines, so this just brings us right back to the bone of contention.  Both commercial computers and brains appear to be causal nexuses, and computation could be described as concentrated causality.

The stumbling block for many, of course, is that the brain doesn’t follow any particular abstract computational design.  It’s a system that evolved from the bottom up.  So like any biological system, it’s a messy and opportunistic mish-mash.  Still, scientists are gradually making sense of this jumble, and computationalism is an important tool for that.

Anyway, one of the things I’ve learned in the last few years is that a lot of people really hate the idea of the brain as an organic computer.  It’s yet another challenge to human exceptionalism.  So I anticipate a fresh wave of anti-computational responses to this piece.

What do you think?  Are there reasons I’m not seeing to doubt the brain is a computational system?  And if so, is there another paradigm worth considering?

199 thoughts on “The brain is a computer, but what is a computer?”

  1. “…a lot of people really hate the idea of the brain as an organic computer. It’s yet another challenge to human exceptionalism. So I anticipate a fresh wave of anti-computational responses to this piece.”

    Hold on there, buckaroo… One can just as easily argue this computational idea is the wishful thinking of having read too much science fiction. 😛

    I don’t “hate” the idea, Mike. I just think it’s wrong. I think every argument I’ve heard amounts to hand-waving.

    From the quoted part:

    “The claim offers to us the general formula ‘The brain computes X’ as a key to understanding certain aspects of our astonishing mental capacities.”

    This is either circular or the start of a claim that must end in contradiction (e.g. a claim we can enumerate all the reals). Yet it ends just asserting the thing it asserted in the first place, so it’s circular hand-waving.

    “When I read about the operations of neurons and synapses, and the circuits they form, the idea that I’m looking at computation is extremely compelling.”

    Why? There is no form of computation anything like it. It’s an analog system functioning according to very complex dynamics, a massively interconnected fully parallel asynchronous system.

    As you say, “Brains aren’t Turing machines, and they certainly don’t have the Von Neumann architecture…”

    If that first part is actually true, then the brain is not a computing device of any kind, game over. (The latter part is obviously true.)

    A Turing Machine defines what we mean by computation.

    That’s the thing… If a numerical model can compute the mind, then the mind is necessarily a Turing Machine. Brain uploading assumes this, too, for the same reason.

    That means the mind is a mathematical abstraction, unlike anything else in nature.

    The question I keep asking is, given that nothing else natural is an algorithm, where does this confidence that our minds “almost certainly” are algorithms come from?

    To me it seems completely unwarranted. It isn’t so much that I cling to my own idea (not that I even really have one) as that, after considerable thought, I utterly reject this one.


          1. Firstly, those are the same thing. Computers compute what is computable.

            Secondly, I originally said, “A Turing Machine defines what we mean by computation.”

            You asked for a source for that statement, and I provided it.


          2. The link you gave is at its core about the question of what is computable. I think you’re interpreting it incorrectly. And the computer you’re using to type your messages is not a Turing machine – so unless you don’t want to call the device you’re using a computer, you’ve got a contradiction there.

            I don’t agree you provided a supporting source – I think various panels of scientists would consider it the same way. Just lodging my disagreement on this matter. If you think it’s supported, okay, I get you think it is supported.


          3. “The link you gave is at its core about the question of what is computable.”

            Well, yes. As I said, what we mean by computation — what is computable.

            “I think you’re interpreting it incorrectly.”

            How so? What is the correct interpretation?

            “And the computer you’re using to type your messages is not a Turing machine…”

            You mean literally, physically? No, obviously not. But there absolutely is some TM that implements this laptop running this software.


    1. Agree. Let me quote something that I wrote that hasn’t made it to my blog yet.

      I’ve grown weary of the comparison between brains/minds and computers, even though I find myself sometimes slipping into talk of “visual processing” rather than “seeing”, or “retrieving information” rather than “remembering”. Certainly, a one-to-one comparison of brains to computers is flawed. The brain does not have physical equivalents of a CPU, RAM, or disk drives, even if it might have some activities that resemble what they do. It does not have software independent from hardware. The closest comparison to an actual computer would be something that doesn’t exist: self-modifying hardware that comes partially programmed from the factory and then reprograms itself on the fly.

      Sometimes it is stated that the brain operates digitally like a computer. Some aspects of its signaling, such as firing neurons, resemble digital processing, but other aspects are analog. As Paul King writes: “Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not ‘digital.’”

      The ultimate claim is that consciousness is computable. I wonder sometimes if those making this claim even understand it or have a basis for it. I suppose some are of the Seth Lloyd school that basically believes everything in the universe is a form of computation. In that viewpoint, consciousness, as just one other thing in the computing universe, would necessarily be computable. Whether an individual’s consciousness can be replicated in an actual computer, however, is probably a purely academic question. Miguel Nicolelis believes it is impossible because its “most important features are the result of unpredictable, nonlinear interactions among billions of cells. You can’t predict whether the stock market will go up or down because you can’t compute it. You could have all the computer chips ever in the world and you won’t create a consciousness”. For the foreseeable future, the prospect that an individual’s consciousness could be replicated through pure algorithm and bit flipping is not likely.


      1. Not sure that it will ever be feasible to build a conscious computer with anything like current tech, but I think it has to be possible in principle and I’m more optimistic than you.

        In particular, the analogy to the stock market seems wrong to me, because we can indeed simulate something that behaves very much like an actual stock market. The reason we can’t predict what a particular real-world stock market will do is because of chaos, and not because we’re unable to simulate such a thing. The analogous situation for a brain simulation would be that the simulated brain would not allow us to predict exactly what the real brain would do, but it could still be behaving in a manner characteristic of a real brain, and if so I would deem it to be conscious.


    2. Wyrd, just to clarify, I wasn’t just thinking of you when I made the statement about people hating this concept. I’ve mentioned neural computation before in discussion threads on philosophy blogs and subsequently felt like I’d dived into piranha-infested waters when all the anti-computationalists swarmed. I’ve actually learned not to bring it up unless it’s directly relevant to the subject matter, because it can often derail the conversation.

      In your case, you may not hate it, but you certainly seem to be passionate in your conviction that it’s wrong. It’s a passion I feel in your responses anytime this comes up.

      On the snippet of text you say is circular, what in particular is circular about it? What consequence is being assumed as a premise (which is what I take you to mean by circular)? Or am I missing the point here?

      On Turing machines, I think we’ve discussed this before, but what do you think about analog computers? https://en.wikipedia.org/wiki/Analog_computer
      They’ve fallen out of use in recent decades, because digital computers can approximate their operations with enough precision for practical purposes. Why couldn’t a sufficiently powerful digital system approximate a brain’s operations in the same manner?
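      To make that concrete, here’s a rough sketch of what such an approximation looks like (a generic numerical example with made-up values, nothing brain-specific): a continuously evolving analog quantity, here a leaky integrator, stepped forward digitally, with the error shrinking as the step size shrinks.

      ```python
      # Hypothetical sketch: a digital approximation of an analog process.
      # A leaky integrator obeys dV/dt = (I - V) / tau; we step it forward
      # with Euler's method and watch the error shrink as dt shrinks.
      import math

      def leaky_integrator(i_in, tau=0.02, dt=0.001, t_end=0.1):
          v = 0.0
          for _ in range(int(t_end / dt)):
              v += dt * (i_in - v) / tau   # discrete update of a continuous law
          return v

      exact = 1 - math.exp(-0.1 / 0.02)    # closed-form answer for i_in = 1
      for dt in (0.01, 0.001, 0.0001):
          print(f"dt={dt}: error={abs(leaky_integrator(1.0, dt=dt) - exact):.6f}")
      ```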

      On your final question, if you read the article, it covers the fact that the computational paradigm is fruitful for scientific research, particularly when testing for specific computations. If it ever ceases to be fruitful, or if someone comes up with a more effective meta-model, I’m sure it’ll be dropped like a hot potato. Until then, computation seems to be to neuroscience what evolution is to biology: a framework allowing neuroscientists to make sense of their data.


      1. “It’s a passion I feel in your responses anytime this comes up.”

        A lot of that is just my passion for a good argument. I do get more passionate anytime the topic is important. (You should see me argue politics and culture! Oh, right; you have! 😀 )

        “On the snippet of text you say is circular, what in particular is circular about it?”

        As cadxx comments below, it asserts the thing it seeks to prove.

        “On Turing machines, I think we’ve discussed this before, but what do you think about analog computers?”

        They operate according to completely different principles and don’t, in computer science terms, “compute” their outputs. A resistor network, for instance, can implement a mathematical expression. The physical law, E=IR, results in an instant output due to voltage and current flow.

        We can definitely compare the brain to an analog “computer” — both operate according to dynamical physical principles. Neither “calculates” nor “computes” its outputs. (Not as the terms are used in computer science.)

        “Why couldn’t a sufficiently powerful digital system approximate a brain’s operations in the same manner?”

        The rub lies in the approximation. And in the complexity of what must be modeled.

        And we’ve definitely talked about this before! 😀

        “[T]he computational paradigm is fruitful for scientific research, particularly when testing for specific computations.”

        That’s almost exactly what proponents say about String Theory.

        Which has been fruitful in the field of mathematics, but hasn’t given us much of anything when it comes to solving reality.

        Given the huge size of the assumption that the mind is the one purely mathematical object in the universe, perhaps computational theories will someday be recognized as an interesting, but incorrect, diversion.

        Or not. Maybe that is what is special about the mind: that it is an abstract mathematical object.


    3. “It’s an analog system … A Turing machine defines …”

      1 – I don’t remember that objection being used in the days of analogue computers.

      2 – There are digital elements to the brain’s components: action potentials.

      3 – Digital components in a digital computer are analogue devices at some scale. There is no specific digital count of electrons that trigger the change of state in any of the millions of ‘bits’ in a memory chip – and each bit is an arrangement of analogue components. If you’ve ever seen messy digital signals on an oscilloscope there’s no mistaking their analogue nature.

      5 – Computer – originally a human.


        1. None of those examples I gave require ‘reals’. If you want to get picky, all that we know boils down to the quantum digital world. And ‘reals’ constitute a mathematical concept we have no example of in our experience (science). Every place we think we measure a ‘real’ is a digital approximation … to so many digits.


          1. Pretty much all the math in quantum physics, let alone physics itself, uses the reals. We do know matter and energy are discrete, but the jury is still out on time and space. (Physics currently treats them as continuous.)

            The thing about a measurement is that it necessarily rounds off reality. Likewise entering finite numbers into a computer. This is exactly where we lose the battle.
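            A two-line illustration of that rounding, using standard floating-point behavior (nothing exotic, just Python’s 64-bit floats):

            ```python
            # Finite precision necessarily rounds: 0.1 has no exact binary
            # representation, so error appears in the very first operation.
            print(0.1 + 0.2 == 0.3)      # False
            print(f"{0.1 + 0.2:.20f}")   # 0.30000000000000004441
            ```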

            To be honest, I’m not quite sure what your numbered points asked of me:

            1. Are you saying your memories are all-inclusive of the period? You never heard of it, so it couldn’t have happened? What time period are we talking about? Turing’s work is mid-1900s, and the field has grown a lot since then. All I can say is that analog computers work on completely different principles than digital ones. The former are physical models of reality; the latter are numerical simulations.

            2. Sure, but the brain (as with all real world objects) is distinctly analog.

            3. Yes, but as I discussed with Travis R, computer circuits are designed with hysteresis so there are large zones enforcing a bistable system. The design overcomes the noisy signal.

            5. The term, originally, yes. It meant “one who computes.” Even then it had the distinct meaning of applying algorithms to solve mathematical problems. (A learned skill that doesn’t come easy to humans and which many never learn.)


          2. Wyrd,

            ”Sure, but the brain (as with all real world objects) is distinctly analog.”

            The brain, and my iPad, are distinctly analog, as you say, but both can make use of digital computing. A neuron either fires or it doesn’t. It gets inputs from multiple sources, and some of those sources may be suppressive, but if it gets 70% positive and 15% suppressive, it doesn’t fire at 55%. It just fires, or not. (There may be some wiggle room here, but that’s the standard “integrate and fire” model.)
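            A minimal sketch of that all-or-nothing behavior, using the numbers above (the weights and threshold are made-up illustration values, not a biological model):

            ```python
            # Toy "integrate and fire" unit: sum weighted inputs (excitatory
            # positive, inhibitory negative) and emit an all-or-nothing spike
            # when the net drive crosses a threshold.
            def fires(inputs, weights, threshold=0.5):
                drive = sum(x * w for x, w in zip(inputs, weights))
                return drive >= threshold    # True = spike, False = silence

            # 70% excitatory minus 15% inhibitory = 0.55 net drive:
            # the unit doesn't fire "at 55%" -- it simply fires.
            print(fires([0.70, 0.15], [1.0, -1.0]))   # True
            ```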

            Also, when the brain processes spoken words it is making use of digital technology. Phonemes are digital. That’s why I can say “Harverd”, and you can say “Hahvahd”, and we can both understand that as the same word. Digital does not have to mean binary.

            *


          3. “The brain, and my iPad, are distinctly analog…”

            No. There is some analog circuitry in your iPad (the radio, the audio), but nearly all of it is digital.

            “[A neuron] just fires, or not.”

            But its output is distinctly analog. A pulse train where the frequency carries signal.

            “Phonemes are digital.”

            Not the way the term is usually meant, but sure. They’re symbols, we can agree on that much, and symbolic processing is a central aspect of computing.

            “Digital does not have to mean binary.”

            Right. It means discrete symbols.


  2. I think an interesting way to put it is: could an evolved bio computer end up denying it is an evolved bio computer? I think the answer is yes, it could deny that. Indeed it makes sense, as evolution doesn’t filter toward the organism identifying itself correctly; it primarily filters towards food and mating.

    But I have my own issues with the word computer, though not the traditional complaint – ‘computer’ is, imo, too anthropomorphised a term. It’s too intentional. Which I hope is clearly the opposite of the exceptional-humans camp, which finds the word ‘computer’ something not to be related to the human condition.


    1. I can see that. I actually try to minimize my use of the specific word “computer” when talking about the brain, so people don’t think I’m implying it has a CPU, RAM, hard drive, etc. This is a concession to the realities of language in a culture since the exact meaning of “computer” changes over time. It once referred to human workers doing computation.


  3. I never really thought about it before, but now that it’s come up and I have, I realized why I resist the idea that neurons are logic gates: I think it confuses the operation with the output.

    It is true that gates and neurons have operational states we can label “on” and “off” (or “firing” and “not firing”), hence the metaphor of seeing them as logic gates.

    But a logical output is discrete, just a one or zero in binary. The output of a neuron encodes an analog signal in firing frequency. Further, that signal is a complex summation of similarly coded analog inputs.

    It’s an analog device operating according to complex dynamics.

    It’s better compared to an op-amp circuit with very high gain (so it’s in one state or another — bistable) and a large resistor network summing inputs. Although that doesn’t model suppressive inputs well — that requires a network of op-amps for each neuron.

    Maybe it comes from having worked with a lot of these kinds of circuits in my hardware days, but I just can’t see a neuron as a logic gate. It’s a convenient easy metaphor, but there’s a lot of “you know what we mean” involved. It shouldn’t be taken too seriously, and it’s not grounds for seeing the brain as a computer.


    1. John Dowling, in his book “Understanding the Brain”, describes a neuron as translating an amplitude modulation signal into a frequency modulation one, which subsequently results in AM signals in the downstream neurons, and the cycle continues. Since brain waves are always propagating, an excited neuron is really just firing more often than the average, and an inhibited neuron is firing less often.
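      For what it’s worth, here’s a rough sketch of that AM-to-FM-and-back idea (the rates and window are illustrative only, not Dowling’s actual numbers):

      ```python
      # Rate coding sketch: an analog amplitude is encoded as a spike rate
      # (FM), and a downstream reader recovers an amplitude (AM) by counting
      # spikes in a time window.
      def encode_to_spikes(amplitude, max_rate=100.0, window=1.0):
          """Emit evenly spaced spike times at a rate proportional to amplitude."""
          n = int(round(amplitude * max_rate * window))   # spikes in the window
          return [i / (amplitude * max_rate) for i in range(n)]

      def decode_from_spikes(spikes, max_rate=100.0, window=1.0):
          """Recover the amplitude from the spike count in the window."""
          return len(spikes) / (max_rate * window)

      spikes = encode_to_spikes(0.37)
      print(len(spikes), decode_from_spikes(spikes))   # 37 spikes -> 0.37
      ```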

      It definitely seems more like analog processing than digital. But the overall chain still feels very computational to me, albeit in an analog fashion.


      1. I’m guessing Dowling sees the synapses converting the FM to AM? Yeah, that’s a decent model. Billions of tiny “radio stations” broadcasting a signal they derive by listening and responding to a sub-set of other stations.

        It certainly feeds into my suspicions regarding intracranial standing waves being a fundamental part of consciousness — the “lasing” of the mind, so to speak.

        “But the overall chain still feels very computational to me, albeit in an analog fashion.”

        See my questions in the comment below. If you want to use the radio station model and see “computation” as a dynamical physical system operating under physical laws, then we’re on the same page.

        The only difference is the label “computes” — which I don’t think applies here.

        It’s an important sticking point, because fully appreciating why has everything to do with whether a numerical model of a mind can work.


      1. Affirming the Consequent is about making the cart more important than the horse that will inevitably need to pull it. Theories that are not theories are constructed with never a thought for empirical truth. There is not a shred of evidence in support of the brain being a computer – ‘it has to be or as it should be’. Because science has become a religion we are again confronted with David Hume’s is–ought problem about what is and about what ought to be. See: https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem

        A computer is a number-crunching machine (basically an adding machine, which is what it evolved from).

        If we trace the computer/brain myth back in history we find Alan Turing, who never built a computer and was not a brilliant code breaker. The Turing we have today is not a man but a mythological construct used and abused in order to build computer science.


        1. I still can’t see that Affirming the Consequent has been demonstrated. And I personally think the evidence is glaring and pervasive. But it seems pretty obvious we see the world very differently, and neither of us is going to convince the other today 🙂


          1. “…glaring and pervasive…”

            I genuinely don’t understand what is “glaring” or “pervasive” (persuasive?), especially after all the objections I’ve raised. You seem to have made up your mind, and it’s a mystery to me why or how.

            I think it’s that I don’t understand the lack of science skepticism… People seem to really want this to be true.

            And I’m not saying it’s not — I don’t know! But considering all the serious challenges involved (which I’ve covered thoroughly), why are people so certain a natural object in the physical world — unlike all other natural physical objects — is an algorithm, an abstract mathematical object?

            The only way that makes any logical sense is if you are a Tegmarkian.

            And it means dualism, since the belief entails that brain and mind are separate.

            From where I sit it’s a huge ask, so I’m lost as to why y’all think it’s “glaring and pervasive”?


          2. “People seem to really want this to be true.”

            My impression is that a lot of people desperately want it to be false. But each of us has to call it as we see it. I think your objections hinge on an artificially narrow conception of computing, one not borne out by the historical understanding of computers.
            https://en.wikipedia.org/wiki/History_of_computing_hardware
            Now, if you want to say that the brain is not a general purpose programmable computer, then I’m on board. But to say it’s not computational at all strikes me as simply ignoring what we know about neural circuitry.

            I’m not a Tegmarkian and have never seen the necessary connection, although DM might agree with you on this.

            On dualism, you seem to be labeling functional abstraction as dualism. You can call that dualistic, but it’s dualism in the same sense of Windows or Linux having an existence beyond whatever hardware it’s currently running on, or a book having an existence beyond the current physical copy we’re holding. If you’re saying it’s the same as Cartesian dualism, I’d need to see the dots connected.


            “I think your objections hinge on an artificially narrow conception of computing, one not borne out by the historical understanding of computers.”

            “My” definition is the one used in computer science. What do you feel is artificially narrow?

            “I’m not a Tegmarkian and have never seen the necessary connection…”

            You believe the mind is an abstract mathematical object. Tegmarkians believe everything is an abstract mathematical object. 🙂

            “[I]t’s dualism in the same sense of Windows or Linux having an existence beyond whatever hardware it’s currently running on,…”

            Yes, exactly. There is hardware and there is software. That’s the dualism.

            “…or a book having an existence beyond the current physical copy we’re holding.”

            Yep. There is a distinct object, the text, and there are various ways of realizing that text.

            In both cases, the key point is that there is an abstract object consisting of pure information.

            (And note both examples are human designed and created, not natural objects. Now try to find a natural object with this same dualism.)


          4. “What do you feel is artificially narrow?”

            You seem to be saying that computation can only happen on a general purpose digital computer. Or that it can only be done in the exact way that kind of system does it. The reason I linked to the historical information was to remind you that computation happened before 1936.

            It may eventually turn out that computation is the wrong way to understand nervous systems. Only time and the data will tell. But if so, I can’t see that this line of attack will be what brings it down.


          5. “You seem to be saying that computation can only happen on a general purpose digital computer.”

            No, I’ve been very clear that “my” definition is the definition of CS.

            “The reason I linked to the historical information was to remind you that computation happened before 1936.”

            But what there do you think invalidates anything I’ve said?


  4. Thanks for recommending that article, Mike. I agree that it was very well thought-out and nuanced. A pleasure to read.

    I have my own thoughts about what it is for something to be a computer. To me, being a computer is a bit like being a paperweight — anything can be viewed as a computer or as a paperweight (at least in the latter case if it is of a somewhat medium size/density), because just about anything can be interpreted as computing something, and just about anything (if medium-sized/dense) can be used to hold down a sheet of paper. When we say that something is a computer, what we really mean is that it is useful to regard it as a computer, and this means that there is something interesting and/or complex in how information flows through it.

    For me, to say that the brain is a computer is to go further still and to say that the brain’s primary role in the body is to process information, taking in nerve signals as inputs and producing nerve signals as outputs. It is true that it also does other things — it secretes hormones and waste products, it acts as an inert mass in the head, it converts chemical energy to thermal energy, etc., and all of these have their effects too, some of which may be quite important to us because we have evolved to accommodate them. For instance, if my brain for some reason suddenly stopped consuming so much energy then I would presumably put on more weight and be more at risk of heart disease etc.

    But to say the brain is a computer is to say that the most important thing it does is to process information. It is no different in this respect from saying that the heart is a pump, which is to say that the most important thing the heart does is to push fluid around. And the heart really is a pump, just as the brain really is a computer.

    The problem for your view, I think, is that you require there to be a fact of the matter about whether some particular computation is actually taking place, because it seems that there has to be a fact of the matter about whether we have minds and experiences and so on. Searle and Putnam have made (to me) convincing cases against this, concluding that you can interpret any physical system to be conducting any computation. I think they’re right on this, and yet I still think that our minds are computations, because I identify the mind with the abstract platonic computation itself rather than with some physical process which may or may not be realising it. All ground we’ve covered before but it seems relevant so I’m just mentioning it.

    > Brains aren’t Turing machines

    Neither is a PC. A Turing machine is an abstract model of computation. I’m not sure if any actual Turing machines exist. So I’m not sure what you mean, unless you mean “Turing machine” as a synonym for computer, in which case I would say it is a Turing machine. Perhaps you mean that it’s not programmable and fully general purpose like a Universal Turing Machine or a PC. But for your point I think all you need to say here is that it does not have a Von Neumann architecture and it is organised nothing like man-made computers.


    1. One clarification — I also think that the claim that the brain is a computer implies not only that the brain’s primary role is to process information, but that the ways in which a brain processes information are computable in the Church/Turing sense. So if it turned out that Roger Penrose/Stuart Hameroff were right and the brain could process information in ways unavailable to a Turing machine, then I guess the brain would not be a computer.


    2. “Neither is a PC [a Turing Machine].”

      Actually, it is in the sense that there is some TM that is functionally identical to any given PC running any given algorithm.

      For any computable function, there is some TM that computes that function. Anything else that computes that same function is computationally equivalent to that TM.
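      To make that concrete with a toy example (an illustrative machine, nothing from the thread): here is a complete Turing machine, given as a state table, that computes one specific function: flipping every bit of a binary string. Anything else computing that same function is computationally equivalent to it.

      ```python
      # A complete (toy) Turing machine: a state table plus a tape. This one
      # computes a single function -- logical NOT of each bit on the tape.
      def run_tm(tape, rules, state="scan", blank="_"):
          tape, head = list(tape), 0
          while state != "halt":
              symbol = tape[head] if head < len(tape) else blank
              state, write, move = rules[(state, symbol)]
              if head < len(tape):
                  tape[head] = write
              else:
                  tape.append(write)
              head += 1 if move == "R" else -1
          return "".join(tape).rstrip(blank)

      # (state, read) -> (next state, write, move)
      rules = {
          ("scan", "0"): ("scan", "1", "R"),
          ("scan", "1"): ("scan", "0", "R"),
          ("scan", "_"): ("halt", "_", "R"),
      }
      print(run_tm("10110", rules))   # -> 01001
      ```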

      The idea that there is some TM that is equivalent to the brain is a belief held by many, but it is so far a belief founded almost entirely on wishful thinking and metaphor.


      1. Hi Wyrd

        > Actually, it is in the sense that there is some TM that is functionally identical to any given PC running any given algorithm.

        I covered that sense in my post though. Using it in this sense is just making it a synonym for computer.

        I understand that you don’t accept that the brain is a computer/that there is some TM that is equivalent to the brain, but Mike and I are on the same page on this one — that there is such a TM, so from our shared perspective Mike’s point that the brain is not a TM is either trivially true (in the literal sense) or false (in the broader sense you describe).


        1. “[F]rom our shared perspective Mike’s point that the brain is not a TM is either trivially true (in the literal sense) or false (in the broader sense you describe)”

          Indeed. From my perspective as well. 🙂


        2. But a TM and a von Neumann architecture modern computer are both discrete systems, while the brain, as far as we know, is not. It’s an analog system whose function is not known to be equivalent to any discrete system.

          Unless there’s something I’m missing?


          1. Paul, in what sense is the brain an analog system? Neurons seem to be binary digital to me. They fire or they don’t. Yes, they can fire at a certain frequency, but that’s still a digital signal, just like a music synthesizer creates a frequency from a digital signal.

            *


          2. Paul, that article makes a good case that the brain can do and does analog computations, but so can a digital computer. The author noted that as well. He also notes the oft-cited digital nature of the neuron, and to counter that he points out that there is some evidence that some neurons can have action potentials of different shapes, which could have different effects on downstream signals. Given his nice explanation of what an analog computation is, it is not clear that this difference in shapes would correspond to such a computation. (How would the meaning of the signal co-vary with the shape of the action potential? Wider is more? Higher is more?)

            In the end, I don’t think the article demonstrates that the brain is an analog computer as opposed to a digital computer which can do some analog computing.

            *


    3. Thanks DM. And well said.

      I agree that ultimately whether a particular system is doing computation isn’t a fact of the matter. Neither is the existence of a mind or consciousness. As I say regularly, as counter-intuitive as it seems, consciousness is in the eye of the beholder. As are minds overall, and computation.

      But I think you make an excellent point that there is a pragmatic determination to be made here. Is thinking of the brain as a computational system productive? Does it help us make sense of the data? Just as considering the heart to be a pump helps us understand what it’s doing, considering the brain to be computational helps us understand what is happening.

      From what I understand, Turing machines don’t physically exist, although some systems are closer to them than others.

      But abstract concepts aside, the brain is not a general purpose computer. It can’t perform just any computation. It can’t run Tetris. It’s a system tightly orchestrated for certain purposes involving being the control center of a living organism. But that doesn’t mean a general purpose computer can’t perform the computations it performs. Although a digital one will only be able to do so to some level of precision depending on its capacities.


      1. Hi Mike,

        If you mean a general purpose computer, then you’re talking about a Universal Turing Machine, not a Turing Machine. A Turing Machine is an abstract model of something carrying out a specific computation. A UTM is a Turing Machine where the specific computation is a program to simulate any TM given to it as input.

        Even so it would be clearer to say that the brain is not a general purpose computer.


      2. “As I say regularly, as counter-intuitive as it seems, consciousness is in the eye of the beholder. As are minds overall, and computation.”

        Doesn’t this lead to Berkeley-style Idealism?

        Why is a particular thing conscious? Because it’s being interpreted as such. What happens if there’s no one around to interpret it as conscious? Well, luckily there’s one interpreter who’s always around…


        1. Not as I see it. As I see it, there is an external world, but consciousness doesn’t exist in it. Like beauty or love, it only exists subjectively. To perceive a fellow consciousness is an act of empathy.

          But if there were only one conscious system in the universe, there’d be no one to empathize with it, or anything for it to empathize with, and the idea of consciousness would not arise. But if there were two, then each might wonder how much like it the other is, and might give a label to this idea, which we’d likely translate as consciousness.


          1. So when we, in our universe, ponder the hypothetical universe you just outlined with these two creatures in it, we call them empathetic. They empathize with each other and use a word which we translate as consciousness.

            When we say they empathize, is our statement true? Is it non-truth-evaluable, like “Hooray for the Detroit Tigers”?


          2. Are you asking if empathy itself is an objective fact? Well, we say it involves holding an internal model of how we think the other works, a theory of mind. I do think those models exist as a neural firing pattern in our brains. And it is similar to, if not another instance of, the metacognitive model we can hold of the self.

            I used to think this metacognitive model was itself consciousness, but there are many problems with that position, most notably that if it’s knocked out in a human, but they’re still aware of their environment, we still consider them conscious.


    4. I think what y’all are missing, and by y’all I mean all y’all [I do so love getting my fake Texan on], is the role of teleonomic purpose. The difference between mapping a computation to a wall (a la Searle) and mapping that same computation to a mechanism designed to perform that computation is that the mechanism was designed with a purpose, and the mechanism could thus theoretically repeat the computation.

      Despicable Me recognizes anything could be viewed as a computer or paperweight. The question is: what purpose is it serving in its current configuration? Is the computer unplugged and sitting on a stack of papers? Or is it plugged in, taking input from a keyboard, and producing some kind of output that serves a purpose? You could map that keyboard input, processing, and output to the molecules in a wall, but you would be hard-pressed to explain the purpose of the mechanism there, and also hard-pressed to repeat the computation.

      Despicable Me also says the primary role (purpose!) of the brain is to process information, but what counts as processing information? Does shredding a piece of paper with writing on it count? I suggest what counts (for computation/information processing) is the recognition of a physical pattern as representing a concept (information) and the generation of a response which is useful (relative to a teleonomic purpose) in the context of the meaning of that concept.

      Mike (SelfAwarePatterns) likes to say “Consciousness is in the eye of the beholder”. But this is another way of saying the mechanism is designed to interpret the input in a particular way such that it produces a particular output which is teleonomically useful relative to a specific concept relatable to the input. So a black spot moving in the visual field can produce a blink response for one mechanism, but it might produce a “capture with tongue” response in a different mechanism. How the mechanism responds is determined by whatever creates the mechanism (the final cause), and that determines the concept (potential food vs. potential eye damage) associated with the interpretation.

      Word Smyth asks, “does the Earth compute its orbit?” I answer: only if you can explain how the Earth takes input and generates output which is useful relative to the meaning of the input. Doing this for a computer, and for a brain, is somewhat straightforward. Easier for a computer because we know all the functions and purposes going in. Harder for the brain, but still in the category of Chalmers’ easy problems.

      *
      [it’s all about the final cause]


      1. Did you mean to misstate Disagreeable Me’s handle?

        The only issue I see with the mechanism application is it seems trivially true. And isn’t everything ultimately a mechanism for something? Even a rock could be interpreted as being a mechanism for some things, like receiving sunlight and radiating warmth.

        In the case of a massively parallel system like the brain, it seems like we have clusters and hierarchies of nested and interacting mechanisms. You can view the whole thing as one giant mechanism, but it’s one with millions of inputs and outputs.

        I do agree that function is important. It’s one reason why IIT always leaves me unsatisfied. It seems to ignore function in favor of a pattern.

        [The final cause is in the eye of the beholder] 🙂

        Granted, some interpretations require a lot more energy than others.


        1. [Ermmm … Sorry DM]

          In point of fact, everything is a mechanism for something. Congratulations, we’ve figured out ontology. But I’m pretty sure you can define (teleonomic) purpose without including everything. In what sense is a radiating rock serving a purpose?

          Regarding a massively parallel system like the brain, yes, we have clusters and hierarchies of nested and interacting mechanisms. Originally, Marvin Minsky referred to them as agents, but in his latest book he changed that to resources. And yes, you can consider one brain as one mechanism. And yes, you can consider a company as one mechanism. And yes, you can consider a nation as one mechanism. And yes, you can consider one body plus one smart phone plus one internet as a mechanism. The question is always, what kinds of computations can Mechanism X do, and asking the question assumes we care about the answer.

          I agree that IIT hasn’t addressed function/purpose/meaning, but I think they’re getting there. The latest version (3.0) has started saying that the integration generates a vector in a multi-dimensional qualia space. This is a (possibly exact) description of what Eliasmith’s Semantic Pointers are/do. Thus, we can demonstrate a purpose for particular integrations.

          You can say final cause is in the eye of the beholder, but them’s fightin’ words. Teleonomic final cause, teleonomic purpose, is an objective description. And it’s exactly what makes the difference between a computation and a rock rolling down a hill.

          *


          1. “the integration generates a vector in a multi-dimensional qualia space”

            That sounds impressive, but I have no idea what it means. This is another issue I have with IIT. Much of the language around it seems like a word salad designed to sound impressive rather than clearly convey information. It can often be interpreted in many different ways. If I can follow professional neuroscience papers, but can’t follow what the IIT proponents are saying, it seems like a red flag.

            “Teleonomic final cause, teleonomic purpose, is an objective description”

            So, as I understand it, teleonomy is apparent purpose. Can something that is only apparent be objectively established?

            Of course, we can demonstrate whether something is adaptive for survival or reproductive success. We can also document how people use certain artifacts. And we can choose to call those qualities “purpose”, as long as we’re clear what we mean by that word. Evolutionary biologists use purpose-filled language all the time, but they usually mean it metaphorically.

            But then what do we do when use of a certain trait or artifact shifts, either in evolution or in general usage? It’s not unusual for evolutionary traits to shift in adaptive usage over time. The hippocampus seems to have evolved to map spatial points in the environment, but has morphed to play a crucial role in memory persistence. In the movie Logan’s Run, characters in the future keep themselves warm by burning paper money that’s lying around in DC ruins, obviously a major shift in purpose from how we use that paper.


          2. “the integration generates a vector in a multi-dimensional qualia space” […] “That sounds impressive, but I have no idea what it means.”

            If you really want to understand it, look up Semantic Pointers. But try this: think of a watch with hands, but no numbers on the dial. Instead of the numbers, put in concepts. So where the “1” would be, put in “hot”. Instead of “2”, put “blue”. Expect noise, so you don’t want to put more than, say, twelve things on this dial. So wherever this dial is pointing is what you are attending to. Maybe a loud noise would shift the dial over to “danger”. Other systems get info from the dial (thus, global broadcasting) and can react accordingly. (“Danger”? Initiate “escape”!). Now add a dimension and replace this dial with a golf ball. Each dimple is a concept, and similar concepts may be close together (or not?). Lots more concepts. Now add another dimension, and another, and another.

            The cool thing is, not only can you get concepts but you can do operations which can combine them. So you may have a concept (pointing at one dimple) for “red”, and another for “ball”, and you can “add” them to get an entirely new pointer for “red ball”. And you can “subtract” them too. So you can ask questions about unknowns. Say “red ball” = X. You can say, “I know X is a ball, but what color is it?” You can do the “subtract ball” operation on X, and you get “red”.

            The actual workings are more complicated (X would be more like “color:red + shape:sphere + size:small”), but Eliasmith has shown this works with a reasonable number of biologically plausible simulated neurons.
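            A minimal sketch of that add/subtract behavior (illustrative only; Eliasmith’s actual semantic pointers use binding operations like circular convolution rather than plain addition): concepts are random high-dimensional vectors, and “what color is X?” becomes a subtraction followed by a cleanup step that finds the nearest known concept.

            ```python
            # Toy vector-symbolic sketch: combine concept vectors by addition,
            # query by subtraction, "clean up" by nearest cosine similarity.
            import random

            D = 1000                 # dimensionality of the concept space
            random.seed(1)

            def concept():
                return [random.gauss(0, 1) for _ in range(D)]

            def add(u, v): return [a + b for a, b in zip(u, v)]
            def sub(u, v): return [a - b for a, b in zip(u, v)]

            def cosine(u, v):
                dot = sum(a * b for a, b in zip(u, v))
                nu = sum(a * a for a in u) ** 0.5
                nv = sum(b * b for b in v) ** 0.5
                return dot / (nu * nv)

            vocab = {"red": concept(), "blue": concept(),
                     "ball": concept(), "cube": concept()}

            x = add(vocab["red"], vocab["ball"])   # X = "red ball"
            query = sub(x, vocab["ball"])          # "X is a ball -- what color?"
            print(max(vocab, key=lambda k: cosine(query, vocab[k])))   # -> red
            ```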

            Worth understanding.

            *


          3. Thanks for the explanation. Every time I read about semantic pointers, they strike me as a high-level version of Damasio’s convergence-divergence zones, essentially the nexus where a specific mental concept exists. Although I guess the semantic pointer itself might be more a reference to that nexus rather than the nexus itself, or maybe its own nexus referencing the other, similar to the higher order representations in HOT.


          4. ”So, as I understand it, teleonomy is apparent purpose. Can something that is only apparent be objectively established?”

            Only to the extent that anything can be objectively, i.e., scientifically, established. And everything is dependent on context. If you look just at a desktop computer, the fact that those things aren’t typically found in nature is a clue that there is more to the explanation of how it came to be. But if you have additional information, such as the fact that this particular computer is unplugged, sitting on the floor, and clearly blocking the path of a door which might otherwise be closed, you can reach the objective conclusion that you’re looking at a doorstop. At that point you can conjecture that there was another mechanism which had a goal of keeping that door open, recognized that the computer in the proper arrangement (sitting on the floor between the door and the jamb) would meet that goal, and arranged things appropriately.

            *


  5. Let me pose the situation this way:

    Is the Earth “computing” its orbit? Do soap bubbles and raindrops “compute” their spherical shapes? Does water “compute” its flat surface perpendicular to the local gravity field?

    In computer science there is a significant difference between a “computation” (which has steps and can be represented by a TM) and a physical system following dynamic physical laws. The brain, as a natural object, is almost certainly doing the latter, not the former.
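    To sharpen the contrast, here is what computing an orbit actually looks like: discrete steps updating numbers, which is quite different from a planet simply obeying gravity. (A crude sketch in made-up units: a unit-mass sun, G = 1, circular orbit at radius 1.)

    ```python
    # Numerically "computing" Earth's orbit in discrete steps
    # (semi-implicit Euler integration of Newtonian gravity).
    GM = 1.0
    x, y = 1.0, 0.0          # initial position
    vx, vy = 0.0, 1.0        # velocity for a circular orbit at radius 1
    dt = 0.001

    for _ in range(10_000):                  # ~1.6 orbits (period = 2*pi)
        r3 = (x * x + y * y) ** 1.5
        vx += -GM * x / r3 * dt              # update velocity from gravity
        vy += -GM * y / r3 * dt
        x += vx * dt                         # then position from velocity
        y += vy * dt

    print(f"radius after 10 time units: {(x*x + y*y) ** 0.5:.4f}")   # ~1.0
    ```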


    1. Excellent point Wyrd….. All you have to do now is take your hypothesis to the next level by dismantling the “magical” dynamic of physical laws and replacing that myth with a more reasonable explanation called panpsychism….


    2. Whether the solar system is computing Earth’s orbit is a matter of perspective. Not that I find it particularly productive to regard it as doing so. But I do find it productive, as does most of neuroscience, to see the brain as computing. In the end, this comes down to what is useful, what enables us to form more predictive models.


      1. “Whether the solar system is computing Earth’s orbit is a matter of perspective.”

        I think it’s a matter of physical fact. More importantly, it addresses a crucial distinction that matters when it comes to numerical models of the mind.

        “In the end, this comes down to what is useful, what enables us to form more predictive models.”

        Fair enough! (I do think the comparison to String Theory is apt. 🙂 )


      1. “I don’t think computer science somehow treats computation as being something other than physical systems following physical laws.”

        Obviously not. But the systems of physical laws involved with computing (complexity, information theory, computability, number theory) are quite different from the laws governing real world dynamical systems.

        It’s the difference between analog and digital, the difference between real numbers and integers, smooth versus discrete.


          “But the systems of physical laws involved with computing (complexity, information theory, computability, number theory) are quite different from the laws governing real world dynamical systems.”

          Where does that difference come from? I would say they both use the same physics. Something like information theory is just a shorthand for referring to physical processes. If you want to treat something like information theory as actually being something in itself rather than just a shorthand, okay. But it isn’t. As humans we seem to be keen to put labels on everything – to the point some people will actually start to see the labels as some sort of real space itself.


          1. “Where does that difference come from?”

            The natures of the systems involved.

            “I would say they both use the same physics.”

            At a highly reductionist level, sure, but at that level it’s hard to say anything about a system. For instance, at that level, there’s no difference between you and a rock or any other thing.

            “…information theory is just a shorthand…”

            Indeed, but it’s a very useful (and as far as we know, correct) shorthand that allows us to explore theories about information.

            Its usefulness is behind your cellphone operating correctly, for instance.

            “As humans we seem to be keen to put labels on everything…”

            Yes! It’s what allows us to talk about things.


            “At a highly reductionist level, sure, but at that level it’s hard to say anything about a system. For instance, at that level, there’s no difference between you and a rock or any other thing.”

            I think that’s SelfAwarePatterns’ subject for the post (or a large proportion of it). Unless I’m taking him incorrectly.


          3. No, you have it right. It was a part of it, although I didn’t go into detail. Whether any particular physical system is computational is inherently a matter of interpretation. Ironically, this was actually pointed out by opponents of computationalism such as Searle. Their contention was that since technological computers required interpretation to work, and brains could work without interpretation, the brain couldn’t be computational.

            The interpretation of what happens in a computer chip is made by engineers when they design the computer’s I/O systems and peripherals. But natural selection provides an interpretation for what happens in brains. That interpretation is manifested in the way the body signals the brain and responds to the brain’s output signals.

            Of course, interpreting a rock to be a computer requires an aggressive and sophisticated interpretation, one that would require its own computations. But the issue is that there’s no objective line on when an interpretation becomes unproductive. That said, if brains required as much interpretation to be computational as rocks do, then I doubt anyone would be tempted to consider them computational systems.


          4. I’m not sure I’m really understanding what in particular is being discussed. To paraphrase Pratchett, computation is really just a very fine grain way of saying where the good fruit is. We use computation as a means to such ends as not starving. Is it being asked whether computation is somehow at a different level to that?


          5. “Of course, interpreting a rock to be a computer requires an aggressive and sophisticated interpretation,”

            Which is why that interpretation is absurd. I’m sorry, but it is. At best the rock is a sensor in some other computing system.


          6. I agree that the interpretation is absurd. However, reality has frequently shown itself to be absurd by human standards. Absurdity, in and of itself, is not a reason to reject a proposition. You have to show why such an interpretation is logically invalid.


  6. Since we can map almost any decision-making process onto a computational one, what other options are there? Obviously, supernatural or spiritual processes are not worth considering, so what natural processes other than computational ones are possible? (I am suggesting that the idea that the brain works through processes that could be characterized as computational is not exactly earth-shattering.)


    1. It actually isn’t a controversial topic in neuroscience, or biology overall, where it’s not seen as anything particularly earth-shattering. I can understand the opposition from religious people (although some are fine with computation happening; they just think there’s more to it), but I was taken aback when I learned how controversial it is in philosophy and some segments of psychology.


  7. I agree that the Kevin Lande essay is well written. But I also think it misses the point completely, both about computers and brains.

    Mathematicians started building machines to perform calculations, because machines are more reliable than humans at performing repetitive tasks with accuracy. But, because mathematicians are so lazy, they started making their calculators more and more programmable and general-purpose. This generality of purpose also turned out to be important for the industrialization of microprocessors: it is certainly easier to produce millions of identical components, each of which can later be programmed to perform widely different tasks in widely different fields.

    Similarly, the interesting thing about the brain is not that it can unconsciously perform tasks that we have a hard time computing consciously. The interesting thing is that the brain can learn, that it has the same kind of lazy generality that a general-purpose microchip component has for changing its behavior. The brain is not a true blank slate, of course, but it is the most plastic part of a biological organism, something that can be programmed far beyond just intrinsic genetic behaviors.

    And of course, the interesting thing about the Turing machine is its universality. It does no computation particularly efficiently, but it can perform any calculation that mathematicians can formalize. If you don’t like Turing machines, just show that your system can simulate a Turing machine: it follows that your system is at least as general-purpose as the Turing machine. (But prepare for the possibility that your system can also be simulated with a Turing machine, a point which is made by the Church-Turing Thesis.)

    1. Hi Miles,
      I’m not sure I disagree with anything you say here, and I don’t know that Lande would either, except I’m not sure exactly what specifically you’re saying his essay missed.

      1. I think the crux of the matter might be the word “computation”. Like Lande implies, it is used as a label for different things in different contexts. But it doesn’t mean that we can do like Humpty Dumpty, and use the word to mean whatever we like. If you want to make your ideas clearer, it pays to be careful with words that have multiple meanings, and specify the contextual meaning to the reader up front.

        In the Church-Turing context, we are talking about epistemic limits: what mathematical functions are computable in principle; i.e. what can be known/proven, in principle, with the same reliability as 1+1=2. The only reason this thesis is famous is because the limits of decidability/computability become counter-intuitive if the system is capable of both multiplication and addition.

        The brain is mostly performing the role of a real-time control system. Such systems can also be studied in quite abstract ways, in control theory a.k.a. cybernetics; but control agents almost never operate in conditions of epistemic certainty. (And of course, when implementing our own real-time control systems, we sometimes use generic processor hardware, similar to that used in other machines we call “computers”, adding to the confusion.)

        1. (“The brain is mostly performing the role of a real-time control system.”)

          Brilliant intuition Miles!!! Control is the primordial, fundamental, targeted objective of all discrete systems. Why?? Because without at least a “sense” of control, the self-model, which is intrinsic to all discrete systems, has no sensation of self.

          Consciousness is the hardware that the discrete systems (software) of appearances run on.

        2. I get that the word “computation” can be a hang-up for many. Sometimes the phrase “information processing” is used instead, but my experience is that people have just as many hang-ups with that as with computation. (This despite the etymology of “information” as inputting forms into minds.)

          I’m open to an alternate word to describe the signalling chains and transformations that happen in nervous systems. However, finding a word that also can’t be applied to technological computers is harder than it might seem at first, as I noted in the post with “causalation.”

          And ultimately I think that’s the real issue, the functionalism, the implication that minds work according to the same physical principles as other systems, using electricity and chemistry, all of which ultimately can be described in purely mechanistic terms.

          1. All words are ultimately invented by someone (“causalation” ??). Once they enter general use and become popular, be prepared for semantic drift and bleaching.

            (You could also resurrect an obsolete word as technical terminology, to avoid confusion with contemporary use. How about “supputation”?)

          2. “Causalation” was actually an alternative to “information” that I used on a post about information processing. The name references my thinking that what makes the patterns we refer to as information informative is their causal history. Think tree rings or fossils.

            I was reminded of it recently when Christof Koch made the comment that he thought consciousness was more about the right causal structure than information processing. Which given the understanding I just described, is a distinction without a difference.

            Had to look up “supputation”. Interesting. Thanks! The brain is a supputer (supputator?). Of course, given its definition, so is the laptop I’m typing this on. Maybe signalation? “The brain is a signaler,” sounds weird though, and again it could refer to my laptop.

          3. I will make one more attempt to clarify the distinction between mathematical computation (what Church-Turing is about), and real-time control systems.

            The reason why a mathematician trusts the results of a computation, is because the computation is possible (in principle) to break down into elementary steps, each of which can be verified. This is essentially the same reasoning for accepting a proof of a mathematical theorem: it can in principle be broken down into elementary steps that can all be verified. For an idealist, a computation has the same timeless validity as any mathematical truth.

            A real-time control system, like the brain, makes timely decisions with limited resources and limited knowledge. It is separated from the system it is trying to operate by communication channels. Such channels can be characterized in terms of latency, bandwidth, noise, as well as costs in terms of some resources, etc. The study of communication-control systems has been multi-disciplinary, combining engineering, informed by biology and anatomy, as well as some more pure mathematics, like information theory and game theory. But there is no equivalent of Church-Turing thesis in control theory, the concepts just don’t map across.

            (Actual modern computers are of course quite complex machines, and their designs do incorporate many features of communication-control systems, just to keep everything working.)

          4. Thanks for the clarification. Some observations; I don’t know whether they will support or contrast with your thesis.

            A control system may be a simple one, such as an old school thermostat, or a more complex one. It seems like the more complex it gets, the more likely it is to be doing computations of some sort, although they may be the only computations the system can make. In other words, it may not be a general purpose computer.

            But we can always have a general purpose computer, engineered to be as close to that abstract concept of a computation machine as possible, be the controller. It just needs the right I/O systems. When we wire it into such a system, it isn’t simulating a controller, it is the controller. It’s the I/O system that makes the difference.
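            Here’s a minimal sketch of what I mean (the read_temp and set_heater hooks are hypothetical stand-ins for real I/O, names of my choosing): the program is an utterly generic loop, and it’s the wiring to a sensor and an actuator that makes it the controller.

            ```python
            import time

            # A bang-bang thermostat loop; read_temp/set_heater stand in for real I/O.
            def control_loop(read_temp, set_heater, setpoint=20.0, band=0.5):
                while True:
                    t = read_temp()           # sense
                    if t < setpoint - band:
                        set_heater(True)      # too cold: heat
                    elif t > setpoint + band:
                        set_heater(False)     # warm enough: coast
                    time.sleep(1.0)           # act in real time, not in batch
            ```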

            The main thing is, looked at purely in physical terms, the computer plus I/O systems remains a very complex controller. We may know the boundaries in an engineered system, because we know the pedigree of the components.

            But if we were on another planet doing an archaeological dig for alien technology and came across such a system, we might have a hard time figuring out where the separation is. We might even doubt whether there was a computer in there at all. The system might resemble our idea of a computational system, but it might diverge as well. It would inevitably be a matter of interpretation whether its components were close enough.

            Of course, our attempt to understand the control systems of animals is a sort of alien archaeological dig, and we’re faced with the same disadvantages, except exacerbated because biology never makes the clean tidy separations we make in technology.

  8. Let me try to get a consensus on some hopefully uncontroversial facts:
    1. The Church-Turing thesis is widely used among mathematicians and computer scientists as an explicit (typically) or implicit *definition* of “computation”.
    2. Wikipedia has an article on “analog computation”, and some early “computers” (as they were called by scientists and engineers using them) were analog.
    3. Modern computers made by Intel, HP, etc. do computation, in the Church-Turing sense.
    4. Joe Sixpack understands “computer” to mean something like a modern computer.

    1. Agree on all four.

      FWIW, under the C-T thesis, there are other formalisms of computing besides a TM. Lambda calculus, for one example, is a “parallel” definition of computation.
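      For a taste of that parallel formalism, here’s a toy rendering of Church numerals in Python (my sketch, not anything official): counting becomes pure function application, with no numbers anywhere in sight.

      ```python
      # Church numerals: the number n is "apply f n times".
      zero = lambda f: lambda x: x
      succ = lambda n: lambda f: lambda x: f(n(f)(x))
      add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

      def to_int(n):  # decode by counting applications of +1
          return n(lambda k: k + 1)(0)

      two = succ(succ(zero))
      three = succ(two)
      print(to_int(add(two)(three)))  # -> 5
      ```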

    2. I think you’re excessively minimizing point 2. Analog computers were once pervasive. A slide rule is an analog device. What made analog computers obsolete is that their operations could be approximated, to ever-increasing levels of precision, by digital computers.

      But what about these additions?
      5. Brains function using chemistry, electricity, and other standard physics.
      6. Computers predominantly work by selectively propagating and integrating signals.
      7. Brains predominantly work by selectively propagating and integrating signals.

      1. Well, I would admit to stacking the deck, except it wasn’t me, it was Joe Sixpack and the companies that built his computers.

        But given where Joe Sixpack stands, I would suggest that it’s needlessly misleading to declare “the brain is a computer”. Of course, it does take more words to say something less misleading.

        1. I usually avoid the specific word “computer” in relation to the brain as a concession to the current cultural meaning of a commercial computer with a CPU, RAM, hard drive, etc, more often sticking with the phrase “computational system”. Of course, a computational system is a computer, but the former doesn’t carry the same connotations. Those connotations were once different; “computer” once referred to a human performing calculations.

          That said, Joe Sixpack thinks Donald Trump is a brilliant businessman, so restricting ourselves to what he thinks has very limited efficacy.

    3. In the early days of electronic computing, Vannevar Bush was a pioneer of analog computers, and Norbert Wiener a proponent of digital computers. (I can recommend Wiener’s books, if you haven’t read any.) Real computers are physical, not mathematical, so they are ultimately just approximations of mathematical Turing machines. In digital computers, physical noise and its contribution to computation error is easier to estimate and control, since every step is effectively also a quantization step. (The Church-Turing context is about epistemic certainty, and computer error is unacceptable in that context.) Analog machines, and some quantum computers, can suffer from accumulation of noise during processing, since they usually wait until the end to do quantization. Still, there are some who hope/believe in “quantum supremacy”.

      1. I’ve heard of Wiener in relation to cybernetics, but didn’t know he was instrumental in the digital computer revolution. Definitely digital is better in technology since it makes the operations reliably deterministic (epistemically) in a manner that analog computers never could.

        I saw something a while back about analog computers being used for some new purposes, but I don’t recall the details. And George Dyson is urging everyone to consider analog again, although I don’t know that he’s made the case for that yet.

        It will be interesting to see what happens with quantum computing. Some people say it will provide near limitless computing power, others that it won’t work at all, at least beyond being very expensive classical systems. My guess is that it will be beneficial for certain narrow applications, but probably not anywhere to the extent it’s being hyped.

        As you said, real computation is physical, and that always means dealing with physical trade-offs.

        1. I’m a little hesitant to mention this, but I started reading an online tutorial on quantum computation and got bogged down just when they started talking about … wait for it …

          Multi.Dimensional.Vectors.

          *
          [semantic pointers, but orders of magnitude faster?]

          1. I think you’re confusing a way of describing something with the something.

            Your multi-dimensional vectors are well-known in physics; quantum theory is based on them, for one thing. The concept is known as phase space or vector space or configuration space, and it’s used to describe many, many dynamical systems.

            (Coincidentally, I’m planning a series of posts about phase space because I think it’s also a very handy tool for looking at life in general. I wrote an initial post long ago, but I want to expand on the topic because it’s so useful.)

          2. Wyrd, not sure what I’m confusing here. (You meant me, right?) Are you familiar with quantum computing? Do you know if they can do circular convolution (sorta combining two vectors) and involution (the inverse) in quantum systems? These operations are discussed for Semantic Pointers here.

            *

          3. Wyrd, the question isn’t could it be done. The question is could it be done more easily with a quantum computer than a standard digital computer, or a standard digital computer simulating neurons, given the multidimensional-vector nature of quantum computing.

            *

          4. We’re getting out of my area of knowledge here, but I would guess it depends on exactly which aspects of multidimensional vectors are involved. (As I mentioned, the concept is very general and applies lots of places.)

            The math of quantum physics is heavily based on vectors, and in that sense, no, quantum computers might actually be worse at implementing the algorithm than a digital computer. (Because QC, like quantum itself, is stochastic. It doesn’t deliver the precise answers we’re used to.)

            OTOH, if vector maths are built into QC as part of the implementation, then I would imagine QC could do it as easily as any computer, but what is meant by “easy” here? The problem isn’t tricky, so speed is about the only advantage, and it’s possible QC might find solutions faster than a digital computer, but it’s also possible the problem falls well within normal computing complexity.

    4. I think #1 is controversial. ‘What is computable?’ is like asking how far a car can travel. But people are taking it as a definition of the car itself. It discusses capacity, not the means.

      The distance they propose can be traveled can be covered by a TM. But this doesn’t make the TM the definition of computation, any more than saying a certain car can cover a distance means only that car can cover it.

  9. For those who are making a hard distinction between analog and digital computers, what do you make of the fact that our digital computers and analog signals are both defined by how they are moving electrons? And that the brain is moving electrons and molecules? Anybody who has spent time troubleshooting communication signals between digital interfaces can attest to the fuzziness of the analog/digital distinction. Are you saying that a computer is only a computer if it is a TM in the pure abstract sense with true binary bits for the information carrier? Ultimately this can be reduced down to the question of whether the fundamental constituents of reality are discrete or continuous, but that is far removed from the level of detail relevant to brain function.

    1. “Anybody who has spent time troubleshooting communication signals between digital interfaces can attest to the fuzziness of the analog/digital distinction.”

      Indeed. And you also know how much effort each stage along the way puts into re-shaping a discrete signal! Our digital systems work very hard to stay digital!

      “Are you saying that a computer is only a computer if it is a TM in the pure abstract sense with true binary bits for the information carrier?”

      In computer science, yes. With the qualification that it need not be binary. The requirement is that it be a discrete symbol processor rather than an analog magnitude analyzer. (The real requirement is that there is some TM that implements the hardware+software.)

      1. I can see this going in two opposing directions.

        If we consider that our CMOS circuits are operating across a myriad of different charges, why would this count as a computer rather than an analog magnitude analyzer? Is it just because the charges are almost always easily grouped into binary classifications due to the sharp bimodal distribution? Does the system’s response to this distribution need to be perfectly and consistently dichotomous in order to count as a computer?

        Conversely, if we consider that all electrical charges are constructed from discrete collections of the elementary charge (e), do we then consider every electrical system to be a computer because it is a Turing machine using the laws of physics as the rules and the elementary charge as the symbol? Would this mean that there are no such things as analog electrical systems?

        1. “If we consider that our CMOS circuits are operating across a myriad of different charges, why would this count as a computer rather than an analog magnitude analyzer?”

          Depends on how the circuits are biased. If it’s designed to operate at saturation and deal with discrete symbols (which we label “1” and “0”), then it’s one thing. If the circuit is biased to work with magnitudes (for instance, in a pre-amp), then it’s another thing.

          Use and intent are what matters.

          “Would this mean that there are no such things as analog electrical systems?”

          No, clearly not. Again, use and intent.

          This confusion (“is it analog? is it digital? it depends on how you look at it”; no, it doesn’t) focuses too much on the hardware and not the abstractions and theory involved.

          At heart, we’re talking about the distinction between the integers and the reals (symbols and magnitudes). Those are distinctly different worlds (although, admittedly, it’s a point many people never fully wrap their heads around).

          1. “Use and intent are what matters.”

            So something is only a computer if it was produced with the intent of processing abstract symbols? Of course, we are only able to do this by employing real world mechanisms which will not perfectly match the abstract symbols. So even a really sloppy design with high error rates would count because of the intended operation?

            Setting intelligent design aside, the ‘intent’ requirement necessarily excludes any naturally occurring systems from being defined as a computer. So how about a thought experiment. A tornado rips through a lab and by pure unfathomable chance executes all the etches and doping that a fabricator would normally run to create an IC that implements a PIC microcontroller. Is the resulting chip a computer?

          2. “So something is only a computer if it was produced with the intent of processing abstract symbols?”

            I suppose there’s wiggle room on the intent, although it’s hard to see how nature would evolve a computing system. A natural fission reactor, sure, but a computer? I don’t really see how.

            “So even a really sloppy design with high error rates would count because of the intended operation?”

            It’d be a bad one, but why wouldn’t it be?

            “[T]he ‘intent’ requirement necessarily excludes any naturally occurring systems from being defined as a computer.”

            I’d waive the requirement if someone can find a naturally occurring CPU with an instruction set running code. 🙂

            The thing is, that’s a highly artificial situation. Like finding a creature with a real jet engine in nature. (Yes, I know many creatures use some form of propulsion. I think there’s a beetle with a rocket. Not the same thing. Intake, compression, fuel, ignition, thrust. Not natural.) For that matter, why not talk about natural combustion engines? Some things are pretty clearly artifacts.

            “Is the resulting chip a computer?”

            Obviously, but the reason it’s a computer is the fabricator, not the tornado. The tornado didn’t design it.

          3. If you’d waive the ‘intent’ requirement if a naturally occurring instantiation was discovered, doesn’t that mean it isn’t actually a requirement in the first place? If that requirement were waived, what would be the criteria for a computer?

            I also don’t follow why the fabricator is the reason it’s a computer in the thought experiment. What if the result of the tornado was a chip running a unique architecture that had never previously been designed?

          4. “If that requirement were waived, what would be the criteria for a computer?”

            I’ve explained that several times in this comment thread.

            “What if the result of the tornado was a chip running a unique architecture that had never previously been designed?”

            Through the agencies of a fabricator or just by combining random elements?

            You’re talking about odds so astronomically high, especially in the latter case, they can reasonably be considered impossible. (I’d be willing to classify the latter possibility as physically impossible.)

            My bottom line, I guess, is that designed systems show intent. It may not be obvious, but designed systems have intent. I reject the notion of a “natural computer” — I don’t consider that a coherent idea — so from my perspective any computer will have intent.

          5. Refrigerator Moment: Upon reflection, I would put it that ‘intention’ is a property of computers, not a requirement. For reasons discussed above.

          6. Ok. I’m done poking. I don’t have a dog in this fight and see it as little more than a semantic quibble, but I do still enjoy testing the limits of our definitions to see how they hold up.

          7. Oh, very much likewise. Sharpens the mind, too!

            Some of it is semantic, but at its heart it’s about what is possible in the physical world. Figuring that out often requires recognizing important distinctions.

  10. Hi all,

    Have been busy so I’ve missed a few threads of conversation that I find interesting.

    From my point of view, a computer is anything that can be usefully viewed as something that processes information, by any means, as long as it’s not doing anything uncomputable in the Church-Turing sense. This means that even analog computers are computers, because analog computers can be simulated to any desired precision by Turing Machines and so are not computing anything TMs cannot.

    In particular, Paul Torek said:

    > But a TM and a von Neumann architecture modern computer are both discrete systems, while the brain, as far as we know, is not. It’s an analog system whose function is not known to be equivalent to any discrete system.

    So, the brain is not a TM or a von Neumann architecture computer but it is still a computer. It isn’t doing anything that a TM (or any computer) could not do in principle. In particular it is a computer because it is useful to regard information processing as its primary role, in the same way that it is useful to regard the heart as a pump.

    Wyrd Smythe asked whether the Earth computes its orbit or a resistor network computes Ohm’s law. I would say that if you find it to be useful to regard these as computations, then you certainly can. But I doubt it is so useful. On my view, there is no fact of the matter on whether any system is actually computing anything. This is like asking whether the earth is actually a water pump because it contrives to move water around in ocean currents or in the water cycle of evaporation/condensation. That’s perhaps a daft way of looking at it, but even if there are no hard and fast criteria for defining what is and what is not a pump, it remains useful and true to say that the heart is a pump.

    Wyrd Smythe also makes the point that:

    > They operate according to completely different principles and don’t, in computer science terms, “compute” their outputs.

    I wouldn’t take computer science, which in practice primarily deals with issues around conventional digital computers, to be concerned with defining “computer/computation” on very different substrates. By the traditional computer science understanding of computer/computation, I’m not sure that a quantum computer would count as a computer either, since the algorithms it enables cannot be broken down into discrete steps that could be performed by an ordinary chip. Even so, quantum computers do make use of algorithms, so I assume you would agree that they are computers. Your issue with the earth or the circuit seems to be that they jump straight to the answer without availing of such an algorithm. Yet you could certainly construct an algorithm which used such steps (“measure the voltage across A and B and then multiply the result by the distance of earth from Mars”). So perhaps the single-step “computation” of these analog computers should be analogised to the single-step “computation” of a single atomic CPU instruction. There’s still an algorithm, just a trivial single-step algorithm.

    1. Well said DM.

      “This means that even analog computers are computers, because analog computers can be simulated to any desired precision by Turing Machines and so are not computing anything TMs cannot.”

      I would just add to this, in case anyone decides to focus in on the precision part, that an analog computer can only ever approximate the operations of another analog computer, even one of the same model due to manufacturing variances. Indeed, one of the issues with analog computers is that they won’t replicate their own operations done in the past with absolute precision.

      So, provided there’s enough capacity for effective precision, you don’t lose any magical qualities in the remaining imprecision. You’re just importing the original system’s imprecision. And most of us are aware that imprecision can already be a problem in a digital system working with real numbers. The reality is you can’t ever eliminate it, only reduce it to the point that it’s not relevant anymore.
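      That imprecision is easy to demonstrate with ordinary IEEE 754 floating point (a quick sketch, nothing exotic):

      ```python
      import math

      # Ten dimes don't quite make a dollar in binary floating point.
      total = sum(0.1 for _ in range(10))
      print(total)                     # 0.9999999999999999
      print(total == 1.0)              # False

      # The practical fix is bounding the error, not eliminating it.
      print(math.isclose(total, 1.0))  # True
      ```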

      1. “The reality is you can’t ever eliminate it, only reduce it to the point that it’s not relevant anymore.”

        Except that chaos theory tells us it’s always relevant in real world systems.

        1. It may be relevant in trying to precisely predict the long range outcomes in an analog system, but I can’t see how it’s relevant for a digital system effectively recreating the capabilities of an analog system, unless you’re going to insist there’s magic in the chaos.

          1. “It may be relevant in trying to precisely predict the long range outcomes in an analog system, but I can’t see how it’s relevant for a digital system effectively recreating the capabilities of an analog system,…”

            Your first clause answers the second clause.

          2. You agree it’s relevant with analog systems, but you can’t see how it’s relevant with a digital system recreating an analog system.

            The problems with digital simulations are well-known. Why are we arguing about this? You have to know digital is not the same as analog.

          1. I’m saying that to the extent we’re computers, we’re bad ones. Which puts some pressure against saying “the brain is a computer” because it’s misleading.

            I think that there are better ways to express the grain of truth contained in the “brain is a computer” statement. Something like: brains were selected primarily for their ability to retain and integrate semantic information – a function computers are also designed to handle.

          2. It seems like you’re judging it in terms of its ability to be a general purpose computer. And you’re right, it’s useless for that. But my electronic watch is useless as a calculator, my phone is terrible for composing blog posts, and the old style word processing appliances were worthless for anything but word processing, but they were all still computers.

            Brains are tightly adapted to their purpose, being the control center of an animal, enhancing its ability to survive, find energy, and reproduce. Like those old word processing appliances, it can’t be reprogrammed to run a video game. But it makes up for its deficiencies in precision and performance, compared to technological computers, with massive parallelism, which actually makes it more powerful than all but the largest high performance computing clusters. It’s a fantastic computational system, for what it needs to do.

            That said, as I’ve said elsewhere in this thread, I do usually avoid the specific word “computer” in relation to the brain because people do tend to associate it with modern devices. Instead I usually prefer “computational system” or “information processing system”. But that’s a concession to current cultural conceptions, not because I don’t think the brain is a type of computer.

          3. All of those thoughts only work if you continue using “computer” and “computation” in the broad sense which is agnostic between analog vs. Church-Turing-style computation. That broad sense is antiquated. Human beings are good at analog processing for human-life-related tasks, that’s not in dispute.

    2. “This means that even analog computers are computers, because analog computers can be simulated to any desired precision by Turing Machines and so are not computing anything TMs cannot.”

      No, that’s false. Chaos theory, for one.

      “So, the brain is not a TM or a von Neumann architecture computer but it is still a computer.”

      That’s an assertion with no proof (or much hard evidence to support it).

      “It isn’t doing anything that a TM (or any computer) could not do in principle.”

      Another unfounded assertion. No computer can do what I can do, nor come even close. There is no basis (other than belief) for assuming they ever will.

      “On my view, there is no fact of the matter on whether any system is actually computing anything.”

      A view I absolutely cannot wrap my head around. Computation is a well-defined concept, not a matter of interpretation.

      “This is like asking whether the earth is actually a water pump […] That’s perhaps a daft way of looking at it,”

      Yes! Exactly!! And yet rocks are “computers.” Daft is exactly right!!

      “I wouldn’t take computer science, which in practice primarily deals with issues around conventional digital computers, to be concerned with defining “computer/computation” on very different substrates.”

      False. Computer science pre-dates digital computers and the science is exactly “concerned with defining ‘computer/computation’ on very different substrates.” The whole point of a TM (or lambda calculus or other formalisms) is to explore computation in a substrate-agnostic way.

      There’s a popular line CS students often hear on their first day: “Computer Science is no more about computers than astronomy is about telescopes.”

      (I’ve always wanted to add: “That is to say, it’s a little about computers just as astronomy is a little about telescopes.”)

      “By the traditional computer science understanding of computer/computation, I’m not sure that a quantum computer would count as a computer either…”

      Of course it does. It’s an active sub-field in CS. The physical principles are different, but as you said, they use algorithms to accomplish their goal.

      “Your issue with the earth or the circuit seems to be that they jump straight to the answer without availing of such an algorithm.”

      Exactly. The difference between a calculation and the evaluation of a physical quantity.

      “Yet you could certainly construct an algorithm which used such steps”

      Sure, one can always create a numerical model. That doesn’t mean your numerical model is the thing it models. (Again: Why do you believe the mind is already a numerical model? Only if that is true can some other numerical model deliver identical results.)

      “There’s still an algorithm, just a trivial single-step algorithm.”

      Nope. The difference, calculating with numbers versus measuring a physical quantity, still remains a key distinction.

      1. Hi Wyrd,

        > No, that’s false. Chaos theory, for one.

        Digital computers can do anything analog computers can do, including model chaotic systems. Small changes in initial parameters yield wildly different results in digital computers just as they do in analog systems. Sure, it’s possible to get consistent results with digital computers in a way it isn’t with analog computers, just by choosing not to vary initial conditions, but I don’t think that’s any sort of limitation; rather, it makes them more useful. If you want chaotic inconsistent results all you need to do is perturb the parameters very slightly.
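        A quick sketch of the point, using the textbook logistic map (the parameters are my arbitrary choice): the digital program is perfectly deterministic, yet a nudge of 1e-10 in the initial condition yields a completely different trajectory.

        ```python
        # Logistic map at r = 4.0, deep in the chaotic regime.
        def logistic(x0, steps, r=4.0):
            x = x0
            for _ in range(steps):
                x = r * x * (1 - x)
            return x

        print(logistic(0.3, 50))          # one trajectory
        print(logistic(0.3 + 1e-10, 50))  # same code, wildly different result
        ```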

        > That’s an assertion with no proof (or much hard evidence to support it).

        I’m saying that to me, according to my definition, to be a computer is like to be a pump, that is, to be a computer is to be something which is best viewed as a machine for processing information. I think this is clearly what the brain has evolved to do. I’m not sure what a proof of this self-evident fact would look like. How would you prove that the heart is a pump, if not to show that its purpose is to move a fluid around? Do you deny that a brain’s main purpose is to process information?

        I don’t think you’re taking issue with an assertion requiring evidence. I think you’re taking issue with my definition of a computer. You may also want to push the hypothesis that the brain is processing information in ways in which a Turing Machine could not simulate, and so that it is “computing” an uncomputable function, which would make it a hypercomputer rather than a computer.

        > Another unfounded assertion. No computer can do what I can do, nor come even close. There is no basis (other than belief) for assuming they ever will.

        Well yeah, I’m not giving arguments for everything all the time. Sometimes I’m just stating my claims, and can follow up with argument if you like. No computer can do what you can do, but I rather suspect that it ought to be possible in principle, not least because the laws of physics appear to be computable. If we simulated a brain, the simulated brain should be able to do whatever you can do. That’s very difficult in practice but I don’t see any reason to suspect that it should be impossible. There are arguments from the likes of Roger Penrose, I guess, but I personally don’t find them convincing — do you?

        I also have a suspicion that even if the laws of physics were uncomputable (e.g. if there is true randomness and objective wavefunction collapse), then that uncomputability would not be particularly useful or relevant to what the brain is doing. I’m not getting into that right now, as that would be a whole blog post by itself.

        > Computation is a well-defined concept, not a matter of interpretation.

        It’s a well-defined abstract concept but that doesn’t mean that there is an objective fact of the matter about whether a particular concrete physical system is instantiating a particular computation. 5 is a well defined concept, but if I give you 5 apples, there is no objective fact of the matter about how many objects I have given you, because you can interpret the apples as the objects to count or you can interpret the atoms as the objects to count or any other way to break it down. It’s the mapping of the abstract concept to a physical instantiation that is open to interpretation, not the concept itself.

        > Computer science pre-dates digital computers and the science is exactly “concerned with defining ‘computer/computation’ on very different substrates.”

        Yes and no. It doesn’t assume any particular substrate vis-a-vis vacuum tubes/electronics/virtual machines, but it does tend to assume that certain operations are atomic and that certain other operations are not. Addition is usually taken to be atomic, factorisation is not. If we had a machine that through some analog process could factorise numbers in one step, then we could build algorithms using that machine. The study of such algorithms could be happily folded into computer science (as is happening now with quantum algorithms), but the initial traditional computer science assumption that factorisation is not atomic would not be grounds to say that using such factorisation hardware means you’re not engaged in computation.

        1. “Digital computers can do anything analog computers can do including model chaotic systems.”

          I’m not sure how we got onto this tangent. I agree a good enough digital simulation can produce numbers that are good enough for most real-world use.

          The crucial distinction is between a real-world process and a numerical simulation of that process. A real-world process does things. A numerical simulation turns input numbers into output numbers.

          My example is a laser versus numerical simulations of a laser. The latter are very good, but they don’t produce photons. Further, you can turn a milli-watt “laser” into a mega-watt “laser” just by moving a decimal point.

          “Do you deny that a brain’s main purpose is to process information?”

          I think it’s reductionist to the point of saying little. Sure, that’s a crucial part of it, but the brain does so much more than process information.

          “I think you’re taking issue with my definition of a computer.”

          Well, yes, that too. 😀

          “…which would make [the brain] a hypercomputer rather than a computer.”

          If you believe the brain is a hypercomputer, you’d believe the brain is something very special, indeed!

          “If we simulated a brain, the simulated brain should be able to do whatever you can do.”

          Including not believing it’s possible? 🙂

          It all depends on whether a numerical simulation produces output numbers that indicate a consciousness in the algorithm itself. They may only indicate the simulation is “alive” and “functioning.” It may have no volition or will.

          “[T]hat doesn’t mean that there is an objective fact of the matter about whether a particular concrete physical system is instantiating a particular computation.”

          It’s not entirely a matter of interpretation, though. Computation requires an engine with an instruction set and code (instructions; an algorithm). If a system is computing, those should be pretty clear. They are very clear in everything we normally call a computer.

          “…if I give you 5 apples…”

          You see no objective difference between apples and atoms?

          I agree the mapping of a concept to a physical instance is open to interpretation. I’m saying mapping the concept of a computer to the brain doesn’t make much sense to me. It takes interpretation beyond the snapping point.

          “[T]he initial traditional computer science assumption that factorisation is not atomic would not be grounds to say that using such factorisation hardware means you’re not engaged in computation.”

          Right, but we have “a machine that through some analog process could factorize numbers in one step,” so the factorization part is no longer a computation.

          Consider a real-world need for truly random numbers in encryption. There is no algorithm that generates random numbers, there are only pseudo-random number generators. So systems that need truly random numbers must use hardware to produce truly random numbers. (I love the one that uses lava lamps!)
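          Python’s standard library happens to embody the distinction, if an example helps (both modules are real; the specific calls are just illustrative):

          ```python
          import random, secrets

          # Pseudo-random: an algorithm, so seeding it makes it exactly repeatable.
          random.seed(42)
          print(random.random())       # identical on every run

          # OS entropy pool, fed by hardware events: unseedable by design.
          print(secrets.token_hex(8))  # different on every run
          ```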

          Seeing addition as (relatively) atomic concedes that the underlying hardware (the binary adder circuit) is not itself a “calculation.” Likewise with a factorization circuit.

          (Which would be conceivable, because a finite series of products has an immediate value, so presumably it could work the other way around. That’s exactly why factorization is potentially a much faster operation in QC.)

  11. Replying to Wyrd at the top level as columns are getting a bit narrow…

    Hi Wyrd,

    > The latter are very good, but they don’t produce photons.

    Right, and a simulated hurricane isn’t wet, as per Searle’s well known argument. So, on biological naturalism (Searle’s position), while the brain may process information, it also produces consciousness somehow. On Mike’s position and mine, it is the processing of information itself that produces (or is) consciousness. On Searle’s, and perhaps yours, the two things are separable — not sure Searle would say you can have consciousness without computation (that way lies panpsychism, perhaps), but Searle is certainly suggesting that you can compute as a brain does without having consciousness/understanding.

    So, yes, if Searle is right, then the brain is much more than a mere computer. But it is not obvious to me that consciousness/understanding etc is a physical thing analogous to wetness or the emission of photons. To me, consciousness/understanding is more of a structural feature of certain information processing tasks, akin to complexity or recursion or modularity. A simulation of a complex/recursive/modular process is itself complex/recursive/modular at some level of description. A simulation of a wet thing is not itself wet. Whether consciousness is more like wetness or complexity is up for debate, and as long as it is up for debate such arguments only serve to illustrate a position rather than to establish that it is correct. So long as we’re just clarifying our respective positions, this difference perhaps accounts for our different points of view.

    > If you believe the brain is a hypercomputer, you’d believe the brain is something very special, indeed!

    Yes. I don’t personally believe that the brain is a hypercomputer. But Roger Penrose appears to think it is. I’m just laying out the different possible positions on this. If Penrose is right, then strictly speaking the brain is not a computer but a hypercomputer. But I don’t think Penrose is right.

    > Including not believing it’s possible?

    Yes.

    > Computation requires an engine with an instruction set and code (instructions; an algorithm)

    I disagree, at least on my conception of computation. A full adder digital circuit is a computer (I would say) and it doesn’t really have an instruction set or code.
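    For anyone who hasn’t met one, here is the standard full-adder logic written out in Python (my rendering of the textbook circuit): note that there’s no instruction set anywhere, just fixed wiring.

    ```python
    # One-bit full adder: the "algorithm" is baked into the gate structure.
    def full_adder(a, b, cin):
        s = a ^ b ^ cin                   # sum bit
        cout = (a & b) | (cin & (a ^ b))  # carry-out bit
        return s, cout

    # Chain four of them to add 4-bit numbers (least significant bit first).
    def add4(xs, ys):
        carry, out = 0, []
        for a, b in zip(xs, ys):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    print(add4([1, 0, 1, 0], [1, 1, 0, 0]))  # 5 + 3 -> ([0, 0, 0, 1], 0), i.e. 8
    ```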

    > They are very clear in everything we normally call a computer.

    Usually, yes, because everything we normally call a computer is intentionally designed by humans to be understood and used by humans. A naturally occurring (i.e. evolved) computer would not need to be easily understood by humans. But I would say artificial neural networks are clearly computers and yet they are not at all easily understood.

    > You see no objective difference between apples and atoms?

    No, I said there was no objective fact of the matter of how many objects I had given you, because what counts as an object is open to interpretation. Likewise, given any physical system of unknown provenance (e.g. an evolved, ancient or alien computer), what counts as a step in an algorithm and what that algorithm might be trying to compute is open to interpretation. It’s not usually as open to interpretation in artificial computers of known provenance as we can always appeal to the intentions of the designers to pick a canonical “official” interpretation. But intentions are not magic and can’t be said to bestow any objective qualities on a physical system that would not have been there had the system remained physically identical with different intentions.

    Mark Bishop uses the example of a table for the inputs and outputs of a logic gate, expressed in voltages.

    i1 | i2 | o
    0v | 0v | 0v
    0v | 5v | 0v
    5v | 0v | 0v
    5v | 5v | 5v

    “So, what function is this computing?”, Mark Bishop would ask. You might be inclined to think it’s computing AND, interpreting 0v as false and 5v as true. But you could also interpret 0v to mean true and 5v to mean false, in which case it’s computing OR. There simply is no objective fact of the matter regarding which function it is computing, despite the fact that the functions themselves are perfectly well defined. You have to be told the intentions of the designers to make that call, but to say that those intentions are somehow inherent in the physical gate itself seems mystical to me.
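    The point can even be made mechanical (a little sketch of mine, following Bishop’s example): hold the voltage table fixed, swap the encoding, and the very same gate reads as AND or as OR.

    ```python
    # One physical behavior: volts in -> volts out.
    gate = {(0, 0): 0, (0, 5): 0, (5, 0): 0, (5, 5): 5}

    def read_as(encoding):  # encoding: volts -> truth values
        return {(encoding[a], encoding[b]): encoding[out]
                for (a, b), out in gate.items()}

    print(read_as({0: False, 5: True}))  # the truth table of AND
    print(read_as({0: True, 5: False}))  # same gate, the truth table of OR
    ```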

    > so the factorization part is no longer a computation.

    You seem to be saying that an algorithm requires at least 2 steps, and that 1-step algorithms are not algorithms, or computations.

    Fair enough, but I would rather say they are trivial algorithms than require an arbitrary 2 step minimum. In any case, whatever the brain is doing, it’s not jumping right to the answer, so perhaps we can abandon this line of discussion.

    > So systems that need truly random numbers must use hardware to produce truly random numbers.

    Yes, so if these systems really do generate genuinely random numbers, then I would have to grudgingly admit that they are *technically* hypercomputers. However (a) I doubt that randomness is really more useful than very good pseudorandomness (i.e. in environments where no agents know how to predict the next number) and (b) I doubt that the randomness of QM is actually random. Personally, I favour the Everettian interpretation, where there is no real randomness because all outcomes take place, and so a regular TM can actually compute the same functions by just evaluating all outcomes. It is true that this TM is not useful for helping you pick out a single course of action among many, but it is technically computing the same function as the (ensemble of) random systems.

  12. Once more unto the breach,… 🙂

    Maybe we should move over to my blog if we want to get into this and not test Mike’s patience.

    “Right, and a simulated hurricane isn’t wet, as per Searle’s well known argument.”

    Yep, the original numerical simulation argument.

    “On Searle’s, and perhaps yours, [information processing and consciousness] are separable…”

    Yes. To me this is the more likely bet. There is no currently understood path from information processing to consciousness. Equating them is a large assumption.

    It’s one thing to say, “Well, obviously consciousness involves information processing.” Obviously it does.

    It’s quite another to work the other way from I.P. to consciousness when there are no examples suggesting that makes any sense. With the exception of occurring in the brain, no other form of I.P. is conscious or anywhere close to it.

    “But it is not obvious to me that consciousness/understanding etc is a physical thing analogous to wetness or the emission of photons.”

    Consciousness (whatever it is) emerges from the behavior of the brain. Wetness emerges from the behavior of H2O. Wind emerges from the behavior of Earth, the Sun, and the atmosphere. Laser light emerges from certain materials in certain energy configurations.

    I see a lot of parallels between complex physical systems and emergent behavior.

    “To me, consciousness/understanding is more of a structural feature of certain information processing tasks, akin to complexity or recursion or modularity.”

    Dude, I gotta be honest, to me that sounds like a lot of hand-waving that doesn’t really say anything or mean anything.

    It does seem to say you believe the mind is a mathematical abstraction, which would make it the one mathematical abstraction in all of creation. I find that a huge ask.

    “A simulation […] is itself complex/recursive/modular at some level of description.”

    Yep. The only way a numerical simulation of the mind works is if the mind is already a numerical simulation. I don’t believe anything in nature is a numerical simulation.

    “[A]s long as it is up for debate such arguments only serve to illustrate a position rather than to establish that it is correct.”

    They can do more than that! They can reveal the coherence and logical consistency of the arguments.

    Mike and I have been around this enough times that I am familiar with your shared views. I’m just not sure you realize how big an ask some of this stuff really is. I’d actually be really interested in discussing that aspect, but to do that we need to agree on how big the problem is.

    “I disagree, at least on my conception of computation.”

    Yes, I understand you and Mike insist on defining it your way. 🙂

    “A full adder digital circuit is a computer (I would say) and it doesn’t really have an instruction set or code.”

    I’m sorry, but you’d be wrong according to digital designers and computer scientists. A full adder is no more a “computer” than a transistor radio is. (Or, if you insist, then a transistor radio is a computer and so is any electric circuit. I guess my house wiring is a “computer” then.)

    “A naturally occurring (i.e. evolved) computer would not need to be easily understood by humans.”

    Why not? We’re pretty good at understanding physical stuff. Our materials science is awesome, we’re also awesome at optics and electronics. Why would a natural computer be so opaque to us?

    “I said there was no objective fact of the matter of how many objects I had given you, because what counts as an object is open to interpretation.”

    Yes, I know. My point was that if you hand me five apples you pretty clearly handed me five apples and not five groups of atoms. Nor did you hand me five groups of apple seeds or apple stems or any other unlikely interpretation.

    IOW, some “interpretations” are reasonable while others are, to borrow your word, daft.

    Regarding the logic table, you know that:
    ¬(A ∧ B) = (¬A) ∨ (¬B),
    and:
    ¬(A ∨ B) = (¬A) ∧ (¬B),
    right?

    They’re the same truth table, which is the objective fact involved. (If you’re confused: ¬0v = 5v; ¬5v = 0v)

    “You seem to be saying that an algorithm requires at least 2 steps, and that 1-step algorithms are not algorithms, or computations.”

    That’s a consequence of what I’m saying, yes. The real difference is an algorithm-based process versus taking a measurement of a physical system. Any algorithm will have multiple steps; taking a measurement is a single step.

    “Yes, so if these systems really do generate genuinely random numbers, then I would have to grudgingly admit that they are *technically* hypercomputers.”

    As far as we’re concerned, they do, but they’re pretty weak hypercomputers. Even your laptop is likely using hard drive access times and other physical system parameters to generate random numbers. Most hypercomputers are capable of “magical” tasks and aren’t feasible in reality.

    1. Word, you constantly refer to numerical simulations and mathematical abstractions. Why not just simulations and abstractions? Is a baseball game a mathematical abstraction? Is a fistfight? A conversation? Aren’t “consciousness” and “information processing” just similar abstractions, just words to describe patterns in what physical things are doing?

      I suspect when you use the term “consciousness”, you are referring to human consciousness, but that’s like using the term “information processing” to refer to what IBM’s Watson is doing. To paraphrase what you said above, this comes out as “with the exception of Watson, no other form of processing is information processing or anywhere close to it.” If, in fact, consciousness is information processing, human consciousness would be just an example of one kind of very complex consciousness. No one has created anything anywhere close to doing what the brain does because it’s the most complicated processor in the universe. That doesn’t mean we can’t create simpler consciousness.

      *

      1. “Word, you constantly refer to numerical simulations and mathematical abstractions. Why not just simulations and abstractions?”

        It’s Wyrd, thanks.

        Because there are other kinds of non-numerical simulations and non-mathematical abstractions. What’s being discussed here (by me, anyway) are numerical simulations (of a mind) and mathematical abstractions (likewise).

        1. Sorry Wyrd, I missed the autocorrect until it was too late. My kingdom for an edit button!

          Hmmm. Not sure what a numerical simulation of a mind would be.

          On the other hand, if a mind is a collection of information processing capabilities, and an information processing capability can be expressed as a function (input -> output), and a function is multiply realizable, then a simulation might be something that performs the same functions with a different physical substrate. But then, if the mind is just the abstract collection of functions, the new substrate performing the same functions would count as a copy of the mind, not just a simulation of it.

          So clearly you are saying some part of the above is not correct. Which part[s]?

          *

          1. “My kingdom for an edit button!”

            There are WordPress plug-ins that provide it, but WordPress.com doesn’t supply them (at least not at my subscription level). If you ever see a typo you’d like corrected, feel free to email or Twitter DM me.

          2. “Not sure what a numerical simulation of a mind would be.”

            Well, that’s exactly my point. 😀

            Consider the one fact we know (and hopefully all agree upon): The Mind (whatever it is) arises from the Brain. We have no clue how, but we know that it does. So we have this fact:

            Brain ⇒ Mind

            The assumption Mike, DM, you(?), and others make, is that the brain is some integrated combination of hardware and software:

            Brain_hw + Brain_sw ⇒ Mind

            This assumption allows the possibility that:

            Comp_hw + Comp_sw ⇒ Mind

            Which is the desired goal: running mind software on computer hardware.

            The assumption requires that brain software actually exist and that running such software on any given platform must result in a mind.

            All software is numbers. So mind software is necessarily a numerical simulation of what a brain does.

            “… if the mind is just the abstract collection of functions, the new substrate performing the same functions would count as a copy of the mind, not just a simulation of it.”

            We don’t know what the mind is, but I’d bet it’s more than an abstract collection of functions.

            But, yes, if the belief the mind is an algorithm is correct, then that algorithm can be extracted from the brain hardware and made to run on other hardware. So long as it was essentially the same algorithm, then you would just have copies.

            In contrast, consider software that emulates a neural network. That’s a simulation of a physical system. The algorithms involved are very different; they are simulation algorithms.

            My understanding is there hasn’t been much progress over the years trying to replicate the functionality of the brain — that’s the original approach. OTOH, simulating the neural net has made considerable progress with deep learning networks.
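            To make the contrast concrete, here’s a toy of the simulation approach (made-up weights, purely illustrative): the code models a neuron’s signal integration, not any mind-level algorithm.

            ```python
            # A crude model neuron: weigh incoming signals, fire past a threshold.
            def neuron(inputs, weights, threshold=1.0):
                activation = sum(i * w for i, w in zip(inputs, weights))
                return 1 if activation >= threshold else 0

            # Three presynaptic signals; the excitatory ones carry it over threshold.
            print(neuron([1, 1, 0], [0.9, 0.3, -0.5]))  # -> 1
            ```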

  13. After much consideration of these discussions I’m taking a new tack to explain my understanding. The main problem I see here is keeping definitions straight. I need to point out here that if someone is trying to make an argument, it is not good to attack their definitions. To say their argument is bad requires accepting their definitions and explaining how their argument fails even using those definitions.

    That said, I will state that the brain is not a computer. In fact, the brain is a qomputer, and more to the point, the brain is not only a komputer, it is also a pkomputer. [Didn’t see that coming, did ya?]

    So, a qomputer is anything that takes input and produces output. Guess what? That’s everything physical. A rock is a qomputer.

    A komputer is any qomputer that produces output which is valuable with respect to the input. The “valuable” part here takes a whole lot of unpacking which involves teleology and/or teleonomy. That unpacking constitutes an entire research program. Brains, bacteria, viruses, possibly whirlpools, are komputers.

    A pkomputer is a komputer which is capable of one or more komputations that include a symbolic sign as some intermediate part of the komputation. Brains are pkomputers.

    A computer is what Wyrd has been describing in this discussion. Computers are pkomputers.

    A mind is a repertoire of pkomputations.

    So to recap: Computers are a subset of pkomputers, which are a subset of komputers, which are a subset of qomputers. Brains too are a subset of pkomputers, …
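
    Purely as illustration, the nesting could be rendered as a toy class hierarchy (hypothetical names, obviously): each class is-a its parent, mirroring the subset relations.

        class Qomputer:             # anything that maps input to output (even a rock)
            def process(self, x):
                return x

        class Komputer(Qomputer):   # output is "valuable" with respect to input
            pass

        class Pkomputer(Komputer):  # some komputation involves a symbolic sign
            pass

        class Computer(Pkomputer):  # the engineered devices under discussion
            pass

        class Brain(Pkomputer):     # a pkomputer, but not a Computer
            pass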

    *

  14. Hi Wyrd,

    I’m not sure that I agree that I am merely assuming that the information processing performed by the brain is consciousness. It feels to me like I have reasons for this view and that I have considered alternatives. Calling it an assumption suggests that I don’t/haven’t.

    A problem with looking for other examples of I.P. leading to consciousness is that there would be no way to recognise it. It is necessarily true that we can only ever have one example — our own minds — because consciousness is a private phenomenon. That said, I’m inclined to agree that it is appropriate not to regard any systems other than biological ones as conscious at present, if only because we have no other examples of information processing as adaptable and competent as that performed by brains.

    > Consciousness (whatever it is) emerges from the behavior of the brain

    I guess, but it’s an open question whether this is because of the physical particulars of what the brain is doing or more of an abstract causal structure that could be realised on other substrates. A simulated ocean wave is not wet like a real ocean wave, but it does propagate across the simulated sea in much the same way as an ocean wave propagates on the real sea. Whether consciousness is like the pattern of propagation or like the wetness is the point of contention. So all I’m saying is that the analogy to the wetness of water is a controversial one. It’s a great analogy for getting your view across, but it doesn’t really work to persuade the other side.

    > Dude, I gotta be honest, to me that sounds like a lot of hand-waving that doesn’t really say anything or mean anything.

    Thanks for your honesty. Not sure how I can clarify other than to say that I think that consciousness is an artifact of the patterns rather than something physical produced by the stuff.

    > It does seem to say you believe the mind is a mathematical abstraction, which would make it the one mathematical abstraction in all of creation.

    Well, here’s where I’m all out on my own, in that I do actually think the mind is an abstract structure rather than something that is literally being physically produced by the brain. You lose me where you say that this would make it the one mathematical abstraction in all of creation though. A typical simple orbit has an elliptical shape. The ellipse it traces out is a mathematical abstraction. Patterns abound in nature. No idea why you would think the mind is the only one.

    > A full adder is no more a “computer” than a transistor radio is.

    I would say it is a computer because it has the job of processing inputs and producing outputs. It is clearly not a general purpose computer in the usual sense. Neither is a simple (non-programmable) calculator. But if I were pressed to answer a philosophical question about whether a calculator was really engaging in computation, I would be inclined to say yes. The fact that the algorithms it follows are implemented in circuits rather than software does not strike me as metaphysically important. Similarly, though my Nintendo Switch is not a computer in the ordinary sense, it would seem perverse to me to deny that it is the same kind of thing as a computer for metaphysical or philosophical purposes.

    > Why would a natural computer be so opaque to us?

    You’re really asking me why it’s hard to understand the brain? Because it’s bloody complicated, that’s why! There’s no design document or blueprint. It evolved by a process of messy natural selection. Its provenance is utterly unlike that of an artificial computer.

    > IOW, some “interpretations” are reasonable while others are, to borrow your word, daft.

    Agreed, 100%. But to me at least that doesn’t mean that there is an objective fact of the matter inherent in the system itself regarding the *correct* interpretation. Interpretations instead have varying degrees of subjective reasonableness or usefulness. So it may be reasonable to interpret a computer as implementing some particular computation, but there is no fact of the matter on whether this interpretation is correct (except with reference to the intentions of the designers).

    > The assumption requires that brain software actually exist and that running such software on any given platform must result in a mind.

    I don’t think we can really separate out brain software and hardware in this way.

    But first let’s go back to the full adder, which is a computer in the broader metaphysical, philosophical sense in which I am using the term, and not a computer in the computer science sense of a machine which is capable of running arbitrary algorithms (software) in something akin to a Von Neumann architecture. We understand the term “computer” differently, but it should be clear to you that Mike and I are only claiming that the brain is a computer in this broad sense, and if you still disagree then it might be more productive to entertain this definition just so we can discuss it. Or perhaps we can adopt James’s term of “pkomputer” which I think captures what I want to say.

    I think the brain is a computer in the sense that the full adder is a computer, namely a machine for processing inputs into outputs in a manner that could also in principle be carried out by a Turing Machine. I don’t think a full adder really has separate software and hardware. There is only a causal network which allows information to flow through it, combine and transform and ultimately produce outputs. This causal network is realised by gates and wiring in the adder but can also be reproduced as an abstract algorithm running on a desktop computer, e.g. in a simulation of that circuit. I think the same is true of a brain, i.e. that it ought to be possible in principle to process information on a desktop computer in a way analogous to the way a brain processes information, and so to get similar behaviour with respect to inputs and outputs. At the limit, we can achieve this by simulating every particle making up a brain (and its environment), but there may be (and probably are) simplifications we can make which will achieve the same ends, just as we can replicate the behaviour of a full adder without simulating every particle. Artificial neural networks are just a first step in this direction.

    It seems clear to me that such a simulated brain would be functionally identical to a real brain in terms of its ability to solve problems, reason, etc. The only way this would not be the case is if the brain is some form of hypercomputer exploiting uncomputable physics (per Penrose). More controversially, I also think we ought to regard the simulation as conscious. Perhaps you might accept the first claim while rejecting the second. In that case, it would seem the brain is both a computer/pkomputer and a consciousness/experience generator, and so you would be right that it isn’t only a computer/pkomputer. But computation/pkomputation is still a very important part of its function, and in fact the only part that is relevant from a natural selection standpoint. If only from a strictly teleological or functional point of view, the purpose of the brain would therefore be pkomputation, and the fact that it also generates consciousness would just be an unimportant side effect like its colour. So even in the world where Searle is right, I think there would be a lot of truth to the claim that the brain is a computer, even if that didn’t capture everything about it that we find significant as conscious beings.

    > In contrast, consider software that emulates a neural network. That’s a simulation of a physical system. The algorithms involved are very different; they are simulation algorithms.

    This may be the crux, or a crux of disagreement, so this point is pretty important. Nobody is claiming that the brain is running a conventional step-by-step algorithm involving problem-domain relevant symbols in the way an ordinary CPU does. My claim is only that it is possible to achieve the same information processing tasks as a brain does by means of some conventional algorithm, even if only such a simulation algorithm.

    Again, this simulation can probably be made somewhat simpler than a 100% accurate physical simulation of all particles in the brain without loss of function. Especially once it is sufficiently abstracted, it seems reasonable to me (it may not to you) to interpret our computer program as one implementation of this algorithm, and the brain as another. That is, the symbols of the algorithm we might interpret a brain to be computing would not correspond to dogs, cats, trees, etc, or even to thoughts, beliefs, qualia, etc, but to abstracted versions of neurons and synapses and neurotransmitters. In the same way a simulation of a full adder does not explicitly deal with representations of digits or addends or augends or sums or carrying over, but rather models gates and voltages and connections.
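
    To illustrate, a gate-level simulation of a full adder might look like this toy Python sketch (hypothetical, purely illustrative). The program nowhere mentions digits, addends or sums; it only wires up gates, just as a brain simulation would deal in neurons and synapses rather than thoughts:

        def xor(a, b): return int(a != b)
        def and_(a, b): return a & b
        def or_(a, b): return a | b

        def full_adder(a, b, cin):
            p = xor(a, b)                           # "propagate" wire
            s = xor(p, cin)                         # sum output
            cout = or_(and_(p, cin), and_(a, b))    # carry output
            return s, cout

        # Reading s as a binary digit is our interpretation, not the circuit's.
        print(full_adder(1, 1, 0))  # (0, 1)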

    1. “If only from a strictly teleological or functional point of view, the purpose of the brain would therefore be pkomputation, and the fact that it also generates consciousness would just be an unimportant side effect like its colour.”

      Yes. Two aspects of “consciousness” that have often and rightly been distinguished are semantic (I’m conscious of this table, for example) and phenomenal (my sight of the table looks yellow to me). I do think that the phenomenal aspects are “unimportant” side effects, when “importance” is judged from Mother Nature’s viewpoint. So f**k Mother Nature.

      1. Right.

        To be clear, my own view is that these two aspects are unavoidably linked, and that you can’t have the functional abilities of humans without phenomenal consciousness, that phenomenal consciousness in some sense just is the sort of information processing human beings perform. But I can entertain the idea that the two types of consciousness are distinct in order to explore other viewpoints.

        I agree with you that even if I were wrong to unify the two, the phenomenal consciousness would not be unimportant.

        But even so, in that case the evolved purpose of a brain is not to bring consciousness about, and if a computer is something which has the purpose of computing (i.e. processing information), then the brain would clearly be a computer. So either way it’s a computer, it’s just that it may be a computer with some profound side-effects.

        1. After settling a definition question that’s debatable, I would actually agree with you that a system that can do advanced information processing must be “phenomenally conscious”. Advanced stuff like self-modeling, imagination, planning, categorizing its emotional states, etc. But that wouldn’t mean it had our kinds of phenomenal states. Maybe instead of pains it has some radically different feeling, which it hates.

          When I see a car accelerate up a hill, I deduce that it has an engine. But I don’t assume that it has an *internal combustion* engine like my car. Internal combustion isn’t the only way to power a car.

          Hemoglobin and hemocyanin are two ways of carrying oxygen in the bloodstream. Pretending for a moment that each was just as good as the other, there would be a sense in which it was true that Mother Nature selected for hemoglobin in our blood. After all, without it we would die. Except that in the wildly improbable case that a mutant was born with hemocyanin instead, it would live. So in that sense, Mother Nature doesn’t care a whit about hemoglobin.

          And in exactly that sense, Mother Nature doesn’t care a whit about our phenomenal states. They don’t have to have any particular character, any item that can play the same functional role will do.

          Advanced “computational” systems (in your (over)broad sense) can perceive their environment, are *aware of tables* and chairs and so on, and also know their own thoughts and imaginings, so it seems right to say they’re conscious of tables and thoughts. And they are also aware of internal states which mediate between external properties (630 nm light) and beliefs (“that’s red”), which it seems only fair to call “sensations” — without implying that they must be similar sensations to human ones.

      2. I actually think the phenomenal aspects are not side effects, that they’re adaptive. Color, the usual example for phenomenal qualities, is information. It’s information encoded in a convention the brain uses to communicate different wavelengths of reflected light, but information nonetheless. For example, primates’ ability to detect red may help them find ripe fruit.

        1. You’re describing functional aspects, though, Mike. From the point of view of someone who sees them as separate (which I don’t), then you could have philosophical zombies (perhaps differently physically constituted, unlike Chalmers p-zombies) which have no phenomenal experience at all and yet can functionally distinguish colours. Computer vision can already do this, but you wouldn’t (I presume) say that such systems have phenomenal experience.

          1. I am a relentless functionalist 🙂

            Usually, for people who do see them as separate, I ask for an explicit description of exactly what it is they see as separate from the functionality. They usually just say something along the lines of raw ineffable experience.

            I actually have my own answer to that question. I think what they’re struggling to articulate is that we extract meaning from the sensory information coming in, meaning we use to evaluate possible courses of action. Of course, even when we’re not actively thinking about what to do, the meaning extraction machinery is running. Basically it’s action-scenario intelligence: imagination. That’s what I think computer systems are currently missing, although the DeepMind people are working on it, reportedly an extremely difficult slog. Considering imagination uses most of the forebrain, that shouldn’t be too surprising.

          2. (“I think what they’re struggling to articulate is that we extract meaning from the sensory information coming in, meaning we use to evaluate possible courses of action.”)

            What you are describing here, Mike, is the discrete, binary system of rationality, and that tsunami of sensory information coming in is what everyone calls subjective experience. The tsunami of sensory information contained within our environment constitutes the binaries, which are then contrasted against each other to extract meaning of a kind; unfortunately, many of those binaries are radically indeterminate.

            This anecdote is for Disagreeable Me and all other Platonists. The Greeks handed us a bogus construct called subject/object metaphysics (SOM). Fundamentally, the tsunami of sensory information is not a subjective experience, because there is no such thing as a subject. That sensory information is an objective experience that is radically indeterminate. The control system of the brain utilizes the mechanism of rationality to determine what those indeterminate experiences are, and that is why we refer to them as subjective: those experiences become “subordinate” to the power of our own determination, True or False, Yes or No. So in the truest sense, Homo sapiens create their own reality, most of which is a delusion simply because we do not understand the environment and our own objective experience within it.

          3. “homo sapiens create their own reality, most of which is a delusion simply because we do not understand the environment and our own objective experience within it.”

            Have you ever considered looking at that creation of our own reality, delusions and all, as subjectivity?

            This is similar to the response many make to philosophers who say phenomenal consciousness is an illusion, that the illusion is the experience. In your case, maybe the created reality with the delusions is subjectivity.

          4. (“Have you ever considered looking at that creation of our own reality, delusions and all, as subjectivity?”)

            Yes I have. Subjectivity is not an objective reality like everyone believes; subjectivity is a derivative of rationality, an intellectual construction, something invented by the Greeks that we just happen to accept without question. The model of SOM is one of the delusions we embrace; unfortunately, that model is a stumbling block to understanding and meaning. Old, deeply entrenched myths die hard, and SOM is one of our longest-standing myths, second only to the notion of “law”.

            (“This is similar to the response many make to philosophers who say phenomenal consciousness is an illusion, that the illusion is the experience.”)

            I for one do not believe that phenomenal consciousness is an illusion. That experience is real enough, but that “realness” is contextual. What is illusory are the delusions we create for ourselves while attempting to make sense of ourselves and of our place in the world. Robert Pirsig once commented that if one person believes a delusion, that person is considered insane. But if more than one person believes the same delusion, it’s called a religion. To avoid being offensive, I prefer to call our shared delusions a circle of mutual definition and agreement.

    2. “Calling it an assumption suggests that I don’t/haven’t.”

      Apologies, not my intent! The assumption I see is that the mind is a distinct abstract mathematical object. Regardless of the amount of thought behind it, it’s still an assumption.

      We’re all making assumptions here. The question to ask is what they’re based on.

      “…but it’s an open question whether this is because of the physical particulars of what the brain is doing or more of an abstract causal structure that could be realised on other substrates. “

      Absolutely, no question!

      “Whether consciousness is like the pattern of propagation or like the wetness is the point of contention.”

      Exactly so.

      “So all I’m saying is that the analogy to the wetness of water is a controversial one. It’s a great analogy for getting your view across, but it doesn’t really work to persuade the other side.”

      My laser light analogy is another version. Here the question is whether consciousness resides in the description of the atoms and emitted photons *or* in the laser light itself.

      Given that it is a controversial analogy, perhaps one discussion lies in why it is, or is not, persuasive. I’ve never really understood the objections to it. For instance:

      “A simulated ocean wave is not wet like a real ocean wave, but it does propagate across the simulated sea in much the same way as an ocean wave propagates on the real sea.”

      Yeah, but so what? It’s still not wet, you still can’t surf on it. It’s just numbers!

      “Not sure how I can clarify other than to say that I think that consciousness is an artifact of the patterns rather than something physical produced by the stuff.”

      Do you have a mechanism in mind for how patterns, in themselves, lead to consciousness? Or do you take it as fact that the right sort of pattern is necessarily conscious? (Are you familiar with “dust” theories? If so, do you subscribe to them?)

      “Well, here’s where I’m all out on my own, in that I do actually think the mind is an abstract structure rather than something that is literally being physically produced by the brain.”

      Oh, you’re not alone! Everyone who believes the mind can run as software on a computer, let alone believes in mind uploading, necessarily believes the same thing. A belief those things are possible entails a belief the mind is an algorithm — a mathematical abstraction.

      “You lose me where you say that this would make it the one mathematical abstraction in all of creation though.”

      I would say the math describes an orbit. The ellipse describes only a two-body solution; the real world is much messier. Even with computers we cannot exactly calculate the orbits of three or more bodies, because there is no general closed-form solution. We can only approximate numerically.

      There is clearly math that describes isolated parts of the brain, but if we don’t even have exact math for orbits, the full math that describes the brain’s operation must be complex beyond imagining.
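
      To make “we can only approximate” concrete, here is a bare-bones gravitational integrator (a toy sketch with made-up units): it nudges each body along step by step, and every step is slightly off the true trajectory.

          def step(bodies, dt=0.01, G=1.0):
              # bodies: list of dicts with mass 'm', position 'pos', velocity 'vel'
              # One Euler step: accumulate accelerations, then advance; the
              # approximation error grows with every step.
              for i, b in enumerate(bodies):
                  ax = ay = 0.0
                  for j, other in enumerate(bodies):
                      if i == j:
                          continue
                      dx = other['pos'][0] - b['pos'][0]
                      dy = other['pos'][1] - b['pos'][1]
                      r3 = (dx * dx + dy * dy) ** 1.5
                      ax += G * other['m'] * dx / r3
                      ay += G * other['m'] * dy / r3
                  b['vel'] = (b['vel'][0] + ax * dt, b['vel'][1] + ay * dt)
              for b in bodies:
                  b['pos'] = (b['pos'][0] + b['vel'][0] * dt,
                              b['pos'][1] + b['vel'][1] * dt)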

      The question though: Is the mathematical description the thing? You say yes, I say no.

      “I would say [a binary full adder] is a computer because it has the job of processing inputs and producing outputs.”

      Then a transistor radio and a(n analog) guitar pre-amp are also “computers?”

      “But if I were pressed to answer a philosophical question about whether a calculator was really engaging in computation, I would be inclined to say yes.”

      No need for deep thought! A calculator is clearly engaged in computation. There is an engine with an instruction set, and there is an algorithm (instructions; code). That’s computation.

      “Similarly, though my Nintendo Switch is not a computer in the ordinary sense,”

      That the algorithm is burned into ROM doesn’t change anything. It is a TM that defines computing, not a UTM. (And it certainly isn’t von Neumann architecture.)

      But calculators, microwaves, all digital gear, all use recognizable CPUs and recognizable algorithms (written by humans).

      “You’re really asking me why it’s hard to understand the brain?”

      No, I was asking why it doesn’t look or work anything like a computer.

      “But to me at least that doesn’t mean that there is an objective fact of the matter inherent in the system itself regarding the *correct* interpretation.”

      That’s another concept I don’t understand. Don’t all natural constructs have only one correct interpretation?

      For instance, we can use a rock in all sorts of ways: chair, paperweight, weapon, pet, gravity test, or what have you. But it’s still just a rock.

      “[I]f you still disagree then it might be more productive to entertain this definition just so we can discuss it.”

      I do understand your use and we are discussing it. I just think using the term “computer” confuses the issue. I’ve always been fine with “information processor” as a general term. (That doesn’t mean I believe it’s a correct theory, but I think it’s the appropriate label for the theory.)

      Note that an analog computer, a transistor radio, a guitar pre-amp, a full-adder, all qualify under this heading. So does any computer, but to me a computer is a specific sub-set of information processors.

      And then, as you say, the central issue is still whether a numerical simulation or emulation of the brain can produce the same results as the physical instance.

      “My claim is only that it is possible to achieve the same information processing tasks as a brain does by means of some conventional algorithm, even if only such a simulation algorithm.”

      Yes, I know. I disagree. 😀

      Or rather, my guess is that it’s unlikely.

  15. Hi Wyrd,

    You say you don’t understand the objections to the laser/water analogy.

    > Yeah, but so what? It’s still not wet, you still can’t surf on it. It’s just numbers!

    So, I would say that it’s clear that a simulation conserves some features of the phenomenon (patterns, structure, causal relationships and that sort of thing) and does not conserve other features (wetness, emission of photons). So the objection is just to point this out, because it’s not agreed what kind of feature consciousness is. The objection is not to say that this is necessarily a disanalogy, it’s pointing out ways in which it could be a disanalogy.

    I wasn’t familiar with the dust theory of consciousness, but am familiar with very similar ideas such as Mark Bishop’s “Pixies” argument and Hilary Putnam’s Rock argument from *Representation and Reality*.

    I discuss these ideas favourably here as valid critiques of physicalist computationalism:
    http://disagreeableme.blogspot.com/2016/02/putnam-searle-and-bishop-failure-of.html

    and here answer them with an argument from a Platonic perspective:
    http://disagreeableme.blogspot.com/2016/03/rescuing-computationalism-with-platonism.html

    In a nutshell, I think these arguments are a serious problem for most computationalists (including Mike), but the point is moot given other views from my broader worldview, namely:
    * The acceptance of Platonism
    * My view that there is no fact of the matter on whether any given physical system implements any specific computation
    * My view that it is the abstract structure itself that is conscious rather than the physical implementation
    * My view that two instances of the same computation just are literally the same computation and not two distinct abstract objects
    * The Mathematical Universe Hypothesis, which suggests that all possible universes exist and hence that all possible minds exist; the fact that we can interpret dust to be computing some mind in some virtual environment doesn’t really mean anything, as that mind and its environment must exist anyway

    > A belief those things are possible entails a belief the mind is an algorithm — a mathematical abstraction.

    In light of the above, I agree that this is the only way to make computationalism coherent, but most computationalists don’t quite accept that. For instance, you can’t destroy an abstract object, but most computationalists would believe that you can destroy a mind by destroying a brain.

    I on the other hand believe that you can’t destroy a mind because it continues to exist in two senses:
    1) The history of that mind, e.g. everything it ever experienced and everything it ever thought, continues to exist timelessly. That history exists as an abstract object and cannot be destroyed. This resembles the immortality Einstein was talking about when he comforted the family of his friend Michele Besso with a block theory of time, saying “Now he has departed from this strange world a little ahead of me. That means nothing. People like us, who believe in physics, know that the distinction between past, present and future is only a stubbornly persistent illusion.”
    2) There is presumably some possible universe in which the mind (and its brain) continues to exist past the point in which its instantiation was destroyed in this universe, and so the mind continues to exist. This resembles Quantum Immortality (https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality)

    > Then a transistor radio and a(n analog) guitar pre-amp are also “computers?”

    I said right at the beginning that you could regard anything as a computer (or a paperweight). The question is whether this is a useful way to look at it. It seems to me like a somewhat useful way to regard an adder, calculator, Nintendo Switch or brain. It seems less useful to regard a radio or a pre-amp as computers, and quite a bit less so again to regard a rock or an orbiting planet as a computer.

    > A calculator is clearly engaged in computation. There is an engine with an instruction set, and there is an algorithm (instructions; code).

    Not necessarily. A simple non-programmable calculator may just be a glorified adder. It can all be implemented as circuits.
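
    To illustrate (a toy sketch, not a claim about any actual product): a multi-bit adder can be built just by chaining full adders, pure combinational logic with no CPU and nothing you could call software.

        def full_adder(a, b, cin):
            s = a ^ b ^ cin
            cout = (a & b) | (cin & (a ^ b))
            return s, cout

        def ripple_add(a_bits, b_bits):
            # Bits are 0/1, least significant bit first.
            carry, out = 0, []
            for a, b in zip(a_bits, b_bits):
                s, carry = full_adder(a, b, carry)
                out.append(s)
            return out + [carry]

        print(ripple_add([1, 1, 0, 0], [1, 0, 1, 0]))  # 3 + 5 -> [0, 0, 0, 1, 0], i.e. 8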

    > That the algorithm is burned into ROM doesn’t change anything. It is a TM that defines computing, not a UTM.

    What’s the difference between something which has an algorithm for adding binary digits burned into ROM and an adder?

    > For instance, we can use a rock in all sorts of ways: chair, paperweight, weapon, pet, gravity test, or what have you. But it’s still just a rock.

    But is it actually, really, truly, a paperweight? That’s what it’s like to ask whether system X is really implementing computation Y. Computation is in the eye of the beholder. That’s the point the dust argument establishes.

    You close by saying that you think it is unlikely that an algorithm could ever do what a brain can do, even from a functional perspective.

    But how can you account for this? I mean, what’s to stop me using an algorithm to simulate a brain and using the simulation to process information as a brain would? Granted it’s likely to be infeasible in practice, but unless you assume something uncomputable about the laws of physics being exploited by brains, it seems that this is necessarily possible in principle.

    1. “I would say that it’s clear that a simulation conserves some features of the phenomenon (patterns, structure, causal relationships and that sort of thing) and does not conserve other features (wetness, emission of photons).”

      I would add (crucially): It cannot conserve those features. It can only conserve a description of a physical process.

      To me that breaks it because I believe the important features are in the thing, not the description. You have a Tegmarkian view that makes the physical properties (wetness, force, light, etc.) an implementation detail not relevant to the underlying mathematical reality.

      “…given other views from my broader worldview, namely:”

      I have some Platonist leanings mathematically, but I’m very far from being Tegmarkian. I do agree two instances of computation are identical (but, of course, under the CS definition of computation).

      (I’ll pop over and check out your posts when I get a chance. I find the dust/pixies argument pretty strong. I’m not sure I follow Putnam’s Rock argument; at first blush it seems nonsensical to me.)

      “For instance, you can’t destroy an abstract object, but most computationalists would believe that you can destroy a mind by destroying a brain.”

      If I destroy the computer running the one and only copy of a given piece of software, haven’t I destroyed an abstract object in the same sense as destroying a brain?

      “1) The history of that mind, e.g. everything it ever experienced and everything it ever thought, continues to exist timelessly.”

      Poetic! 🙂

      “2) There is presumably some possible universe in which the mind (and its brain) continues to exist…”

      Sounds like heaven! 🙂

      “A simple non-programmable calculator may just be a glorified adder. It can all be implemented as circuits.”

      Doesn’t it still have to manage the keypad and display and allow input of multiple numbers and operators? You can’t really do that with circuitry; that’s why calculators didn’t exist prior to CPUs.

      “What’s the difference between something which has an algorithm for adding binary digits burned into ROM and an adder?”

      Digital v. Analog. Simulation v. real-world process. I believe that matters.

      “But is it actually, really, truly, a paperweight?”

      Does a falling tree make a sound? Depends on how we define sound; then the answer is clear. Likewise, depends on how we define paperweight. Once we do, the answer is clear:

      If a paperweight is ‘a heavy object capable of holding down papers in a reasonable wind’ then a rock is clearly a paperweight. If a paperweight is additionally defined as ‘something manufactured with the intent of being a paperweight’ then a rock clearly isn’t.

      But ultimately a rock is a rock is a rock. As I said, it can be used for many things.

      “That’s what it’s like to ask is system X really implementing computation Y.”

      I do see some circularity in saying the brain must be a computer because it can be interpreted as computing (especially given that the interpretation is rejected by many).

      That’s like saying a rock must be a paperweight because we can interpret it thus. I say, no, it’s just a way of looking at a rock; it’s not reality.

      If we’re going to interpret the rock as a computer, can we use it that way? Can we validate our interpretation? Only with a bunch of actual computer gear, which gives us a system in which the rock is a small — very replaceable — part.

      Doesn’t that make it pretty clear that, no, the rock is not a computer? Some small part of, perhaps, but in a role easily replaced with just about any other thing.

      The brain is a much more complicated proposition…

      “But how can you account for this?”

      Well, I wrote a whole bunch of posts about that. You’ve seen a few of them. (This one, anyway.)

      Bottom line, to answer your question, I see software brains at three levels: (1) Simulations of physics, from the quantum level up to bio-cellular level. Such raw physics simulations could simulate other systems besides brains. (2) Simulations of the human connectome. (3) Functional emulations of the human mind.

      My understanding is that #3 is the “old-fashioned” AI that led to AI winters. It produced a variety of knowledge-based and expert systems, but nothing like AGI (nor even close). The challenge does seem more than formidable, possibly insurmountable.

      My suspicion is that #1 and #2 will result in a biologically functioning, but non-active, mind. All the biology will appear to work (according to its numbers), but there will be no “ghost” (Ghost in the Shell is a favorite of mine, and I think the term “ghost” is perfect.)

      Or maybe there will be a ghost but it’ll be just noise. Or there will be output, but incoherent and insane. Or maybe the ghost will work, sort of, for a while, but then computation errors and chaos will degrade it. (HBO’s Westworld used that idea.)

      Or maybe, just maybe, it’ll actually work, and I’ll be wrong. 😛

      1. Hi Wyrd,

        Thanks for pointing me at your blog. That previous correspondence had slipped my mind but I’m refreshed now!

        So, I think I see where you’re coming from a little better, but it seems to me that your doubt as to the potential of digital simulations hinges on the idea that the brain is taking advantage of analog interactions between infinitely finely tuned physical quantities to achieve what a digital computation cannot. From my point of view this simply cannot be the case, as it would be infinitely delicate. The mind you posit is a needle balancing indefinitely on its point — unassisted — while an earthquake rages.

        > I’m not sure I follow Putnam’s Rock argument; at first blush it seems nonsensical to me.

        It’s basically the same argument as pixies/dust. It’s just saying that under some complex interpretation you can interpret the particles in a rock to be instantiating any computation you like, just as the dust argument says you can interpret the movements of dust particles to be instantiating a computation.

        > If I destroy the computer running the one and only copy of a given software, haven’t I’ve destroyed an abstract object in the same sense as destroying a brain?

        For practical purposes perhaps, but from a Platonic perspective software is an abstract mathematical object and so continues to exist abstractly even if no humans know its details. It could be “rediscovered” by another programmer at another time. This is easier to imagine if it’s a trivial piece of software such that it wouldn’t be very surprising if two people independently wrote what amounts to the same thing (especially if you’re more interested in the abstract algorithms involved and less interested in superficial details like whitespace, variable names or even the choice of programming language). It’s less intuitive if it’s a program involving millions of lines of code and branded everywhere with stuff like “Microsoft Word 2019 v1.0.1.20190112”, but the principle is the same.

        > Sounds like heaven! 🙂

        No, usually just some other mundane universe where you didn’t die for some reason. As long as it is at all possible, however unlikely, that you might have survived what it looked like was going to kill you, there are infinitely many universes where you did and you will continue to live in those. And in practice I think it’s always at least infinitesimally possible that you will survive anything, even if it’s by the spontaneous quantum teleportation of all your particles ten meters to the left. Alternatively you could survive by finding yourself in a universe where your previous brain state of perceiving yourself to be falling off a 100 storey building turned out to be the result of a dream or hallucination. So there is a prediction of sorts — each of us should perceive ourselves to be immortal, but things may get pretty weird (and perhaps unpleasantly so) as we live longer and longer. So this is not wishful thinking. Death may well be preferable to immortality of this kind. Some of the possibilities for improbable continued existence are ghastly (Roko’s basilisk for example).

        > Doesn’t it still have to manage the keypad and display and allow input of multiple numbers and operators? You can’t really do that with circuitry;

        I think you’re wrong here. You can do pretty much anything with circuitry. As long as you’re happy to make the device inflexible and limited in purpose, you can implement any algorithm with gates and connections alone. Especially in the case of simple devices like basic calculators. That’s why I don’t see a difference between having an algorithm burned into ROM and something like an adder. An algorithm burned into ROM is really no different than a bunch of logic gates wired up in a circuit. You only need a general purpose CPU and memory and all that if you want to have practical, flexible, general purpose computing devices.

        > That’s like saying a rock must be a paperweight because we can interpret it thus.

        Yes, it is exactly like that, as I’ve been saying since my first comment.

        > I say, no, it’s just a way of looking at a rock; it’s not reality.

        Again, yes. This is what I mean when I say there is no fact of the matter on whether physical system X is actually, really, objectively computing Y. It’s just a way of looking at X. Mike would disagree with me here, I think. My view only works because I identify my mind with Y and not with X. I am the pattern of information processing we might most naturally interpret my brain to be computing — I am not my brain.

        > If we’re going to interpret the rock as a computer, can we use it that way?

        No, because it is daft and useless to interpret a rock to be a computer. But that doesn’t mean that there is a fact of the matter on what is a computer and what is not. Just more and less useful interpretations.

        > My suspicion is that #1 and #2 will result in a biologically functioning, but non-active, mind.

        This suspicion seems unfounded to me. If a simulated hurricane behaves like a hurricane, and a simulated solar system behaves like a solar system, and a simulated protein folds like a protein, etc, then I see no reason at all why alone of all systems in creation a simulated brain would not behave like a brain. Note that I’m not saying we can predict what a given brain will do with a simulation of it — chaos seems to suggest that this is impossible. But any good simulation should behave in a manner characteristic of the kind of thing it is simulating, such that if you were interacting with a real brain or a simulated brain inside a black box (Turing Test style) you should not be able to tell which it is, no matter how long you interact with it.

        1. “[I]t seems to me that your doubt […] hinges on the idea that the brain is [analog].”

          It’s definitely an important cornerstone. There are others.

          “The mind you posit is a needle balancing indefinitely on its point — unassisted — while an earthquake rages.”

          Not at all. Myriad physical systems depend on feedback loops to maintain stable trajectories through their phase space. I do absolutely believe the mind is finely balanced in a sense related to turbulence or chaos.

          It’s kind of like our minds live right on the boundary of the Mandelbrot set, if that means anything, and that boundary can be treated as a kind of phase-space attractor.

          “[Putnam’s Rock is] basically the same argument as pixies/dust.”

          Right, that much I get. It’s the use of disjoint states. I don’t see what that brings to his argument, and it seems to open the door to arguments about their validity.

          “[F]rom a Platonic perspective software is an abstract mathematical object and so continues to exist abstractly…”

          True of the brain as well, right?

          If the mind is an abstract mathematical object, and the brain is destroyed, couldn’t a sophisticated enough programmer reconstruct it?

          “I think you’re wrong here.”

          I’ve designed a fair amount of relay-based and TTL-based circuitry (I used to be a hardware hobbyist). Possible? Yes. Reasonable? No. It would be hideously complicated to implement just the four basic math functions. Multiplication, let alone division, requires steps, and doing that with normal logic gets ugly really fast.

          “[Y]ou can implement any algorithm with gates and connections alone. “

          I’m not sure that’s true. (I’m pretty sure it’s not.)

          Try it. Try designing a four-function calculator (with keypad and display) using TTL.

          “That’s why I don’t see a difference between having an algorithm burned into ROM and something like an adder.”

          The algorithm requires an engine that understands it in order to execute. The adder just does what it does.

          The algorithm, thus, is incomplete. The adder isn’t.

          “I see no reason at all why alone of all systems in creation a simulated brain would not behave like a brain.”

          I said that, too, but distinguished between content-empty biological function and the presence of a “ghost.” I’m not sure the former necessarily leads to the latter in a simulation.

  16. Hi Wyrd,

    > Myriad physical systems depend on feedback loops to maintain stable trajectories

    Sure. Of course it is possible to build robust systems using physical devices, where small perturbations make little difference. But I don’t see what you would lose by simulating those systems digitally, precisely because they are robust. The tenth decimal place doesn’t matter because the system is robust to perturbations in the tenth decimal place.

    > It’s kind of like our minds live right on the boundary of the Mandelbrot

    You’re suggesting that our minds are perhaps semi-chaotic in some sense. Can you make it more precise? In any case, I would expect that any digital simulation of a brain would exhibit the same characteristics. Digital simulations can be stable or chaotic, and if our mind can exist at the boundary then I don’t see why a digital simulation could not also exist at the boundary.

    > True of the brain as well, right?

    Yes.

    > If the mind is an abstract mathematical object, and the brain is destroyed, couldn’t a sophisticated enough programmer reconstruct it?

    Sure. Although this programmer would not strictly speaking be creating it anew but would instead be building a physical system which had the same pattern.

    > Possible? Yes. Reasonable? No.

    OK. I’m mostly talking of what is possible. I am more or less assuming that simple calculators are implemented without a general purpose CPU or anything you could call software, that it’s all done on integrated circuits which are basically made up of logic gates (including larger units such as registers which are in turn basically made up of logic gates). I still believe this is true and everything I can find on it seems in line with this view. I’m open to correction on this if you can find me a source. But if we agree that this is possible then just assume I’m talking of such a calculator. I would say it is engaged in computation and you apparently would not. For a specific example of a calculator that does not appear to have a general purpose processor or anything you could call software, I could point to Babbage’s Difference Engine.

    > Try it. Try designing a four-function calculator (with keypad and display) using TTL.

    No thanks! I’m not an electronic engineer!

    > distinguished between content-empty biological function and the presence of a “ghost.” I’m not sure the former necessarily leads to the latter in a simulation.

    What you mean by “ghost” is not very clear. Based on your later sentences you appear to mean consistently rational intelligent behaviour over time as opposed to something like phenomenal experience. I’m leaving the latter out of it for now. If you are talking about the former, then you are denying that a simulation of a brain could behave consistently rationally and intelligently over time as a real brain does. That means that it does not behave in a manner of a real brain, and so that there are physical aspects of a real brain that cannot be captured by a simulation, which could only be true if the real brain were exploiting uncomputable physics.

    1. “Of course it is possible to build robust systems using physical devices, where small perturbations make little difference.”

      Do you withdraw your earlier objection that:

      “The mind you posit is a needle balancing indefinitely on its point — unassisted — while an earthquake rages.”

      Because we agree stability is possible?

      “You’re suggesting that our minds are perhaps semi-chaotic in some sense.”

      In the same sense as turbulence, multi-body orbits, and myriad other natural dynamic processes.

      “In any case, I would expect that any digital simulation of a brain would exhibit the same characteristics.”

      It could exhibit other chaotic characteristics, but not the same ones.

      I know that as a Tegmarkian you believe a numerical simulation must have an identity with the seemingly physical thing it simulates. At this point in science that can’t be demonstrated (and there are coherent arguments against it).

      What we can say is that the numerical simulations we know are not the same as the things they simulate. This is implicit in how they work.

      It’s not just the wetness/laser light problem. It’s that numerical simulations cannot faithfully and consistently reproduce the real world using finite numbers. Every calculation is a little off.
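
      A concrete illustration (standard IEEE 754 floating point, nothing exotic):

          total = 0.0
          for _ in range(10):
              total += 0.1      # 0.1 has no exact binary representation

          print(total == 1.0)   # False
          print(total)          # 0.9999999999999999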

      “I am more or less assuming that simple calculators are implemented without a general purpose CPU and anything you could call software,…”

      For a look at what’s possible, check out the world’s first all-electronic desktop calculator, the ANITA (1961).

      I can’t find a schematic, but the article mentions a 4 KHz pulse rate, which refers (I assume) to the “system clock” that controls the circuitry. I mentioned that calculator operations required steps. That’s why there’s a clock.

      Clocked circuitry has an easy mapping to an FSA. Each clock step represents a step from one state to another. The states are causally connected through the logic. There is some TM with a one-to-one mapping with the FSA (of course).

      So clocked circuitry is easily considered a crude form of computation. It has the necessary characteristics of saving state and conditional branching. It’s hard to not see it as computation.

      The point is that these kinds of calculators (including the Babbage Engine) are clearly FSAs. They step from state to state according to their design (their algorithm). There is a one-to-one mapping, an isomorphism, with the FSA.
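
      A toy version of that mapping (a hypothetical example, not any real calculator): a clocked circuit as a finite-state automaton, with one step along the transition table per clock tick.

          def run_fsa(transitions, state, inputs):
              # transitions: dict mapping (state, input symbol) -> next state
              trace = [state]
              for symbol in inputs:          # one clock tick per input symbol
                  state = transitions[(state, symbol)]
                  trace.append(state)
              return trace

          # Example: track the parity of 1-bits seen so far.
          parity = {('even', 0): 'even', ('even', 1): 'odd',
                    ('odd', 0): 'odd', ('odd', 1): 'even'}
          print(run_fsa(parity, 'even', [1, 0, 1, 1]))
          # ['even', 'odd', 'odd', 'even', 'odd']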

      1971 saw the first calculator to use a microprocessor. It’s pretty much been microprocessors ever since. They make development so much easier and cheaper. And you can use the same chip for different things.

      The full adder, on the other hand, requires no clock, has no steps to speak of, and no real algorithm (it’s really just logical expressions):

      S  = (A xor B) xor Ci
      Co = ((A xor B) and Ci) or (A and B)

      where S is the sum output, A and B are the inputs, and Ci and Co are carry-in and carry-out.

      It can be simulated numerically, but that is not a one-to-one mapping. (I know you think that doesn’t matter, but currently it appears to.)

      Also, the full adder only has two obvious states, before and after, which ignores whatever is going on inside (which could entail a number of transitory states). Any FSA designed to simulate its operation will be abstract and not an isomorphism.

      “What you mean by ‘ghost’ is not very clear. Based on your later sentences you appear to mean consistently rational intelligent behaviour over time as opposed to something like phenomenal experience.”

      The “ghost” is, in a sense, Chalmers’ hard problem. It is definitely phenomenal, but it’s also the seat of our apparent agency. It’s what we think of as “I”.

      1. Hi Wyrd,

        > Do you withdraw your earlier objection … Because we agree stability is possible?

        No. From my point of view, if you assume that the analog system is stable, i.e. robust to small perturbations, then the fact that digital simulations cannot be infinitely precise cannot be used to say that the analog system is more capable in any dimension. There are small perturbations that don’t matter for the analog system, and as long as the digital simulation is fine grained enough then any imprecision should be within the range of small perturbations that don’t matter.

        > What we can say is that the numerical simulations we know are not the same as the things they simulate. This is implicit in how they work.

        You’re emphasising differences but there are also similarities. This is beating a dead horse. The differences impress you. The similarities impress me. That’s all there is to it. I think I understand your position with regard to these analogies, and I hope that by now you also understand mine. We should move on to discussing whether a simulation can exhibit all the same apparent kinds of behaviour, because here the “wetness” question doesn’t matter.

        I take your point about needing a clock to implement a calculator. You’re presumably right on that one. This is useful because it gets me closer to understanding what you think computation requires, i.e. a series of discrete sequential steps. This helps me understand how you can deem a full adder not to be a computer but a calculator to be a computer. The hardware/software distinction seems less crucial. In that case, the brain is unlike a computer for you more because it lacks these distinct time slices and less because it doesn’t have an identifiable separation into hardware and software.

        In that case instead of the calculator I’d ask you to consider distributed or parallel algorithms, where there may be many independent processors with their own clocks working in concert towards some goal. These can be built as single devices (e.g. multi-core CPUs, or a CPU and GPU working in concert to simulate a virtual world for a game) or be wildly varied configurations of widely distributed devices (e.g. Folding@Home or Bitcoin). Personally, with my liberal interpretation of computing, I’m inclined to admit the ensemble of processors in either case as a single distributed computer as well as counting individual processors as computers in their own right. I wonder how you think of it. In any case, for cases like these, it seems clear that there need not be any global shared clock, and yet it also seems clear (at least to me) that there is some overall global computation happening. The set of states of the overall automaton is the product of the possible state sets of the individual processors.

        The analogy to the brain here might be to consider the algorithm the brain is running to be a distributed algorithm where each neuron is a processor. While each individual neuron might not literally be a discrete FSA the way you might require, it seems to me that a model which treats a neuron as a simple FSA implementing a simple computation on the inputs would be reasonably accurate.
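
        As a coarse sketch of that picture (emphatically not a biophysical model, and the numbers are made up): each unit holds a little local state and updates from its inputs. For simplicity this loop uses a global clock, though as noted above none is required.

            def step_network(weights, states, threshold=1.0):
                # weights[i][j]: connection strength from unit j to unit i
                # states: list of 0/1 firing states, one per unit
                new_states = []
                for row in weights:
                    drive = sum(w * s for w, s in zip(row, states))
                    new_states.append(1 if drive >= threshold else 0)
                return new_states

            weights = [[0.0, 0.6, 0.6],
                       [1.2, 0.0, 0.0],
                       [0.0, 1.1, 0.0]]
            states = [1, 0, 1]
            for _ in range(3):
                states = step_network(weights, states)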

        > The “ghost” is, in a sense, Chalmers’ hard problem. It is definitely phenomenal, but it’s also the seat of our apparent agency. It’s what we think of as “I”.

        Right and that’s how I would have usually interpreted it, but that’s not particularly helpful as we were quite distinctly not discussing the hard problem. You raised this point in response to my assertion that a simulated brain would *behave* like a real brain.

  17. Here’s a question my reply to DM raised for me: How does a computational theory of mind define what a mental state is in the temporal sense? What’s the clock that says state-1, state-2, state-3, etc.

    The presumption is the brain is an FSA, which requires a mapping from mental states to distinct states in the FSA. So how do you define the distinct mental states in an analog, parallel system?

    How would you define distinct states in an ant hill or rainstorm?

    1. Hi Wyrd,

      For those of us who don’t take a very narrow, computer science idea of computing and instead adopt a broader understanding, there is no real need for distinct states or a clock. If you want distinct states, you can think of what steps you would have in a simulation of such a system.

      1. I’m not sure the CS idea of computing matters here.

        The computational mind theories claim there is a mapping from mental states to some FSA, thus demonstrating the mind really is an FSA.

        But if there are no such mappings, just states that are imagined, doesn’t that pretty seriously weaken (if not destroy) the computational mind argument?

    1. Right. Which means a numerical simulation of a real-world system, with all the issues that entails. But my point is that there doesn’t seem to be a strong claim that the system is computational in the first place and that therefore a simulation must work.

      From your above response:

      “From my point of view, if you assume that the analog system is stable,…”

      I got the impression you were saying my conception of a mind couldn’t be stable at all. Now I understand you’re saying a numerical simulation should be identical to a stable real-world system.

      “You’re emphasising differences but there are also similarities. This is beating a dead horse. The differences impress you. The similarities impress me.”

      The problem is that the differences mean they are different. Similarities don’t change that. I don’t think it’s right to just hand-wave away the differences. It’s not a matter of being impressed by them; it’s a matter that they exist.

      “We should move on to discussing whether a simulation can exhibit all the same apparent kinds of behaviour, because here the “wetness” question doesn’t matter.”

      My understanding is that a numeric simulation cannot accurately describe a complex analog system. It can only get close. So, yeah, that’s the crux of it.

      “This is useful because it gets me closer to understanding what you think computation requires, i.e. a series of discrete sequential steps.”

      My definition is the computer science definition, which equates computation with a TM or lambda calculus or other formalism. A natural consequence of a computation is that it has steps (the isomorphism with an FSA makes that pretty clear).

      Whether the algorithm is reified in hardware is irrelevant. (In some sense, “software” always is, from magnetic cores, to punch holes, to charged RAM bits.)

      “This helps me understand how you can deem a full adder not to be a computer but a calculator to be a computer.”

      Because a full adder is just a reification of a logical expression. That it can be numerically simulated doesn’t make it a computation. I can simulate population growth, but the population itself isn’t computing anything.

      “…the brain is unlike a computer for you more because it lacks these distinct time slices…”

      More that I don’t see it as a TM. Therefore, it’s not computational.

      “I wonder how you think of [distributed processing].”

      Pretty much exactly as you do. There is no need for a global clock. There are myriad ways to communicate asynchronously. (Your hard drive and graphics card likely have their own CPU systems that are completely separate from the main CPU. They only communicate over the data bus.)
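
      A toy sketch of that arrangement, with a queue standing in for the data bus (the timing figures are arbitrary):

      ```python
      # Two independent "processors" with no shared clock, communicating
      # asynchronously over a queue standing in for the data bus.
      import threading, queue, time, random

      bus = queue.Queue()

      def device():
          for i in range(5):
              time.sleep(random.uniform(0.01, 0.1))  # its own unsynchronized timing
              bus.put(f"message {i}")
          bus.put(None)  # sentinel: no more messages

      threading.Thread(target=device).start()
      while (msg := bus.get()) is not None:  # the main "CPU" just waits on the bus
          print("received", msg)
      ```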

      “[I]t seems to me that a model which treats a neuron as a simple FSA implementing a simple computation on the inputs would be reasonably accurate.”

      Well, synapses are said to be extremely complicated, so maybe not a simple FSA, but the idea of parallel computing where nodes or processes are neurons is a viable approach.

      I can see three levels:

      (1) Everything implemented in software. Just lots of processes running on lots of systems. Probably can’t pull off the parallelism of the brain, but it’s all just calculations anyway. Note that the network here is virtual.

      (2) Neurons and synapses are standalone processors running neuron or synapse software. But the system itself is physically interconnected, a network like the brain. Such a system could be fully parallel.

      (3) It’s all hardware, and analog hardware at that. It’s essentially a machine version of a brain. Neurons and synapses are just analog signal processors, no digital to speak of. Fully parallel, fully physical. (Think Asimov’s “positronic” brain.)

      I’m pretty sure #3 would work. I’m dubious about #1 and #2 because of the differences between simulation and real-world.

      There’s also the level you mentioned at top, of simulating a physical system, which is also where we end…

      “[W]e were quite distinctly not discussing the hard problem. You raised this point in response to my assertion that a simulated brain would *behave* like a real brain.”

      I mentioned Chalmers as a pointer to what I meant by “ghost.”

      I did answer your question. I can see many possibilities, as I mentioned. Only one of them is that it works as you expect.

      To restate: I think it’s possible the simulation will work in the sense of modeling a living system at whatever level it simulates. Similar to a laser simulation showing a “working” laser. But as I equate consciousness with the laser light, my gut sense is that the living system will be mindless. No ghost. No agency. Like a comatose brain.

      Or an insane brain. Or an epileptic brain. There are many possibilities.

      1. To be clear, the levels (1), (2), (3), in the above are sub-levels of level (2) of the three I listed in this comment, where I first used the term “ghost.” The level mentioned at beginning and end of the above comment would be the other comment’s level (1). I don’t believe we’re talking about that comment’s level (3). Correct me if I’m wrong on any of this!

        1. Wyrd,
          Just jumping in to comment on your levels, which I do think add some important clarification.

          I think (1) is possible in principle but may never be practical. It might take a small number of serial processors far too long to do things that a massively parallel system can do in short order.

          (2) does seem plausible to me. In addition, I can imagine a (1)-(2) hybrid that makes use of the speed of technological processors to reduce the necessary parallelism. In other words, the additional speed may change the sweet spot on just how much parallel processing is required. That said, I think we’d still be looking at a system with at least hundreds of thousands of processing cores.

          (3) seems required in the case that there is a physical ghost that needs to be generated. If there is such a ghost, then matching the capabilities of a system with such a ghost might require us figuring out how to generate it. Even a positronic brain might fail if it’s missing some physical aspects of the neurobiological substrate.

          I know you disagree, but I haven’t seen any evidence for a ghost, and cognitive neuroscience seems to be progressing without needing it. Of course, that could change any time, but until something in the data requires it, the ghost strikes me as an unnecessary conjecture.

          Without the ghost, the question then seems similar to other cases where digital systems have reproduced analog mechanisms. We all listen to music and watch movies that were originally recorded in analog formats, without any perceived loss of vital functions. Analog computers were once used to do differential calculations, but those are now handled by digital computers, whose precision reached a high enough point that the quantization noise was as low as, or lower than, the variance noise in the old analog systems.
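
          As a rough illustration (the 1e-3 analog noise level below is an assumed figure for the sketch, not a measurement):

          ```python
          # Rough comparison: 16-bit quantization noise vs. an assumed analog
          # noise floor. Prints roughly 98 dB digital vs. 57 dB analog SNR.
          import numpy as np

          t = np.linspace(0, 1, 48000, endpoint=False)
          signal = np.sin(2 * np.pi * 440 * t)                  # clean 440 Hz tone

          q_noise = np.round(signal * 32767) / 32767 - signal   # quantization error
          a_noise = np.random.normal(0, 1e-3, t.shape)          # assumed analog hiss

          def snr_db(noise):
              return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

          print(f"digital SNR ~ {snr_db(q_noise):.0f} dB, analog SNR ~ {snr_db(a_noise):.0f} dB")
          ```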

          Maybe, even aside from the ghost, the mind is different. Maybe it’s special in some hidden manner that science can’t yet detect. If so, I would expect that artificial intelligence will eventually hit a brick wall until the hidden aspect is understood. Only time will tell.

          1. Hi, Mike, I think (differences aside) we’re on the same page. I don’t disagree with anything you said. I’ll run with this ball a bit and hand it back…

            “I can imagine a (1)-(2) hybrid…”

            Absolutely. These (1)-(3) are more like poles. There are lots of possible hybrids and combinations and even new stuff added into the mix.

            An interesting question is how important the actual physical network is (hence the key difference between (1) and (2)). Ideas about structure might come into play. What if it only works with full parallelism?

            “(3) seems required in the case that there is a physical ghost that needs to be generated.”

            Exactly. Or even if the physical structure is somehow key. We’ve talked about EMF standing waves inside the skull. Does the electrical operation of the brain matter? Could consciousness, in part, lie in that?

            Maybe it turns out we need a huge physical network, massively connected, fully parallel, and confined to a small area in a resonant chamber. The only data point we have right now (us) doesn’t falsify the idea.

            Given that the brain is a natural, evolved organ in a real-world environment, I think the idea can’t be ignored.

            “Even a positronic brain might fail if it’s missing some physical aspects of the neurobiological substrate.”

            Good point! We’ve fairly recently learned how important ‘white matter’ is. Glial cells turned out to be important, too. Who knows what else we’ll find.

            “I haven’t seen any evidence for a ghost,…”

            Other than subjective experience (the “something it is like” to be you) and whatever it is that gives us a sense of agency and free will. That is the “ghost” — perhaps the real difference is in what we think that is?

            “We all listen to music and watch movies that were originally recorded in analog formats, without any perceived loss of vital functions.”

            That has a lot to do with how limited our perceptions are. As you know, most video and audio is compressed, and we get away with that precisely because of those perceptual limits.

            Or consider how any video or movie you watch is just a series of snapshots. Life certainly does not occur at 25 or 30 frames per second.

            There’s no question numeric simulations can be accurate enough for many purposes. I agree our computational abilities far exceed what analog computers can do. (Just as any calculator is far more precise than a slide rule.)

            The entire crux of this hinges on whether they can be good enough to simulate consciousness. You think yes, I think no, and I mostly wish I could live long enough to see the answer! 🙂

            As you say, only time will tell! 😀

      2. Hi Wyrd,

        > Now I understand you’re saying a numerical simulation should be identical to a stable real-world system.

        I think identical is too strong. I don’t think even stable real world systems are identical to themselves if you think of them as dividing in a Many Worlds type scenario (and even if you don’t entertain such a scenario you can think about each mind having many possible futures, only one of which is realised). In such a scenario, each mind has an ensemble of possible futures containing slightly different versions of itself. Each version (or the overwhelming majority of them) continue to behave in a manner characteristic of that mind, but they don’t behave identically to each other, even though they are all (originally) the same mind. So all I’m saying is that a simulation of a mind should fit right in to this ensemble. It will not behave identically to an arbitrarily selected member of the ensemble, but it will behave in a manner characteristic of the versions of the mind in the ensemble.

        > The problem is that the differences mean they are different. Similarities don’t change that.

        The problem is that the similarities mean they are similar. Differences don’t change that. I don’t think it’s right to just hand-wave away the similarities. It’s not a matter of being impressed by them; it’s a matter that they exist.

        Flogging a dead horse. I beg you to move on to focus on other issues beyond wetness/laser analogies.

        > My understanding is that a numeric simulation cannot accurately describe a complex analog system.

        They can. Just not to infinite precision. Requiring infinite precision is unreasonable, because even an analog system cannot model another similar analog system to infinite precision. As mentioned above, analog systems do not even accurately model themselves to infinite precision under quantum noise; they have many possible futures, not just one, as a deterministic digital system might. Infinite precision simply cannot be required to do what a human brain can do. Either characteristic human cognition is robust under small perturbations, in which case it can be emulated digitally, or it isn’t, in which case it would quickly fall apart (the needle balancing in an earthquake).
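
        A quick sketch of the second horn, using the logistic map as a stand-in for any chaotic dynamics:

        ```python
        # Two trajectories differing by 1e-12 decorrelate within a few dozen
        # steps -- behaviour that depended on such digits would be fragile.
        x, y = 0.4, 0.4 + 1e-12
        for step in range(1, 51):
            x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
            if step % 10 == 0:
                print(step, abs(x - y))
        ```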

        > More that I don’t see it as a TM. Therefore, it’s not computational.

        No real computer is a TM, including a desktop computer. So I actually don’t understand what you mean here. You could be saying “I don’t see it as a computer, therefore it’s not computational” which is horribly circular (but works I guess just to restate your position, if that were necessary), or “I don’t think it is Turing complete”, which is clearly false, or “I think it can process information in ways a TM could not (i.e. it is a hypercomputer)” which is very controversial.

        > Pretty much exactly as you do. There is no need for a global clock.

        That seems to me to undercut your point that computation requires discrete time slices. In a distributed algorithm, there need be no explicit discrete time slices, at least globally. And yet it seems we agree that the ensemble of processors can be called a computer engaging in computation.

        Finally, in getting to the point about the “ghost”, you’re conflating the hard problem with easy problems.

        > But as I equate consciousness with the laser light, my gut sense is that the living system will be mindless. No ghost. No agency. Like a comatose brain.

        Let’s assume you’re right that there is no ghost. But if it is an accurate detailed simulation, it should still at least behave like a real brain. You seem to be thinking as if all the things a brain does with regard to social interaction, decision-making, navigating the world etc are not objectively observable and detectable in a simulation. But this is clearly not the case. It would be a zombie, sure, but it would have all the outward appearance of agency. Nothing like a comatose brain (or an insane brain or an epileptic brain).

        To be honest, I think the most important issue we need to get to the bottom of is whether analog devices can really do anything useful or significant that digital devices cannot. Everything else can wait, because it seems to me that this is, if not the most tractable, then at least the most interesting problem, as it’s not simply retreading ground covered ad nauseam by anyone who has ever discussed philosophy of mind.

        1. “The problem is that the similarities mean they are similar.”

          Focusing on the similarities and hand-waving away the differences is cargo cultism.

          I’m not dismissing the similarities. I just know similarity isn’t enough.

          See the discussion Mike and I had about the Mandelbrot.

          “No real computer is a TM, including a desktop computer. So I actually don’t understand what you mean here.”

          I’ve explained this repeatedly. There is some TM that represents a desktop computer. I do not perceive there is a TM that represents a mind.

          The car does not appear red, therefore it cannot be a red car.

          “In a distributed algorithm, there need be no explicit discrete time slices, at least globally.”

          As I said, there is no requirement for a global clock. Just that each system have a clock. Asynchronous communication protocols handle the rest.

          “But if it is an accurate detailed simulation, it should still at least behave like a real brain.”

          I agree it might. I disagree it must produce consciousness for reasons I’ve explained.

          “To be honest, I think the most important issue we need to get to the bottom of is whether analog devices can really do anything useful or significant that digital devices cannot.”

          At the moment, the answer is an obvious and resounding “Yes!”

          1. Hi Wyrd,

            Whether the similarities are enough, or the differences salient, depends on whether the thing we’re looking at turns on the similarities or on the differences. That is what is up for debate.

            Really not that impressed with the argument regarding the Mandelbrot, as the computation you are using to demonstrate the limitations of digital systems was computed with a digital system — precisely because it would be infeasible to do so with an analog system. As limited as digital systems may be, analog systems are more so, because of their inherent unreliability and imprecision.

            > I agree it might. I disagree it must produce consciousness for reasons I’ve explained.

            I’ve said I’m putting consciousness aside. I’m trying to zero in on why you think it might not behave as a brain would. You have only done so by pointing out that digital simulations have limited precision. I have responded by posing a dichotomy — either analog systems are also effectively limited in their precision, or they are infinitely precise, and if they are infinitely precise then they are infinitely fragile and so impossible.

            > At the moment, the answer is an obvious and resounding “Yes!”

            Not obvious to me for the above reasons!

          2. Differences and similarities aren’t symmetrical. It’s the barrel of wine, barrel of sewage asymmetry. That’s why differences are far more significant than similarities.

            “Really not that impressed with the argument regarding the Mandelbrot,…”

            It’s not about comparing analog and digital systems. It’s about the limits of calculation.

            “I’ve said I’m putting consciousness aside. I’m trying to zero in on why you think it might not behave as a brain would.”

            But since I’m saying it would fail to produce consciousness, isn’t consciousness the point?

            I honestly don’t understand your complaint about fragility. There’s nothing fragile about planetary orbits or turbulence or any other analog dynamical system. Do you know about “attractors” in phase space? That’s part of why dynamical physical systems are stable.
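
            Here’s a minimal sketch of that kind of stability, using a damped pendulum (all the parameters are arbitrary):

            ```python
            # A damped pendulum has a point attractor: perturbed starting
            # angles all settle onto the same rest state.
            import math

            def settle(theta0, dt=0.001, steps=60000, damping=0.5):
                theta, omega = theta0, 0.0
                for _ in range(steps):
                    omega += (-damping * omega - 9.81 * math.sin(theta)) * dt
                    theta += omega * dt
                return theta

            for start in (0.5, 0.5001, 1.0):
                print(start, "->", round(settle(start), 6))
            ```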

            “Not obvious to me for the above reasons!”

            It’s pretty obvious to me that Mozart, van Gogh, Einstein, myriad others, are far beyond anything we currently can do digitally.

            This all seems to turn on what we believe a numerical simulation can do. My background and beliefs suggest strong limits. Your background and beliefs suggest no limits. Perhaps we should leave it at that.

  18. I’ll write a post to explain this in detail later, but it’s on-topic for the discussion about numerical simulation and explains why I’m not a believer.

    I was watching a Mandelbrot zoom on YouTube, and these days powerful specialized calculation techniques allow these zooms to go deep. We’re talking about numbers with many hundreds of decimal digits being calculated with full precision. (As a reference, IEEE floating point doubles only have 15 digits of precision.) A consequence is that it can take hours for a fast system to render one frame.

    The level of zoom is formidable. If we say the pixels in the final zoom are Planck-length areas, then the full Mandelbrot (which is entirely contained within a circle of radius 2) is many, many times the size of the visible universe.

    A bit of math: there are 6.187×10^34 Planck lengths per meter, which works out to about 5.85×10^50 Planck lengths per light year. If the universe is 30×10^9 LY across, that is a mere 1.76×10^61 Planck lengths across. But a Mandelbrot zoom? An easy one goes to 10^200. A really deep one can go to 10^800. Let that sink in for a moment.

    Here’s the point: At that level of zoom, with numbers having hundreds of decimal digits with only the last few being different from pixel to pixel, the chaos absolutely explodes. All that precision magnifies what we can see, but it also magnifies the chaos.

    When I keep saying my understanding is that numerical simulations can never faithfully reproduce an analog system, this is exactly what I mean.
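
    For the curious, here’s a bare-bones sketch of the kind of arithmetic involved, using Python’s decimal module for precision beyond IEEE doubles (real deep-zoom renderers use far cleverer perturbation techniques):

    ```python
    # Escape-time test for a single point with arbitrary-precision decimals.
    from decimal import Decimal, getcontext

    getcontext().prec = 100  # 100 significant digits vs. ~15 for an IEEE double

    def escape_time(cr: Decimal, ci: Decimal, max_iter: int = 1000) -> int:
        zr = zi = Decimal(0)
        for n in range(max_iter):
            zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
            if zr * zr + zi * zi > 4:
                return n
        return max_iter  # never escaped; treated as inside the set

    print(escape_time(Decimal("-0.75"), Decimal("0.1")))
    ```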

    Here’s one such video that’s short and starts zoomed in and reverses, which will let you see the deep-down chaotic complexity I mean.

    1. Aw, crap. WP doesn’t support the {sup} tag in comments. Mike, any chance you can add them back in? All the exponents should be in the form of 10{sup}##{/sup}. (You can just use 10^## if that’s easier.)

      1. Just put them back in. Frustratingly when I attempted to do it with the new admin UI, it ate the sup tags again, even for me as the blog owner. I thought I was going to have to settle for the hats, but the old admin UI accepted them. I wonder what WP has against that particular tag since it obviously allows a lot of others like italics and bold ones.

        Anyway, take a look and make sure I did it right. Let me know if any adjustments are needed.

        1. Looks excellent, thanks! Yeah, that is weird about the tags. Maybe they’re just erring on the side of extreme caution. (Or maybe it’s a white-list situation and they didn’t think to add those.)

    2. Hi Wyrd,

      “Glial cells turned out to be important, too. Who knows what else we’ll find.”

      There’s also microglia, whose relevance was recently shown in a horrible case where a boy was born with a serious deficit of them due to a genetic defect. His brain was malformed, and he had a short, painful life, ending at about 10 months. There’s no evidence microglia participate directly in cognition, but they seem to guide the development of the brain, ensuring that neurons expand into the right places and have the right connections.

      “That has a lot to do with how bad our perceptions are.”

      Definitely. But it doesn’t seem like artificial intelligence needs to be exactly like us for us to recognize a fellow mind there.

      In the admittedly much more difficult case of mind copying, if a copy of me sufficiently reproduces my behavior such that none of my close friends or family, nor I, nor even copy-me, can tell the difference, do the remaining differences matter? Yes, due to chaos dynamics, I and copy-me will inevitably have differences, but if no one can be aware of them, should we be concerned?

      Perhaps a more relevant question is, can it actually be close enough that someone who’s known me all my life couldn’t tell with their all too human perceptions? And even if they could, can it be close enough that they’d still accept the copy as me, particularly if original-me was dead by then? Or might there be some uncanny valley type effect to contend with?

      On the Mandelbrot set, I’m probably missing something. How relevant is it for a physical system?

      1. “Perhaps a more relevant question is, can it actually be close enough that someone who’s known me all my life couldn’t tell with their all too human perceptions?”

        IF a copy or a simulation did emulate you that well, I’d call it a copy of you, no problem.

        The copy would report having the same subjective experience you report, so externally they would be functionally identical. (It would only matter in the edge case where: (1) “souls” matter; (2) “souls” must be naturally born, organic, whatever.)

        I do find the p-zombie issue interesting. Is your copy really having the subjective experience it reports having? Only zombies know for sure!

        I believe you do not watch The Good Place (NBC)? Suffice to say there is a very advanced, practically omnipotent AI named Janet. She is indistinguishable from a human (and played brilliantly by D’Arcy Carden; she alone is worth watching the show for, but so are Ted Danson and pretty much everyone else in the cast).

        Janet has a kill switch. (She can be rebooted, but then she’s a new Janet.) There have been situations where Janet is insisting that she be “killed” and rebooted. But when her kill switch is approached, another routine kicks in, and she piteously, strenuously begs and pleads for her life.

        The moment the taken aback human retreats, Janet resumes insisting they push the switch. It’s hysterical.

        In the context of this discussion, Janet herself insists she’s just an AI, that the begging is just a routine she can’t help (survival instinct), but it means nothing.

        As an aside, it turns out she becomes a little more sapient and “human” with each reboot because it incorporates lessons learned; it’s not a cold restart. As such, rebooting her is more and more like actual murder (including to Janet).

        “How relevant is [the Mandelbrot set] for a physical system?”

        It’s a mathematical object with an extremely simple description, but which has seemingly infinite complexity and demonstrates chaotic behavior along its boundary. As such, it shows the limits of what a mathematical model can do in certain situations. That has implications for the ability of a numerical model to represent reality, because the model itself is the source of chaos.

        In particular, it shows that increasing the precision doesn’t always help. The chaos is still there.

        Remember how in Westworld the James Delos clone kept breaking down? Even if a simulation had a good starting place (say, from careful analysis of a brain), I suspect it might diverge in catastrophic ways, perhaps very quickly.

        Alternately, chaotic effects might prevent the model from working (from producing a ghost) or might result in a chaotic ghost.

        Or not. I just know that modeling turbulence and n-body orbits is very, very hard.

        1. “Only zombies know for sure!”

          The problem is that not even the zombie would know. Supposedly it would have systems that do all the processing needed to produce results insisting it had subjective experience, without the actual experience; there would be nothing inside. If copies were actually zombies, neither we nor they would know. The fear is that if the whole human race became copies, we might extinguish consciousness from humanity.

          One philosopher suggested that we start with hybrid systems and see if the organic parts of us can be conscious of what’s in the inorganic parts. The problem I see with that is the inorganic parts might be able to supply information to the organic parts as if they were conscious, and those organic parts may not know the difference.

          Ultimately there’s no way to know beyond all doubt. All we have is our intuitions about what we’re observing. Of course, that’s all we ever have for concluding that anyone else is conscious, but I could see some people never accepting copies as the real thing.

          You’ve told me about The Good Place before I think, although I don’t know if you mentioned Janet before. Sounds interesting.

          On the Westworld clone, I actually thought that was a bit contrived. They could build a fully artificial human being with no problem, including one that could discuss its feelings, have manufactured memories, etc. They could port a human mind with enough fidelity that it could recognize someone it knew and have a conversation with them. Yet they couldn’t marry those two together to produce a stable system? Even after years of trying?

          It felt to me like a plot trick to close off that existential question so it wouldn’t take over the show. (Which it probably would, so I can’t blame them too much.) In reality, if they were having that much trouble with the port, I suspect even the initial conversation would have been impossible. The host would have immediately been incoherent. But doing it the way they did was far more dramatic, particularly as it’s only shown happening to jerks.

          1. “The fear is that if the whole human race became copies, we might extinguish consciousness from humanity.”

            I saw a video recently about an extension of the VR hypothesis that asks if we’re really like Westworld robots. Called “Earthworld,” it’s kind of a mashup of the Zoo hypothesis and the VR. We’re constructed as an amusement park or historical exhibit.

            I agree with your assessment of slow replacement.

            I can’t recommend The Good Place highly enough. A sitcom with serious philosophy as its central core. Extremely well-written and very smart.

            “On the Westworld clone, I actually thought that was a bit contrived.”

            There were definite in-world issues with it. I discussed it here.

            It did make for one of the better episodes of the season, but it’s hard to explain.

            The divergence and resulting insanity worked for me; it’s actually one thing I think might happen, as I’ve mentioned. But then the hosts should diverge, too.

            If I had to justify it, I’d write a story about how host minds are much simpler because they start as numerical models and don’t contain the junk a human mind does. They can maintain their stability, in part because there’s a clean original model as a reference. But a human mind is too holistic to separate signal from noise, so the copy carries a load of junk that can’t be entirely faithfully reproduced, and that eventually causes the model to diverge wildly.

            Decades of Star Trek have trained me in backfilling… 😉

          2. Re zombies: if you believe p-zombies are possible, then it is entirely possible that the current population of humans, including you and I, are p-zombies, which makes the concept of consciousness vacuous. It would be like saying the smallest constituents of matter are angels. If we happen to figure out what quarks are made of, those would be the new angels. The concept of angels, or consciousness, would tell us nothing about our world. There would be nothing we could do with that information besides tell stories to ourselves.

            *

          3. I personally don’t buy p-zombies. But I see consciousness as something that only exists subjectively. Put another way, maybe we’re all zombies with a simplified internal model of self that makes us compute that more is going on, zombies who think they’re more than zombies.

          4. Mike, I feel the need to push back a little here. How can something exist only subjectively? How might that work? We should be able to explain objectively what a subjective feeling is. I think we can.

            *

            The problem is that consciousness is a pre-scientific term that doesn’t map well onto scientific understandings of the brain. In that way, it’s similar to love, beauty, or life. In a deflated sense, we can say that it maps to some combination of objective cognitive capabilities. The problem is no one seems able to agree on which ones, which is why I usually describe a hierarchy.

            Then there is the more inflated conception of something above and apart from the functional capabilities.

            Michael Graziano reported a story of a patient with a delusion, that he had a squirrel living in his head. The patient’s doctor told him they needed to figure out why he thought there was a squirrel in his head. The patient disagreed. What needed to be figured out, according to the patient, is how the squirrel got there.

            For the patient, the experience of the squirrel exists subjectively, but the squirrel itself doesn’t exist objectively. When it’s said that consciousness is an illusion, the reply is often that if so, the illusion is the experience. I have sympathy with this view; it’s why I don’t say consciousness is an illusion. But our experience is built on a simplified model, one that implies there is something above and beyond the functionality, and it’s a model of something that isn’t there.

          6. On p-zombies: that’s Chalmers’s term (abbreviated). If you don’t agree with Chalmers about the possibility of his kind of zombies, you shouldn’t use his term. Unless you love inviting confusion. If you believe that a non-conscious being might be able to fool some observers – and I think all of us here believe that – you should use a different word for *those* “zombies”.

          7. I think there is already confusion. Chalmers’ specific formulation of a being physically identical to a conscious one, but who isn’t themselves conscious, is often conflated with the weaker form sometimes called a behavioral zombie. But a b-zombie is widely regarded as a type of p-zombie. So using the word “zombie” for both is inevitable.

            At this point if I specifically meant a Chalmers p-zombie, I think I would say “Chalmers p-zombie”.

            I would agree that a b-zombie who can fool people into thinking it’s conscious for a short time is possible. But the longer it succeeds, the more we have to consider the possibility that it has some kind of consciousness.

          8. On my understanding, if it processes information in any significant way, it has consciousness and is not a zombie. The only way something could behave like it is processing information without actually processing information is if it has pre-scripted behavior, and that behavior just accidentally matches the behavior of a conscious entity.

            *

          9. Wouldn’t following pre-scripted behavior be information processing? Or by “in any significant way” do you mean something beyond that (and beyond typical computer systems)?

            By “on my understanding” I meant “using my definition of information processing”, which requires the Input —> [mechanism] —> Output paradigm. Remember qomputation, komputation, pkomputation, computation? I personally put consciousness at pkomputation, i.e., use of symbolic symbols. Because a human conversation necessarily uses symbolic symbols, it would be impossible for an entity to participate in a conversation but fake using such symbols. The only way to get the appearance of human-like conversation without pkomputation would be to pre-script one side of the conversation from start to end and then hope it works out. It would have to be some insane coincidence for that output to seem like normal human conversation.

            *

          11. Right, but wouldn’t following a script entail the same things, the input->mechanism->output and symbol processing?

            It seems to me, along the lines of our other conversation, what’s missing from the script follower is its own models. It’s using the models of the script writer as well as the other conversation participant (although that participant may not realize it).

            I do definitely agree that the probability of a script working to fool someone for any significant length of time is so low that for pragmatic purposes it rounds to zero.

          12. SelfAware,

            Yes, philosophical zombies (p-zombies) are “often conflated with the weaker form sometimes called a behavioral zombie.” So please don’t make that problem any worse and instead solemnly swear to use another term where appropriate. “B-zombie” will do. (But consider the movie Ex_Machina: does the robot of that movie really deserve any label involving the word “zombie”? Won’t “robot” suffice?) Stanford Encyclopedia will tell you how philosophers use their terms of art: note the lack of b-zombies in that article.

          13. Paul,
            I can only promise to use language in the clearest way I can in relation to the point I’m currently trying to make. If we try to challenge every misconception in every sentence, those sentences will read like the densest legalese anyone’s ever seen. I think the mind copying context of the discussion above should have made it clear that we were discussing the behavioral variety of philosophical zombies.

            The word “zombie” was originally stolen from Voodoo narratives of re-animated and soulless dead bodies. Language evolves and so is almost always ambiguous. Philosophical discussions are far more productive when we can show each other interpretational charity.

            1. Because now we’re modeling a very complex real-world object. The nodes in a NN are implementations of a comparatively simple mathematical model, a model that was designed with software in mind, designed to be coded.

               At the system level it may turn out that subtle interactions among the nodes of the system are crucial, which will seriously complicate the whole thing.

  19. Hi Wyrd,

    Wine and sewage are not symmetrical. Whether differences or similarities matter depends entirely on what you’re interested in. A circle made of wood has the same geometric properties as a circle made of copper, and if you’re interested in geometry then the differences don’t matter. The wooden circle doesn’t conduct electricity as well, however, so if you’re interested in conductivity then the similarities don’t matter.

    Your brain is not identical to mine. I think you are conscious because your brain is similar to mine and I know I am conscious. The differences don’t matter, or I might doubt you are conscious.

    > It’s about the limits of calculation.

    Except this doesn’t really show anything about the limits of calculation. It just shows that we can get as precise as we want with our calculation, and that’s pretty bloody precise in this example. If we really wanted more precision, we could get there. How this relates to whether the brain is a computer I no longer see.

    > I honestly don’t understand your complaint about fragility. There’s nothing fragile about planetary orbits or turbulence or any other analog dynamical system. Do you know about “attractors” in phase space? That’s part of why dynamical physical systems are stable.

    The fact that you don’t yet understand my point suggests that some sort of progress can yet be made, because to me this point is crucial and pretty clear.

    I’m presenting a dilemma, but you’re only engaging with one horn at a time: you dismiss each side separately and then act confused about why I would pose such an issue. But if you reject both sides of the dilemma, then as I see it you don’t have a leg to stand on with regard to whether analog can do anything digital can’t.

    I’m trying to get you to see that in order to say that analog systems can do something digital systems cannot, you need to argue either that (a) the analog system avails of infinite precision or (b) the analog system is robust (attractors, etc) and so does not require infinite precision. The point about fragility refutes (a).

    Yes, I know what an attractor is. So if you’re choosing option (b), a digital simulation of an analog system, if precise enough, will exhibit the same behaviour with regard to attractors. Hence the limited precision of a digital simulation doesn’t matter — the simulated system is robust in exactly the same way as the analog system because of these attractors.
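
    To make that concrete, here’s a sketch along the same lines as the pendulum example above: vary the simulation’s own precision (its step size) and the attractor behaviour survives (parameters are arbitrary):

    ```python
    # The simulated pendulum lands on the same attractor at every precision:
    # the simulation's finite resolution doesn't change the characteristic
    # behaviour, because the attractor soaks up the small errors.
    import math

    def settle(theta0=0.5, dt=0.001, damping=0.5):
        theta, omega = theta0, 0.0
        for _ in range(int(60 / dt)):      # simulate 60 seconds
            omega += (-damping * omega - 9.81 * math.sin(theta)) * dt
            theta += omega * dt
        return theta

    for dt in (0.01, 0.001, 0.0001):
        print(dt, "->", round(settle(dt=dt), 6))
    ```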

    > But since I’m saying it would fail to produce consciousness, isn’t consciousness the point?

    Ultimately, perhaps. But functional issues are more tractable, and as long as you’re saying that you doubt that a digital simulation could ever be functionally equivalent to a brain, then that’s a more productive issue to focus on.

    > It’s pretty obvious to me that Mozart, van Gogh, Einstein, myriad others, are far beyond anything we currently can do digitally.

    Oh hell yeah. Currently! And perhaps ever! The question is not what can we achieve right now with current tech, it’s about the limits of digital computation. In light of thought experiments about simulating brains, and the dilemma I posed to you, I can’t see how digital computation could be so limited as to make these achievements impossible in principle.

    > Your background and beliefs suggest no limits.

    There are limits, e.g. Gödel, Halting Problem etc. There are limits of feasibility and cost and precision. However I think analog systems are just as limited, if not more so. So if the brain can do it then I think it should be possible in principle for a digital system to do it also. If I didn’t have the example of the brain, or if I were a Cartesian dualist such that I believed that my mind were some sort of spirit, then I would perhaps doubt that a digital system could ever do what the brain can do.

    We can leave it at that if you like but as far as I can see you have a problem because you haven’t really tackled the dilemma I’ve posed regarding the effective equivalence (or not) of analog/digital information processing.

    1. “Wine and sewage are not symmetrical.”

      Neither are similarities and differences. Similarities have low entropy, differences have high entropy.

      “Except [the Mandelbrot] doesn’t really show anything about the limits of calculation.”

      It demonstrates chaos, undecidability, and computational cost. (I’d think its Tegmarkian computational qualities would appeal: the M-set is explicitly defined as a computation, yet, because of undecidability, it can never be fully computed.)

      “(b) the analog system is robust (attractors, etc) and so does not require infinite precision.”

      Yes, an analog system is robust, but I think the second claim needs unpacking. The problem isn’t individual numbers, which I agree can be arbitrarily precise. The problem is what happens to a numerical simulation over time.

      In general I think analog systems differ from numerical representations in important ways. No matter how good the resolution of a screen is, it’s still only emitting red, green, and blue photons. The real-world version of that same scene has photons of many different frequencies. I’m skeptical that differences along those lines don’t matter in some important way.

      (On top of that, I’m skeptical a numerical simulation of a brain, or a neural net, will give rise to consciousness.)
