Gödel’s incompleteness theorems don’t rule out artificial intelligence

I’ve posted a number of times about artificial intelligence, mind uploading, and related topics.  Several issues tend to come up in the resulting discussions, one of them being Kurt Gödel’s incompleteness theorems.

The typical line of argument goes something like this: Gödel implies that there are problems no algorithmic system can solve but that humans can, therefore the computational theory of mind is wrong, artificial general intelligence is impossible, and animal, or at least human, minds require some as yet unknown physics, most likely having something to do with quantum wave function collapse (since that remains an intractable mystery in physics).

This idea was made popular by authors like Roger Penrose, a mathematician and theoretical physicist, and Stuart Hameroff, an anesthesiologist.  But it follows earlier speculations from philosopher J.R. Lucas, and from Gödel himself, although Gödel was far more cautious in his views than the later writers.

I’ve historically avoided looking into these arguments for a few reasons.  First, I stink at mathematics and assumed that would get in the way of understanding them.  Second, given all the times reality has stomped over the most careful logical and mathematical deductions of great thinkers, I have a tendency to dismiss any assertions about reality based solely on theorems (other than for mathematical realities).  Finally, these arguments are widely regarded as unsuccessful by most scientists and philosophers.

Still, the argument does seem to capture the imagination of a lot of people.  Fortunately, it turns out that there’s a lot of material describing the theorems that doesn’t get lost in the technicalities.  One excellent source is this two-part YouTube video of Mark Colyvan describing them, going right to the edge of, but not falling into, the mathematical details.  (These videos add up to about 45 minutes.  You don’t have to watch them to understand this post, which continues below.  I’m just including them for those who want more details.)

There are many concise English statements of the theorems.  Based on what I’ve been able to find out about them, these versions from the Stanford Encyclopedia of Philosophy seem relatively comprehensive.

First incompleteness theorem
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.

Second incompleteness theorem
For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself.

All of the sources stress how profound these theorems are for mathematics and logic.  They show that any mathematical system is going to have blind spots, called Gödelian sentences: aspects of itself that it cannot logically prove or disprove.  And if mathematics as a whole is taken as a system, they show that there may be mathematical realities which can never be proven or disproven.

A common English analogy of a Gödelian sentence is the Liar’s Paradox:

This sentence is false.

Is the sentence true or false?  If it’s true, then it’s false, but if it’s false, then it’s true.

Another analogy more closely relevant to the theorems is:

This statement cannot be proven.

If the statement could be proven, it would be false, and the system would have proven a falsehood.  If it can’t be proven…, well hopefully you get the picture.  The point of this last example is that it’s a statement we can say we know to be true, even if we can’t logically prove it within the system.
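For anyone who wants it slightly more formally (feel free to skip this), the standard construction uses the diagonal lemma to produce a sentence G_F that the system F itself proves to be equivalent to the claim that G_F is unprovable in F:

\[ F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner) \]

Here Prov_F is F’s provability predicate and the corner brackets denote the numerical code of G_F.  If F is consistent, it cannot prove G_F, since doing so would make G_F false and F would have proven a falsehood; and (with an extra assumption Gödel needed, later removed by Rosser) it cannot prove the negation of G_F either.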

Okay, fair enough.  But what does this have to do with human minds and artificial intelligence?  Well, a computer program is an algorithm, a mathematical system.  It would seem to follow that such a system will have the same kind of blind spots, its own Gödelian sentences, as any other mathematical system.  The idea is that a purely logical system cannot resolve these sentences, but a human mind can.  Therefore, the argument goes, a human mind must be non-algorithmic, or at least some portion of it must be.

I think this argument fails for two broad reasons.  To start, let’s look at the English versions of the theorems again, paying particular attention to a couple of points in each: the qualifier “consistent”, and the phrase “in F”:

First incompleteness theorem
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.

Second incompleteness theorem
For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself.

The first point to consider is that the theorem addresses a system’s ability to prove Gödelian sentences within itself.  Proving such sentences using information from outside the system is reportedly not an issue.  For many statements that we, human minds, can look at and see the truth of, it is quite plausible that we are simply seeing that truth by comparing it with a wide range of patterns from our overall life experiences.

In other words, we are using information from outside the system to see the truth of a Gödelian sentence within that system.  There is nothing that prevents this from being a fully algorithmic process.  That is, the human mind can itself be a computational system that sees truths in other systems.

Of course, this point doesn’t prevent there being Gödelian sentences within the system that is the human mind.  If the human mind is algorithmic, there may be aspects of it that the mind itself can’t logically prove.

It pays to remember that describing the mind as one system can be a bit misleading.  It’s really a collection of interacting systems.  Each subsystem may well contain its own Gödelian sentences that the other subsystems can process without trouble.  Of course, a collection of systems is itself an overall system, albeit a profoundly complex one, and it seems likely to have its own Gödelian sentences.

And using outside systems doesn’t solve the issue of mathematics considered as a whole, where there would be no “outside” to the system.  If we regard all of mathematics as one overall system, then wouldn’t there be truths that can’t be mathematically proven?  Gödel himself considered the possibility that there may be mathematical problems which could never be solved, although he thought it implausible, feeling that there must be an infinite aspect to the human mind that enables solving them.

But here’s where the second point comes in.  Gödel’s theorems apply to consistent systems.  Is the human mind consistent?  It may be consistent in the sense that, given the same sensory perceptions, beliefs, and natural tendencies, it would always arrive at the same answer.  Of course, the same combination of these factors will never repeat itself, making consistency difficult to demonstrate.

Perhaps a better question is, what if we allow for an algorithm that isn’t guaranteed to consistently derive a Gödelian sentence?  Inconsistencies in mathematical proofs make them useless.  Wouldn’t they also make algorithms useless?  Not necessarily.

If an algorithm that solves a problem with certainty is impossible, that doesn’t rule out algorithms that arrive at probable solutions.  Indeed, when we remember that human intuition is often wrong, it seems quite plausible that a lot of what’s happening when we say we “know” the truth of a Gödelian sentence is exactly this kind of algorithmic, probabilistic reasoning.
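As a concrete (and admittedly toy) illustration of trading certainty for usefulness, consider the Fermat primality test, a classic probabilistic algorithm.  It declares a number “probably prime” after a few random checks.  It’s fast and usually right, but it can occasionally be fooled:

```python
import random

def probably_prime(n, trials=20):
    """Fermat primality test: fast, but only probabilistically correct.

    A False result is always right (we found a witness that n is composite),
    but a True result is only "probably" right."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:   # Fermat's little theorem fails: n is composite
            return False
    return True  # no witness found: probably prime, but not proven prime

print(probably_prime(104729))         # True (104729 really is prime)
print(probably_prime(104730))         # False (composite)
print(probably_prime(561, trials=1))  # sometimes wrongly True: 561 is a Carmichael number
```

A proof of primality would give certainty, but probabilistic tests like this are what get used in practice, precisely because being right with overwhelming probability is usually good enough.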

This type of reasoning can of course be wrong.  It frequently is in humans, including in mathematicians who intuitively feel they know a result they haven’t yet proven.  If we allow computers to function this way, then Gödel’s theorems seem to be circumvented.  Indeed, this is an insight that goes all the way back to Alan Turing in 1947:

…I would say that fair play must be given to the machine. Instead of it giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques… In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.

So Gödel’s theorems don’t seem to rule out machine intelligence or the computational theory of mind, although they do imply interesting things about how intelligence works.  (As I understand it, artificial intelligence researchers have known about this pretty much from the beginning and incorporated it into their models.)

Unless of course I’m missing something?  Perhaps something that resides in my own Gödelian sentences?

40 thoughts on “Gödel’s incompleteness theorems don’t rule out artificial intelligence”

  1. What many people seem to ignore is that the only “intelligent” species of which we are aware is so due to a very flawed, wildly inconsistent, illusion-prone biological system. So, why would an artificial system need to be so damned perfect to simulate such a flawed construct?

      1. That’s an interesting idea. But for what, other than human thinking, would a flawed biological system be a problem? Can any mathematician know what thinking actually is and how it actually works? Let’s say you think of a problem and then you find the solution. How do you do this? Is it pure trial and error, the way we program computers?
        Buddhist monks and yogis spend decades observing just this simple phenomenon. I’m not sure any of them could ever find a viable mathematical model that could be implemented to model what we call the experience of feeling and subjectivity.
        Cheers

        1. Thanks for commenting!

          The science seems to show that introspection, the chief method available to Buddhist monks, is of limited usefulness in understanding consciousness and the mind. Mathematicians also probably have no more insight into it than any other educated lay person. But scientific psychologists and neuroscientists do. That’s where progress is being made.

  2. Thanks for the explanation of Godel’s incompleteness theorem. (I didn’t watch the videos out of fear that they’d complicate what you’ve put forth so clearly and simply…my mind can only take so much.) 🙂

    I don’t know if this speaks to the matter at all, and I suspect it doesn’t, but I’ve heard the idea that logic on the whole can’t be proven by logic (which sounds similar to the theorem, although much broader. And this knowledge is self-evident.) In other words, logic is at its foundation intuitive. There’s no way to “prove” logic itself.

    Before I got to your section on probability, I had the thought, “Don’t we mostly operate based on beliefs or suspension of disbelief?” I guess this is similar to probability, although weaker (I’m taking your term “probability” in a more mathematical sense…I don’t know if that’s what you meant). I think we might look at a Godelian sentence and say, “I’ll call this ‘true’ and see what happens” rather than “This is true…based on logic or reasoning,” or “I don’t know if this is true and I won’t move on until I find out.” We are willing and able to suspend judgment on matters that aren’t clear, or make Godelian sentences “true for now”—even if we seek nothing more than a punchline to a joke…which might be a kind of nonsense that’s internally consistent.

    (BTW, The idea of human minds being algorithmic strikes me as very strange. I probably don’t know enough about what is meant by “algorithmic.”)

    On fallibility, Alan Turing’s quote, and human thinking: The idea that we err seems to be central to our thinking of how we think, the stumbling block for AI. I wonder if a study in how we err, where we err, would be beneficial to AI research? I’m pretty sure it’s not random, not a matter of “an occasional wrong answer.” The wrong answer must make a kind of sense. On this note, I’ve been planning a post on AI and phenomenology, since the latter pays a lot of attention to the way we err, revealing what error is. My delays in writing the post are due to my lack of knowledge in AI. I hope that if I ever do write the post, you’ll help clarify things for me and point out my, um, errors.

    1. Thanks. Glad you found it useful.

      I’ve heard that about logic myself. It’s one of the reasons I see logic as essentially a theory about the most fundamental relationships in reality. (The other is that Graham Priest, an expert in logic, also takes that position, and explores eastern logic that often seems…illogical by western logic.) Logic is a theory that’s ultimately based on induction, which of course never proves anything beyond doubt. And like all theories, it should be subject to revision.

      On suspending disbelief, I do think that’s another way of stating it. Since this was ostensibly a post on mathematics and logic, I wanted to phrase it in those terms. But I think you’re right. We often see something that we can’t logically understand, and then try on a solution to see what happens, although the first attempts are biased by our beliefs. When those biases prove beneficial, we often consider that first attempt inspired or based on intuition. Indeed, we often use this even when we could understand something logically, but want to see if we can get away with the quick heuristic.

      The term “algorithmic” can sometimes be controversial. I’m using it here in the sense of a process or set of rules. The computational theory of mind is that the components and operations of the mind ultimately boil down to these “mindless” processes. No one seriously sees this happening in the way it does in a modern computer, but more in a massively parallel neural network fashion.

      On how we err, I think the main benefit is that it allows us to go beyond simple logical conclusions. But it doesn’t rule out algorithms. Meteorologists give us a probability of rain in their forecasts, which they arrive at using formulas and models (i.e. algorithms). It’s rarely possible for them to tell us whether it will rain tomorrow with certainty, but if they say 90% chance of rain, most of us will regard that as the same as telling us that it will rain, even if we’ll be wrong 10% of the time when we use that heuristic.

      I definitely want to see your post on AI and phenomenology. I wouldn’t worry too much about AI expertise, unless your post gets into the nitty gritty of AI engineering. Even after reading about Gödel, I’m definitely no expert, but felt like I’d learned enough to write the post.

      1. You’ve inspired me to start working on that post now. I’m focusing mostly on phenomenology of perception…hopefully it won’t be too incoherent.

        “On how we err, I think the main benefit is that it allows us to go beyond simple logical conclusions. But it doesn’t rule out algorithms.”

        I’m not sure (knowing nothing of how computers and algorithms work). There is a great deal to be discussed on human error. That complexity could be discovered, I think, but I have no idea whether or not it can be translated algorithmically. Not all error is created equal. 🙂

        Here’s a funny example of error:

        http://nicholasrossis.me/2015/12/29/artificially-created-romance-novels-the-next-big-thing/

        1. “You’ve inspired me to start working on that post now. I’m focusing mostly on phenomenology of perception…hopefully it won’t be too incoherent.”
          Awesome! Looking forward to it.

          Just to clarify: algorithms always proceed logically, even when the logic is wrong for the given situation. If I feel intense fear, the process of me feeling fear may be algorithmic, whether or not it’s actually logical for me to feel fear in the current situation.

          On the example of error: ouch! Yeah, probably not going to see AI generated stories for a while. Although to be fair to the AIs, those aren’t exactly the best pictures to feed them if you want a romantic story. I can see why they’d start with romances though. From what I’ve read, they are the most rigidly formulaic.

  3. Interesting comments, but I think one can mount a more definite case against the relevance of Lucas/Penrose type Gödelian arguments regarding strong AI. First, the human mind can in fact only infer the correctness of the Gödel statement G(T) of a formal theory T if T is consistent. Otherwise, if T is inconsistent, it proves every statement, including G(T), meaning that G(T) is, in fact, false—as you note, it asserts its own unprovability from T.

    However, in order to conclude that T is consistent, one would have to be able to carry out reasoning going ‘beyond’ T already, since, as the second incompleteness theorem asserts, T itself does not suffice to prove its own consistency. Thus, in order to ‘see’ the truth of G(T), the human mathematician must already be able to carry out reasoning outside of T; and hence, that they can establish the truth of G(T) does not tell us that they can’t be described by T, as we had to implicitly assume so from the start. Put the other way around, it’s perfectly consistent to assume that a human mathematician is described by a formal system T whose consistency they can’t prove, since in that case, they can’t establish the truth of G(T), but only that ‘G(T) is true, or T is inconsistent’—which is, however, a logical triviality.

    Perhaps even more damning, no agent in contact with the real world really can be described by a static formal system T—whenever the agent gains new information, it effectively gains new axioms from which to reason. This includes even being asked the question of whether the system T that governs it (say, at a given point in time) is consistent: for in order to ask this question, the agent must be supplied a description D(T) that is intelligible to it. But then, it is no longer governed by the formal theory T, but rather, by T + D(T), and there is no contradiction at all involved in supposing that T + D(T) may prove G(T). It won’t prove G(T + D(T)), of course, but that’s a different question, and in order to ask that one, one must again supply a description of the formal system, and so on.

    So ultimately, the Gödelian phenomena simply don’t tell us anything regarding the possibility of strong AI.

    1. Thanks Jochen. I’m grateful for your insights. It seems to me that we’re somewhat saying the same thing, although your language is more formal and precise, and you have some additional insights on the details.

      I especially like your distinction between T and D(T), an insight I didn’t have. A system supplied with a description of itself is effectively a larger system than the original. While it’s related to my first point in the post, I have to admit this aspect of it hadn’t occurred to me. Thank you!

      Totally agree with your final point.

    1. Hmmm. Perlovsky seems to be in the same camp as Penrose and Hameroff. Given the previous writings you’ve shown me, I can’t say I’m too surprised. I think he makes the same mistake that Turing warned against, that is, insisting that every answer an AI comes up with must have a precise algorithm. If we allow for imprecise algorithms, probabilistic reasoning, with all the shortcuts that then become available, then a lot of things become possible, albeit with many of the same flaws as human / animal minds.

      1. I’d been meaning to address this earlier, but I’m not sure how ‘probabilistic reasoning’ is supposed to help (I’m also not sure what you mean by ‘imprecise algorithms’, whether that’s supposed to be the same thing, or something different…): Any probabilistic algorithm can be implemented deterministically (via branching/interleaving strategies, e.g. running the computation with varied values of a random parameter and then taking a majority vote) and vice versa, such that the set of things that can be computed using probabilistic reasoning is exactly the same as the set of things that can be computed deterministically—the two models are equivalent with respect to their computational power.
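        To make the majority-vote idea concrete, here’s a toy sketch (nothing rigorous, and noisy_test is just a made-up stand-in for any randomized decision procedure): run it under a fixed set of seeds and vote, and the wrapper becomes fully deterministic:

        ```python
        import random

        def noisy_test(x, rng):
            """Stand-in randomized subroutine: answers "is x divisible by 3?"
            correctly about 90% of the time."""
            truth = (x % 3 == 0)
            return truth if rng.random() < 0.9 else not truth

        def derandomized_test(x, runs=101):
            """Run the randomized test under fixed seeds and take a majority vote.
            The same input always yields the same output: the randomness is gone,
            yet the answer matches what the randomized version would usually say."""
            votes = sum(noisy_test(x, random.Random(seed)) for seed in range(runs))
            return votes > runs // 2

        print(derandomized_test(9))   # True, and identical on every call
        print(derandomized_test(10))  # False, and identical on every call
        ```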

        1. Just to be clear, I didn’t mean to imply that probabilistic algorithms aren’t deterministic. They’re just not guaranteed to come up with the right answer.

          Consider the weather forecast. The process of determining with certainty whether it will rain is far more difficult than determining the probability of rain. The first requires a completeness of information and a degree of calculation that, due to chaotic dynamics, may be forever impossible. The second can never give the answer with certainty, but it requires far less data and processing power, and it can provide useful information, even if conclusions based on that information will sometimes be wrong.

          The same thing happens with polling and surveys. There is always a margin of error and a possibility of being outright wrong. But by accepting that uncertainty, we can derive the probable views of an entire population by only looking at a small sample size. Getting those views with certainty would require polling the entire population, which is almost always cost prohibitive.
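          To put a rough number on that (just the standard back-of-the-envelope formula for a simple random sample):

          ```python
          # Rough 95% margin of error for a simple random sample (standard formula).
          n, p, z = 1000, 0.5, 1.96      # sample size, assumed proportion, 95% z-score
          moe = z * (p * (1 - p) / n) ** 0.5
          print(round(moe, 3))           # ~0.031, i.e. about a 3-point margin of error
          ```

          So a poll of about a thousand people gets us within roughly three points of the whole population’s view, at a tiny fraction of the cost of asking everyone.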

          Once we allow a machine to work this way, essentially with shortcuts, it can come to conclusions that it might never be able to reach if it had to be infallible. It seems probable that humans reach a lot of our intuitions this way, using these kinds of shortcuts.

      2. I can’t say much, not understanding this as well as I’d like, but while Perlovsky does cite Penrose and Lucas, I don’t think he is doing so in support of “the mystery” of consciousness. I may well be completely wrong, but reading him sounds to me something like what you’re describing:

        “Dynamic logic algorithms model uncertainty by using similarity functions among representations of concepts and incoming data. Often, these similarity functions are modeled functionally similar to probability densities. The dynamic logic idea “from vague-to-crisp” is implemented by initiating probability density functions with large variances. In the iterative dynamic logic processes variances might be reduced to small values, resulting in logic-like very narrow pdfs.” – from the link

        … though I’m more than a little in the dark on any significance the last sentence might have.
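        If I’ve read that right, a toy version of the vague-to-crisp idea might look something like this (purely my own illustrative sketch of the quoted description, not Perlovsky’s actual algorithm): score each data point against some concept prototypes with a Gaussian similarity, start with a large variance, and shrink it:

        ```python
        import math

        def soft_assignments(data, prototypes, sigma):
            """Gaussian similarity of each data point to each concept prototype,
            normalized so each point's assignments sum to 1."""
            rows = []
            for x in data:
                sims = [math.exp(-(x - p) ** 2 / (2 * sigma ** 2)) for p in prototypes]
                total = sum(sims)
                rows.append([round(s / total, 2) for s in sims])
            return rows

        data, prototypes = [0.9, 1.1, 4.8, 5.2], [1.0, 5.0]
        for sigma in (10.0, 2.0, 0.5):       # anneal the variance downward
            print(sigma, soft_assignments(data, prototypes, sigma))
        # With sigma = 10 every point is split ~50/50 between the concepts (vague);
        # with sigma = 0.5 each point belongs almost entirely to one concept (crisp).
        ```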

        “… consistent systems. Is the human mind consistent?” – your post

        THAT is an interesting question! I take Perlovsky to think so when he says:

        “If Gödel’s arguments are applied to any finite system, such as a computer, or a brain, and only finite combinations are considered, Gödel’s proof of the existence of unprovable statements would not stand.” – from the link (emphasis added)

        … but that’s going to require a substantial amount of further reading, and, at the pace I’m learning … 🙂

        1. Thanks for pointing those out. I may well have misjudged his position, although consider these snippets from near the end:

          “Dynamic logic is computable. Operations used by computers implementing dynamic logic algorithms are logical. But these logical operations are at a different level than human thinking.”

          “The reader’s logical understanding is on top of 99% of the brain’s operations that are not “logical” at this level. Our logical understanding is an end state of many illogical and unconscious dynamic logic processes.”

          “The mind’s “first principles” do not include logic. Nature uses different “first principles” at its different levels of organization.”

          Taken all together, I’m not entirely sure what his position is. He spends a lot of time discussing “dynamic logic” which as you note sounds similar to what I was talking about, but then appears to finish up saying the brain isn’t like that.

          Now, I took him to be saying that the mind isn’t logical, including not dynamically logical, but it’s possible he is considering “dynamic logic” to not be “logic”. If so, his wording is maddeningly ambiguous.

      3. “maddeningly ambiguous” – his phrasing is absolutely undecipherable at times (at a minimum, his English and my ignorance, I suspect), but what stands out to me most are his specific ideas on how he sees things working (e.g. the knowledge instinct, dynamic logic, modeling field theory) rather than an overall position on the source/origins of consciousness or whatever.

        My understanding is that “dynamic logic” (i.e. the vague-to-crisp perception process) is Perlovsky’s hypothesis/theory of how the brain/mind works, rather than it working “purely” logically, and that to achieve AI the process of “dynamic logic” will need to be replicated. I can’t stress enough my ignorance of the entire field of inquiry and the incompleteness of my reading and comprehension of anything related to the above. One more quote, if I may so blatantly attempt to provoke you into further distraction:

        “Hameroff, Penrose, and the author (among others) considered quantum computational processes that might take place in the brain [14, 33, 36]. Although, it was suggested that new unknown yet physical phenomena will have to be accounted for explaining the working of the mind [33]. This paper describes mechanisms of the mind that can be “implemented” by classical physics mechanisms of the brain neural networks and, alternatively, by using existing computers.” – page 3

        ‘Neural Networks, Fuzzy Models and Dynamic Logic’, Perlovsky 2007 http://www.leonid-perlovsky.com/4%20-%20Mehler.pdf

        … describes his parting with their thinking? Key to my reading there is “classical physics” and “existing computers”, ie without “the mystery”. Unless I’ve misread, misunderstood, etc …

        If you’re a glutton for this flavor of punishment take your pick at: http://www.leonid-perlovsky.com/papers-online.html 🙂

        1. Thanks Mark. I’m grateful for the clarifications. Based on those snippets, it looks like I misjudged his position, which actually seems pretty similar to where Daniel Dennett, Steven Pinker, and other thinkers are. Given his opaque style, I think I’ll hold off investing more time trying to parse his writing. But if you have anything else he wrote about that you’d like to discuss, I’d be happy to participate.

          BTW, I’m currently reading Steven Pinker’s ‘How the Mind Works’. Not sure why I hadn’t read it before now. Pinker is a good writer but he has a tendency to belabor points to death. Long after he’s convinced me of something, he continues putting forth evidence. One thing he is convincing me of is that psychologists have more to say on this subject than I had started to believe. He’s describing clever experiments that manage to get at what the mind has hard coded machinery to handle and what requires additional (usually conscious) processing.

  4. This is a fascinating subject. I actually think Gödel is correct in his conclusion that a purely algorithmic system is incapable of operating in the same way that a human brain does, but I think his reasons are wrong. Might have to make this a blog post of my own. 🙂

      1. For me it’s the definition of algorithmic that’s most problematic. You defined it as always logical (even if it’s wrong in the context) and deterministic. I’m not sure it’s possible to meet either of those conditions in a computer and I’m pretty certain it’s not how human brains work.

        I agree with your analysis of consistent systems having problems they cannot solve within themselves. The answer, it seems, is pretty simple: make more systems and have those systems communicate. I’m certain that, at least partially, that’s exactly what our brains do.

        However, I still think a purely algorithmic AI is doomed to be, at the very least, fundamentally different from our intelligence because I think that the brain (and likely any sufficiently complex computer system) is an incredibly chaotic system.

        At this point I need to explain what I mean by chaotic. A good example is a double-armed pendulum. Tiny variations in initial conditions in this simple, chaotic system result in huge variations in the path of the pendulum.

        http://www.instructables.com/id/The-Chaos-Machine-Double-Pendulum/

        This does not mean that the double pendulum, or any other chaotic system, operates on magic; it just means that it shows an extreme sensitivity to initial conditions. On a practical level, this means that we need very sophisticated tools and lots of computational power to predict the path of a double pendulum.
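        Here’s a minimal illustration of that sensitivity (using the logistic map as a far simpler stand-in for the double pendulum, since a pendulum simulation wouldn’t fit in a few lines): two starting points that differ by one part in a billion end up on completely different trajectories within a few dozen steps.

        ```python
        def logistic(x, steps, r=4.0):
            """Iterate the chaotic logistic map x -> r*x*(1-x)."""
            for _ in range(steps):
                x = r * x * (1 - x)
            return x

        a, b = 0.200000000, 0.200000001        # initial conditions differing by 1e-9
        for steps in (10, 30, 50):
            print(steps, round(logistic(a, steps), 4), round(logistic(b, steps), 4))
        # At 10 steps the two trajectories still agree; by ~50 they bear no
        # resemblance to each other, despite the one-part-in-a-billion difference.
        ```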

        If you agree with me that the brain is likely to be a very chaotic system, this means we are going to need enormous amounts of computation power to predict its actions, perhaps more computational power than exists in the universe.

        However, even granting the possibility of building a computer theoretically powerful enough to determine the algorithmic action of a human mind, we run into the next layer of problems with chaotic systems. The billions of double pendulums in our brains are incredibly tiny. This makes them sensitive to problems like the Heisenberg uncertainty principle and quantum uncertainty (which is partially random). And remember, these tiny, quantum-scale shifts in initial conditions get magnified in chaotic systems, so we can’t ignore them.

        As such, trying to impose algorithms, always logical and deterministic, on a chaotic system with initial conditions that are, by definition, unknowable and random, seems to me unlikely to succeed.

        The cool thing I just thought of, however, is that a computer of sufficient complexity is likely to end up being both chaotic and sensitive to quantum effects. In other words, I don’t think algorithms will work, but computers might end up shedding their algorithmic limitations naturally anyway.

        1. In other words, I think that determinism rests on shaky ground the farther it goes from pure abstraction and becomes unworkable in chaotic systems or systems with very small parts. The workings of the brain, then, seem like the perfect match made in hell for deterministic algorithms – tiny, ultra complex and deeply chaotic.

        2. I actually agree that the mind could well be a chaotic system. The brain is not a digital processor. Chemical synapses vary in strength rather than just being on or off. Any ability to measure these strengths will involve margins of error, which could make predicting the processing of the brain impossible, even in principle. Digital computers don’t suffer from this issue because we essentially engineer it out, designating voltage ranges of transistors that are well apart from each other as either on or off, 1 or 0, etc.

          On the face of it, this might make the idea of creating an AI with human level intelligence, much less the idea of mind uploading, hopelessly unfeasible. However, I think we have to ask the question, how crucial is the exact pattern of processing to a successful mind?

          Consider that the brain itself ends up operating in a variety of states, affected by things like fatigue, drugs, nutritional input, electromagnetic fields, and a variety of other environmental factors. There seems to be some resilience built in. It’s quite possible a copy of an organic mind could operate quite successfully even if the exact processing of the original isn’t preserved.

          On indeterminacy, I think it helps to consider evolution. Why did brains evolve? Take humans out of the mix and consider why the simplest animals have brains. It seems obvious that it’s to help them find food, avoid being something else’s food, and find mates. What would rampant indeterminacy do to any evolutionary benefit they might have? If they didn’t respond appropriately to certain patterns of sensory inputs, it seems obvious they would have been selected out.

          None of this is to imply that AIs wouldn’t be very different from organic minds. It would take enormous effort to make them exactly like animal minds, and we’d probably find little benefit to doing so. But it shouldn’t present any obstacle to an AI eventually being able to do the same things that humans can.

          This issue does present complications for mind uploading. The copied mind will never be exactly like the original. But then my mind isn’t an exact copy of itself from yesterday, or last year. Asking for a perfect copy is, I think, the wrong standard. A more realistic standard to me is, is the copied mind similar enough to the original to satisfy friends and family, as well as the mind itself, that it is the same person as the original? I suspect the first uploads will fail this test, but that each subsequent one will get closer. (I suspect and hope most of this will be done first with animals.)

          1. You bring up a number of interesting points.

            I agree that making a copy of the brain probably isn’t necessary, but I’m not sure that something as non-deterministic and non-algorithmic as the brain can be modeled with a purely deterministic computer. Perhaps the computers also need to become non-algorithmic and chaotic. They don’t necessarily need to be a copy, but I suspect Gödel is right that algorithms won’t cut it.

            As for indeterminacy and evolution, I actually think it’s adaptive. Let’s consider the case of the lowly nematode. These critters have fewer than a thousand neurons.

            So let’s imagine a group of non-parasitic nematodes in a pond. On any given day, the vast majority will go to the surface to feed on algae. However, a couple of them, because random happens, will get a wild hair and dive to the bottom of the pond.

            When a cow comes along and consumes three quarters of the water in this tiny pond, those nematodes with a wild hair are there to restock the population.

            If all the individuals behave logically and go to the surface for algae, that cow ends the population.

          2. It does remain possible that the brain is not fully algorithmic, since we’re a long way from having all the neural circuitry mapped. But virtually all of the neuroscience I’ve read points in the opposite direction. Neurons sure appear to be doing computation, effectively implementing AND, OR, and NOT gates, along with much more complex logic, in the patterns of synaptic inputs and outputs.

            Nematodes are an interesting example. The C. elegans nematode is the only creature whose connectome has been mapped, as well as the only one so far that has been uploaded into a robot.
            https://selfawarepatterns.com/2014/12/16/worm-brain-uploaded-into-robot-which-then-behaves-like-a-worm/

            The scenario you lay out is interesting: indeterminacy as an adaptive trait. But I think we need to think it all the way through. If one or two of our renegade nematodes are the only ones to pass on their genes, then wouldn’t all of the future nematodes be diving to the bottom and paying the cost in terms of nourishment?

            Or maybe they only do it, say, in 1 out of 100 days. On most days they go up to feed, except for a small portion of the population. Then a straight algorithmic artificial nematode would still be 99% the same as the natural one.

            Even the algorithmic one might avoid going to the top if it has the nematode version of indigestion that day, or if any number of environmental factors (such as strong electromagnetic fields or weird smells) influence its processing. In such a case, its behavior might be effectively indistinguishable from the indeterminate one.

            All of which is to say that indeterminacy is virtually impossible to demonstrate. Of course, hard determinism is also virtually impossible to demonstrate. In truth, even if we do succeed with AI and / or mind uploading, we may never know if there actually is a difference between the original evolved versions and the engineered or copied ones.

          3. It’s interesting that you’ve been seeing the brain as basically a bunch of three-way switches. One of the things I was thinking about was a National Geographic article regarding brains and how they seem to have very strange structures. Indeed, a lot of the white matter structures seem to me to be designed almost to interfere with each other. Perhaps not to the point of complete chaos, but to create such a tangled web of complexity and feedback as to really play up the system’s sensitivity. Considering the tiny scale of most of these structures, it makes me think they’d be sensitive to quantum randomness as well.

            http://ngm.nationalgeographic.com/2014/02/brain/zimmer-text

            This, for example, is very much not what you’d expect from a biological system, but might be what you’d expect for a system trying really hard to build in quasi-random feedback.

            I actually used the nematode example because I read that article of yours. Very, very cool stuff. However, I think you are making the case for random over genetic behavior.

            If the nematode dives that day because DNA commands it, then those nematodes survive cowpocalypse but they then starve to death because they never go to the surface to get algae.

            If they have the same DNA as all the other nematodes but are diving that day because of chaotic feedback loops or just randomness, they still survive cowpocalypse but the next day they are likely to go to the surface and chow down on algae salad.

          4. If I gave the impression that I see the brain as a series of three-way switches, then my bad. It’s far more complicated than that. My point was that something like AND, OR, and NOT gates can happen in the synaptic connections between neurons. Two adjacent synapses, neither of which is quite strong enough to trigger an action potential (a nerve impulse), could trigger one if they both signal at the same time, an AND gate. Of course, two adjacent synapses that are both individually strong enough to trigger the action potential are an OR gate. Inhibitory synapses can be a NOT gate. That said, most of the computation is far more complex. Each neuron is like its own processing core.
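            A minimal sketch of what I mean (an abstract threshold unit in the McCulloch-Pitts spirit, nothing like the real biochemistry): whether the individual synaptic weights reach the firing threshold on their own determines whether the unit acts like an AND gate or an OR gate, and an inhibitory weight gives you NOT.

            ```python
            def fires(inputs, weights, threshold=1.0):
                """Abstract neuron: fire if the summed synaptic input reaches threshold."""
                return sum(i * w for i, w in zip(inputs, weights)) >= threshold

            # Two sub-threshold synapses (0.6 each): only both together fire the cell -> AND.
            # Two supra-threshold synapses (1.2 each): either alone fires the cell -> OR.
            for a in (0, 1):
                for b in (0, 1):
                    print(a, b, fires([a, b], [0.6, 0.6]), fires([a, b], [1.2, 1.2]))

            # A strong inhibitory synapse against a constant excitatory drive -> NOT.
            print([fires([1, a], [1.0, -2.0]) for a in (0, 1)])  # [True, False]
            ```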

            White matter is indeed interesting. It’s basically millions of neural axons connecting (mostly) the thalamus (a central coordinating hub-like region) to the neocortex. It’s how brain regions communicate with each other. It’s white because axons have myelin sheaths, glial cells that provide insulation to protect the nerve impulse from interference. The grey matter parts are grey because they include the somas and dendrites of the neurons, along with other types of supporting glial cells.

            (The idea of a nerve impulse affecting other neurons by means other than synapses is called ephaptic coupling. In experiments, interference between two adjacent neurons reaches about 20% of the threshold for the action potential. In actual brains, there are glial cells between the neurons, making it even less likely, although definitely not impossible.)

            Quantum randomness could conceivably have effects on mental processing, but modern neuroscience appears to be making progress without needing to consider it. It’s worth noting that it isn’t size per se that leads to quantum effects, but isolation. Quantum effects show up in the laboratory because the experiments take place in isolated conditions. The environment inside a brain is very noisy, making it unlikely that quantum effects survive other than at the subatomic level. The question is whether those subatomic quantum effects bleed over significantly into the larger scale molecular machinery of neurons and synapses.

            (Incidentally, one of the challenges that quantum computing has been struggling with is keeping quantum circuits isolated enough to prevent decoherence / wave function collapse. It’s why most quantum processors have to execute at near absolute zero temperatures, to keep the system as isolated as possible.)

            On nematodes, perhaps. As I noted above, demonstrating it one way or the other may never be possible. Even if that randomness does exist, it may be too subtle to notice its absence in an uploaded version.

            Wow, just realized how long this response got. Sorry about that. I can blather on about this stuff all day 🙂
