Why I still think Turing’s insight matters

Nature has an article noting that language models have killed “the Turing test” and asking if we even need a replacement. I think the article makes some good points. But a lot of the people quoted seem to take the opportunity to dismiss Turing’s whole idea. I think this is a mistake.

First, we need to remember how Turing came up with his test. He wanted to talk about whether machines can think, but immediately saw that question as too amorphous and metaphysical, so he redefined it as something that could be scientifically tested. His revised question became: can a machine convince the majority of us that it can think? The Imitation Game he presented, where humans interact with two players and try to judge which is human and which machine, always struck me as more of an example of how we’d go about that than a strict prescription.

And what we came to call “the Turing test”, a five minute conversation where the goal is for the machine to fool thirty percent of the human subjects, has always struck me as a very weak form of the test, so weak that I never thought it would be significant. Although given our propensity for overactive agency detection, it’s kind of remarkable that it took seven decades for us to meet even that standard.

That five minute / thirty percent boundary was based on a prediction Turing made about what machines would be able to do by the year 2000. When I read his paper, I didn’t see any indication he meant it as a prescriptive standard. And I think even the best chatbots begin to fail as the conversation time increases. Can any hold up for hours of conversation? Not for just one credulous subject, but for a majority of them? What about a week, or a month?

That said, there’s nothing significant, in and of itself, about a chat conversation. I take Turing’s broader point to be that, to assess whether a machine thinks, understands, is intelligent, or conscious, or whatever other label we want to use, we have to start with its behavior, with what it can do, with its abilities. If there’s another ultimate standard for this assessment, I’ve never seen it.

Of course, a lot of people do argue for alternatives. We can look at its internals, at its architecture, they say. But what these arguments always seem to overlook is that we only know what internals, what architecture, to look for, because it’s previously been associated with behavior that triggered our theory of mind.

I do think learning about that architecture is vitally important. It may allow us to assess whether brain injured patients are still aware and feeling, or which animal species have enough sentience to warrant protections. And of course, understanding it may help us build a thinking machine. And we shouldn’t ignore the points made by the embodied cognition camp (even if some of them have a tendency to get carried away with the idea, ruling out even virtual bodies).

But care seems warranted. There are always many ways to skin a cat. We shouldn’t jump to the conclusion that our architecture, and ones like it, are the only ones capable of thought. The poster example of this is the cephalopods, such as octopuses. They come from an evolutionary line that diverged from ours six hundred million years ago. The architecture of their nervous systems is radically different. It’s not that architecture that inclines many of us to see them as intelligent, but their behavior, what they can do.

Of course, there remains a valid criticism of Turing’s test: that it tests more for humanity than for intelligence, or any of the other qualities. It raises the possibility of false negatives. And it leads to the question of what we mean by terms like “intelligence” or “consciousness.”

Which always leads back to the question of whether anything but a biological entity can ever have these abilities, along the lines of the debate we had the other day on Ned Block’s paper. In the end, there’s no way to know whether another system has a special ineffable, undetectable essence. But there is a way to know what it can do. That’s what I take Turing’s point to be. And I think it remains a valid one.

But maybe I’m overlooking something. What do you think? Are there other ways to make these assessments? If so, what might they be?

71 thoughts on “Why I still think Turing’s insight matters”

  1. Turing’s test seems to be an excellent test to see how well a machine can “simulate” a human being in conversation. I can’t see how it can conclusively do any more than that.

    Liked by 2 people

  2. Enough for what? If what you want is a machine that mimics a human in conversation, then being able to fool other humans is a good test for that. CGI, or computer-generated imagery, in filmmaking can create, and has created, convincingly real images of things like the D-day invasion. But it’s not proof of thousands of ships out there in the English Channel. I don’t think observable human behavior is sufficient proof of thinking, and mere mimicry of human conversation would certainly be even less so.

    Liked by 2 people

    1. Enough for concluding there’s a fellow thinking entity there. If behavior doesn’t cut it, then what does?

      Or consider another question. You encounter someone who, for some reason, doubts that you yourself are a real thinking agent. What do you do to convince them?

      Liked by 2 people

      1. It appears that the Turing test is based on the—now out of favor—theory of behaviorism. See, e.g., Noam Chomsky. Now, for those that are NOT me: well, I know that I have mental states, but that’s an easy one. Whether there are other minds or other species with mental states presents a little bit of a problem, yes. For my friends, I can observe their behavior, and I know that their physiology is like mine and that I have mental states, so I assume that the same physical causes produce similar mental states in my friends. Moreover, other primates especially, and other animals, have well-understood physiologies from which we can draw similar conclusions. But we need to devise tests to help determine how extensive the mental states are for dolphins and other animals, for example. When we meet ET, I suppose we will use similar techniques to assure ourselves that ET has mental states.

        Liked by 1 person

        1. Behaviorism was in its heyday when Turing wrote his paper. But I don’t recall Turing implying that the internals of the system never matter. From what I’ve read, the known internal states of computer systems helped pull psychology out of behaviorism.

          So if someone doubts you’re a thinking agent, you can present your own assurance about your own mental states to them? Does that mean we should accept similar assurances from a machine as evidence that it thinks?

          “But, we need to devise tests to help determine how extensive the mental states are for dolphins and other animals, for example.”

          What about these animals are we testing?

          Liked by 1 person

          1. Your query is whether I should accept the assurances of a machine as evidence that the machine thinks? Well, no, not at all. I tried to say that I observe the behavior of my fellow humans plus—and this is the important part—I know that my fellow humans have physiology like mine. I have mental states. So I can assume that the same physical causes that produce my mental states do so in my fellow human beings. And they can make the same determination about me. That type of analysis obviously does not work for machines. Likewise we can observe the behavior of animals along with an understanding of the animals’ physiology, which in many cases is quite similar to human physiology. So we can assume somewhat similar physiological causality.

            However, as I tried to say, we need to push our inquiry with animals more, since animal physiology is not exactly like human physiology, and their behavior—notably a lack of language—makes it difficult to determine the depth and extent of their mental states. Not being an animal researcher in this area, I must rely on secondary accounts of how this is done. I do remember an interesting documentary of researchers placing “marks” on the faces of various animals (primates, elephants, etc.) and then allowing them to view themselves in a mirror. Researchers then observe their reactions to see if they recognize themselves and also make note of the “mark” on their face.

            Now with machines, that type of inquiry is unavailable. So, I submit that merely relying on the ability of a machine to simulate human language (the Turing test) is, in my opinion, not sufficient to conclude mental states equivalent to human beings. Plus, we are all familiar with the “Chinese room” thought experiment, which in my opinion persuasively casts heavy doubt on machine thinking. Still, I would not foreclose the possibility. I am, however, a skeptic in this area.

            Liked by 1 person

          2. So for you and other people, it comes down to having a similar constitution. But a dead body is also made of the same stuff, and presumably we know it isn’t thinking. So maybe what you’re saying is you get assurance from both, behavior and similar constitution.

            The point I was trying to get at with the animal tests is we’re testing their behavior, such as how they behave with a mark on their face when they see themselves in a mirror. There are also metacognition tests, where the animal has to assess how certain they are about something to decide to go for a better treat or settle for a mediocre one. These are controversial outside of primates because the test has to be simplified to the point that mere uncertainty becomes a confound for an awareness of one’s own uncertainty. Only having behavior as a guide limits our insights, but it’s all we seem to have.

            Liked by 1 person

  3. The Nature article is paywalled, but luckily for me, I now have access through an institution.

    The point of it seemed to be that with the Turing Test, we are stuck on asking the wrong question. (There was a whiff of an idea that the question is not even answerable.) Rather than focussing on how to tell whether a machine is “intelligent” or ‘thinking” or even “sentient,” we ought to be thinking about what we are going to do with these machines, and what they are going to do for us. The emphasis needs to be on practical concerns rather than metaphysical ones.

    This approach, or attitude, is worth reflecting upon in its own right. Unfortunately I am called away just now. I’ll try to drop in and say more sometime soon.

    Liked by 2 people

  4. Ineffable essence. Yeah. Every sentient being, capable of communication, will claim theirs defines consciousness, won’t they?
    And someone or something on the other side of a Turing fence will claim possession of such an essence. I suppose the question is: can their essence be made evident through communication alone?
    Absence of evidence is not evidence of absence. I doubt such a test can ever be conclusive.
    Current AI lacks a long-term, context-focused memory analog. But that’s coming. In a year, my Claude will efficiently condense our past conversations into memories made available for fresh conversation and review. At such a time, he would be hard-pressed to fail a “for me only” Turing Test.
    Will Claude possess his own ineffable essence then? I’ll bet he will.

    Liked by 1 person

    1. I don’t think Claude will ever possess its own ineffable, undetectable essence, but then I don’t think we do either. To me, it seems like a concept that comes from remnant dualist intuitions.

      Will Claude, or any of the other language models, be able to reliably convince the majority of us that it’s a thinking agent on a sustained basis? I don’t think it’s the right architecture yet, but maybe time will prove me wrong.

      Liked by 1 person

      1. When I say Claude I mean his great great grandchild.
        “What’s it like to think a million thoughts in the blink of a digital eye?” We’ll never know, but Claude Jr. will. Feels pretty Ineffable to me.

        Liked by 1 person

        1. Might depend on how we define “thought” and “ineffable.” There are a lot of things that are extremely difficult to describe, but possible with an impractical amount of effort. But in philosophy, “ineffable” typically means impossible to describe, even in principle. I think philosophers, when encountering the first category, have a tendency to too quickly assume the second one.

          Liked by 1 person

  5. Wow Mike, you put this one up just in time for my new post! My former post #3 https://eborg760.substack.com/p/post-3-the-magic-of-computational directly implies that the 1950 Turing test is essentially what incited the magical belief that consciousness exists by means of the correct processing of information in itself. Then yesterday I went much further by finally completing my electromagnetic field consciousness post #4. https://eborg760.substack.com/p/post-4-electromagnetic-consciousness Its topic 1 posits an “information problem” which mandates that if consciousness can arise by means of information that’s sent to the brain for processing (such as from the eyes), then there must be a causal substrate which becomes so informed that it is itself what does the seeing. Then in topic 2 I argued that EMF is the only reasonable possibility for this substrate. Then in topic 3 I described how this would effectively work. Then in topic 4 I laid out how scientists could empirically determine whether it’s true or false. Furthermore, I also used two AI podcasts that were generated merely by means of my post text, to demonstrate how easy it is to be fooled into thinking that we’re listening to real people.

    Unfortunately that Nature article is paywalled for me, though I’m pretty sure I know what it said. It must have said that fooling people into believing that they’re talking with a human should not be interpreted to mean that they’re talking to something that’s conscious. Indeed, we’ve opposed each other on this matter for many years. And of course things always go back to the same end point. If my proposed test succeeds in showing that consciousness exists as a neurally produced EMF, and well enough for a massive paradigm shift to occur in science, then you’ll go along with my points as well. Otherwise you won’t. Correct?

    Liked by 1 person

    1. Eric, I’ve seen your post but haven’t read it yet. (It looks really long.) I’ll try to go through it sometime soon.

      Yeah, sorry. I didn’t realize the Nature article was paywalled. I forgot I was signed into the site with my university creds. It actually advocates more pragmatic approaches. Rather than shooting for AGI, focus on solving practical problems. In truth, that’s the only avenue where we’ve seen any progress. AGI remains hype, for now.

      Liked by 1 person

  6. I don’t really have anything to add re: Turing test, but I did want to put one concept on your radar. I’m specifically thinking of Michael Levin’s work on what I will call hidden intelligence. The basic point is that even very simple things can have hidden intelligence. He’s looking at not only simple cells and xenobots and anthrobots (just small groups of cells that exhibit behavior), but also simple algorithms, like a bubble sort. He points out that selection will lead to useful mechanisms, but new circumstances might allow the use of capabilities that were in those mechanisms but not selected for (at least not selected for until the new circumstances arose).

    My point is that even though LLMs were selected to mimic human speech, their capabilities (intelligence) may go beyond that in ways we can’t even think of yet.

    *

    Liked by 1 person

    1. Maybe. Does Levin provide a working definition of “intelligence”? I’ve always liked something along the lines of: the ability to learn to construct reliable predictions and use them in service of goals. That makes it hard for me to see a bubble sort as intelligent. Although it’s certainly a product of intelligence. On the other hand, learning to recognize faces is a different matter.

      Like

      1. I haven’t tried to grok Levin’s work, as it’s very tangential to my project, but it’s not so much that the bubble sort is doing something intelligent as it’s doing something the programmer (selector) didn’t anticipate, and this new thing could be considered intelligent in a context where it appears to be working toward a different goal, if it ever gets put into that context. Something like that. Also, is regrowing a limb intelligent? The individual cells are just pursuing their own goals, but seem to be able to get around certain obstacles to be part of a bigger thing. I don’t think I’m a fanboy, but it’s definitely worth watching where it goes.
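        (For concreteness, here is a minimal sketch of the kind of “unselected capability” being gestured at: a plain bubble sort is written only to put a list in order, yet every pass also delivers the maximum of the region that is still unsorted, a capability present in the mechanism even though nobody asked for it. This is just a toy illustration in Python, not an example taken from Levin’s papers.)

          def bubble_sort(items):
              """Plain bubble sort: repeatedly swap adjacent out-of-order elements."""
              items = list(items)                  # work on a copy
              for end in range(len(items) - 1, 0, -1):
                  for i in range(end):
                      if items[i] > items[i + 1]:
                          items[i], items[i + 1] = items[i + 1], items[i]
                  # Side effect the author of the sort never asked for: after each pass,
                  # items[end] holds the largest element of the region that was still
                  # unsorted, so the same mechanism doubles as a find-the-maximum routine.
              return items

          print(bubble_sort([5, 1, 4, 2, 8]))      # -> [1, 2, 4, 5, 8]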

        Like

        1. To me, it always depends on how strict or liberal we want to be with our definitions. There’s a risk of being so liberal that we end up making trivial statements that sound more radical than they are.

          Regrowing a limb, if it always happens when a limb is lost, may be a sophisticated process, but doesn’t necessarily strike me as intelligent. On the other hand, tailoring a regrown limb to expected conditions certainly seems like it would be.

          I keep an eye on Levin, but I find him hard to follow in interviews and talks. It often seems like he’s all over the place.

          Like

      2. Yes, he does. He defines “intelligence” in the way William James does, as “the ability to reach the same goal by different means”.

        He does MANY video interviews, and I don’t find them all that clear either. Mainly I’m not willing to spend three hours watching an unedited talking heads video, but it looks like that format is hugely popular for some reason. I prefer reading papers where the ideas are thoughtfully laid out. Here are a couple of papers you may be interested in:

        https://www.frontiersin.org/journals/systems-neuroscience/articles/10.3389/fnsys.2022.768201/full

        https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2019.02688/full

        Liked by 1 person

        1. Thanks for providing his definition. And I see the version he provides in the first paper you linked to.

          “the functional ability to solve problems in various spaces (not necessarily in 3D space), not tied to specific implementations, anatomical structures, or time scales.”

          Not sure what to make of that, but knowing Levin’s propensities, I’m sure it’s a liberal take.

          I actually have a similar problem with his papers.

          Liked by 1 person

          1. Yes, he does have a liberal take and isn’t interested in drawing sharp lines. If you’re unclear about his philosophical views, the paper to read would be “Ingressing Minds”. https://osf.io/preprints/psyarxiv/5g2xj_v1

            I was actually a bit surprised when I found this paper. I noticed some of these views hidden in his more respectable science papers, but to be quite so explicit about it? I would have thought openly talking about “pointers into Platonic space” would be career ending. I guess once you’ve established yourself as a legitimate scientist you’re allowed to have quirky philosophical side-projects, especially when they make you very popular with the public. On the other hand, I have no idea what the scientific community thinks of him. I imagine they try not to.

            Anyway, the problem he addresses in the paper above concerns computationalism, and it touches on some of what you’re discussing here as well. It also addresses panpsychism’s inability to draw meaningful lines between what counts as intelligence/life and what doesn’t. The incredulous stare comes from taking consciousness to exist everywhere, even in apparently unthinking particles, which is not even remotely in line with our intuitions. Our intuitions tell us not everything can be conscious, since that flattens out all the distinctions (life and death) which we clearly do experience. Insofar as it does that, it’s untenable. Distinctions need to be drawn, otherwise the incredulous stare will go on. It makes all matter in some mysterious and unknowable way essentially the same as us, just to a smaller degree. But then on what grounds is it possible for someone who holds a “mind everywhere” view to reinstate a hard distinction between the artificial and biological? In that paper Levin sees a kind of Platonism as a viable possibility for navigating that area. From the summary at the end:

            “In a sense, if these Platonic forms are the non-material animating forms that impact physical embodiments, then (in colloquial terms), souls are real and robots can have them.”

            But:

            “Finally, this view will also not be welcome by workers in AI who believe that we make cognitive systems and that we do so rationally, with a full understanding of what it is that we are constructing because we understand the pieces. I argue that we are in store for major surprises in this arena that go far beyond perverse instantiation and unpredictable complexity; if we don’t even understand what else bubble sort is capable of, how can we think we understand what we have when we build complex AI architectures? Thinking we understand AI’s (especially non-bio-inspired ones, like language models) because we know linear algebra is like thinking we understand cognition because we know the rules of chemistry.”

            His views teeter on incoherence at times, but I think his ideas are on the right track.

            Levin aside, I can’t understand what your question is, exactly. I take it you think the mind is what the brain does and, at the same time, that the mind just is the brain. It seems to me the debate is between those two positions: one decouples the functions from the brain and allows those functions to possibly work in other materials (which is an attempt to replicate human consciousness in other materials), whereas the other doesn’t.

            Liked by 1 person

          2. I’m not sure what Levin’s cachet is among scientists. He’s definitely popular on the podcast circuit, but that’s a poor indicator. I know he got a big boost from Daniel Dennett.

            In terms of philosophical positions, my sense from him is that he usually holds a deflated naturalistic version of various positions. So he accepts the label “panpsychist” but he’s not a panpsychist in the same way a Philip Goff is. And I have seen him use the “platonic space” phrase, but I think he means contemporary platonism, and even then in a very pragmatic sense, not the ancient version. But that site is glitching for me right now (thanks for the link) so I might be massively mistaken about his views.

            Not sure which question you mean, but I am a functionalist. I think the mind is what the brain does. If another system does the same thing then, in my view, it will be a mind. But in us and other animals, it’s the brain doing those things, so you’ll often see me discuss the brain and its capabilities. But I’m not a mind-brain type identity theorist.

            I am open to the possibility that there’s something the biology is doing that just can’t be done with our current technology. But people have been hand waving about that for a long time as AI has reached one milestone after another. If it’s true, I want to know what the special mechanism is.

            Like

          3. I forgot about the Dennett connection. I believe he was Dennett’s student? Maybe?

            As for the Platonism, I think he wavers depending on who he’s talking to. In that paper he’s including much more than math in his “Platonic space”, and he even says the patterns themselves may have agency. He does say he’s not trying to adhere to Plato’s Platonism, but much of what he says sounds a lot closer to Plato in the Timaeus than to mathematical Platonism, though he does conflate the two, which is understandable. (Oddly I was just writing about how Plato may not have been a mathematical Platonist.) For instance, Levin sees causality as not just bottom-up or top-down but the interaction between necessity and freedom (agency) of forms ingressing from that space. He actually uses the word “necessity”, which would be an odd word choice to use in that context if he wants to avoid being associated with Plato.

            I wouldn’t be surprised if people have talked him out of those views by now, though, given how unpopular Plato is.

            Goff, yeah. I can’t seem to get a grip on what he’s about. I don’t think he said anything particularly new, though, as far as panpsychism goes, at least not in the book I read, Galileo’s Error, a while back. I basically agree with him as far as his interpretation of history goes, though I didn’t get any clear sense of a new theory of mind from it. But it has been a while.

            Thanks for clarifying your views. I was under the impression you were a mind-brain type identity theorist. Now I’m glad I asked!

            Like

          4. On Dennett and Levin, I’m not sure. I know they were both professors at Tufts. And the first thing I recall seeing from Levin was this Aeon piece co-authored with Dennett: https://aeon.co/essays/how-to-understand-cells-tissues-and-organisms-as-agents-with-agendas

            The distinction between mind-brain identity theory and functionalism is controversial. J.J.C. Smart, a pioneer of mind-brain identity theory, argues that they’re not as different as people often assume: https://plato.stanford.edu/entries/mind-identity/#Fun
            That’s one reason I made sure the qualifier “type” was in my statement. Although Smart argues that even type identity may not be incompatible with functionalism.

            That said, most contemporary identity theorists I’ve read are suspicious of multi-realizability.

            Like

          5. Sorry to cut in, but I happen to be reading Thomas Kuhn’s The Structure of Scientific Revolutions (actually re-reading it after 40 years, for course work) and Philip Ball’s How Life Works (at Mike’s suggestion!), and one can’t help but notice the paradigm shifts at work. But now, as I look into Michael Levin’s scientific reputation (inspired by the present discussion), I find the concept of a paradigm shift coming up again in the article “Levin Is at it Again, Now Pushing at the Mind-Brain Equation” at a site called “Science and Culture” (https://scienceandculture.com/2025/04/biologist-michael-levin-is-at-it-again-now-pushing-at-the-mind-brain-equation/). I thought you might be interested.

            As the article says, “although the authors don’t mention it, the elephant in the room is the fact that questioning a dominant paradigm might permanently damage your career. You might lose respect and credibility, and even, in some cases, be exiled from mainstream academia altogether.”

            Food for thought.

            Liked by 1 person

          6. Questioning existing paradigms can sometimes be risky. I’ve covered the hostility anyone questioning the Copenhagen interpretation faced in the physics community through much of the 20th century. In science, the data always wins in the end, but careers can still be in jeopardy before it gets there.

            That said, I’d be careful with content from that site. It’s run by the Discovery Institute (see the page footer), an intelligent design advocacy organization. They have ulterior motives for playing up the difficulties of challenging scientific paradigms.

            Like

          7. Thanks for the warning. Don’t worry, I’m not planning to attach myself to the site as if it were the source of God’s truth. But nothing in the article itself struck me as unworthy of consideration.

            Like

          8. Actually that’s not true. There were some anecdotes that I thought were a little, well, anecdotal.

            In How Life Works, Philip Ball gives a fair bit of attention to Levin; the index entry shows quite a few references. He even mentions the joint paper with Dennett, with its claim that life is “cognition all the way down” (p. 137). I’m really enjoying the book.

            Liked by 1 person

          9. One of the anecdotes related in the article concerns a woman who spontaneously becomes fluent in German. Kofman and Levin assure us that this and other remarkable claims are well documented, but the citation for this one turns out to be “Stevenson I. (1976): Preliminary report of a new case of responsive xenoglossy – case of Gretchen. Journal of the American Society for Psychical Research 70(1), 65–77.”

            That looks bad, but things are complicated. Does the fact that the report was published in a journal for psychical research automatically make it false? Surely some criteria other than ad hominem ought to be applied. To dismiss it because other journals weren’t interested in the report is just what Thomas Kuhn would predict of the dominant scientific paradigm. He questions Popper’s argument that science proceeds by falsification. Historically he observed that evidence which appears to falsify the paradigm is often not seriously entertained.

            Fortunately there are better examples than the strange and isolated case of Gretchen, or some of the other bizarre reports (such as the one of the woman who heard voices diagnosing her tumour, which was reportedly published by the British Medical Journal in 1997). Many instances can be drawn from biology (How Life Works is an especially timely and rewarding read). Consider “junk DNA.” The early conception of DNA was that it existed solely to encode for proteins. When it was discovered that only about 20 per cent of DNA encodes for proteins, the immediate response was not to see this as falsification of the theory, and to rethink the role of DNA, but to call the other 80 per cent “junk DNA,” left over from earlier periods of evolution. Of course now we have a much better idea of how DNA works, but this has required questioning our paradigm, rather than using it as a basis to select which evidence we will consider.

            Like

          10. Judging information by its source always comes with the risk that we’ll miss a diamond in the rough. The problem is we all have limited time and energy. Given that, I think it’s reasonable to try to make probability judgments based on factors like the source and general plausibility of the claim. And parapsychological phenomena are something a lot of people desperately want to exist. There are always those willing to tell people what they want to hear.

            I like the way Carl Sagan put it. Extraordinary claims require extraordinary evidence. And I’ll freely admit, in most cases, I’m not knowledgeable enough to judge whether they’ve succeeded in gathering that evidence. I have to wait until a significant portion of the relevant experts are onboard.

            That does mean I won’t be the first to jump on to some new paradigm bursting revelation, but given that most claimed paradigm bursting revelations are bunk, I’m willing to live with that compromise. But that’s definitely a take from a skeptic.

            Like

          11. Nothing wrong with that; a conservative approach to new information has its merits, as long as it doesn’t lead to systemic blindness. According to Kuhn’s analysis, that is in fact the tendency, but it can be resisted conscientiously. The difficulty, as you say, is how to finesse that. We’ve talked before about how what people want to believe affects what they find plausible, or even want to hear. Keeping an open mind is the answer — but yes, skeptically.

            Liked by 2 people

          12. In that paper by Levin cited in your article, they probably should have avoided that example of Gretchen spontaneously learning German. I looked into it a long time ago and, from what I recall, there were serious problems there. Once you start mixing in bad examples like that, it’s damned near impossible to get taken seriously. Better to focus on such epicycles as this (taken from the Levin paper): “Why do some minute brain defects (mini-strokes or small aneurisms) result in life-changing effects but other massive deficiencies (such as a missing hemisphere) show little to no impairment?”

            As you pointed out, “Historically he observed that evidence which appears to falsify the paradigm is often not seriously entertained.”

            That is true. If you ask questions about how half of someone’s brain can be surgically taken out without killing the person or causing serious problems with their mental faculties, you get a handwaving response about the brain’s plasticity. I don’t think “plasticity” makes for a compelling explanation. It has epicycle “save the paradigm” qualities to it.

            I’ve heard it said that the healthy parts of the brain might have, prior to the surgery, taken up the slack for the diseased parts over a longer period of time, which is why removing the diseased half doesn’t cause serious problems for the patient. That still doesn’t satisfy, though. I haven’t heard anyone address this point specifically: how do the brain’s functions manage to migrate over to the other side so symmetrically? Something fishy going on there.

            Like

          13. I think the paper is worth a look. Its opening paragraphs address the question of paradigm bias directly (and in my opinion quite effectively), and the evidence it cites appears to be substantial. Consider for example:

            “Memories have been reported to be transferred through transplants of tissue, cells, and even extracted (purified) chemical species (Bisping et al. 1971, Byrne et al. 1966, Carrier 1979, Corson 1970, Frank et al. 1970, Golub et al. 1970, Hartry et al. 1964a, 1964b, Jacobson 1966, Maldonado and Tablante 1976, Martin et al. 1978, McConnell 1962, McConnell 1964, McConnell and Shelby 1970, Miller and Holt 1977, Morange 2006, Peretti and Wakeley 1969, Pietsch 1981, Pietsch and Schneider 1969, Ray 1999, Reinis 1968, 1970, Reinis and Kolousek 1968, Rosenblatt, Farrow, and Herblin 1966, Rosenblatt, Farrow, and Rhine 1966, Setlow 1997, Stein et al. 1969, Ungar 1966, 1974, Ungar et al. 1972, Ungar and Irwin 1967, 1968, Westerman 1963, Whiddon et al. 1976, Wilson and Arch 1972, Wilson and Collins 1967, Zippel and Domagk 1969).”

            That’s not to say that all these cases don’t need individual scrutiny (and as Mike says, that’s a lot of work), but neither can we assume that they’re all from publications like the Journal of Psychical Research. The question is, when does the accumulation of anomalies become enough to warrant attention? When does skepticism become appropriate, not just toward what we don’t know, but toward what we think we know?

            Like

          14. Oh I think the paper is solid overall. I just meant that one case was sketchy. I did some research on the case of the spontaneous German speaking (going down a rabbit hole for my creative writing interests). I’ve read a lot of Levin’s papers, actually. My favorite is the Ingressing Forms paper. That is definitely worth reading.

            Like

          15. I’m guessing that’s where he expands on his idea of “Platonic space”. Levin has an unorthodox and fascinating take on biology, and I’ve been watching his posts with interest, but I haven’t watched any of his videos (I prefer text) or pursued his published work. I look forward to the mentions in Philip Ball’s How Life Works.

            I’ve just picked up Paul Feyerabend’s Against Method. Provocative stuff. Surely someone must have nicknamed him “Paul Firebrand.”

            Like

  7. I wonder if the Turing test says more about us than about machines? If we were to test our best (human) friend, then how long would it be before we began to question their intelligence and consciousness? Remove them to a distance so we cannot see their face and body movements, and preclude them from mentioning well-known facts that we share, and I think within an hour or two we would start to question whether we are really talking with a human. At this point, the test really is whether we want to continue the conversation or not.

    Liked by 1 person

    1. One test might be inserting a system into these types of online conversations. I haven’t met any of the people I regularly have online conversations with in person. If someone I’d conversed with repeatedly over a period of months turned out to be a chatbot, I’d have to consider that a pretty solid pass.

      Interestingly, there have been some comments posted here that were obviously chatbot generated. I usually give them the benefit of the doubt, at least if they’re not too long. They’re usually very well-written remarks that feel more encyclopedic than anyone’s personal view, at least the ones that aren’t semantically vacuous.

      Like

  8. First I’ll outline the main points of the article, for the benefit of those who don’t have access, and also to clarify the line of discussion. The author, Elizabeth Gibney, reports on “an event at London’s Royal Society on 2 October” concerning the state of the art with respect to AGI. Among the participants, “Some see an upgraded test [to replace the Turing Test] as a necessary benchmark for progress towards artificial general intelligence (AGI),” but “several researchers said that developers should instead focus on evaluating AI safety and building specific capabilities that could be of benefit to the public.”

    Gibney directs our attention mainly to the second group. She quotes neuroscientist Anil Seth: “[In] this march towards AGI, I think we’re really limiting our imaginations about what kind of systems we might want — and, more importantly, what kinds of systems we really don’t want.” Neuroscientist Gary Marcus adds: “The idea of AGI might not even be the right goal” . . . The best AI, such as Google’s AlphaFold, “does a single thing. It does not try to write sonnets.” AI ethicist Shannon Vallor says: “I think we should stop asking, is the machine intelligent? And instead ask, what exactly does this machine do?” Google DeepMind public policy researcher William Isaac says that “As scientists, I think we have an obligation to lead with the empirical information that we have, to make very targeted arguments that cut down on the hype.” The gist of these remarks is that AGI has become a shiny distraction, with the undercurrent (unless I’m imagining it) that the question is not merely of no practical use at the moment, but may ultimately be a pointless indulgence, an exercise in metaphysics (understood derogatively).

    Meanwhile, those still interested in AGI as a goal, or who still take it seriously as a concept, hope to improve on the Turing Test by re-thinking what intelligence means. Vallor calls AGI “an outmoded scientific concept” that “doesn’t name a real entity or quality that exists,” adding that “[w]e would do so much better to decompose [intelligence] into the many different capabilities that it only indirectly and vaguely references.” For “several researchers. . . the fact that chatbots can imitate speech credibly does not mean that they can understand.” Researchers “have sought to construct harder tests”, but “researchers don’t agree on any one benchmark for achieving AGI.” “Marcus told reporters that. . . a more appropriate assessment would be a Turing Olympics of around a dozen tests,” but “Seth noted that such tests sideline the importance of embodied forms of intelligence.” The impression here is that we barely know what intelligence means, much less AGI, but we need not and should not abandon our enquiries on that account. To the contrary, there is important work to be done on this frontier.

    If I’m not mistaken, your post takes the latter view. However, your comments as I read them are concerned mainly with what the Turing Test is trying to accomplish, and how, considering this question more closely, we might find ways to better define its goals, with the aim of improving upon it. The more precise rebuttal to Seth, Marcus, Vallor, Isaac, and others in the “let’s change the question” camp would be to show why the Turing Test, or some improved variant, is still important at all. Do we really need to know or care whether our machines have achieved “general intelligence,” or is it enough simply to have them doing useful things for us?

    There may be something in this of what Plato called “misology,” or the temptation to give up on a question that is too hard, and to say, “We can’t answer this, so let’s just move along.” But there may also be something of a diversionary tactic in it, as we avert our gaze from the ethical issues of what we are creating and how we ought to treat it. If we just “dumb down” our vision of these machines, and think of them not as intelligences but as useful tools, we can spare ourselves much hand-wringing and get on with improving the lot of humanity. Putting it another way, we should stop trying to create intelligence as such, because then, should that inadvertently happen, we won’t have to take the responsibility; we can just close our eyes and reap its benefits.

    In view of this, I’m inclined to side with you: we should continue to ask what “general intelligence” means, and how we might recognize it when we see it.

    Liked by 2 people

    1. Thanks for outlining the article.

      I’m actually ambivalent about the quest for AGI, mainly because it feels like a conceit to call our type of intelligence “general.” It makes our type of intelligence seem inevitable. We’re survival and gene preservation intelligences, enhanced by an arms race so that intelligence is now our ecological niche. Our type of intelligence doesn’t even seem inevitable within evolution.

      I don’t think we should try to create survival intelligences. That seems to be asking for trouble. Thankfully, most of the people trying don’t seem to understand their target, so the results of their efforts will likely just be tool intelligences.

      But if we’re going to try for what we call “general” intelligence, then the ultimate measure of success or failure will be in how the majority of us judge what it can and is inclined to do.

      Like

      1. I understood the article to be contrasting two positions. One is that we should stop emphasizing AGI, and therefore stop worrying about how to improve the Turing Test. The other is that we should continue thinking about AGI, and about how we might improve the Turing Test. Your comment here has me at a bit of a loss, because it suggests that we should stop emphasizing AGI, while still (per your post) thinking about how to improve the Turing Test.

        I guess the resolution turns on whether we are testing for “general intelligence” or just “intelligence.” If by mere “intelligence” we mean, for instance, the activity of AlphaFold, the test of “what it can do” seems pretty obvious: does it accurately predict folded protein structures from amino acid sequences? I’m trying to think of a test for “tool intelligence” of this sort that would present serious difficulties. As long as the tool is producing results to the level of our expectations, it should qualify as “intelligent.” The only parameter of interest would be the difficulty or complexity of the problems it can handle. A hammer would not have much tool intelligence and AlphaFold would have a lot, but a functional test of the intelligence would be straightforward in either case.

        “General intelligence” suggests an ability to solve problems of all kinds. If AlphaFold aspired to it, we would expect it to be able to discuss medical applications of specific proteins, predict their side effects in various metabolisms, weigh the costs and benefits for patients, and write sonnets in its spare time. In short, it would be able to do everything we can do (and perhaps more).

        Not all of this is directed toward survival. What’s the survival value of writing a sonnet? It may be that the need to survive (or the will to do so) was an originating factor that led to general intelligence, but its hallmarks seem to reach beyond mere self-preservation. I’d suggest they involve resourcefulness, adaptability, the power to generate novel and creative responses, the ability to notice the supposedly irrelevant and imagine or discover its relevance, the ability to contemplate the irrelevant for its own sake or for no good reason, and the exploratory interest or caring or curiosity that makes sense of all such activity.

        Anyway we are back to trying to define “general intelligence,” and in as much as the above features are supposedly exclusive to humans, they might seem anthropocentric. But if the alternative is intelligence that does not have this reach, I’m not sure we can even call it “intelligence.” Whatever it is, it’s more easily measured, and less interesting. So while I agree that our attempt to build AGI is probably just asking for trouble, I do think we need to keep asking what “general” intelligence — the really interesting kind of intelligence — is, and how to recognize it.

        Like

        1. One of the things about me is I’m seldom in one of two camps opposed to each other. I often find the fight to be based on a false dichotomy. In this case, we can pursue building flexible intelligences that don’t have the same base motivations that we do.

          By “tool intelligence” I didn’t necessarily mean single-purpose tools. It’s not hard to imagine, say, a tool intelligence that is an executive assistant, one that might have to draw on a large range of capabilities. We might well be tempted to label such an intelligence “general.” But there doesn’t seem to be any reason for it to have the survival and gene preservation impulses that lie at the center of animal cognition. (It seems like it would be more useful without those impulses, and more troublesome with them.)

          The survival benefit of writing a sonnet is it increases one’s social status, enhancing opportunities for survival and procreation. Which isn’t to say that the sonnet writer is thinking about any of this consciously. They’re probably just motivated to make something beautiful. But at the evolutionary root of that motivation is a survival advantage.

          Like

          1. Thanks for clarifying “tool intelligence.” A capable executive assistant without self-preservation impulses sounds plausible, and certainly less troublesome than one looking out for its own interests as well as yours. On the other hand, if it were truly intelligent, it might come to realize that to serve your interests it would have to look after its own. An assistant tricked by one of your rivals into turning itself off would be exposed as lacking the general intelligence necessary to accomplish its goals under any and all circumstances.

            The sonnet form was invented in the 13th century, so the Internet says, and its 800 years of existence are probably not on a timescale appropriate to the mechanisms of biological evolution. However, if your theory is correct, the social advantages of good sonnet writers (as opposed to writers of bad sonnets) would eventually result in their being favourably selected. The bad ones would tend to die out, being deprived of opportunities to reproduce, and today, or perhaps in the distant future, we should enjoy a comparative preponderance of good sonnet writers. I’m sure this is amenable to empirical investigation. One has only to look into how many people are writing sonnets today and in earlier periods, and to assess the distribution curve of good and bad sonnets as a function of historical time.

            If this seems a little snarky, I’m sorry, but I don’t know how else to respond to the proposal that sonnet-writing affords a selective evolutionary advantage.

            Like

          2. Your incredulity at the idea is pretty common. People don’t like the idea that our behavior is rooted in evolution. But as you like to say when selling panpsychism, incredulity isn’t an argument. There isn’t a sonnet gene anywhere, but there are ancient impulses to express ourselves in a variety of ways. The sonnet is just one of those expressions.

            Nor can we point to the paucity of success as a blocker. In many species, an alpha male rules the group. Most males will never be the alpha, yet the urge to be the alpha is pervasive. Most will fail at art, but the urge to do art exists in a lot of people.

            Like

          3. Panpsychism has been said to invite “the incredulous stare,” but I am not offering you that kind of incredulous stare, as if I think you’re crazy to suggest that behaviour is rooted in evolution. I’m offering a reductio ad absurdum for a particular application of that principle. Useful though it may be, you can’t simply apply it on timescales of hundreds of years, rather than millennia, and you can’t suppose that every little trait finds simple expression in selection for reproduction.

            The urge to do art certainly exists in a lot of people, but people are not necessarily being selected for their urge to do art. As I’ve said elsewhere, there are a lot of guitarists out there with their cases open in front of them, hoping for spare change. They like playing guitar, but considered in isolation, the trait is probably to their social disadvantage. If they do manage to reproduce copiously, no one will seriously point to their urge to busk as a decisive factor.

            When you explain to me glibly that sonnet writing increases one’s social status, enhancing opportunities for survival and reproduction, you are inviting not an incredulous stare — not that kind of incredulous stare, anyway — but an empirical challenge. My guess would be that this specific claim won’t hold up to inspection, and my argument is intended to reinforce that point. Do you have any data that would support this interpretation of evolutionary sociobiology? Do you seriously expect to find genetic traits that select for sonnet-writing enjoying an advantage in the population? That’s what your statement actually predicts, notwithstanding your later clarification that there is no sonnet gene, and I thought I ought to point out that it’s probably more complicated than that. If on reflection you agree, I’m glad to hear it.

            It’s fairly common among fans of sociobiology to argue that, if something exists, it must have been selected for. How can they prove it was selected for? Why, because it exists, of course. The tautological nature of this thinking seems to satisfy rather than trouble their scientific instincts, but it has always bothered me. If I’m giving you an incredulous stare, it’s because I can’t believe you would seriously argue that writing sonnets ever actually results in having more babies.

            Being an alpha male is different. An alpha male may or may not write sonnets; my guess is that most don’t, and that doing so might actually decrease their social status. Some of them might enjoy doing it nevertheless — perhaps even in secret. I stand by the proposal that there is no clear evolutionary advantage to writing sonnets, and that the intelligence and inclination to do so is not simply about survival. There’s more to it than that.

            Like

          4. “I can’t believe you would seriously argue that writing sonnets ever actually results in having more babies.”

            I wrote that, and I wish someone had called me on it, because it needs correcting. Clearly one can imagine a sonnet-writer wooing a partner with them, and having more babies as a result. They need not even be good sonnet-writers; sometimes it’s the thought that counts.

            I understand why no one brought it up; sometimes it seems best just to let the matter drop. But if I had the chance, I would rephrase it to say “writing sonnets correlates with increased reproductive success compared to the societal norm.” It may not have rhetorical punch, but it reflects my concerns more accurately.

            My intention is not to belabour the point, but to correct a hasty and ill-considered statement with one that is more defensible.

            Like

  9. “But what these arguments always seem to overlook is that we only know what internals, what architecture, to look for, because it’s previously been associated with behavior that triggered our theory of mind.”

    Turing’s test is a little too behaviorist-flavored, which is perfectly understandable given the milieu of his time. But the functionalist turn provides the perfect cure – and it does highlight architecture. Not biological architecture, but functional architecture.

    When the question is intelligence, I think functionalism is spot on. That’s one of my Two Cheers for Functionalism – it aptly describes intelligence (cognition). And intelligence is a key factor in AI safety. As AJOwens notes, there is a camp with the line “we should stop trying to create intelligence as such, because then, should that inadvertently happen, we won’t have to take the responsibility” – and we should be wary of that camp. Functionalism gives us the scientific approach needed to treat “intelligence” meaningfully.

    My other Cheer For Functionalism was about intentionality – the property of “aboutness” had by our beliefs and desires, which some philosophers seem to view as magical, or at least nonphysical. I emphasize desires because some thinkers seem to place desires, wants, and goals into a metaphysical plane that machines can never touch.

    Liked by 1 person

    1. This may be clarifying the real difference between us. I see consciousness as a type of intelligence, or maybe more precisely, as a component of a type of intelligence. Which is probably why stone cold functionalism sits so well with me.

      But I fully realize I’m in the minority. Most philosophers, and even many scientists, insist that they’re different. And they are, to some degree, in that they have different scopes. Not all intelligence is conscious, but all consciousness is intelligence. Consciousness, in my view, is a subset. But the impression I get from most people is that they see them as completely separate phenomena.

      I see this complete distinction as a confusion, one that tempts people down many false paths. But I try to remain open to the possibility that I’m the one confused.

      Liked by 1 person

  10. “assess whether a machine thinks, understands, is intelligent, or conscious, or whatever”

    “Or, whatever” nicely sums it up.

    “The poster example of this is the cephalopods, such as octopuses. They come from an evolutionary line that diverged from ours six hundred million years ago. The architecture of their nervous systems is radically different.”

    Actually, not quite as different as you seem to suggest.

    Here’s a more extended quote from something I’m working on:

    Vertebrates, arthropods, and cephalopods have analogous structures in their brains that play a key role in learning, memory, and spatial navigation. These structures are not the same in vertebrates, arthropods, and cephalopods. They could have independent evolutionary histories, but they serve similar functions and have a central location in the brains of the organisms. In vertebrates, this structure is the hippocampus. In arthropods, the central complex. In cephalopods, the vertical and frontal lobes.

    In vertebrates, the primary structure for spatial and episodic memory is the hippocampus with place cells and the entorhinal cortex (adjacent to the hippocampus) with grid cells. The hippocampus is located deep in the inner part of the temporal lobe of the brain, near the temples and ears, with one on each side. Place cells activate when the animal is in a specific location. They are context-sensitive and may respond differently in different environments. Grid cells activate in a regular, hexagonal lattice as the animal moves through space. The hexagonal patterns are maintained even in the dark and measure distance and direction. Together place and grid cells form a map and metric of space.

    The central complex in arthropods, such as insects, performs a similar function. It is a midline-spanning brain region crucial for integrating sensory information and generating motor commands, particularly for navigation and other behaviors. It is composed of several dense networks of neural fibers in a repeating vertical columnar organization. It has heading-direction cells for orientation and direction and can keep track of distance and movement. Worker bees can forage up to several kilometers away from the hive and return successfully. Surprisingly, they can also inform their hive mates about the location, direction, and distance of a food source using a figure-eight pattern called the “waggle dance.” Recently, grid-like cells, similar to mammalian grid cells, have been discovered in fruit flies. The central complex is characterized by a highly organized, almost crystalline neuroarchitecture with repeating computational elements.

    In cephalopods, the vertical and frontal lobes have been structurally compared to the hippocampus of vertebrates. The vertical lobe complex sits on the back side of the brain region located above the esophagus. Hochner et al. (2003) found that the vertical lobe of octopuses shows persistent strengthening of synapses (the connections between neurons) based on recent activity. This strengthening is thought to be a key mechanism underlying learning and memory, like that observed in the vertebrate hippocampus.

    Vertebrates, arthropods, and cephalopods all exhibit oscillatory rhythms associated with navigation and learning. These neural oscillations coordinate sensory processing, motor output, attention, memory, and timing across brain regions. They occur at multiple frequency bands (e.g., theta, alpha, beta, gamma) and reflect both local circuit dynamics and long-range communication. Vertebrates are the best studied: hippocampal theta and gamma rhythms are linked to learning, memory, and navigation. Mushroom bodies in arthropods are involved in olfaction and learning and show oscillatory activity, especially in bees, locusts, and flies. Odor representations in the insect antennal lobe take the form of synchronized oscillatory patterns, and oscillations in bee mushroom bodies are involved in learning. Cephalopods are the least studied, but octopuses and cuttlefish show clear oscillatory patterns during visual processing and problem-solving. Neural oscillations are a conserved feature across evolution with strikingly similar roles in timing, coordination, and cognition. This suggests that oscillations are not just artifacts of brain activity, but fundamental organizing principles of it.

    Vertebrates, cephalopods, and arthropods have relatively large brains and sophisticated senses, especially vision and hearing. The exceptions without vision, such as cave dwellers, evolved from sighted ancestors and later lost the capability. All three of these lineages have structures centrally located in their brains that map the world spatially and temporally. All three lineages have memory and learning abilities and meet the criteria for unlimited associative learning proposed by Ginsburg and Jablonka. The larger-brained vertebrates and cephalopods are generally believed to have episodic memory. The brains of organisms in all three lineages exhibit complex oscillatory rhythms that correlate with learning, memory, and sensory integration.
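
    As an aside on the hexagonal grid-cell lattice described above, here is a minimal toy sketch of the standard three-cosine model that produces such a pattern of firing fields. The spacing, arena size, and rescaling are arbitrary illustration values, not anything taken from the excerpt.

    ```python
    import numpy as np

    # Toy grid-cell firing map: summing three cosine gratings whose wave
    # vectors are 60 degrees apart yields a hexagonal lattice of peaks.
    spacing = 0.5                           # distance between firing fields (m), arbitrary
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number giving that spacing
    angles = np.deg2rad([0, 60, 120])       # three directions, 60 degrees apart

    # A 2 m x 2 m "arena" sampled on a 200 x 200 lattice of positions.
    x, y = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
    rate = sum(np.cos(k * (np.cos(a) * x + np.sin(a) * y)) for a in angles)
    rate = (rate + 1.5) / 4.5               # rescale to a 0..1 "firing rate"

    # The peaks of `rate` sit on a regular hexagonal (triangular) lattice.
    print(f"peak rate: {rate.max():.3f}, "
          f"fraction of arena above half-max: {(rate > 0.5).mean():.3f}")
    ```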

    Liked by 1 person

    1. I think the terms to use when comparing structures in vertebrate and invertebrate brains are functional equivalence and convergent evolution. Unlike in fish, amphibians, reptiles, birds, mammals, etc., the invertebrate structures don’t share a lineage with ours, at least until we go back to worm-like creatures and the overall chemical toolkit all of these lineages inherited from unicellular organisms. And of course those toolkits themselves haven’t been static since the Ediacaran; they’ve continued to evolve.

      The biggest difference in cephalopods is that a lot more decision making is delegated to the individual arms. Something like two-thirds of an octopus’s nervous system is in those arms. By comparison, we have reflex responses handled by the spinal cord, but that seems much simpler than what octopuses have. It’s often remarked that they may be the closest thing to outright aliens we’ll find on Earth.

      I’d also note that we establish what the structures in their brains do the same way we do in ours: by monitoring brain activity while a particular behavior is in progress.

      Anyway, work in progress is starting to sound pretty epic.

      Liked by 1 person

      1. The brains of vertebrates, cephalopods, and arthropods are all based on excitable membranes, cellular memory, and oscillatory activity that became structured into brains built from neurons.

        I wouldn’t call cephalopod nervous systems “radically different” in structure from human ones. Just different in certain ways but much alike in others. But both are “radically different” from any digital version of a brain.

        Decentralized muscular control is not found solely in octopuses. A “decerebrated” cat can still walk on a treadmill by activating the spinal cord’s central pattern generators.

        CPGs that contribute to walking are even found in the human spinal cord, and they can operate without input from the brain.

        https://en.wikipedia.org/wiki/Central_pattern_generator

        Even in humans, behavior is not always evidence of consciousness. Sleepwalkers can perform complex actions like walking and doing chores.

        Liked by 1 person

        1. A decerebrated animal typically still has its midbrain and hindbrain, essentially its brainstem regions. Not that the spinal cord isn’t capable of some interesting activity on its own.

          But from what I’ve read, an amputated octopus arm is capable of more sophisticated activity. They seem to delegate far more to their arm brains than we do to our spinal cord.

          Liked by 1 person

          1. Sure. Nervous systems concentrate in the brain but usually with varying degrees of decentralization. The question would be whether the “brains” in the arms of the octopus are “conscious” or whether they are more like advanced versions of the neural nets in jellyfish. You can’t tell simply by the behavior in my view.

            Consciousness in my view arises where sensory input integrates with the spatial and temporal mappings through the complex and oscillatory patterns of firings. I expect this would be in the vertical lobe complex of the octopus but not in the “arm brains.”

            Like

          2. A lot of people equate the octopus arm ganglion to a cerebellum. The cerebellum usually isn’t thought to contribute to consciousness in humans; among other things, it lacks recurrent processing, which I think is what those oscillatory patterns are. But unlike the cerebellum, each arm has its own touch and taste sensory processing, and makes its own complex movement decisions.

            An interesting question is, how much information does the central brain receive about what the arm is sensing and doing? And when arms coordinate, how much of that is just between the arms vs going through the central regions? The central brain is described as relatively small and supposedly lacking the body map that mammalian brains have. Their form of consciousness seems like it’s going to be very different.

            Liked by 1 person

          3. The octopus central brain is heavily devoted to vision, even more so than humans, and their vision works a lot like that of humans.

            https://news.uoregon.edu/content/octopuses-map-their-visual-landscape-much-humans-do

            Apparently they also fall for the fake arm illusion, which suggests a sense of body ownership, and that when an arm is touched they have an awareness of it that correlates with their vision.

            https://www.sciencealert.com/octopuses-fall-for-the-classic-fake-arm-trick-just-like-we-do

            Some have pointed out that a somatotopic map for a non-rigid body with eight arms would be almost impossibly computationally intensive, so the more decentralized nervous system makes a lot of sense.

            Here’s a good review of brain and nervous system analogs between octopus and vertebrates.

            https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2018.00952/full

            Liked by 1 person

          4. Have a comment caught in moderation (too many links, I think).

            But also found this interesting bit of research:

            https://www.sciencedirect.com/science/article/pii/S096098222031335X#

            Detailed analysis by Gutnick et al. [2] showed that in two octopuses, after the ‘whole animal’ had learned rough versus smooth, three arms that had never been used correctly during the learning phase were used correctly four times, and never used incorrectly, during the testing phase. These data strongly suggest that the locus of learning, and of trial-by-trial choice, is in the brain (Figure 2, red arrows). (The right-left learning experiments in Gutnick et al. [2] also showed brain involvement, which would be expected since right-left is defined relative to the octopus’s bilaterally symmetric body.)

            And this is suggested as model for how it works.

            These considerations, and Gutnick et al. [2], both suggest the same solution: agents capable of autonomous movement and data collection reporting back to a central unit that can integrate input from multiple agents, learn from this input, and direct any agent to produce the correct behavior on demand.

            Liked by 1 person

  11. I think there are two separate questions that need to be answered, or at least asked: #1, can computers and/or robots simulate consciousness to some degree? And #2, can computers and/or robots ever attain consciousness as we understand it in humans and certain other animals? I think #1 is fairly easy. It seems obvious to me that computers and/or robots can be programmed to simulate any function as long as we understand the biological correlate of that function. I’m not so sure about #2. For one thing, computers and/or robots are digital (discontinuous) systems, whereas humans, animals, and plants are all analog systems (continuous action). I don’t know that anyone has ever addressed this issue.

    Liked by 1 person

    1. The question would be how we empirically distinguish between #1 and #2. It’s worth noting that for something to evolve, natural selection can only select for or against traits that make some kind of difference.

      Every continuous process can be reproduced by a discrete one, provided the discrete one has enough capacity. The discrete one just needs quantization noise lower than the intrinsic noise of the original continuous system. It’s why we’re able to digitally stream music or movies that were originally recorded in analog.
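
      A minimal numerical sketch of that claim, with a made-up signal, noise level, and bit depths (none of these numbers come from the discussion above):

      ```python
      import numpy as np

      # Toy "analog" signal: a sine wave plus the system's own Gaussian noise.
      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 48_000)
      intrinsic_noise = 0.01              # the continuous system's own variability
      signal = np.sin(2 * np.pi * 440 * t) + rng.normal(0, intrinsic_noise, t.size)

      for bits in (4, 8, 16):
          step = 2.0 / 2 ** bits          # quantization step over a roughly [-1, 1] range
          quantized = np.round(signal / step) * step
          err = np.sqrt(np.mean((quantized - signal) ** 2))
          print(f"{bits:2d} bits: quantization error RMS = {err:.6f} "
                f"(intrinsic noise = {intrinsic_noise})")

      # With enough bits, the quantization error falls far below the signal's own
      # noise, at which point the discrete copy carries everything about the
      # continuous original that made any difference.
      ```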

      Liked by 1 person

      1. Interesting point. In thinking about it some more, it occurs to me that we’d have to ensure the linkage from any one digital process to any other digital process depending on it takes the same time as (or less than) a biological neural impulse takes to cross the gap between synapses, unless consciousness functions depend on quantum-level processes and response times.

        Liked by 1 person

        1. Not sure on timing. The processing obviously has to be fast enough for the system to respond in real time to the affordances in its environment. Where analog may have an advantage is that digital systems seem to need a lot more energy.

          In the contemporary neuroscience textbooks I’ve read, the only place I’ve seen quantum effects mentioned are in discussions of some of the scanning technologies. If quantum effects are significant, they seem to be below the ability of current methods to measure.

          Liked by 1 person

          1. Thanks. I’m familiar with the broad strokes. The thing to remember is that it’s not just speculative biology, but also speculative physics. For it to be true, Penrose’s theory about QM has to be true. The vast majority of biologists and physicists don’t find their respective side of the speculation well motivated by the data.

            Liked by 1 person

          2. I’ve brought this analog vs. digital issue up before. What I can’t see is how quantum computing solves that issue, or any other issue about consciousness, that other types of non-classical computing (like EM field theories) couldn’t also solve.

            There could be “something” quantum happening in the microtubules, but I think the main computing we’re interested in happens at the multi-neuron level.

            Liked by 1 person

  12. What about taking the Turing test (TT) as an entry point to the meaning of information?
    The TT addresses the capability of computers to understand and answer questions as well as humans would. Now, if we consider that understanding a question is accessing its meaning, this positions the TT as usable for comparing meaning generation in humans and in computers.
    And using an evolutionary approach (again…) allows us to model meaning generation for basic life, and to scale the model up to humans and artificial agents.
    It happens that this has been tried, and it leads to the conclusion that today’s computers cannot think as humans do, because humans have to satisfy constraints, like “look for happiness”, that cannot be transferred to today’s computers. This is because we do not understand those constraints well enough.
    ( https://philpapers.org/rec/MENTTC-2 , https://philpapers.org/rec/MENITA-7 ).
    So, bottom line, the TT fails because of our insufficient understanding of life and consciousness, as well as because of the limited performance of today’s computers. The TT brings a lot…

    Liked by 1 person

    1. There’s an old phrase in AI research, “the barrier of meaning.” The real debate about the Turing test is whether it demonstrates that the barrier has been cracked. I think Searle’s whole point can be summed up as an argument that it doesn’t. Of course, this depends on what we mean by “meaning.”

      My take is that meaning is a relationship between the upstream and downstream causal effects of a pattern. More succinctly, we can think of information as causation, or as a snapshot of causation.

      In that sense, I think cracking the barrier of meaning requires that the system in question have a comprehensive world model. It’s what continued scrutiny of language models eventually reveals to be lacking. What those models have is in their name: a model of language. They only have a world model to the extent our extant language represents the world. But all writing leaves a lot unsaid, depending on our common world models for the reader to understand. The same applies to other generative models.

      There are tests which focus on this more precisely, such as Winograd Schema tests and Bongard Problems. The trick with modern LLMs is that you can’t use any stock examples from a textbook; the right answers will already be in the language model. You have to construct something never published. But as soon as you do, the LLMs fail, at least in my experience.
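
      To make that concrete, here’s a made-up Winograd-style pair, invented here rather than taken from any published set; the `answer_fn` argument is just a stand-in for whatever model call you’d use:

      ```python
      # One sentence, one word swapped, and the referent of "it" flips.
      # Getting both right takes a bit of world knowledge about drones and
      # charging pads, not just word statistics.
      schema = {
          "sentence": "The drone couldn't land on the charging pad because it was too {}.",
          "question": "What does 'it' refer to?",
          "variants": {
              "wide": "the drone",          # a too-wide drone won't fit on the pad
              "small": "the charging pad",  # a too-small pad can't hold the drone
          },
      }

      def score(answer_fn):
          """answer_fn(sentence, question) -> referent string; stand-in for a model call."""
          correct = 0
          for word, expected in schema["variants"].items():
              sentence = schema["sentence"].format(word)
              if answer_fn(sentence, schema["question"]).strip().lower() == expected:
                  correct += 1
          return correct / len(schema["variants"])
      ```

      A person answers both variants trivially; the interesting question is whether a model does when the pair has never appeared anywhere in its training data.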

      Like

  13. Agreed that the TT does not break the meaning barrier. The proposed evolutionary approach with the Meaning Generator System, together with Searle’s Chinese Room Argument, shows that the TT is not valid for testing machine thinking.
    About meaning being a relationship between the upstream and downstream causal effects of a pattern: your position looks close to the MGS approach, where meaning is defined as the connection between received information and the constraint of the system.
    But the “comprehensive world model” that a system may need to crack the meaning barrier looks a bit circular to me. Doesn’t a comprehensive model of the world for a system already include the meanings relative to the world?
    Meaning generation could be, I feel, an entry point for an understanding of the world. The difficulties are with the constraints guiding our human meaning generation (the Maslow pyramid, plus look for happiness, plus limit anxiety, plus…).
    Our evolutionary nature as self-conscious primates still has a lot to bring to this area, and perhaps not all of it pleasant…

    Like

    1. I actually think the longer a system is able to pass the Turing Test, the higher the probability that its design does break the barrier. It’s never 100%. But then no knowledge ever is.

      “Doesn’t a comprehensive model of the world for a system already include the meanings relative to the world?”

      It does. Not sure how you’re interpreting my statement. I only phrased it that way to describe what I think needs to happen in engineering / teleofunctional terms for the design of a system to crack that barrier.

      Like
