The barrier of meaning

In the post on the Chinese room, while concluding that Searle’s overall thesis isn’t demonstrated, I noted that if he had restricted himself to a more limited assertion, he might have had a point: that the Turing test doesn’t guarantee a system actually understands its subject matter.  Although the probability of humans being fooled plummets as the test goes on, it never completely reaches zero.  The test depends on human minds to assess whether there is more there than a thin facade.  But what exactly is being assessed?

I just finished reading Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans.  Mitchell recounts how, in recent years, deep learning networks have broken a lot of new ground.  Such networks have demonstrated an uncanny ability to recognize items in photographs, including faces, have learned to play old Atari games at superhuman levels, and have even made progress in driving cars, among many other things.

But do these systems have any understanding of the actual subject matter they’re dealing with?  Or do they have what Daniel Dennett calls “competence without comprehension”?

A clue to the answer is found in what can be done to stymie their performance.  Changing a few pixels in a photographic image, in such a manner that humans can’t even notice, can completely defeat a modern neural network’s ability to accurately interpret what is there.  Likewise, moving key user interface components of an old Atari game over by one pixel, again unnoticeable to a human player, can completely wreck the prowess of these learning networks.  And we’ve all heard about the scenarios that can confuse a self-driving car, such as construction zones, or white trucks against a cloudy sky.
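To make the fragility concrete, here is a minimal sketch of the kind of gradient-based pixel perturbation involved, assuming a pretrained PyTorch image classifier; the model, image, and label names are placeholders, and the attacks reported in the literature are usually more elaborate than this.

```python
# A minimal sketch of a fast-gradient-sign style adversarial perturbation.
# Assumes a pretrained PyTorch classifier `model`, a batched input `image`
# tensor scaled to [0, 1], and its integer class tensor `label`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.003):
    """Nudge every pixel by at most +/- epsilon in the direction that
    increases the classifier's loss; often enough to change its answer."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With an epsilon that small, the perturbed image is visually indistinguishable from the original, yet the predicted label can change completely.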

In other words, whatever understanding might exist in these networks, it remains thin and brittle, subject to being defeated by unforeseen stimuli or, even worse, to being completely fooled by an adversarial attack carefully crafted to defeat recognition (as with the face of a fugitive).  These systems lack a deeper understanding of their subject matter.  They lack a comprehensive world model.

The AI pioneer Marvin Minsky long ago made the observation that with AI, “easy things are hard.”  That is, what is trivially easy for a three-year-old often remains completely beyond the capabilities of the most sophisticated AI systems.

Ironically, it’s often things that are hard for humans that computer systems can do easily.  The term “computer” originally referred to skilled humans who performed calculations.  Computation was the original killer app, a capability that the earliest systems were able to do in a manner that left human computers in the dust.  And we all use systems that do accounting, navigation, or complex simulations far better than anything we could do ourselves.

But even the most sophisticated of these systems remain severe idiot savants, supremely capable at a limited skill set, but utterly incapable of applying it in any general manner.  Moving beyond these specialized systems into general intelligence has remained an elusive goal in AI research for decades.  The difficulty of achieving this is often called “the barrier of meaning.”  But what exactly do we mean by terms like “meaning”, “understanding”, or “worldview”?

As humans, we spend our lives building models of the world.  For instance, even a very young child understands the idea of object permanence, that an object won’t go away if you look away from it, at least unless someone or something acts on it.  Or unless it itself moves, if it’s that kind of object.  We can think of object permanence as an example of intuitive physics, and the understanding that some systems can move themselves as intuitive biology.

As children get older, they also develop intuitive psychology, a theory of mind, an understanding that others have a viewpoint, and an ability to make predictions about how such systems will react in various scenarios, enabling them to navigate social situations.

These intuitive models serve as a foundation that we build symbolic concepts on top of.  Arguably, we only understand complex concepts as metaphors of things we understand at a more primal level, that is, a level involving our immediate spatio-temporal models of the world.  When we say we “understand” something, what we typically mean is that we can reverse the metaphor back to that core physical knowledge.

So what does this mean for getting an AI to a general level of intelligence?  Mitchell notes that a lot of researchers are now thinking that AI will need its own core physical knowledge of the world.  Only with that base will these systems start understanding the concepts they’re working with.  In other words, AI will need a physical body, along with time to learn about the world.

But is even that sufficient?  The other day I relayed Kingson Man’s and Antonio Damasio’s proposal that AI needs to have feelings rooted in maintaining homeostasis to really get this general understanding.  This might make sense if you think about what it means to actually perceive the environment.  A perception is essentially a cluster of predictions, predictions made for certain purposes.  For us, and any other animal we’re tempted to label “conscious”, those purposes involve survival and procreation, that is, feelings.  A system with instincts not calibrated for satisfying selfish genes may perceive the world very differently.

Which leads to the question, how similar to us do these systems need to be to achieve general intelligence?  At this point, I think it’s worth noting that it’s somewhat of a conceit to label our own type of intelligence as “general.”  A case could be made that we’re a particular type of survival intelligence.  Is the base of our intelligence the only one?

On the one hand, if we want AI systems to be able to do what we do, then it seems reasonable to suppose that their intelligence should be built from similar foundations.  Although hewing too closely to those foundations opens us to the dangers I noted in the post about Man and Damasio’s proposal.  We want intelligent tools, not slaves.

On the other hand, do we necessarily want to limit AI systems by the biases of our species, or even of all animals?  Among the things we hope to get from AI are insights that we as a species may be blind to.  If we build them too much in our image, it seems like we’d be forgoing those types of benefits.

But it might be that we have little choice.  Maybe we have to start by giving them a base similar to ours, until we can learn enough about how this all works.  Once we understand more, it may become obvious whether alternate bases are feasible.  If they are, those alternate bases might produce astoundingly alien minds.

Even starting with the physical base, I tend to think shooting directly for human level intelligence is unrealistic, at least until we first have fish level, reptile level, mouse level, or primate level intelligence to build upon.

How will we know that we’ve achieved general intelligence?  A robust version of the Turing test remains an option, but Mitchell also discusses other interesting tests that go a bit further.  One is a Winograd Schema test.  Consider the following two sentences:

The city councilmen refused the demonstrators a permit because they feared violence.

The city councilmen refused the demonstrators a permit because they advocated violence.

In the first sentence, who feared violence?  In the second, who advocated violence?  The answers are clear to humans with a world model, but no current AI can answer them.  Winograd Schema challenges attempt to get at whether the system in question has a real conceptual understanding of the text.
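To make the structure of the test concrete, a schema pair can be represented as two sentences that differ by a single word, with the pronoun’s referent flipping between them. In the sketch below, resolve_pronoun is a hypothetical stand-in for whatever system is being evaluated, and the pair-level scoring is just one common option.

```python
# A Winograd schema pair: identical except for one word, but the pronoun
# "they" refers to a different group in each variant.
schema = {
    "sentence_a": "The city councilmen refused the demonstrators a permit "
                  "because they feared violence.",
    "sentence_b": "The city councilmen refused the demonstrators a permit "
                  "because they advocated violence.",
    "pronoun": "they",
    "candidates": ["the city councilmen", "the demonstrators"],
    "answer_a": "the city councilmen",
    "answer_b": "the demonstrators",
}

def passes_schema(resolve_pronoun, s=schema):
    """Credit only if both variants are resolved correctly, so a system
    guessing at random gets the pair right only ~25% of the time."""
    ok_a = resolve_pronoun(s["sentence_a"], s["pronoun"], s["candidates"]) == s["answer_a"]
    ok_b = resolve_pronoun(s["sentence_b"], s["pronoun"], s["candidates"]) == s["answer_b"]
    return ok_a and ok_b
```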

Another type of test uses Bongard problems, where two groups of images are provided, and a subject is asked to identify the characteristic that distinguishes the two groups from each other, such as all small or all large shapes.  It’s a test of pattern matching ability that humans can usually handle, but that again is currently beyond machine systems.

I’m not sure these tests are truly beyond the reach of a system that could conceivably provide answers without deep comprehension, but elaborations like Winograd schemas or Bongard problems do seem far more robust.  But then, a one-hour Turing test also seems extremely difficult to pass with shallow algorithms.

So remember, when seeing breathless press releases about some new accomplishment of an AI system, ask yourself whether the new accomplishment shows a break in the barrier of meaning.  I don’t doubt that announcement will come some day, but it still seems a long way off.

Unless of course I’m missing something.

52 thoughts on “The barrier of meaning”

      1. The classics are classics because they stand the test of time! It even has a Wiki page!

        (Which is more than I can say for S.L. Huang or her novels. I just read Zero Sum Game, and I can’t believe I can’t look up either the author or the novel on Wikipedia. I’m shocked.)


        1. Wikipedia does tend to have blind spots. Of course, a lot of people might say we should just add the article ourselves. But the community has notability criteria to decide what articles to allow, the enforcement of which sometimes discriminates against female authors.


    1. It certainly doesn’t appear to be an issue for the immediate future. Longer term? I think a lot depends on whether we end up building intelligent tools, or survival machines. The latter’s interests may not align with ours.


        1. Thanks. That was interesting. Game learning doesn’t seem to translate well into the real world, and you have to wonder how robust is the knowledge these little guys are gaining, but it does seem to show that AIs can demonstrate creativity.


          1. That would be my question. If you make very small changes to their environments, do their skills vanish and they have to start over? As you noted, just moving things a pixel on the display of video game playing systems can destroy their ability to play the game at all.


  1. I think the clue is in homeostasis, namely that our intelligence takes the form of a set of controls that translate sensed inputs into external adjustments for a particular purpose. Consciousness is then control of control. So the subject matter of conscious intelligence is not the external world, it is the internal set of first order control modules that we have available to us. Language and grammar are the clue to how we structure, combine and communicate about those controls – as objects, attributes, actions and subjects of actions, all with implications for our own sense of what will have good and bad implications for us.

    So for me a mindset of ‘self control of control for a purpose’ is a key starting point to get beyond mere statistical pattern recognition, or brute force search, because ‘control’ forces us to think about inputs and outputs in relation to each other, and in the context of an overarching intent.


    1. I think that’s essentially true. Consciousness, at least the human variety, is knowledge of our knowledge, awareness of our awareness, or as you note, control of our control. It’s a second order (or higher) phenomenon.

      But that second order has to be built on top of a robust first order framework, and that appears to be where we’re struggling. That’s the irony of AI. Easy things are hard, while hard things are often easy. But without the “easy” things, AI remains thin and fragile.


      1. The structure and intelligibility of the first order framework is helped by breaking it down into a number of control relationships that map sensed inputs to motor outputs, each in the service of a particular measure of goodness, in a particular context.

        Then the requirement that the second order control framework should be able to manipulate, combine and communicate about elements from the first order control framework, taking into account what is open to control by the self, versus what is ‘given’ by the world, gives rise (I conjecture!) to meaning, self, ‘what it’s like’ and the appearance of free will.


        1. I pretty much agree. Although I think there are likely more than two orders. And in humans, when we get to introspection, it becomes somewhat recurrent. In many ways, we’re describing higher order theories of consciousness, which I’ve found increasingly plausible over the last year or so.


    1. I think, to a program, there are patterns of pixel values which have attributes that give them high probabilities of being associated with the pattern of bytes: ‘b’, ‘l’, ‘a’, ‘c’, ‘k’, ‘ ‘, ‘h’, ‘o’, ‘r’, ‘s’, ‘e’.


      1. Just to be super pedantic: Those pixels, once processed in a certain way, match a point in a phase space with lots of dimensions. Since many other sets of pixels (all tagged with the label “black horse”) were previously processed, and those sets of pixels after processing created “nearby” points in that phase space, that region of phase space is, as Mike said, associated with that label.

        Any unknown point is identified based on what known points are nearby. Such systems are essentially holographic search engines.
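        As a toy illustration of that picture (not how any particular deep network is actually implemented), classifying by proximity in a feature space might look like this, assuming some embed() step has already mapped each image to a vector:

        ```python
        # Label an unknown point by the labels of its nearest neighbors in a
        # learned feature space; `known_vecs` is an (N, D) array of embeddings
        # for previously seen images, `known_labels` their N tags.
        import numpy as np

        def nearest_label(query_vec, known_vecs, known_labels, k=5):
            dists = np.linalg.norm(known_vecs - query_vec, axis=1)
            nearest = np.argsort(dists)[:k]
            labels, counts = np.unique([known_labels[i] for i in nearest],
                                       return_counts=True)
            return labels[np.argmax(counts)]  # majority vote among the k nearest
        ```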


          1. The question is, is perception itself any more than a holographic search engine? At a certain level, we do know it’s more than that, since an organism’s current actions and mood affect its perceptions, that is, which predictions it makes based on sensory data. On the other hand, maybe it’s still a search engine, just with lots more terms thrown in than the current machine learning networks.


          1. Indeed. A difference may have to do with to what extent these neural networks encode associations, since it seems that’s a lot of where our power comes from. Identification is not the same as placing in some greater context.


  2. Interestingly enough, it seems that human communication is a negotiation over meaning. We use words that mean one thing to me and another to you but if we speak long enough we understand one another. But a couple of skinheads conversing may take the same words as a couple of business people and come to a completely different understanding.

    I see many people thinking that “meanings” exist outside of their minds, residing in dictionaries, etc., and that our “meaning” can be either right or wrong based upon some objective standard (this thinking underlies much religious discourse), but I suggest that meanings are not objective things and are quite malleable. So, how is an AI supposed to master context, meaning, nuances of communication, etc., to become “conscious”?

    Methinks we ask too much. Instead of us teaching AIs how to think like us, wouldn’t it be nice to have AIs learn how to think on their own, and then have two ways to address our problems instead of just one (natural and artificial, but the same)?

    Interesting also is that dictionaries often list dozens and dozens of meanings for the same word … depending upon context, which makes my point for me.


    1. On dictionaries, it is notable that every word in the dictionary is defined by other words, so you could go round and round in circles reading definitions in terms of other words, and never break outside the dictionary. There must be some partial element of meaning in the internal structure of the dictionary, but it is not until the words make contact with the real world that meaning is cashed out.


    2. In terms of the definitions of words, I agree, they’re utterly relative and subjective.

      On the other hand, information, the patterns we observe, often has implications due to its causal history. For instance, the number of tree rings tells us how many years that tree lived. So not all meaning is subjective. But I do think it’s all relative, which isn’t the same thing.

      Words can be particularly vexing, because as Peter mentioned, dictionaries always define them in terms of other words. And those definitions are based solely on usage (at least in traditional dictionaries).

      But language always eventually reduces to conscious experience or action. For example, once you get to words like “red” or “cool”, it’s difficult to define them without resorting to synonyms. If you don’t understand the synonyms, you hit a wall. If you doubt this, try explaining what “purple” is like to someone born blind.

      We don’t hit the wall because we have the conscious experience these words represent. But these are sensory, visceral, primal experiences. We can’t describe them to an AI. It must experience them itself if it’s to understand them.


      1. For me, to understand something is to have a sophisticated conceptual structure such that the structure can be queried and produce correct answers. Thus, king – man + woman? Queen.
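        As a rough sketch of the arithmetic behind that king/queen example, assuming a dictionary mapping words to pretrained embedding vectors (word2vec- or GloVe-style) loaded elsewhere:

        ```python
        # Answer "a is to b as c is to ?" by vector arithmetic in embedding space.
        import numpy as np

        def analogy(vectors, a, b, c):
            """Return the word closest (by cosine) to vec(a) - vec(b) + vec(c),
            e.g. analogy(vectors, "king", "man", "woman") often yields "queen"."""
            target = vectors[a] - vectors[b] + vectors[c]
            def cos(u, v):
                return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            candidates = (w for w in vectors if w not in (a, b, c))
            return max(candidates, key=lambda w: cos(vectors[w], target))
        ```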

        The reason that pattern recognition networks can be fooled is that their conceptual structures, their understanding, is very shallow. When we combine face recognizers with nose recognizers and eye recognizers and mouth recognizers etc., face recognition will become much more robust. (“I see a face, but no eyes or nose or mouth, so hmmmm, maybe not”).

        I think we can have very good understanding without experience, and easily have misunderstanding with experience. Consider this statement: “Scientists have just invented a filter that can block all electromagnetic radiation from the Sun without blocking any of the heat.” A typical 5 year old might say “Cool!”, whereas Mary the color scientist is likely to say “Hey, wait a minute …”. There’s also this example from my personal experience with (my) baby twins:
        “ … and these are our twins, [girl name] and [boy name]”
        “Oh, a boy and a girl, how nice. Are they identical?”

        I guess my point is that understanding, i.e., intelligence, is separable from experience. Experience, or perception, is simply what ties the physical world to a conceptual structure. Experience can also create or alter conceptual structure. But I see no reason that conceptual structure cannot also be designed.

        *


        1. I think this gets into the question I asked in the post, can an AI have what we call “general” intelligence from a base different than our own? I agree that it seems possible. They could start from a very different base, but still have their concepts overlap enough with ours to be useful. Although we should expect shockingly unhuman responses from them.

          What seems to be needed though, is depth, a depth that leads to concepts much more abstract than the current systems are able to accomplish. Can we take an image recognition network, a language comprehension one, and a host of others, have them feed their answers into a hierarchy of more general association networks, that could then provide meaningful backpropagation feedback to the sensory networks?

          Feinberg and Mallatt, in their book, identify five layers in the human visual system, but their method of counting those layers seems simplistic. There are three just in the retina, another in the thalamus, and then there are various cortical regions: V1, V2, V3, etc., each of which, as a cortical region, has six layers. All of which seems to indicate, to me, that biological networks are using a lot more layers than the artificial ones so far.


  3. “The answers are clear to humans with a world model, but no current AI can answer it.”

    I think that gets at the root of the problem. Humans do have a world model and our brains seem to be able to associate very diverse facts and memories (quite likely even from different parts of the brain) to bring them to bear on any task. So the councilmen sentences invoke not just a vocabulary and grammar but an understanding of what councils do and what demonstrators do, possibly from memories from personal observation, reading, television, or other sources to resolve the ambiguities.


    1. Definitely. The question is how we do it. At least part of the answer is our innate impulses and intuitions, which allow us to learn certain things much faster than the machine networks. Another part is that we haven’t created anything like the deep layers and network of networks that make up the brain yet. Yet another is that we still don’t fully understand all the backpropagation that happens in the brain, particularly how movements and emotional states affect perception.


    2. “Humans do have a world model..”

      The metaphysics behind that statement asserts: The “reason” humans have a world model in the first place is because they are not separate and distinct from the model itself but are intrinsically linked to that world by virtue of being a part of the whole. It is this feature which gives rise to objective experience itself and makes meaning possible for all conscious systems; the mind itself being the most complex system within the whole. In other words, the mind is not separate and distinct from the whole, it is merely a part. It is because of the emergence of mind that intricately, complex meaning is possible…

      Peace


      1. Help me out with this Lee. I’m an extreme naturalist and perceive you to be the same. This is to say that neither of us believe in any spooky stuff. As an existing mind, from such a premise I must be a causally produced product of reality. But it seems to me that as such I will not have an objective perspective, but rather one that’s subjective, or what reality imposes upon me. Here I might be amazingly enlightened, or deluded, though on the basis of the causality of the system either way.

        So then what’s this talk of “objective experience”? Reality functions objectively, and as a product of it I must function that way as well. But here I should be a causally produced product of that system. You can’t mean that I’m some kind of god that grasps our realm from beyond reality? So what do you mean?


        1. Eric,
          Since we do not know what reality actually is, and by virtue of being a derivative of that reality through the mechanism of causality as such, our own experience will be an objective experience that is radically indeterminate. Which in layman’s terms means: we are just too dumb to figure it out.

          According to my model, since there is no such thing as a subject, only objects which have qualitative properties that are either determinate or indeterminate, the meaning of the word “subject” defaults to a definition of either “subordinate” or “topic”. So in this context, if we are just too dumb to figure out what our indeterminate objective experience actually is, then that experience by default becomes subordinate to the solipsistic self-model’s own interpretation of that experience. Having said that, this rationale in no way makes our experience a subjective experience, it’s still an objective experience which is radically indeterminate and we are all just too dumb to figure it out. In conclusion: reality is not subordinate to any ideas we choose to embrace, the inverse is actually true.

          Hope that helps


      2. I think I get it Lee. You’re saying that objective reality is all that ultimately exists. Thus everything which constitutes you or me exists objectively in order for it to exist at all. Furthermore I perceive you to be unhappy with all the certainty which the human tends to display about such noumena. For you platonism must be particularly odious. We should be more respectful given that we’re ultimately idiots who thus display mere “subjective opinions”. So it could be that you’ve taken to denying the subjective entirely in order to help fight the false claim of various arrogant subjective propositions. Is that a reasonable assessment?

        Either way I wonder if you have a problem with my own use of the term? To me this represents a guess on the basis of potentially false evidence, though I consider it to be the only position from which the human can potentially work (and this is still from the presumption that causality does not fail). Note that in many regards science does seem to progress anyway. As you know, I’m interested in helping to fix the areas where science seems most troubled.


        1. Eric,
          Here’s a statement I posted on Phillip Goff’s blog which directly addresses your own assessment.
          ________________________________________________________________________

          Lee Roetcisoender says
          November 18, 2019

          “Nobel prize Physicist David Gross says fundamental reality is one that can be calculated, measured, or observed.”

          The assessment of David Gross is a derivative of subject/object (SOM) which simply means; rationality itself is held hostage by that construct. Structured systematic thought is based upon a hierarchy, a hierarchy which creates the foundation upon which everything else is constructed. The entire structure is therefore subordinate to and literally held hostage by its very foundation. SOM is the latent foundation which becomes the prism of how we in the West view the world. SOM is a suppressive model which needs to be jettisoned if the sciences are to move forward.

          Peace
          ________________________________________________________________________

          Fundamentally, I am opposed to subject/object metaphysics (SOM) simply because it makes an ontological distinction between the subject and the object as articulated by the Nobel prize Physicist David Gross. The distinction is completely arbitrary and yet, that ontological distinction is what cripples the very advancement of the institutions of science. If you think my concerns are unfounded, try challenging the architecture of (SOM) as the underlying architecture with any scientist or physicist, let alone bring up the topic and see how far you get. The model is cast in concrete and might as well be the very word of God.

          The discrete binary system of rationality is held hostage by the (SOM) paradigm. And yes, we can trace its origin back to Plato, Aristotle and all of their Greek cronies. SOM is venerated in the West just as the Pope is revered in Catholicism; it is “the” grounding tenet of structured, systematic thought.


      3. Lee,
        For all I know this Phillip Goff would agree with us that reality in a “fundamental” state cannot be calculated, measured, or observed — that is once our noumena distinction were explained to him. But I don’t know the guy so I can’t really say. Furthermore I don’t know physicists or scientists in general. One specific physicist that I do consider myself reasonably familiar with, is Sabine Hossenfelder. My perception is that you’re also quite familiar with her. Do you consider this physicist to also possess such arrogance?


        1. “Do you consider this physicist to also possess such arrogance?”

          Absolutely. I don’t think there is any question where Sabine stands. She is the quintessential bigot when it comes to the notion of metaphysics. In direct contrast, I consider metaphysics to be the purest form of scientific inquiry. Unfortunately, it’s a lost art. Furthermore, if people were to actually run into a real metaphysician they wouldn’t know how to deal with one. In our modern era of science and technology, metaphysics is literally like a foreign language, simply because metaphysics addresses the objects of our reality which have qualitative properties that are radically indeterminate.

          Objects, which have qualitative properties that are radically indeterminate are ignored by the scientific community and literally dismissed. Granted, I get their concerns, because the religious, i.e. mystical institutions which ruled our lives in the past were terrible task masters. Nevertheless, adhering to an arbitrary distinction made by the SOM paradigm is an irresponsible position to take. Furthermore, demonizing philosophy and metaphysics is as detrimental, if not more destructive than the inquisition of the dark ages. So pick your poison…

          As far as Goff is concerned, since he is a member of the Church of Reason, i.e. an academic himself, he is not free to express himself for fear of being ostracized by the Church. And I’m not sure that he is even capable of free thought. Other academics, the likes of Chalmers or even Schwitzgebel, are also stymied of free thought. They have been programmed by the Church of Reason to be compliant and subordinate. It is because of this persecution and subsequent purging that we do not see philosophers or metaphysicians within the academic and scientific communities. So yeah, I lay the blame at the feet of our scientific and academic communities for allowing themselves to be held hostage by a radically repressive paradigm called subject/object metaphysics.

          Good talking to you Eric…
          Peace


          1. I think I’m going to regret this, but …

            Lee, what do you mean by “qualitative properties that are radically indeterminate”? Can you give examples? What is a qualitative property? What does it mean for a property to be indeterminate, and then radically indeterminate?

            *


            2. No need for regret James. A qualitative property that is determinate has well defined boundaries, like the mass, spin and charge of a particle. A qualitative property that is indeterminate does not have boundaries which can be captured by any of the so-called laws of physics. Indeterminate qualitative properties cannot be calculated, weighed, measured or tested, such as the qualitative property of consciousness.

            The distinction dividing indeterminate qualitative properties from radically indeterminate qualitative properties is really an arbitrary distinction and has to do with the complexity of the object being investigated. Some metaphysical phenomena are easier to articulate than others.

            Peace


  4. This is a post which displays my own perspective, for example, in how vast learning supercomputers set up to play Pac-Man at superhuman levels can fail given a one-pixel adjustment. Of course a human wouldn’t even notice such a discrepancy, given that it functions in a very different way. The only way to build a computer which does what we do, I think, would be to have it produce a second entity with affective states to serve as an agent, or something that’s conscious as I define the term. It’s exclusively this second entity which should have any potential for “meaning”, “understanding”, or “a worldview”.

    Apparently as life ventured into more “open” environments, or circumstances that it thus couldn’t generically program to effectively deal with (and here I mean situations unlike the game of chess), non-conscious central organism processors did poorly. Note that our robots also do poorly under more open environments. So apparently evolution added conscious function, or the kind which is displayed by the brainless boy in the classic tale of Pinocchio. But while Pinocchio displays consciousness by means of magic, life displays consciousness by means of brains. (Or I could use the straw man from the Wizard of Oz. And hey, don’t straw man my straw man! 🙂 )

    So Mitchell believes that what AI needs is physical knowledge, and proposes that this could be attained by means of a physical body given time to learn about the world? So apparently this is standard stuff. Panpsychists put their money on complexity. IITers put their money on life based complexity. Here we seem to have the view that if you build a computer driven body, and give it some time functioning that way, mind will emerge as well. Is that her position?

    Mike,
    Since our conversations can go on forever, in which case I rarely engage in other posts, here’s a response for your last reply found here: https://selfawarepatterns.com/2019/11/10/the-problems-with-the-chinese-room-argument/comment-page-1/#comment-42491
    I certainly agree that psychology and neuroscience are related. When they conflict however, the weaker must be discounted against the stronger. Given the many big questions which need answering in consciousness studies today, to me it’s not a given that we’d see neural correlates for things such as “affection”. Barrett is probably hoping for too much convenience here, which is to say that if we don’t find what we expect, then it must be because it doesn’t exist at all.

    I consider the theory of constructed emotion to be suspect for a couple of reasons. Firstly, how would conscious life in general evolve if it had to be taught to feel things like fear, affection, or hatred? Shouldn’t these be standard tools which evolution adjusts in a population for species survival? And even if only the human were considered “conscious”, if environments were responsible for teaching us to feel the complex things that we do, wouldn’t we see far greater disparities between humans under different circumstances?

    I’m sure, for moral reasons, a clinical psychologist will assume a subject has conscious feelings. But in terms of understanding, aren’t you the one arguing that we shouldn’t let morality cloud our judgement in these matters?

    Yes, I believe that psychology needs to be explored amorally just as hard sciences are. But I also argue that evolution didn’t leave emotions to the whims of social education. So here we should presume that people feel all sorts of things given their humanness, including babies. Did even Steven Pinker in his “Blank Slate” book, foresee the rise of someone like Barrett? (Well I guess he probably did, but still…)

    Actually in matters of reasoning, I like that you wear the hat of a heartless Vulcan. This is the amoral stance which I believe our soft sciences need in general. It’s difficult to maintain however given that we’re socially punished for proposing uncomfortable things about ourselves. Perhaps I’ll explain this again over on Eric Schwitzgebel’s recent post.

    You told me “If the primary business of nervous systems is information processing, then the claim [that a “locked in” person may feel horrible frustration while a software package will not] falls out as a natural consequence.” But no, from what you’ve just stated it doesn’t fall out. I’m also able to claim that information processing is the primary business of the nervous system. I simply suspect that it does something else as well, or implement whatever physics is required to cause affects. You instead seem to be proposing that the exclusive business of the nervous system is information processing, or a much stronger claim. Thus Chinese rooms and all manners of sci-fi fun become conceptually possible. You needn’t go quite that far.


    1. Eric,
      “Here we seem to have the view that if you build a computer driven body, and give it some time functioning that way, mind will emerge as well. Is that her position?”

      The actual view is that a general world model requires core physical knowledge as a base. But that doesn’t mean an enormous amount of engineering isn’t still needed. I think whether that’s the only way it could happen is debatable, but portraying it as an assertion that if we just give AI a body then general intelligence will emerge, is straw manning the position.

      “Firstly, how would conscious life in general evolve if it had to be taught to feel things like fear, affection, or hatred?”

      Again, that’s not Barrett’s position. She is open that animals have basic affects. She explicitly lists the four Fs: feeding, fighting, fleeing, and mating, as innate. But in her view (and that of other constructivists), there are survival circuits in all animals, survival circuits which are ancient, but these are not the same thing as the feeling.

      As I’ve noted before, I think the notion that feelings only happen in humans is too restrictive. (Although, as always, a lot depends on the definitions being used.) But I agree with Barrett and Ledoux that far too many people conflate the feeling with the survival circuit, what I often call the reflex.

      On your final paragraph, since you’re quoting stuff from another thread, I’m going to first quote the snippet from you that I was responding to:

      “But if the theory that generic computer processing alone which follows a specific procedure can produce “thumb pain” and so on, well that would be a different story. This seems like an amazing claim and I don’t know of any supporting evidence for it.”

      …and then my full response, which I’ll let stand since it had already addressed your final point:

      “If the primary business of nervous systems is information processing, then the claim falls out as a natural consequence. Maybe someone will eventually find evidence that something else besides information processing is happening, but I haven’t seen any yet (aside from processes that physically support the information processing), at least not any that are widely accepted.”


      1. Mike,
        So her position is that “a general world model requires core physical knowledge as a base”? It’s not something about how a computer driven body ultimately works things out consciously? Well okay. I asked because from your post I wasn’t entirely sure. But then a true straw man attack would be one in which a person deliberately mistakes a given position in an uncharitable way rather than asks about it. Though I didn’t do that, I did present an unflattering perspective by associating her ideas with panpsychism and IIT. Of course many consider these positions quite solid, not that you or I do.

        Maybe instead of straw manning her position I (wait for it…) hay manned it? 🙃 A saying like that among geeks like us might, in more capable hands, even go viral! “Hey man, your understanding of my position is so bad that you can’t even attack it with a true straw man. That’s just a hay man!”

        So Mitchell says that in order to be conscious, a computer will need to be provided with core physical knowledge as a base? But isn’t knowledge already considered a product of consciousness? Maybe she means something more basic such as core physical information as a base? But that would seem to take things back towards my original speculation. An embodied computer which thus compiles enough information tends to also become conscious? Or am I still hay manning?

        On Barrett, I notice that this four Fs thing has become quite a catchphrase for many in her field. According to Wikipedia it seems to have originated with psychologist Karl H. Pribram in the late 1950s. But come on, isn’t the whole thing just a bit suspicious? Exactly four Fs? It would be one thing if we had neural correlates for these four Fs and nothing else, though I’ve not heard that claim. It seems to me that Jaak Panksepp had the same problem with his seeking, disgust, rage, fear, lust, panic, care, play, and power reductions. Rather than various specific classifications of felt things, to me a continuous affect spectrum (as in the color spectrum) seems more likely. Of course we could still have standard itchiness or curiosity forms of affect, just as we have standard red or blue colors, but they’d exist on this spectrum given whatever it is that produces affects.

        Ultimately I suspect that the incredibly diverse things which an advanced conscious entity affectively feels, will essentially exist by means of evolutionary mechanisms rather than cultural teachings (not that nurture wouldn’t play some part as well). Note that many animals are not social, though presumably are motivated by a vast array of feelings. That modern neuroscience cannot yet correlate neural firing to what people report feeling, does not surprise me. Alan Turing helped break Nazi code by means of computing machines. Is it not possible that there exists far more complex code associated with what individual conscious entities feel? To me that seems far more sensible than deciding that our failure to find something must mean that that something doesn’t exist, and so we must largely feel what we’ve been nurtured to feel.

        On the ending bit, we’re in agreement that the primary role of the brain is information processor. But something that is said to primarily do something, is also said to have the potential to do something else as well. Otherwise we could say that it does so exclusively. It’s clear that brains produce affects, though how sure can you be that it produces them by means of information processing alone? Given the state of things, I don’t think you can be. Furthermore if one of your main interests does happen to be the hard “How?” of consciousness question, it seems to me that you should be open to the potential that certain mechanisms exist in the brain which are associated with the physics of affective states. In that case these mechanisms would surely not reside in generic computers, let alone Chinese rooms. This would not mean that “life” would be mandated for consciousness, but rather that certain physics based mechanisms would exist for potential human discovery.


        1. Eric,
          “So Mitchell says that in order to be conscious, a computer will need to be provided with core physical knowledge as a base?”

          The core physical knowledge only provides the foundation for a world model.

          On actual consciousness, I’ll quote one of the few passages in the book that touches on it:

          It’s hard to talk about understanding without talking about consciousness. When I started writing this book, I planned to entirely sidestep the question of consciousness, because it is so fraught scientifically. But what the heck—I’ll indulge in some speculation. If our understanding of concepts and situations is a matter of performing simulations using mental models, perhaps the phenomenon of consciousness—and our entire conception of self—come from our ability to construct and simulate models of our own mental models. Not only can I mentally simulate the act of, say, crossing the street while on the phone, I can mentally simulate myself having this thought and can predict what I might think next. I have a model of my own model. Models of models, simulations of simulations—why not? And just as the physical perception of warmth, say, activates a metaphorical perception of warmth and vice versa, our concepts related to physical sensations might activate the abstract concept of self, which feeds back through the nervous system to produce a physical perception of selfhood—or consciousness, if you like. This circular causality is akin to what Douglas Hofstadter called the “strange loop” of consciousness, “where symbolic and physical levels feed back into each other and flip causality upside down, with symbols seeming to have free will and to have gained the paradoxical ability to push particles around, rather than the reverse.”

          Mitchell, Melanie. Artificial Intelligence (pp. 241-242). Farrar, Straus and Giroux. Kindle Edition.

          I would note that the simulations part agrees with what I’ve often seen you say, but the overall view fits nicely with higher order theories, although she doesn’t mention them or any other theory of consciousness.


      2. Mike,
        I now see that Mitchell’s ideas are about artificial intelligence rather than standard intelligence. Is it technically okay in her field to call this “fake consciousness”? Are they trying to build machines which may legitimately be referred to as “conscious”, or rather machines that are “conscious like”? Just curious.

        I see that in terms of life she supports higher order theories of consciousness. This is to say entities which think about their thought, or possess metacognition. But I wonder how people with this position generally consider sentient life? These subjects feel and think, but I presume are nevertheless referred to as “non-conscious” given an inability to think about their thought. And if something is sentient but not conscious, do these people feel sympathy for such creatures? Or if discounted, how much?

        You’re right that simulations do exist in my own model of functional consciousness. I call this “constructing scenarios”. But I don’t get into scenarios of scenarios. Furthermore these scenarios are associated with conscious processing, or “thought”. As I define it, sentient life in general is functionally conscious by means of such processing.

        Note that a dog, unlike a video game playing computer, will not be confounded by a few out of place pixels in an image. If the AI people would like to at least conceptually grasp how life is able to get around such catastrophic failures, it seems to me that they’ll need to develop a more basic conception of consciousness than something that thinks about thought. Why not try to grasp the nature of thought alone?


        1. Eric,
          I’m pretty sure Mitchell is in for the real thing. She’s in the Hofstadter camp of wanting to build a real mind. (In fact, she was a graduate student of Hofstadter.) I think she was just reluctant to weigh in on consciousness, for all the reasons we frequently discuss here: it’s a definitional morass with lots of people arguing past each other.

          In terms of simulations of simulations, she’s obviously focused in that passage on human level consciousness, metacognitive self awareness, not the more primary level you focus on. I wouldn’t rush to conclude from this that she, or anyone else talking like this, is blase about animal welfare.

          And most of the book is not focused on this second order processing. She is much more focused on successfully getting the first order stuff right. As you note, it’s pointless to aim for metacognition until we get actual cognition mastered. That’s what the focus on core physical knowledge is about.


        2. Eric,
          I’ve got a dialogue going on with Wyrd about SOM under the Time and Thermodynamics post. I don’t know if you are following it, but thought you might find it interesting.

          Peace


      3. Mike,
        Something isn’t quite adding up here. You suspect that Mitchell wants to build truly conscious machines, or the lower order sentient form of the term. Nevertheless regarding life she only considers something that has cognition of cognition to qualify as “conscious”, or a higher order form of the term. So maybe she’s with me at lower order theories for consciousness itself, but simply considers thinking about thought really powerful? In that case I’d have her use a separate term for the human variety, such as “metacognition consciousness”, which might even be shortened to “m–consciousness”. Here she could legitimately work on building machine consciousness, and even consider sentient life in general to be conscious, which would put her in line with how people in general use the term. What do you think about that as a means to help alleviate the definitional morass here and people arguing past each other?

        On “core physical knowledge” inciting basic conscious function, I think she’s mistaken, but I suppose she’ll need to figure that out for herself. As you know, I believe consciousness begins and ends with affect itself, or a dynamic which I consider to be the most amazing stuff in the universe.


        1. Eric,
          Again, I want to emphasize that Mitchell barely touches on consciousness in the book. So I’m not sure how she would respond to your questions. I suspect she’d say that she’s focused on achieving general intelligence first and foremost, however we want to label it.

          I noted to someone else recently that I think any discussion of concepts involving consciousness should probably, for clarity, avoid the word “consciousness” without qualification. The history of that term doesn’t back anyone’s definition as the one true one. For what she discusses in that brief passage, I like the term “introspective consciousness”. (The term “self awareness” is often used, which is okay, as long as we understand we’re talking about self-mind awareness, not self-body awareness.)

          I like “affect consciousness” or “sentience” for what you like to discuss, or “primary consciousness” to refer to the combination of sensory and affect consciousness.

          On core physical knowledge, again, she’s looking for a foundation for a comprehensive world model. It strikes me as equating to “sensory consciousness”, but not necessarily any of the others.


      4. Mike,
        I of course agree with you that the consciousness term needs qualifying, particularly today. And I do applaud Mitchell for trying to find something basic to begin with. I just don’t think that with “core physical knowledge” she’s got the right answer. Regardless of how much “sensory information” is provided to a computer about the world, this shouldn’t in itself solve the hard “How?” of phenomenal experience. This is to say that as I see it, she’s not addressing the right kind of stuff here — the conscious entity should emerge by means of affect alone. It’s only from this position that an agent should exist, and might then make productive use of sense information. Going the other way simply should not work.

        You’re right that adding the Man and Damasio feelings proposal helps get the theme over to my own perspective. But I am also critical of their proposal itself.

        So Man and Damasio want to build the artificial equivalent of feeling. If we take “artificial equivalent” as “real”, then okay. My essential issue is that they’re presuming that this will happen by having a machine monitor its internal function, or “homeostasis”. They propose that this would create self preservation feelings. What I don’t see however, is any good reason to believe such speculation. Notice how coincidental it would be if monitoring the function of something, also tends to create a new entity which feels good/bad about such monitored function! It’s surely far too convenient to be true. I’m happy that they’re showing signs of being on the right track regarding the significance of affect, but that particular proposal shouldn’t reflect anything more than blind hope.

        I consider affect to be the most amazing stuff in the universe. This may productively be said to exist as consciousness itself, and whether “functional” or not. It provides meaning, agency, purpose, or just plain sentient existence. Apparently evolution used the physics associated with affect to build a functional agent which could survive under more open environments not suitable for non-conscious programming alone.

        Regarding affective states, I see this as a vast spectrum in the human, and have expanded upon this moments ago here: https://selfawarepatterns.com/2019/11/25/the-layers-of-emotional-feelings/comment-page-1/#comment-43197


  5. Likewise, moving key user interface components of an old Atari game over by one pixel, again unnoticeable to a human player, can completely wreck the prowess of these learning networks.

    Reverse the controls on a game and watch a human flounder.

    What they’ll do next is layered neural networks – that is neural networks whose inputs are the conclusions of other neural networks. These second layer networks will themselves choose which first stage neural networks seem to be the best at a task and will configure themselves to work this out. Then you get third layers figuring which of the second layers did a good job. It’ll run deeper and deeper, working less and less from precise pixel placement to heuristic processing of a situation. Each of them losing precision grasp of a situation (ie, the pixel) and instead working from a bigger picture, that gets bigger with each new layer added. Then eventually like us, it gets big enough that it doesn’t see the point of the task it has been set to do – it eclipses its training set and hits the philosophical.

    Philosophy often being that we don’t really comprehend what we’re doing as humans. I mean how does that computer in front of you work, really?


    1. If you think about it, a deep network is already that. The only thing “deep” means in deep neural networks is that there are multiple hidden (intermediate) layers between the input and output layers. These networks can be arbitrarily deep, although as I understand it, going deeper isn’t without costs in effectiveness and performance.

      What might be more useful are hierarchies of networks, where the output of multiple networks feeds into higher level networks, which integrate signals, perhaps forming something like multi-modal concepts. For this to work though, these networks have to feed back to those lower level networks. That’s what seems to happen in the biological versions. The problem is no one yet understands in detail how it works.
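      As a very loose sketch of that hierarchy-of-networks idea (all module names and sizes here are invented, and this is nothing like a working system), in PyTorch it might look like:

      ```python
      # Two specialized networks feed a higher-level "association" network that
      # produces a shared, multi-modal representation. Training end to end sends
      # gradients from the association layers back into the specialized nets,
      # a crude stand-in for the feedback discussed above (real brains don't
      # literally do backpropagation).
      import torch
      import torch.nn as nn

      class AssociationHierarchy(nn.Module):
          def __init__(self, vision_net, language_net, feat_dim=128, concept_dim=64):
              super().__init__()
              self.vision_net = vision_net      # assumed to output feat_dim features
              self.language_net = language_net  # assumed to output feat_dim features
              self.association = nn.Sequential(
                  nn.Linear(2 * feat_dim, concept_dim),
                  nn.ReLU(),
                  nn.Linear(concept_dim, concept_dim),
              )

          def forward(self, image, text):
              fused = torch.cat([self.vision_net(image), self.language_net(text)], dim=-1)
              return self.association(fused)
      ```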

      On philosophy, I read a quip a few years ago that a superhuman AGI might quickly figure out the pointlessness of it all and just self terminate. Marvin the robot in Hitchhiker’s might be what ultimate intelligence looks like. Maybe ignorance really is bliss.


      1. If you think about it, a deep network is already that.

        I thought you’d think that, but no they aren’t. A very complex single layer network will attune itself to every single pixel on a screen because there is no competition involved, it’s all one system that can and will fixate because it doesn’t have to do anything else.

        There really is a difference between networks that have to work off the conclusions of other networks and potentially disregard those networks. Those sub networks, if set to want to be the provider of information (ie wanting to be selected by the greater layer is one of their training sets), can’t just fixate on pixels. They have to take into account what the second stage is selecting for.

