Michael Graziano on building a brain

I’ve written a few times on the Attention Schema Theory of consciousness. It’s a theory I like because it’s scientific, eschewing any mystical steps, such as assuming that consciousness just magically arises at a certain level of complexity. It’s almost certainly not perfect, but I think it’s a major step in the right direction.

Michael Graziano, the author of the theory, has a new article up at Aeon describing, under his theory, the essential steps in giving a computer consciousness. Of course, the devil is in the details, as it always is. But it’s a fascinating new way to describe the theory. If you’ve read my previous posts on this and still didn’t feel clear about it, I recommend checking out his article.

Artificial intelligence is growing more intelligent every year, but we’ve never given our machines consciousness. People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own. But that hasn’t panned out (so far as anyone knows). Apparently, the vital spark has to be deliberately designed into the machine. And so the race is on to figure out what exactly consciousness is and how to build it.

…In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow.

Read the rest at Aeon.

27 thoughts on “Michael Graziano on building a brain”

  1. Great stuff. I heard this week that the theology/philosophy wing of Aalborg Uni (Denmark) is getting quite involved in thinking about the god, God, and AI. In August they’re holding a joint conference with professors from Århus Uni to discuss just this.

    1. Thanks. Hmmm. Is there something unique about that conference? Just curious. I hadn’t heard of Aalborg before and I’m not familiar with whether they’re a religious or secular institution.

      1. From what I understand, religious. Don’t think it’s a “big” conference, just the two institutions, but Prayson Daniels is going to give an address on the god, God, and AI.

  2. I read the article with interest, Mike, for which many thanks. Nonetheless, what Graziano describes as ‘consciousness’ obviously still “arises at a certain level of complexity”, and I’m uncertain why some, yourself included, preface such statements with the word ‘magically’ when describing other theories – such as Tononi’s, for example. Take human sentience: why is a sensory-system-created, time-shifted, selectively rendered superimposition of sense representations ‘magical’? Something like that is just the functional process that Graziano’s model presents, is it not? And come to think of it, does gravity ‘magically’ arise at a certain level of material mass? Can it be ruled out that some kind of awareness/subjectivity obtains as a fundamental property of the universe, like gravity? I don’t mean to be contrarian, I’m nowhere near clever enough; it’s simply that there’s something I’m obviously not getting at the moment. Thanks!

    1. Hariod, I think what’s missing from Tononi’s theory is an explanation beyond complexity and integratedness. (I’ll admit that I may not be fully aware of Tononi’s ideas. I’ve read an article or two by him, but not his book.) If it’s complexity and integratedness alone, then we’d have to explain why other complex integrated systems, such as the internet or the phone book, aren’t conscious. (Some people will claim that they are, but obviously we have a testability issue since, if they are conscious, no one knows how to interact with those consciousnesses.)

      What Graziano’s Attention Schema theory brings to the table is a proposed architecture for that complexity and integratedness. Graziano’s point is that complexity and integratedness are required, but not sufficient. If it’s correct, the attention schema architecture is the difference between a complex system and a conscious system. It’s a data processing architectural explanation of inner experience.

      Of course, as I said in the post, the devil is in the details. As a systems designer, I can see a lot of unspecified details. (For example, exactly how would the attention mechanism work?) But it seems, at least to me, a lot better than simply measuring the degree of complexity and integratedness and wondering if consciousness has arrived yet.

        1. Thank you, Mike; I appreciate your time and patience. I must confess to still being a little mystified as to why this theory might be seen as novel or unique. Graziano describes consciousness as “a crucial set of information”; it is the necessary abstraction from a largely unnecessary (from a survival point of view) flux of sense impressions. To quote him: “somehow the brain experiences its own data”. That is just what we already know consciousness to be, functionally and from a computational perspective, and to repeat what I wrote above, it is a meta-level superimposition selectively rendered by the sensory system, from the sensory system. It is, in other words, and to quote Graziano, “a mental possession of something . . . that empowers you to react”. We know this already, and to me this meta-level representation seems functionally indistinct from his attention schema, neither of which descriptions necessarily posits any magical sauce, even though we do not understand how they are implemented. And both are the accessing of internal models which remain captive to themselves – consciousness is stuck in the gearbox of its own comprehension. From the perspective of wanting to build a conscious brain, we would already have known that it would require the ability to abstract a meta-level representation, or what Graziano calls an ‘attention schema’, and so I ask myself: what’s new?

          1. Hariod, based on my own reading in the literature on consciousness (which includes Dennett, Blackmore, Gazzaniga, Graziano, and articles by numerous scientists and philosophers including Tononi and Chalmers), I’m not aware that the attention schema concept existed before, at least not in the detail that Graziano describes it, although Gazzaniga’s interpreter concept approaches it in some ways. But I make no claim to having read every theory on consciousness, so it’s quite possible I’ve missed something. (If so, I’d be grateful if you could cite some prior art.)

          But Graziano’s theory is the first I’ve personally read that covers the crucial distinction: what makes an information processing system conscious, without resorting to mystical language, throwing up its hands and saying such a point is simply undefinable or emergent, or just calling the whole thing an illusion.

          Just to recap the theory’s main points (a toy code sketch follows the list):
          1. You have attention, which arises from competing sensory and mental processing. Every magician knows that your attention can be manipulated without you being aware of it.
          2. You have awareness, which is a feedback mechanism that collects data from across the brain on its attentional state and presents an executive summary of it to the rest of the brain, essentially allowing the brain to experience its own attentional state (inner experience).
          3. Graziano sees 2 as providing control. I don’t know if “control” is the right word. I prefer the phrase “causal influence”, which in my mind is much like a news media’s influence on what happens in society. But that causal influence provides a measure of unified purpose, which would explain its evolutionary “purpose”.
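
          Purely as an illustration, here is a minimal toy sketch in Python of how those three pieces might fit together. To be clear, every name in it (Signal, compete, attention_schema) is hypothetical scaffolding of my own, a cartoon of the idea under the simplest possible assumptions, not anything from Graziano’s papers.

          # Toy illustration only: all names are invented, not from Graziano's work.
          from dataclasses import dataclass

          @dataclass
          class Signal:
              source: str      # e.g. "vision", "hearing", "memory"
              content: str
              salience: float  # strength in the competition for attention

          def compete(signals):
              # Point 1: attention emerges from competition among signals;
              # the most salient one wins, whether or not the system "notices".
              return max(signals, key=lambda s: s.salience)

          def attention_schema(winner, signals):
              # Point 2: a simplified, schematic summary of the system's own
              # attentional state -- the "executive summary" made available
              # to the rest of the system.
              return {
                  "attending_to": winner.content,
                  "via": winner.source,
                  "also_competing": [s.source for s in signals if s is not winner],
              }

          signals = [
              Signal("vision", "red ball", 0.9),
              Signal("hearing", "distant hum", 0.4),
              Signal("memory", "lunch plans", 0.6),
          ]
          schema = attention_schema(compete(signals), signals)

          # Point 3: downstream processes consult the schema rather than the
          # raw signals -- causal influence without absolute control.
          print(f"I am attending to the {schema['attending_to']} (via {schema['via']}).")

          Of course, everything interesting is hidden in those stubs: in a real brain the “salience” is itself the messy emergent outcome of the competition, not a number handed in from outside.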

    2. Interesting. Thanks Hariod. It does sound like Torey’s on the same broad track, but based on the review I read (http://journals.uvic.ca/index.php/pir/article/viewFile/3286/1634) and the one article of his I could find and scan (http://www.imprint.co.uk/pdf/Torey.pdf), it doesn’t sound like the same developed concept. (I’m also a little concerned about the criticisms at the end of the review concerning his dated understanding of neuroscience.)

      His idea seems to be that the language mechanism is a crucial component. I’ve read this idea in other venues. It makes me wonder whether he thinks animals are conscious, particularly intelligent ones who don’t have language. I imagine that he addresses this in his book, but at the price they want for it, I’d need to read some more stuff from him before springing for it.

      I see that he’s deceased. A shame. It would have been interesting to see his take on Graziano’s theory.

      1. Well, he makes a distinction between animal awareness (not consciousness), which he takes as a given in all creatures, and human consciousness, which differs in being self-reflective rather than a continuous impulsion to produce motor responses to stimuli; it has a kind of off-line functionality. Graziano doesn’t presuppose something called ‘animal awareness’, it appears, and so treats the thing more globally. It was a long time ago that I read Torey’s book – he sent me his personal copy! – but I was puzzled by the insistence that language was fundamental to human consciousness. Thinking and symbolic representation come in forms other than language, of course. He had an interesting life, and I found his book quite exhilarating to read, it being a broad multi-disciplinary take on the subject. I wonder how much of his theorising came out of the fact that he was blinded in an industrial accident; it could well have been a factor.

  3. This is so cool. I’m still processing it, but it’s the most reasonable explanation of consciousness that I’ve seen yet. My mind is rattling with implications.
    I really like your idea of the awareness having a causal influence, but not absolute control. It fits really well with my observations of self-control. Part of my background is in special ed, working with a lot of kids who have behavioral problems. I keep seeing this problem where teachers expect that just because a kid understands what is right, they can be expected to do it. Sometimes this is true, but there are underlying factors that can mess with that.
    A kid who is hungry, stressed and exhausted is more likely to lose their temper, regardless of whether or not they know that losing their temper is inappropriate. If a kid is both from a low-income broken family and has some issue like ADHD, they can get hungry and exhausted much faster than another kid, because they are less likely to get an adequate meal, there is probably something stressful going on at home and it takes so much more energy to just sit still and focus. If you pair that kid with a teacher who only hammers home how bad it was that they lost their temper, their awareness is being drawn to the problem, but that’s no guarantee the problem will be solved.
    My job is often about drawing kids’ awareness to things they can proactively do to take care of themselves. For example, to deal with the exhaustion problem, some kids get three “I need a break” cards every day. Your newspaper/city analogy made me think “those teachers who go on about how bad the temper tantrum was are like those magazines that bemoan obesity and anorexia and make people feel awful about their bodies, and tricks like the break cards and free or reduced lunches are like the magazines giving actual diet and exercise advice.”
    Okay, enough rambling. Thanks again for sharing, because this is really cool.

  4. There seems to be an implication in MG’s article that some sort of Turing test would be sufficient for determining whether we’ve achieved consciousness, which I think is contestable. Maybe if he called it “conscious behavior” that would be more accurate. But I’m probably just being pedantic, and maybe all this is understood or implied in that community.

    However, his attention schema theory does seem to me to be a move in the right direction. I don’t think we can just assume that consciousness arises out of a complex system…he seems to be trying to understand more about that complex system in terms of the way it organizes itself, which involves more understanding about the way we actually experience things (something I’m very much interested in, as you know). As you say, the devil is in the details, but it’s fascinating stuff!

    1. I agree that passing the Turing test isn’t necessarily a guarantee of consciousness, particularly not the time-limited one some teams have been trying to game. Passing an extended Turing test, however, where the human can take as much time as they need to make a determination, may be a different matter. At some point, the architecture to fake hominid consciousness might have to have so much of the genuine components embedded in it (planned or unplanned) that it’s just a more complicated version of the genuine architecture. That said, I’m skeptical we’ll ever stumble on the genuine architecture by accident.

      Very much agreed. Fascinating stuff. My questions as a programmer would be on exactly how to recreate the attention system (a messy emergent process), and how exactly the schema is pulled from the various subsystems.

      Personally, I tend to doubt that the first human equivalent intelligence will have that architecture. We’ll probably find better alternate ways to accomplish the same functionality. But, while those machines may be as intelligent as humans at work tasks, they won’t be able to pass as human.

      1. “At some point, the architecture to fake hominid consciousness might have to have so much of the genuine components embedded in it (planned or unplanned) that it’s just a more complicated version of that genuine architecture.”

        That’s an interesting point. Suppose we clone a human, we’re not likely to call that “AI”. The makeup of the AI matters a lot in how we view/define it. And I agree, I think if we were to somehow create consciousness, there might be a lot of “genuine” components…the funny thing is that those components might be the very thing that defines consciousness for us.

        I also agree with you on the first human-equivalent intelligences and the way they will be viewed. I think usefulness to us will be the driving force, although there’s always someone out there creating things out of wonder. Still, wonder without usefulness tends not to get funding.

        1. A while back, I started to think that “artificial” intelligence is a misnomer. It implies that AI wouldn’t be “real” intelligence. If we start to imagine designed life, the lines could become very blurred. It’s one reason that I prefer the term “engineered intelligence”, as opposed to “evolved intelligence.” Of course, that would leave both types with the same initials (EI), which obviously can’t be allowed 😉

          “Still, wonder without usefulness tends not to get funding.”
          Too true. All we have to do is look at the state of manned space exploration. All the chest thumping about the spirit of exploration is unlikely to mean much until we find a space age equivalent of the Age of Exploration spice trade.

  5. “Building a functioning attention schema is within the range of current technology.”

    I agree with what you said to Tina: I want to see a plan for what he has in mind. What exactly does a “functioning attention schema” look like? I want more than a diagram of boxes and arrows. I want a (relatively detailed) system description!

    To me his hypothesis seems to assume the thing it seeks to explain. Where does “attention” come from if not from consciousness? What exactly does “attention” mean in the context of the human mind? Saying I ‘focus my attention on one model over another’ is a very high-level description that leaves a lot to explain!

    And it strikes me that a machine implementation of attention and the experience of attention might be very different things. Chalmers’ philo-zombies again. The confounding thing about consciousness is that reporting it and experiencing it are such vastly different things.

    One specific comment: We’re several orders of magnitude away from building any system as complex as the human connectome. Nor do we know what role processing speed or cycling might play. That none of our machines so far has ‘woken up’ is hardly surprising.

    1. I should clarify that while I definitely think the details are important, I don’t see Graziano’s failure to provide them today as a fatal flaw in his theory. Darwin couldn’t explain how inheritance or mutations worked, but it didn’t stop him from describing natural selection. Newton fundamentally didn’t know what gravity was, but he was still able to describe its effects.

      That said, without those details, we’re not about to build a conscious machine. I tend to think we’ll have intelligent machines for a while before we’ll have conscious ones, and I wouldn’t be surprised if consciousness per se isn’t all that useful in practical AI. I suspect it arose in animals to compensate for the messy way nervous systems evolved; it’s not clear to me that compensation will necessarily be needed for engineered intelligences.

      1. “I wouldn’t be surprised if consciousness per se isn’t all that useful in practical AI.”

        I quite agree. The last thing I want is my microwave getting testy just because I didn’t notice its new diodes. I’m not sure I’d want even my computer system as a “partner” — it’s a tool, not something I want to grant parity to.

        I’ve always seen the search for machine consciousness more as pure research. An attempt to solve the puzzle of consciousness and understand ours.

        In fact, there seem to be some conundrums regarding creating another species of intelligent beings – beings potentially more capable than us in many areas. It’s not something you do because you want a better iPhone.

        From what I read in that article, Graziano’s hypothesis has a way to go to rise to the level of Newton or Darwin – both of whom quantified their observations, detected clear patterns, and formed specific ideas about those patterns.

        The way I see it, Graziano has an idea of where to look – a path to explore. If he’s right that this hasn’t been studied, then it’s certainly worth pursuing. Given our current state of knowledge, any coherent idea is worth looking into!

        1. We’ve discussed this before, but I’m not convinced awareness (or even self-awareness) necessarily implies a desire for self-actualization, for self-concern. (To what degree those components are necessary for what we intuitively see as “consciousness” may be another matter.) But I totally agree with your sentiment.

          As I understand it, Newton did work out the mathematics of gravitation, and that’s why he gets credit over the others in the Royal Society who had come up with the idea earlier. I haven’t read Origin of Species, but from my perusing of it, and based on what others have said, it’s not a math-laden work. It is, however, a work that discusses a lot of evidence for its ideas.

          I wouldn’t be surprised if Graziano’s scientific papers have their share of mathematics in them. I don’t think his theory is at the point where Darwin was in 1859, at the culmination of decades of evidence gathering, but he might be where Wallace was at that point, or where Darwin was in the 1830s. In his book, he cites a large number of neuroscience studies to make his case. But he’s also clear that a lot of empirical work remains to be done, and that his theory will almost certainly need to be refined in light of that work, which strikes me as normal science.

          Of course, he could still end up being wrong, but his theory feels like progress. I like that he’s found a plausible scientific model to explain inner experience / qualia.
