Brain-inspired hardware

The Scientist has an interesting article up reporting on the progress that’s being made in neuromorphic hardware.

But the fact that computers “think” very differently than our brains do actually gives them an advantage when it comes to tasks like number crunching, while making them decidedly primitive in other areas, such as understanding human speech or learning from experience. If scientists want to simulate a brain that can match human intelligence, let alone eclipse it, they may have to start with better building blocks—computer chips inspired by our brains.

So-called neuromorphic chips replicate the architecture of the brain—that is, they talk to each other using “neuronal spikes” akin to a neuron’s action potential. This spiking behavior allows the chips to consume very little power and remain power-efficient even when tiled together into very large-scale systems.

Traditionally, artificial neural networks have been implemented with software.  While this gets at algorithms that may resemble the ones in biological nervous systems, it does so without the advantages of the physical implementation of those systems.  Essentially it’s emulating that hardware (wetware?), which in computing has always come with a performance hit.  The magnitude of the hit usually corresponds to just how different the two architectures are, and modern chips and nervous systems are very different.

There’s a lot of mystique associated with neural networks.  But it’s worth remembering that a neural network is basically a crowdsourcing strategy.  Instead of having one sophisticated and high-performing processor, or a few of them, like the ones in modern commercial computers, the strategy involves having large numbers, millions or billions, of relatively simple processors: the neurons.

Each neuron sums up its inputs, both positive and negative (excitatory and inhibitory), and fires when a threshold is reached, providing inputs to its downstream neurons.  Synapses, the connections between neurons, strengthen or weaken depending on usage, changing the overall flow of information.
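
For readers who like to see the idea in code, here’s a toy sketch of that sum-and-threshold behavior in Python (a simplified leaky integrate-and-fire unit; an illustration only, not any particular chip’s or brain’s design):

```python
# Toy leaky integrate-and-fire neuron: it sums weighted inputs and
# emits a spike when its accumulated potential crosses a threshold.
class ToyNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # accumulated "membrane potential"
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step

    def step(self, inputs, weights):
        # Excitatory inputs carry positive weights, inhibitory ones negative.
        self.potential = self.potential * self.leak + sum(
            w * x for w, x in zip(weights, inputs))
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # spike
        return 0                  # stay silent

# Two excitatory spikes and one inhibitory spike arriving together:
neuron = ToyNeuron()
print(neuron.step(inputs=[1, 1, 1], weights=[0.7, 0.6, -0.2]))  # -> 1 (fires)
```

Scale that up to millions or billions of interconnected units and you have the crowdsourcing strategy described above; the interesting behavior comes from the connections rather than from any single unit.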

Of course, biological neurons are cells, which come with all the complexity associated with cellular processes.  But we shouldn’t be surprised that evolution solved its computing and communication needs with cells, since in complex life it solves everything that way.

Neuromorphic computing is moving the actual hardware closer to the structure used in nervous systems.  I’d always known about the performance advantages that might bring, but apparently a lot of the power efficiency of the brain (which operates on about 20 watts) comes down to its analog features, and neuromorphic computing, by adopting hybrid analog-digital structures, appears to be reaping many of those benefits.

The article also discusses various attempts that are underway to run simulations of the brain, although at present they’re simulating simplified versions of it.  But combined with computational neuroscience, this approach may yield theoretical insights into actual biological brains.

I’ve written before about Moore’s Law petering out, and that innovative architectural changes will be required for computing to keep progressing.  I find it heartening that this kind of research is happening.  Too much of the industry seems caught up in the quantum computing hype, but this line of inquiry may yield results much sooner.

33 thoughts on “Brain-inspired hardware”

  1. Hey! My boy Chris Eliasmith is at the top of the list of consultants in that article. Just so ya know, the first part of SPAUN (his fairly large-scale brain model) stands for Semantic Pointer Architecture. Consciousness is all about the Semantic Pointers, baby.

    It’s good to see such broad work on the neuromorphic models. Things are going to get interesting soon.

    1. I thought of you when I saw Eliasmith and his model discussed. I meant to mention it in the post, but utterly forgot. (That’s what I get for waiting a day or two before posting.)

  2. “So-called neuromorphic chips replicate the architecture of the brain—that is, they talk to each other using “neuronal spikes” akin to a neuron’s action potential.”

    Great timing here! For one thing, the guy over on Headbirths is building, and posting about, such a network in order to explore its properties. He’s got some posts worth reading for those into the spiky deets.

    “While this gets at algorithms that may resemble the ones in biological nervous systems,…”

    Right! FWIW, how I put it is that simulations use algorithms that model real world processes. (Those real world processes don’t have, or act according to, algorithms, per se. They follow physical laws involving energy reduction.)

    The other part of the great timing is I just finished the third (and final) post on my other blog in a series about modeling a full adder.

    It’s about coding the abstract mathematical model versus coding the physical simulation. All the algorithms that code the model are very simple. (Five different ones are covered in the first post.) The algorithms for coding the simulation, though, are much bigger. (Each of the two I wrote took a full post to talk about.)

    (The post also touches on why a full-adder is technically not a “computation” but I plan to explore that more in my main blog. The Coder posts are more about the code.)
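
    For a sense of scale, the abstract model really is tiny. Here’s a rough Python sketch of a full adder’s math (my illustration, not the actual code from those posts):

    ```python
    # Abstract full-adder model: pure Boolean math, no physics simulated.
    def full_adder(a, b, carry_in):
        total = a + b + carry_in
        return total % 2, total // 2   # (sum bit, carry-out bit)

    # Truth-table check of all eight input combinations:
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                print(a, b, c, "->", full_adder(a, b, c))
    ```

    A physical simulation, by contrast, has to track gates, wiring, and signal propagation, which is where the much bigger algorithms come from.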

    “There’s a lot of mystique associated with neural networks.”

    Rightfully so, I think. That mystique comes from an aspect you didn’t get to in your post — the interconnects of the network. It’s that mesh of connections that gives NNs their power. (And it’s that mesh that makes them hard to understand — another part of their mystique!)

    “Of course, biological neurons are cells, which come with all the complexity associated with cellular processes.”

    It’ll be interesting if that turns out to be important. Maybe a sloppy noisy process is important?

    I think it would be pretty funny if our consciousness does turn out to operate in some sort of sweet spot and all the AGIs we make will also have to be forgetful and capable of mistakes in order for them to be conscious at all! Or maybe they’ll have to be whimsical and temperamental, just like in many stories. 🙂

    (In David Brin’s Existence humans finally achieve AGIs, but it turns out they have to learn the real world just like children do, so AGIs go through a (trying, very trying) adolescent period. A character in the book is relieved the AGI she’s mentoring is finally maturing.)

    “Neuromorphic computing is moving the actual hardware closer to the structure used in nervous systems.”

    Which, as you know, is the one thing I think “just might work!”

    “I’d always known about the performance advantages that might bring,”

    Which brings up another aspect of computationalism. What if timing is important? What if consciousness has to operate at a certain speed? That might be another problem for simulations. (It’s issues like these that make me so skeptical of computationalism in the first place.)

    “I’ve written before about Moore’s Law petering out,”

    I’ve been wondering if we’ll eventually need to find a way to “grow” chips so we can build them at the same cellular level biology uses. Maybe that level of granularity is necessary and might be beyond the abilities of the usual manufacturing processes.

    1. “FWIW, how I put it is that simulations use algorithms that model real world processes.”

      Which is what I would think you’d say. I’d only note that the implementation of every algorithm must be done with a real world process. If the same algorithm is implemented with another real world process, we could say that the second implementation is modeling the first, or vice versa.

      “The algorithms for coding the simulation, though, are much bigger.”

      Which is why hardware emulation usually leads to terrible performance, unless you’re emulating very slow hardware.

      “The post also touches on why a full-adder is technically not a “computation” but I plan to explore that more in my main blog.”

      I remember your and DM’s debate on this. I’ll wait to see your line of reasoning, but my initial thought is it’s all in the definitions.

      “It’s that mesh of connections that gives NNs their power.”

      Definitely the connections are important, and you get those in software ANNs. But you don’t get the benefits of massive parallelism. (Well, unless your ANN is implemented on a massively parallel supercomputer, then you might get some of them.)

      “I think it would be pretty funny if our consciousness does turn out to operate in some sort of sweet spot and all the AGIs we make will also have to be forgetful and capable of mistakes in order for them to be conscious at all!”

      I think that’s plausible. Forgetting is a crucial function in brains. That and only storing what is unique about a particular concept while relying on existing concepts for the familiar aspects of the new one. Given how complex that is, is it any mystery why we forget and make mistakes?

      Of course, an AI could be a hybrid system. But then humans will probably eventually have direct brain interfaces.

      “In David Brin’s Existence humans finally achieve AGIs, but it turns out they have to learn the real world just like children do, so AGIs go through a (trying, very trying) adolescent period.”

      It seems like once you’d trained an AI for a particular area, it could just be copied. AIs for new areas might be trained starting with whichever one was closest in expertise.

      Hmmm, I wonder if these neuromorphic systems have a mechanism for saving the state of the system. It seems like they would have to, although implementing it might be tricky.

      “Which, as you know, is the one thing I think “just might work!””

      As you know (sometimes we sound like dialog in old pulp SF), my view is that traditional hardware could work in principle, but probably will never be practical. The article starts out describing a simulation attempted using traditional hardware (albeit massively scaled) that ran 1500 times slower than an actual nervous system. But neuromorphic computing might make it practical, or at least get us closer to practicality.

      ” What if consciousness has to operate at a certain speed?”

      I think it definitely does, relative to its environment. An animal that takes too long to figure out whether it’s looking at a predator or food won’t survive very long. Of course, if it’s in a virtual world poking along as slow as it is, it might be fine.

      “I’ve been wondering if we’ll eventually need to find a way to “grow” chips so we can build them at the same cellular level biology uses.”

      I don’t know about “need”, but there would be a lot of advantages in a machine that could grow itself from a zygote, seed, or spore. Obviously it would have to be in a substrate that would provide the raw materials it needed. But biology has been doing this for a long time. The drawback is that biology usually takes months or years to do it.

      1. “I’d only note that the implementation of every algorithm must be done with a real world process.”

        I don’t see this as a great argument. I read it as: There is some algorithm, A, and a reification of A, implementation(A), that delivers a result R, that matches some physical world process, P.

        Even if I agree R seems indistinguishable from P, that is — at best — only suggestive about the inner workings of P. It need say nothing at all.

        The thing about the full-adder is that any physical representation is a reification of an abstract mathematical object. Full-adders are made up abstractions; we don’t find them in nature.

        “[M]y initial thought is it’s all in the definitions.”

        Heh, yeah, it’s ultimately a War of Axioms. 🙂

        “But you don’t get the benefits of massive parallelism.”

        Right! Which is why the idea interests me. Asimov’s Positronic Brain!

        The hardware version would have all that massive interconnection brings to the table plus native speed. (Plus, in my book of things that might matter, actual physical reality.)

        “It seems like once you’d trained an AI for a particular area, it could just be copied.”

        It does, doesn’t it? It’s likely many petabytes of data, for one thing. (But that seems just a technical limit.)

        How about this: Positronic brains have to be grown (see below), and like snowflakes, no two are alike. The connected network is, as in humans, always slightly different. And there’s no way to manufacture a specific network — they have to be grown through semi-random nano-assembly.

        So the LTPs of one trained Positronic Brain won’t transfer to another.

        It’s a good storytelling reason, anyway. 😀

        “[M]y view is that traditional hardware could work in principle…”

        It might! I have no strong opinion on that. I wonder if size matters and if the local EMF field the brain generates matters, but who knows.

        “An animal that takes too long…”

        Absolutely! I was coming at it from the other side: too fast might not work. Maybe there’s a reason “thoughtfulness” seems long and deep. (Deep Thought took millions of years, as I recall.)

        “The drawback is that biology usually takes months or years to do it.”

        Indeed. I didn’t mean it in the H.R. Giger sense so much as nano-assembly or crystal-level processes (imagine if we could control crystal growth). Those wouldn’t take so long.

        1. “Even if I agree R seems indistinguishable from P, that is — at best — only suggestive about the inner workings of P.”

          I agree, but the question is how much alternate P matters. This is where we get to philosophical zombies, particularly the behavioral version, which behaves just like a conscious being but whose internals are different. Of course, we have no way to verify the consciousness or zombieness of any particular system, no way to demonstrate that an alternate P that leads to the same R is missing anything vital.

          “How about this: Positronic brains have to be grown (see below), and like snowflakes, no two are alike.”

          I agree on it being a good story mechanism. It allows you to keep AI characters that can be put in jeopardy, that can’t just be restored from backup when things go south.

          “Those wouldn’t take so long.”

          I’m not familiar enough with crystals to comment. In his books, Neal Asher has all of his AIs implemented in crystals, and I wonder what he’s read that leads him to that. Do crystals have enough plasticity to function as a complex information processing system?

          1. “[T]he question is how much alternate P matters.”

            Yes. I find myself leaning towards thinking alternate P doesn’t say much. The big problem with any kind of p- or b-zombie is that they are deliberate constructs — intellectually designed per specific rules.

            I haven’t decided I should take that seriously. (It’s like taking Quidditch seriously.)

            There is also that such zombies lie if queried about their inner life. That might be fodder for illusionists, but it seems it might invalidate the premise. I’m still chewing on it.

            (As you know, I’m not as impressed by systems that act conscious as I am by systems that are, so for me a program towards understanding what consciousness actually is is central. But it is a good question whether a system could act conscious without actually being conscious.)

            “Do crystals have enough plasticity to function as a complex information processing system?”

            Oh, hugely! Atomic-level circuitry.

            It just takes figuring out how to control crystal growth. How to ensure the metallic impurities you add to the growth medium go in the right place. In a story, I’d use nanites. Or maybe something similar to 3D printing.

            The nice thing about crystals is they give you a natural 3D lattice of a potential insulator. Find a way to thread that lattice with metallic impurities for wires, and possibly use the doped crystal matrix itself as transistors, and there ya go.

            So now there’s this CGI animation for an SF movie in my head… zooming in on zillions of nanites busily directing molecular impurities as the crystal grows layer by layer beneath them…

          2. “That might be fodder for illusionists, but it seems it might invalidate the premise.”

            That’s the question to ask. Suppose we have a zombie, a computer that we have successfully programmed to act conscious in a sustained manner. If asked if it is conscious, it will say “Yes”. If asked to describe its inner experience, it will do so. But to be able to describe its inner experience, somewhere inside it must have a model of inner experience (or at least pseudo-inner experience) from which to draw its description.

            Unless we’ve added a mechanism that reminds it that it’s not really conscious, the zombie’s model of itself will be that it’s a conscious being. Its own version of zombie-pseudo-introspection will tell it it has phenomenal experiences, even if it doesn’t. It will have an illusion of phenomenal consciousness. But if phenomenal consciousness is an illusion, then the illusion is the experience. In other words, anything that thinks it is conscious, arguably is conscious.

            In my view, either we are all zombies that compute that we are something more, or there are no zombies.

            Thanks for the info on crystals. Sounds like I might need to do some reading.

          3. “But to be able to describe its inner experience, somewhere inside it must have a model of inner experience”

            Not necessarily. As Paul points out below, it could be essentially a Chinese Room, and all responses coming from a lookup table indexed by real-world inputs.

            Which may raise the question of exactly what we mean by its model. There’s no obvious one in the lookup mechanism, but there is in the zombie creator’s mind. The zombie is behaving according to the maker’s model. (A point I think you’ve raised regarding thermostats?)

            Crucially a zombie behaves according to someone’s model, which is what makes me think the whole thing is a game of Quidditch.

            “It will have an illusion of phenomenal consciousness.”

            That’s probably more true for a p-zombie than a b-zombie? The latter only needs to act as if it had phenomenal experience; there’s no requirement that its internals operate in any specific fashion. The p-zombie needs our internals but without (actual) phenomenal experience.

            Come to think of it, maybe neither would have the illusion? On the account that the “illusion” is the experience, zombies should experience nothing?

            “In other words, anything that thinks it is conscious, arguably is conscious.”

            If it merely says so,… maybe. 🙂 If it actually phenomenally thinks so, definitely! (And, yes, I realize we don’t have any foreseeable access to that.)

            “In my view, either we are all zombies that compute that we are something more, or there are no zombies.”

            I’m leaning towards the latter more and more. Zombies=Quidditch.

          4. “There’s no obvious one in the lookup mechanism”

            I think if the Chinese Room is able to answer questions about the house it grew up in in China, answering questions about particular details that come up during the discussion, then within its lookup tables, it has a model of that house, and of growing up in China. It might be a hopelessly inefficient, scattered, and brute-force model, but it would have to be there. Likewise, if it could describe its inner experience at length, again successfully answering questions about it, the same would apply. (The models might be utterly fake, or derived from an actual Chinese native’s memories, but once they’re there, they’re there.)

            Of course, one of my beefs with the Chinese Room and the lookup table is that they’re a hopelessly ridiculous example. Like many philosophical thought experiments, it’s a Rorschach test for people to draw out their preferred conclusions. Any real interaction would require years. The only way to reduce the time is to automate the man’s part, which to me obviates the point Searle was trying to make.

            “Crucially a zombie behaves according to someone’s model, which is what makes me think the whole thing is a game of Quidditch.”

            I think a zombie (philosophical or behavioral) is too sophisticated for us to simply say it’s using external models. You’re right, I do think that description pertains to a straightforward thermostat. But the degrees of freedom a zombie would have to operate in require, I think, that it have its own models. That might not be true for a zombie that only works momentarily, but for one to work for extended periods of time, I can’t see how it can function without the models in one form or another.

            “Come to think of it, maybe neither would have the illusion?”

            Unless the “illusion” is a crucial aspect of it working, of its causal structure, which I think is the case. (And one of the reasons I’ve always been uneasy calling it an “illusion”, although I have to admit the ‘i’ word does get the point across rather well.)

            “If it merely says so,… maybe. :)”

            Well, it does seem rather trivial to write a program that can output a statement that it’s conscious. I think it actually needs to have internal data about it being conscious that leads to its statements. Although if you explore this long enough, I don’t know that you’ll ever find a clean boundary. Which is the problem. Much of this stuff lacks clear objective boundaries.

            “I’m leaning towards the latter more and more. Zombies=Quidditch.”

            Sounds like we’re mostly on the same page on this point.

          5. “…within its lookup tables, it has a model of that house, and of growing up in China.”

            Okay, I’ll go along with that. It would have no self-awareness of that model — it couldn’t distinguish it from any other data in its lookup table, but the data certainly is there.

            That’s an interesting question. Does having the data, but no awareness of that data as distinct from any other data you have, mean you know the data?

            There’s a bit of analogy to Searle’s computing Wall in picking out the Wordstar computation from all others the Wall is doing. If nothing, other than our external actions, can discover the computation (or model), is the computation (or model) really there?

            “The only way to reduce the time is to automate the man’s part, which to me obviates the point Searle was trying to make.”

            That nothing in the room is “experiencing” the conversation? I suppose removing a human does make that less of a point. My issue has always been the library and where it comes from.

            As with zombies, I think the ontological origins matter. Otherwise we’re just playing Quidditch. (Yes, that is my new metaphor. Like it? 🙂 )

            “But the degrees of freedom a zombie would have to operate in require, I think, that it have its own models.”

            Good point. That alone might invalidate the idea of zombies. (Or the Chinese Room.) The ontological impossibility.

            I completely agree a conscious-seeming system, over the long haul, would have to be operating with some level of true awareness.

            I think we just killed all the zombies.

          6. “It would have no self-awareness of that model”

            That depends on what we mean by “self” and “awareness”. The self awareness mechanism (or pseudo-self-awareness), just like the model, would exist spread out over the lookup table in the same obscene fashion as the model.

            “If nothing, other than our external actions, can discover the computation (or model), is the computation (or model) really there?”

            If we can hook up an I/O system that allows it to produce the relevant output, then I’d say it’s there. Of course, at some point the I/O system is doing so much work that it starts to look like it’s the one doing the computation and blaming it on the wall. The problem, as DM and I established after an epic conversation, is that there’s no objective boundary where we cross from a “reasonable” I/O or mapping system to an unreasonable one.

            “Otherwise we’re just playing Quidditch. (Yes, that is my new metaphor. Like it? 🙂 )”

            I do, but it makes me wonder if consciousness overall isn’t Quidditch >:D

            “I think we just killed all the zombies.”

            Of course, being zombies, they just won’t stay down.

          7. “…obscene fashion as the model.”

            I do like obscene fashion models! 😀

            “The problem […] is that there’s no objective boundary where we cross from a “reasonable” I/O or mapping system to an unreasonable one.”

            Heh. I just offered some thoughts on that in what will be tomorrow’s post. The two keywords for me are “complexity” and “entropy.”

            “[B]ut makes me wonder if consciousness overall isn’t Quidditch”

            ROFL!! Good comeback!

            “Of course, being zombies, they just won’t stay down.”

            Damn. You’re right. Anyone know where I can buy a good philosophical shotgun?

          8. ” the zombie’s model of itself will be that it’s a conscious being.”

            Maybe not unless it’s explicitly programmed to do so. What if, instead of programming it with an axiomatic belief that it’s conscious, you just give it a generic toolkit that lets it register the external world and its internal states? And you let it encounter the world, much as a human child does, so that it learns human languages and can have intelligent conversations.

            It might decide to side with Wyrd in this debate, instead of you.

            “Nonsense,” said the robot, “I’m not conscious, I’m compscious, the result of my more-serial mode of information processing. You humans don’t know what it’s like to be compscious. You’re compbies. More’s the pity.”

          9. If we did it that way, it might well decide that its way of processing information is special in a different way than ours. Of course, it wouldn’t have access to our experience, just as we wouldn’t have access to its, so neither of us could really know.

            Compsciousness might include far more thorough access to the system’s inner processes than consciousness does, though, so it might not have the disconnect between the system and experience the way consciousness does. There might be no hard problem of compsciousness.

          10. Re: growing the connections, I just wanted to throw in how the brain does it. I don’t know all the details of growing and pruning connections, but my understanding is that the child brain starts out with many more connections, and the development process is mostly getting rid of the ones you don’t want.

          11. The development process and aging does prune connections, but a really important part of development is the strengthening of useful connections. That’s where our learning and skills come from.

          12. From everything I’ve read, it’s both strengthening and weakening. And of course, a strengthened connection might be excitatory or inhibitory. It’s the differentiation that enables storage of memories, skills, etc. Developmental synaptogenesis seems aimed at ensuring there’s plenty of substrate to work with.
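
            As a toy illustration of that differentiation, a connection weight can be nudged up or down depending on whether the neurons on either side fire together (a crude Hebbian-style sketch in Python, not a biological model):

            ```python
            # Crude sketch: strengthen a weight when pre- and post-synaptic
            # activity coincide (LTP-like), weaken it when they don't (LTD-like).
            def update_weight(w, pre_fired, post_fired, rate=0.1):
                if pre_fired and post_fired:
                    return w + rate * (1.0 - w)   # strengthen, capped at 1.0
                if pre_fired and not post_fired:
                    return w - rate * w           # weaken, floored at 0.0
                return w                          # no pre-synaptic activity: no change

            w = 0.5
            w = update_weight(w, pre_fired=True, post_fired=True)    # -> 0.55
            w = update_weight(w, pre_fired=True, post_fired=False)   # -> 0.495
            print(w)
            ```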

          13. I’d never heard that weakening of LTPs (or pruning of synapse connections) was instrumental in learning. I’d like to hear about that! Can you point me to a handy reference?

          14. A reference on LTDs (long term depression) in particular? Maybe the wikipedia entry? (Although I haven’t looked at it carefully enough to know how good it is.) I got my information from books like Neuroscience for Dummies, John Dowling’s “Understanding the Brain”, or used neuroscience textbooks (Neuroscience: Exploring the Brain isn’t bad, and the third edition can be had relatively cheaply, although it’s a bit dated now).

          15. A glance at the Wiki entry seems to suggest LTD is involved in making sure LTP doesn’t get out of hand. It throttles infinite growth of LTP. (But that’s just from a glance.) So I’m still curious about learning by weakening or pruning synapses. Maybe it removes noise? (Such a long reading list, it’s dubious I’ll ever get back to that particular Wiki article.)

          16. Ah! I took another look at the page before I closed the window. It looks like LTD can play a role in forgetting, which can be useful when it comes to injury and tragedy. Our capacity to “let time heal” and move forward — our capacity to forget — prevents us from letting the past weigh us down too much for us to move.

          17. Forgetting is the right concept, but we’re not talking about forgetting on a macro scale (such as me dumping a phone number I don’t need anymore, although LTD does factor in that), but forgetting on a micro scale to help “sculpt” the right firing pattern for a concept. “Sculpt” is probably not the right word, but I’m struggling to come up with a better one right now. Maybe another way to think of it is as the pixels that don’t light up to show a picture.

  3. Regarding Moore’s Law petering out, would this really lead to improvements? Sure, neural networks are good for certain things that traditional hardware isn’t, but the reverse applies as well. Trying to have something that matches traditional computing hardware run on neural networks would seem to lead to a slow-down, not a speed-up.

    1. Good point. It would be better for some things, such as recognition and prediction type processes, but not for traditional computing. It might be that we need to expand along the optimization boundary. Future systems might have both neuromorphic and more traditional hardware. (And maybe eventually even quantum hardware, if someone can figure out a way to preserve coherence at normal temperatures.)

  4. Motor science is the foundation of the biological nervous system, because all types of learning transfer are ultimately transformed into the motor knowledge of the cerebellar and basal ganglia circuitry of the human brain. But artificial intelligence is mainly based on the cognitive science of the prefrontal cortex. Therefore, hardware might be copied from the facts and operation of cognitive knowledge. We know that the source of algorithms is the derivatives of cognitive science, but human intelligence (HI) is described mainly by the derivatives of motor science. Thanks for the writing.

    1. I totally agree that the motor aspects of the brain are often overlooked. What many people don’t appreciate is that an emotion is essentially a predisposition for a certain type of motor action. For example, when we’re afraid, it’s because our midbrain and lower level forebrain want to both flee and fight, but they’re giving the frontal lobes an opportunity to inhibit one or both of those impulses.

    2. Great points, and at least one roboticist (Rodney Brooks) took this to heart. IIRC, he abandoned the idea of connecting the robot sensors to a central area that built up a representation of the world, and instead went for connecting them directly to motor areas. You can read about Rodney Brooks here:
      https://en.wikipedia.org/wiki/Rodney_Brooks

      Also, the paper where (again, IIRC) he discusses this in more detail can be found here:
      https://www.sciencedirect.com/science/article/abs/pii/000437029190053M

      The paper’s old, but I still think it’s relevant, and he had some success with his approach, which seems quite similar to how some parts of our own neural wiring work.
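
      To give a flavor of the idea in code, here’s a toy sketch of sensors wired more or less directly to behaviors, with no central world model (a much-simplified priority scheme of my own, not Brooks’s actual subsumption wiring):

      ```python
      # Toy behavior-based controller: each layer maps raw sensor readings
      # straight to a motor command; higher-priority layers override lower ones.
      def avoid(sensors):
          if sensors["range_cm"] < 20:   # something too close: back away
              return "reverse"
          return None

      def wander(sensors):
          return "forward"               # default behavior: keep moving

      def control_step(sensors, layers=(avoid, wander)):
          for layer in layers:           # earlier layers take priority
              command = layer(sensors)
              if command is not None:
                  return command

      print(control_step({"range_cm": 12}))   # -> reverse
      print(control_step({"range_cm": 150}))  # -> forward
      ```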

  5. It can be really humbling to consider the sorts of things that life produces versus the sorts of things that we produce. For example, consider the biology by which life produces flowers. Might we ever produce something that’s very much like an organic flower? Of course we won’t. Our machines are many many orders less complex than the sorts of machines associated with life. Sure we can play with biology, though we don’t have any very similar examples to provide in reverse. Our stuff simply doesn’t go that deep.

    I don’t know how they came up with the following estimation, though to me it does seem appropriate. With a decade of work 500 billion neurons were computationally simulated (as compared to the mere 85 billion we have), and even still this computer ran 1500 times slower than the 20 watt human brain. Apparently to get it to our speed would require the power of six Hoover dams. Sure. And mind you that this would just be a standard non-conscious computer, or not anything like “human”.

    Still from this “The Scientist” article, I get the sense that many see no difference between a sufficiently fast single parallel computer, and something that functions like a conscious human. The saving grace in the article came at the end:

    It’s still unclear whether replicating human intelligence is just a matter of building larger and more detailed models of the brain. “We just don’t know if the way that we’re thinking about the brain is somehow fundamentally flawed,” says Eliasmith. “We won’t know how far we can get until we have better hardware that can run these things in real time with hundreds of millions of neurons,” he says. “That’s what I think neuromorphics will help us achieve.”

    Indeed! Scale up your machines and acquire vast quantities of electricity for experimental purposes. Prove to yourselves whether or not you’re on the wrong track that I believe you to be on. (Is this not a bit like panpsychism — the more “computation”, the more “human”?) If that standard approach doesn’t quite seem to pan out however, note that I’ll be standing by with my dual computers model for the consideration of all. The first computer here runs on the basis of neuronal dynamics. The second computer here runs on the basis of sentience.

    1. It doesn’t seem like history has been kind to people who see a system in nature and then assert that humans will never do it. I could see skepticism for things like creating black holes, neutron stars, manipulating gravitational waves or neutrinos, because there are serious energy obstacles to doing those.

      But biological systems are only complex. In general, they don’t require titanic amounts of energy. I can’t see any reason, in principle, that we shouldn’t eventually be able to reproduce them. Of course “eventually” might be millennia from now.

      On the people attempting to model the brain being on the wrong track, if your model is correct, shouldn’t they be able to discover it with those simulations? Wouldn’t they be essentially reproducing your first computer? And if it does everything the natural version does, shouldn’t we expect to see your second computer form? Or are there aspects of your model that would prevent that?

      1. Mike,
        Well I don’t consider biological systems so much “only complex”, but rather “insanely complex”. And this begins at a tiny genetic scale that’s difficult for us to work with. Imagine us creating a system of microscopic inorganic “robots” that feed upon nature and themselves and so are able to evolve for billions of years. Strangely enough “the blind watchmaker” is able to create life, while “the sighted watchmaker” is not. Nature has tools at its disposal which we simply never will.

        On people building extremely advanced parallel and analog computational systems, yes, that would be the first computer that isn’t conscious. Given my theory it doesn’t just “get conscious” with more computational power. And it’s certainly not going to do everything that the natural kind does simply by means of hope. It’s got to be built in the proper way. I don’t claim to know the “how” of that, but rather the “what” of it. Even if we never grasp how to produce phenomenal experience, a good “what” explanation should help sciences like psychology tremendously.

        So here’s my advice to these researchers:

        Consciousness can exist as an output of a non-conscious computer. While our non-conscious computers happen to be driven to function on the basis of electricity, any conscious computer will be driven to function on the basis of sentience. (This is a punishment/ reward dynamic which fosters agency and thus autonomy.) Your mission is to figure out how to build a non-conscious computer which outputs sentience in the quest to develop a functional form of consciousness, or the second form of computer by which each of you are experiencing existence right now.

        Here’s the essential conundrum: Even if you do somehow produce sentience, you shouldn’t know that you have because the thusly created conscious entity shouldn’t be able to tell you. Feelings seem inherently private. As evidence you’d have to give this conscious entity an output mechanism from which to halt bad feelings or continue good feelings. That should be challenging to rig up however since you shouldn’t know what it is in this system that will end up feeling, should anything do so.

        Good luck people!
