Could a neuroscientist understand a microprocessor? Is that a relevant question?

A while back, Julia Galef on Rationally Speaking interviewed Eric Jonas, one of the authors of a study that attempted to use neuroscience techniques on a simple computer processor.

The field of neuroscience has been collecting more and more data, and developing increasingly advanced technological tools in its race to understand how the brain works. But can those data and tools ever yield true understanding? This episode features neuroscientist and computer scientist Eric Jonas, discussing his provocative paper titled “Could a Neuroscientist Understand a Microprocessor?” in which he applied state-of-the-art neuroscience tools, like lesion analysis, to a computer chip. By applying neuroscience’s tools to a system that humans fully understand (because we built it from scratch), he was able to reveal how surprisingly uninformative those tools actually are.

More specifically, Jonas looked at how selectively removing one transistor at a time from a simulated MOS 6502 (effectively creating a one-transistor-sized lesion) affected the behavior of three video games: Space Invaders, Donkey Kong, and Pitfall.  The idea was to see how informative a technique often used in neuroscience, correlating a lesion with a change in behavior, would be for understanding how the chip generated game behavior.
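
To make the method concrete, here’s a rough sketch in Python of what such a lesion sweep might look like.  (This is my illustration, not the authors’ actual harness; their study drove a full transistor-level simulation of the chip, which the stub below merely stands in for.)

```python
# Illustrative sketch of a single-transistor lesion sweep.
# run_game() is a hypothetical stand-in for driving a
# transistor-level simulation of the 6502; it is not the
# authors' actual code.

def run_game(game, lesioned=None):
    """Return True if the game still plays with the given
    transistor forced off (stub for a real chip simulator)."""
    raise NotImplementedError("plug in a transistor-level simulator")

def classify_transistors(games, n_transistors):
    """For each transistor, record which games survive its removal."""
    return {t: {g: run_game(g, lesioned=t) for g in games}
            for t in range(n_transistors)}

# A "Donkey Kong transistor" would be one whose row came back as:
# {"donkey_kong": False, "space_invaders": True, "pitfall": True}
# table = classify_transistors(
#     ["donkey_kong", "space_invaders", "pitfall"], n_transistors=3510)
```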

As it turned out, not very informative.  From the transcript:

But we can then look on the other side and say: which transistors were necessary for the playing of Donkey Kong? And when we do this, we go through and we find that about half the transistors actually are necessary for any game at all. If you break that, then just no game is played. And half the transistors if you get rid of them, it doesn’t appear to have any impact on the game at all.

There’s just this very small set, let’s say 10% or so, that are … less than that, 3% or so … that are kind of video game specific. So there’s this group of transistors that if you break them, you only lose the ability to play Donkey Kong. And if you were a neuroscientist you’d say, “Yes! These are the Donkey Kong transistors. This is the one that results in Mario having this aggression type impulse to fight with this ape.”

Jonas makes an important point, one that just about any reputable neuroscientist would agree with: neuroscience is far from a comprehensive understanding of how brains generate behavior.  And his actual views are quite nuanced.  Still, I think many people are overselling the results of this experiment.  There’s a sentiment that all the neuroscience work currently being done is worthless, which I think is wrong.

The issue, which Jonas acknowledges but then largely dismisses, lies in the difference between what we think we know about how brains work and what we know about how computer chips work: specifically, the hardware / software divide.  When we run software on a computer, we’re actually using layered machinery.  On one level is the hardware, but on another level, often just as sophisticated, if not more so, is the software.

To illustrate this, consider the two images below.  The first is the architecture of the old Intel 80386DX processor.  The second is the architecture of one of the most complicated software systems ever built: Windows NT.  (Click on either image to see it in more detail, but don’t worry about understanding the actual architectures.  I’m not going down the computer science rabbit hole here.)

Architecture of the Intel 80386DX processor.
Image credit: Appaloosa via Wikipedia

Architecture of Windows NT.
Image credit: Grn wmr via Wikipedia

The thing to understand is that the second system is built completely on the first.  If it occurred in nature, we’d probably consider the second system to be emergent from the first.  In other words, the second system is entirely a category of actions of the first system.  The second system is what the first system does (or more accurately, a subset of what it can do).

This works because the first system is a general purpose computing machine.  Windows is just one example of vast ephemeral machines built on top of general computing ones.  Implementing these vast software machines is possible because the general computing machine is very fast, roughly a million times faster than biological nervous systems.  This is why virtually all artificial neural networks, until recently, were implemented as software, not in hardware (as they are in living systems).
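
As a toy illustration of such an ephemeral machine (my own example, nothing to do with Windows itself), here is a complete little “machine” with its own instruction set, existing only as a pattern of activity on whatever general-purpose processor runs it:

```python
# A toy "ephemeral machine": a tiny stack-based virtual machine
# defined entirely in software. Its "hardware" is just a pattern
# of activity on the general-purpose processor running Python.

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 on the virtual machine
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))  # 20
```

Lesion one transistor of the physical CPU and you learn next to nothing about where “add” or “mul” lives, because those operations only exist at this higher layer.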

However, a performance optimization always available to engineers who control both the hardware and software of a system is to implement functionality in hardware.  Doing so often improves performance substantially, since it moves that functionality down to a more primal layer.  This is why researchers are now starting to implement neural networks at the hardware level.  (We don’t implement everything in hardware because doing so would require a lot more hardware.)

Now, imagine that the only hardware an engineer had was a million times slower than current commercial systems.  The engineer, tasked with creating the same overall systems, would be forced to optimize heavily by moving substantial functionality into the hardware.  Much more of the system’s behavior would then be modules in the actual hardware, rather than modules in a higher level of abstraction.

In other words, we would expect that more of a brain’s functionality would be in its physical substrate, rather than in some higher abstraction of its behavior.  As it turns out, that’s what the empirical evidence of the last century and a half of neurological case studies shows.  (The current wave of fMRI studies is only confirming this, and doing so with more granularity.)

Jonas argues that we can’t be sure that the brain isn’t implementing some vast software layer.  Strictly speaking, he’s right.  But the evidence we have from neuroscience doesn’t match the evidence he obtained by lesioning a 6502 processor.  In the case of brains, lesioning a specific region very often leads to specific function loss.  If the brain were a general purpose computing system, we would expect results similar to those with the 6502, but we don’t get them.

Incidentally, lesioning a 6502 to see the effect it has on, say, Donkey Kong, is a mismatch between abstraction layers.  Doing so seems more equivalent to lesioning my brain to see what effect it has on my ability to play Donkey Kong, rather than my overall mental capabilities.  I suspect half the lesions might completely destroy my ability to play any video games, and many others would have no effect at all, similar to the results Jonas got.

Lesioning the 6502 to see what deficits arise in its general computing functionality would be a much more relevant study.  This recognizes that the 6502 is a general computing machine, and should be tested as one, just as testing for brain lesions recognizes that a brain is ultimately a movement decision machine, not a general purpose computing one.  (The brain is still a computational system, just not a general purpose one designed to load arbitrary software.)
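
Here’s a sketch of what testing at that layer might look like.  (Again illustrative: `execute` is a hypothetical stub standing in for a transistor-level simulator.)

```python
# Probe the lesioned chip's general computing primitives rather than
# a particular game. execute() is a hypothetical stand-in that runs a
# short 6502 instruction sequence on a simulated chip (with one
# transistor disabled) and returns the accumulator's final value.

def execute(program, lesioned=None):
    raise NotImplementedError("plug in a transistor-level simulator")

# Each test: (instruction sequence, expected accumulator value).
PRIMITIVE_TESTS = {
    "load":  (["LDA #$07"], 0x07),
    "add":   (["CLC", "LDA #$02", "ADC #$03"], 0x05),
    "store": (["LDA #$09", "STA $10", "LDA #$00", "LDA $10"], 0x09),
}

def deficits(lesioned_transistor):
    """List the primitive capabilities this particular lesion breaks."""
    return [name for name, (program, expected) in PRIMITIVE_TESTS.items()
            if execute(program, lesioned=lesioned_transistor) != expected]
```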

All of which is to say, while I think Jonas’ point about neuroscience being very far from a full understanding of the brain is definitely true, that doesn’t mean the more limited levels of understanding it is currently garnering are useless.  There’s a danger in being too rigid or binary in our use of the word “understanding”.  Pointing out how limited that understanding is may have some cautionary value, but it ultimately does little to move the science forward.

What do you think?  Am I just rationalizing the difference between brains and computer chips (as some proponents of this experiment argue)?  Is there evidence for a vast software layer in the brain?  Or is there some other aspect of this that I’m missing?

45 thoughts on “Could a neuroscientist understand a microprocessor? Is that a relevant question?”

  1. Part of the problem is the brain evolved to deal with disease and injury. If one set of neurons, whose function is X, is damaged, a certain amount of rewiring occurs to transfer that function X to another set. Other than memory, this doesn’t seem to be built into computer architecture (yet). So, the removal of one neuron is not exactly equivalent to removal of one transistor.

    Realize, too, that our understanding of neurobiology is still in its infancy (many of these tools just did not exist 10 years ago) so the tools are still quite primitive.

    Interesting reality check, though. I wonder if theologians would be interested in such a test of tools?

    1. It’s not quite the same thing, but once we expand our view to software systems and systems involving networked hardware, there is “self healing” going on. A good example is the internet’s ability to route around failure points. Assuming it’s not the router in your home, a failure anywhere along the line doesn’t stop you from connecting to this site.

  2. As I understand it the brain’s hardware is its software. If I learn something it is encoded physically as new or reinforced neurons and synapses. Spatial knowledge is plotted on a physical grid of neurons. A brain doesn’t have the same kind of architecture as a CPU.

    My sense of it is that there is almost nothing analogous between a microprocessor and a brain; nor between a neuron with dozens of dendrites and synapses and a three-legged transistor. Nor between a solder joint and a synapse. Nor between the text editor app embedded in Chrome and the brain modules that produce the sentences I’m typing. Nothing about the tool I’m using to write this resembles my brain – in structure or function.

    The authors of the original paper took methods developed to study a physically instantiated neural network (a brain) and used them to study a physically instantiated CPU. No one would argue, I think, that a CPU with 10 million transistors was analogous to an artificial neural network with 10 million nodes. The method in question is not so general that it could apply to both.

    I’m not convinced that the brain is doing computation any more than a ballistic rock following a parabola is doing computation. Not all evolution over time involves computation; some of it is just patterned.

    1. Strictly speaking, when software records information, that’s also a physical event. It might be magnetic alignments changing on a disk surface or floating-gate transistors switching in an SSD, but either way it’s a physical event that changes the system.

      On computation, I guess it depends on how narrow or liberal we want to be with the term “computation”. What would you say are the essential attributes to call a system “computational”?

      1. Sure there is a physical event. An element changes state. But the CPU never creates new transistors. It never creates new connections between existing transistors. All the CPU can do is switch existing elements on and off. Everything is etched in stone (or silicon, which is the major component of most rocks).

        It’s the same problem as redundancy not equating to healing. When the only tool you have is a hammer, everything starts to look like a nail. Or… when the best tool you have is a computer, everything starts to look like computation.

        Computation is not my area of expertise, so I doubt any definition I came up with would satisfy many people. I would think that at a minimum it requires a set of instructions and a machine that can process those instructions. One might argue that cells do computation when they make proteins, since DNA is a set of instructions and the ribosome is a machine for carrying out those instructions. Inputs are amino acids, outputs are proteins. The problem is that making proteins is perhaps half of what a cell does. It also makes lipids, nucleotides, and haems for which there are no instructions, only catalysts. Metabolism is not computational. If you look at a complex process like the electron transport chain, there is no instruction set. It just leads to higher local entropy, so it happens without needing instructions.

        I think those people who liken the laws of physics to an instruction set are reaching. The universe is constrained to evolve according to patterns, but there are no instructions. And there are many layers of emergent properties for which no instruction set is possible.

        A CPU has a similar level of complexity to the protein making equipment in a cell – but a computer like the one I’m typing this on is significantly less complex than a whole cell. Scale differences tend to obscure this.

        Which all says that JamesOfSeattle is on the right track in suggesting that the methods used for studying brains are appropriate for studying neural networks. OTOH let some biochemists loose on the CPU to discover how it works.

        My guess is that what is happening in the brain is more like climate and weather than computation – think of those images that Henry Markram produced a few years ago while simulating a single cortical “column” from a mouse brain (ca. 10,000 neurons) – it was like looking at the surface of a lake with rain falling on it and rivers flowing out. Which is why it is so difficult to simulate.

        If the brain were merely doing computation we’d already have a good simulation of the C. elegans nervous system of just 302 neurons. But, to the best of my knowledge, we do not.

        1. My understanding is that a mature brain generally doesn’t create new neurons (except possibly in the hippocampus, but that proposition is controversial).

          What would you say is the difference between redundancy with the associated ability to route around failed components, and the brain’s ability (to an extent) to rewire around lesions?

          Your requirement for a set of instructions is actually similar to criteria I’ve heard from computer scientists, so you’re on good ground. As you allude to, it comes down to what we’re willing to consider “a set of instructions”. You characterize thinking of the laws of nature as instructions as a reach, but I’m not clear why. It seems like those laws are the universe’s base programming. In any case, would you say that innate instincts could qualify as instructions? If not, why not?

          On saying that if the brain were only doing computation, we’d have a good model by now, I think one of the valid takeaways from Jonas’ experiment is that this doesn’t appear to be true. His whole point is that discovering how even a designed computational system works, if we don’t know the design, is profoundly difficult. Or am I missing something here?

          1. Starting at the end, I think we have established that the analogy fails and thus we don’t expect the method to work on a CPU. This seems to be what you are missing.

            If the laws of physics are an instruction set, in what medium are they stored? How are they encoded? What is the machine that processes them? Who programmed the instructions?

            Redundancy in a packet switching network relies on all possible paths being equal. In other words the route any given packet takes does not determine the final effect that it has. This is not true in the brain. Damage in one pathway or node produces location specific deficits – Capgras, hemi-neglect, amnesia, anosmia, etc. In the brain, the pathway is the effect.

          2. “If the laws of physics are an instruction set, in what medium are they stored? How are they encoded? What is the machine that processes them? Who programmed the instructions?”

            You’ve put a lot of restrictions here on what instructions can be, restricting them to something that can be stored in an identifiable substrate and encoded, presumably in some identifiable coding. And your question about who programmed them implies that you require a conscious programmer. You’ve effectively constrained your definition of computation to engineered systems.

            In my view, that definition is artificially narrow. But ultimately definitions are utterly relativist. There’s no right or wrong definitions, only more or less productive ones. And just within computer science circles, people have been arguing about the definition of computation for a long time. I think using the word to refer to neural processing is productive, that it conveys important similarities between neural and technological systems. Much, if not most of the neuroscience community appears to agree. But maybe as more data comes in we’ll turn out to be wrong. Only time will tell.

  3. I think the reason Jonas’ experiment did not provide much insight is that the analogy between transistors in a general purpose computer and neurons in a brain is quite poor.

    Suppose that instead of experimenting with computer games he had experimented with an actual neural network which recognizes cats. The lesion analogous to removing a neuron would be to remove one of the nodes in the software. Because the computer is a general purpose computer, it works by simulating each neuron one at a time. Removing a transistor from the computer thus has a potential effect on every node/neuron. The equivalent neural lesion would probably be to remove a gene from all the neurons.
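
    To make that concrete, here’s a minimal sketch (illustrative only) of lesioning one node in a small software network and measuring the effect:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny feedforward net: 4 inputs -> 8 hidden units -> 1 output.
    W1 = rng.normal(size=(8, 4))
    W2 = rng.normal(size=(1, 8))

    def forward(x, ablated_unit=None):
        h = np.tanh(W1 @ x)
        if ablated_unit is not None:
            h[ablated_unit] = 0.0  # the "lesion": silence one node
        return (W2 @ h).item()

    x = rng.normal(size=4)
    baseline = forward(x)
    for unit in range(8):
        print(unit, forward(x, ablated_unit=unit) - baseline)
    ```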

    *

  4. I’m interested in your question “Is there evidence for a vast software layer in the brain?”, but I need to understand your definitions of “software” and “layer”.

    I personally equate software with functional description. That is, any physical system that has a function has two descriptions, a physical description (hardware) and a functional description (software). The functional description is hardware independent and so could be implemented in more than one physical system. So for (an over-simplified) example, suppose neuron A fires in response to a c-fiber signal coming from the left thumb. Neuron A is connected to neurons B and C, each of which is connected to many other neurons. Say neuron B starts a cascade of firings which result in moving the left hand. Say neuron C starts a cascade which generates a memory. The physical description describes what happens to all the parts. The functional description would probably describe a pain signal generating a reflex pull of the arm and a memory used to avoid that circumstance in the future. Now neuron A could be replaced with a silicon version that works completely differently but has the same effects on neurons B and C in response to the c-fiber. The functional (software) description would remain the same, but the physical description (hardware) would be different.
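
    In code terms, a toy sketch of that substrate-independence (illustrative only):

    ```python
    # Two different "physical" implementations (hardware) satisfying one
    # functional description (software): on a c-fiber signal, trigger a
    # withdrawal cascade and a memory cascade.

    class BiologicalNeuronA:
        def on_c_fiber_signal(self):
            # fires neurons B and C via synaptic transmission
            return ("withdraw_left_hand", "store_pain_memory")

    class SiliconNeuronA:
        def on_c_fiber_signal(self):
            # drives the same downstream effects via a comparator circuit
            return ("withdraw_left_hand", "store_pain_memory")

    # Functionally interchangeable: the functional (software) description
    # stays the same even though the physical (hardware) description differs.
    for neuron_a in (BiologicalNeuronA(), SiliconNeuronA()):
        assert neuron_a.on_c_fiber_signal() == ("withdraw_left_hand",
                                                "store_pain_memory")
    ```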

    So when you ask “Is there evidence for a vast software layer in the brain?” My initial response would be yes, the functional description of the brain would be vast, obv. So am I misunderstanding the question?

    *

    1. Good points on testing neural networks and equating transistors to genes.

      On my definition of software and layers, I like your description of it. In computing systems, the functional layer, in comparison to the hardware, is so vast and encompassing that it renders correlating physical aspects of the processor to capabilities moot. I guess what I was trying to get at with my question is, do we have evidence that the functional layer in the brain is large enough that it renders any association of physical regions to primal capabilities moot?

      Or perhaps another way of asking it is, what should we consider to be the primal capabilities of the brain? We know the primal capability of a computer processor is general purpose computation because it’s designed that way. And what should we consider to be capabilities that are completely defined and handled in the functional layer?

      For example, recognizing shapes in a computing system happens in the functional layer.  But it appears to be a pretty primal capability of the brain associated with specific regions, albeit refined and tuned by learning. On the other hand, recognizing a good investment seems like a much more functional event.

      1. No offense, but I don’t like the way you phrase your question “do we have evidence that the functional layer in the brain is large enough that it renders any association of physical regions to primal capabilities moot?”

        That phrasing makes it sound like whether there is an association of functions to specific structures depends on the total size of the functional layer. I think instead there will be some functions tied very closely to the architecture, like the visual system, and other functions, like the creation of concepts (“a good investment”, “President Obama”), that have a more general purpose architecture.

        *

  5. Very cool thoughts. The way we currently understand it, brains learn by strengthening and modifying neural pathways, which seems precisely to be information translated down to the hardware level. In ANNs, for example, this would be the weights of the nodes, which do not get translated to the hardware but stay as software.
    However, I don’t feel that is the key difference here. I feel the lesioning of the 6502 processor was not done in the same manner as the brain lesioning. If we consider each neuron in the brain to be separate, we might find the same effect, that half the neurons don’t individually affect much. Conversely, if the processor’s transistors were divided into groups such as graphics, controls, scoring, game agents, etc., then lesioning would effectively take out one of these sections, and we could note the effect. I feel like that would be the more comparable analog, which might yield more similarity between artificial and natural processors.

    1. Thanks. Good points, although wouldn’t you think that controls, scoring, and game agents are at a higher level of abstraction? I do agree that graphic manipulation and rendering might be a good thing to test, as well as the ability to add or subtract, load, hold, and retrieve memory, do comparisons, etc.

  6. I’m not enough of an expert on either computers or neuroscience to comment on this topic specifically, but I think you raised a really good point that applies to science in general. For some people (i.e. science skeptics, science deniers) the idea that science can’t understand everything must mean that science understands nothing at all. That’s a pretty dangerous road to start going down.

    1. Good point. There’s often a sentiment that if science can’t understand something perfectly, then it doesn’t understand it at all. But understanding should be viewed in terms of reductions of uncertainty. If I start with no knowledge of something, and science only gives me a blurry, gappy picture of it, at least I have more information than I did before. New information may later fill in surprising details, but assuming the early evidence is solid, the broad outline is unlikely to change.

      I think that’s where we are with the brain. There are still enormous amounts of detail to learn, such as exactly how networks in the brain interact, how exactly our sense of self is constructed, or the molecular details of exactly how synapses work. But there’s a lot of agreement on the broad outlines. There will be new, often shocking details for these outlines in the future, but their overall structure is unlikely to be radically changed. For example, we won’t suddenly discover that the occipital lobe isn’t heavily involved in vision processing, although we probably will learn surprising new details about how that processing happens.

    1. An argument could be made that they’re similar interruptions, since both are interruptions in gated flows of energy, altering the resulting logic of the system. Of course, we could discuss the differences all day long.

  7. We reached the end of the WordPress threading capacity, so I’ll start it again here.

    I asked: “If the laws of physics are an instruction set, in what medium are they stored? How are they encoded? What is the machine that processes them? Who programmed the instructions?”

    You’ve put a lot of restrictions here on what instructions can be, restricting them to something that can be stored in an identifiable substrate and encoded, presumably in some identifiable coding. And your question about who programmed them implies that you require a conscious programmer. You’ve effectively constrained your definition of computation to engineered systems.

    In my view, that definition is artificially narrow. But ultimately definitions are utterly relativist. There’s no right or wrong definitions, only more or less productive ones. And just within computer science circles, people have been arguing about the definition of computation for a long time. I think using the word to refer to neural processing is productive, that it conveys important similarities between neural and technological systems. Much, if not most of the neuroscience community appears to agree. But maybe as more data comes in we’ll turn out to be wrong. Only time will tell.

    ~~~

    You asked for my definition. And I gave it with the caveat that I was outside my area of expertise. And your response was “Your requirement for a set of instructions is actually similar to criteria I’ve heard from computer scientists, so you’re on good ground.”

    “But ultimately definitions are utterly relativist.”

    This definition of definitions is self-defeating. Relativism is about the most unproductive philosophy ever invented. It is the worst possible philosophy because collective progress gives way to atomised views and solipsism. Eventually one ends up down the cul de sac of naive idealism.

    When you promote computation as metaphysics, as you seem to, then I can only scratch my head and try to ask questions that elicit some kind of reasoning behind that view. So far I seem to be asking the wrong questions and you are not volunteering any useful information (we are already at the limits of relativism).

    I have identified some processes that seem to me to be examples of processes that are not the result of computation (ballistic rocks, lipid synthesis in cells) but despite prompting you have not given any indication of how we can consider these to be computational.

    You say, “I think using the word to refer to neural processing is productive”

    Productive in what way? Can you suggest a well-known example of how treating a neural process as “computation” has helped to solve a problem in the field of neuroscience?

    ” Much, if not most of the neuroscience community appears to agree. ”

    Hmm. I’ve read a good deal of neuroscience (Damasio, LeDoux, Sacks, Ramachandran, Metzinger, Blanke, Fine, Church, etc etc). And I follow the field as best I can through various neuroscience blogs and science reporting websites. I have yet to see any neuroscientist refer to computation. The only references to it that I can recall come from computing people and philosophers of the “it’s all just X” variety (where X can be information, computation, simulation, hologram and so on).

    So could you point me to some prominent neuroscientists who are using this paradigm?

    So far the combination of computation as metaphysics and relativism strikes me as an intellectual disaster. Perhaps some concrete examples of how it solves problems might dispel this impression.

    1. I did ask for your definition, and I appreciate the effort you put into providing one. But if I gave the impression I would just accept and adopt what you provided, then I apologize. It was only meant as a conversation prompt.

      Observing that the meaning of the sounds we utter isn’t defined in any objective platonic realm isn’t a definition of definitions so much as a recognition of the reality. I’m not advocating for its desirability. Reality is reality and doesn’t seem to care about what we’d like it to be.

      On the processes you identified, I should note that I’m a limited pancomputationalist, so I do think those processes are doing computation. In general, to me, any process that can be modeled computationally can be considered to be performing that computation. It’s a matter of perspective and interpretation. Every computational system can be understood either purely in physical terms or in logical ones.

      Of course, that’s different from seeing it as productive to treat those things as computational systems. When I apply that label, it’s because I perceive that the main thing the system is doing is computation, that the computation, the information processing, the modifications of energy flows, is far more causal than the quantity of energy involved. When the quantity of energy is more causal (as in the case of a pump, such as a heart), I think of it as more of a physical system. (Ultimately they’re all physical systems, so these categorizations are just mental crutches.)

      On neuroscientists and computation, I just did a full text search on my Kindle editions of Damasio’s ‘Self Comes to Mind’ and Ramachandran’s ‘The Tell-Tale Brain’ and found numerous references to neural computation in both. Others I’ve read include Frank Amthor, Suzana Herculano-Houzel, Elkhonon Goldberg, Jaak Panksepp, Giulio Tononi, Steven Pinker, Michael Graziano, Michael Gazzaniga, and Todd Feinberg and Jon Mallatt, as well as many scientists interviewed on Ginger Campbell’s excellent Brain Science Podcast and on the Brain Matter podcast. Most of these people do stipulate that the brain is not like a commercial digital computer (a view I absolutely accept), but still talk in terms of neural computation.

      1. Sorry to jump in here, but you’re getting into a subject close to my heart.

        If we accept that a computation is equivalent to an information process, I would suggest the key ingredient is semantic information, and the threshold for saying that information has meaning is the identification of a function or purpose. Thus, it would be hard to ascribe purpose to a ballistic rock ejected from a volcano. Lipid synthesis in cells, on the other hand, can surely be ascribed purpose, so the question would be where information is involved.

        1. No worries on jumping in. Everyone is welcome in any discussion.

          I do see computation and information processing as synonymous. The purpose criterion is interesting. But it seems to imply that there is some agent that considers what the state of affairs would be without a process, what it would be with that process, and chooses to invoke the process. In the case of technology, that’s true. But for natural processes, purpose seems like a matter of post facto interpretation.

          But if we focus on just the process itself, there’s never any purpose. A transistor’s operation unfolds according to the laws of physics, a protein’s operations according to the laws of molecular chemistry, and cellular processes according to the available chemical pathways.

          So, strictly speaking, the purpose of anything is a matter of interpretation. We interpret the transistor’s purpose as one of switching, the protein’s purpose as whatever function we can detect for it within its wider context, and cellular operations in terms of the cell’s survival and procreation. We could interpret the purpose of ejected volcanic rock, within its wider context, as fertilizing the countryside.

          There is no real teleology, no natural purposes, only our interpretation of what those purposes might be. Which brings us back to computation / information processing being itself a matter of interpretation. Of course, some processes are far easier (require less energy) to interpret as computation than others.

          1. Awesome! You got everything just about right, but I want to adjust your concept of purpose. As you suggest, purpose is not something a device or process has or possesses. Purpose is a particular type of explanation of how something came to be. For those things or processes generated by Nature the explanation is as you say, post facto. The concept of purpose is closely tied with the concepts of function and design. Something that has a function also has a purpose (to fulfill or achieve that function). We know of two different kinds of design: natural design and intelligent design. But each type of design can be associated with purpose. Label it “Natural purpose” if you must.

            Now we get to information processes, that is to say, processes in which the key ingredient of the input is information. I can define information in this context, but that would take a lot of verbiage. Let me try a shortcut by saying the important thing about information here is that it represents something else, something in the causal history of the input. The thing is, the “causal history” of any input goes all the way back to the Big Bang (at least). So this is where purpose/value/function/design comes in. Let’s start with an example.

            My go-to example for this is a cell-surface receptor for chemotaxis (moving toward or away from the source of a chemical). Let’s say there is a food source that gives off chemical X. Let’s say that there is a bacterium that has a cell-surface receptor A that does not respond to X. Let’s say that mutations of A produce different bacteria with receptors A1, A2, A3.

            A1 does not respond to X.
            A2 responds to X by moving the bacterium away from the source.
            A3 responds to X by moving the bacterium toward the source.

            Obviously, A3 will be selected for and become dominant.

            So now we come along and find all these bacteria with A3. We see that it responds to chemical X, even though there is no special value in responding to just X.

            Now suppose the source only grows in a location that produces chemical Z. Chemical Z has no effect on our first bacterium with A3, but Z is toxic to a paramecium. Let’s say the paramecium has an equivalent receptor B and equivalent mutations happen such that

            B1 does not respond to X.
            B2 responds to X by moving the paramecium away from the source.
            B3 responds to X by moving the paramecium toward the source.

            So B2 gets selected in the paramecium.

            The point of all this is to show that both receptors A3 and B2 are recognizing the same thing, chemical X, but they are picking out different things in the causal history of X, A3 responding to the source which creates food, and B2 picking out chemical Z. Which specific thing gets picked out is determined by the function being served.

            So I’m saying a computation is a process that uses information for a purpose.

            *

          2. Hi James,
            Obviously Mike can respond as he chooses, but this would be my reply.

            I’ll grant anyone their definitions given my first principle of epistemology, and even panpsychists who simply define the “consciousness” term such that everything has an element of it. The trick is to build from there (as the panpsychists clearly have not). Today in science/philosophy, “purpose” is consciousness based, or teleological, while the stuff that isn’t conscious, but kind of seems similar, is teleonomic, such as the function of evolution or bacteria. (Massimo Pigliucci taught me this convention here: https://platofootnote.wordpress.com/2017/06/12/purpose-in-science-and-morality/comment-page-3/#comment-21838 )

            We’re in the same essential position I think — theorists outside the system. So where we can it should be helpful to adopt this system’s terminology. But then where we can’t, we should try to present good reason that our nonstandard definitions are sufficiently useful while others are attempting to interpret our ideas. We should never try to bring our own nonstandard definitions somewhere else however, since none are “true”.

            (Then if you ask me to reduce the purpose of teleological existence beyond consciousness, well that’s controversial. My theory is that there is a product of non-conscious existence that creates punishment/reward that drives the conscious form of computer, or thus creates “value”. Unfortunately for me this position can conflict with our existing morality paradigm.)

        2. James,
          I appreciate the purpose adjustment. I think I can sum up your position as being similar to Daniel Dennett’s, that we shouldn’t apologize for or qualify language discussing the design or purpose of natural objects, that we should be comfortable with the idea of “competency without comprehension”, of design without a designer, of purpose without a planner.

          I don’t really have any issue with that. But I know from personal experience here on the blog that if I don’t hedge or qualify statements about natural purposes or design, there will often be someone who pedantically makes an issue out of that language.

          It helps to bring up the concepts that Eric mentions, the distinction between teleology and teleonomy, that is, the distinction between natural purposes and the appearance of natural purposes. This reminds me of all the effort Dawkins went through in ‘The Selfish Gene’ to clarify that the idea of selfish genes is a metaphor, and yet despite all that hedging, people still decades later accuse him of attributing motivation to sequences of nucleotides.

  8. Mike,
    Thank you for taking this interview so seriously, particularly since you know how supportive I am of Jonas and Kording’s message. I see that you aren’t outright challenging their thesis, or maintaining that the tools of neuroscience actually are appropriate for gaining a working understanding of microprocessor function. Instead you seem to be disputing their use of this particular analogy, as well as stating (quite rightly I think) that, regardless, their work should not be interpreted to mean that modern neuroscience is a worthless endeavor. I suspect that the authors would largely agree, and even divulge (though perhaps only confidentially) that their paper itself is essentially “academic window dressing”. Apparently papers like this are the means by which to be heard in their distinguished club. Sure we might fault it in various specific ways, but can we fault their overall message? Can we say that the atheoretical tools of neuroscience actually do furnish us with the useful answers that we need?

    I loved Jonas’ scenario of collecting I/O data from an adding algorithm, with the observation that this in itself will not provide a useful understanding of addition. Then there was Galef’s question about simply crunching numbers to effectively function socially, as a p-zombie would need to. Of course this isn’t sufficient either. If neuroscience is ever to provide the highly useful understandings that we need, it will require reductive theory that comes from a higher level, which is to say, from psychology. Of course psychology remains a troubled field — one of the softest of our soft sciences. Hopefully we can all see the connection by which failure in one field should have repercussions for the next (presuming naturalism of course).

    Regardless, let’s now acknowledge the very patient “elephant” that’s sitting in the room with us. Neuroscience will not function at anywhere near its potential, without effective consciousness theory. It will remain in a holding pattern until future researchers, armed with such theory, come back to make sense of what remains so vexing today.

    The first place I went after reading this post was your post on Steven Pinker’s “From neurons to consciousness” lecture. https://selfawarepatterns.com/2017y/04/15/steven-pinker-from-neurons-to-consciousness/ He demonstrated how neurons function as “and” “or” and “not” mechanisms (the building blocks of all functional logic) on one side of the water, as well as “lateral inhibition”, “opponent processes”, and “habituation” on the consciousness side — no middle ground provided. Nevertheless I believe that I’ve developed a way to span this void, as well as to “harden up” philosophy and our soft sciences. Eric Jonas and Konrad Kording give me hope that the academicians not interested in maintaining the status quo, may be a more sizable group than I’d thought!

    1. Thanks Eric.

      I actually wasn’t responding to Jonas and Kording in particular, but to the wave of people on social media who over interpreted their findings to mean that they can now safely ignore neuroscience and go back to whatever cherished theories they prefer. (Unfortunately I waited too long to respond, so I can’t find any of those entries from back in August / September to link to.)

      That said, I’ll admit to having distaste for the practice of people pointing out obvious problems without solutions. There are times when it’s productive, such as when researchers pointed out that the results of famous psychology experiments couldn’t be replicated. But it was productive because there are well known solutions to that problem, which weren’t being pursued due to the incentive structure that had developed in the field. But pointing out to neuroscientists that their methods and understanding are limited? That’s already well known in the neuroscience community and reputable neuroscientists regularly point it out. I think you pegged them right as using it as an attention mechanism.

      (My attitude here is probably shaped by the people in the organization I work in who love to point out fairly obvious problems, without providing solutions, and act like they’re contributing something, often it seems, with the same attention grabbing motivation.)

      On a theory of consciousness being necessary for neuroscience, actually I can’t really say I’m convinced of that. I think trying to understand consciousness is an interesting endeavor, but many, maybe most neuroscientists ignore it and still make progress. (Some, such as Elkhonon Goldberg, do so with disdain, seeing it as an obsolete concept.) Consciousness, something people still argue over the basic definition of, is for most of them a red herring. Lots of progress is being made on the components of consciousness: vision perception, decision making, etc.

      But as we’ve discussed before, I think this is a conceptual disagreement between us. I see consciousness as a composite phenomenon, a label we give to an ill-defined and shifting collection of a wide variety of cognitive processes. (Which I try to make sense of with the layers I’ve described in earlier posts, but that’s more of a mental crutch than anything.) Consequently, I don’t foresee any one eureka moment when it’s suddenly going to be understood, but perhaps a lot of small eureka moments as each piece is understood.

      Thanks for linking to that Pinker talk! Although very dense, it’s one of the best ones for people who don’t understand neural computation and want to learn what that paradigm brings to the table.

    2. Mike,
      I do appreciate your point that certain people must have been quite disrespectful of neuroscience after the Jonas and Kording paper was published. This should be similar to how certain people today consider philosophy itself to be a worthless endeavor. There are obviously important questions considered by philosophers, but what answers do these critics provide? At least philosophers try. An even stronger argument can be made against the critics of neuroscience, given the accepted understandings which this community provides us with.

      On neuroscience not needing a functional model of consciousness, be careful not to fall into the trap that I think many philosophers do. Here philosophers decide that they don’t need communal understandings, but not because such understandings would be useless for humanity. No, rather because their field doesn’t yet have them, and so they’d rather not be criticized in this way. Philosophers have surely failed here, just as neuroscience and all associated fields will need to come to terms with consciousness sooner or later for their continued progression.

      Let’s say that advanced aliens send us two somewhat similar and functional robots, though one seems quite conscious, while the other does not. We are then tasked with understanding how each robot functions computationally. When our computer scientists go to work on the conscious unit, can you not imagine a useful consciousness model which illustrates the computational differences between this form of robot versus the one that is not conscious?

      I’ve developed a model by which a vast non-conscious form of computer facilitates a tiny conscious form of computer, and this conscious part does less than one thousandth of a percent of the processing that the other does. The conscious side harbors three forms of input — roughly, senses like sight, a degraded form of past consciousness (“memory”), and a punishment/reward dynamic that drives this sort of function. It has a single form of conscious processor (“thought”) that interprets inputs and constructs scenarios in the quest to promote its rewards and diminish its punishments. Then finally it contains a single form of non-thought output, or muscle operation. In its entirety I believe that my model would generally help mental and behavioral scientists, and I know of no similar model on the market. (Actually I wish that some prominent person could take credit for such a model, since I could then use that model to better promote the ideas that are most important to me.)

      I’d love to somehow tempt these rebel insiders to consider my own project. Wouldn’t it be great if they’d stop by here? I’ll now send them a quick email just in case.

      1. Eric,
        On falling into a trap, there are actually two possible traps here. One is the one you mentioned, of ignoring a phenomenon that is difficult to understand. But the other is the opposite, of holding onto a concept despite lack of evidence for any coherent objective version of that phenomenon.

        A good example of this second trap is the concept of vitalism, the idea that there is something special about living systems. It was once regarded as something biologists needed to explain. While never explicitly disproved, as unique chemical and electrical mechanisms were found for each biological capability, the concept faded more and more into irrelevancy. It’s now more used as poetic metaphor than scientific concept.

        Myself, I think a more tractable problem for scientists to pursue may be how introspection works, particularly what it has access to and what it doesn’t. But understanding that requires understanding imagination, which requires understanding attention, which requires understanding working memory, perception, memory, etc.

        To me, insisting that all of these mechanisms must be explained together is like insisting that the only way to understand how a car motor works is by understanding the whole (motorness?) rather than learning how each component works and contributes to the overall system.

    3. Well argued Mike! To explain myself I must now resort to using the first two of the four principles of philosophy that I’ve mentioned to you over the break. I believe that they’re solid enough to found a vibrant philosophy community that does have its own generally accepted positions, as well as straighten out a good bit of trouble that exists in science today.

      My first principle of philosophy concerns metaphysics, and it’s the position that things function causally (also known as naturalism). Though I have no way of knowing that this position is ultimately true, I do have a sort of antithesis to Pascal’s Wager to make on its behalf. To the extent that things do not function causally, it’s not possible for us to learn how they function anyway. So if we want to learn about things, there really is no option other than naturalism, even if it’s ultimately false.

      In science and philosophy today there are plenty of people who flout naturalism and yet are accepted just the same (and even in physics). I do not mean to technically change this system (hell, some on the other side may be right!). But I would have the naturalists develop their own further society as well. Then when one of the nonmembers seeks to use a less than naturalistic position to interest a member, we’d expect a diplomatic reply of, “Yes you may be right about that, but it does conflict with the first principle of our club. I encourage you to work further on that sort of thing, while we’ll work on this sort of thing”. Anyway “vitalism” does not pass the naturalism standard, and of course you know that I’m referring to a “consciousness” that completely emerges from the properties of nature.

      Then my second principle of philosophy, or first principle of epistemology, is that there are no true definitions, but rather only more and less useful ones. So in this four principle club it would be formally recognized that when members are reading about my consciousness model, my own stated definitions will apply. Similarly if you were to hammer out a position regarding introspection, any of us will be obligated to accept your definition in the attempt to assess your ideas themselves. There is no true “consciousness” or “introspection”, but potentially only useful ideas that we’d like to grasp.

      Regarding the car, I think we agree that specialists in certain areas can add to entire understandings. But of course car part specialists today do have general theory about how car motors work. As you’ve mentioned, neuroscientists do not claim any such mastery yet. On top of that issue, Jonas and Kording worry that a good deal of the data which is being produced in neuroscience today, is being seen as an end in itself rather than for its use to develop accessible theory. If true, just like psychology’s reproducibility crisis, there is a solution. Call out this problem in general, and change incentive structures to promote more theory driven research. (Of course if they are right here, I do hope that they haven’t already become alienated.)

  9. Eric, thanks for that link to the discussion of purpose between Pigliucci and Kaufman. Teleonomic purpose, as opposed to teleologic purpose, is exactly what I was referring to by “Natural purpose”. But you seem to denigrate the teleonomic by saying it “kinda seems similar” to the teleologic. Pigliucci, citing another author I can’t recall, says that the teleonomic property is a real thing. Natural functions are real functions. He then states that Aristotle’s “final cause” references that teleonomic purpose when referencing biological structures. I would suggest to you that the teleonomic is the basis of the teleologic, and thus the teleologic is simply meta-teleonomic. “Why did he climb the tree? To avoid the wolves. Why did he want to avoid the wolves? To survive.” To say it another way, we have brains that can generate goals because that ability increases our survival fitness.

    So to restate the original point, a computation is an information process that has at least a teleonomic purpose. Those processes with a teleologic purpose are a strict subset.

    *
    [As a kicker, I think such information processes, with at least teleonomic purposes, are the basis of consciousness. As soon as you have a (teleonomic) function, you have a functional description, which is the subjective description.]

    1. Sounds good James. Before I respond in earnest I’d like to get a better sense of the base teleonomy, or Natural purpose, from which teleology springs. You mentioned “survival” being the core. Could you break that term down further for me? Or if that’s not it, could you explain further through another term?

      1. Eric, I’m not sure what you’re looking for, but I’ll go with: the Natural purpose is that which drives natural selection, thus “survival of the fittest”, or just fitness. For what it’s worth, I think natural selection is driven by its effect on entropy, but I don’t think that matters for this discussion.

        *

    2. Happy 2018 all!

      Thanks for the clarification James. Are you familiar with Ed Gibney, who comments here now and then? I consider him to be quite a remarkable fellow. He’s part of a society in which evolution takes the prime focus of many academic pursuits, such as psychology, with him on the evolutionary philosophy side. I’ve had some interesting private discussions with him, and if you’ve not met, I’m sure that he’d like to know you. http://www.evphil.com/blog

      So with “Natural purpose” you’re talking about the stuff that drives evolution. In that sense I can see how teleonomy comes before teleological purpose — it actually creates it. Right. Still, that sort of purpose is not what’s generally meant by the term. Interestingly enough this is exactly what I was enquiring of Massimo about in the conversation that I originally linked to with you.

      In brief our conversation went: Me — Ordinary definitions for purpose require a subject of reference to be based upon. How can you say that evolution has an intrinsic purpose? Massimo — I didn’t say that it has an intrinsic purpose, but rather creates organisms like us that do (thus teleonomy here rather than actual teleology). Me — What separates organisms like us that can act purposefully, and organisms that can’t? Massimo — Consciousness.

      I was proud of myself for getting the “consciousness” reduction out of Massimo there, which he didn’t otherwise state, and I think wouldn’t have without pressure. Nevertheless he and I seem to be talking about the same sort of thing, while you’re talking about something that’s a bit different. Neither you nor Massimo and I can be “wrong” here, but rather are focusing upon separate topics. He and I are saying that given teleology, evolution may be considered teleonomically. You’ve instead deleted the observing subject to flip this dynamic, and thus teleology can emerge from teleonomy. Sure.

      Anyway with professional philosophers and I talking about the same thing here, I’d like to now present an effective way to reduce purpose beyond consciousness. If you or anyone else has any thoughts about my reduction, then please do provide them!

      If we consider a causally functioning universe without life, is it effective to say that there is any intrinsic purpose to the events that occur? Well when we look at the stars we might ascribe purpose, like creating other elements, but without such a subjective perspective of reference, it’s difficult to say that there is intrinsic purpose to stars or anything else.

      Would the formation of life itself bring purpose to existence? Well not in the “agent” sense that Massimo and I are referring to. What about when the Cambrian explosion happened, by which life developed central processors for their function? Well from there organisms simply became more “robotic”. Sure we can ascribe purpose to a vacuum-bot, though it’s still like the purpose of a fork, or EXtrinsic rather than INtrinsic.

      Purpose as we’re using the term requires an agent. But if modern philosophers are able to reduce teleology down to consciousness, why not go further? To me the second half of the Pigliucci / Kaufman discussion gets to why they can’t, or at least don’t want to. In this second half they demonstrated how utterly ensconced philosophy happens to be in the notion of morality, or the rightness and wrongness of behavior.

      I believe that purpose can effectively be reduced down to the product of the non-conscious mind that produces punishing and rewarding stimuli for the conscious mind to experience. This input is theoretically the strangest stuff in the universe — pure value which thus provides purpose to the conscious form of computer. Throughout the universe there shouldn’t be anything like it.

      If so however, then why isn’t this acknowledged? Well I can’t say entirely of course, but where would such an understanding leave our longstanding morality paradigm? If value were acknowledged as nothing more than what feels good/bad, then the notions of rightness and wrongness by which we judge each other, or the social tool by which we impose will over individuals and societies alike, would be seen as a hollow byproduct of something that actually IS tangible. As the discussion of Pigliucci and Kaufman demonstrates, enlightening the world about the social tool of morality will be no simple task.

      Notice, however, that if all of reality is causally connected, then humanity should not be able to conveniently leave out certain elements of it without impeding other lines of inquiry. I believe that value is such an important element of reality that our mental and behavioral sciences have suffered heavily in this void of understanding. Thus my quest to help philosophy and these soft sciences overcome this tremendous obstacle.

  10. It is a very relevant question to ask and contemplate, considering the depth of the topic. Most neuromorphic circuits, and neuromorphic computing generally, claim to mimic human brain functionality. But as you said, our understanding is still too limited to be fully explored or exploited to achieve much beyond capturing some user choices.

  11. Comparing CPU damage with brain damage… what an interesting idea!

    “Jonas argues that we can’t be sure that the brain isn’t implementing some vast software layer.”

    It does seem absurd on the face of it. Although it might depend on what we call brain software. Is learning a skill an installation of new software? I would argue it is.

    In that sense, our brain is a general-purpose device, one with serious hardware capability. But I agree with the point here that brains, as such, are all hardware, not a hardware-plus-OS combo.

    In fact, I’ve come to believe there’s a big category error in the common idea that “a brain is like a computer.” I think a brain is nothing at all like (what we usually mean by) a computer, and it’s a mistake to compare them.

    A key issue I see with the comparison here is that a CPU is designed for 100% accuracy with as little entropy as possible. Further, it is strictly binary. Brains are analog, noisy, and massively parallel. I believe they operate under the principle of least free energy.

    Individual neurons die all the time with little obvious effect. (I did once forget Cameron Diaz’s name for nine months. I finally trained new neurons.) Individual transistor failures in a CPU are generally catastrophic. They may not have affected game play, but they will eventually affect something. More to the point, a single transistor failure in a CPU makes the difference between correct and incorrect operation. Period.

    I do agree with you that what should be compared are general-purpose (GP) effects. Impairing the 6502 affects everything it does. Comparing games would require damaging parts of the game binary, and in that case, lots of changes would be benign: changes to the game’s visuals or sound, or a corrupted menu. It would be interesting to see what kind of, or how many, changes seriously damage the game.
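
    It might look something like this sketch (everything here is a hypothetical stand-in; a real version would load each lesioned binary into an emulator and score actual gameplay):

    ```python
    # Sketch of a "software lesion" experiment: flip one bit at a time in a
    # game binary and test the result. The "binary" and the test harness
    # below are fakes for illustration only.
    import random

    random.seed(0)
    rom = bytes(random.randrange(256) for _ in range(1024))  # stand-in game binary

    def flip_bit(image: bytes, byte_i: int, bit_i: int) -> bytes:
        """Return a copy of the binary with one bit inverted (a one-bit lesion)."""
        damaged = bytearray(image)
        damaged[byte_i] ^= 1 << bit_i
        return bytes(damaged)

    def game_still_plays(image: bytes) -> bool:
        """Placeholder harness; a real one would run the lesioned binary."""
        return random.random() < 0.5  # fake outcome for illustration

    benign = sum(game_still_plays(flip_bit(rom, i, 0)) for i in range(len(rom)))
    print(f"{benign} of {len(rom)} single-bit lesions left the game playable")
    ```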

    I do think there’s value in studying and understanding everything, so the study of neural correlates is a-okay with me.

    (And, ah, the 6502. I remember it fondly. Wrote a lot of assembly code against it for my C64 and C128 machines. It didn’t stand up to my beloved Z80, but it was a nice chip.)

  12. “In general, to me, any process that can be modeled computationally, can be considered to be performing that computation.”

    That, from one of your comments above, caught my eye. I think I don’t agree, although it might depend on how we define things and exactly what’s being said.

    It might be helpful to distinguish between a calculation and a computation. A better word for the former, perhaps, is evaluation. The key distinction is that a computation involves steps with intermediate results, whereas an evaluation has no steps.

    An analog “computer” implemented by a network of resistors can evaluate certain math equations. The network “finds” the answer instantly due to its physical characteristics. Or consider how the Earth “calculates” its orbit as a result of the interplay between mass and gravity. Or the parabola of a thrown object in a gravity field.

    Compare that to situations requiring steps, which cannot be evaluated in one go. For instance, a Taylor series that generates a sine wave requires iterations for the wave to take shape.
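
    To make that stepwise character concrete, here is a toy sketch: each pass of the loop adds one Taylor term, leaving an intermediate result at every step.

    ```python
    import math

    def taylor_sin(x: float, terms: int = 10) -> float:
        """Approximate sin(x) by accumulating Taylor terms one step at a time."""
        total, term = 0.0, x
        for n in range(terms):
            total += term  # intermediate result after step n
            # next term: multiply by -x^2 / ((2n+2)(2n+3))
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
        return total

    print(taylor_sin(1.0), math.sin(1.0))  # approximation improves with more terms
    ```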

    I tend not to see least-free-energy physical evaluations as computation. For me, computation is defined per a Turing Machine.

    There are certain aspects of photosynthesis and biology that strike me as computational because they involve steps and have intermediate results. The way DNA unwinds, spawns an RNA copy, which then goes off to make protein, is a good example. (And photosynthesis is downright electronic.)
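
    As a toy model of the DNA example’s stepwise flavor (with the codon table truncated to the three entries this example needs), the mRNA strand is an explicit intermediate result:

    ```python
    # DNA -> mRNA -> protein as steps with an intermediate result.
    COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}
    CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly"}  # tiny subset

    dna = "TACAAACCG"                                 # template strand
    mrna = "".join(COMPLEMENT[base] for base in dna)  # step 1: intermediate copy
    protein = [CODONS.get(mrna[i:i + 3], "?")         # step 2: translate codons
               for i in range(0, len(mrna), 3)]
    print(mrna, protein)  # AUGUUUGGC ['Met', 'Phe', 'Gly']
    ```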

    What really caught my eye though is the idea that modeling a process in a computer means that the modeled process is also computational. I just don’t think that follows.

    Although it does depend on how you define computation.

    1. The 6502 was where I cut my assembler teeth as well, but mine was an Atari 400. I think that class of machines was the last where someone had a chance to learn the entire system. I learned the Atari 400 in and out, to the point where I had the addressable hardware registers memorized. I remember the jump to my first IBM compatible PC, and the stark realization that I would never know it as thoroughly as I had my old 8-bit computer.

      I agree a lot depends on how we define “computation”. My definition is broader than yours. To me, when I read about how a neuron works, summing up its inputs until a threshold is reached and then firing off signals to its downstream peers, that seems inherently like gated signalling, inherently computational. If you read books on neuroscience, they commonly refer to neural computation and circuits.
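
      To make that concrete, something like this toy unit is what I have in mind (the weights and threshold are made up):

      ```python
      # Toy "gated signalling" neuron: sum weighted inputs, fire past a threshold.
      def neuron(inputs, weights, threshold=1.0):
          """Return 1 (fire) if the weighted input sum reaches the threshold."""
          activation = sum(i * w for i, w in zip(inputs, weights))
          return 1 if activation >= threshold else 0

      # With these made-up weights the unit behaves like an AND gate:
      print(neuron([1, 1], [0.6, 0.6]))  # 1 (fires)
      print(neuron([1, 0], [0.6, 0.6]))  # 0 (stays silent)
      ```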

      But I definitely agree that the brain is very different from a commercial computer.

      1. “My definition is broader than yours.”

        Yeah, we’re just putting different labels on the same thing. It amounts to two concepts (eval vs. comp) versus seeing them as functionally the same.

        “…inherently like gated signalling, inherently computational.”

        That’s a great example for illuminating the different views! An electronic gate works because of how its electrical components work. Inputs immediately generate the output through the logic. To me that’s a flow process, like water seeking a level.

        I’ve written code to simulate logic gates, and the output is the result of a series of steps that examine inputs and logically generate the output.

        The end result is identical in both cases, hence the inclination to count the first case as equally a computation. I just like additional terminology that lets me distinguish between a process that flows and one that requires steps.

        (Another way to view it is expression versus algorithm. Expressions have an instantaneous value, which an appropriate physical mechanism can output. Algorithms require steps, have intermediate values, etc.)
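
        A toy contrast in code, if it helps: the same XOR result as a one-shot expression versus explicit steps with intermediate values.

        ```python
        # Expression view: one instantaneous evaluation, no intermediate state.
        def xor_expression(a: int, b: int) -> int:
            return a ^ b

        # Algorithm view: explicit steps with intermediate results, the way a
        # gate-level simulation walks the logic.
        def xor_stepwise(a: int, b: int) -> int:
            not_a = 1 - a            # step 1
            not_b = 1 - b            # step 2
            left = a * not_b         # step 3: a AND (NOT b)
            right = not_a * b        # step 4: (NOT a) AND b
            return max(left, right)  # step 5: OR the two

        print(xor_expression(1, 0), xor_stepwise(1, 0))  # both print 1
        ```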

        But it’s all a terminology thing. I know what you mean; you know what I mean; nothing is lost!

        “I learned the Atari 400 in and out, to the point where I had the addressable hardware registers memorized.”

        Ha, I remember! For me it was the Commodore machines, but same deal. The info was readily available, and it included everything!

        I had an advantage with the IBM PC. My company’s IT dept serviced them internally, so I had access to manuals, including BIOS dumps and schematics. I wrote a lot of 8086 assembler! (Even a class library! 😀 )

        For me it was the 386 that did me in. By then system architecture was getting complex and more proprietary, and around that time CPUs got too complex for anyone other than a real expert to write assembly. And who’d want to anymore!

        I’m quite convinced programmers with assembly background are better programmers, because they have a strong sense of what’s actually happening in the machine.

        1. I did some light programming with 8086 and read about the 386 instruction set, but never had a job that actually required it, and writing a game for those systems, by the time I was looking at them, was a much vaster initiative than when I was playing with the Atari 400.

          I used to advise C programmers to at least read a book on assembly. Programmers coming from higher level languages always seemed to struggle with pointers (particularly in the 16 bit segmented address space days), arrays, bitwise operations, etc, while those with assembly experience only had to learn C’s idioms with those things.

          These days I’m more likely to ask why they think they need C or C++ for whatever they’re trying to accomplish. Programming overall is becoming much more of a specialty skill in IT than it used to be, and low level programming a lost art, at least in business IT. But even scientific programmers seem more inclined to use Python rather than C now.

          1. I would definitely not want to have to implement a modern game in assembly. Yikes!

            Very true about pointers. For assembly-experienced programmers, it’s a basic aspect of how CPUs work. But I’ve seen programmers who mostly just had JS and Java be so lost with that stuff. OTOH, as you say, not much need for it anymore. I can’t remember the last time I had to implement a linked list. Probably the last time I used C, which is decades ago!

            I wonder how many working programmers today even understand the concepts: ‘pass by value’ versus ‘pass by reference’ and what those imply.

            By the time I retired, our IT division was almost entirely Java, plus there was a big move towards off-the-rack software versus creating it in-house. A big threat to what I did for a living, and part of the reason I retired earlier than I really intended.

            I do love Python! I completely agree with the xkcd cartoon about it.

          2. There is value in understanding the difference between pass-by-ref and pass-by-value in Java and C#. I’ve seen some programmers get burned by it, such as forgetting when objects are passed by reference and that any modifications made by the called code will have effects afterward.
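
            Python has the same gotcha in spirit (strictly speaking it passes object references by value): mutating a passed-in object is visible to the caller afterward, while rebinding a name is not.

            ```python
            def add_bonus(scores):
                scores.append(100)   # mutates the caller's list in place

            def double_it(n):
                n = n * 2            # rebinds a local name only; caller unaffected

            player_scores = [10, 20]
            add_bonus(player_scores)
            print(player_scores)     # [10, 20, 100] -- the callee's change leaked out

            x = 5
            double_it(x)
            print(x)                 # still 5
            ```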

            But yeah, even Java, C#, and PHP coding is becoming increasingly a specialized thing. As you noted, local IT shops are moving en masse to canned solutions, cloud ones in particular. For a while, there was still a lot of programming in integrations, but even that is being increasingly handled by specialized products like Informatica or Dell Boomi.

            IT is changing a lot, making me somewhat glad I’m near retirement myself.
