Classic and connectionist computationalism

Spurred by a couple of recent conversations, I’ve been thinking about computation in the brain.

It was accelerated this week by the news that the connectome of the fly brain is complete, a mapping of its 140,000 neurons and 55 million synapses. It’s a big improvement over the 302 neurons of the C. elegans worm, which were mapped decades ago. Apparently there are already new computational models built on the data, including models of a fly’s taste response to sugar and of its vision.

It raises the question of what kind of paradigm shifts we might eventually see from these mappings, and how they might affect current debates, such as the one between computationalist and dynamical systems views, or those among the varieties of computationalism.

There are many variations of the computational theory of mind. For this post, I’m going to group them into three broad categories.

The first is not held by serious theorists, but is often the way lay people understand it, and the version typically attacked by critics. It’s the idea that the brain works like a general purpose programmable computer. I have to admit I once thought this. I didn’t blink when Agent Smith downloaded a copy of himself from the Matrix into a human’s brain, when Neo had skills instantly downloaded into his, or at other similar sci-fi scenarios.

But this view doesn’t survive a casual understanding of neuroscience. A person’s personality is thought to be encoded in hundreds of trillions of synapses, the connections between their neurons. Unlike random access memory or disk storage in a computer, there’s no mechanism to update synapses en masse, except by neural activity over time. So Agent Smith can’t just copy himself in, at least not to the organic parts of the human’s brain.

The second type of computationalism is often called “Turing Machine” or “machine state” functionalism. It’s the idea that the brain computes using well-defined operations. It’s often coupled with representationalism, Mentalese (“language of thought”), and other concepts. It’s the set of assumptions that underlay much of cognitive science in the late 1900s. It resembles the first version above, but with an understanding that brain state transitions are more stochastic than deterministic, no expectation that we’re talking about something programmable, and other adjustments due to biological realities.

Steven Pinker made a convincing case for this version of computationalism in his book How the Mind Works. But even on my initial reading, the idea of Mentalese felt dubious, like too much of a projection of how we might engineer a brain rather than what evolution actually did.

The second version is now considered the classic computational theory of mind. In the 1980s, it started to be challenged by the third version, connectionism, the idea of a neural network, where the processing happens in a massively parallel and distributed fashion, and the nodes have connections whose strengths vary continuously rather than discretely.

This recognizes the analog nature of how brains work. The on/off nature of neural action potentials is often cited as being discrete in a digital fashion. But the frequency of firing carries informational significance, and it varies continuously, along with the gradual build-up triggered by input synapses that leads to the cell firing.
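
To make the analog point concrete, here’s a minimal sketch in Python of a crude integrate-and-fire style unit. It’s a toy illustration, not a calibrated biological model, and the parameters are arbitrary. Each spike is all-or-nothing, but the firing rate varies continuously with the input:

```python
def firing_rate(input_current, threshold=1.0, leak=0.1, dt=0.001, duration=10.0):
    """Toy integrate-and-fire unit: the potential builds up gradually
    from the input, and the cell fires when it crosses threshold."""
    potential, spikes = 0.0, 0
    for _ in range(int(duration / dt)):
        potential += (input_current - leak * potential) * dt  # gradual build-up
        if potential >= threshold:  # all-or-nothing action potential
            spikes += 1
            potential = 0.0  # reset after firing
    return spikes / duration  # firing frequency in spikes per second

# Discrete spikes, but a continuously varying rate carries the signal.
for current in (1.2, 1.5, 2.0, 3.0):
    print(current, firing_rate(current))
```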

Importantly, this paradigm has no built-in symbols, just neural processing through constantly updated connections. For decades, there were debates between “implementationist” and “eliminative” connectionists, about whether neural networks were just modeling at a lower level of description what the classic theories modeled at a higher level, or whether the idea of symbolic computation in the brain was a mistaken notion ripe for elimination.

For a long time, buttressed by Pinker’s arguments, I was in the implementationist camp. But I realized this week that it’s been a long time since I’ve felt comfortable with symbolic descriptions of brain processes. Somewhere along the line, when reading about neuroscience, or about the recent progress with artificial neural networks, my confidence in the symbolic paradigm eroded. As I noted in a post last year, there are major differences between how a neural network works vs how the device you’re using to read this works, differences symbolic approaches often overlook.

Of course, this is still computation. Artificial neural networks have historically been implemented on digital computers. (Neuromorphic computing may change that.) And the processors in these computers can be thought of as networks of logic gates, a technology actually inspired by McCulloch and Pitts’s 1943 paper, the first on neural computation. So the two still share the same type of dynamics. In principle, any neural process can be implemented in a Turing Machine type system, and vice versa.
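
For a sense of how the two pictures interconvert, here’s a minimal sketch of the kind of threshold unit McCulloch and Pitts described, configured as logic gates. The weights and thresholds are the standard textbook choices, not anything taken from their paper:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: fires (1) if and only if the weighted
    sum of its binary inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# The same threshold element wired as familiar logic gates.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a: mp_neuron([a], [-1], 0)

# NAND (NOT of AND) is universal for Boolean logic, which is one way to
# see why networks of such units and Turing-style machines can emulate
# each other in principle.
for a in (0, 1):
    print("NOT", a, "->", NOT(a))
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```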

And I’m not necessarily opposed to using symbols to understand neural processing. But I now feel like they must be used cautiously, with an eye kept on the underlying implementation details. Representations, for instance, have to be understood, not as contiguous images in the brain, but as distributed neural firing patterns that evolve with time. We have to be on guard not to slip into thinking too much in the ways technological computers work.

It’s worth noting that while connectionist networks with their artificial neurons are much more biologically plausible than traditional computational models, they’re still abstract simplifications. Real biological neurons remain much more complex. They probably always will be. The question is which way of abstracting them provides insights and progress. As computational neuroscience continues to develop, the answers will likely evolve.

A lot of people in the embodied cognition camp think the evolution of those answers will lead us away from computation. Embodied cognition is the idea that mental processes can only be understood in relation to the brain being embedded in a body and environment, and the enactive engagement between them. It extends the mind into the body and environment. Some in this camp think that a dynamical systems view will ultimately prove a better model for the brain.

The embodied movement seems to get a lot right. It does make sense to view cognition as integrated with the brain’s environment. But the more radical factions in this camp, I think, go overboard. The dynamical view might provide insights in some areas, such as muscle coordination, but it’s hard to see how it scales up into full cognition. Every computational system is also a dynamical one. The question is what level of description is more useful. And the desire from many in this camp to eliminate representations as a concept, ironically, has a behaviorist feel to it.

The old school behaviorists seemed motivated to pull psychology away from the freewheeling introspective methods of their predecessors, and focus on what could be measured. They went overboard in denying that mental states had explanatory value. Eventually it was realized that if computers can have internal states that explain their output, there was no reason to think minds couldn’t either, which allowed psychology to break out of this mindset.

The most radical views in the embodied camp feel like a reaction against classic computationalism, and to some extent the lay person’s understanding of it. But they seem to risk falling back into a paradigm that denied mental states, or at least mental content.

So while I think a modest understanding of the embodied, embedded, enactive, and extended paradigm can provide insights on the types of computations that are happening, I don’t see it as a complete alternative to computation, at least not yet.

Which means, for now, I remain a computational functionalist, albeit one now more in the connectionist camp than I had realized.

What do you think? Are there reasons to still favor the classic computational approaches? Does the embodied movement challenge computationalism more than I’m thinking? Or should we be looking at some completely different paradigm?

39 thoughts on “Classic and connectionist computationalism”

  1. There is a relay race relationship between theory and practice. For a time data acquisition outstrips theory, then the baton is passed and theory outstrips data acquisition. When it works it is beautiful, with each guiding the other. Where it struggles is when there is a paucity of both.

    Now, you may say that there is a surplus of theory right now, but I would argue that most of that is not theory, but conjecture, proto-theory if you will. So, I suspect we will be stumbling along for quite some time until there is a breakthrough in either arena. Think of the breakthroughs that brain scanners, fMRI, etc., provided. A similar experimental breakthrough, or a theoretical one, could lead to huge increases in our understanding.

    But, we ain’t there yet.

    1. The relay race is a good analogy. It reminds me of the situation in fundamental physics, where theory has largely gone far beyond data. (Not for lack of trying on the experimentalists’ part.) It’s a different situation from what existed throughout much of the 1900s.

      The brain is in a much better place today. For centuries, theories about the mind had to be speculative. That started to change in the late 1800s and into the 1900s, but very slowly. Here I think the behaviorists get a bad rap. What they were trying to do made sense: focus on what could be measured. They just got too carried away with it, probably due to logical positivism.

      But it’s true that a lot of old ideas continue to hang around. Elkhonon Goldberg remarked that “old gods die hard”, a quip he makes in the one paragraph in his frontal lobes book that bothers to mention consciousness.

      1. Re “It reminds me of the situation in fundamental physics, where theory has largely gone far beyond data.”

        This was in my mind, too. And there is a trap here. Complex theories require a lot of effort to construct and to learn and to use, and so they build up a lot of momentum that resists their replacement. And all modern theories are complex. All of the simple ones were addressed long ago, so too many theories today are dead horses, still being whipped by riders. This is what Creationists think the theory of evolution is, and they are wrong, but in cosmology? The BBT has so many patches it looks like a coat of many colors at this point. This is a sign it is soon to fail, but the resistance is high as so many have devoted so much effort into propping it up.

        For example, you cannot call up a CMBR map without it being referred to as a map of the “early universe” when all it is is a map of the temperature of space. Calling it evidence of the “leftover radiation from the BB” is an interpretation (and there are others).

        The same seems to be the case in elementary particle physics. An attempt to create a periodic table of elementary particles resulted in myriad particles being “discovered” which are far from elementary.

        1. I’m not an expert in cosmology, but I think it’s the combination of evidence (red shift, CMBR, abundance of light elements, large scale structures, etc.) that keeps most cosmologists convinced that the BBT is true, at least in its broadest sense. You can find other explanations for each individually, but the BBT model seems to explain them all. Which isn’t to say there aren’t problems, but I think most scientists would need a compelling alternative to jump ship.

          That’s actually my take for neural computation. I’m open to the possibility that there’s a better way to describe what’s going on, but I need that alternative developed. Dynamical models just seem to give up on a higher level description, although if someone could come up with a compelling version, I think it would get a lot of consideration.

          1. Re “I’m not an expert in cosmology, but I think it’s the combination of evidence (red shift, CMBR, abundance of light elements, large scale structures, etc.) that keeps most cosmologists convinced that the BBT is true, at least in its broadest sense. You can find other explanations for each individually, but the BBT model seems to explain them all.”

            This is the party line I am afraid and it is not at all true. The BBT predicted the existence of the CMBR four times and four times it was wrong and then the CMBR was discovered. It was automatically claimed by the BBT theorists as “proof” (it is not) or “more proof” (it most definitely is not) for the BBT but the BBT predicted it . . . incorrectly.

            What ordinary people don’t see is the wrangling in the background over all of this. Is the BBT the “best explanation” we have currently? I don’t think so. Red shifts were misinterpreted from the get go. Hubble himself refuted the “Doppler effect, expanding universe” explanation. There are dozens of viable explanations for the red shifts we see and not all of them lead to an expanding universe. Without an expanding universe, the BBT evaporates, leaving nothing behind.

            I have commented on all of the ad hoc patches to the BBT which have no support: cosmic inflation, the expansion’s trigger, dark energy, dark matter, etc. If there is no such expansion, coming up with an explanation for the expansion is likely to lead to inanities, no?

            Did you know that space or “space-time” is expanding, according to modern theories . . . but only between galaxies, not within? If the universe expanded for hundreds of thousands of years, away from a single point of origin, before atoms came into existence, how did all of this organization occur? All of that energy/particle soup should have moved away farther and farther from the other parts before attractive forces could have pulled it together, and those forces diminish with distance (let’s see, 300,000 years at close to the speed of light means many, many light years between the atoms now forming). All of my archery students know that if an arrow is shot offline, it will just get farther to one side or the other as it goes (which is why targets are larger for farther distances).

            People who have expended a significant part of their careers on a particular theory are not going to publish articles on its weakness, certainly not on its holes (things it either doesn’t explain or explains incorrectly).

          2. Actually, as I understand it, space is expanding everywhere. Even within the solar system, even between the elementary particles in our bodies. It’s just that the expansion on small scales is infinitesimal, and the various forces like electromagnetism dominate, keeping things together. On the scale of the solar system, galaxy, and even galactic clusters, gravity dominates enough to keep things from coming apart. But that changes as the distances increase, where the cumulative effects of the expansion add up.

            The usual explanation is large scale structures formed from slight variances that existed early on, with gravity doing the rest. Where the variances might have come from is an interesting question. The usual guesses are quantum fluctuations, but who knows.

            It’s true, people usually won’t publish papers questioning theories they championed, but someone in the field often will. It’s a great way for a young scientist to make their name, if they can make the case either empirically, mathematically, or at least logically. It’s why Max Planck quipped that science often progresses one funeral at a time.

  2. [this topic is very important for my project, Psychule theory, so here’s my take]

    I’ve seen a lot of hand-wringing over the meaning of “computation” lately, so it’s an area of discussion where it’s important to be very clear of your definitions going in. My current definition of computation is “information processing for a purpose”. Now I have to define “information processing”, but that’s not hard. Every physical interaction processes information, specifically, mutual information. Thus, computation is a physical interaction with a (mutual) informational purpose. (FWIW, there is a recent field of study working with the processing of mutual information referred to as Partial Information Decomposition.)
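
    To keep “mutual information” concrete, here’s a minimal sketch of the standard Shannon quantity computed over a toy joint distribution (the function and the example numbers are purely illustrative, not anything specific to Partial Information Decomposition):

    ```python
    import math

    def mutual_information(joint):
        """I(X;Y) in bits from a joint probability table joint[x][y]:
        sum of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
        px = [sum(row) for row in joint]        # marginal p(x)
        py = [sum(col) for col in zip(*joint)]  # marginal p(y)
        return sum(pxy * math.log2(pxy / (px[x] * py[y]))
                   for x, row in enumerate(joint)
                   for y, pxy in enumerate(row) if pxy > 0)

    # Perfectly correlated variables (a faithful "copy"): 1 bit shared.
    print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # -> 1.0
    # Independent variables: nothing shared.
    print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # -> 0.0
    ```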

    Under this definition, the most significant computation is pattern recognition. Strictly speaking, any physical interaction can be classified as a pattern recognition, but here we’re talking pattern recognition for a purpose. The neuron is a pattern recognizer par excellence. Also, patterns can be hierarchical, giving patterns of patterns. (Unitrackers are essentially isolatable groups of neurons which recognize multiple inputs and associate them with a single output, or not.) Also, a single group of neurons can take a set of input patterns (from sensory inputs as well as from unitrackers) and produce output that is unique to the set of inputs. See semantic pointers. The outputs from these semantic pointers can then be used as inputs for unitrackers higher up in a hierarchy.

    All of this seems compatible with pretty much all the theories, including the embodied approach. Different groups are simply focusing at different levels. The 4E groups are focusing on the high levels of the recognition hierarchy, and the concept of mutual information extends their considerations out into the environment. This approach is related to those talking about downward or mental causation as well.

    So the bottom line is that both the high end (4E) and the low end (computational) are correct. Ultimately, everything going on in the brain can be described in computational terms (COPY, AND, OR, NOT), and so potentially runnable on a Turing machine (to as close an approximation as desired). The question is how do you want to use your description.

    *

    1. Interesting take James, as always.

      Information processing with a purpose is an interesting definition. It might raise the question of how we define “purpose”, a topic I think we’ve covered before. It can be as simple as an attractor state, or as complex as a modeled outcome. But I imagine you’re closer to the first one.

      On pattern recognition in all physical interactions, I’m not sure. If a cue ball hits an eight ball, what pattern recognition is happening? Certainly something causal is happening, but “pattern recognition” strikes me as a higher level concept. Or maybe I should ask, what’s your definition of pattern recognition?

      On your basic operations, I can understand how AND, OR, or NOT translate into neural processing (albeit messily). Even COPY, if we’re talking about something like an electric synapse, where the signal is going through regardless. But I wonder how you see something more complex being copied in a neural network. It seems like it’s more one cluster of neurons exciting another cluster. There may be a resemblance in the patterns excited, but there doesn’t have to be. We’d probably engineer it that way to make it easier for us to track what’s happening. It could work out that way in some cases in natural systems, like the retinal ganglion pattern being (mostly) preserved through the LGN to V1, but I don’t see evolution having that constraint in general. Even our own networks seem to grow in directions that make it dodgy.

      I tend toward theoretical pluralism myself, but in my case I need to be able to do the reconciliation in my head to be ecumenical in the way you’re describing. I can see it with moderate 4E and connectionism, but things become less certain with old school computationalism. And when we throw in something like downward causation, my skeptic meter surges. But I do agree a lot has to do with what level of description we want to work with.

      1. I was tempted to expand on “purpose” in my reply, but decided you’d pick up on it. It’s not that I’m closer to one (attractor state) or the other (modeled outcome). It’s that I think both are equally valid sources of purpose, so the attractor state counts for all intents and purposes (heh).

        Admittedly describing the cue ball interaction as pattern recognition is a stretch, mostly because “recognition” kinda implies cognition, which requires purpose. But to play devil’s advocate, the 8 ball recognizes a specific input (mass with certain shape and X velocity) and produces a specific response, which has mutual information with respect to the input. Whether or not you want that to count as pattern recognition is not a big deal.

        re: AND/OR/NOT/COPY as information processing, I am specifically referring to mutual information. So in the pool ball example, a cue ball moving at a certain velocity has mutual information with respect to the cue stick striking it. Suppose the cue ball hits the 8 straight on and transfers all that velocity to the 8. The 8 ball now has (approx.) the same mutual information w/ respect to the cue stick, so that counts as a COPY of that information. Note: the 8 ball has all kinds of extra mutual information as well, such as w/ respect to the cue ball. So whether the operation counts as a COPY depends on the purpose (yes, same purpose again) of the interpreter.

        In a complex neural network each individual operation would be too difficult to track (although possible in theory). That’s part of the reason unitrackers become important, because they distill the complexity down to one pattern (say, recognizing Bob), and then you can follow operations on that pattern (say, saying “Hi Bob!”). I’ll note again that this kinda stuff is the subject of Partial Information Decomposition, which concerns itself with determining when inputs are redundant (same mut. info.), exclusive (completely separate mut. info.), or synergistic (reinforcing for a specific pattern).

        *

        1. On COPY and mutual information, right. For a discrete digital system, information can be copied reliably without loss. So a CPU can pull a value from a memory location into one of its registers, manipulate it, and then copy the result back to that memory location. The result ends up looking a lot like a series of billiard balls hit in a linear sequence.

          But in a neural network, if a neuron gets excited because it detects a corner (maybe as part of a unitracker), and fires, it’s more like the cue ball hitting a rack of balls, with information getting distributed among them. It’s all mutual information, but much more difficult to describe as any kind of discrete operation. We can say that the state of other neurons, maybe in different regions, changes due to the first neuron detecting the corner, but saying it sends the corner finding to them has to be said in a very metaphorical manner, and we have to remember how much we’re speaking figuratively.

          I may have to look up that Partial Information Decomposition. It sounds interesting. Thanks!

          1. [trying to give you my understanding, so …]

            I’m trying to figure out your understanding of what I wrote. In your cue ball/rack analogy I’m not sure which is the activated unitracker neuron, the cue ball or the first ball in the rack (or the cue stick?).

            But given the activated corner detector which fires, the significant mutual information is in the neurotransmitters. Presence of the neurotransmitters carries the mutual info of that unitracker. What a recipient neuron does with that information depends on the recipient neuron and what other unitrackers (or whatever) it’s receiving input from. And other recipient neurons might do something different with that info. Ultimately you might get to a “walks like a duck” unitracker, and simultaneously a “looks like a duck” unitracker, and output from just those two might go to a third unitracker which pretty much does an AND, making it (the third) a “duck” unitracker. The output from that unitracker, i.e., the neurotransmitters from that one unitracker will have significant mutual info w/ ducks, and anything that looks like ducks, and anything that walks like ducks, and somewhat less significant with anything with corners.

            Does this help any?

          2. Sorry, I didn’t imagine the unitracker anywhere in the ball/rack analogy. I just added the reference to the unitracker to let you know I hadn’t forgotten about them.

            Overall, I think we’re saying pretty much the same thing. I didn’t see anything in your second paragraph to disagree with. My main point is to be careful about invoking traditional computing concepts in neural networks, but it seems like you have a handle on it. I would just note that not everyone you talk with may understand what level you’re referring to when you discuss those operations.

  3. (WP says the first submission got eaten, so here’s another.)

    Mike, I wonder if your recent aversion to the symbolic nature of brain function alters your stance on my thumb pain thought experiment? Let’s take a quick run through.

    When your thumb gets whacked, each of us has presumed that associated neural information about this event is conveyed to your brain. So theoretically that information could be encoded with marks on paper to some arbitrary degree of fidelity. Furthermore each of us has presumed that your brain algorithmically processes that information so that it can react appropriately. So theoretically the encoded paper could be scanned into a vast supercomputer that processes this information well enough to print out more paper encoded with marks on it that display your associated brain processing to some arbitrary degree of fidelity.

    Of course it is my belief that for actual thumb pain to result, one more step is required. Here your processed brain information should need to inform an appropriate variety of physics, and probably an electromagnetic field by means of the right sort of neural firing. Or in the case of the processed marked paper, this would need to be scanned into another computer that goes on to create the same sort of electromagnetic field which your brain produced to exist as you the experiencer of a whacked thumb. In the past however you’ve said there is no need for my extra step. This is to say that you’ve posited that your experience of thumb pain would happen when your brain processes the information sent from your thumb. Thus if the right marks on paper were algorithmically converted to the right other marks on paper, then you’ve believed some unknown entity here would essentially experience what you do when your thumb gets whacked. Is this still your position? Or do you now doubt that there’s any potential for marks on paper to symbolically represent neural function? Or perhaps another answer?

    1. Hey Eric,

      (Weird on WP eating the comment. It didn’t land in spam or pending, so not sure what might have happened.)

      Yeah, sorry, no real change here. In retrospect, I’ve been in the connectionist camp for some time, possibly as far back as reading Damasio, as well as Feinberg and Mallatt, so 2016ish, maybe.

      I will add my usual stipulation that I’m not saying the marked papers by themselves feel pain, but the whole mechanism involving all the transformations. Under normal circumstances, it would be neural processing interacting with a body. It’s possible for people to have chronic pain that only lives in the brain, but the reason they can feel it is the evolutionary history of their brain and body together. That implies that your mechanism would have to simulate all those interactions. But just like Searle’s thing, or Ned Block’s, it’s possible in principle, and if it were done, I think there would be pain present.

      On electromagnetic fields, I’m currently making my way through Ed Yong’s book on animal perception. He points out that there is a part of our nervous system which interacts with electromagnetic radiation, the retina, which it does with specific proteins in the photoreceptor cells that act as antennae. And apparently a number of animals have an electroreceptive sense, a way to detect electrical fields and perceive things about them, which they do with specific sense organs. Interesting stuff.

      1. Good to hear Mike! I was wondering if I might need to assess your stance here further for a fundamental change in perspective, or even to adjust my thought experiment somewhat. Furthermore I can still claim that I’ve been able to reduce functional computationalism with more thrift than anyone else that I know of (and certainly the SEP!). So what exactly does that reduction happen to be? It’s the belief that consciousness exists by means of the processing of the right information into the right other information in itself, and so any such resulting information needn’t inform some sort of yet to be determined consciousness physics. And though you dispute the following, I consider the position supernatural — I believe that causality mandates that information only exist as such to the extent that it informs something appropriate. Thus a DVD could be informational in the sense that it informs a table leg as a shim, or a DVD player might unlock what’s actually encoded. In a causal world however I don’t think it’s productive to say that information can exist as such without something appropriate being informed by it.

        On Ed Yong’s book, I presume he says more than that some organisms have eyes and explains how eyes work? So instead there’s some interesting protein dynamics to consider, and so on, and perhaps even some non-eye detection mechanisms as well? Sounds like something you’ll let us know about with a dedicated post!

        1. Eric,

          Nope, no radical change in view. Just maybe a more accurate description of it.

          I do dispute the supernatural thing. For me, the question is what the causal chain is between sensory stimuli and motor output. I’m open to any additions based on evidence or logic. But I don’t accept the need for an ineffable epiphenomenal essence in addition to that. You’re clear that your addition is meant to be causal, but I haven’t seen anything to convince me it’s necessary, at least not yet.

          On the Yong book, he covers the basics on eyesight, but doesn’t get too deep, only enough to explain the variations between us and other animals. We’ll see on a post. I’ll have to feel like there’s something I can add besides the write ups people can google on it.

  4. “Ineffable” is certainly off the table for me. And regarding any evolved element of our function such as consciousness (like thumb pain), “epiphenomenal” is off the table too. Furthermore I realize that you’ve been influenced by the teachings of acclaimed naturalists like Dennett and Pinker. So just as they may have been naturalistically failed by their influencers, you may have also been failed by them. But consider how pathetic it would be for you to tell people that you follow the teachings of a nobody like me rather than such acclaimed intellectuals! And yet, regarding consciousness it may be that I’m able to present a strong argument that they’ve led you down a non-causal path. There’s no problem if you’re able to easily dismiss my argument. In that case your heroes ought to be correct while I’m probably wrong. Otherwise however, the thought that I might demote such people to unwitting supernaturalists should trouble you. And as a good friend I truly don’t want to trouble you! If the situation were reversed however, I know that I’d much rather that you straighten me out, even if it’s troubling. So I’ll proceed from this premise. Fortunately the argument is quite simple.

    The general point is that in a causal world, information can only exist as such to the extent that something appropriate becomes informed by it. I can’t think of a single example where this does not apply. So here English letters are not inherently informational, but rather only in respect to what such letters inform. Sufficiently educated English speakers, for one example. DVDs may inform DVD players, or table legs that they shim, and so on. In no case however should we consider DVDs to exist informationally in themselves, and this is because nothing would actually be informed by them.

    If my argument is wrong then I implore you to present some examples of things that can usefully be said to exist informationally in an inherent sense, and so needn’t inform anything to exist as such. But if consciousness is the only example that you can come up with, then I’d have you also consider the possibility that this is also wrong. Thus it could be that naturalism mandates that our brains’ processed information informs some sort of brain physics to exist as us, and so doesn’t break this causal rule.

    1. Eric,

      I actually prefer not to follow anyone, but to learn from them. There’s no one I agree with 100%, including Dennett and Pinker. Lately I’ve been finding the papers by David Lewis interesting, at least for his thoughts on the mind. But some of his other stuff, on modal realism, I just don’t buy. So I’d say don’t worry about prestige. Just worry about good arguments. And understand that there will almost certainly always be some things we disagree on.

      On information, consider gravitational waves. They’ve always existed, but until Einstein discovered their possibility in his equations, we had no idea they were there. It’s only in the last few years that we’ve been able to detect them. Now that we can, they can inform us about cosmologically distant events, like black hole collisions, we wouldn’t otherwise know anything about.

      Now, to your question, did the nature of gravitational waves change from before we could detect them? Or before we even knew they existed? If so, what was the nature of their change? If they didn’t change, then weren’t they always information?

      Or consider the spectral absorption lines in starlight. Again, they inform us about what elements are present in the star. Same question. Did their nature change from before we discovered them, or understood their significance?

      I could ask the same questions about DNA, or any number of natural phenomena that now count as information.

      If these phenomena didn’t change in any physical manner by our discovery, then whether to consider them information becomes a definitional matter. And we’ve historically agreed that when it comes to definitions, there’s no strict fact of the matter. Which brings us to the distinction I’ve discussed before, between physical information and semantic information. Physical information is there whether it informs us or not. But it only becomes semantic information for an agent when that agent understands its causal history.

      Hope this helps.

      1. Yes that does help Mike. Furthermore on definitions I think we’ve also agreed that if I’m trying to understand a point of yours, then it’s my obligation to accept your associated definitions so that I might effectively do so. And vice versa. So in this case it’s only my own definition for information that applies. But fortunately what you’ve said about semantic and physical information, does already conform. If we detect something, then it may be said to be informational in this sense, which is to say informational because it informs us, or that it’s semantic. We are what gets informed in this case, and so exist as the causally appropriate instrument that’s informed. If we aren’t informed then there’s no information in this sense given that nothing appropriate becomes informed.

        The main point of our discussion however is not semantic information, but rather what you’ve called physical information. This is to say, causality beyond what’s understood. Gravitational waves, light from stars, DNA, and so on, have causal effects and so may be said to “inform” what they affect regardless of us being informed by them. In no case however should they be said to exist informationally without informing something appropriate. Of course we might say that light from a star which is traveling to a planet carries “potential information” in respect to what it will inform on that planet, but only in the sense of what ultimately gets informed by the light. There will be no inherent information to that light. Similarly a DVD can exist as potential information to a DVD player, or a table leg that could potentially be shimmed by it. My point however is that in no case should anything be inherently informational, whether a DVD, light, DNA, or anything else. This is because causality mandates that something appropriate be informed by it.

        This gets us to a popular conception of consciousness which seems to violate this rule. Here the potential information which is sent to your brain when your thumb gets whacked, could be expressed with an arbitrary level of fidelity by means of marks on paper. Your brain of course processes the potential information sent to it to thus make it actual information. And similarly the marked paper could be fed into a computer which produces more paper with marks on it that correspond with your brain’s associated potential processed information. So here the first marked paper also becomes actual rather than just potential information. But now we’re left with two forms of potential rather than actual information — one brain based, with the other a second set of marked paper.

        You currently posit that thumb pain will result in either case. What this means however is that regarding thumb pain at least, neither brains nor paper need inform anything in order for potential information to also be actual information. My point is that this violates causality because processed information should only exist as such to the extent that something appropriate becomes informed by that information. Furthermore apparently consciousness is the only exception to this rule that you’ve been able to find so far. That’s not a good sign!

        The logical way out of this mess would be to look for what our brain information might inform to exist as an experiencer of thumb pain. So far I think you’ve been too ideologically opposed to this possibility to entertain it. But I’m also observing that your current position seems to conflict with the tenets of causality itself, and specifically because information (physical or semantic) should only exist as such in respect to what it informs. If processed brain information leads to the experience of thumb pain, then causality mandates that such information must inform something appropriate to exist as that experiencer. You’ve not yet provided a second option.

        1. Eric,

          Let’s review some points.  Maybe you can tell me where you see us diverging.

          1. What I call “physical information”, you call “potential information”.  Since the first word of both phrases starts with ‘p’, we can conveniently just abbreviate both as “p-info”.  For you, the ‘p’ means “potential”, for me “physical”.

          2. What I call “semantic information”, you call “information”.  Let’s just abbreviate that for now as “info”.  

          3. If I understand you correctly, you agree that p-info is causal, even if, as you insist, it isn’t “inherently informational”.  (If p-info is not causal, then what would you say gravitational waves, spectral absorption lines, or DNA are instead?)

          4. I say that mental processes are “information processing”, but by that, I mean p-info processing, or more precisely, the right kind of p-info processing.  For me, that’s information processing, but for you it’s just processing of potential-information.  But it seems like we agree that in any case p-info processing is causal.

          5. If p-info processing is causal, and I say the mental is the right p-info processing, then where do you see causality being violated? Why isn’t the sum of all this causal processing a candidate for something that could be “informed”, turning the p-info into info? Or for something that could feel pain?

          6. If causal processing by itself is not sufficient to be a candidate that can be informed or feel pain, then what has to be added to make it sufficient? And wouldn’t that requirement itself be the very thing you claim I’m doing, positing something non-causal?

          1. Wow, major inconsistencies in definitions here! In a sense this is heartening because it suggests that you logically shouldn’t grasp my point to even potentially assess the possibility that you’ve inadvertently adopted a non-causal position. That said however, it’s also possible that you’ve subconsciously avoided my definitions given that from them my point does seem difficult to counter. It’s hard for me to say which of the two might be more correct. Good fun either way though!

            When I say “potential information”, I’m talking about stuff that is not in itself informational. Instead it could or would be information if/when it informs something appropriate. So a DVD should not inherently be considered informative, but rather only in respect to something appropriate that it informs. A DVD player would be appropriate for a DVD to inform, for example. So when it’s not informing such a machine, what’s encoded on it should merely be considered “stuff” rather than “information”. Yes it could potentially become informative if it were to inform something like a DVD player, but when it isn’t, it won’t be. Or if we’re talking about a DVD informing a table leg, this should only be the case to the extent that it actually does shim a table leg rather than could potentially do so. Or light that informs a rock by heating it should only be informative to that rock, to the extent that it actually does this rather than potentially could do it some day after it breaches the atmosphere of a planet where that rock happens to be exposed.

            Also what you call “semantic information” is not what I call “information”. I think I call semantic information what you call it. It’s just that I further mandate that someone be informed by that information. As I see it that’s what makes it “semantic”. Shouldn’t it be said that semantic information will not exist as such to the extent that no one is informed by it? Here a book that informs no one will not be semantic information in itself, and specifically because of the defining point that it informs no one and so in this sense it can’t be. The essential bottom line here (and in all cases that I present) is that information that informs nothing appropriate, should not be considered “information” in that sense, but rather just “stuff”. This is not to say that something couldn’t potentially become information in a given sense if ever it does. In that case there would be a potential information situation that will not be actual information until the informing actually occurs.

            The problem you’ll have if you do permit information to be defined as informative to something appropriate, I think, is this mandates that there must be some sort of brain physics which processed brain information informs to exist as consciousness. You could of course argue that this isn’t a useful definition for “information”. That argument would become suspect however if it seems motivated merely by personal convenience. To potentially counter my claim that you (and many others) have inadvertently adopted a non-causal position, something stronger should be required. Yes we could go round and round with apparent misunderstandings and definitional disputes. The only way to effectively challenge my argument however, should be to display an example of something known that seems informational in an inherent sense. Beyond the theoretical creation of thumb pain by means of marks on paper that inform nothing, what would be a known example of information that can usefully be said to exist as such without it informing anything?

            Of course in the end I look to empirical science to demonstrate that our brains inform an electromagnetic field to exist as thumb pain and so on. So that’s what I suspect processed brain information informs to create consciousness. Until then however I’d love to help science lose some of the habits that I consider most problematic.

          2. Eric,
            You spent most of this reply reiterating your definitions. But for this discussion, I had already accepted them in my last reply. My point was that they’re irrelevant to the causality point you keep making. You say that “potential information” should just be regarded as “stuff”. Ok, then replace every instance of “p-info” above with “stuff”, and we end up with “stuff processing”. If we agree that stuff is causal, or at least participates in causality, then we’re still talking about causal processing.

            So using your definitions, another way to describe my position is… causal processing, particularly the right kind of causal processing. Yet, you keep saying I’m positing something non-causal, but seem unwilling or unable to identify where exactly the non-causality is. Until you can, I can’t see any reason to follow the rest of your argument. If you want to move these discussions forward, focusing on this point would help.

          3. The specific place where causality goes missing from your account, is quite simple Mike. If causality mandates that processed information can only exist as such to the extent that it informs something appropriate, then here you’re missing the “informs something appropriate” part of causality. This not only mandates that processed brain information must inform something appropriate to exist as what experiences a thumb whack, but that the right marks on paper would need to inform something appropriate as well.

            I don’t expect you to say, “Okay, now I understand what your criticism happens to be”. Here’s an associated thought however. Notice that your belief that brains create an experiencer of thumb pain by means of processed information in itself (rather than processed information that informs something appropriate), is exactly what makes this position impossible to disprove. Conversely when someone says “Here’s the physical dynamics which processed brain information informs to exist as an experiencer of thumb pain…” that specifically should make the person’s proposal falsifiable. In that case it ought to be possible to empirically check whether or not what’s proposed makes sense. But not otherwise because in that case nothing would exist to check.

            Another way to go would be to forget about consciousness for a moment and consider a well known example of information in this context. Let’s consider a recording of your voice. There are countless ways that your voice could theoretically be recorded, and with various degrees of fidelity. I’m saying that none of them will inherently be informational, but rather only in respect to something appropriate that a given recording informs. Here correlated marks on paper wouldn’t inherently be informational in this sense. Instead marked paper would only be “stuff”, that is except in relation to something appropriate that those marks inform. This is mandated by causality. Consciousness itself is the only exception to this rule that you’re able to propose. Here you don’t consider processed information potentially able to inform something appropriate to exist as such, but rather to inherently exist as such. Thus my objection.

          4. Eric,
            I’ll take one last shot in this thread.

            I’m not aware that causality mandates anything but that cause precedes effect, and even that, in fundamental physics, reduces to an interactive relationship that can go both ways, except for the second law of thermodynamics (and possibly QM wavefunction collapse). Maybe at some point you could elaborate on why you think causality mandates what you’re saying. In any case, for purposes of this discussion, I accepted it above, along with your definitions.

            According to your definitions, what I’m calling “information processing” is actually just a lot of causality happening at low energy levels. So we’ve taken the information thing off the table. We end up with a system which has energy impinging on its boundary regions, some of which are specialized to take those impingings and transform them into various internal effects, and some of those transformed effects are amplified back into environment. (I’d call these “sensory” and “motor” systems, but that might be seen as begging the question that they’re information. So we’ll stick with just physical effects.)

            So we have a completely causal system, going through physical transformations. If we eschew the information language, why isn’t it a candidate for being “informed” in the way you stipulate? What do you consider necessary for such a system? (If you just say “EM fields”, what specifically about them allows them to make the cut while neural processing doesn’t?)

          5. I had to wait for the weekend to think about this one Mike. Yes let’s talk about “causality”. And I’m sure you have no qualms with a “systemic” clause as well — you and I have no use for outside influences! I think you’re right that we can define this as “cause precedes effect”. And yes, as you note this gets into the variable of time. If time stops then one should not precede the other — the cause will effectively exist while the effect will not even exist temporally. Or if time runs backwards then effect will precede cause. In any case systemic causality is defined not to deviate regardless of time variability, or a so determined reality will result regardless of the time component of when.

            Furthermore I don’t think we yet need to get into any human made laws or quantum mechanics. For the moment we should be able to go with “a priori” definitions. Here we can never know that our world truly is a perfectly closed causal system, but we do presume this in order to preserve the integrity of the domain of science.

            Because we’ve been using terms like “information” and associated “processing”, it should be appropriate to reduce them back to systemic causality so that my assessment of your current perspective might be checked by means of these terms as well. But I can’t quite leave your assessment of thumb pain by means of “a system which has energy impinging on its boundary regions, some of which are specialized to take those impingings and transform them into various internal effects, and some of those transformed effects are amplified back into environment”. Because we’re merely talking about a thumb pain by means of the right marks on paper converted to the right other marks on paper, that account seems to leave out too many essential details.

            Ultimately anything that happens under systemic causality could be defined as “information”, and here with “processing” as what that information causally affects. My full observation however is that information in one sense here can only exist as such to the extent that something appropriate becomes informed by it — otherwise it will be non informational in that sense, or rather just “stuff”. So the entertainment content encoded on a DVD won’t be informational when that DVD is used as a table shim. This is to say that whatever content is encoded on a DVD should be irrelevant in respect to its ability to function as a table shim. Or going the other way, what’s encoded on a DVD is what matters to a DVD player’s output and so will be informational in this sense, though that should be irrelevant to any table shimming informational attributes. So here it should be useful to say that information in one sense is not inherent to anything, but rather should exist exclusively in respect to something appropriate that’s informed by that information. Furthermore we should be able to replicate this model for all elements of causality, and whether this involves molecular structure, bombs, planets, and so on. Also observe that what I’m saying here is true by definition and so should be no more possible to disprove than that squares have four sides. Systemic causality mandates that information in this sense can only exist as such to the extent that something appropriate becomes informed by it.

            This observation is why I consider the scenario from my thumb pain thought experiment to violate systemic causality. Yes the potential information that your thumb sends your brain when it gets whacked, could be replicated with marks on paper to some arbitrary degree of fidelity. Then if your brain processes what was sent to it (just as a DVD player should process once it’s spinning inside it, or a table leg’s molecules should process as they begin interacting with the DVD’s molecules in a way that they tend to counter gravity), this would make it real information rather than just “stuff”. In all cases something appropriate would be informed.

            Here we’re left with processed brain information which might just as well also be represented by marks on paper. I realize that this is where you might say “Yes, what you’ve presented so far is an entirely causal solution for the problem. The processing of that brain information will in itself result in what I feel. Therefore the right marks on paper converted to the right other marks on paper is another causal process that ought to result in something that experiences what I do when my thumb gets whacked (though in practice that conversion ought to be insanely complex!).” Okay, but the difference between your two scenarios and the two DVD scenarios that I presented, is that in yours nothing is identified to be informed by the presented information that may or may not be appropriate. (Mine were the DVD player and the table.) So your account seems to be missing the main constituent by which information can potentially become informative in a causal world, which is to say something appropriate that’s informed. I don’t know of a single case where it’s effective to say that information may otherwise exist. Here systemic causality seems incomplete.

            Another way to potentially illustrate my point is to talk about what I’m quite sure would change your mind in this regard. If scientists were to empirically determine that our brains create an experiencer of thumb pain (and all consciousness) in the form of the proper parameters of electromagnetic radiation, then in retrospect I think you’d agree with me. Let’s say that scientists were to find that they could not only distort someone’s consciousness in predictable ways given certain parameters of exogenous EMF transmissions in the brain which were similar to that produced by synchronous neuron firing, but even learn to modulate such transmissions to give the person specific images, sounds, feelings, and so on that they’re able to report. Here scientists would meticulously categorize which elements of consciousness correlate with various specific EMF parameters. If something like this were to occur then I think you’d grasp what my point happens to be. At least here it ought to be apparent that the information that your brain processes from a whacked thumb, should only exist as such to the extent that it goes on to inform an appropriate electromagnetic field that exists as you the experiencer. And indeed, in that case I don’t think you’d know of an exception to the definitional rule that information can only exist as such to the extent that something appropriate becomes informed by it, which is to say that Occam’s razor should pare things down. Functional computationalism should remain I think, though now in a more causal way than Alan Turing was able to predict.

  5. Here’s an interesting take on enactivism:

    Let the category ‘enactivism’ encompass philosophical and theoretical
    accounts of mind and cognition which (a) underline the constitutively
    embodied, affective, situated and action-oriented dimensions of mind
    and cognition, up to the point that cognitive processes are not restricted
    to what takes place in the central nervous system but rather encompass
    bodily, motor and environmental processes; and (b) reject the claim that
    cognitive processes necessarily involve the manufacture or retrieval of
    mental representations, defined as intracranial and naturally contentful
    physical structures. This latter point distinguishes enactivism from other
    post-cognitivist theories of cognition such as extended cognition, distributed
    cognition, embodied cognition or situated cognition. –Pierre Steiner, INQUIRY 2023
    https://doi.org/10.1080/0020174X.2023.2216753

    I think the key words there are “intracranial and naturally contentful” in part b. Some obvious ways to reject those “representationalist” (Steiner’s term) claims would be: meaning is social, not natural; meaning is contextual/historical, not self-contained in the brain.

    Steiner also says that enactivists can still talk about intentionality. I can send you the paper if you like.

    Would (a) and (b) put Steiner’s enactivists close to the “eliminative connectionists” you’re talking about?

    1. Thanks Paul.

      I suspect that version of enactivism would also encompass eliminative connectionism, assuming they still accept any version of computation. But the relationship is messy. You can learn more about eliminative connectionism in the SEP article.

      https://plato.stanford.edu/entries/connectionism/#ShaConBetConCla

      My take on representations is that I can lie in my bed and imagine hiking in the forest, walking through an airport, or eating at a breakfast buffet. Of course, I can imagine them because I've physically done them before. But I'm not physically doing them while I'm imagining them in bed. For me to accept a dismissal of representations, I'd have to see a compelling representation-free account of what's happening in those types of scenarios.

      1. I think the term “representationalism” is unfortunate because one could believe in a modest form of representations without believing in an intracranial, naturally contentful physical structure. It’s jargon, not a straightforward description.

        1. I agree. When someone says they're opposed to representations, it makes sense to ask which theory of representation they're reacting to. That said, I may not be catching their meaning with that phrase, because it seems reasonable to suppose the content of my imagining was in my head. I can see the argument that I might have an easier time remembering those events if I moved around in a way similar to how I was moving at the time, but that doesn't seem necessary in many cases.

    2. In point “a,” where exactly is the boundary between the central nervous system and bodily/motor processes? Isn’t there a pretty good overlap in the spinal cord? Or, should it really say “brain” rather than “central nervous system?”

      The main point is there is a difference in my mind between processes inside or at the boundary of the body and “environmental” processes, although that also could get fuzzy if you start to count things like a spider’s web or a human tool.

      In point "b," this gets into the definition of cognitive. If cognition can be unconscious, then yes, we know and discover things we are not consciously aware of. The brain does things of which we are not aware. I don't know many people who doubt that, if we define it that way. Some have estimated that 80-90% of what the brain does is unconscious. I've never been sure where that number came from, but it seems likely that at least some, maybe a lot, of what the brain does is unconscious. That's what makes the conscious part require an explanation of what's different about it.

  6. “Does the embodied movement challenge computationalism more than I’m thinking?”

    I wasn't aware that the embodied movement was a direct challenge to computationalism itself. Can't you just make a robot or something? I wouldn't think this would be a conceptually insurmountable issue. My problem is that I don't see how a computational theory of mind could be falsified, assuming of course that we're talking about consciousness as such. Maybe if it actually became a serious question, I might not care one way or the other as far as ethical treatment goes, but I wouldn't know for sure.

    Also, I’m not sure I understand the terminology. What would it mean for a computer to have a mental state?

    Still trying to wrap my mind around information. I take it this is a technical term, but it’s not easy to get around the normal sense of the word as being something that exists in minds, not in the world.

    1. Embodied cognition doesn't have to be a challenge even to classic computationalism, although I'll admit it sits uneasily with it. But many embodied proponents are fine with connectionism. However, some of the more radical factions insist that pretty much everything about cognitive science has to go.

      On falsifying computationalism, we didn’t have to see the progress in AI we’ve been seeing. It might have turned out that computation overall was just the wrong approach. Imagine if recognizing faces, for instance, had required a completely different approach with a completely different class of hardware. If it had, computationalism would have been in trouble. It might even still turn out that some aspects of cognition aren’t amenable to it. Ironically, it’s movement coordination that could end up being the issue, although the Boston Dynamics people seem to be making progress anyway.

      What would it mean for a computer to have a mental state? I think it depends on how we define mental states. For me, a mental state is a functional one, a causal role, which leaves no in-principle barrier to a machine having one. But if someone holds to a version that must be non-physical, then obviously we wouldn't have a clue how to put that in a machine, unless, like Chalmers, we see it coming along for the ride once the functionality is there.

      Maybe it's due to a career working in IT, but I've never really thought of information as something that only exists in minds. Books, for instance, always seemed like information. It's just that when we really think about what the information in a book amounts to, it seems to become a broader concept. As I noted to Eric, if we agree that something like spectral absorption lines in starlight are information about which elements are in the star, then when did it become information? Did it change when we discovered the significance of the absorption lines? Or was it always just what it is now? If so, then it seems like it was always information.

      I make the distinction between physical information, which is always there, and semantic information, the significance of which is understood by an agent (not necessarily a conscious one). But physical information is always there regardless, and always seems to have the potential of becoming semantic. At least in an external mind-independent reality.
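      To make that distinction concrete, here's a toy sketch of my own (just an illustration, not anything from the information theory literature), using approximate textbook wavelengths for hydrogen's Balmer lines and the sodium D lines. The dips in the spectrum are the physical information, there whether or not anyone looks; the lookup step is where an agent treats them as being about elements, which is the semantic part:

      ```python
      # Toy sketch: the dips in a star's spectrum are physical information;
      # the lookup is where an agent treats them as being *about* elements.
      # Wavelengths are approximate textbook values in nanometers.

      KNOWN_LINES = {  # element -> characteristic absorption lines (nm)
          "hydrogen": [656.3, 486.1, 434.0],  # Balmer lines
          "sodium": [589.0, 589.6],           # sodium D lines
      }

      def identify(observed_nm, tolerance=0.5):
          """Return the elements whose known lines all match observed dips."""
          found = set()
          for element, lines in KNOWN_LINES.items():
              if all(any(abs(obs - line) <= tolerance for obs in observed_nm)
                     for line in lines):
                  found.add(element)
          return found

      # The dips were in the starlight long before spectroscopy existed;
      # only the interpretive lookup step is recent.
      print(identify([656.4, 486.0, 434.1, 589.1, 589.5]))
      # -> {'hydrogen', 'sodium'} (set order may vary)
      ```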

  7. I guess computation is in the eye of the beholder.

    I’m fine with modeling brain behavior with computations, but whether the brain is actually computing in the digital sense of the word is another matter.

    Are snowflakes computing when they form? If the answer is yes, then probably the brain is computing. A very broad definition of computation might include snowflakes as computers.

    “His new model is “semi-empirical,” partly tuned to match observations rather than explaining snowflake growth starting entirely from first principles. The instabilities and the interactions among countless molecules are too complicated to unravel entirely. … Although ice is especially weird, similar questions arise in condensed matter physics more generally.”

    https://www.quantamagazine.org/toward-a-grand-unified-theory-of-snowflakes-20191219/

    1. I don’t think brains do digital computation.  Analog computation, or maybe a hybrid between digital and analog, seems like a better understanding.
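      As a rough illustration of that hybrid picture, here's a minimal leaky integrate-and-fire sketch, a standard textbook idealization rather than a model of any real neuron, with purely illustrative parameter values. The membrane potential varies continuously while the spike is all-or-nothing, and the analog input reappears as a firing rate:

      ```python
      # Minimal leaky integrate-and-fire neuron (illustrative parameters).
      # The membrane potential is analog; the spike is a discrete event.

      dt = 0.1          # integration step, ms
      tau = 10.0        # membrane time constant, ms
      v_rest = -65.0    # resting potential, mV
      v_thresh = -50.0  # spike threshold, mV
      v_reset = -70.0   # post-spike reset, mV

      def spike_times(input_drive_mV, duration_ms=500.0):
          """Simulate and return the times of threshold crossings."""
          v = v_rest
          spikes = []
          for step in range(int(duration_ms / dt)):
              # Analog part: leak toward rest, pushed up by the input drive.
              v += dt * (-(v - v_rest) + input_drive_mV) / tau
              # Digital part: crossing threshold emits an all-or-nothing spike.
              if v >= v_thresh:
                  spikes.append(step * dt)
                  v = v_reset
          return spikes

      # The continuously varying input reappears as a firing *rate*:
      for drive in (16.0, 20.0, 30.0):
          hz = len(spike_times(drive)) / 0.5   # spikes per second over 0.5 s
          print(f"drive {drive:.0f} mV -> about {hz:.0f} Hz")
      ```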

      It seems possible to tell a story of the snowflake computing, but I don't know that it gives us much (unlike computational models of the snowflake, which obviously do).  Talk of computation does seem to give us more in understanding what neurons and neural circuits are doing, at least until someone finds a more useful way to think about it.
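      And for a sense of what a computational model of the snowflake can look like, here's a toy diffusion-limited aggregation sketch. To be clear, this is a generic dendritic-growth model of my own choosing, not the semi-empirical model from the Quanta article; it just shows random walkers freezing onto a cluster to grow branched, vaguely flake-like arms:

      ```python
      import random

      # Toy diffusion-limited aggregation (DLA): random walkers freeze when
      # they touch the cluster, growing branched, dendritic arms.
      # A generic growth model, not the article's semi-empirical one.

      SIZE = 41                      # grid is SIZE x SIZE
      SEED = (SIZE // 2, SIZE // 2)  # initial frozen "ice" cell
      STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
      frozen = {SEED}

      def release_walker():
          # Start on a random edge cell and wander until touching the cluster.
          edge = random.choice("NSEW")
          x = {"N": 0, "S": SIZE - 1}.get(edge, random.randrange(SIZE))
          y = {"E": 0, "W": SIZE - 1}.get(edge, random.randrange(SIZE))
          while True:
              if any((x + dx, y + dy) in frozen for dx, dy in STEPS):
                  frozen.add((x, y))   # stick next to frozen material
                  return
              dx, dy = random.choice(STEPS)
              x = min(max(x + dx, 0), SIZE - 1)   # step, clamped to the grid
              y = min(max(y + dy, 0), SIZE - 1)

      random.seed(0)   # reproducible growth
      for _ in range(300):
          release_walker()

      # Crude text rendering of the branched cluster.
      for x in range(SIZE):
          print("".join("#" if (x, y) in frozen else "." for y in range(SIZE)))
      ```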

  8. A lot of thought-provoking stuff in just a few paragraphs. The question is whether computational theories can account for the phenomenology that characterises our conscious experience. It has apparently become intertwined with the question of whether an AI system could be conscious.
    I recently watched a stimulating discussion on this topic between David Chalmers and Anil Seth:
    WSF What creates consciousness?
    Seth has won me over more than Chalmers. I share his doubt that computation of some sort can account for consciousness. Being smart is not the same as being sentient, as he once put it.
    Although I too have difficulties with the whole theory of embodied cognition, I consider consciousness to be a natural attribute of living organisms, where the brain and body work together to stay alive.

    1. Thanks for the link. I’m pretty familiar with both Chalmers and Seth’s pitches, but watching them interact might be interesting.

      Ironically, when it comes to what consciousness might be, I agree more with Seth on predictive theory. But I think his biological biases are misguided. I can't see any reason why a technological system can't have the same mechanisms. So I'm more with Chalmers when discussions of AI come up, although I'm probably more skeptical about how close we currently are.

  9. The way The Matrix handled this has become increasingly problematic to me as well. I'm currently reading the Bobiverse series, and I'm curious what you think of the way mind uploading was handled in those books. There is some talk about a "special process" or "proprietary technology" or something like that, which helps to hand-wave the issue away, but I feel like it doesn't succeed entirely.

    1. I enjoy the Bobiverse books, but while they're harder sci-fi than average, there's still a lot of dubious stuff. It's been a while since I read the early books, so I don't recall too much about the upload process. I know when Bob clones himself, quantum uncertainty is supposed to lead to slight differences in each copy, giving each of them a slightly distinct personality. In the later books, that gets elaborated on a bit, leaning on common misconceptions about quantum entanglement. (In Taylor's defense, it's a common strategy in sci-fi.)

      But that's always the challenge. If Taylor had strictly stuck to science, having the Bobs deal with real energy issues for their spaceships, no FTL communication, etc., the books would have a very different feel.
