The beginnings of purpose

Michael Levin and Daniel Dennett have an interesting article up at Aeon on the right way to talk about purpose and cognition in biology, particularly in simple organisms and lower level mechanisms. The core thesis is that an organism, at any level, all the way down to a single cell, is an agent with its own agenda and goals. They are careful to stipulate that they’re not saying these simple systems are conscious.

Biologists often talk about these organisms and mechanisms as though they have purposes and agendas. For example, a biologist might say that a plant “wants” its seeds to be consumed by animals so they’ll eventually be passed in locations far and wide.

But most biologists, if pressed, will say that they’re only talking metaphorically. In a way, this is similar to the way physicists and chemists talk about what an electron “wants” to do, or which atoms “want” to bind with each other and which have an aversion to each other.

Levin and Dennett argue that biologists should stop worrying about the metaphor caveat. That it’s actually holding back higher level theories about these systems. Systems can have goals and purposes without understanding. In a phrase Dennett has been using for the last few years, they can have competence without comprehension. There may be something to this stance.

Biologists, at least the ones in the US, have been leery of this approach, partly I think to avoid confusion between the scientific view of evolution, which is unguided, and the version often imagined by the public, that it’s guided, at least in part, by a deity. All the battles with creationists and intelligent design advocates over the decades haven’t helped. But those battles seem to have dwindled in recent years. Maybe we’re at a point where evolved purposes can be discussed without theological confusion.

Of course, I can see a lot of biopsychists, those who see all life as being conscious, objecting to Levin and Dennett’s caveat that these goal seeking systems aren’t conscious. For that matter, I can see panpsychists objecting that it’s wrong to draw a boundary at biology, since physicists and chemists do often use the purpose language for what they study.

But as the article mentions, the laws of physics give us a good idea of what an electron will do in any particular situation. They aren’t nearly as useful when considering what a mouse will do. Yes, it’s all physics and computation, but the functionality of a living system adds far more complexity, and arguably requires modeling at higher levels of organization.

Levin and Dennett discuss an important measure of the intelligence of a system, how broad its scope of concern is. A simple system, such as a worm, often just responds to immediate stimuli it receives from the environment. A more sophisticated system takes in information and responds to a wider spatial area. An even more sophisticated system considers information over increasingly larger time scales. Humans, of course, have a very wide scope on this scale, spatiotemporally wider than anything else we currently know of. (This resonates with the hierarchy I sometimes discuss.)

One area the article doesn’t touch on is artificial intelligence systems. In some ways, they’re going to mess up the spatiotemporal scale, because it’s generally easier for technological systems to take in information from a wide variety of sources than it is to make use of information in the immediate environment. It’s the old conundrum that what’s easy for humans (or other animals) is often hard for machines, and vice versa. Of course, with these types of systems, no one is shy about discussing their purposes, since the engineers are typically there to tell us about those purposes.

What do you think? Should we just get over the aversion to discussing purpose in cells and other lower level biological mechanisms? What about in proteins? Or even non-living systems? Or are we just reifying metaphors and confusing ourselves? Should we maintain a distinction between teleology, the existence of purposes and goals, and teleonomy, the appearance of such goals?

70 thoughts on “The beginnings of purpose”

  1. To quote from the article, “It’s all about goals”. I think the starting point has to be: What is a goal? My answer: a goal is the property of a system which tends to move the system and/or environment toward a particular, non-equilibrium state.

    This definition applies to a lot of things, including whirlpools, roombas, corporations, etc. The article in question only considers life, from single-celled organisms up to humans, but they do make the important point that subsystems within a larger system can have their own goals. And they also clarify the point that human goals are just an example of what you can get when you combine subsystems which have their own goals.

    This will bring up an interesting question though: to whom/what do you ascribe a particular goal? Does a toaster have the goal of making toast? I would say no. I would ascribe the goal to the system which creates/arranges the mechanism which achieves the goal.
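    The definition in the first paragraph can be sketched in code. This is just a toy illustration (names and numbers are made up): a thermostat-style controller that holds a variable away from its equilibrium value. Left alone, the temperature relaxes back to ambient; with the controller, the system tends toward, and maintains, a particular non-equilibrium state.

```python
# Toy illustration of "a goal as a tendency toward a non-equilibrium state".
# AMBIENT is the equilibrium the temperature drifts toward on its own;
# SETPOINT is the non-equilibrium state the system actively maintains.
AMBIENT = 10.0
SETPOINT = 20.0

def step(temp, heater_on):
    """One time step: passive drift toward ambient, plus heat if the heater is on."""
    drift = 0.1 * (AMBIENT - temp)   # relaxation toward equilibrium
    heat = 1.0 if heater_on else 0.0
    return temp + drift + heat

def run(temp=10.0, steps=200):
    """Run the goal-directed loop: heat whenever below the setpoint."""
    for _ in range(steps):
        temp = step(temp, heater_on=temp < SETPOINT)
    return temp

# Without the controller the system sits at ambient; with it, the system
# holds itself near the setpoint, a state that persists only through the
# system's own ongoing activity.
```

    On this definition the whirlpool, the Roomba, and the corporation all qualify to varying degrees: each has dynamics that keep pushing things toward a state that would not persist on its own.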

    Whatcha think?



    1. This is why I’ve historically been reluctant to ascribe goals to some systems. It does lead to the natural question: the goal of whom, or what? For evolved systems, whose goal are we talking about? We could say “evolution”, or “natural selection”, or maybe just “nature”, and I’ve gone that way before. But we’re not talking about anything that imagined the goal beforehand. Maybe it can work if we’re scrupulous about remembering Dennett’s competence without comprehension.

      In the case of a toaster, it does seem easy to say making toast is not its goal. But when a Roomba is trying to get back to its charging station, or when my laptop is bugging me to update, it becomes easier to just ascribe those goals to the device. They initiate actions to achieve those goals in a way the toaster never does. Of course, they were given those goals by humans, but once acquired, the goals seem to be theirs.

      By the same token, squirrels have a goal of collecting nuts. That instinctive goal evolved because it’s adaptive. But that doesn’t change the fact that it’s the squirrel’s goal.

      Ultimately, this seems to amount to finding a way to talk about these systems. The systems are the systems, and they’re going to do what they’re going to do. Discussing where a storm “wants” to go can be a useful shorthand.

      I don’t know. My thoughts on this seem to be muddled.


  2. In From Bacteria to Bach and Back, Dennett put a lot of emphasis on natural selection. That makes sense to me as a way to delineate purpose. The purpose of a feature (e.g. heart) is the function (to pump blood) for which it was selected. I.e., the result which enabled the higher survival and/or reproduction of organisms with the relevant feature. Dennett calls this “design without a designer” – again, right move, I think.

    I really don’t see any danger that anyone more than half-awake will mistake talk of the purpose of biological organs, or behaviors, for Intelligent Design or whatever. Professors in undergraduate bio courses might want to avoid such talk though (because of the half-awake thing).


    1. Definitely if that’s what we are doing and we know we’re doing it, it is silly and we should stop. On the other hand, if we take ourselves to be doing that when we’re not and stop, what are the costs?


      1. Sure, errors in judgement can have a cost. I think metaphors can offer personal inspiration and they’re also helpful as indexes to more complex concepts. Ultimately it comes down to what we observe and measure and our (disprovable) theories about those. I’m fine with metaphoric, even poetic, language if it serves the moment. A presumption is that people in the know, know what you mean, and those that don’t can ask.

        FWIW, “purpose” is a general word; Wiktionary lists five distinct modern meanings. Its use, therefore, should always, I think, be understood as metaphoric or poetic. It’s not a scientific term one finds in a theory but in a philosophical effort. Different domains.


        1. Language always seems to be malleable, unless we’re working in a context of precise definitions. Even then, in science, we often run into ambiguity. I can see why scientists gravitate so quickly to mathematical representations. The meaning of variables can still be ambiguous, but it’s hard for their relations to be ambiguous when they’re in a mathematical formula.

          I’m actually starting to think that purpose talk is fine, but I think there is value in a periodic disclaimer so people understand how it’s being used.


  3. I think it is telling that the original article does not start from definitions of what “goal” and “purpose” are. Moreover, you will have a hard time finding those definitions anywhere in the text written by Michael Levin and Daniel Dennett.
    That is a problem. If you do not clearly define what you are trying to prove or disprove, then all the talk becomes ambiguous.
    I would suggest using the definition of “purpose” from the Oxford dictionary: “Purpose is the reason for which something is done or created or for which something exists”. “Reason is a cause, explanation, or justification for an action or event”. If we combine those two definitions and exclude “explanation, or justification”, then we could hope that “purpose” is not something human-centric. Explanation and justification are things which people do. The problem is that throwing away “explanation, or justification” will probably make “purpose” an unrecognizable thing.
    With “goal” there is a problem too. Goals do not exist until somebody (something?) sets them. Well, here is the definition from Your Dictionary (https://www.yourdictionary.com/goal-setting): “The definition of goal setting is the process of identifying something that you want to accomplish and establishing measurable goals and timeframes.” That is a human-centric definition.
    I’m not saying I chose the best possible definitions. However, in complex situations, talk that proceeds without setting definitions and discussing their shortcomings is, I think, not productive enough.


    1. Good points on definitions. Following the definitions you looked at and removing the human portion, we get something like: “Purpose is the cause of something being done, created, or existing.” Purpose without foresight seems to lose its distinction from straight causality.

      On the other hand, if I say that plants grow toward sunlight, I’m conveying a complex process with a very short amount of verbiage. I can restate it in a more causal manner: plants grow in particular directions under particular conditions because that tendency has been naturally selected since it increases the probability of the plant’s leaves receiving energy from the sun.

      We seem to simply find anthropomorphized accounts easier to deal with. The innate tendency to see minds everywhere is still very much with us.


  4. Fascinating. Of course we humans may “believe” we have a purpose, but the pure physicalist will tell us we do not. No free will, and consciousness does indeed become the illusion Dennett claims it to be. Who knows. There are better men than I, Gunga Din.


    1. To me, it seems like it’s all in the definitions. But I think there’s also an argument that we shouldn’t dismiss emergent phenomena. If I envisage a certain result and then take actions to bring it about, I think it’s completely coherent to say those actions had a purpose.

      The question is how to talk about it when nothing in the causal chain envisioned that result. When we look at the activity, we can see a purpose, that is we can envision it, so it’s easy, maybe even productive, to talk about it as if it’s being done for a purpose. But should such talk require a disclaimer? Maybe another question worth asking, does such a disclaimer cost us anything?


  5. This really is very similar to Llinás’s idea that qualia originated in single-celled organisms in their irritability – the ability to respond to stimuli and react in a goal-directed way.

    What strikes me in looking for commonality is that essentially all of the inputs, including the brain system itself, and all of the outputs have a wave-like quality. Light and sound are obviously wave-like, but my understanding is that even touch, smell, and taste are triggered as wave-like phenomena in sensors. The output – locomotion – is also controlled and managed in waves, through neuronal pulses. What sits in between, as it becomes more sophisticated, also works on wave principles. So basically we have:

    input waves -> wave transformer -> output waves

    or for self-initiated actions we have

    wave transformer -> output waves

    I don’t know that there is anything particularly profound about this, but it seems like an interesting way of thinking about it.


    1. When viewed from an information movement perspective, it is hard to find a sharp boundary where we can say, here is when cognition, or qualia, or other concepts we normally refer to in more complex systems, begins. All we can do is look at the various dimensions and see the changes in extent.

      On the waves, even in automated systems, you can find wave patterns, such as the memory refresh in DRAM chips. The question is what role each type of wave plays.


  6. For the purposes of communicating science to the general public, I think most people understand that atoms do not literally “want” to gain or lose electrons. Most people get that that’s metaphorical language. I think the same is true with biology. In cases where there could be ambiguity, I’ve heard biologists note explicitly that they’re using “want” in a metaphorical sense.

    Basically what I’m saying is metaphors are a convenient way to communicate ideas, and I think most people understand when metaphorical language is being used.


    1. Generally I agree. But Levin and Dennett’s point is that, for at least the systems they’re talking about, it isn’t metaphorical. But it hinges on what we mean by “purpose”, “goal”, and other terms. I personally don’t think there’s any problem with metaphorical talk, as long as we periodically ensure everyone knows we’re speaking metaphorically. I can’t see that the disclaimer costs anything. As you noted, it can be as simple as putting quotes around words like “want”.


  7. Re “the usage physicists and chemists use when they talk about what an electron “wants” to do …”

    This is sloppy phrasing that should be avoided. As Einstein said, “Everything needs to be made as simple as possible, but not simpler.” I seriously doubt that eschewing such phrasing is “holding back higher level theories about these systems.” That sounds like hyperbole to me.

    I have read quite a few of Prof. Dennett’s books and he is quite careful with his language, simplifying things as much as is possible, but not more so.


    1. The article does make a distinction between non-biological and biological systems. Its point about holding back higher level theories applies to biology. And Dennett is the co-author.

      I actually don’t have an issue with saying the electron “wants” to do this or that. It’s often far quicker than a more precise description. It may not work for a scientific paper, but I think for casual conversation, most people understand what is being said.


  8. It seems to me that there is no difference between biological and non-biological systems, except size and complexity. Our universe appears to be deterministic, in that it is governed only by the laws of physics, those we know and those we’ve yet to know. I believe that all our thoughts, feelings, senses, behaviors, actions, consciousness, and life itself may be decomposed and ultimately understood by the fundamental particles, energies, and waves of physics. The metaphors we use to understand and deal with biological and social systems allow us to form models of reality that are simpler than the realities themselves, but those models are inherently incomplete (and ultimately inconsistent). I believe in consciousness because I experience it in the first person (subjectively), but I don’t know precisely how it works or what it’s composed of, and I don’t know how far possession of it extends among other biological and/or non-biological systems. Once we know its composition, we’ll figure out how it works, and who or what possesses it.


    1. Thanks Mike. I mostly agree.

      I’m not sure about the determinism part. There are deterministic interpretations of quantum mechanics, but we can’t cash them out, so in practice the theory remains indeterministic, and it’s unclear just how much of that indeterminism bleeds into the macroscopic world. But I definitely agree it’s all the laws of physics playing out, and they’re overall probably far more deterministic than many are comfortable with.

      On how far consciousness goes down, my take is it depends on how we define “consciousness.” I don’t see it as something binary, either completely present or completely absent. I think it can be present in varying degrees. So great apes are a lot more conscious than mice, which are more conscious than frogs or fish, which are more conscious than worms. In the end, consciousness is in the eye of the beholder.


        1. By “cash them out”, I mean make use of them. For example, if we accept the MWI as the true account, it doesn’t enable us to predict the result of a measurement (except to say that every result is realized by some version of us). It tells us why we won’t be able to make that prediction, which is good. But it doesn’t enable us, in our emergent classical world, to predict what we’ll actually see along our subjective timeline.


  9. The notion of ‘purpose’ is a linguistic concept which is a product of the human mind. I don’t think nature knows anything about this. It just works on a physical principle.

    If language is a metaphor for the physical world, then it is an approximation of the physical world. If it is an approximation of the physical world then it is not the physical world, full stop. In other words, the plant does not have purpose.

    I don’t believe that language is a metaphor for the physical world. I would suggest that language is non-representational. That is, it does not represent the physical world. There is no intentionality when we speak or write.

    Which leads to a paradox of course because here we are supposedly talking about the physical world, plants and animals, etc, but then are we? Are we not just generating a linguistic world in our minds? Have we got any closer to the plant just by talking about it?


    1. I agree that purpose is a human concept, more a psychological and social one than any direct aspect of nature. It seems to help us in thinking about what these systems do in terms of anthropomorphic motivations. We’re just wired to think in social terms rather than straight causal ones. It’s probably fine, as long as we understand that we’re projecting it rather than talking about something actually there.


  10. “A simple system, such as a worm, often just responds to immediate stimuli it receives from the environment”.

    This is very disparaging of the worm. 🙂

    For the worm to respond appropriately, it must have a map of the world that tells it where the stimuli originate, interpret the stimuli as something to move towards or away from, then trigger the coordinated movements to perform the action. Even at the low level of the worm, there is more going on than we often think.


    1. Maybe so. But I haven’t read anything to indicate worms build and utilize image maps of their environment. For distance senses, they might have light sensors, vibration detection, and some may be able to detect chemical gradients. But their responses seem largely reflexive, with sensitization and habituation, and maybe some limited associative learning.


        1. I don’t know that it needs to be a map at all, just a reflex. For instance, a light sensor that stimulates a motor nerve exciting muscles that lead to swimming in the direction opposite to the one the sensor points. Or vibrations sensed triggering inhibitory signals that stifle fixed action patterns.
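The reflex arc described here can be sketched in a few lines (a toy illustration; all names and numbers are made up): a sensor wired directly to a motor response, plus an inhibitory signal, with no map of the world anywhere in the loop.

```python
# Toy reflex arc: a stimulus wired straight to a response, with no world model.
# Positions are points on a line; the "organism" moves one unit per step.

def reflex_step(position, light_source, vibration=False):
    """Move one unit directly away from the light, unless inhibited."""
    if vibration:                  # inhibitory signal stifles the action pattern
        return position
    if light_source > position:    # light ahead: move backward
        return position - 1
    return position + 1            # light behind (or at position): move forward

pos = 0
for _ in range(5):
    pos = reflex_step(pos, light_source=3)
# pos is now -5: the organism has fled the light with nothing resembling a map,
# just the sign of a sensor reading driving a motor.
```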


          1. It doesn’t seem to be that simple, but the question of why it is not just a reflex has been asked.

            See Understanding the mind of a worm: hierarchical network structure underlying nervous system function in C. elegans.

            “This suggests an intriguing relation between the structural centrality and functional importance of a neuron. In addition, we obtain a glimpse of the possibly different structural principles used in connecting the sensory–interneuron and the motor–interneuron components of the nervous system by investigating the pair-wise degree correlation along the core order hierarchy. The occurrence of assortativity in a biological neural network, in contrast to most other biological networks which are disassortative, is especially intriguing. It may indicate that the nervous system had to face significantly different constraints in its evolutionary path compared to other biological circuits. This may shed light on one of the central questions in evolutionary biology that resonates strongly with the theme of this volume, namely, why did brains or central nervous systems evolve? An alternative could have been a nervous system composed of a set of semi-independent reflex arcs. Structurally, this would have been manifested as a series of parallel pathways that process information independently of each other, rather than the densely connected networks that we are familiar with. This question becomes even more significant in light of the argument that the larger complexity inherent in densely connected networks has led to the emergence of a conscious mind from the simple stimulus–response processing capability of primitive organisms”.

            https://www.imsc.res.in/~sitabhra/papers/chatt_sinha_pbr08.pdf


          2. BTW, I didn’t realize that link was going to pop up as a big insert into the post.

            But the article really needs a lot more careful reading from me. I think the key to understanding a lot of this is to look at more simple organisms. My guess is that maybe the worm’s map really is a lot more complex, which is why simple reflexes alone don’t work. And maybe the map begins with a map of the body itself, so that even an organism with no distance senses still needs a map of itself, and from a map of itself is gradually built a map of the world beyond itself.


          3. James,
            No worries on the embed. WP adds them without warning. I edited slightly to collapse it to just a link. The embeds tend not to work well for mobile users.

            As I noted above, I do acknowledge worms have limited associative learning. It’s what the “brain”, the cluster of neurons toward the front, provides: multimodal responses and associations. So I oversimplified immediately above by saying just reflexes. But we’re still talking about a stimulus-response system. As the paper’s conclusion notes, there may be foundations here for later developments. It’s more than the neural nets of a jellyfish, but still seems far short of imagery.


          4. We can’t really say whether there is imagery or not unless you simply mean they don’t have eyes. Surely vision isn’t a requirement for consciousness otherwise the congenitally blind would not be conscious. Maybe the worm is conscious to its own self, in its own “eye” so to speak.


          5. Based on what I’ve read of their limited behavioral repertoire, I don’t really see it. But consciousness is in the eye of the beholder.
            I would just note that if we’re inclined to label those limited capabilities as conscious, we should be consistent when assessing the consciousness of automated systems.


          6. I understand the theoretical reasons why you add that stipulation, but I think if system A without those things shows a similar range of capabilities as system B which does have them, and we consider system B conscious, we should be open to the possibility that system A is conscious via an alternate implementation.


          7. You can’t point to capabilities because consciousness is subjective experience. Capabilities prove nothing. I would want some theory about how qualia and subjective experience arise from bit flipping in a circuit or whatever the theory is.


          8. Well, that depends on whether you regard subjective experience as something that evolved. If it did evolve, then natural selection had to have something to select for. It can’t select for subjective experience in and of itself. It has to select for what is enabled by experience, which would be capabilities.


          9. That doesn’t mean that capabilities imply consciousness. The capabilities can exist without consciousness in machines. The capabilities needed consciousness in living organisms because of other constraints, like energy, and because the capabilities had to be created from a prior biological foundation. Nature couldn’t directly evolve a silicon brain on a biological foundation of carbon.


          10. Energy is an issue in technological solutions too, and I think we should consider capabilities in relation to their energy requirements. On evolution, you seem to be implying that consciousness is a spandrel of some sort, something not adaptive that just comes along with other more adaptive functionality. But as far as I can see, every aspect of subjective experience has a functional adaptive role.


          11. Not implying that at all. I wrote an entire post about consciousness enabling learning in living organisms as the primary evolutionary explanation for its existence. I would probably amend that slightly now with the idea that locomotion, which requires learning about the external world, was prior to more generalized learning.

            https://broadspeculations.com/2020/01/15/evolution-learning-and-uncertainty/

            I guess you are going to argue now that machines can learn too.

            But that isn’t the point. Capabilities, including learning, in machines and capabilities, including learning, in biological organisms can arise through completely different mechanisms just like a car can have a gasoline engine or an electric one. Just having capabilities doesn’t mean the underlying implementation must necessarily be conscious. But it is the underlying mechanism that is in question here since we are talking about subjective experience and qualia.

            If you want to say machines will be able to move, navigate, learn, solve problems, identify faces, write poetry, pick the capability you want, I can agree 100%, but there is no reason to assume the machine with the capability is thereby conscious.


          12. I think this gets us to the core of the matter then. For me, an alternate implementation does nothing to cast doubt about a system’s consciousness. We could say its consciousness is maybe different, but arguing that it’s missing is just arbitrarily designating one implementation as conscious and refusing to acknowledge it in another.

            That’s the problem with consciousness. People define it in all manner of ways, often inconsistent and arbitrary ways that they twist around to meet their intuitions. Which often makes debate about whether a particular system is conscious meaningless. We might as well argue about whether such a system is nifty.


          13. Your argument makes sense if your definition of “consciousness” has nothing to do with subjective experience or qualia, in other words the sorts of things we perceive internally that we normally associate with the word “consciousness”. In other words, in your view, a robot without subjective experience is conscious if it can fool enough people into calling it conscious, or if it can exhibit some set of capabilities at some high enough measurable level.

            But then why even call it consciousness? You can just talk about the various capabilities that machines and organisms have and leave it at that. Yes, organisms can see; machines can see. Organisms can navigate; machines can navigate. No reason to bring consciousness into the mix at all, especially since you don’t seem to think people can agree on a definition for it, and seem to think anybody can call anything they want consciousness.

            Ultimately the question in my view comes down to how nature provided an ability for organisms to move, navigate, learn, and predict about the environment with biological material. Nervous systems were the answer and consciousness, our internal subjective experience, seems to me to be a critical part of nervous systems at some level of complexity.


          14. I would ask you to consider what terms like “subjective experience” or “qualia” actually mean. They’re really just synonyms for consciousness itself, and equally vague. This is also true for other phrases often thrown around such as “like something.”

            I don’t think zombies that can fool people for any substantive length of time are possible. Eventually the system has to start using optimizations to keep the implementation and its energy usage reasonable, and those optimizations, such as a self and world model, if they enable the same abilities as a conscious system, will arguably constitute an implementation of consciousness.

            Honestly, I periodically wonder if the term “consciousness” adds anything useful to the conversation, beyond giving people a vague pre-scientific concept to argue over.

            Anyway, if you want to limit consciousness to only living things, you can simply stipulate that the system must have biological goals and impulses. That would rule out most technological systems, unless someone goes out of their way to make artificial life.


          15. Just saw this in my news feed:

            https://medicalxpress.com/news/2020-10-theory-consciousness.html

            If there’s anything to it, it would be along the lines James and I have advocated for a long time — that EMF in the brain might be important to the whole picture. I wish I could remember which SF author referred to consciousness as an incredibly complex standing wave in the brain (possibly leveraging some sort of resonance from the skull cavity, I’ve always wondered). But the moment I read that, a light bulb went off. It seems a plausible physical explanation of experiential consciousness. Interesting idea, anyway.


          16. This is a new paper from McFadden that I shared with James and Eric a couple of weeks ago. McFadden is the theorist whose ideas they’ve been exploring.

            My concerns about it remain the same. I do find it interesting that McFadden pretty much admits in the paper that this is a form of neo-dualism.


          17. You say that as if somehow it discredits the theory. How do you know this intuition, which is apparently shared, isn’t akin to Einstein’s intuitions about riding along at light speed or free fall?


          18. I don’t see the data pointing in this direction. And despite the time the theory has been around, it seems to have no support to speak of in neuroscience. But Eric and James are enthusiastic about it, both having done posts on it. I’m sure they’d be happy to discuss it.


            Not seeing data is at least an objective criterion; your first two statements were more about your intuitions and preferences.

            Do you not consider the well-known fact that the brain produces a measurable EM field to be data? How about other aspects of EM theory, such as resonance? We’ve touched before on how energy-intensive the brain is. Is the EM field, then, just a waste byproduct?


          20. Wyrd, I really don’t want to debate this theory again. But I’ll offer the following as an explanation for my lack of enthusiasm.

            It’s widely acknowledged that the EM field of the brain is a factor (one of many) in the stochastic nature of neural processing. That said, axons evolved myelin to minimize interference from environmental fields. And neural circuits often use redundant connections and repeated firing to overcome noise degradation, as well as thicker axons for crucial circuits. All of which seem to indicate that the brain does what it can to minimize the effects of the EM field. Finally, the effects of these fields are minute compared to the internal ones inside the neuron.
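            The point about redundant connections and repeated firing overcoming noise can be illustrated with a toy calculation (the error rates below are made-up numbers for illustration, not physiological values): if a single noisy transmission is corrupted 20% of the time, a majority vote over nine redundant copies fails only about 2% of the time.

```python
from math import comb

# Toy illustration of redundancy overcoming noise (made-up numbers).
# A single noisy transmission is corrupted with probability p.
p = 0.2
n = 9  # number of redundant copies (repeated firings / parallel connections)

# The majority vote is wrong only if more than half the copies are corrupted.
p_majority_wrong = sum(
    comb(n, k) * p**k * (1 - p)**(n - k)
    for k in range(n // 2 + 1, n + 1)
)

print(f"single-shot error: {p:.0%}, majority-of-{n} error: {p_majority_wrong:.1%}")
```

            The general point is just that redundancy suppresses independent noise exponentially, which is consistent with circuits tolerating, rather than exploiting, field-induced jitter.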

            During the EM pulse that indicates an attentional shift, the majority of the circuits are being inhibited, so the EM field is strongly positive in nature. (Hence names like P300.) But the circuits with functional causal effects are lonely beacons of negative charge in that sea of positivity. In that case, the field is going strongly against the important signal.

            All of this is in addition to the fact that lesions that sever circuits have functional consequences, including for consciousness. The EM field doesn’t seem able to overcome those consequences. (McFadden appears to address this in the paper when talking about split-brain patients. I didn’t find it convincing, but I didn’t see him discuss the legions of other brain injury cases that aren’t compatible.)

            The theory claims to be the only solution to problems for which there are far more standard neuroscience solutions. We should prefer simpler solutions to exotic ones, when possible.

            I realize there are endless ways to rationalize away these issues. I don’t find them convincing. If enough neuroscientists did, I’d reconsider, but their reactions seem to be similar to mine. Maybe we’re all wrong and McFadden will be vindicated someday. But until then, I’m more interested in mainstream neuroscience.


            I wasn’t looking for a debate; I was just sharing something I’d seen and knew you guys had an interest in. That it’s “interesting” the author admits to a form of dualism has no probative value. Likewise references to the intuitions behind a theory, or why others might find it enticing. It’s a clear signal of your opinion, but fact- and argument-free. Worse, it skates the edges of ad hominem.

            This reply shows fine reasoning, and I have no quibble with it (since, again, not looking for a debate). In fact, I find it a fairly compelling reply. This, or even a shorter version, would have been a good way to answer my original question!


            I do pop over there sometimes, but WP has really cranked up the ads on blogs that allow them, and I have an increasingly hard time with ads. It’s keeping me off more and more websites, because I just can’t take it. It’s definitely become a factor in who I follow these days.


          23. In my experience, the motivations for a theory are often a useful heuristic.

            WP didn’t use to show ads to logged-in WP users, even on free blogs, and certainly not on premium ones. I hadn’t noticed the change since I run a blocker. Definitely not an improvement, although as ads go, I didn’t find them too invasive. Not that I found them innocuous enough to keep the blocker off.


          24. I’m afraid we’ll have to disagree on the heuristic. I try very hard to avoid judging an idea by its antecedents and, as much as possible, to go by what it says, not who said it or why they said it. A good pedigree is no guarantee.

            I need to either tweak my ad blocker or find a better one. Which one do you use? Do you have a preferred browser?


            I use uBlock Origin on Chrome. I like uBlock compared to the others I’ve tried (such as AdBlock). But I haven’t done any research on blockers in a while, so I’m not sure it’s the current best. I use Chrome just because I have stuff synced into it. I also like Firefox, but only use it occasionally.


          26. I’ve got AdBlock, and it’s either not very good or I need to tweak it. There was something about AdBlock for Chrome and Android being sold to an outfit that may have turned it into some kind of Badware, but the Firefox version was said to be okay, at least for now.

            I think it might be time to switch, especially since I also use Chrome quite a bit. (I’m just increasingly unhappy with Google and am trying to avoid them when I can.)


          27. I did a little bit of googling earlier and remembered why I like uBlock. It’s light on resources. It doesn’t seem to slow site loading, or at least not by much. It’s probably more technical to configure than AdBlock (although not technical by your standards). I think what hurt AdBlock is they started selling exceptions to advertisers, and in my experience, some of the exceptions were obnoxious.

            The biggest issue is that an increasing number of sites are detecting the blockers and either nagging about it, or outright refusing to let you see the content with it on.


          28. Sounds like I need to switch. I know what you mean about sites nagging or refusing. I suppose it’s another locksmith/lockpicker race, each reverse-engineering the other.

            WP is getting more aggressive about ads, and so is YouTube, I’ve noticed. It has the effect of making me do less — the parasites are killing the host.


          29. In the spirit of “someone’s wrong on the internet!”, I noted Mike’s suggestion that the myelin on axons might be involved in insulating the neuron from effects of the EM fields. That’s not what the myelin is for. The (main) function of myelin is to speed up the signal down the axon. The action potential of myelinated axons is (I think) about 100 times faster than unmyelinated.

            *


            James, we’ve discussed this before. Myelin fulfills its role by keeping the effects of the axon’s signal contained within it, and keeping outside effects from interfering with it; in other words, by providing insulation. Among those outside effects is the EM field.


            I have to admit James’s account of myelin’s purpose is how I’ve always heard it, although I make no claims to being well read wrt neuroscience. My question is: if myelin is such a good insulator, how do we record EEG signals not just outside the nerves, but outside the skull?

            Does the myelin act like a Faraday cage? Is it efficient in the EMF domain, or is it as much or more about the electro-chemical action happening along the axon?


          32. The voltage in neural membranes is typically measured in tens of millivolts. What makes it out to the electrode on a scalp, which is typically measuring the effects of hundreds of millions of neurons, is tens of microvolts. To be fair, a lot is lost through the skull and scalp. A TMS pulse, by comparison, is typically over 100 volts.
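            For a rough sense of the scale gap being described, here is a back-of-the-envelope comparison (the specific numbers are illustrative assumptions within the stated ranges, not measurements):

```python
# Order-of-magnitude comparison (illustrative values, not measurements).
membrane_potential_v = 70e-3  # tens of millivolts across a neural membrane
scalp_eeg_v = 30e-6           # tens of microvolts at a scalp electrode
tms_pulse_v = 100.0           # a TMS pulse, typically over 100 volts

# The scalp signal is thousands of times weaker than the membrane potential,
membrane_to_scalp = membrane_potential_v / scalp_eeg_v
# while a TMS pulse is thousands of times stronger than it.
tms_to_membrane = tms_pulse_v / membrane_potential_v

print(f"membrane/scalp ratio: ~{membrane_to_scalp:,.0f}x")
print(f"TMS/membrane ratio:  ~{tms_to_membrane:,.0f}x")
```

            Both gaps are three to four orders of magnitude, which is the asymmetry the comment is pointing at.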


          33. Apparently TMS at those levels can have profound effects on the brain in some people.

            Which demonstrates, doesn’t it, that signals aren’t fully contained by myelin and that fields can have significant effects. The proposition is that, being bathed in its own EM field, along with possible resonant effects, there might be some influence over how the system works. Analog systems, especially if non-linear dynamics are involved, can be extremely sensitive to small inputs (the butterfly in Brazil thing).
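            The sensitivity point is easy to demonstrate in the abstract with a standard toy nonlinear system, the logistic map in its chaotic regime. (This is a generic illustration of sensitive dependence on initial conditions, not a model of the brain.)

```python
# Sensitive dependence on initial conditions in a chaotic nonlinear map.
def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-10, 60)  # perturbed by one part in ten billion

early_gap = abs(a[5] - b[5])                                # still tiny
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # fully diverged

print(f"gap after 5 steps: {early_gap:.2e}; max gap beyond step 40: {late_gap:.2f}")
```

            A perturbation of one part in ten billion is invisible for the first few iterations, yet the trajectories bear no resemblance to each other a few dozen steps later.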

            (Heh. Seems the shoe has switched feet. Now I’m the one arguing a theory seems viable. 🙂 )


          34. The myelin does not insulate or have much effect, if any, on an electric, magnetic, or EM field. The myelin is an insulator in that ions and free electrons will not flow through it (without a huge voltage).

            The signal is passed down an unmyelinated axon when an ion channel opens up. That creates a localized flow of ions into the cell, and that flow creates a circuit where those ions get pumped out by the nearby membrane. This circuit causes the next channel in line to open, and the signal continues down the axon that way. But if the axon is myelinated, the ions can’t get out through the nearby membrane, so the circuit has to go all the way down the axon until it hits a node where the myelin stops. Here another channel opens and continues the signal, with the circuit jumping to the next node. This jumping is what makes the signal go faster.
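            The speed-up from jumping can be caricatured with a toy count of channel-opening events. (The step sizes below are illustrative assumptions, chosen to be roughly consistent with the ~100x figure mentioned earlier in the thread; real conduction velocities depend on axon diameter and other factors.)

```python
# Toy caricature of continuous vs saltatory conduction.
# Assumption: each channel-opening event takes roughly the same time, so
# speed is proportional to how much axon a single event advances the signal.
axon_length_um = 10_000      # a 1 cm axon
continuous_step_um = 10      # local circuit advances ~10 um per event (assumed)
saltatory_step_um = 1_000    # a saltatory jump spans a ~1 mm internode (assumed)

continuous_events = axon_length_um / continuous_step_um  # events without myelin
saltatory_events = axon_length_um / saltatory_step_um    # events with myelin

speedup = continuous_events / saltatory_events
print(f"{continuous_events:.0f} vs {saltatory_events:.0f} events; speedup ~{speedup:.0f}x")
```

            The axon length cancels out; the speed-up is just the ratio of internode length to local-circuit step size, which is why longer internodes mean faster conduction in this caricature.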

            Note that these channels are “voltage gated channels”, which means they open when the voltage across the membrane changes in a certain way. Applying a high voltage from outside is likely to open all the channels all at once, borking any signaling the neuron could be doing.

            I’m basing the above on my undergrad cell biology classes, and using this: https://www.ncbi.nlm.nih.gov/books/NBK27954/ as confirmation.

            *


          35. The paper you link to makes no mention of EM fields. But even if the only thing it does is increase the distance between neighboring axons, that has an effect. (Glia around the soma can play a similar role.)

            On what makes the signal go faster, from the paper (emphasis added):

            Myelin is an electrical insulator; however, its function of facilitating conduction in axons has no exact analogy in electrical circuitry. In unmyelinated fibers, impulse conduction is propagated by local circuits of ion current that flow into the active region of the axonal membrane, through the axon and out through adjacent sections of the membrane (Fig. 4-1). These local circuits depolarize the adjacent piece of membrane in a continuous, sequential fashion. In myelinated axons, the excitable axonal membrane is exposed to the extracellular space only at the nodes of Ranvier; this is the location of sodium channels [2]. When the membrane at the node is excited, the local circuit generated cannot flow through the high-resistance sheath and, therefore, flows out through and depolarizes the membrane at the next node, which might be 1 mm or farther away (Fig. 4-1). The low capacitance of the sheath means that little energy is required to depolarize the remaining membrane between the nodes, which results in local circuit spreading at an increased speed.


            And re Faraday cages, myelin would be more like an anti-Faraday cage. A Faraday cage works because electrons are free to flow through the walls of the cage. An electric field will then spend all its energy pushing around the electrons in the walls of the cage, and so doesn’t get to anything inside the cage. As I said above, myelin works because it prevents electrons and ions from flowing through it. So the electric field would just pass through myelin.

            *


  11. Hi Mike,

    I read the article you cited and much enjoyed it. There’s quite a bit there beyond the point you brought forward here; in particular I enjoyed the discussion of the way “goals” of individual cells are able to combine into novel, collective “goals” (e.g. of organs, tissues, etc.) through intercellular sharing of nutrients and information.

    I think a big part of what this article is saying is that there is no singular or obvious threshold at which the “magic” of cognition occurs, and that by allowing for the fact that very simple systems such as human cells have particular competencies, we can understand that the way they form cooperative networks leverages these “basic” competencies into complex adaptive systems. I skimmed it quickly and may not have digested it fully, but I think what is being said is that the alignment of shared “goals” and competencies is a plausible mechanism that gives rise to larger wholes, which possess in turn higher-order goals not readily explained by an analytical approach that insists on separate individual parts operating as lone actors.

    This is, in some sense, about how unification of shared interest releases novel states of order in biological systems, and how the capabilities of the larger systems can be seen as rooted in the adaptive capabilities of the individual components, which, when cooperatively networked, produce the intelligence (their word) of the higher-order system. Pretty fascinating stuff!

    Michael


    1. Hi Michael,
      I agree it’s an interesting article. The mechanism where the agendas of individual parts are converted to agendas of the whole is indeed fascinating. As well as the discussion about when it breaks down, such as in cancer cells. (I found the part about converting cancer cells back to cooperative ones interesting.)

      And definitely the core of any higher order intelligence in biology is rooted in these mechanisms. There’s resonance here with Antonio Damasio’s biological value ideas, and that of consciousness being a homeostatic mechanism.

      That said, after the discussion here, I’m still convinced this is more teleonomy than teleology, but I fully acknowledge it’s often just easier and more productive to speak as if these systems were doing things for certain purposes. They’re not purposes the systems in question ever envision, but they are purposes we can envision, and that’s probably enough.

