A correction: LeDoux’s actual views on instrumental learning in vertebrates

I have to make a correction.  In my post on LeDoux’s views on consciousness and emotions, I made the following statement:

Anyway, LeDoux states that there is “no convincing” evidence for instrumental behavior in pre-mammalian vertebrates, or in invertebrates.  In his view, instrumental behavior only exists in mammals and birds.

As it turns out, this is wrong.  In his hierarchy, he makes a distinction between instrumental learning that is habitual versus goal-oriented (action-outcome).  On my first pass reading his description, I assumed that a habit could only form after initial goal-oriented learning.  But while checking back on some details, I realized he actually describes learning that leads directly to habits, without the goal-oriented stage.

In practice, an animal may engage in random trial and error behavior, some of which leads to a result that reinforces the behavior.  If repeated often enough, a habit develops.  Habitual learning can be distinguished from goal-oriented behavior by seeing what happens when the reward is later removed.  In goal-oriented behavior, the behavior quickly ends, but habits tend to persist for a while.  (Which of course is what a habit is all about.)

Habitual learning is much slower than the goal-oriented version, much more stimulus-response driven, and far less flexible, but apparently it does happen.  I’ve dug around a bit in the literature, and it appears to be widely accepted.
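
To make the distinction concrete, here is a minimal sketch of the two kinds of control in reinforcement-learning terms.  This is my own toy illustration, not anything from LeDoux or the studies: a model-free (habitual) agent caches an action value and nudges it a little on each trial based on experienced reward, so both acquisition and extinction are slow; a model-based (goal-directed) agent consults its current model of the outcome, so its valuation collapses the moment the reward is gone.

```python
def model_free_value(trials, reward, alpha=0.1):
    """Habitual (model-free) control: a cached action value updated
    incrementally from experienced reward.  Both acquisition and
    extinction are gradual, so responding persists for a while after
    the reward is removed."""
    q = 0.0
    history = []
    for t in range(trials):
        q += alpha * (reward(t) - q)  # small incremental update each trial
        history.append(q)
    return history

def model_based_value(trials, reward):
    """Goal-directed (model-based) control: the agent consults its
    current model of the outcome, so the action's value drops the
    moment the model registers that the reward is gone."""
    return [reward(t) for t in range(trials)]

# Reward is present for the first 50 trials, then removed.
reward = lambda t: 1.0 if t < 50 else 0.0

mf = model_free_value(100, reward)
mb = model_based_value(100, reward)

# Five trials after the reward disappears, the habit's cached value
# is still high, while the goal-directed value is already zero.
print(round(mf[54], 2), mb[54])  # 0.59 0.0
```

The “remove the reward and watch what happens” test described above falls out directly: the model-based curve is a step function, while the model-free curve decays slowly, which is the persistence that marks a habit.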

So to correct the statement above, LeDoux does see instrumental learning as existing in all vertebrates, not just mammals and birds.  However, it is goal-oriented learning that he doesn’t see as having been demonstrated in pre-mammalian vertebrates.  He sees fish, amphibians, and reptiles as having only the habit-forming version.

I’m not sure what to make of this habitual type of instrumental learning.  Habits appear to be largely nonconscious, so would learning them be as well?  Of course, LeDoux doesn’t even see goal-oriented instrumental learning as conscious, so in his view this distinction only amounts to different levels of sophistication in nonconscious learning.

As I mentioned in the other post, Feinberg and Mallatt, in The Ancient Origins of Consciousness, do see instrumental learning as indicating what they call affect consciousness, aka sentience.  And the indications of instrumental learning in all vertebrates drive their conclusion that all vertebrates are sentient.

But Feinberg and Mallatt don’t get into the distinction between habit and goal-oriented instrumental learning.  So I don’t know if this is a difference they overlooked, disagree with, or accept while seeing even the habit-learning version as indicating affect consciousness.  A clue might be that, when deciding on their behavioral criteria for affect consciousness, they ruled out “persistence in pursuit of reward” as a criterion, because it “could reflect aroused but unconscious habits.”  (Emphasis added.)

A case could be made that even in habit learning, if not in habit persistence, there needs to be a valence.  But both LeDoux and the literature make clear that this happens in a representation-free or model-free manner, which may not leave much room for it to fall even within primary consciousness.

A question, then, is whether goal-oriented behavior can be demonstrated in fish, amphibians, reptiles, or invertebrates.  LeDoux doesn’t think so, and notes that habit and goal-oriented behavior look alike without explicit tests to distinguish them, although the rapidity of learning might provide clues.

So, this may complicate my new hierarchy, particularly the level where affects begin.  I’m going to have to give this some thought, and do additional research, but I wanted to get the correction out.


34 Responses to A correction: LeDoux’s actual views on instrumental learning in vertebrates

  1. paultorek says:

    Thanks for the update. Interesting. In my own case, habitual responses can be conscious or not, and I could probably make it through a whole day responding habitually but that wouldn’t mean I wasn’t conscious all day. So I think we should appeal to other criteria besides learning patterns alone, to decide whether (e.g.) reptiles are conscious.


    • Definitely we’re always conscious when engaging in habitual behavior, but our consciousness can be focused elsewhere, planning ahead, daydreaming, having a conversation with someone, etc. So the possibility exists that a creature without a consciousness at all could engage in habitual behavior.

      I’m always interested in ideas along these lines. What other criteria would you suggest?


      • paultorek says:

        Use the structure of the human brain, and its known functions, to make an educated guess about what’s going on in other animal brains.


        • The problem is that consciousness in humans is associated with the cerebral cortex, which is missing in pre-mammalians (although I’ve seen some describe the reptile brain as having a basic one). Using that criterion, many biologists rule out consciousness for those species.

          Of course, a number of things that were the province of lower level structures migrated to the cortex, so possibly consciousness did too. But that leads us back to assessing capabilities.


          • paultorek says:

            You’re missing a middle path there, though: the functional structure, not just the gross physical structure. The wiring diagram, if you will.


          • That’ll be viable once our knowledge of the wiring is thorough enough. The thing is, acquiring that knowledge involves correlating wiring with observed functionality.


          • Use the structure of the human brain, and its known functions, to make an educated guess about what’s going on in other animal brains.

            And then….

            You’re missing a middle path there, though: the functional structure, not just the gross physical structure. The wiring diagram, if you will.

            Oh man Paul, I think that you’ve really nailed it here! Of course the true skeptic also considers him or her self biased, and thus apparently you’ve proposed something which I’m predisposed to accept. I get the sense that many do not share our position here. Given my own “dual computers” model, at the current end of comments I’m adding a “wiring diagram” of my own. It may be fun to see how well others are able to challenge it and so expose their own motivations to my analysis. Might anyone even support this position? I hope to be hearing from you regardless…


        • Thanks for taking a shot though!



  3. James Cross says:

    Since both habitual and instrumental (and probably a good bit of instinctual) behavior is goal-directed, you can’t really tell if consciousness is involved in it or not unless you introduce another non-behavioral criterion of brain structures.


    • Actually, the literature says that habitual behavior is not goal-directed (at least not in the sense of a representation of the goal existing in the brain of the animal).

      The brain structures idea is similar to Paul’s. As I noted there, that’s problematic since pre-mammalians lack a cortex, the main structure associated with human consciousness. That leads a lot of biologists to rule out consciousness for them solely on that criterion.


      • James Cross says:

        You’ve based your definition on an anatomical structure but there is no way to know what the representation in the brain of the animal is like.

        Even if the behavior is primarily instinctual with no deliberation or planning involved, there still could be a representation of a goal. Even a spider spinning a web could have some representation of the goal of a web.


        • Are you sure? What do you mean by “representation”?

          In machine learning, there is a term called “model-free reinforcement learning” (as opposed to “model-based reinforcement learning”). The assertion in the literature is that habit acquisition can be model-free RL.

          A principle in animal cognition is to never assume a higher-order explanation when a lower-order one suffices. By that standard, it’s hard to assume there’s a representation there.

          That said, I’m not settled on this stance. I’m interested in any counter arguments you or anyone else can think of.


          • James Cross says:

            What do you mean by representation?

            Spiders have come to mind because a large but apparently common yellow garden spider has built a web in our backyard.

            https://en.wikipedia.org/wiki/Argiope_aurantia

            With multiple weeks of little or no rain, the web has persisted, and I’ve watched several insects become caught in it. In addition to the large web, the male spider makes this unusual zigzag pattern in the web. You can see it in the gallery pictures on the Wikipedia page.

            I can agree that this spider probably has not planned its web in the sense a human might plan a house. The web making is instinctual. However, to accomplish the task, there had to have been some representation or mapping of the spider’s body, the surrounding plants, the web, and some sense of when the web was complete (the immediate goal), so the spider could take up a position roughly in the center of the web and wait for prey to become ensnared. The male spiders make the zigzag for reasons not totally understood. The spider, according to Wikipedia, rebuilds the central part of the web every night. Just guessing, but since the spider sits in the central part of the web, it might be more prone to breakage if not rebuilt every night.

            So we have a pretty elaborate set of actions that evolved for the purpose of capturing prey. If humans through trial and error learned to construct nets to capture prey, we wouldn’t question that the behavior was goal-directed. Yet, when evolution through trial and error also develops an ability to capture prey in a web, we want to think it is not goal-directed, that it would be unscientific to think of it as goal-directed. Certainly there are major differences in time scale. The spider’s web-making ability arose over thousands of years. Humans’ net making ability arose maybe in a few weeks, or certainly no more than a generation or two for perfecting the technique.

            We are looking at variations of the same process. Perceptions lead to decisions and actions which affect survivability. The “decisions” and “actions” change slowly with genetic evolutionary changes in the case of the spider. The changes for the human are more rapid because we have greater and more general purpose neurological capabilities. In both cases, the actions require the perceptions and representations needed for them to happen. It is probably pointless to debate whether they represent consciousness or not.


          • “However, to accomplish the task, there had to have been some representation or mapping of the spider’s body, the surrounding plants, the web, and some sense about when the web was complete (the immediate goal) so the spider could take up a position roughly in the center of web and wait for prey to become ensnared.”

            Not necessarily. I used to think the same thing. But we have to be careful about assuming that the state that binds the sequence of tasks together has to be in the spider’s brain. It could be that the state is in the environment, with the spider reflexively reacting to the web’s different states. So it reacts one way when there is no web, a different way when the web is partially formed, yet a different way when it needs to be strengthened, and finally in a way when it looks complete.

            Biologists once assumed that ants had some conception of what they were doing, until pheromone trails were discovered. A lot of the state of ant tasks is held by those chemical trails, so much so that a scientist (I think it was E.O. Wilson) once spelled his name with pheromones, then videoed the ants conforming to the letters of that name. Maybe spiders are more sophisticated, but I wouldn’t be surprised if they could be manipulated in a similar fashion.
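
            The “state in the environment” idea can be sketched in a few lines. This is a toy of my own, with made-up web states, and not a model of real spider behavior: a purely reactive agent with no memory or plan, whose stimulus-response rules are keyed only to the current state of the structure, still produces an orderly multi-stage sequence.

```python
# A purely reactive "builder": no internal plan or memory, just a
# stimulus-response rule keyed to the current state of the structure.
# The sequencing lives in the environment, not in the agent.
# (State and action names here are invented for illustration.)

RULES = {
    "no_web": "lay_frame",
    "frame": "add_spiral",
    "spiral": "reinforce",
    "reinforced": "sit_and_wait",
}

NEXT_STATE = {
    "lay_frame": "frame",
    "add_spiral": "spiral",
    "reinforce": "reinforced",
    "sit_and_wait": "reinforced",
}

def step(world_state):
    # The agent senses only the current state and fires the matching rule.
    action = RULES[world_state]
    return action, NEXT_STATE[action]

state = "no_web"
actions = []
for _ in range(5):
    action, state = step(state)
    actions.append(action)

# The full "web building" sequence emerges with no stored plan:
print(actions)  # ['lay_frame', 'add_spiral', 'reinforce', 'sit_and_wait', 'sit_and_wait']
```

            Nothing in the agent binds the stages together; remove the lookup of the world’s current state and the sequence disappears, which is the sense in which the pheromone trail, or the half-built web, carries the task state.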

            Good point about the feedback cycle in evolution, as opposed to the shorter one in animals. For most life, the outer feedback cycle is sufficient. Unicellular and plant life react to the environment, but those reactions are all programmed by their genes. Simple animals, such as worms, are similar. Only with vertebrates, some arthropods, and cephalopods, do we get the inner feedback cycle, with widely varying levels of sophistication.

            We designate certain levels of that sophistication as “conscious”. I agree it’s pointless to argue on where to draw the line. But I’m still interested in what the actual capabilities are. It matters for my own intuitive line.


          • James Cross says:

            “So it reacts one way when there is no web, a different way when the web is partially formed, yet a different way when it needs to be strengthened, and finally in a way when it looks complete.”

            How is that different from representations? That is precisely Hoffman’s point. Our perceptions and actions are tied together by natural selection and evolution. A human’s representation may be more sophisticated but it is working the same way.


          • I’m not quite following what you’re asking. There are all kinds of different representations. An animal can have visual representations of objects in its environment without having complex task sequence ones. The spider would have patterns it perceives, such as the state of a web, without having a complex plan for building and using the whole web, at least not in one centralized pattern.


          • James Cross says:

            Yes. The spider would have representations in response to its world and its state. Actions, probably entirely instinctual, are triggered by the representations. The spider may have no understanding of an overall plan to construct the web but is reacting to changing states of the web. The perception/action combination developed over evolutionary timescales and was selected by evolution. The lack of an understanding of any overall plan by the spider isn’t disqualifying for me to consider this consciousness. We don’t know how much of our perceptions and actions lack understanding of an overall plan, or how limited our own perceptions may be. What compels us to hate others who are different from us? Destroy the ecosystem? Grow our population until it outstrips the carrying capacity of the earth?


      • James Cross says:

        https://www.pnas.org/content/113/18/4900

        Here we propose that at least one invertebrate clade, the insects, has a capacity for the most basic aspect of consciousness: subjective experience. In vertebrates the capacity for subjective experience is supported by integrated structures in the midbrain that create a neural simulation of the state of the mobile animal in space. This integrated and egocentric representation of the world from the animal’s perspective is sufficient for subjective experience. Structures in the insect brain perform analogous functions. Therefore, we argue the insect brain also supports a capacity for subjective experience. In both vertebrates and insects this form of behavioral control system evolved as an efficient solution to basic problems of sensory reafference and true navigation. The brain structures that support subjective experience in vertebrates and insects are very different from each other, but in both cases they are basal to each clade. Hence we propose the origins of subjective experience can be traced to the Cambrian.


        • This is one of the reasons I keep my hierarchy handy:
          1. Reflexes
          2. Perception
          3. Action-selection / sentience
          4. Deliberation
          5. Introspection

          My initial impression from the abstract is that they’re identifying layer 2 functionality in insects. I don’t think that, in and of itself, is controversial. But equating it with consciousness is. They do assert layer 3, but they only cite layer 2 functionality to justify it (attention).

          Whether this amounts to primary consciousness is a definitional matter, but personally I need more.


  4. Callan says:

    A clue might be that, when deciding on their behavioral criteria for affect consciousness, they ruled out “persistence in pursuit of reward” as a criterion, because it “could reflect aroused but unconscious habits.”

    They’re going to have a rough time thinking that maybe it’s always in pursuit of a reward; they just lack the internal access to be able to perceive it. So it feels like acts without ends. What they think is consciousness is actually less conscious, in that it’s witless to the rewards the thinking is pursuing.


    • It seems obvious that consciousness isn’t needed during habit execution. A key question for me is, what’s needed during habit learning? What initially causes the new reaction or action sequence?


      • Callan says:

        Well my estimate is that there are a number of sub systems that are continually operating to construct habits – like when we walk, we don’t consciously control every joint’s angle at each second. A series of sub systems learns all this, based on a sort of blueprint of what is required (standing up, walking forward). When I touch type, my fingers are kind of moving on their own, working from the blueprint of what I want to type, but I don’t control every little movement – and it does freak me out a little when I pay attention to it, as I see the moving and it’s kind of me but kind of not me.

        To me, so many daily functions are not at all conscious and are handled by sub systems which work out the details – in much the same way as neural nets learn things (because we ripped off the idea of neural nets from our own brains).

        Critically these sub systems are in action before your consciousness is – they are already operating before you even think about them. That’s what makes them kind of invisible.

        To me that seems to answer the question. Does it work as an answer for you, or seem way off?


        • Thanks. That’s along the lines of what I’m trying to work out.

          It does seem like babies have to focus a lot of mental effort when they’re just starting to walk. But it also seems that we pick up habits without being aware of them until they’re pretty far along. If that latter system is what fish use, then they’re learning without conscious awareness.

          Ultimately, the question might be moot. Even if fish and amphibians are conscious for the learning, the slowness of it and repetition it requires implies that their consciousness is far more limited than the mammalian or avian versions. If we could in some way have access to it, we might not find it even noticeable by our standards.


          • Callan says:

            By my estimate what we take for consciousness in ourselves are sub systems whose habit forming inputs are aimed not outward but inward. Probably initially used to learn useful habits with regard to other members of the tribe – the sub systems turn the ability to read and aim it onto the system that is doing the reading. With mixed results.

            What we have with philosophy or psychology or cognitive science is attempting to double turn the eye onto the eye a second time. First A: the eye attempts to look inward at itself, then philosophy etc try to B: look at A…while A is looking at A.

            So you get a recursion problem. You get C and D and E…all steps at attempting to grasp the entirety of the self, while the self is actually stepping backward as it tries to comprehend the entirety of itself, thus failing its (impossible) task but being unable to perceive such failure for not being able to see the whole thing.

            How do I know that – I can’t, I could only know it if I could see the whole of myself whilst not actually stepping back outside myself. But I think if you step back and look at yourself, then step back and look at yourself looking at yourself, then step back and look at yourself [looking at yourself [looking at yourself]] then you can start to see the pattern that means you’re not winning some absolute knowledge. None of these step backs is the final ‘Ah ha, I comprehend all!’. It’s just going to go on forever, endless recursion, or until you run out of cognitive puff. Frankly the second recursion already has me dizzy.

            I think Bakker’s blind brain theory has a longer version.


  5. Lee Roetcisoender says:

    “This is one of the reasons I keep my hierarchy handy:
    1. Reflexes
    2. Perception
    3. Action-selection / sentience
    4. Deliberation
    5. Introspection”

    For what it’s worth Mike, your hierarchy is clearly anthropocentric, which makes it an arbitrary, highly restrictive and prejudiced model for a definition of consciousness. As an instrumentalist, I’m surprised that you’ve painted yourself into such a restrictive corner. The following quote is a good example of the restrictive nature of your model.

    “Biologists once assumed that ants had some conception of what they were doing, until pheromone trails were discovered. A lot of the state of ant tasks are held by those chemical trails…”

    As a chemical compound, just like any other chemical compound, pheromone is information, information that the ant species just happens to use to communicate. So the compelling question becomes: Is communication possible without some form of sentience? The short answer is no. Most information is embedded in matter with the exception of synthetic a priori information such as probability. Probability is neither matter nor energy, nevertheless, probability is still information. If matter is reducible to information, (which I believe it is), then it becomes ineluctable for some form of sentience and/or consciousness on the receiving end of that information. Without that relationship of information with sentience, there is no motion or form. The alternative to that model which most people prefer is magic…


    • Lee,
      My hierarchy is indeed anthropocentric. It’s studying layers of sophistication with us as the highest layer, an inherently anthropocentric paradigm.

      But I think you’ve misdiagnosed the chief reason why it’s anthropocentric. It isn’t because I’m considering an anthropocentric conception of consciousness. It’s because consciousness itself is an anthropocentric concept. It’s us looking for ourselves in others, for humanity in animals, for life in machines. Panpsychism is looking for us in all the universe.

      Consciousness, at its core, is paradoxically both anthropocentric and anthropomorphic. It’s an assumption that there is something special about us, about the way we process information, something distinct from the rest of the universe. Of course, panpsychism says this special thing is everywhere, but it assumes the special thing in and of itself exists as something other than the viewpoint of certain systems, as something other than simply a label for machines like us.


      • Lee Roetcisoender says:

        “It’s an assumption that there is something special about us…”

        I agree that the ideology of “exceptionalism” is the prevailing paradigm. And that position is problematic, revealing itself across the entire spectrum of debate on the subject.

        “…panpsychism says this special thing is everywhere, but it assumes the special thing in and of itself exists as something other than the viewpoint of certain systems as something other than simply a label for machines like us.”

        Certainly. First, panpsychism says that human beings are not special, and second, that consciousness is the venue for the experience of discrete systems. It’s that simple. To be succinct, panpsychism is in contrast to scientism, not science itself; that is, of course, if we take science to be a method rather than a dogma. One has to be at least willing to get off the anthropocentric bandwagon in order to do serious research on this elusive phenomenon known as conscious experience.


  6. It seems to me that Le Doux’s HOT does not provide us with brain architecture which is extensive enough to sensibly account for issues such as “habit”, not to mention the troublesome question of deciding the point where non-conscious function effectively gets transformed into conscious function. You can throw all the neuroscience you like at bad brain architecture, and all this should do is bolster its politics. Many seem to believe that neuroscience has the ability to actually convert soft science into hard science! I’ll present what I consider to be a better approach.

    There is a place in my daily commute where I have the option of using a more direct path that can get backed up but is usually best, and a slightly longer one that almost never backs up. I’ve developed the habit of using the shorter path, but always check a traffic app when I leave work to see if I should use the other one. Unfortunately when I’ve decided that the alternate route would be best, by the time I get there I usually miss it, because there are some harrowing traffic conditions at that junction that tend to have me thinking about other things. So how does my model suggest that I formed this particular habit, which I often fail to consciously overcome, and what exactly is going on here?

    Consider the brain as a neuron fueled computer which is not conscious, that produces a virtual computer which is instead fueled by means of valence. So here we have a vast supercomputer which does an amazing number of calculations in order to produce “me”, while my valence inspired computations should amount to less than a thousandth of one percent of that number. Thus I interpret my inputs of valences, senses, and memories, and construct scenarios about how to make myself feel better through my only non thought output, or muscle operation. So what does this model imply about my above “habit” of using the standard road? Consider the following diagram.

    Notice that the entire conscious mode of operation exists as an output of the vast non-conscious computer. Furthermore pay attention to the “Learned Line”. This conduit is meant to represent how the vast majority of what we generally take credit for in a conscious sense, is actually handled non-consciously. By “learning” how to pronounce words through valence based function as an infant, for example, the tiny conscious computer conditions the vast supercomputer to take care of this sort of thing automatically.

    If “HOT”, “GWT”, “theory of constructed emotion”, or any other mainstream brain architecture is able to provide practical accounts of human function, then I’d love to match them up with my own such model. My perception is that they instead depend upon academic fascination with neuroscience itself. 😦


    • LeDoux is actually one word. Interestingly, he’s Cajun, from the same region of Louisiana as my family.

      You assume that conscious operation is needed for habit acquisition. It might be that for humans, it often does involve conscious action in the early stages, the cognitive stage of procedural memory.

      But the question is, is that always true? Can we acquire habits nonconsciously? If we can, that has implications for a creature that only displays reflexive and habitual behavior.

      I’m really interested in this question, but any answer on it won’t be satisfying unless it includes evidence or at least solid logical reasoning.


      • Mike,
        I noticed the extra space after it was sent, but was mainly just happy that I got the letters right. And I do hope that you’ve also got yourself a fancy Cajun surname. If I’m going to be quite open with others online, then I need reason to suspect that it won’t be easy for angry people to track me down! It’s surely the same for you.

        I think that I can give you some solid logic that it’s productive to say that habits can only be acquired by means of conscious function, and this does go beyond the model that I’ve just presented.

        As you know, there are things that function entirely without consciousness, as well as things like our bodies that have both conscious and non-conscious elements. So given this circumstance, how will it be most productive to define the “habit” term? Shall we say that the sun has “habits”? Or the computer that I’m now typing on? No; in that case we might as well use the term to represent all elements of causality. Instead it seems productive to define the term to represent a situation like my freeway scenario. So by definition, “habit” shall exist here as the aftermath of repetitive conscious function, which thus gets passed off to non-conscious function.

        I believe that this generally solves the problem that you speak of, except of course that so much remains unsettled on both the brain architecture front, as well as deciding which creatures would apply to a “conscious” dynamic given associated experimental evidence. I consider myself quite qualified to discuss that sort of thing if you, Paul Torek, or others would like to get into various particulars.


        • Eric,
          Smith is my actual name. I have one Englishman in my otherwise Cajun ancestry. (Or at least that’s the family lore.)

          Here’s my concern with your reasoning. (Which is similar to what mine used to be, but has been thrown into question recently.) From the studies I’ve been reading, instrumental learning in fish and amphibians seems to be exclusively habitual, meaning it happens far more slowly than in mammals and birds. For an example, see:

          In conclusion, amphibians adjusted to shifts in incentives by gradual behavioral reorganization, rather than abruptly reacting to unexpected changes in incentive magnitudes, as it has been shown in mammals [10]. These findings add to a growing body of comparative evidence suggesting that relatively more conservative vertebrate lineages regulate their behavior predominantly on the basis of habit formation and reorganization.

          https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0025798

          If we say that consciousness is required for habit acquisition, then it seems like the version in fish and amphibians is very limited. (I’ve read some stuff that indicates reptiles might be in an intermediate state.) In the language people often use, if the lights are on, they’re barely on.

          To some degree, this fits with the fact that a fish that’s had its forebrain destroyed appears to behave routinely. It can hunt, mate, and do many other things. But it appears to have lost its ability to learn. (I also assume it lost smell, since that goes through the forebrain.) In other words, its behavioral repertoire is almost entirely reflexive, so much so that destroying the non-reflexive part is barely noticeable, and even its limited learning abilities appear to be stimulus bound.


      • Well “Michael Smith” should be about as generic as they come in our country, so I suspect that most of the people that you piss off would give up right there anyway. You should be fine.

        Regarding the slowness of “learning”, that really shouldn’t make a difference in the end. Either it comes from repetitive conscious function (whatever the “consciousness” definition), and so will inherently be defined as “habitual”, or it’s perfectly programmed by means of the default form of function and so will not be.

        But this gets back into what I’ve mentioned often enough here as the greatest general failure of science that I perceive. It’s commonly thought that “true” definitions for terms like “habit” exist out there to discover rather than to define. I consider science to suffer horribly given that it functions upon nothing more than rough implicit principles of metaphysics, epistemology, and axiology. For example, regarding my associated first principle of epistemology I’ve been told, “No, no, a thousand times no!” And this was by an extremely well educated person for whom I have a great deal of respect! Science is still a relatively new endeavor, and thus it seems to have associated bugs to work out.

        Let’s say however that you’re able to grant that it’s useful to consider “habit” as a product of conscious function which becomes non-conscious through such repetition. Here even if circumstances cause something to happen perpetually in a non-conscious way, by definition it won’t be “habit”. Furthermore if fish happen to do things “consciously” from a given definition that later become automatic, then this shall by definition be “habit”, and even if it takes a great deal of associated repetition.

        Let’s say that we destroy the “conscious” part of a fish (from whatever definition) and it then continues to function reasonably well, that is, except for “learning”. Well that’s fine. It’s just that by definition it will not be possible for “habits” to subsequently form.

