The marshmallow test and conscious feeling

The recent news reports that cuttlefish are able to pass the marshmallow test are interesting.

The classic marshmallow test involved giving a young child a marshmallow but promising them a second one if they could hold off eating the first for 15 minutes. The kid was then left alone in a room with the first marshmallow for those 15 minutes, but monitored via a hidden camera. Some kids ate the first marshmallow immediately, others struggled with it, and about a third held out the entire 15 minutes. The ability of the child to delay their gratification was found to be correlated with success later in life, but a later replication of the study found that both a child’s ability to delay gratification and their later success in life were correlated with their socioeconomic background.

In the case of animals, the test usually involves holding off eating a somewhat tasty treat for a time in order to get a tastier one. That’s pretty much the sequence for the cuttlefish. They join a list of species that are able to do this, including great apes, corvids, and parrots, as opposed to the list that cannot, including rats and chickens.

The reason I find this test interesting is that it’s an example of value trade-off behavior, of value-based cost/benefit decision making. This is one of the criteria Todd Feinberg and Jon Mallatt use in their book, The Ancient Origins of Consciousness, to establish that affect consciousness is present. (Other criteria include non-reflexive operant conditioning based on valenced results, frustration behavior, self-delivery of analgesics, etc.)

Value trade-off behavior has always struck me as a particularly potent indicator of the presence of affects. It seems to indicate that the animal is feeling multiple impulses and has to choose to inhibit one in favor of another. The marshmallow type test is a particularly strong example since it involves a time-sequenced scenario that requires that the animal be able to imagine alternate future states. So it’s not particularly surprising that only relatively intelligent species can pass it.

There are weaker forms of the test that less intelligent species can still pass. One example is putting a tasty treat in a chamber that’s colder than the animal prefers. Normally the animal would avoid the cold, but they’ll often endure it to get to the treat. The problem is, without the time-sequenced component, it’s hard to be sure that one impulse isn’t just overwhelming the other. In other words, it’s hard to know whether the animal is truly feeling the affect, or is just having a storm of reflexes with a certain combination winning. It might well be some combination, a hybrid that challenges our binary notions of something either feeling or not feeling.

But it also shows that for a feeling to be a feeling, there has to be at least an incipient reasoning part of the system that utilizes it in decision making. If not, then it’s not a feeling but just a reflex or action program. In other words, to have feeling, you have to have cognition. The very meaning of feeling without it is incoherent. It’s like attempting to have art without an audience, a donut hole without the donut, or yin without the yang.

It seems like a lot of theories of consciousness overlook this simple realization. But maybe I’m missing something? Is there something that makes a feeling a feeling besides its effects?

25 thoughts on “The marshmallow test and conscious feeling”

  1. “Why didn’t you eat the marshmallow?”
    “I don’t like them.” | “It’s the wrong kind.” | “Only poor kids eat marshmallows.” | “Ew, it’s not toasted.”
    “Oh.”

    “Why did you eat the marshmallow?”
    “We got a whole bag of these at home. I’ll get more there.” | “What? I didn’t understand the instructions. Let’s start over. (tee-hee)” | “Here’s a dollar, give me another one.” | “I was fucking hungry.”


    1. That’s the problem with psychological tests. If the kid ate an hour before the test, they’ll have an easier time holding out. If they hadn’t yet eaten that day, and their parents need the payment from the study, resisting’s going to be a lot tougher.

      At least with animals those factors can be tightly controlled for.


  2. I’m with you, Mike – affect requires reasoning ability. I wonder if it also works the other way around, that you have to have at least some potential for affect (even if not currently experiencing affect) to do reasoning.


    1. I do think any reasoning system has to have motivation, a value system of some sort. But it doesn’t necessarily have to work the way animal ones do as evolved systems.

For example, if a robot receives damage, it can take in the sensory information, update its body model, and take that into account in its future decisions. It doesn’t have to have the evaluation constantly happening, with automatic arousal/revving up of its systems, and it doesn’t have to constantly override its impulses, using up energy and stressing its systems, the way an animal does when in pain.
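The robot scenario above could be sketched as an event-driven update rather than a continuous "pain" loop. This is purely a hypothetical illustration; the `BodyModel` class and its methods are made-up names, not from any real robotics framework:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a robot that updates its body model once when a
# damage signal arrives, rather than maintaining a constantly active
# arousal state. All names here are illustrative.

@dataclass
class BodyModel:
    # Map of component name -> health in [0.0, 1.0]
    health: dict = field(
        default_factory=lambda: {"left_arm": 1.0, "right_arm": 1.0}
    )

    def register_damage(self, component: str, severity: float) -> None:
        # One-time update when the damage signal arrives; no ongoing
        # evaluation keeps firing afterwards.
        self.health[component] = max(0.0, self.health[component] - severity)

    def plan_action(self, task: str) -> str:
        # Later decisions simply consult the stored model.
        arm = max(self.health, key=self.health.get)
        return f"use {arm} for {task}"

robot = BodyModel()
robot.register_damage("left_arm", 0.6)
print(robot.plan_action("lifting"))  # → use right_arm for lifting
```

The point of the sketch is that the damage evaluation happens once, at the event, and thereafter only the stored body state influences decisions, with no energy spent on a persistent aroused condition.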


As I’ve said before, I’m a neophyte in this area. So forgive me if my question is rudimentary. Are you saying, Mike, that cognition is a necessary (although perhaps not sufficient) ingredient for creating a hierarchy of values? Thus, the marshmallow test would be one way to gauge a child’s ability to compare values based on alternative courses of action. That seems clear to me, but I may be overly simplifying your point. And, perhaps more importantly, that value and reasoning are inextricably linked?


        1. Hi Matti,
          That sounds about right. Although “value” is perhaps a bit broad. Someone could argue that a plant “values” sunlight because its innate reactions are to grow toward it, without anything most of us would regard as cognition. So in that sense, value goes back to the earliest life. Feelings related to that value, on the other hand, are much more recent, albeit still arguably ancient.


          1. Oh, you are a wily coyote! You probably saw right through my quite transparent follow up argument lying in the weeds.

In short, whenever your blogging strays into the realm of ethics I’ve noticed that you raise the so-called “is-ought” gap as a sort of rational roadblock to an acceptable certainty in the realm of ethics, an affliction not imposed on pure science. I was hoping to get your affirmation that reason and value are indeed linked and thereby try to lead you back to the Hilary Putnam argument I raised sometime ago—expressed in his book, “The Collapse of the Fact/Value Dichotomy.” If you remember, I referenced that work many weeks ago. Putnam’s second chapter is conveniently entitled “The Entanglement of Fact and Value.” And, as you remember, I was quite feeble in my attempt to articulate a convincing summary of Putnam’s argument. At least you were unconvinced. But I do see some light through a little crack in your thinking. En garde!


          2. Can’t say I’m that wily. I totally didn’t see the ethics angle coming. And I’m afraid you’ll have to connect the dots for me to see it even now. The question is whether there’s anything in nature that makes certain values right and others wrong. The desire of a wolf pack to eat me may be totally in their nature, but so is my desire to not be eaten.


    2. [gonna jump in here ‘cuz I wanna talk about “affect”]

      What *exactly* do you mean by “affect”? Is it another word for “feeling”? (And how do you define “feeling”?). And is it different from emotion?

      I ask because I agree that a reasoning system has to have motivation. I call it having goals. Value is just how a situation relates to the goal state. Mike seems to be suggesting a difference between evaluating value and having affect when he says “It doesn’t have to have the evaluation constantly happening”. When a robot assesses itself to be in a damaged state and uses that evaluation in subsequent decisions, how is that not “evaluation constantly happening”? Also, does the robot determine damage once and then stop checking?

      *


Affects, feelings, and emotions are a definitional morass. I usually use the words “affect” and “feeling” interchangeably. But there is widespread disagreement on whether affects or emotions can be unconscious as well as conscious. And affects and affect-displays are often conflated.

This led Joseph LeDoux to coin the phrase “survival circuit” to refer to the non-conscious portions, but that doesn’t seem to capture the sophistication of some of the complex yet unconscious cognitive states that seem able to exist. Although most people are on board with “feeling” referring to the conscious part. Of course, that hinges on what we mean by “conscious”.

Anyway, an affect is usually considered to have a valence, arousal, and motivational dimension. Whatever effects the particular affect is going to have, they usually begin in a preconscious stage. So if there is damage to the body, there is the interoceptive / nociceptive signal registering the damage. Which should lead to a change in the current body map. That, I think, in turn triggers the affect with its dimensions. The arousal part in particular can have wide ranging effects on the body, which reverberates back interoceptively, reinforcing the affect.

        It’s the arousal part in particular I’m not sure the robot really needs. It seems like it could wait on that unless it explicitly decides to take action. The only reason would be if it needed to be physically prepared for action before that decision could be formulated (as in an animal). But other than that, I could see a robot keeping its body map updated, particularly if it’s important to its operations.
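The three dimensions described above, with arousal deferred until an action decision is made, could be sketched roughly as follows. The `Affect` class and the two functions are hypothetical names chosen for illustration, not an established model:

```python
from dataclasses import dataclass

# Illustrative sketch of an affect's three dimensions, with the
# energetically costly arousal component deferred until the system
# explicitly commits to action.

@dataclass
class Affect:
    valence: float        # negative = aversive, positive = appetitive
    motivation: float     # strength of the urge toward/away from stimulus
    arousal: float = 0.0  # physiological ramp-up, initially absent

def on_damage_signal(severity: float) -> Affect:
    # Registering damage updates the evaluation immediately...
    return Affect(valence=-severity, motivation=severity)

def decide_to_act(affect: Affect) -> Affect:
    # ...but arousal only ramps up on an explicit decision to act,
    # unlike in an evolved animal, where it precedes the decision.
    affect.arousal = abs(affect.valence)
    return affect

a = on_damage_signal(0.8)
print(a.arousal)  # → 0.0: evaluation without revving up
decide_to_act(a)
print(a.arousal)  # → 0.8
```

The design choice being illustrated is exactly the one in the comment: the valence and motivation of the evaluation can exist and inform planning while the arousal dimension stays dormant until needed.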


I think we can say animals (including people) need the arousal effect. If a predator comes into view, before the animal can figure out what it’s going to do, it seems adaptive for its fight-or-flight state to ramp up. And if it only becomes aware of the predator as it’s charging, its strongest reflexive reaction might be all it has time for.


      2. When people use “feeling” sometimes they include sensations: “I feel a gentle pressure on my foot.” By affect I mean a felt emotion which is either positive (liked) or negative (disliked).


        1. Well now I want to know what is an emotion such that you can feel it. I’m thinking an emotion is a systemic effect impacting multiple physiological systems, and “feeling” it is really a reference to the proprioception of those various systems. Do you think it is something different?

          *


  3. Hi,

I find it interesting that you think affects have that role in decision making or cognition. If they were like round wheels, okay, but to me they are like triangular wheels. Not at all good for biking. Affects are way too clunky and unclear. Like now I woke up too early and have a slight headache, and I also feel a little bit of stress because I don’t have much time and have to do other chores. Still I am typing this, because my prefrontal cortex is making me do this. He thinks selfawarepatterns deserves a reply a little bit more often even, although selfawarepatterns never agrees with him. It is also often: I feel fear, but I am still doing it because of reasoning, which feels quite neutral, just like heavy tension, in the sense that maybe the cortex is blocking the fear. It doesn’t seem like two affects are fighting it out. More affect against unaffectual reasoning. If you get my point. Greetings


I was curious actually, since you listen to Carroll’s AMA. Would you have a question for him that you would like to get answered?


    1. Hi Oscar,
      Disagreement is fine. It leads to the more interesting conversations. Conversations where we’re just saying “Yep” to each other are nice and satisfying, but they don’t tend to exercise our intellect much.

      I would say that your PFC is brokering between numerous affects, some urging you to go back to bed, others to move on with the day, and yet others to respond here. (Glad you chose the last one.) David Hume pointed out that reason is the slave of the passions. By that, he didn’t mean that reasoning is fruitless, that passion will always win. What he meant was that our very motivation to engage in reasoning comes from our feelings. Without them, the reasoning portions are just an empty analytical engine. Although in truth reason and feelings are constantly reacting to each other in a never ending loop.

      I don’t currently have a question for Carroll, at least not any that could be answered succinctly in an AMA. I’ve promised myself that as soon as I do, I’ll join his Patreon and ask.


  5. Mike, you state: “I totally didn’t see the ethics angle coming. And I’m afraid you’ll have to connect the dots for me to see it even now.” This probably is not the blog entry to go into great detail. So I won’t. I simply saw an opening in your thinking process. So, I decided to insert a little subversive thinking. 😉

One should note that “value” is a wider concept than mere ethical value. In short, rationality and values are linked, as I think you were saying. As Putnam claims, “…many kinds of value judgment that are not themselves of an ethical variety tend to get sidelined in philosophical discussions of the relationship between (so-called) values and (so-called) facts.” I think you were alluding to one such variety. But nevertheless, bridging the gap there helps understand the lack of one in ethics.

    I’m certain that the is-ought gap in ethics is a philosophical muddle—an unfortunate and unnecessary byproduct of the Enlightenment’s nurturing of the scientific method. Putnam’s approach is not my first choice in arguments to rebut it. I originally suggested Putnam in a past entry because his scientific credentials are unimpeachable—a world-class mathematician and computer scientist. I certainly agree with him. But I also assumed his credentials made his argument more palatable to the majority of participants on this blog. I think it’s best not to further hijack and sideline this fascinating discussion at this time. I just couldn’t help myself. I’ll wait for a more germane topic to bend your ear on this.


Forgot to add proper quotes to the above: As Putnam claims, “…many kinds of value judgment that are not themselves of an ethical variety tend to get sidelined in philosophical discussions of the relationship between (so-called) values and (so-called) facts.” (The Collapse of the ..Dichotomy, p. 19). Don’t want to put extra words in Putnam’s mouth.


Well, from a functional perspective, I guess there is no point in feelings unless there are behaviors to be rewarded and punished by the same organism’s emotional mechanisms. I would not dare to say that feelings are necessary for organisms to avoid dangers and the like, because that way we would be implying that our last universal ancestor required those complex mechanisms to survive, which is obviously false. So perhaps the relevance of feelings is only at the cognitive level of data structuring and processing of behavioral outputs in cases where there is more than one possible option—again, from a cognitive perspective.


    1. Joseph LeDoux makes a distinction between emotions (by which he means conscious feelings) and survival circuits. Survival circuits are very ancient, going back very early in the history of life. But conscious feelings are much more recent. Just how recent depends on what we mean by “conscious”.

      So, we need survival circuits to survive. We don’t need feelings for immediate threats. We have reflexes for that. But we do need feelings to plan and learn, even if only for a few seconds into the future.


