Fruit fly fear and AI sentience

I found this study interesting: Do flies have fear (or something like it)? — ScienceDaily.

A fruit fly starts buzzing around food at a picnic, so you wave your hand over the insect and shoo it away. But when the insect flees the scene, is it doing so because it is actually afraid? Using fruit flies to study the basic components of emotion, a new Caltech study reports that a fly’s response to a shadowy overhead stimulus might be analogous to a negative emotional state such as fear — a finding that could one day help us understand the neural circuitry involved in human emotion.

It might seem obvious that, since a fly avoids the fly swatter, it must have some kind of fear.  However:

“There are two difficulties with taking your own experiences and then saying that maybe these are happening in a fly. First, a fly’s brain is very different from yours, and second, a fly’s evolutionary history is so different from yours that even if you could prove beyond any doubt that flies have emotions, those emotions probably wouldn’t be the same ones that you have,” he says. “For these reasons, in our study, we wanted to take an objective approach.”

It’s a fair point.  Fly fear is probably very different from human fear.  Still, it’s hard not to conclude that flies have some type of fear.  The only way to conclude otherwise would be to narrow the definition of fear so that it only applies to mammalian brains, but that seems excessively speciesist, and anyway, this study did find what appears to be evidence for fly emotions.

“These experiments provide objective evidence that visual stimuli designed to mimic an overhead predator can induce a persistent and scalable internal state of defensive arousal in flies, which can influence their subsequent behavior for minutes after the threat has passed,” Anderson says. “For us, that’s a big step beyond just casually intuiting that a fly fleeing a visual threat must be ‘afraid,’ based on our anthropomorphic assumptions. It suggests that the flies’ response to the threat is richer and more complicated than a robotic-like avoidance reflex.”

What’s interesting about this evidence is that it seems to mean that, to at least some extent, flies are sentient beings. What makes that interesting is that their brains are relatively simple systems, with about 100,000 neurons and 10 million synapses (compared to the 100 billion neurons and 100 trillion synapses in humans).

Mapping brains to computing capacity is fraught with problems, but unless you assume the resolution of mental processing is smaller than neurons and synapses,  the device you’re using to read this post has far more storage and processing complexity than a fruit fly’s brain.  Yet I doubt you see your device as a sentient being.  (You may not see a fly as a sentient being either, but you have to admit it seems more sentient than a smartphone.)
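
For a rough sense of that gap, here’s a back-of-envelope sketch (the bytes-per-synapse figure is just an assumed placeholder, and raw storage obviously isn’t the same thing as having the right architecture):

```python
# Back-of-envelope storage comparison; the bytes-per-synapse value is an assumption.
FLY_SYNAPSES   = 10_000_000             # ~1e7 synapses (~1e5 neurons)
HUMAN_SYNAPSES = 100_000_000_000_000    # ~1e14 synapses (~1e11 neurons)
BYTES_PER_SYNAPSE = 4                   # say, one 32-bit weight per synapse
PHONE_STORAGE = 64e9                    # a typical 64 GB phone, for scale

fly_bytes   = FLY_SYNAPSES * BYTES_PER_SYNAPSE      # ~40 MB
human_bytes = HUMAN_SYNAPSES * BYTES_PER_SYNAPSE    # ~400 TB

print(f"fly connectome   ~ {fly_bytes / 1e6:.0f} MB")
print(f"human connectome ~ {human_bytes / 1e12:.0f} TB")
print(f"phone storage    ~ {PHONE_STORAGE / 1e9:.0f} GB "
      f"(~{PHONE_STORAGE / fly_bytes:.0f}x the fly estimate)")
```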

All of which brings me back to the realization that sentience is a matter of having the right software, the right data processing architecture.  We don’t understand that architecture yet.  As simple as a fly brain is, we have little understanding of how it generates the fly’s emotions, although the researchers do hope to change that.

In the future, the researchers say that they plan to combine the new technique with genetically based techniques and imaging of brain activity to identify the neural circuitry that underlies these defensive behaviors. Their end goal is to identify specific populations of neurons in the fruit fly brain that are necessary for emotion primitives — and whether these functions are conserved in higher organisms, such as mice or even humans.

I have to wonder how they plan to do brain imaging on flies.

Anyway, one of the things that is becoming sharper in my mind is the distinction between intelligence and sentience.  Fly sentience is almost certainly not as rich as mouse sentience, much less human sentience.  But while we have computing systems that are intelligent enough to beat humans in narrow domains like chess or Jeopardy, we don’t yet have a system with even the limited sentience of a fly.  (At least none that I know of.  I’m sure the fly biologists and neuroscientists of all types would like to know if we did.)

A lot of sci-fi scenarios have sentience creeping in by accident as machines progressively become more intelligent.  Personally, I doubt we’re going to get it by accident.  We’re probably going to have to understand how it arises in creatures such as flies well before we have much of a chance of generating it in machines.  Fortunately, sentience won’t be required for most of what we want from artificial intelligence.

45 thoughts on “Fruit fly fear and AI sentience”

        1. Hmmm. What leads you to that understanding? The wiki article on them doesn’t mention anything that drastic during their metamorphosis. http://en.wikipedia.org/wiki/Butterfly#Life_cycle

          Somewhat in the same vein, there was a study that seemed to show that mice can pass fear of certain smells through their sperm to their pups, a case of visceral type of memory possibly being inherited. http://www.sciencedaily.com/releases/2013/12/131202121544.htm


  1. Very interesting. I could have used this in something I was working on last week. The remarkable Cambridge Declaration on Consciousness (2012) stressed that the required neurological apparatus for total awareness of pain—and the emotional states allied to that—arose in evolution as early as the invertebrate radiation, being evident in insects and cephalopod molluscs, such as octopus, Nautilus, and cuttlefish. However, Professor Marc Bekoff has since proposed an even broader declaration, a Universal Declaration on Animal Sentience, where sentience—and by extension a total awareness of suffering—is defined as the “ability to feel, perceive, or be conscious, or to experience subjectivity.” I think the fruit fly findings support such a move in definition.


    1. I could see that. Though I think getting people to feel empathy for insects such as flies, cockroaches, and the like, would be a hard sell. Not because most people don’t intuitively see them as sentient, but because people generally feel revulsion toward them and see them as pests.


      1. Being Australian, I loathe spiders. As any Australian kid will attest, we’ve all had terrible experiences with them. They’re big, they’re fast, and they’re unpredictable. I don’t hate spiders, though. I appreciate their genius and just accept they have a very different perspective on the world than I do. I’ve thought, though, how much I would love and cherish the spider if it were just him or her and me out in deep space: two things from earth, distant cousins, in fact. I’m sure I, like most people, would come to see the humble spider as family.


        1. Good point. I don’t hate spiders, or cockroaches, or anything else, but I don’t want them on my arm. But if I were in deep space, I might well see their presence as a comfort, something else alive from Earth. (That might be a good scene in a sci-fi story.)


          1. Be very, very glad arthropod biological architecture prevents them from growing too large. A world where spiders were as big as large dogs would be a very scary one for mammals — spiders are fierce predators! (Did you know nearly all of them have a venomous bite? The saving grace is that most of them have fangs too small to penetrate human skin. Count your blessings!)


  2. My goldfish certainly experience fear. In fact, fear and hunger seem to be their primary drivers (possibly lust too, although they display that in private.) We have an outdoor pond with goldfish, and twice each year a migratory heron arrives for a snack. When the heron appears, I cover the pond with a net to protect the fish. However, the fish remain submerged at the bottom of the pond for at least 3 days after the arrival of the heron. That’s more than fear, it’s terror.

    If my understanding of evolutionary pathways is correct, mammals, fish and birds are on a different branch to insects and molluscs. If both branches experience fear, that would indicate that their common ancestor (which would have been a very primitive invertebrate in the Cambrian period) also experienced emotions. You could say that virtually every animal that has ever existed was probably sentient. They certainly behave as if they are.


    1. Goldfish are an excellent example. Their staying submerged demonstrates the “persistence” the article describes, which is one of the criteria they use to distinguish actual fear from a “robotic-like avoidance reflex”.

      On all animals having sentience, it seems like that depends, to some extent, on what we define as an “animal.” I think we can safely say that sponges, for instance, don’t have it. But anything that evolved during or after the rise of predators, probably at the beginning of the Cambrian, almost certainly does.


  3. Mike, I know that you recently had a long discussion with Wyrd about this, but one implication of this article is that sentience can be built with a very simple architecture of perhaps thousands of neurons, whereas intelligence may require billions. Is it feasible to build a truly intelligent AI that does not have sentience?
    Sci-fi may have sentience popping in by accident for a good reason. Not just that it adds dramatic conflict, but that it’s how we experience the world. Indeed, since intelligence in nature is built on top of the basic sentient structure, we might need to use an entirely different kind of architecture in order to engineer systems that are not sentient. Even in that case, could we be sure that they weren’t sentient? Would there be an equivalent of a Turing Test to test for sentience? Perhaps like the fruit flies, we would have to place our robots in danger and see if they reacted. If they didn’t, would they practically be able to function in the real world?
    One of your common tests for whether a technology is likely is whether it arises in nature. For example, you have pointed out that birds fly, so building a flying machine seems plausible, whereas we know of nothing that can travel faster than light, so building a FTL spaceship seems implausible. Since we know that all intelligent creatures have sentience, is it plausible to build a non-sentient AI?


    1. Good questions. I do think nature is a good guide for determining what is possible. If it happens somewhere in nature, then I think any statement that it’s impossible for us to eventually accomplish it technologically has to bear the burden of proof. So, I would say it’s definitely possible for machines to be sentient.

      The question is, is sentience required for intelligence, or an inevitable consequence of it? If we were having this discussion in, say, 1950, I would probably have seen it as a reasonable hypothesis. Intelligence was built on top of sentience in nature and we saw no signs of intelligence without sentience. It would have seemed reasonable that it might be required for intelligence in machines.

      I still see it as possible, but not very probable. The reason is that we have machines with a lot of intelligence today. It’s easy to dismiss that intelligence because we’ve become acclimated to it, but a machine that can beat a chess champion, or Jeopardy champions, is not something we should rationalize away as something without intelligence.

      But that intelligence, to all appearances, comes with zero sentience. Of course, we can never be absolutely sure that the computer I’m typing this on isn’t sentient in some way. All we can say is that it gives less sign of it than a fruit fly. There’s no indication it fears its replacement at some point in the future, or any anxiety about being taken apart.

      Now, we can’t rule out that sentience might turn out to be necessary at some higher level of intelligence. But I think the lack of any incipient signs of it at the current levels of intelligence, not even to the extent of a fly, or even a worm, is significant data.


  4. Great post. Still, what is it like to be a scared fruit-fly (thanks Nagel)? What does it mean to say a fruit-fly feels fear? I only know of my fear, yet if a fruit-fly’s consciousness is so different from mine, then is its fear recognizable as anything like my fear? If not, then what am I claiming by claiming this, given that I’m admitting it’s in an entirely different mental state? Am I saying that it experiences a state like mine but unlike mine? Seems contradictory. I don’t think I can even claim that its fear bears a similar relation to its conscious state (as mine), as that assumes much about its conscious structure. For example, my fear is informed by my ability to think over the past, worry about the future, run various scenarios and even set complex goals. Do these even apply for a fruit fly?


    1. Thanks!

      You zero in on a central problem with any cross-species comparison of cognition: how broadly or narrowly we want to define “fear.” Is fly fear like our fear? Almost certainly not. We can’t even be sure that your fear and my fear are the same.

      There are definitely different levels of fear. We can fear for our reputation because of the long-term consequences of damage to it. I’m pretty sure a fly doesn’t have that kind of fear. But most of us also have an innate, primate fear of snakes. That seems at least somewhat similar to a fly’s fear of something overhead blocking the light. And the fact that it affects their behavior for a while afterward implies that it isn’t just a reflexive response.

      But ultimately, as Nagel pointed out, we can’t know what it’s like to be a fly. (Or a bat, or a dog, cat, or chimpanzee.) We can only try to judge by their behavior and what we can determine of their brain states.


  5. Mike, is it possible that you have a notion of how AI might be achieved that is different to mine? You have repeatedly used the word “program” with regard to intelligence, as in an algorithm for specific tasks. Do you think a clear algorithm will be invented for general AI? In the sense that in any scenario we could say what rules it is following and predict precisely how the AI will respond? Most observers seem to think instead of developing some kind of neural network that would essentially evolve intelligence and would therefore be unpredictable in its behaviour.


    1. Steve, when I use the word “program” in these contexts, I’m using it in a broad manner to mean engineering of one type or another. I also speak of human “programming” by which I mean our evolved instincts.

      On neural nets, which are a very useful architecture for certain domains, like pattern recognition, I think a lot of people have this notion that if we just throw up the right neural net, it will magically self assemble into a being of some kind. Personally, I think this is wishful (bordering on magical) thinking. To understand why means spending time reading neuroscience and psychology. Remember that we’re not born blank slates, we don’t self assemble into what we are. (Unless you want to consider billions of years of evolution “self assembly”.)

      Here’s something to consider. Modern neural nets are already more complicated than the brains of animals like flies. Why haven’t these neural nets spontaneously assembled into beings at least as functional as these creatures yet? At what point does the self assembly start?


      1. “I think a lot of people have this notion that if we just throw up the right neural net, it will magically self assemble into a being of some kind.”

        Not if they know anything about neural nets. As you go on to say:

        “Remember that we’re not born blank slates, we don’t self assemble into what we are.”

        Exactly. And neither do neural nets. They, like our brains, have basic engineering that gives them a “hardware architecture” and an “O/S” that’s usually, in their case, optimized for a given task. Then, like us, they usually need to be trained and experienced to be useful.

        “Modern neural nets are already more complicated than the brains of animals like flies. Why haven’t these neural nets spontaneously assembled into beings at least as functional as these creatures yet?”

        Modern neural nets are capable of functions much in advance of fruit flies. In terms of things like pattern recognition they have exceeded the capacity of humans in a given domain.

        Google did one in 2012 with 1.7 billion connections that successfully taught itself to recognize cats in YouTube videos. I’m pretty sure fruit flies can’t do that!

        We’re obviously a long way away from the 100 trillion connections of the human connectome, but as we said below — that’s just engineering.
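
        To make the “engineered architecture, learned weights” distinction concrete, here is a toy sketch (purely hypothetical, and nowhere near the scale of the Google system): the 2-8-1 layer layout and the activation function are design choices; only the weights change with training.

        ```python
        # Toy two-layer net learning XOR. The layer layout and sigmoid activation are
        # the "engineered" part; the weights start random and are learned from data.
        import numpy as np

        rng = np.random.default_rng(1)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: not linearly separable

        W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
        W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(10_000):                 # plain gradient descent on squared error
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
            W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

        print(out.round(2).ravel())             # typically ends up near [0, 1, 1, 0]
        ```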


        1. “They, like our brains, have basic engineering that gives them a “hardware architecture” and an “O/S” that’s usually, in their case, optimized for a given task.”
          Totally agreed. But I think it’s in that firmware that sentience, including self-concern, the foundational emotions, etc., resides, or at least its functional foundations do. Without the right system architecture, there will be nothing to make a neural net anything other than an empty engine.

          “Then, like us, they usually need to be trained and experienced to be useful.”
          It’s worth noting that a lot of animals don’t need to be trained before functioning in the world, although it does seem that most mammals do. But I don’t see why machines should need to be bound by the ratio of hard coded to learned functionality that humans evolved. I think engineers will be able to decide that based on the goals of the system.

          I didn’t know that about the Google neural net. Interesting. I did hear that the cat recognition still has its issues though. Apparently a single pixel can still throw it off, never mind some other object obscuring the cat even slightly. But I’m sure they’ll work through all that in time.

          “We’re obviously a long way away from the 100 trillion connections of the human connectome, but as we said below — that’s just engineering.”
          Totally agreed.


          1. “It’s worth noting that a lot of animals don’t need to be trained before functioning in the world,…”

            Doesn’t that training come from millions of years of evolution?

            “But I don’t see why machines should need to be bound by the ratio of hard coded to learned functionality that humans evolved.”

            Given that humans are the only intelligent species we know, and that humans do work that way, that seems an optimistic view.

            Could be, but it means figuring out what intelligence is sufficiently well to design an “out of the box” system. We’re a lot further away from that than from systems that attempt to emulate the one intelligence system we do know about.

            “I did hear that the cat recognition still has its issues though.”

            Visual recognition is extremely difficult, so it’s not surprising it’s not perfect. The thing that’s really interesting is that they didn’t train it to recognize cats — it did that on its own. It found a “cat” region in the phase space of images. That’s… awesome!

            A buddy of mine worked on a commercial project (over a decade ago) intended to recognize different types of motor vehicles. That system had only 256 parameters and managed to achieve a degree of success. (Data from daily runs was uploaded to a Cray computer for processing that took hours.) A big problem in that project was lighting and shadows.

            With person (or cat) recognition, part of the difficulty is rotations of the object. Recognizing a profile is different from recognizing full face. Plus shadows and lighting!


          2. “Doesn’t that training come from millions of years of evolution?”
            Sure. Evolution, our programmer / engineer.

            “Given that humans are the only intelligent species we know, and that humans do work that way, that seems an optimistic view.”
            Again, in 1950, I might have agreed. But we’ve made good progress so far with different ratios. Similar to sentience, maybe we’ll hit a threshold where the human ratio is required, but the lack of any apparent incipient requirement in modern systems makes me doubt it.

            It seems like pattern recognition is primed to get a lot better in the next few years.


          3. I think the key difference turns out to be an optimistic view of where current research will lead versus a pessimistic one. I look at what’s being done today and don’t see us as being anywhere close at all. [shrug]


      2. Are modern neural nets really more complex than the brains of fruit flies? I have no data, so this isn’t a rhetorical question. They may have more data storage, or connections, but as you say your iPhone has a stupidly large number of possible states, yet still almost no animal-like intelligence. It’s surely the architecture that matters, and as you said in another comment, this is the “magic” that makes intelligence happen.
        The physicist David Deutsch speculates that we could build a general AI with a laptop if we only knew how. Looking at the tiny brains of fruit flies and the complex behaviour that this enables, this doesn’t seem so far-fetched.
        So maybe there is a way that would merge the “huge neural net” approach with your “algorithmic programming” that would lead to general AI. What I envisage is a new kind of hardware based on massively interconnected systems like a brain (or net) but with some heavily designed structure (like your algorithms). Perhaps this is what IBM is trying to do with its new neuron-like microprocessors.
        I note also that extremely primitive organisms such as single-celled bacteria have pseudo-intelligent behaviour despite having no brains. Surely the key to intelligence is rooted in this primitive architecture? The rest is just scaling up.
        http://blogbloggerbloggest.com/2014/02/13/free-will-its-a-no-brainer/


        1. I actually tend to agree with both you and Deutsch on this point. I’ve long had a sneaking suspicion that one of the problems with understanding sentience is that we’re overcomplicating it, that we’ll be stunned by how simple it actually is. Similar to natural selection, after it’s figured out, it will be followed by decades or centuries of people saying that there’s just no way something that simple could be the answer.


  6. I think calling what fruit flies and goldfish do “fear” is a misuse of the term as we usually apply it. I see considerable middle ground between our experience of fear (which is, in large part, due to our powerful imaginations) and simple pain-death avoidance. Even the persistence effect could consist of fairly simple programming.

    The big difference between the complexity of modern computing equipment and fruit fly brains is likely the interconnectedness of the neurons. The number of possible connections among 10 million synapses is a truly astronomical number. All the more so with humans.

    (And many of the SF scenarios that posit emergent sentience do so with highly interconnected systems. The Arthur C. Clarke story, “Dial F for Frankenstein,” which has the phone network becoming complex enough for sapience, has that great final line, something along the lines of, “Everywhere, the phones began to ring.”)

    I’ll echo what I’ve said previously (and which Steve Morris mentions above) that I doubt true AI will be strictly programmatic. There is a pretty good body of work suggesting that intelligence cannot be algorithmic. Neural networks and other systems capable of learning may offer the most hope for exploring machine minds. We provide the basic O/S and functionality (not entirely unlike a newborn brain) and then allow the brain to learn through experience.

    Although it will be very interesting if we’re ever able to capture the basic wiring and synapse strength-weakness values — essentially capturing the mind — and then recreate that in some other substrate. That, to me, seems largely an engineering problem, so we will accomplish it some day. Whether it works remains to be seen!


      1. Aren’t you interpreting their behavior in human terms, though? From an evolutionary point of view, fish with that response to birds in the area went on to make more fish with that response.

        The fish with no response got eaten long ago, and eventually so did fish that only stayed deep briefly (because birds hang around a potential food source). Eventually, that leaves fish that go deep and stay deep for a while.

        Look at it this way, fear is a learned response. Fire burns, so we learn to fear and respect it. Or we’re taught to fear something. How would fish learn fear of bird shapes? How would fruit flies learn fear of overhead objects?


        1. “Look at it this way, fear is a learned response.”
          Aren’t you arbitrarily narrowing the definition of “fear” here? Most primates have an innate fear of snakes, or more precisely of snake-like shapes and movement, even if they’ve never encountered a snake before. Is that fear any less…fearful than fear of something they’ve learned to fear?


          1. Looking at the end result, perhaps not much. Looking at the basis for the fear, perhaps considerable.

            Primates learn from other primates. I’d be interested in knowing whether a fear of snakes is instinctive or learned from parents or tribe.


          2. Cool!

            “We’re finding results consistent with the idea that snakes have exerted strong selective pressure on primates,” Isbell said.

            Yep. Evolutionary training. Just add eons! 🙂

            I’ve been wondering how much of that exists in humans. We have a mythology involving snakes that’s part of our social consciousness, but where did that mythology come from in the first place?


          3. It wouldn’t surprise me if the snake myths are related to the primate instinct. Of course, confirming that scientifically, like all propositions about human nature, is extremely difficult.


          4. I don’t know about fear, but there are a bunch of studies looking at the intuitions of 5-6 month old babies. The problem, of course, is that we can’t eliminate the possibility they learned the intuition in their first few months of life. But it sounds like establishing much of anything cognitive with younger babies is very tough, although occasionally it’s done.


          5. What Mike said.
            I’m confident that the fish haven’t learned to be afraid of the heron. The heron visits twice a year. They simply don’t have enough data. They are also afraid of me when I walk near the pond, even though I have never harmed them, and actually feed them each day. They are instinctively afraid of any creature larger than themselves, which is presumably how their ancestors survived millions of years of predators.
            Same with us with snakes and spiders, and probably any large and potentially dangerous creature.
            If you want to know whether they actually experience fear, or simply behave as if frightened, I can say that they pass the Turing Test in this regard.


    1. I agree there’s a lot of ground between fly fear and human fear, but I’m loath to call fly fear something other than fear.

      “The number of possible connections among 10 million synapses is a truly astronomical number. ”
      Wyrd, I’m not clear what you mean by this sentence. You seem to indicate further down that you understand that synapses are the connections, so maybe I’m missing something?

      On the algorithmic / neural net thing, check out my reply to Steve. A lot depends on what we mean by “algorithm”. I think human minds operate by a wide range of algorithms. But because it’s more in the nature of a massive cluster of algorithms executing in parallel, it seems like it’s something more than that.

      “That, to me, seems largely an engineering problem, so we will accomplish it some day. Whether it works remains to be seen!”
      I think the only reason it wouldn’t work is either because there is a non-physical aspect to the mind (and I think you know my thoughts on that), or because human cognition inherently requires a certain type of substrate. Actually, even the substrate scenario I would see as just another engineering problem (wetware), albeit one that might well preclude everyone living ever after in virtual land.


      1. “I’m loath to call fly fear something other than fear.”

        That’s fine so long as we understand it’s then a term with a very broad definition. There’s the pre-programmed sort of fear, the learned sort, even the fear of the unknown. No problem so long as we understand which kind we mean.

        “Wyrd, I’m not clear what you mean by this sentence [about connections].”

        I’m referring to the number of combinations for 100,000 neurons being connected with 10 million possible connections. The number of possible different connectomes is astronomical. There is also that each of those connections has an “ease of firing” value based on how heavily that synapse has been used. If you consider the connectome as just the map of connections, there is a huge number of possible differently weighted maps.
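
        Just to put a number on “astronomical”: here is a quick sketch, treating the connectome as nothing more than a choice of which 10 million of the ~100,000² possible neuron-to-neuron pairings exist (an assumption that ignores weights, multiple synapses per pair, and so on):

        ```python
        # log10 of (1e10 choose 1e7): the number of distinct wiring diagrams if a
        # connectome were just a set of 10 million connections drawn from ~1e10
        # candidate neuron pairs. Log-gamma avoids building the huge integer itself.
        from math import lgamma, log

        def log10_comb(n: int, k: int) -> float:
            return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(10)

        neurons = 100_000
        synapses = 10_000_000
        pairs = neurons * neurons              # ~1e10 candidate connections

        print(f"possible connectomes ~ 10^{log10_comb(pairs, synapses):,.0f}")
        # on the order of 10^34,000,000, before assigning a strength to any synapse
        ```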

        “A lot depends on what we mean by ‘algorithm’.”

        Okay. How do you define algorithm? The standard definition is along the lines of: “Implementable by a Universal Turing Machine.” Another common definition is: “Computable function.”

        “…massive cluster of algorithms executing in parallel, it seems like it’s something more than that.”

        Even massively parallel systems are still implementable by UTMs and still are bound by NP constraints, not to mention Turing’s Halting Problem.

        “I think the only reason it wouldn’t work is either because there is a non-physical aspect to the mind…”

        Agreed. Also on the possibility that size or timing constraints might require specialized hardware. Definitely a (formidable, but achievable) engineering problem. So formidable, considering the size of the human connectome, that it could be many, many decades away. But you never know.


        1. I think emotions in general are a definitional morass. There’s not even widespread agreement on what “emotion” means. So, yeah, I totally understand what you mean with the word “fear”. Even all the things humans refer to as “fear” seem like a variety of different experiences.

          On synapses, there are 10 million (more or less) in a fruit fly, with smoothly varying strengths. What’s unknown is how much granularity might be required to represent each synapse meaningfully in a digital system. One byte, with 256 combinations? Four bytes, with about 4 billion? Even if we go to 10 bytes for each synapse, we’re still only talking about 100 megabytes. Double it, triple it, even quadruple it, to track other necessary characteristics of neurons and synapses, and you’re still at less than a gigabyte.
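
          The arithmetic is easy to check directly (the bytes-per-synapse values below are just illustrative guesses):

          ```python
          # Storage for a fruit fly's ~10 million synapses at different granularities.
          SYNAPSES = 10_000_000

          for bytes_per_synapse in (1, 4, 10, 40):   # 1 byte (256 levels) up to a padded 40-byte case
              print(f"{bytes_per_synapse:>3} bytes/synapse -> "
                    f"{SYNAPSES * bytes_per_synapse / 1e6:,.0f} MB")
          # even 40 bytes per synapse is only ~400 MB, still well under a gigabyte
          ```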

          It’s worth noting that my iPhone, with its roughly 65 billion bytes of storage, has something like 256^65,000,000,000 possible states, although software and storage patterns reduce the combinations that might exist. The trick, of course, is that we don’t understand the brain’s software and storage patterns, yet.

          By algorithm, I mean any process involving a sequence of operations.

          “Even massively parallel systems are still implementable by UTMs and still are bound by NP constraints, not to mention Turing’s Halting Problem.”
          Are you sure brains aren’t as well? Alan Turing didn’t seem bothered by these theoretical issues.

          Agreed on your final paragraph.


          1. “Double it, triple it, even quadruple it, to track other necessary characteristics of neurons and synapses, and you’re still at less than a gigabyte.”

            As I understand it, the dataset itself isn’t the bottleneck, it’s computing the neural activity. (The Google project mentions something about a billion connection parameters, but it required 16,000 CPU cores to run.)

            “Are you sure brains aren’t [UTMs] as well?”

            Of course not! No one is.

