Changing what makes us happy

from Saturday Morning Breakfast Cereal (click through for the hovertext and red button caption)

Greg Egan, in his novel Incandescence, posits an alien civilization whose ancestors, in order to survive, establish a series of space habitats.  To ensure their descendants will be happy, they bioengineer those descendants to feel satisfaction and bliss working within and maintaining the habitat.  And to keep them content in such a limited environment, they remove or minimize curiosity in the average inhabitant.

But in order to ensure that the occasional dangerous situation can be handled, they arrange for a few individuals in each generation to have curiosity.  In most generations, these individuals are miserable outcasts, but when their attributes are needed, they’re saviors.  The novel shows us a generation where the curious individuals become leaders and save the world (the habitat), and another where the one curious individual is desperately lonely and unhappy.

I've often wondered whether this type of survival would be worth it.  The aliens had removed just about everything that made them…them, in order to ensure that some version of their species continued to survive, with only the occasional lonely individual retaining some part of their scientific curiosity.

At some point, we may develop the ability to reprogram ourselves, to control what makes us happy.  The question is, is happiness achieved this way real happiness?  Does it make a difference what kind of scenario we choose to make us happy?

For example, I could see someone deciding that reprogramming people to enjoy hard work and virtuous living is a good thing.  Presumably no one would think reprogramming people to enjoy a horrible death would be good, but if we had a situation where people were stuck in miserable lives and we could simply reprogram them to enjoy their lot, would that be ethical?  Why, or why not?

And how would this be different from a philosophy like stoicism, where people essentially will themselves to be at peace with unavoidable circumstances?

35 thoughts on “Changing what makes us happy”

  1. Is happiness achieved this way real happiness? To avoid having a tautological question, I’m interpreting this as “Is happiness achieved this way always personally valuable?” In which case, no. There’s more to life than happiness (qua psychological state, as most Americans interpret it; those steeped in Aristotelian tradition mean something else). Curiosity/knowledge is a good example of another valuable feature. Action having an impact is another, and interpersonal relationships are yet another: two features that are missing in the classic “Experience Machine” thought-experiment.

    1. But what makes something personally valuable? Ultimately doesn’t it come down to how we feel about it? And if what we feel about it can be reprogrammed, then isn’t talking about the personal value of a proposed new set of feelings just judging that new set by the standards of the current set?

  2. Thought-provoking questions. I think there’s no ethical dilemma here, it only appears this way because of a belief in free will, which leads to a belief in one’s “true self”. Think about it; if we’re automatons anyway, all decisions are imposed on us by our environment, and as such, there is no X that is reflective of our “free” desires. As such, being programmed to prefer X to Y isn’t a violation of anything; it’s simply a change of one type of control for another.

    1. Thanks BIAR.

      Good point. What we naturally value is a result of instincts imposed on us by evolution. Since there’s no non-physical self, there’s nothing to object to when our deepest desires have been reprogrammed to prefer something we previously would have found abhorrent.

  3. Egan is one of my favorite modern SF authors! I really enjoyed Incandescence for how it details (literally, details!) how a species could discover GR without astronomy (or ever seeing orbits from outside).

    That book also taught me about orbital dynamics and how there is a slight gravity gradient in any orbiting solid body at all points not on the orbital axis. The part about how up and down worked was especially fascinating. Makes perfect sense once you get what’s happening.
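The gravity gradient the comment describes is easy to sketch numerically. Below is a minimal Python illustration of the first-order ("tidal") accelerations felt at an offset from the center of mass of a body in a circular orbit, in the body's rotating frame; the mass and orbital radius are assumptions for illustration, not figures from the novel.

```python
# Residual acceleration at an offset from the center of mass of a small
# solid body in a circular orbit, in the body's rotating frame.
# With omega^2 = G*M / r^3 (orbital angular velocity squared), to first order:
#   radial offset dr          -> +3 * omega^2 * dr  (outward "stretch")
#   offset dz out of the
#   orbital plane             -> -1 * omega^2 * dz  (restoring, toward the plane)
#   offset along the direction
#   of travel                 ->  0                 (no residual "weight")

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def omega_squared(M, r):
    """Square of the orbital angular velocity for a circular orbit of radius r."""
    return G * M / r**3

def tidal_radial(M, r, dr):
    """Outward residual acceleration (m/s^2) at radial offset dr."""
    return 3 * omega_squared(M, r) * dr

def tidal_out_of_plane(M, r, dz):
    """Restoring acceleration (m/s^2) toward the orbital plane at offset dz."""
    return -omega_squared(M, r) * dz

# Illustration only: a point 50 m from the center of a habitat orbiting a
# neutron-star-mass object (~1.4 solar masses) at 1000 km.
M, r = 2.8e30, 1.0e6
print(tidal_radial(M, r, 50.0), tidal_out_of_plane(M, r, 50.0))
```

Note the 3:1 ratio between the radial stretch and the out-of-plane restoring term, and that points along the direction of travel feel no residual pull to first order, matching the "orbital axis" the comment mentions.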

      1. And I do love geeking out on tech details in my SF!

        Back in the day, there was George O. Smith and his adventures with radio tubes in space (because of the vacuum, you don’t need the glass envelopes).

        Or Hal Clement. Love that guy! I re-read his books every once in a while.

  4. This is clearly a profound topic, especially as our ability to self-medicate improves. I think the paradigm case was the experiment with the rats that could push a lever to stimulate their pleasure center. They did so to the exclusion of everything else, including food.

    I think the takeaway from that experiment is that permanently changing the rewards system will be judged by natural selection, risking permanent removal.

    So to put this in the perspective of the OP, making people happy with physical labor may have undesirable side effects, like lessening the amount of daydreaming, and thus creativity, that happens. Maybe it’s a matter of avoiding large-scale, i.e., population-level, changes.

    1. “I think the takeaway from that experiment is that permanently changing the rewards system will be judged by natural selection, risking permanent removal.”

      Good point. Of course, from natural selection’s perspective, birth control is a disaster.

      And we may be reaching a point where artificial selection, not to mention outright gene editing, will be a bigger player in our evolution. Although I guess if you take a long enough view (millions or billions of years), natural selection will eventually win out.

          1. Actually, the point is kinda like “I do not think that word means what you think it means”. Too many people think there is something special about being “natural”.

  5. “hard work and virtuous living”

    There’s a point where moral heuristics start to break – it’s at the question ‘What version of hard work and virtuous living?’

    Of course the person insisting on it will, parochially, think their version is IT. There’s nothing else. And this is cute for how we’ve lived in the past and even the present. But it’s not cute when people actually consider editing while having what is actually a kind of tiny-mindedness (I presume that here we all already consider that there can be many versions of hard work or virtue – if not, then I guess I just came off as a big asshole!). It’s like watching a genie give out twisted wishes, but the genie twists nothing – only the small-mindedness of the individual twists the wish.

    And as you say, the creatures in the story have stripped out parts of the creatures they were. Are they actually extinct, just with some doppelganger race there to make it look to the galactic caretaker like everything’s still running smoothly? Again, we have the parochial impression that as long as there are bipeds walking around, we are not extinct. It’s like the body snatchers don’t even have to try.

    But as much as SMBC is very topical these days, I kind of feel it’s indulging in pessimistic fatalism to sate a fatalistic audience: a certain demographic who want to shake their heads and tut, but not actually do anything, because they treat it as though there’s nothing else to be done. I’m not sure it’s a good audience to focus on, even if it’s an easier audience to work with.

    1. I think the stark fact is that what we consider virtuous is a function of our instinctual and indoctrinated preferences. Someone with different genetics and a different cultural upbringing will have different ideas about what is virtuous. Reprogram those preferences, and it seems like the interplay could go to very bizarre places.

      I follow the SMBC author, Zach Weinersmith, on Twitter. My impression of him is that he’s interested in a wide variety of things, and his humor is intended for people interested in math, science, and philosophy. But finding the humor in things often means focusing on the worst and most salacious interpretation of them, amplified to the point of absurdity so it has a chance of being funny. So I wouldn’t take that humor in and of itself as his philosophy or the philosophy of the audience he’s targeting, at least not intentionally.

      1. While I agree it could well not be his philosophy, why wouldn’t it be the general philosophy of his audience? I know I might be arguing for ‘wholesome’ stuff here, but if you write comics that strike a large number of people as fatalistic, then even if you as the author aren’t fatalistic, you’re going to attract an audience of fatalists – and affirm that fatalism (unintentionally).

        I feel maybe there was an age, a slower age, where satirists could just pick at things without suggesting solutions – and over time maybe that would create a cultural therapy effect as other people would take the picked at thing and try to figure a solution.

        I feel now that things are moving too fast, and satirists can’t just pick at things and leave it to some other random person to maybe want to find a solution, find one, and also be able to enact it. We really are, IMO, headed towards people editing our freaking baselines of what gives us pleasure – in just a few decades (or less!). Making jokes about the situation without offering any guesses at a solution is too much like playing a violin while Rome burns. I know writing content with any amount of ‘wholesome’ or edutainment makes the content less cool – but maybe we need slightly less cool violin playing at this point.

  6. I am reminded of a quip from Nietzsche: “Man does not strive for pleasure; only the Englishman does.”
    From utilitarians to mystics, we mistake pleasure for happiness and conclude that the latter might be static.
    It is an interesting conceit to assume that happiness could be formulated, but it would necessarily fail.

      1. Happiness is a happening, rather than a thing to be had, despite the way we talk about it. Therefore, I don’t think it can be quantified as if it had dimensions, or constructed according to its dimensions.
        That doesn’t mean that we can’t have any expectations regarding the sort of venue where happiness is prone to occur. You will just naturally get a distribution of occurrences across a given circumstance.

  7. It seems “reprogramming” in computer-age science fiction plays the same role as “soma” in Aldous Huxley’s 1932 novel, “Brave New World”? (I wonder if the term “science fiction” was even used back then; I think his book is still referred to usually as a “dystopian novel.”)

    1. I haven’t read ‘Brave New World’, but based on what I know of it, soma is exactly the same concept, although as a drug, it would still always be evident that it was an artificial change. Once someone has rewired my preferences, they would feel like my real preferences to me. Indeed, they would be my real preferences at that point.

      I think the term “science fiction” can be traced back to Hugo Gernsback in 1926 when he coined the term “scientifiction” in an editorial in the first issue of Amazing Stories. Although I don’t know how pervasive it was in 1932, particularly in Britain. But it had taken hold enough in America by 1938 that the title of Astounding Stories magazine was changed to Astounding Science Fiction.

      1. “…although as a drug, it [soma] would still always be evident that it was an artificial change.”

        I’m not so sure of this. Take ex-alcoholics (or for that matter “born-again Christians”). Couldn’t you say that their “preferences have been rewired” and that these new preferences feel like their “real preferences”?

        1. Good point. “Rewired” in the context of a nervous system isn’t a good word since, technically, any experience we have “rewires” the brain. And you could argue that addiction is itself a reprogramming, at a certain level, of our preferences.

  8. There’s no need to invoke reprogramming or stoicism. Every successful adult must have learned to look past the momentary pains and pleasures of life when it counts; a human consciousness instead of a stimulus-response machine, in other words.

    (Is it just me, or did SMBC use to be funnier?)

    1. But reprogramming as discussed here goes far beyond simply looking past momentary pains and pleasures; it fundamentally alters what generates those pains and pleasures, perhaps turning something that was previously painful into something enjoyable. (Admittedly I did confuse the issue with the stoicism remark.)

      I can’t say that I’ve ever found SMBC to be uproariously funny, although I do think it has its moments. I like it because it periodically touches on interesting concepts.

  9. A thought-provoking post! It sounds like the story touches on some classic problems. There’s also the question of whether the happiness of a few members of society ought to be sacrificed for the many.

    “…if we had a situation where people were stuck in miserable lives and we could simply reprogram them to enjoy their lot, would that be ethical? Why, or why not?”

    I think it depends on the situation, but like most things, it’s not always simple or clear. For instance, should people take medications for depression? Nowadays I think most of us would say yes, but it wasn’t that long ago when there was a stigma attached to this, and perhaps there still is. Another point is that if the reprogramming is to be considered ethical, it seems an obvious prerequisite that an individual’s autonomy should be respected. If you force people to be reprogrammed against their will, that’s a problem in itself, regardless of the result.

    I would want to ask: does the reprogramming really achieve happiness in the fullest sense, or is it deficient? This is where I see things getting really sticky. I think many of us see happiness in the fullest sense as having some relationship with natural function, which means that, as humans, we can’t see our own happiness in a way that’s less than human. I would not choose to be reprogrammed to lick food off the floor, not even if I were given good arguments showing me it would make me supremely happy and fulfilled. But why not? Maybe it’s not logical to be so repulsed. I think we hesitate to accept artificial means because we assume that the result would also be artificial—whatever is achieved would be a short-lived, thinly pleasurable feeling merely posing as happiness. In other words, being human, I can’t believe I would be fulfilled licking food off the floor. Even though I “get the idea” that I would no longer be me, even though I understand what’s at stake, I just can’t accept that sort of alteration.

    In the case of reprogramming to make people enjoy death, that’s way too much of a stretch. I can’t imagine any creature’s raison d’être being…non-être. In any case, a strong thanatos drive does not seem likely to withstand the test of time.

    1. Good point on respecting people’s autonomy. In the case of antidepressants, the person has volitional control over whether they take them. It is a sort of reprogramming, but it’s a reprogramming the recipient is participating in.

      If we respect autonomy, that also seems to cover the problems with reprogramming you to enjoy eating off the floor. (Was Geordi eating off the floor when that scenario occurred to you? 🙂 ) You’re allowed to use your current values in deciding what your revised values might be.

      “I think many of us see happiness in the fullest sense as having some relationship with natural function,”

      That’s an interesting criterion. But I could see it being used perniciously. For example, suppose a society wanted women to be happy bearing children, so it reprogrammed women who wanted to live a life of intellectual accomplishment to instead desire to be a housewife and mother. Of course, this wouldn’t be respecting the woman’s autonomy. Autonomy seems to be a crucial requirement.

      But it itself raises issues. People who argue against free will and moral responsibility often talk about a future where we might be able to reprogram criminals to have fewer criminal impulses. If the criminal agrees to this, that might be okay, but I could see a lot of people not wanting to submit to that reprogramming. Would we give them that option?

      I’ve often wondered if the next taboo to come in human relations will be against violating the sovereignty of a mind.

      1. The “natural function” criterion could definitely be used perniciously. There’s a lot of myth surrounding what’s natural and what’s not.

        As for whether we give criminals the option, it’s a tough one. We don’t give them the option for punishment, but perhaps reprogramming fits into a different category that goes beyond normal punishment. Of course, someone facing the death penalty might be eager to be reprogrammed. It’s not exactly an appealing choice, but it’s a choice. Those committing lesser crimes might prefer the normal sentence, such as jail time.

        “I’ve often wondered if the next taboo to come in human relations will be against violating the sovereignty of a mind.”

        What sort of thing did you have in mind? This reprogramming? Or something like it? Do you envision a particular scenario?

        1. “What sort of thing did you have in mind? This reprogramming? Or something like it? Do you envision a particular scenario?”

          Some possible scenarios:
          altering criminals to have fewer anti-social impulses
          reading someone’s mind to obtain private information (compare to the cases where defendants are being forced to provide passwords for their phones or online accounts)
          reprogramming school kids with behavioral problems
          reprogramming workers to find their assigned duties pleasurable
          reprogramming citizens to be happy with the current regime, no matter how oppressive or corrupt it might be

          All of these strike me as scenarios where someone might be forced to submit themselves to having their mind violated.

          In the case of criminals, I could see them being offered a choice. A lot might depend on how heavy-handed the reprogramming is. Reprogramming that manages to leave a person’s personality more or less intact, but curbs their most violent or otherwise criminal impulses, might be easier to agree to.

          Reprogramming that alters a person enough that they become someone new, maybe into, from their point of view, an excessively placid or timid version of themselves? I could see some people deciding that death is preferable. I think of all the people who commit suicide after they’ve been chemically castrated, or who get off of antidepressants because, although they work, they just don’t feel like themselves while on them.

          A person’s conception of self seems like a powerful thing.

  10. I’m here from your most recent post. 🙂

    To me, the bottom line is not about what makes us more or less happy, but what increases our chances for long-term survival (evolutionarily speaking). We are currently hardwired for a different environment, and so these new environments we’ve made for ourselves can often be very psychologically damaging because of how “stoic” we are required to be to ensure that things run smoothly enough. It seems as though a large percentage of people are not able to be stoic enough much of the time.

    The path we take forward would have to fall somewhere between the following two extremes, wouldn’t it? Extreme 1: Genetically alter ourselves to suit the current environment. Extreme 2: Alter the environment to suit our current genetic traits.

    Currently we seem to be much closer to Extreme 2, but with a healthy dose of winging it with very little foresight in terms of the environments we make. My gut says that we can’t survive for very long (evolutionarily speaking) on this current path. The question is, which path would give us the best chance for long term survival? Would going down the middle where we both alter our environment and alter our genes roughly the “same amount” (whatever that even means) be the best route? I want to see a plot of survival chances on the y-axis (from 0-100%) and the “Extreme” spectrum on the x-axis. Haha!

  11. Rereading this, I want to clarify a couple of things. It’s probably more accurate to define Extreme 1 as genetic and chemical alterations (like the depression medication you were discussing with others above).

    Secondly, the “haha!” at the very end sounds a little funny reading it later. It was meant to convey that I know it’s kind of absurd to think we could plot such a graph. But even if we can’t do the calculus of the situation and find the “exact area under the curve,” so to speak, maybe we could “use a bunch of rectangles to estimate the area under the curve,” again so to speak.
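The "rectangles" idea is just a Riemann sum, which can be sketched in a few lines of Python. The survival curve used here is entirely made up for illustration; only the estimation technique is the point.

```python
# Midpoint Riemann sum: estimate the area under a curve with n rectangles.

def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] using n midpoint rectangles."""
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) * width for i in range(n))

def survival(x):
    """Hypothetical survival chance over the 'extreme spectrum' x in [0, 1]:
    0 = alter only ourselves, 1 = alter only the environment."""
    return 4 * x * (1 - x)  # peaks in the middle, purely for the sake of argument

print(riemann_sum(survival, 0.0, 1.0, 100))  # close to the exact area, 2/3
```

More rectangles (larger n) shrink the gap between the estimate and the exact area, which is the commenter's point about approximating what we can't compute exactly.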

    1. Good point about modifying the environment to fit our instincts vs. modifying our instincts. In truth, what we’ve been doing since the agricultural revolution amounts to the modify-the-environment strategy. As you point out, it’s increasingly putting us at risk. But until now, it’s pretty much been the only alternative.

      But as we increasingly develop the ability to modify ourselves, including our deepest, most primal impulses, it will become easier, in the sense of requiring a lot less energy, to go that route. The question is where it might lead, particularly with artificial intelligence increasingly in the mix. Eventually a far-future civilization may become a jumbled mix of modified posthumans and AIs.

      But even such a civilization may still find it beneficial to alter the environment, although the needs would change. Rather than large swathes of the planet being covered in farm fields, we might have large swathes covered in vast solar panel fields, or maybe even vast swarms of energy collection stations in orbit about the sun, capturing as much of its energy as possible, eventually resulting in a Dyson swarm.
