This Kurzgesagt video is interesting. It discusses the possibility of civilization collapsing and how it might affect the long term fate of humanity. It’s about 11 minutes.
One of the things the video gets into is how we should think about our present day decisions, decisions that might have long term effects on humanity. In this view, we are in the early stages of humanity, with vast possibilities before us. Humanity, in one form or another, might exist for millions or even billions of years, but only if we play our cards right.
This outlook is often called longtermism. The video cites William MacAskill and his new book, which he recently discussed in an interview with Sean Carroll on Carroll’s Mindscape podcast.
The view gets criticism, most recently in a Salon piece by Émile P. Torres, arguing that longtermism is toxic. As is often the case with these Salon articles, I don’t think many of the criticisms are remotely fair. Anytime you see short quotes with no discussion of context, you’re probably reading a hit job. And the author seems to have at least as many ideological hang-ups as any longtermer. Still, I think Torres’ overall point that we should scrutinize longtermism is right.
Now, I don’t see any problem, all else being equal, in thinking about what might be best for the long term fate of humanity when making decisions today. But all else is rarely equal. Often longtermism seems to imply that we should privilege considerations of future generations over the wellbeing of the current one.
This assumes that we have reasonably accurate insight into what that future might be, or what effect our actions might have on it. But if there’s one thing history seems to show, it’s that our ability to predict the future is very limited. In fact, beyond a few decades, it’s really an illusion. Consider how much insight someone from 1522 might have had about our world today, not to mention someone from 1022. Even reading science fiction from a few decades ago can be sobering.
That means longtermers are often arguing that we should privilege the concerns of hypothetical entities over those of people who are alive today. That’s where they lose me. It’s one thing to be concerned about the lives children today might lead. It’s another to be worried about people who might be alive 10,000 or a million years from now. Certainly I hope they will be, but I’m not interested in making anyone today suffer for it.
My own take, given all the unknowns and uncertainties, is that the best thing we can do for those remote descendants is survive and flourish as much as possible. If each generation does that, I suspect the future will take care of itself. Of course, there are no guarantees. I have no faith the universe or some deity will step in and prevent us from destroying ourselves if we make the wrong choices. But there’s also no guarantee that depriving people today will insure against it. As is often the case in life, we just have to make the best bets we can.
So I think longtermers have a point, but when considering their suggestions, we should remember just how limited anyone’s insights into the future really are.
What do you think? Am I being too dismissive about long term projections? Or missing something else?
Was a click-bait video, for sure. I clicked it (back in my own EweToob buffet). In-A-Coconut-Shell is one of my force-fed channels youtube is convinced I’m addicted to — and they wouldn’t be wrong.
> Anytime you see short quotes with no discussion of context, you’re probably reading a hit job.
I’ll remember that.
What I took from that was: collapse? Of course. But humanity, barring an ELE (I think that’s from Deep Impact), is better adapted than cockroaches and will endure. Eventually rising up, many times if necessary, to who knows what end.
> the best thing we can do for those remote descendants is survive and flourish as much as possible.
I think that’s your DNA talking.
Longtermism? Bah. HeatDeathoftheUniverse.com (404… hmm)
Yeah, it’s been a few days since that video was released. I’m playing catchup to some degree.
“I think that’s your DNA talking.”
Probably, but isn’t all of it? Actually my view of humanity transcends biology. I agree with the long termers that uploaded or enhanced humans would still be human. (Admittedly, it’s a matter of definition.) I’m just skeptical we know enough to make the right sacrifices for benefits too far down the road.
“Longtermism? Bah. HeatDeathoftheUniverse.com”
Right. Think long term, but not too long term.
All this recent AI/robot/trans-human/mind-transfer talk got me thinking about the actual computing needs of “uploading”. WildWildWestWorld’s finale (meh), with the Hoover Dam as a power source, would be a mere (shrinking-Lake-Mead) drop in the bucket.
I figure vast fields of solar-powered, space-based server farms would be required, all manufactured in situ from asteroids in the area. Even qubit servers would require huge numbers of processors per human sim.
Starlink? Mindlink. (Imagine what a hacker field day it would be to infiltrate a mind-server.)
Way more on this, later…
In terms of current or near-term technology, you’re probably right. It isn’t something we’re going to do in a few years.
The thing to remember is the brain does it with only 3 lbs of water cooled substrate made out of common elements, and 20 watts of power. So in principle reproducing its operations should be possible with resources in that neighborhood. Although it may be centuries before we know how to do it.
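For a rough sense of the scale involved, here’s a minimal back-of-envelope sketch in Python. The 20-watt brain figure is the one cited above; the Hoover Dam capacity is approximate, and the power draw per simulated mind is a purely hypothetical assumption, not an estimate from any actual emulation project.

```python
# Back-of-envelope power comparison. The 20 W brain figure comes from the
# discussion above; the per-simulation power draw is purely hypothetical,
# and the dam capacity is approximate.

BRAIN_POWER_W = 20               # biological brain, roughly 20 watts
HOOVER_DAM_CAPACITY_W = 2e9      # roughly 2 GW of generating capacity
ASSUMED_POWER_PER_SIM_W = 1e6    # hypothetical: 1 MW per simulated mind

sims_supported = HOOVER_DAM_CAPACITY_W / ASSUMED_POWER_PER_SIM_W
brains_supported = HOOVER_DAM_CAPACITY_W / BRAIN_POWER_W

print(f"Simulated minds on a Hoover-Dam-scale source: {sims_supported:,.0f}")
print(f"Biological brains on the same power budget: {brains_supported:,.0f}")
print(f"Efficiency gap under these assumptions: {ASSUMED_POWER_PER_SIM_W / BRAIN_POWER_W:,.0f}x")
```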
Plus our brain is not really designed to run as efficiently as possible, given that it’s a product of evolution. In principle it should be possible to run it on even less than 20 watts of power, although it could just be that running the brain on a biological substrate is inherently more energy-efficient than running it on a server farm in simulation.
Evolution does actually do a very good job with energy efficiency. A solution that uses less energy will be selected over one that uses more. But there’s definitely no guarantee that it’s the most efficient solution.
Fair point. Maybe AGI will give us a leg up in designing humanity’s next level.
“But there’s definitely no guarantee that it’s the most efficient solution.”
Bingo – evolution finds local optima. This is a big part of why I’m scared about AGI in the long run. (Also, genetically engineered pathogens, in the medium term.)
It’s odd to see the longtermists not applying a discount rate to the welfare of future humans. They used to – I remember the argument being that even if you heavily discounted future humans, there would be so many of them that they still weighed heavily on the present.
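That old argument is easy to make concrete. Here’s a minimal sketch; the 2% discount rate, the 500-year horizon, and the future population figure are all made-up numbers for illustration only, not figures any longtermist actually uses.

```python
# Toy illustration of the argument recalled above: even after heavy
# discounting, a large enough future population can still dominate.
# Every number below is a made-up assumption for illustration only.

discount_rate = 0.02        # assumed 2% per year discount on future welfare
years_out = 500             # assumed distance of the future generation
current_population = 8e9    # roughly today's population
future_population = 1e15    # hypothetical far-future population

discount_factor = 1 / (1 + discount_rate) ** years_out
discounted_future_weight = future_population * discount_factor

print(f"Discount factor after {years_out} years: {discount_factor:.2e}")
print(f"Discounted weight of the future population: {discounted_future_weight:.2e}")
print(f"Undiscounted weight of today's population: {current_population:.2e}")
# Under these assumptions the far future still outweighs the present,
# which is why the choice of discount rate matters so much to the argument.
```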
Carroll asks MacAskill about discounting in the interview. MacAskill basically dismisses it. I think he compares it to discounting suffering in a far away country as compared to suffering in front of us. The issue, I think, is that we know about the geographically distant suffering with far more certitude than we do far future suffering. (At least today with current technology. People a century ago could claim more uncertainty.)
I think he’s wrong on that. There’s just far less causal connection between you doing something now and what happens to someone a century from now versus a month from now.
I think Daniel Schmachtenberger’s and Bret Weinstein’s opinions about existential risks are on the money. Personally, if I wake up unscathed in the morning and the sun is shining, I feel fortunate.
If I were a betting man, I’d give us a 50% chance as a species of surviving the next 10 years. About 10% for the next hundred.
Biological and technological advances are evolving so rapidly that Jonny in his basement will be able to unleash, in the foreseeable future, a pandemic the likes of which we haven’t seen, or drones for that matter. We are f/(ked, in my humble estimation. Nick Bostrom’s Argument No. 1 seems more likely to me for us.
Those odds are pretty grim. Not sure what my numbers would be, but I’m more optimistic. That doesn’t mean we may not be in for a rough time.
I was trying to be conservative too! Haha. A ‘rough time’ I consider an understatement, but yeah, it’s gonna get pretty, pretty, pretty bad. Rational materialism over the Judeo-Christian tradition is our biggest existential risk in my opinion.
2/3 of the world’s population is not Christian or Jewish. And humanity existed for hundreds of thousands of years before those particular traditions developed. Theology aside, what about them do you think is crucial?
I’m with you on “longtermism” as a guiding principle being foolish and unrealistic. However, “the best thing we can do for those remote descendants is survive and flourish as much as possible.” What does that mean? Isn’t that pretty much what humans have always done? At least the most “successful” ones? Isn’t that Elon Musk and Donald Trump? With their many offspring, multiple wives and wealth, power and influence? I’ll buy that, actually. “It’s good to be king.” Yes?
‘It’s good to be King’ haha love that Mel Brooks line
I was referring to flourishing of the population as a whole, not individual billionaires, or some kind of special class. In general, I think capitalism is fine, but as part of a mixed economy which moderates its most extreme outcomes.
It’s hard to deny that that’s what we’ve done, collectively. The Steven Pinker view: overall we’ve come a long, long way from the days of old. My thinking is: we’ll muddle through. And yes, there will be winners and losers. That won’t change.
So …
Are you saying you think everyone can thrive and flourish? But for ___?
Some think Technology/AI can “save” us (from ourselves.)
I’m not making any particular statement about what might prevent flourishing. But I do think a lot of societal ills are ameliorated with education and equality, particularly of women.
On AI saving us from ourselves, that reminds me of Gort from the original Day the Earth Stood Still. Klaatu described a society that had handed off its governance to the robots. Of course, machines like that would be the product of our own thought, an extension of our collective will, so they wouldn’t be so much saving us from ourselves but the tools we use to control ourselves.
Just as Longtermists are allowed to speculate about the future in order to potentially advance their interests, I am too.
For one, I hold the unpopular position that humanity will never develop self-sustaining places to live beyond our planet. Even if we someday build highly advanced conscious robots to explore space, I don’t believe that even they’ll become self-sustaining elsewhere in the end. The circumstances should simply be too challenging, and space too vast, for us to build machines that last long enough out there to also be maintained and replicated from local materials.
Furthermore my naturalism prevents me from believing that brains create subjective existence by means of the proper coding alone. Instead subjectivity should exist when a computer animates the right kind of physics, such as certain neuron produced electromagnetic fields. Experimental demonstrations of this should effectively end the fantasy that we’ll some day create ridiculous numbers of conscious entities living their lives under the guise of computer modeling alone.
So if Longtermists are wrong on each of these counts, how do I suspect the future of humanity will generally go? Perhaps the biggest relatively near-term paradigm shift, one that few beyond myself foresee, should be the dismantling of liberty. With simple algorithms, China’s highly monitored citizens are being electronically punished and rewarded for their behavior, for teaching purposes. This should cause them to progressively become extremely effective producers of goods and services, and so amazing per-capita wealth should be created. Once this dynamic becomes generally understood, I expect the liberal world to fight like hell to save human liberty. I also expect it to fail, however, given that liberal people should progressively choose to forfeit their liberty for the wealth associated with being part of a society that’s intimately manipulated by the state.
Then secondly if it’s true that feeling good constitutes value, as I believe, and if it’s true that feeling good can be attained far more efficiently by directly manipulating the brain, as I suspect, then fulfillment in life should be attained this way more and more.
I understand why people would rather not believe that humanity will never expand beyond our planet, would rather not believe that subjectivity requires more than code, would rather not believe that liberty will largely be defeated, and would rather not believe that we’ll end up directly causing ourselves to feel good rather than earning this through standard life. What I don’t have however is good reason to doubt any of these predictions.
Mike,
Your position seems pretty close to Eric Schwitzgebel’s from his January “Against Longtermism” post. I wonder if you’ve read it? I’m in general agreement. https://schwitzsplinters.blogspot.com/2022/01/against-longtermism.html?m=0
I’m with you on biological humans not colonizing space, or if they do, it will always be a small subset of what the machines are doing. But I am more optimistic about what those machines will accomplish. The solar system is currently being explored by machines, while humans barely venture outside of Earth’s atmosphere. And if some of those machines are built in our image, then they will become our progeny into the universe. Space will be owned by the machines.
We’ve discussed the Chinese system before. We’ll see. I think it’s interesting that India’s population is on track to surpass China’s in the next decade. People have often looked at China’s natural resources and population and assumed they will own the future. But I also think we should be watching India carefully.
Somehow I missed that post from Eric Schwitzgebel. (Or utterly forgot about it.) Thanks!
I doubt that you would have read and forgotten about that post Mike. Matti had some fun with it too. But it’s interesting to me that with your current post you ended up coming down so close to Schwitzgebel’s position without being influenced by it.
On robots exploring space, I’m certainly not saying that this will end. I’m saying that I don’t foresee such robots ever being able to cut the cord with Earth to become self-sustaining out there. I think they’ll always be one-shot deals that will require replacing when they break. And even if we were to build certain elements of them to be far more intelligent than humans (not that I expect us to), given the challenges of space I still don’t see how they could independently reproduce elsewhere for long. I’m not sure that the question of how they might practically exist independently of Earth has yet been given earnest consideration. Though much of the conventional wisdom depends upon this happening specifically, to me the presumption seems based more upon faith than reason.
In a sense India soon becoming more populated than China helps make my point. Government engineering is creating a “quality” rather than “quantity” China. Given its aims I wouldn’t say that it will soon if ever thrive in artistic fields, though from what I’ve heard its science and technology are progressing at a very rapid clip. India and Pakistan may not be too bad with this either, though as China’s social engineering takes hold I suspect that it will leave them in the dust, as well as western science and technology. Perhaps they’ll even be able to do what the west has failed to do for so long, or harden up the soft sciences? In any case you and I should be around to monitor what happens for a few more decades, and hopefully with more rather than fewer of our marbles intact. But whatever.
Me converging on Eric S’s position doesn’t strike me as strange, since the steps to get there all seem rational. But I suppose we could just have similar biases. He is a moral realist, while I’m not, but that’s more a metaethical difference than a difference in practice.
The only reason I can see to doubt that robots could build copies of themselves is to assume that the ability to do it requires some form of magic that only biology possesses. But living things are physical systems that operate according to the laws of nature, so I can’t see any reason in principle we can’t engineer a system to do the same thing. No faith required. And it seems like they’d have far more options for environments and forms they could exist in, as long as there’s an energy source.
A more plausible concern is staying functional across interstellar distances far away from the energy of any star. But while the Q-Drive has a lot of issues, one thing it would be good for is capturing energy from the interstellar medium.
India still has a ways to go to catch up to China economically, so only time will tell. But I have far less confidence in China’s social engineering than you do.
I meant that it was interesting to me that you converged on a position of Schwitzgebel’s that I personally agree with without reading his post first Mike. And I now see that Sean Carroll seems more of our mindset here as well, not that I’m generally a fan of his. And yes moral realism is a tremendous general obstacle as I see it. Even most supposed moral anti-realists seem not to grasp that welfare should exist specifically rather than generally. Just because something is good for one entity shouldn’t mean that it’s good for another. Nevertheless both moral realists and anti-realists (like Sean Carroll) seem to continually speak of absolute rightness and wrongness, as if it applies generally. I instead speak of specific instantaneous examples of value, or good. Even personal value should not be continuous but rather be made up of countless separate moments. They seem roughly joined through memory of the past and anticipation of the future.
I don’t want to discount the challenges of getting spacecraft that far away to another planet Mike. Yes, that should be quite difficult. But I also factor in that many thousands of ships should need to be sent, given the infrastructure required to use the materials of a planet to create parts made of metal, glass, plastic, rubber, and so on, such that they could potentially rebuild themselves to do it in yet another solar system. They’d need to not break down critically before they were using that planet’s materials for self-sufficiency. And even with a well chosen planet these machines would not have evolved for those conditions, unlike us. So yes, I think there is plenty of room to doubt the conventional wisdom that our machines (if not the human itself) will become self-sustaining elsewhere.
So they aren’t Judeo-Christian? OK, humanity is in its infancy, and we haven’t ever seen technology make an impact like the social-media age, and that is turning into a disaster.
You are the scientist. Tell me how they will forge Meaning 3.0. Meaning 1.0 was religion and institutions. Meaning 2.0 was the Enlightenment, the separation of church and state. And 3.0? What will it entail? Because if it is rational materialism, we are in deep guano.
I hope we’ll figure out our future values and norms through discussion and consensus (with both religious and non-religious people included), rather than the more traditional routes of dogma, coercion, and conquest. But only time will tell.
“Survive and flourish as much as possible.” I would add, “while not destroying too many resources on which we rely.” This includes biodiversity, livable temperatures, and our coastal cities, for starters. I would also add “without creating new dire threats.” Killer drone swarms, genetically engineered biowarfare pathogens, and corporate- or military-designed AGI come to mind. I do not trust most CEOs, generals, or programmers to grasp the ways that a program’s interpretation of “maximize profits for the corporation” or “keep the country as safe as possible” (or more detailed instructions) might differ from their own, or that of any human.
At this point, the best or only way to avoid some of these threats might just be to ramp up research focused on foreseeing and avoiding problems. But that’s worth doing.
Sure. By “flourish” I mean for the whole population and in a sustainable manner for that population. Partying like there’s no tomorrow comes with a giant hangover when tomorrow finally arrives. It’s when the putative hangover is centuries or millennia away that I become skeptical.
I have no problem with research to try to foresee and avoid problems, but I think it should be focused on more near term, more predictable threats rather than worrying about philosophical hypotheticals.
Sure, as long as we recognize that for many threats, enough of the probability mass lies in the near term to merit a hard look, even if more probability is further out.
Mike, I’m with you on this. The whole idea of longtermism baffles me, with its argument for a moral obligation to hypothetical future generations over our moral obligations to our living fellow humans. I recently, in another forum, submitted a critical review of a book by another Longtermist, Toby Ord’s The Precipice. Ord’s work is more pessimistic than MacAskill’s is, I assume. Nevertheless I think my critique applies generally. I submit that the whole idea of Longtermism is part of an effort to keep a limited and moribund consequentialist ethics relevant. And it can be summarized by altering Commander Spock’s famous ethical axiom: “The needs of the living are outweighed by the abstract needs of unknown future billions.” Horse feathers!
I’m not familiar with Ord’s views, other than what Eric Schwitzgebel discussed in his post about it. (And which I just skimmed today.) Honestly I only know about MacAskill’s views through the interview he did with Carroll, and the little bit revealed in the video. But yeah, neither approach strikes me as compelling.
I do think you note a real issue with consequentialism, which is figuring out what the actual consequences of any decision might actually be. Typically those consequences are followed until a result that matches some preconceived intuition is reached, which of course seems to make the logic redundant.
“It’s tough to make predictions, especially about the future” (Yogi Berra). I feel compelled to add a bit to my simple “horse feathers” critique (above). And, by the way, I submitted a review of Ord’s book on Goodreads which has more meat on the bones than this snippet, if anyone is interested. There I point out the ethical dangers in this type of thinking, which I won’t repeat here.

Anyway, MacAskill and his fellow longtermist, Toby Ord, are both philosophers—not historians. Why do I think this is significant? Well, note first that “Longtermism” is an ethical approach to understanding humanity and its future. As such it does not utilize the well-developed tools of historical scholarship in predicting its far-distant dystopian futures. There are so-called “cause and effect” type arguments by both philosophers. But such arguments are at best thin and speculative. They both rely more on certain consequentialist ethical arguments—if such and such dire consequence is going to happen, then we “owe” future generations certain positive actions now. It’s their premises that need work. One cannot ignore the speculative nature of the premise just because the predicted effects are so dire. That is simply sloppy scholarship.

I mention historical scholarship for the following reason. Historians still argue over the history and historiography of past events like the causes and consequences of the French Revolution. Those events are easier to analyze because they are not the future. Moreover, the historical source material is readily available for analysis. Yet we still plow and re-plow history for more accurate insights into our condition and how exactly we got here. This is, in my opinion, the weakness of “Longtermism.” It’s attempting to postulate the future as history—a very tricky business.
I don’t quickly see your review Matti. I wonder if you could paste a link to it here?
PhilEric, I think this gets you there: https://www.goodreads.com/review/show/4472636982?book_show_action=false&from_review_page=1
Thanks Matti. That largely matches my own concerns. The future is an ever increasing number of branch possibilities. Making ethical arguments based on which of those branches we think we’ll be going down is essentially just trying to justify intuitive desires with rationalizations.
I agree we know the past with much more certainty. (Although our knowledge of the past is often full of uncertainties itself.)
A lot of people have started piling on the longtermers in the last few days. I’m not seeing many in the intellectual world coming to their defense. It’s not looking like this will be an enduring movement.
Predicting the future is a thankless task, even for the relatively short term (hundreds to thousands of years). I think the best we can do is to compile a list of the known unknowns about future humankind’s development.
Suppose the current course of humankind’s development is optimistic for humankind’s future. In that case, we should continue on that course and try to mitigate the possible negative known unknowns as much as possible. That is the only course of action that is realistic to some degree.
In my book (available on Amazon), “Subsurface History of Humanity: Direction of History,” I found the objective trend of humankind’s development during the last 44 thousand years. In the same book, I identified seven known unknowns for humanity’s near future.
If humankind tries to follow the suggested path, that would be a positive development. However, we would need to repeat such analysis and mitigation again and again for a long time.
Good points. And you made me go back and peruse your known unknowns. I agree with the one about us merging with AI. In my view, bioengineering and machine engineering will eventually reach a point where they blend together. It’s why concerns about AI “taking over” have never been compelling for me.
I’ve often wondered myself about how long the current interglacial period, which encompasses the entire age of agriculture and civilization, will last. When it ends, it might have profound impacts on humanity. Depending on where we are, it might just mean a transition to new forms, or it could result in a population crash and a retreat of the human-occupied portions of the planet. The good news is that interglacials don’t end overnight, but gradually across millennia, so there would be time to adapt. (Unlike if we ourselves cause a severe shift in climate.)
Totally agree that we’ll never be done with this, and that analysis has to be an ongoing process.
I’m afraid I have been watching the meta-discussion forming around longtermism without jumping in to many of the articles. Guess it’s time. Another relevant, and I think very good, discussion is in Erik Hoel’s substack: https://erikhoel.substack.com/p/why-i-am-not-an-effective-altruist.
I need to listen to the Carroll/MacAskill discussion because I don’t know what MacAskill’s specific suggestions are. My inclination is, much like yours (surprise), that we can’t predict how individual actions will impact far-future generations, and so we are better off, individually, finding the correct “rules of thumb”. [Hint: actively seek to determine goals in other entities and cooperate if reasonable]. But I do think larger institutions have the responsibility to think longer term.
*
[note: robots should/will need to use the same rule of thumb]
I’ve been in pretty much the same mode with longtermism. Honestly, it barely impinged on my consciousness until I got it from three directions in the last week (the ones I linked to). Thanks for the Erik Hoel link. That post looks longish so I’ll have to swing back to it when I have more time. I have nothing against effective altruism per se, just an observation that if we all did that, no one would be doing the rubber meets the road part. But it does seem better than doing nothing.
On institutions having a responsibility to think long term, I guess it depends on just how long we’re talking about. The Salon piece worries about what might happen if there were ever a longtermist President. I think the chances of that are zero. Presidents don’t get elected on promises of what they’ll do for the longterm fate of humanity, but on what they’ll do for voters in the next few years.
Interesting point about robots and rules of thumb. There’s a tendency to assume they’ll be superior to us in all ways, but our actual experience so far is they seem vulnerable to the same biases we have.
Matti’s review linked above has inspired me to delve a bit deeper into my own position regarding Longtermism. If this is not right, then what is right? To effectively answer that and all such questions I believe that one must begin with a founding premise of what constitutes value, or good, regarding existence. What is the essential difference between existence which is good/bad versus completely valueless? I consider this constituted by what feels good as opposed to bad versus standard non-sentient function. All questions of welfare should effectively be reduced back to this simple axiom. What is the value of existing as you in a personal capacity? This should be constituted by an aggregate figure of how good/bad you feel over a defined period of time. It’s the same for the welfare of any defined society as well I think — just aggregate the combined moments of good (+) to badness (-) for each member over a defined period of time to get a theoretical figure for the whole. Of course we don’t currently have good ways of empirically estimating such figures today, though theoretically we should eventually. (I suspect that value is constituted by certain forms of neuron produced electromagnetic radiation, so measurements of associated parameters here might provide an ultimate sort of scale.)
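To see the aggregation just described written out, here is a minimal sketch in Python. The valence samples and the scale are purely hypothetical placeholders, since, as noted above, no empirical way to measure such figures exists today.

```python
# Minimal sketch of the aggregation described above: the welfare of a subject
# (an individual or a defined group) as the sum of moment-to-moment valence,
# positive for feeling good and negative for feeling bad. All numbers are
# hypothetical placeholders; no empirical valence scale exists today.

from typing import Dict, List

def subject_welfare(valence_moments: List[float]) -> float:
    """Aggregate welfare of one subject over a defined period."""
    return sum(valence_moments)

def group_welfare(members: Dict[str, List[float]]) -> float:
    """Aggregate welfare of a defined group: sum over members and moments."""
    return sum(subject_welfare(moments) for moments in members.values())

# Hypothetical example: three members sampled over the same period.
group = {
    "member_a": [0.5, 0.2, -0.1, 0.4],
    "member_b": [-0.3, -0.2, 0.1, 0.0],
    "member_c": [0.6, 0.3, 0.2, -0.4],
}
print(group_welfare(group))  # 1.3 under these made-up numbers
```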
There are several reasons to believe that Longtermist plans are weak from this utility-based perspective. Here it’s popularly presumed that humanity and/or its machines could eventually colonize the universe, as well as create endless software-based conscious entities no less, or perhaps far more, valuable than a real human. I consider each of these notions ridiculous. Furthermore it holds that humanity could become governed stably for a very long time if it simply survives a current vulnerable period. To me this seems highly optimistic — future governments should have their own challenges to deal with given their circumstances.
Nevertheless couldn’t investing today for the welfare of future humanity still be considered “good” in various ways? Of course it can! For the most part so far, technological humanity has raped and pillaged its environment for instant gratification. In recent decades associated worries have led many governments to moderate such behavior. Not only can this mitigate existing worry (which feels bad) and add to existing hope (which feels good), but if future humanity benefits from such policies then the welfare of this greater subject should also improve. So even if one spurns full Longtermism, one might still support various more sustainable human activities.
I realize that various educated people should read this and confidently assert that sentient welfare cannot be reduced to the single parameter that I’ve identified (or an aggregate of how good/bad any subject feels over a given period). Haven’t philosophers been debating this question for millennia? Actually for the most part I’d say that they’ve been debating the rightness and wrongness of behavior, often referred to as “morality”. The model which I propose however is just as amoral as any model accepted in our hard sciences. Though this premise already founds the reasonably hard science of economics, more basic mental and behavioral fields such as psychology seem to suffer given that they’ve not yet taken it on as a foundational premise from which to potentially build. Essentially the thought is that we shouldn’t be able to model ourselves very well psychologically without the formal identification of what’s ultimately valuable to us.
I think we’ve talked about this before, but your proposed value calculus is very close to Jeremy Bentham’s hedonic calculus. https://en.wikipedia.org/wiki/Felicific_calculus
And you’re right that most people don’t consider this to be a practical framework, although an act utilitarian will see it as valid in principle. The problem is that affective valences are very rough approximations for mental states that arise due to evolutionary programming and an enormous number of factors. A particular TV show may give me pleasure but not you. A particular experience may give substantial pleasure the first time, but not on the twentieth iteration. Something may give me pleasure by reminding me of something from my childhood, but do nothing for anyone who hasn’t had that experience. Trying to attach any meaningful quantification to all that seems pretty forlorn.
Investing in the welfare of future humanity could be considered good, but it has to be considered in terms of the degree of uncertainty and opportunity costs. We might have a decent idea of what effect a change might have within our current lifetimes and those just being born. But any claim to know what effect it might have centuries or millennia from now has so much uncertainty that it should be discounted into speculative guessing.
In terms of opportunity costs, many longtermers are pretty dismissive of climate change, because it doesn’t represent the kind of existential threat to humanity that they’re concerned about. However, anyone concerned about the standard of living for existing children throughout their lives will be more concerned.
Mike,
Back as a college kid in the early 90s I had a political science professor who knew my position regarding value. For extra credit he once had me write a paper on Bentham. Before that I hadn’t even heard of the guy. Though Bentham did spawn one of the three popular name brands of ethics (along with deontology and virtue ethics), his work leaves me plenty to criticize. For example he seems to have been satisfied to present his position under the paradigm of morality. Thus he’d state things like “good exists as the greatest happiness for the greatest number”. Conversely my position is that this is only what’s good for that specific subject. Other subjects should have their own utility based interests to acknowledge as well. So it’s not “the greatest happiness of the greatest number” that’s good from my perspective, but rather the greatest happiness of any defined subject that’s good for it.
I consider my amoral value position far more simple and effective than his moral value position, or at least for the domain of science. Without an effective axiological premise from which to build, our mental and behavioral sciences should continue to flounder. Economics seems to be the only branch that formally accepts this premise (and obviously without my help). And why was a utility premise able to take root and grow in economics though not psychology? I suspect because economics is far enough from center to not threaten the social tool of morality. So the psychological egoist may have faced too much political heat to succeed so far.
“The problem is that affective valences are very rough approximations for mental states that arise due to evolutionary programming and an enormous number of factors.”
If my position held that everyone should effectively value the same things then fine, though fortunately it does not. Thus my proposal seems unchallenged here. And observe that we do seem able to quantify value today in at least a rough sense. One example would be the amount of money that people are willing to pay for various goods and services. Also questionnaires can provide such information. Ultimately neuroscience should even reach a point where welfare can objectively be measured to some degree.
Beyond economics the central reason that mental and behavioral sciences do not yet formally acknowledge what constitutes value regarding the subjects of their studies, I think, is the evolved social tool of morality. To progress in this regard we may need a respected community of meta scientists to provide science in general with basic metaphysical, epistemological, and axiological principles from which to function.
Eric,
Are you familiar with preference utilitarianism? It seems a lot closer to what you’re talking about. I think it’s championed by Peter Singer. Its focus is on maximizing people’s preferred states. Of course, the chief issue is that people’s preferences might conflict, and I’ve never seen a description of how that might be resolved that isn’t, to some degree, arbitrary.
Ultimately I’m not a moral realist. I think all of these frameworks are really just sales pitches for adopting particular social frameworks, with no strict fact of the matter on which one is the one true ethics.
I’m sure you know that economics itself gets a lot of criticism for being insufficiently scientific. I think a good amount of it is, although what it says tends to be much more limited than anyone is comfortable with. But a disappointingly large share of economics is simply ideology in disguise, and the field tolerates far too much of it, ultimately lessening its collective credibility.
I’m not sure I’d looked up preference utilitarianism before Mike, but no I don’t think this position provides much in the way of my own message. I guess you’re referring to its clause about sentient animals being subjects of welfare even when they “live in the moment”. It’s a start I suppose though in the end I make no moral claims about the objective rightness or wrongness of any behavior. Apparently you’re the same. Thus I wouldn’t call myself a utilitarian, and I certainly take issue with Peter Singer’s blatant charity mongering. I consider all moral notions, whether his, mine, or any other, to exist as an evolved social tool of persuasion. Essentially humanity has survived better given our sympathy, empathy, and concern about how we are thought of by others. Alter or eliminate that and you should alter or eliminate human morality. Also I’d say that whenever such characteristics exist in other animals, they may be said to display moral inclinations as well. Beyond grasping the psychology by which this aspect of our nature arises, I’m not a fan of overt moral oughting.
On economics, I realize that various economists use their understandings to promote their own ideologies. I wouldn’t call any such ideology “economics” however. What I mean by the hardness of the field is that it contains a vast assortment of behavior-based models that all economists accept in order to indeed be “economists”. Furthermore I’m able to observe that all such models ultimately reduce back to the premise that value is ultimately utility based. Conversely as I understand it the field of psychology does not have a vast assortment of behavior-based models that all psychologists accept, nor a founding value-based premise upon which any of its meager common understandings are ultimately based. So I’d like this weak central behavior science to become founded upon the same premise used by the strong peripheral science of economics. The thing that has prevented this progression so far, I think, is the social tool of morality. Essentially utility-based value can have various repugnant implications which prevent psychologists from universally accepting it. I consider this repugnancy evidence of correctness however, since reality itself seems repugnant in associated ways.
PhilEric,
I will comment only on a tiny part of your very extensive remarks—that is, that one must begin at “what constitutes value, or good, regarding existence.” As to that I think I agree with you. In fact, from ancient times until the Enlightenment the so-called “good life” was an ongoing, and in many respects a fruitful, debate. However, scholastic philosophy in the late Middle Ages grew stale, unimaginative, and rigid. Hence the new enlightenment ideas easily toppled it. Among my many complaints with Enlightenment thinking, the loss of an ongoing debate about the good life was one of the most unfortunate. However, we are slowly tapping our way back to it with, for example, the midcentury resuscitation and rejuvenation of virtue ethics.
However, your subsequent remark that “all questions of welfare should … be reduced to [a] simple axiom” is a clear example, in my opinion, of an Enlightenment mindset. And I think it is a needless hindrance to productive ethical dialog. I submit that our Enlightenment mindset—which is the air we breathe—is at times a trap. In short, we often approach ethical issues with the intellectual habits developed in science. If an ethical insight is convincing in a certain case, we try to reduce it to a simple formula and apply it to other cases or, worse, apply it to all cases. One can see this in the limitations of deontological and consequentialist ethical systems. But actually the sciences themselves don’t function that way. So, I submit that it’s probably a mistake to seek the same goal in philosophy, especially ethics.
It’s good to hear from you on this Matti. You seem to be applying my framework to philosophy and its moral/ethical positions however. I actually mean for this to exist at a more fundamental level. For example, what I’m referring to is theorized to constitute the value of existing as any sentient creature, whether a rat, fish, or spider, or even any number of them as a whole. We don’t attribute ethical or moral function to non-human creatures of course, even though we do consider personal welfare to exist for them given their sentience.
If you get my meaning regarding an idea that lies outside of traditional philosophy, clearly this could be applied to the human as well. The parameters are that this be perfectly amoral (as is science in general), subjective (and so a specific subject must be noted), and constituted by an aggregated figure of how good/bad it’s been caused to feel over a noted period of time. One name that I’ve toyed with for it is Amoral Subjective Total Valence, or ASTV for short.
Here you might wonder why I’d be concerned with identifying the nature of amoral welfare itself? I believe that this understanding should help certain scientists do their work more effectively than they’ve been able to in the past. Physicists needn’t concern themselves with value given their valueless domain, though the field of psychology should be crippled without such a formal understanding I think. So that’s my goal — not to steal something from philosophy, but rather to help our soft mental and behavioral sciences progress. I believe that academia will need a new “meta scientist” to help improve science in its most troubled spots. The only goal of these people would be to provide other scientists with various generally accepted principles of metaphysics, epistemology, and axiology from which to do science effectively.
Some questions now occur to me. Do you believe I’m wrong to believe that the personal to social value of existing is constituted by an aggregated score of how good/bad a subject feels over a given period of time? If so then what corrections seem appropriate to you? And regardless of that, am I right that our mental and behavioral sciences should not function as effectively as they might without a generally accepted premise of value from which to explore the nature of creatures like us for which value exists?
PhilEric,
And good to hear from you. When you mentioned my review of Toby Ord’s version of Longtermism I was hoping you’d want to discuss what I described as a “dangerous seed” in his thinking—something your friend Schwitzgebel seemed to have difficulty understanding. I consider Ord’s thinking quite dangerous indeed. However, I’m unfamiliar with MacAskill’s version of Longtermism. So, I can’t level that particular criticism on him. But there’s enough wrong with Longtermism in general that I’m unlikely to take any time with MacAskill’s book.
Now, regarding your latest post. As you know from our previous discussions, I need to engage in close reading of your comments to feel comfortable that I fully grasp your meaning. So, I’ll need to ponder a bit before I respond fully. Nevertheless, I can state now that I think you are wrong if you truly think that physicists are engaging in a “valueless” endeavor—dead wrong. Moreover, and more importantly, I think you are also wrong in your starting point that human value is “constituted by an aggregated score of how good/bad a subject feels.” I agree with Mike that this sounds very much like Bentham’s “hedonic calculus.” And there is a lot wrong with that. As I am no fan of consequentialist ethics in general, and if that is your meaning, then I have little interest in spending much time on it. And, to reiterate my previous point, I think you are caught up in a restrictive Enlightenment mindset. That is, an approach to ethical issues with the intellectual habits developed in science—or more accurately, our myth of the scientific method. In short, the reduction of one ethical insight to a general, even universal, ethical principle. And, as I also pointed out, the sciences really do not work like that anyway. So, I’ll ponder your comments a bit, but I think this is a preview of where I’ll end up.
Matti,
We can discuss Ord’s perspective if you like. Though I do disagree with him, I suppose that I don’t consider his position all that dangerous since I give such sci-fi scenarios virtually no credence. And actually it seems to me that Schwitzgebel supported your concerns in that January post. For example he said this:
“Matti: Yes, I agree that is a dangerous seed. Well put. I’m not sure Ord would (or should) disagree that it is dangerous, though unfortunately he does not specifically address its danger at any length in the book.”
Regarding physics, I didn’t mean to imply that it was valueless as a human endeavor. Given that we expend tremendous resources trying to make sense of it, of course it’s valuable to us. What I meant was that the work of physicists does not depend upon them grasping the nature of value. Value seems to emerge beyond their domain such that it must instead be dickered with in our mental and behavioral sciences. Furthermore my point is that while the reasonably hard peripheral science of economics accepts utility as value, the quite soft central science of psychology does not. I consider this problematic.
Regarding my position on amoral value, consider the reasoning behind it. Before the emergence of life here, nothing that happened should have been valuable to anything. Did life change this? No; as in the case of plants and microorganisms, existence should still have been valueless. Brains? No, they’re just biological computers. Once certain brains began to create sentient experiencers however, often referred to as “consciousness”, value should have emerged. Existence could then feel anywhere from extremely bad to good from moment to moment for an associated experiencer, or value as I see it. Thus it seems to me that amoral value should exist as an aggregate score of the positive to negative experiences of any such subject over a defined period, whether a single sentient being or a defined group of them. Just because the human would eventually emerge with a complex set of moral inclinations, I don’t see how this would alter the nature of value that exists below. I do grasp why our moral nature would lead us to deny what’s ultimately valuable to us though.
PhilEric,
You are quite right that Schwitzgebel ultimately agreed that Ord’s ethical theory contained a dangerous seed. However, it didn’t seem readily apparent to him until I laid it out a second time. He first said he didn’t see a “lack of concern for current human beings in the book.” And apparently you don’t see his ethical stance as dangerous since you give no credence to sci-fi scenarios. I don’t understand that at all. Ord’s ethical stance is not, as I understand it, merely a thought experiment. I think he is quite serious. I guess I’m slipping as a communicator! There’s not much more I can say to make that point. Or I’m really missing something.
Anyways, let me just say physics is a human endeavor. And contrary to your position, the work of physics DOES depend upon physicists understanding and implementing certain values. However, a fuller explanation—deep into the philosophy of science—would take me far down a cul-de-sac I’m not prepared to travel right now. Maybe I can circle back on that one later.
More importantly, I have re-read your summary on “amoral value” including your back and forth with Mike. Frankly, I’m at a loss. First, amoral value sounds like an oxymoron to me, and that stops me in my tracks. The best I can conclude is that your ethical stance, if I can call it that, is some unique version of consequentialism or, perhaps, pure ethical egoism. Until I get it, I should follow Wittgenstein’s admonition: “Whereof one cannot speak, thereof one must be silent.” I’ll keep at it. After all, I began following this blog because my philosophical understanding was deficient in certain areas and Mike’s site seemed to be the best place to learn.
I don’t think we’ll ever be aligned on this Matti. Our positions seem quite opposed. At least we can agree that Searle makes a good point regarding the nature of consciousness. I presume that, like me, you also have little use for the quite popular Daniel Dennett. Furthermore, my past might help explain my disdain for human moral notions.
As it happens my parents instructed me quite heavily in the rightness, and certainly the wrongness, of my behavior when I was young. And for a while they succeeded. My beloved authority figures were all that I had in life so I did try to follow their moralistic teachings. But often enough I’d notice others not following those rules as well. Given this divergence I’d sometimes find myself vulnerable to the behavior of others and so feel cheated. And when I came home upset about my treatment, what was their moralistic salve for me? Crap like “Now that you know how this feels, just make sure that you don’t do it to others”. So a kid who’s been injured by the inequities of the world would also be obligated with further restrictions?
Thus I grew skeptical of moral oughts and highly curious about the fundamentals of human nature itself. It was at about the age of 14 that I came to a formula that seemed to make sense of virtually all that I’d observed. The idea here is that from the most despicable of us to the most loved, we all function as self interested products of our circumstances. I’m not saying that people don’t do stupid things. Clearly they do. But I think each and every one of us attempt to promote our own happiness given the circumstances that existence has set up for us, and regardless of how wonderful or horrible we’re perceived.
I don’t think you should fear Longtermists because people have a hard enough time investing in their own futures let alone the futures of others, and regardless of how seductive various sci-fi scenarios happen to be. But should you fear my own prediction that the softest parts of the still relatively new institution of science should progressively become founded upon the premise of a utility based purpose? I don’t fear this because I’ve moved on from our various moral notions, and whether “virtuous” or not. But you might fear this.
In any case I’m pleased to call you a friend and I hope that you never get too angry with me. If I’m wrong here, at least now you might grasp what it is that has made me wrong.
PhilEric,
First, I have no feelings of anger at all, my friend. And I would hope I never succumb to such feelings. Open mindedness and tolerance are important means to the end of truth seeking. And I sincerely appreciate your biographical interpretation. I think placing ideas in historical and biographical context explains a great deal. For example (and excuse my digression), in my study of political theory I am convinced by the interpretation that much of the difference between the social contract theories of Hobbes and Locke is that Hobbes lived through a bloody revolution and civil war while Locke lived through the bloodless, so-called Glorious Revolution.
And, yes, we both deeply respect Searle and not so much Dennett. By the way, if you want some fun reading, check out a letter exchange between Searle and Dennett in the New York Review of Books from a few years ago. I think there are at least two exchanges and maybe three. It’s a legendary debate. Searle mopped the floor with Dennett!
For the record, I do not fear Longtermism. I think it’s an intellectual flash in the pan—one way for the dying breed of professional consequentialists to maintain some relevance and get published. It’s just another tired old end-times apocalypse. It’s just updated, of course, for sci-fi oriented nerds. So, I see its popularity. I’m just surprised that, except for one New Yorker reviewer and a handful of folks including me, so few see Ord’s theory as containing a dangerous seed. He favors some future evolved human race over people—hence my comparison in the review to Marx’s “species-being.” I could just as well have substituted the Übermensch.