In a throwaway line in a recent New York profile, Oxford philosopher and poster boy for “effective altruism” William MacAskill described a 2015 meeting with billionaire Tesla founder Elon Musk: “I spent five minutes trying to talk to him about global poverty and managed to get little interest.”
Recently, however, their interests seem to have aligned. In August, Musk tweeted a recommendation for MacAskill’s new book What We Owe the Future, noting: “This is a close match for my philosophy.”
What We Owe the Future makes the case for “longtermism,” which MacAskill defines as “the idea that positively influencing the longterm future is a key moral priority of our time.” It’s compelling at first glance, but as a value system, its practical implications are worrying.
First, some background. Since its beginnings in the late 2000s, the Effective Altruism (EA) movement has been obsessed with “doing good better”: using reason and evidence to optimize charitable giving so that it alleviates the suffering of as many people as possible.
In the movement’s early days, this meant promoting effective, evidence-based interventions in global health and poverty, such as distributing bed nets in the developing world — a marked departure from the usual philanthropic practice of donating to one’s alma mater or favorite museum. Today, however, those EA priorities are giving way to a new and questionable fascination.
Longtermism rests on the observation that our species emerged relatively recently, so we can expect it to persist long into the future. If all goes well, a vast number of people will come after us. So, if we reason rationally and impartially (as EAs pride themselves on doing), we should pay serious attention to the interests of this much larger future population.
Depending on how you crunch the numbers, even the smallest advance in avoiding existential risks may prove more valuable than saving millions of lives today. By this logic, “near-term” issues like poverty and global health don’t affect enough people to be worth the concern; what we should really be obsessing over is the chance of a sci-fi apocalypse.
The biggest threats to this future population are things like a rogue superintelligent AI, a nuclear disaster, or an unexpectedly virulent pathogen, so the movement places a strong emphasis on technology-driven research and solutions. It’s hard to argue against taking a long-term perspective: people tend to be short-sighted, and we constantly talk about leaving a better world for future generations.
But while this makes effective altruists’ latest obsession seem almost irrefutable, abandoning what would help people most today is not ethically sound. The shift to long-term thinking looks like a projection of the hubris common in tech and finance, grounded in an unjustified faith in its followers’ ability to predict the future and shape it as they wish.
It suggests that playing games with probabilities (what is the expected value of taming a speculative robot overlord?) matters more than helping those in the here and now, and that top-down solutions trump collective systems that respond to real people’s preferences.
Focusing on the future means longtermists never have to get their hands dirty with actual living people in need, or hold themselves accountable by criticizing the morally questionable systems that have made their own success possible. A not-yet-living population cannot complain, criticize, or interfere, which makes the future a far more pleasant sandbox in which to pursue your interests, be they AI or biotechnology, than an existing community that might push back or try to steer things itself.
To put it even more cynically, longtermism seems tailor-made to let the elites of technology, finance, and philosophy indulge their anti-humanist tendencies while patting themselves on the back for their superior IQs. The future becomes a blank slate onto which longtermists can project their moral certainty and indulge in techno-utopian fantasies, all while reassuring themselves that they are still “doing good.”
So it’s not surprising that someone like Musk – whose most memorable philanthropic moments include tweeting that he would donate $6 billion to the Nobel Prize-winning World Food Programme if it could convince him of its effectiveness, then never following up when its executive director replied in detail – finds the pitch convincing.
Source: “Longtermism supporters like Elon Musk and William MacAskill share a lack of concern for the present that is deeply flawed,” Independent.ie — https://www.independent.ie/opinion/comment/longtermism-supporters-like-elon-musk-and-william-macaskill-share-a-lack-of-concern-for-the-present-that-is-deeply-flawed-41967337.html