
Ninety Seconds to Midnight


A new philosophy steeped in the ideas of artificial intelligence, space colonization, and the long-term survival of the human species is gaining ground among the wealthy. However, there are reasons to question its goals and its ethics. Longtermists believe that not only could we colonize space and create simulated humans in giant servers around stars, but that we must. Anything short of a trillion-year multi-planetary existence for our species would be a moral failing. They also believe that all of our ethical actions should focus on the countless lives that may exist in that dim future, instead of on the people alive today. Is this the kind of ethics we should all accept, however? Philosopher and historian Émile P. Torres joins us to discuss Longtermism and its dangerous pitfalls.

EDITOR’S NOTE

The transcript of this story has been corrected since original publication. Interview subject Émile P. Torres was incorrectly referred to as he/him/sir in the original. Their pronouns are they/them. The audio version still contains this error.

Like this program? Please click here and support our non-profit listener-supported journalism. Thanks!

Featuring:

  • Émile P. Torres, Philosopher and Historian

Credits:

The Making Contact Team

  • Host: Salima Hamirani
  • Co-host: Amy Gastelum
  • Executive Director: Jina Chung
  • Segment Editor + Interim Senior Producer: Jessica Partnow
  • Staff Producers: Anita Johnson, Salima Hamirani, Lucy Kang, Amy Gastelum
  • Audio Engineering: Jeff Emtman

   

Music Credits:

  • Rocky Marsiano – Whatshappenin
  • Blear Moon – Ongoing Cases
  • Alex Productions – Born
  • Dilating Times – Faded Flowers
  • Danny Bale – Fern Music (Extended)

MACHINE-GENERATED TRANSCRIPT

Intro

Salima Hamirani: Welcome to Making Contact. I’m Salima Hamirani.

Amy Gastelum: And I’m Amy Gastelum.

Salima Hamirani: And Amy, I asked you to join me on the show this week because I’ve been going down this rabbit hole learning about something called longtermism, and I wanted to share it with you. Have you heard of longtermism?

Amy Gastelum: I have no idea what it is. Longtermism. That sounds good.

Salima Hamirani: Basically, longtermists think very, very, very far into the future.

Émile P Torres: We could exist on Earth for, you know, around a billion years.

Still, at that point Earth will become uninhabitable eventually, the oceans will boil, but we could also escape Earth, you know, and colonize space. And if we do that, we could exist for a really, really long period of time.

Salima Hamirani: Which seems like maybe a good idea at face value, but it can actually be kind of problematic. One of the reasons I want to do a Making Contact piece about longtermism is because it’s already having a huge impact on the world. But I don’t know how many of us would agree with longtermism or its values.

Salima Hamirani: Okay, so I wanna start by listening to part of a press conference for the Bulletin of the Atomic Scientists. And they’re the people who adjust what’s called the Doomsday Clock every year.

Bulletin of Atomic Scientists: Good morning. I’m Rachel Bronson, president and CEO of the Bulletin of the Atomic Scientists. (fade down under narration) I’m pleased to welcome you to today’s virtual press conference hosted at the National Press Club in Washington.

Salima Hamirani: So basically if they move the clock closer to midnight, that means that we’re closer to annihilation.

Bulletin of Atomic Scientists: Today, the members of the Science and Security Board move the hands of the Doomsday Clock forward, the closest it has ever been to midnight. It is now 90 seconds to midnight.

Amy Gastelum: It’s only Monday, Salima.

Salima Hamirani: I am so sorry. You know, I’m kind of attracted to these kinds of topics.

Amy Gastelum: That’s terrifying. I mean, I didn’t even know about this clock, the Doomsday Clock. It even has a name like that.

Salima Hamirani: Yeah, it’s pretty terrifying. But I don’t know if a lot of people would argue with the fact that, you know, there’s pandemics, there’s scary technology, there’s global warming… And so Amy, to sort of ground us, can you share with me your values around change and what we could do to shift our situation so that we don’t become extinct?

Amy Gastelum: I mean, I am a disciple of reproductive justice as a movement. We need to think about all the things that play into the health and wellbeing of our pregnant and parenting people, our children. I mean, to me that’s, that’s the future. It’s our human, this human resource. And so we have to think about the ways that housing, food security, violence, all these things play into the health and wellbeing, or not, of families and children particularly.

Salima Hamirani: Okay. So I want you to keep all of that in mind, stuff coming from reproductive justice. Because that is not how longtermists think at all. And I think it’s important for us to hold in mind our values when we talk about something like this philosophy, because it is sometimes a clash of worldviews. So let’s talk about this movement that’s gaining ground among the rich and the powerful.

Amy Gastelum: That’s never good.

Salima Hamirani: Yeah. It honestly never is. And you know, their philosophy is a little unbelievable in some ways. And because of that, I wanna start from the very basics and then move up the ladder of believability. Because I want you to have an experience of this philosophy.

Amy Gastelum: Okay. I’m scared.

Salima Hamirani: So, you know, I actually came across this philosophy by accident, you know, late at night when I couldn’t sleep, scrolling through Twitter, and there was an article in Aeon, which is a magazine, by a philosopher who studies the possible scenarios of human extinction, and their name is Émile P. Torres.

Émile P Torres: The past 10 years in particular, a lot of what I’ve published on has been global catastrophic risks and so-called existential risks.

Salima Hamirani: And Amy, I think you can possibly guess what an existential risk is.

Amy Gastelum: You mean like a, a catastrophic event that would put like the existence of humans at risk?

Salima Hamirani: Exactly. Anything that could possibly completely destroy the human race.

Émile P Torres: You know, the Spanish flu pandemic of 1918 probably would constitute a global catastrophe. And there are various other examples. Going back to 75,000 years ago, apparently there was this volcanic super eruption in Indonesia which resulted in there being about a thousand breeding pairs of human beings left on, on the planet. So that would be another example of global catastrophe. So that’s the sort of cheery topic that my work has focused on over the last decade.

Salima Hamirani: And Émile was actually part of the longtermist movement. And then they jumped ship, and since then they’ve been tracing the development of the philosophy. They’ve actually called this one of the most dangerous philosophies that exists. So to start, what exactly is longtermism?

Émile P Torres: Longtermism is this ideology that sort of has roots going back to the 1980s in particular. But it was really developed into a, like, cohesive research program really since, since about 2002, which is when an Oxford professor by the name of Nick Bostrom published an article in which he introduced the concept of an existential risk. And existential risk is like the central organizing concept around which the entire longtermist worldview is built.

Salima Hamirani: And there’s a reason that longtermists are so focused on existential risks, and that’s because they think very, very, very far into the future. So, you know, we have our ancient history.

Émile P Torres: Humanity’s been around for maybe 300,000 years. And the lineage of humans, or the, the genus Homo, goes back, you know, 2.6 million years or something. We’ve been around 300,000 years.

Salima Hamirani: But we also have this possible future history, which, by the way, is completely theoretical.

Émile P Torres: We could exist on Earth for, you know, around a billion years.

Still, at that point Earth will become uninhabitable eventually, the oceans will boil, but we could also escape Earth, you know, and colonize space. And if we do that, we could exist for a really, really long period of time, at least, you know, 10 to the 40 years. So that’s one followed by 40 zeros. It’s just a, a mind-boggling amount of time in our future. There could be an enormous future population of people.

Amy Gastelum: I am having so many thoughts, Salima. Like, you can probably see my face is just like, what are you avoiding in your current life by spending your time 10 to the 40 in the future? You need to deal with your feelings and you need to, like, deal with your relationships, and you’re here and now. You probably need a hobby. It’s like a word that’s beyond privilege. It’s almost like, it’s dark. That’s what I think. It’s very judgmental.

Salima Hamirani: My co-host Amy is trying to psychoanalyze the people who believe this. But you know, unfortunately they do have hobbies, which is part of the problem, but we’re gonna get into their hobbies later. So, you know, so far we’ve been sort of in this realm of science fiction. What if we live for a trillion years?

But unfortunately, people don’t see this as science fiction. And this is when things start to get a little weird, because for them, this is actually a moral philosophy, an ethical philosophy.

Émile P Torres: So then there’s the question of, well, if I want to do the most good possible, if I want to positively affect the greatest number of lives, and if it’s the case that most people who will ever exist will exist in the far future rather than in the present, maybe what I should do is focus on how my actions will influence these far future people.

Some of the numbers that they’ve given are, you know, like 10 to the 58 people in the future. So that’s a one followed by 58 zeros. And so even if I influence just 1% of those future people, that’s still a way greater number than the number of people who currently exist right now.

Salima Hamirani: And actually I wanna step back just a little bit, because it’s a little unclear how we get from this idea of floating in space for tens of billions of years to philanthropy or ethics, and the history is a little bit murky. But basically there’s a philosopher named Peter Singer, who in the 1970s wrote a book about utilitarianism and famine.

Émile P Torres: He made the argument that, you know, if you just bought some new shoes or, you know, a new suit or something, and you’re walking by a pond and you happen to see a small child who’s drowning, most of us would not really hesitate to ruin our shoes and our new suit running in to save the child.

The point of his scenario is that the location of somebody suffering shouldn’t matter. So in this scenario, you know, there’s a child who’s, you know, 10, 20 feet away from us.

But right now there are people on the other side of the world who are starving to death, for example. And so shouldn’t we sort of sacrifice our nice shoes and new suit by not buying those things in the first place and taking that money instead and giving it to those, to those people in need?

What they were getting at is the idea of altruism. We should be altruistic as best we can.

Now, how do I do that the best way possible? Maybe my altruistic impulses lead me to give my money to a charity that actually isn’t very good, and a lot of the money is just gonna go to, you know, the, the CEO or whoever is running it. It’ll just end up in the hands of the, the wrong people.

How can we use reason and evidence to figure out the most effective ways to be altruistic? So that’s how effective altruism was born.

Salima Hamirani: And I know that there’s a lot of terms that we’re throwing around, but actually effective altruism is a huge movement and is very tied to longtermism. They’re kind of bedfellows. And it’s sort of what gives longtermism its ethical or practical applications.

Émile P Torres: You know, sometime in the 2000s, you have the people who are developing this idea of effective altruism. And their initial focus is on global poverty. And then they, they sort of discover the work of Nick Bostrom, who I mentioned earlier.

And, you know, their 2002 article, which introduced the idea of existential risk, really emphasized the potential bigness of the future. They sort of discovered this and went, wow, okay. If most people who ever exist will exist in the far future, and if what I want to do with my finite resources is positively influence the, the greatest number of people possible, then maybe what I should be doing is focusing on these far future individuals, not on current day people or current day problems.

Émile P Torres: So yeah, I mean, to use another example, you know, I have finite resources. I’m trying to figure out how I can maximize the amount of good I do with these finite resources. There are 1.3 billion people in multidimensional poverty today. Okay. That’s a huge number.

But again, there could be 10 to the 58 people in the very far future, and maybe using my resources to benefit those people, by ensuring that they come into existence in the first place with some very small probability, if you crunch the numbers, may have a much greater expected value, as they would say, than taking my resources and helping the 1.3 billion people in multidimensional poverty. So that’s how you get this longtermist ethic.

The fact that the future could be so big is why we should pay more attention to how things go, you know, thousands, billions, trillions of years from now, rather than how they’re going right now.

Amy Gastelum: I hope we’re gonna stop and talk about this. Like, this is killing me. Okay. You know, is there a problem of, like, the ultra-rich deciding what the major problems of the world are, and then trying to solve them from their limited perspectives?

They can’t even do laundry. We already discovered that.

Salima Hamirani: Right. And that’s actually a huge criticism right off the bat, not just with longtermism, but with all of philanthropy.

Amy Gastelum: I was thinking, where’s the power? Where’s the conversation about power?

Salima Hamirani: Absolutely. You know, and Émile and I talked about that a lot: the issue of power, power structures, and how they play into longtermist thinking about the future.

Émile P Torres: It’s not just about sort of ignoring the power structures in the world, but also sort of, you know, embracing them and working within them to make as much money as possible.

For example, there’s this idea within EA called Earn to Give. So the notion of earn to give is you should pursue the most lucrative career possible in order to, you know, as, as one journalist put it in reference to Sam Bankman-Fried…

Salima Hamirani: Amy? Have you, have you heard of Sam Bankman-Fried?

Amy Gastelum: This FTX guy, right?

Salima Hamirani: Yeah, so Sam Bankman-Fried led FTX, which is now bankrupt. But it was a cryptocurrency exchange, uh, a crypto hedge fund.

News Clips: FTX founder Sam Bankman-Fried, once head of a cryptocurrency giant, now alleged federal criminal… Bankman-Fried and his co-conspirators stole billions of dollars from FTX customers. This is one of the biggest financial frauds in American history… Authorities arrested him last night after US prosecutors… Sam Bankman-Fried could be facing additional time if the government is able to prove its case in the slew of new charges just added to the FTX case here. (fade out) But I want to direct your attention to…

Émile P Torres: So, one of the, the journalists writing about him before FTX catastrophically imploded described Bankman-Fried’s goal as to get filthy rich, for charity’s sake.

Amy Gastelum: Charity is connected to power in ways that are really, truly evil. It can be, anyway. It can look like something nice and it can be something evil.

Salima Hamirani: Absolutely. And I mean, he was one of the richest people on the planet, so I wanted to point him out specifically because there are already some extremely wealthy, powerful people who do already believe this philosophy,

and they believe that it makes them ethical. So, you know, we’re almost to the halfway point of our show. And Amy, before we hit the break, I wanna get into the wildest part of the philosophy. So, are you ready?

Amy Gastelum: I don’t know, man. This has like already been such a ride. All right, take it away.

Salima Hamirani: Okay. So, up till now, we’ve sort of been talking about the moderate version of longtermism, and now we’re gonna get into the more radical version, if you can even believe that. Here’s Émile P. Torres again.

Émile P Torres: Philosophers, like, in the 1960s and seventies started to realize there are kind of two ways to maximize value. One is you could focus on the people who currently exist and try to increase their happiness.

Alternatively, you could create new people who, if they have net positive amounts of happiness, they are going to therefore increase the total amount of happiness in the universe.

Salima Hamirani: In order for them to increase the happiness of the universe as a whole, they have to create a lot more people. So how do you create a lot more people?

Émile P Torres: Not only should we survive on Earth for as long as we can, but we should colonize space, colonize as much of the universe as possible.

And at, at the very extreme, and this is something that a lot of the leading longtermists have strongly endorsed, we should create these massive computers in space that are maybe powered by megastructures that surround stars, that collect, you know, most of the energy output of those stars.

And the reason you would create these computers is that you’d be able to simulate virtual reality worlds. And in these virtual reality worlds, you could have a much greater number of people, cuz you can fit more people in a simulated universe than you can in the actual universe.

So the more of these simulations we have spread throughout the universe, the more happy people could come to exist in the future.

Amy Gastelum: Yeah. Okay. We can’t even get universal healthcare and they’re gonna take stars to power our supercomputers in space. I think the only thing that’s really that comforting to me about any of this, Salima, is that we already know the boundaries on what humans can even collaborate to get done.

Salima Hamirani: I also, I don’t know how they define happiness, because I can’t imagine that in this type of philosophy, that is this pro-capitalist, someone would just simulate trillions of people out of the goodness of their hearts, you know?

Amy Gastelum: Right.

Salima Hamirani: I feel like you would basically be a disembodied, eternal slave to whoever created you, as far as I can imagine that future.

Amy Gastelum: This is very thinly veiled. Yeah, exactly.

Salima Hamirani: So Amy, we have reached the halfway point of today’s show, but please stay tuned, all of you who are listening, because when we come back, we’re gonna talk about why Émile P. Torres called longtermism the most dangerous philosophy.

Lucy Kang: You’re listening to Making Contact. Just jumping in here to remind you to visit us online. If you like today’s episode or wanna leave us a comment, we have more information about longtermism and links to all our other shows at radioproject.org. And now back to the show.

Salima Hamirani: Welcome back to Making Contact. If you’re just tuning in, Amy, can you remind everybody what we’re talking about today?

Amy Gastelum: We are talking about longtermism.

Salima Hamirani: And what is longtermism?

Amy Gastelum: Longtermism is, some really wealthy people have an idea about what altruism is, or how to be the most altruistic, which is to prioritize the needs and wellbeing of some futuristic population of, like, humans, humanoids, or some kind of, like, AI beings, using, this is my favorite part, computers that are powered by stars in outer space.

Salima Hamirani: Right? And so in the second half of today’s show, we’re gonna talk about some of the possible problems with this viewpoint. Again, you’ll be hearing from Émile P. Torres, who is a philosopher and a reformed longtermist who’s written a lot about longtermism.

And so Amy, out of curiosity, from the little that you know so far, what is one of the problems with longtermism?

Amy Gastelum: Let me just compare it to another lens for looking at the world, right? Which would be reproductive justice. Reproductive justice is about, you know, caring for whole communities in the here and now. I think it centers children and families and, like, the wellbeing of children and pregnant and parenting people as sort of the lifeblood of the existing and future of all of our communities. And so if I think about that being kind of a moral framework, then compare that to longtermism. You know, there’s like an effort to be concerned about others, but it’s like everything is controlled. When we talk about power, everything is controlled by the elite few.

Salima Hamirani: Yeah, I, you know, I think you captured something really important when you said in the here and now. One thing is that longtermists don’t care about people who are alive today. And in fact, they believe that most of what you do in the short term for people who already exist doesn’t matter.

Émile P Torres: Part of it comes down to this sort of trick, a kind of numbers game trick.

The thing you need to pay attention to is expected value. And expected value is what you get when you assign a certain value to an outcome. And then you, you multiply that by the probability.

So what they would say is, okay, we’re talking about 10 to the 58 people who could exist in the future, according to our calculations, if we colonize space and create these huge computer simulations.

Salima Hamirani: And then you could argue, but what are the chances of us ever being able to do that? I mean, create trillions of simulated people in supercomputers around a star.

Émile P Torres: Well, okay, maybe it’s really small, you know, maybe it’s very small, but the payoff is so large that it pulls the expected value up. So the expected value could still be way higher than, for example, helping people in multidimensional poverty today. So crunch the numbers and suddenly longtermism looks, from this perspective, like a justified theory.
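To make the expected-value arithmetic Torres describes a bit more concrete, here is a minimal sketch in Python. The probabilities and payoffs are invented purely for illustration (they are not figures from the interview); the point is simply that a tiny probability multiplied by an astronomically large payoff can still dwarf the value of helping people alive today.

```python
# A minimal sketch of the expected-value reasoning described above.
# All numbers here are invented for illustration; they are not from the interview.

def expected_value(probability: float, people_affected: float) -> float:
    """Expected value = probability of the outcome times its value (here, lives affected)."""
    return probability * people_affected

# Helping people alive today: near-certain impact on roughly 1.3 billion people.
help_today = expected_value(probability=0.9, people_affected=1.3e9)

# The longtermist bet: a vanishingly small chance of bringing 10^58 future
# (possibly simulated) people into existence.
far_future_bet = expected_value(probability=1e-40, people_affected=1e58)

print(f"Expected value of helping people today: {help_today:.2e}")    # ~1.17e+09
print(f"Expected value of the far-future bet:   {far_future_bet:.2e}")  # 1.00e+18
# Even at odds of 1 in 10^40, the far-future figure dominates,
# which is the "numbers game trick" Torres describes.
```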

Amy Gastelum: It sounds like folks applying some really technical ideas and ways of thinking to systems that are human, I mean to flesh and blood and bones, in ways that seem really outlandish to me and disconnected from the earth and reality.

Salima Hamirani: Right. It’s just a magician’s hat trick. There are no people who exist in the future, and I don’t know how to break it to them, but this is probably never gonna happen. But because of this math trick, they’re just allowed to waste huge amounts of resources on, on nothing. I mean, absolutely nothing.

Amy Gastelum: I can see you’re irritated.

Salima Hamirani: Yeah, I mean, I think that’s why I wanted to create this piece, because there are a lot of advancements that happen in society because the rich and powerful are able to push their idea of the future onto us. But is this a vision of life on the earth or in space that most of us share?

Émile P Torres: So for example, Hilary Greaves and Will MacAskill argued in a paper that was defending this sort of radical longtermist view that we should essentially just ignore the consequences of our actions over the next hundred or maybe even thousand years.

Salima Hamirani: And by the way, this includes issues like global warming, because in their view, most likely someone will survive, namely the rich and the powerful and the global North. And they will one day repopulate. And the carbon will eventually be re-sequestered, even, even if that takes tens of thousands of years.

And I don’t know, Amy, if this sounds vaguely, uh, fascist that only the good people of the global North will survive… It is because there are some pretty dark roots to some of these beliefs.

Émile P Torres: There, there are aspects of longtermism, like transhumanism, that trace their lineage directly back to 20th century eugenicists, including eugenicists that held, uh, some very awful, deeply illiberal views, like Julian Huxley, who was in favor of, uh, forced sterilizations.

Salima Hamirani: And one of these days we should, you know, investigate the history of eugenics in Silicon Valley. But that does bring us to the next issue.

You remember how we said longtermists do have hobbies? Well…

Amy Gastelum: What are they, please?

Salima Hamirani: They’re really into artificial intelligence and creating it. And to understand why, we have to go back to something we talked about at the beginning of the show: an existential risk.

So, Amy, in your view, what is an existential risk?

Amy Gastelum: Things that could destroy all of humanity, I would say, you know, a big old asteroid or, like, global warming or a pandemic.

Salima Hamirani: Yeah. They’d agree with some of that, except the global warming. But they’re also really worried about, um, nuclear war.

Amy Gastelum: Yeah. Also that, yeah, I’ll agree with.

Salima Hamirani: Yeah, me too. And biotechnology or engineered plagues. But they’re really, really obsessed with the notion of artificial intelligence going rogue and killing us all. Which is funny because…

Amy Gastelum: because they’re making it.

Salima Hamirani: Exactly. But guess what their solution is to the dangers of technology like artificial intelligence.

Amy Gastelum: Tech.

Salima Hamirani: Yeah. More technology

Amy Gastelum: Yes.

Salima Hamirani: You guessed it. Yeah. They think if we build an artificial intelligence system soon enough in the right and most moral way before the bad people do, it could fix all of the world’s problems.

Émile P Torres: And it could go, okay, give me five minutes and I’ll use my super intelligent powers, thinking, you know, a million plus times faster than the human brain does. And I’ll come up with a, a solution.

If we create super intelligence before we create molecular nanotechnology, and if the super intelligence is built in a way that we can actually control it and it doesn’t just destroy us right away, then we would be able to mitigate the risk posed by advanced nanotechnology. We’d be able to mitigate the risk of thermonuclear conflict.

You know, maybe we’d get the super intelligence to immediately create a single world government.

Salima Hamirani: And I don’t know, Amy, I find it interesting that they think a computer intelligence can fix the world, but for some reason, all of the solutions that humans have come up with, those are meaningless. But you know, no one ever wants to listen to the poor.

Amy Gastelum: I mean, it has to do with power. We know that. So they’re gonna think 10 to the 58 in the future, which is a population that cannot hold them accountable.

Salima Hamirani: Yeah. And then turn to this God-like idea of artificial intelligence to supposedly fix everything. But what if the AI told them, hey, actually you need to redistribute wealth and pay reparations?

Amy Gastelum: What if it did that? How cool would that be?

Salima Hamirani: That’s a story we can write one day. The anti-capitalist super intelligence.

But you know, this obsession with technology is something that really worries Émile about longtermism.

Émile P Torres: I mean, even Nick Bostrom later on defined the concept of existential risk as any event that would prevent us from attaining a state of what he called technological maturity. And he defined technological maturity as a situation in which we have maximized economic productivity and are able to fully control natural processes. So, in other words, it’s this capitalistic draconian kind of fever dream of subjugating nature, maximizing economic productivity.

Amy Gastelum: It’s wild. This is wild. The only thing I can think of while I’m listening to that is, like, this is why we have poetry, you know? Sometimes I even wonder about why we have poetry, right? But like, this is why, because when you don’t have it, like, you just turn into this guy. Not Émile. I mean, he’s reformed or reforming or whatever, but it’s like, why would you wanna live in that world? You know?

Salima Hamirani: And I don’t know. That is very dark.

And Amy, we’re actually reaching the end of today’s show, and we are finishing on a kind of heavy note.

Amy Gastelum: I’m trying to stay afloat mentally, okay? Buoy me.

Salima Hamirani: You know, but I did sort of do that on purpose, because there are a lot of people with extreme power and wealth who, as I mentioned, believe in this philosophy.

Émile P Torres: Yeah, I mean, Elon Musk has described longtermism as, quote, a close match for my philosophy.

Salima Hamirani: And there are others. The UN recently used some longtermist phrases in a recent report. And there are other big names in tech and Silicon Valley. But Amy, and also to the audience, I guess I wanted us to think about whether or not we should just buy into the visions of the wealthy and the elite, and really question whether this is a future that we want.

And so, you know, to end, for everybody listening, I think one of the things that helps ground us in this moment is to remind us what your vision of the future is, and remind us of what your values are. So please email us. What have you learned from the movements that you’re part of?

And what do you think could help us survive? Or do you agree with longtermism? Let us know.

Amy Gastelum: What’s your vision?

Salima Hamirani: Exactly.

Amy Gastelum: I love that.

Salima Hamirani: I’m Salima Hamirani. Amy Gastelum joined me. As always, you can find out more about our shows at radioproject.org, and thanks for listening to Making Contact.

