
How smart will AI get? Ajeya Cotra has an answer.

Predicting the pace of AI intelligence is the first step to knowing what we should do about it.

Rebecca Clarke for Vox

Sigal Samuel
Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at The Atlantic.

Let’s say you have hundreds of millions of dollars. You want to help the world as much as possible. How do you know which causes to spend money on and how much to give?

This is exactly the situation that major charitable organizations, like Open Philanthropy, find themselves in. Should they prioritize saving kids from malaria? Preventing a manmade pandemic? What about runaway AI?

Ajeya Cotra, a senior research analyst at Open Phil, works on answering questions like these. Her investigations into specific causes, as well as her meta-investigations into how we can even think through such hard questions, are refreshingly nuanced and thoughtful.

AI risk is the specific cause that Cotra has devoted most of her time to thinking about lately. In 2020, she put out a report that aimed to forecast when we’ll most likely see the emergence of transformative AI (think: powerful enough to spark a major shift like the Industrial Revolution). The question of AI timelines is crucial for figuring out how much funding we should spend on mitigating risks from AI versus other causes — the closer transformative AI is to happening, the more pressing the need to invest in safety measures becomes.

Cotra came up with a way to estimate what might seem unknowable. She uses the amount of computation the human brain performs as a benchmark for how much computation we’d need to train an AI that could perform as well as a human. Using this “biological anchor,” she arrived at 2050 as her median estimate for the arrival of transformative AI.
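
To give a flavor of the approach, here is a minimal sketch of one such anchor, the “lifetime” anchor, which asks how much total computation a human brain performs while growing up. The round numbers and helper names below are illustrative assumptions for this sketch, not figures from Cotra’s report, which works with full probability distributions rather than point estimates.

```python
# Illustrative sketch of a "biological anchor" calculation.
# All numbers are round, hypothetical values chosen for illustration.

BRAIN_FLOP_PER_SECOND = 1e15  # assumed order-of-magnitude estimate of brain compute
SECONDS_PER_YEAR = 3.15e7
DEVELOPMENT_YEARS = 30        # assumed "training" period: roughly a human lifetime to adulthood

def lifetime_anchor_flop(brain_flops: float = BRAIN_FLOP_PER_SECOND,
                         years: float = DEVELOPMENT_YEARS) -> float:
    """Total computation a brain performs while 'training' up to adult-level ability."""
    return brain_flops * years * SECONDS_PER_YEAR

if __name__ == "__main__":
    total = lifetime_anchor_flop()
    print(f"Lifetime anchor: ~{total:.1e} FLOP")  # ~1e24 FLOP with these inputs
```

Roughly speaking, comparing an estimate like this against projections of how much training compute AI developers will be able to afford is what turns an anchor into a timeline.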

This year, though, she updated her timelines in light of the recent explosion in AI development. Her new median estimate is that transformative AI will emerge by 2040, turning what had seemed like a long-term risk into one that could be right around the corner.

There’s plenty of room to debate whether Cotra’s biological anchor approach is the right one. But if she’s even anywhere in the ballpark, that’s a pretty eye-popping estimate.

And it’s got practical implications. “This update should also theoretically translate into a belief that we should allocate more money to AI risk over other areas such as bio risk,” Cotra writes, “[and] to be more forceful and less sheepish about expressing urgency when ... trying to recruit particular people to work on AI safety or policy.”

Beyond helping us think through specific causes like AI, Cotra has offered a way to think through the meta question of how to allocate resources between different causes. She calls it “worldview diversification.”

In a nutshell, it says that we shouldn’t just divvy up resources based on how many beneficiaries each cause claims to have. If we did that, we’d always prioritize longtermist causes because things that will shape the far future will affect the hundreds of billions who may live, not just the 8 billion alive today. Instead, we should acknowledge that there are different worldviews (some that prioritize current problems like, say, malaria, and some that are longtermist) and that each might have something useful to offer. Then we should divvy up our budget among them based on our credence — how plausible we find each one.
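
As a toy illustration of what a credence-weighted split looks like in practice (the worldview names, credences, and budget below are hypothetical placeholders, not Open Phil’s actual figures):

```python
# Toy credence-weighted budget allocation. Worldviews, credences,
# and the budget are hypothetical, not Open Phil's actual numbers.

budget = 100_000_000  # hypothetical $100M to allocate

credences = {  # how plausible we find each worldview
    "near-termist (e.g., global health)": 0.5,
    "longtermist (e.g., AI risk)": 0.3,
    "animal welfare": 0.2,
}

# Normalize in case credences don't sum exactly to 1, then split the budget.
total_credence = sum(credences.values())
allocation = {view: budget * c / total_credence for view, c in credences.items()}

for view, dollars in allocation.items():
    print(f"{view}: ${dollars:,.0f}")
```

The point of splitting this way is that no single worldview’s expected-value math gets to swallow the whole budget, even when its calculation claims the most beneficiaries.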

This approach, which Open Phil has been gravitating toward in practice, has clear advantages over an approach that’s based only on calculating which cause has the most beneficiaries. The charity world is better for Cotra having articulated that.
