
In the past decade, the AI revolution has kicked into high gear.

Artificial intelligence is playing strategy games, writing news articles, folding proteins, and teaching grandmasters new moves in Go. AI systems determine what you’ll see in a Google search or in your Facebook News Feed. They are being developed to improve drone targeting and detect missiles.

But there’s another way the field of artificial intelligence has been transformed in the past 10 years: Concerns about the societal effects of artificial intelligence are now being taken much more seriously.

There are many possible reasons for that, of course, but one driving factor is the pace of progress in AI over the past decade. Ten years ago, many people felt confident in asserting that truly advanced AI, the kind that surpasses human capabilities across many domains, was centuries away. Now, that’s not so clear, and AI systems powerful enough to raise serious ethical questions are already among us.

For a better understanding of why AI poses an increasingly significant — and potentially existential — threat to humanity, check out Future Perfect’s coverage below.


  • Is AI really thinking and reasoning — or just pretending to?

    Drew Shannon for Vox

    The AI world is moving so fast that it’s easy to get lost amid the flurry of shiny new products. OpenAI announces one, then the Chinese startup DeepSeek releases one, then OpenAI immediately puts out another one. Each is important, but focus too much on any one of them and you’ll miss the really big story of the past six months.

    The big story is: AI companies now claim that their models are capable of genuine reasoning — the type of thinking you and I do when we want to solve a problem.

    Read Article >
  • Sigal Samuel

    AI companies are trying to build god. Shouldn’t they get our permission first?

    Getty Images

    AI companies are on a mission to radically change our world. They’re working on building machines that could outstrip human intelligence and unleash a dramatic economic transformation on us all.

    Sam Altman, the CEO of ChatGPT-maker OpenAI, has basically told us he’s trying to build a god — or “magic intelligence in the sky,” as he puts it. OpenAI’s official term for this is artificial general intelligence, or AGI. Altman says not only that AGI will “break capitalism” but also that it’s “probably the greatest threat to the continued existence of humanity.”

    Read Article >
  • Sigal Samuel, Kelsey Piper and 1 more

    California’s governor has vetoed a historic AI safety bill

    California Gov. Gavin Newsom speaks during a press conference with the California Highway Patrol announcing new efforts to boost public safety in the East Bay, in Oakland, California, July 11, 2024.
    Stephen Lam/San Francisco Chronicle via Getty Images

    Advocates said it would be a modest law setting “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents argued it was a dangerous and arrogant step that would “stifle innovation.”

    In any event, SB 1047 — California state Sen. Scott Wiener’s proposal to regulate advanced AI models offered by companies doing business in the state — is now kaput, vetoed by Gov. Gavin Newsom. The proposal had garnered wide support in the legislature, passing the California State Assembly by a margin of 48 to 16 in August. Back in May, it passed the Senate by 32 to 1.

    Read Article >
  • Sigal Samuel

    OpenAI as we knew it is dead

    Sam Altman.
    Aaron Schwartz/Xinhua via Getty Images

    OpenAI, the company that brought you ChatGPT, just sold you out.

    Since its founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They’ve touted the company’s unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe.

    Read Article >
  • Sigal Samuel

    The new follow-up to ChatGPT is scarily good at deception

    Marharyta Pavliuk/Getty Images

    OpenAI, the company that brought you ChatGPT, is trying something different. Its newly released AI system isn’t just designed to spit out quick answers to your questions; it’s designed to “think” or “reason” before responding.

    The result is a product — officially called o1 but nicknamed Strawberry — that can solve tricky logic puzzles, ace math tests, and write code for new video games. All of which is pretty cool.

    Read Article >
  • Sigal Samuel

    People are falling in love with — and getting addicted to — AI voices

    Getty Images

    “This is our last day together.”

    It’s something you might say to a lover as a whirlwind romance comes to an end. But could you ever imagine saying it to… software?

    Read Article >
  • Sigal Samuel

    It’s practically impossible to run a big AI company ethically

    AWS re:Invent 2023
    Getty Images for Amazon Web Services

    Anthropic was supposed to be the good AI company. The ethical one. The safe one.

    It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly.

    Read Article >
  • Sigal Samuel

    Traveling this summer? Maybe don’t let the airport scan your face.

    Passengers enter the departure hall through face recognition at Xiaoshan International Airport in China in 2022.
    Future Publishing via Getty Images

    Here’s something I’m embarrassed to admit: Even though I’ve been reporting on the problems with facial recognition for half a dozen years, I have allowed my face to be scanned at airports. Not once. Not twice. Many times.

    There are lots of reasons for that. For one thing, traveling is stressful. I feel time pressure to make it to my gate quickly and social pressure not to hold up long lines. (This alone makes it feel like I’m not truly consenting to the face scans so much as being coerced into them.) Plus, I’m always getting “randomly selected” for additional screenings, maybe because of my Middle Eastern background. So I get nervous about doing anything that might lead to extra delays or interrogations.

    Read Article >
  • Sigal Samuel

    OpenAI insiders are demanding a “right to warn” the public 

    Sam Altman, CEO of OpenAI.
    David Paul Morris/Bloomberg via Getty Images

    Employees from some of the world’s leading AI companies published an unusual proposal on Tuesday, demanding that the companies grant them “a right to warn about advanced artificial intelligence.”

    Whom do they want to warn? You. The public. Anyone who will listen.

    Read Article >
  • Sigal Samuel

    The double sexism of ChatGPT’s flirty “Her” voice

    Scarlett Johansson attends the Clooney Foundation for Justice’s 2023 Albie Awards on September 28, 2023, in New York City.
    Getty Images

    If a guy told you his favorite sci-fi movie is Her, then released an AI chatbot with a voice that sounds uncannily like the voice from Her, then tweeted the single word “her” moments after the release… what would you conclude?

    It’s reasonable to conclude that the AI’s voice is heavily inspired by Her.

    Read Article >
  • Sigal Samuel

    “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

    Sam Altman is the CEO of ChatGPT maker OpenAI, which has been losing its most safety-focused researchers.
    Joel Saget/AFP via Getty Images

    Editor’s note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman’s tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

    For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

    Read Article >
  • Sigal Samuel

    Some say AI will make war more humane. Israel’s war in Gaza shows the opposite.

    A December 2023 photo shows a Palestinian girl injured as a result of the Israeli bombing of Khan Yunis in the southern Gaza Strip.
    Saher Alghorra/Middle East Images/AFP via Getty Images

    Israel has reportedly been using AI to guide its war in Gaza — and treating its decisions almost as gospel. In fact, one of the AI systems being used is literally called “The Gospel.”

    According to a major investigation published last month by the Israeli outlet +972 Magazine, Israel has been relying on AI to decide whom to target for killing, with humans playing an alarmingly small role in the decision-making, especially in the early stages of the war. The investigation, which builds on a previous exposé by the same outlet, describes three AI systems working in concert.

    Read Article >
  • Sigal Samuel

    Elon Musk wants to merge humans with AI. How many brains will be damaged along the way?

    Xinmei Liu for Vox

    Of all Elon Musk’s exploits — the Tesla cars, the SpaceX rockets, the Twitter takeover, the plans to colonize Mars — his secretive brain chip company Neuralink may be the most dangerous.

    What is Neuralink for? In the short term, it’s for helping people with paralysis — people like Noland Arbaugh, a 29-year-old who demonstrated in a livestream this week that he can now move a computer cursor using just the power of his mind after becoming the first patient to receive a Neuralink implant.

    Read Article >
  • Adam Clark Estes

    How copyright lawsuits could kill OpenAI

    Police officers stand outside the New York Times headquarters in New York City.
    Drew Angerer/Getty Images

    If you’re old enough to remember watching the hit kids’ show Animaniacs, you probably remember Napster, too. The peer-to-peer file-sharing site, which made it easy to download music for free in an era before Spotify and Apple Music, took college campuses by storm in the late 1990s. This did not escape the notice of the record companies, and in 2001, a federal court ruled that Napster was liable for copyright infringement. The content producers fought back against the technology platform and won.

    But that was 2001 — before the iPhone, before YouTube, and before generative AI. This generation’s big copyright battle is pitting journalists against artificially intelligent software that has learned from and can regurgitate their reporting.

    Read Article >
  • Pranav Dixit

    There are too many chatbots

    Paige Vickers/Vox; Getty Images

    On Wednesday, OpenAI announced an online storefront called the GPT Store that lets people share custom versions of ChatGPT. It’s like an app store for chatbots, except that unlike the apps on your phone, these chatbots can be created by almost anyone with a few simple text prompts.

    Over the past couple of months, people have created more than 3 million chatbots thanks to the GPT creation tool OpenAI announced in November. At launch, for example, the store features a chatbot that builds websites for you, and a chatbot that searches through a massive database of academic papers. And like the developers for smartphone app stores, the creators of these new chatbots can make money based on how many people use their product. The store is only available to paying ChatGPT subscribers for now, and OpenAI says it will soon start sharing revenue with the chatbot makers.

    Read Article >
  • Adam Clark Estes

    You thought 2023 was a big year for AI? Buckle up.

    2024 will be the biggest election year in history.
    Moor Studio/Getty Images

    Every new year brings with it a gaggle of writers, analysts, and gamblers trying to tell the future. When it comes to tech news, that used to amount to some bloggers guessing what the new iPhone would look like. But in 2024, the technology most people are talking about is not a gadget, but rather an alternate future, one that Silicon Valley insiders say is inevitable. This future is powered by artificial intelligence, and lots of people are predicting that it’s going to be inescapable in the months to come.

    That AI will be ascendant is not the only big prediction experts are making for next year. I’ve spent the past couple of days reading every list of predictions I can get my hands on, including this very good one from my colleagues at Future Perfect. A few big things show up on most of them: social media’s continued fragmentation, Apple’s mixed-reality goggles, spaceships, and of course AI. What’s interesting to me is that AI also seems to link all these things together in much the same way that the rise of the internet basically connected all of the big predictions of 2004.

    Read Article >
  • Sigal Samuel

    OpenAI’s board may have been right to fire Sam Altman — and to rehire him, too

    Sam Altman, the poster boy for AI, was ousted from his company OpenAI.
    Andrew Caballero-Reynolds/AFP via Getty Images

    The seismic shake-up at OpenAI — involving the firing and, ultimately, the reinstatement of CEO Sam Altman — came as a shock to almost everyone. But the truth is, the company was probably always going to reach a breaking point. It was built on a fault line so deep and unstable that eventually, stability would give way to chaos.

    That fault line was OpenAI’s dual mission: to build AI that’s smarter than humanity, while also making sure that AI would be safe and beneficial to humanity. There’s an inherent tension between those goals because advanced AI could harm humans in a variety of ways, from entrenching bias to enabling bioterrorism. Now, the tension in OpenAI’s mandate appears to have helped precipitate the tech industry’s biggest earthquake in decades.

    Read Article >
  • Sigal Samuel

    AI that’s smarter than humans? Americans say a firm “no thank you.”

    Sam Altman, CEO of OpenAI, the company that made ChatGPT. For Altman, the chatbot is just a stepping stone on the way to artificial general intelligence.
    SeongJoon Cho/Bloomberg via Getty Images

    Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?

    Americans, by and large, don’t want it.

    Read Article >
  • Sara Morrison

    Google’s free AI isn’t just for search anymore

    Google’s new Bard extensions might get more eyes on its generative AI offerings.
    Leon Neal/Getty Images

    The buzz around consumer generative AI has died down since its early 2023 peak, but Google and Microsoft’s battle for AI supremacy may be heating up again.

    Both companies are releasing updates to their AI products this week. Google’s additions to Bard, its generative AI tool, are live now (but just for English speakers for the time being). They include the ability to integrate Bard into Google apps and use it across any or all of them. Microsoft is set to announce AI innovations on Thursday, though it hasn’t said much more than that.

    Read Article >
  • Sigal Samuel

    What normal Americans — not AI companies — want for AI

    Getty Images

    Five months ago, when I published a big piece laying out the case for slowing down AI, it wasn’t exactly mainstream to say that we should pump the brakes on this technology. Within the tech industry, it was practically taboo.

    OpenAI CEO Sam Altman has argued that Americans would be foolish to slow down OpenAI’s progress. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he told the Atlantic. Microsoft’s Brad Smith has likewise argued that we can’t afford to slow down lest China race ahead on AI.

    Read Article >
  • Sara Morrison

    Biden sure seems serious about not letting AI get out of control

    President Biden is trying to make sure AI companies are being as safe and responsible as they say they are.
    Fatih Aktas/Anadolu Agency via Getty Images

    In its continuing efforts to try to do something about the barely regulated, potentially world-changing generative AI wave, the Biden administration announced today that seven AI companies have committed to developing products that are safe, secure, and trustworthy.

    Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI are the companies making this voluntary commitment, which doesn’t come with any government monitoring or enforcement provisions to ensure that companies keep up their end of the bargain, or to punish them if they don’t. It shows that the government is aware of its responsibility to protect citizens from potentially dangerous technology, as well as the limits on what it can actually do.

    Read Article >
  • Sigal Samuel

    AI is a “tragedy of the commons.” We’ve got solutions for that.

    OpenAI CEO Sam Altman speaks at an event in Tokyo in June 2023.
    Tomohiro Ohsumi/Getty Images

    You’ve probably heard AI progress described as a classic “arms race.” The basic logic is that if you don’t race forward on making advanced AI, someone else will — probably someone more reckless and less safety-conscious. So, better that you should build a superintelligent machine than let the other guy cross the finish line first! (In American discussions, the other guy is usually China.)

    But as I’ve written before, this isn’t an accurate portrayal of the AI situation. There’s no one “finish line,” because AI is not just one thing with one purpose, like the atomic bomb; it’s a more general-purpose technology, like electricity. Plus, if your lab takes the time to iron out some AI safety issues, other labs may take those improvements on board, which would benefit everyone.

    Read Article >
  • Aja Romano

    No, AI can’t tell the future

    AI oracles are all the rage on TikTok.
    John Lund/Getty Images

    Can an AI predict your fate? Can it read your life and draw trenchant conclusions about who you are?

    Hordes of people on TikTok and Snapchat seem to think so. They’ve started using AI filters as fortunetellers and fate predictors, divining everything from the age of their crush to whether their marriage is meant to last.

    Read Article >
  • Kelsey Piper

    Four different ways of understanding AI — and its risks

    Sam Altman, CEO of OpenAI, testifies in Washington, DC, on May 16, 2023.
    Aaron Schwartz/Xinhua via Getty Images

    I sometimes think of there being two major divides in the world of artificial intelligence. One, of course, is whether the researchers working on advanced AI systems, in everything from medicine to science, are going to bring about catastrophe.

    But the other one — which may be more important — is whether artificial intelligence is a big deal or another ultimately trivial piece of tech that we’ve somehow developed a societal obsession over. So we have some improved chatbots, goes the skeptical perspective. That won’t end our world — but neither will it vastly improve it.

    Read Article >
  • A.W. Ohlheiser

    AI automated discrimination. Here’s how to spot it.

    Xia Gordon for Vox and Capital B

    Part of the discrimination issue of The Highlight. This story was produced in partnership with Capital B.

    Say a computer and a human were pitted against each other in a battle for neutrality. Who do you think would win? Plenty of people would bet on the machine. But this is the wrong question.

    Read Article >