
Is AI progress slowing down?

Maybe — but it matters much less than you think.

Sam Altman speaks onstage during the New York Times Dealbook Summit 2024 at Jazz at Lincoln Center on December 4, 2024, in New York City. Getty Images for The New York Times
Kelsey Piper
Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

Last month, the tech outlet The Information reported that OpenAI and its competitors are switching strategies because the rate of AI improvement has dramatically slowed. For a long time, you’ve been able to make AI systems dramatically better across a wide range of tasks simply by making them bigger.

Why does this matter? All kinds of problems that were once believed to require elaborate custom solutions turned out to crumble in the face of greater scale. We have applications like OpenAI’s ChatGPT because of scaling laws. If that’s no longer true, then the future of AI development will look a lot different — and potentially a lot less optimistic — than the past.

This reporting was greeted with a chorus of “I told you so” from AI skeptics. (I’m not inclined to give them too much credit, as many of them have definitely predicted 20 of the last two AI slowdowns.) But getting a sense of how AI researchers felt about it was harder.

Over the last few weeks, I pressed some AI researchers in academia and industry on whether they thought The Information’s story captured a real dynamic — and if so, how it would change the future of AI.


The overall answer I’ve heard is that we should probably expect the impact of AI to grow, not shrink, over the next few years, regardless of whether naive scaling is indeed slowing down. That’s effectively because when it comes to AI, we already have an enormous amount of impact that’s just waiting to happen.

There are powerful systems already available that can do a lot of commercially valuable work — it’s just that no one has quite figured out many of the commercially valuable applications, let alone put them into practice.

It took decades for the internet to transform the world after its birth, and it might take decades for AI as well. (Maybe — many people on the cutting edge of this field are still very insistent that in only a few years, our world will be unrecognizable.)


The bottom line: If greater scale no longer gives us greater returns, that’s a big deal with serious implications for how the AI revolution will play out, but it’s not a reason to declare the AI revolution canceled.

Most people kind of hate AI while kind of underrating it

Here’s something those in the artificial intelligence bubble may not realize: AI is not a popular new technology, and it’s actually getting less popular over time.

I’ve written that I think it poses extreme risks, and many Americans agree with me; many people also dislike it in a much more mundane way.

Its most visible consequences so far are unpleasant and frustrating. Google Image results are full of awful low-quality AI slop instead of the cool and varied artwork that used to appear. Teachers can’t really assign take-home essays anymore because AI-written work is so widespread, while for their part many students have been wrongly accused of using AI when they didn’t because AI detection tools are actually terrible. Artists and writers are furious about the use of our work to train models that will then take our jobs.

A lot of this frustration is very justified. But I think there’s an unfortunate tendency to conflate “AI sucks” with “AI isn’t that useful.” The question “What is AI good for?” remains popular, even though the answer is that AI is already good for an enormous number of things, with new applications being developed at a breathtaking pace.

I think at times our frustration with AI slop and with the carelessness with which AI has been developed and deployed can spill over into underrating AI as a whole. A lot of people eagerly pounced on the news that OpenAI and competitors are struggling to make the next generation of models even better, and took it as proof that the AI wave was all hype and will be followed by bitter disappointment.

Two weeks later, OpenAI announced its latest generation of models, and sure enough, they’re better than ever. (One caveat: It’s hard to say how much of the improvement comes from scale as opposed to the many other possible sources of improvement, so this doesn’t mean that the initial Information reporting was wrong.)

Don’t let AI fool you

It’s fine to dislike AI. But it’s a bad idea to underrate it. And it’s a bad habit to take each hiccup, setback, limitation, or engineering challenge as reason to expect the AI transformation of our world to come to a halt — or even to slow down.

Instead, I think the better way to see it is this: At this point, an AI-driven transformation of our world is definitely going to happen. Even if models larger than today’s are never trained, existing technology is sufficient for large-scale disruptive change. And reasonably often, when a limitation crops up, it’s prematurely declared totally intractable … and then solved in short order.

After a few go-rounds of this particular dynamic, I’d like to see if we can cut it off at the pass. Yes, various technological challenges and limitations are real, and they prompt strategic changes at the large AI labs and shape how progress will play out in the future. No, the latest such challenge doesn’t mean that the AI wave is over.

AI is here to stay, and the response to it has to mature past wishing it would go away.

A version of this story originally appeared in the Future Perfect newsletter.
