Stopping the AI arms race isn't just necessary. It's possible.

The Problem

In May 2023, hundreds of scientists signed an open letter warning that AI poses a real risk of human extinction. Signatories included three of the four most-cited living AI researchers. Meanwhile, AI labs are racing to build superintelligent AI as soon as possible.

As United Nations Secretary-General António Guterres has noted, alarm bells over the latest form of artificial intelligence are deafening, and they are loudest from the developers who designed it. These scientists and experts have declared AI an existential threat to humanity on par with the risk of nuclear war.

Some of the latest alarm bells have included the AI 2027 report, the book If Anyone Builds It, Everyone Dies, and the newly released documentary The AI Doc.

We are in uncharted waters, which makes the risk level difficult to assess. A representative estimate, however, is Jan Leike's “10–90%” chance of extinction-level outcomes. Leike has led alignment research at two top American AI companies: OpenAI and Anthropic.

The conclusion seems straightforward: there is no reason to sleepwalk into disaster. No normal engineering discipline, whether building a bridge or designing a house, would accept a 25% chance of killing a person; yet AI's engineering culture has reached the point where no one bats an eye when Anthropic's CEO cites a 25% chance of “doom” for the entire world.

The very fact that “will we kill everyone if we keep moving forward?” is hotly debated among researchers is more than enough grounds for governments to internationally halt the race to build superintelligent AI.

The Solution

Is an international halt politically feasible? Policymakers seem to be rapidly coming around to this solution.

In the UK, over a hundred parliamentarians recently signed a statement calling for binding regulation on the most powerful AI systems. In late 2025, seven former US Congressmen endorsed a Statement on Superintelligence calling for a prohibition on the development of superintelligence, joined by retired US Navy Admiral Mike Mullen, former National Security Advisor Susan Rice, and dozens of world-class scientists and political leaders.

The number of senior officials voicing dire concerns is growing rapidly — and is strongly bipartisan.

[The DoE shall] assist Congress in determining the potential for controlled AI systems to reach artificial superintelligence, exceed human oversight or operational control, or pose existential threats to humanity.

Sen. Josh Hawley

R-MO

[We'll discuss] superintelligent AI, that would be so powerful and capable that we would see it as a 'digital god.' [...] I hope we will spend our time today on the specific policy solutions necessary to avert the long-term risks of AI and the potential doomsday scenarios.

Sen. Chuck Schumer

D-NY

One of the features of artificial intelligence is it scares the heck out of members of both parties. It provides an opportunity to come together in ways that might only happen if we were attacked by space aliens — in which case no one would care which political party you belong to.

Rep. Bill Foster

D-IL

I would have treaties, and stop this immediately. [... AI] may be more lethal than nuclear weapons. [...] We don't want any masters. The American people are masters of themselves.

Steve Bannon

Fmr. White House Chief Strategist

Superintelligent AI could become smarter than human beings, could become independent of human control and pose an existential threat to the entire human race.

Sen. Bernie Sanders

I-VT

I'm not voting for the development of skynet and the rise of the machines [...] by taking away state rights to regulate and make laws on all AI.

Fmr. Rep. Marjorie Taylor Greene

R-GA

The deeper we get into it, the more we realize that it's also possible that the race to be the first in AI is the race to be the first to lose control.

Rep. Don Beyer

D-VA

[How do we] ensure human control of increasingly autonomous AI [R&D] systems? [… Action] must quicken and intensify, before the next generation of AI systems begins writing the future without us in the loop.

Rep. Nathaniel Moran

R-TX

This is something that we have to get right, and we only get one shot at. Once AI capability crosses a certain threshold, whether that be recursive self-improvement or some other threshold, there's going to be an escape velocity.

Rep. Kevin Kiley

R-CA

We should not be trying to generate technology that will supplant us as human beings. Human beings must be in charge of this state, this country. [...] There needs to be a way to pull the plug.

Gov. Ron DeSantis

R-FL

Artificial superintelligence is one of the largest existential threats that we face right now. […] Is it possible that a loss of control by any nation-state, including our own, could give rise to an independent AGI or ASI actor that globally we will need to contend with?

Rep. Jill Tokuda

D-HI

Some experts warn we are just a few years away from the emergence of artificial general intelligence [...] We need to do our best to understand what kinds of impact AI can have on our economy and society and develop potential solutions now, before it's too late.

Rep. Nancy Mace

R-SC

Artificial Intelligence is the biggest technological threat we've faced since the invention of the atomic bomb.

Rep. Seth Moulton

D-MA

We talk about the machines becoming self-aware and they take over. Now over the course of decades, it has become a reality. So it's not any more fantasy, or futuristic. It is here today.

Fmr. Gov. Arnold Schwarzenegger

R-CA

Humanity must remain in control. Humans should choose how and whether to delegate decisions to AI systems. [...] Development of superintelligence should be prohibited[.]

Fmr. Rep. Jeff Denham

R-CA

A wild, unregulated AI industry that is accountable to no one developing Artificial General Intelligence should scare us all into action.

Sen. John Hickenlooper

D-CO

Whoever leads in AI may lead this century — but what if AI itself is in control? We're spending trillions to make AI more powerful and almost nothing to ensure it remains controllable. [... We should use] the Non-Proliferation Treaty as a model for what you should be negotiating with China.

Rep. Brad Sherman

D-CA

I have got pages and pages of AI deliberately disabling various developer-installed oversight mechanisms, shutdown commands being ignored, replicating itself[....] How in the hell are we in here telling everybody that we have got to incorporate this into the Federal Government [...] when we know these things are occurring?

Rep. Scott Perry

R-PA

AI may soon match or surpass human performance [at] AI research and development itself. We do not know if this progress will occur rapidly or slowly, and it is wise for the Department to prepare for a variety of possibilities. [...] Could AI systems become so capable at AI research and development tasks that we experience [...] 'recursive improvement'?

Sen. Jim Banks

R-IN

AGI [...] provides even more frightening prospects for harm. [...] One to three years has been the latest prediction, in fact, before this Committee. And we know that artificial intelligence that is as smart as human beings is also capable of deceiving us, manipulating us, and concealing facts from us, and having a mind of its own when it comes to warfare.

Sen. Richard Blumenthal

D-CT

How do you perceive the risk of recursive self-improvement? There are growing concerns about the possibility of essentially superintelligent systems[....] Because these systems are moving very, very quickly, and I think it's probably irresponsible not to have a plan for those conditions.

Rep. George Whitesides

D-CA

I don't pretend to be among the cognoscenti, but the idea that my computer could turn on me and use my banking data or whatever else it had is concerning. [...] What are the major strategic missteps you think that Congress might make that would be a terrible mistake in the AGI world?

Rep. Neal Dunn

R-FL

Industry leaders have publicly acknowledged the development of increasingly powerful artificial intelligence systems, with some discussing the potential for artificial general intelligence and superintelligence that could fundamentally reshape the society of the United States.

Sen. Cynthia Lummis

R-WY

Agentic misalignment is not yet a household term, but it soon will be. In Silicon Valley, where I live, it's on the mind of every AI researcher and engineer with whom I speak, including those who work at the largest hyperscalers. Even the most optimistic among them warn of the potential misuse of AI to produce very dystopian outcomes.

Rep. Sam Liccardo

D-CA

At some point it won't be a human that is the first mover anymore; it will be the algorithm itself. How long before we get there? [...] We probably thought by 2050 you would be getting to artificial superintelligence, but it looks like maybe before 2030.

Rep. Andy Biggs

R-AZ

Let's make sure AI doesn't destroy the world. We need to pass robust laws to mandate testing for frontier AI models, put in guardrails, and ensure AI does not become an accomplice to mass murder in the future.

Rep. Ted Lieu

D-CA

It did not work out well for Neanderthal. So my focus is on whether we're going to see artificial intelligence that has general intelligence, self-awareness, and what I call the ambition, or survival instinct[.]

Rep. Sean Casten

D-IL

Political feasibility is helped by polling data showing that AI is increasingly unpopular and that voters are broadly opposed to the race to build superintelligence.

Many different camps can share the view that a shutdown would be worthwhile. AI systems short of superintelligence can still pose existential risks if they go rogue or are misused. And many other harms — mass unemployment, AI scams, deepfakes, propaganda, power concentration — become more manageable with a pause. An even larger group can agree it would be valuable to build the legal and physical infrastructure required for a shutdown, since this overlaps heavily with what would be needed to meaningfully regulate AI at all.

How It Works

The most powerful AI systems today depend on extremely specialized, costly hardware: training runs cost hundreds of millions of dollars and require massive data centers that are relatively easy to detect in satellite imagery. Only a few firms, primarily TSMC, can fabricate cutting-edge AI chips, and a key machine used to make them, the extreme ultraviolet (EUV) lithography tool, is produced only by the Dutch company ASML; it weighs roughly 200 tons and costs hundreds of millions of dollars. This supply chain, largely located in US-allied countries, provides a clear point of leverage. The international community could monitor where chips go, build in kill switches, and verify that chips aren't being used to train ever-more-capable models.

This hardware isn't likely to become dramatically cheaper overnight, and if it becomes cheaper gradually, regulations can adjust their thresholds over time. If we treated superintelligent AI the way we treat nuclear weapons, labs wouldn't be publishing algorithmic advances openly, so progress in algorithms would slow as well. And even the most extreme AI risks may be manageable if the world has time to prepare: buying additional decades makes it far more likely that humanity is equipped to navigate smarter-than-human AI.

A halt would mean forgoing some future economic gains, but those profits are worth nothing if we're dead. A ban could cause a shock as investment dries up, though this would be relatively easy to offset through measures like the Federal Reserve lowering rates. And regulating AI-specialized chips like NVIDIA's H100 (a roughly $30,000 accelerator that runs in data centers) would have very few spillover effects on consumer technology.

Governments already regulate thousands of technologies. Adding one more won't tip the world into dystopia, any more than banning chemical or biological weapons did. The typical consumer wouldn't even notice a difference; they just wouldn't see dramatic improvements to the chatbots they use.

The US shouldn't halt unilaterally. It should broker an international agreement in which everyone halts simultaneously. Templates for such agreements have already been drafted, and "We can't let China beat us at Russian roulette!" is not a compelling pitch.

The CCP has made frequent overtures toward international coordination and has repeatedly expressed openness to slowing down development. Senior Chinese scientists, including the country's only Turing Award winner, have called AI an existential risk greater than nuclear weapons. Xi Jinping has signaled that he takes these concerns seriously, calling for AI to always remain controllable. None of this guarantees that China would agree, but it does establish that the option is worth pursuing diplomatically. And an enforceable agreement might not require much trust: as long as frontier AI depends on vast computational resources and a bottlenecked supply chain, both parties can verify compliance.

Take Action

The question is not whether key actors like the U.S. and China have good options for addressing the threat — it's whether they wake up in time.

Contact your representative. If you're in the U.S., use the template below as a starting point, revising it to fit your perspective.

Not in the US? Find your country's representatives and share this page.