Artificial Intelligence Would Raise the Risks of Nuclear War
May 4, 2018
Patrick Tucker / Defense One & Edward Geist and Andrew J. Lohn / RAND Corp
A new RAND report says ideas like mutually assured destruction and minimal deterrence strategy offer a lot less assurance in the age of intelligent software. Advances in AI have provoked a new kind of arms race among nuclear powers. This technology could challenge the basic rules of nuclear deterrence and lead to catastrophic miscalculations.
Experts Say AI Could Raise the Risks of Nuclear War
Patrick Tucker / Defense One
(April 24, 2018) -- Artificial intelligence could destabilize the delicate balance of nuclear deterrence, inching the world closer to catastrophe, according to a working group of experts convened by RAND. New smarter, faster intelligence analysis from AI agents, combined with more sensor and open-source data, could convince countries that their nuclear capability is increasingly vulnerable.
That may cause them to take more drastic steps to keep up with the US. Another worrying scenario: commanders could make decisions to launch strikes based on advice from AI assistants that have been fed wrong information.
Last May and June, RAND convened a series of workshops, bringing together experts from nuclear security, artificial intelligence, government, and industry.
The workshops produced a report, released on Tuesday, that underlines how AI promises to rapidly improve Country A's ability to target Country B's nuclear weapons. And that may lead Country B to radically rethink the risks and rewards of acquiring more nuclear weapons or even launching a first strike.
"Even if AI only modestly improves the ability to integrate data about the disposition of enemy missiles, it might substantially undermine a state's sense of security and undermine crisis stability," the report said.
North Korea, China, and Russia use mobile launchers (even elaborate tunnel networks) to position ICBMs rapidly for strike. The US would have less than 15 minutes of warning before a North Korean launch, Joint Chiefs of Staff Vice Chairman Gen. Paul Selva told reporters in January.
If US analysts could harness big data and AI to better predict the location of those launchers, North Korea might conclude that it needs more of them. Or Russia might decide that it needs nuclear weapons that are harder to detect, such as the autonomous Status-6 torpedo.
"It is extremely technically challenging for a state to develop the ability to locate and target all enemy nuclear-weapon launchers, but such an ability also yields an immense strategic advantage," the report said. "The tracking and targeting system needs only to be perceived as capable to be destabilizing. A capability that is nearly effective might be even more dangerous than one that already works."
Such a capability might employ drones with next-generation sensors, which "could enable the development of strategically destabilizing threats to the survivability of mobile ICBM launchers but also offer some hope that arms control could help forestall threats."
The workshop also explored how commanders might use artificially intelligent decision aids when making judgment calls about nuclear strikes. Such aids might help commanders make much better-informed decisions -- or, if penetrated and fed malicious data by an adversary, catastrophically wrong ones.
Absent some means to better verify the validity of data inputs -- an ongoing project at the Defense Advanced Research Projects Agency and a key concern of the CIA -- and a better understanding of enemy intent, adversaries could turn the vast US intelligence collection and digestion tools against them, especially as those tools work faster and more efficiently. In other words, fake news, combined with AI, just might bring about World War III.
Patrick Tucker is technology editor for Defense One. He's also the author of The Naked Future: What Happens in a World That Anticipates Your Every Move? (Current, 2014).
How Artificial Intelligence Could Increase the Risk of Nuclear War
Edward Geist and Andrew J. Lohn / RAND Corp.
Advances in artificial intelligence (AI) are enabling previously infeasible capabilities, potentially destabilizing the delicate balances that have forestalled nuclear war since 1945. Will these developments upset the nuclear strategic balance, and, if so, for better or for worse?
To start to address this question, RAND researchers held a series of workshops that were attended by prominent experts on AI and nuclear security. The workshops examined the impact of advanced computing on nuclear security through 2040.
The culmination of those workshops, this Perspective -- one of a series that examines critical security challenges in 2040 -- places the intersection of AI and nuclear war in historical context and characterizes the range of expert opinions.
It then describes the types of anticipated concerns and benefits through two illustrative examples: AI for detection, tracking, and targeting; and AI as a trusted adviser in escalation decisions.
In view of the capabilities that AI may be expected to enable and how adversaries may perceive them, AI has the potential to exacerbate emerging challenges to nuclear strategic stability by the year 2040 even with only modest rates of technical progress. Thus, it is important to understand how this might happen and to assure that it does not.
Posted in accordance with Title 17, Section 107, US Code, for noncommercial, educational purposes.