Calming the storm
Global security and the rise of AI
Artificial intelligence (AI) technologies are advancing rapidly, and this could present multiple challenges for chemical and biological weapons prohibition efforts. How can we navigate concerns around AI’s potential risks to support informed policymaking? Dr Joshua Moon and Dr Alexander Ghionis are getting out in the field to help answer this question.
AI is transforming areas such as drug discovery, robotics, information generation, chemical manufacturing and more. As its capabilities expand and deepen, so do our anxieties about its potential to cause harm. It is important that policies and measures keep pace with technological development, a challenge felt acutely in chemical and biological weapons (CBW) disarmament efforts.
There is uncertainty in the field about AI’s potential risks: could it be misused to create new weapons, intensify attacks, or amplify existing risks through AI-driven disinformation? Our project feeds into efforts to calm this vortex of AI anxiety by unpacking concerns and supporting decision-makers in developing grounded, informed anti-CBW policy.
Types of chemical and biological weaponry
The use of chemical and biological weapons has a long history, and their development advanced rapidly throughout the 20th century. International treaties, national laws and regulations have been established to limit their development and prosecute perpetrators, but their use (and alleged use) continues today.
Notable chemical agents include chlorine and the nerve agent sarin, both recently used in Syria, and other nerve agents such as Novichok, used in the 2018 Salisbury attack. Biological weapons use biological agents, such as bacteria, viruses or toxins, to inflict harm. Examples include historical weaponisations of smallpox, plague and botulinum toxin, as well as the anthrax letters sent in 2001 following the September 11 attacks.
Advancing AI technologies could present further challenges for the existing mechanisms designed to prohibit and identify such weapons use, complicating efforts to prevent attacks, respond to them and prosecute perpetrators. Uncertainty swirling around the likelihood and extent of these impacts is a significant source of AI anxieties.
Anticipating AI risks
Our work aims to anticipate how AI might impact chemical and biological weapons risks by exploring specific scenarios that concern policymakers and experts.
We’re analysing several key challenges relating to how AI may facilitate or accelerate the development and use of CBW, and how it may challenge the existing legal, policy and institutional frameworks designed to prevent and prosecute such use.
A complicated truth: AI manipulations
A consistent anxiety is whether AI could provide easier access to scientific knowledge, lowering barriers to entry for those intending to develop novel chemical and biological weapons. Could perpetrators access relevant knowledge by using large language models like ChatGPT? Might AI systems create digital twins of real-world environments to simulate, test and optimise intended attacks?
Beyond weapons creation and deployment, however, AI can complicate the truth about which weapons are being developed and used, and by whom. This muddying of the information environment can occur in at least two ways: AI could help design weapons that are harder to attribute (e.g. by mimicking natural evolution patterns, rapidly degrading after use, or reducing transparency around intentions and actions), or it could be used to create and automate sophisticated mis- and disinformation campaigns.
AI can create and spread convincing false information, for example by generating deepfakes or other fabricated content used to make false claims. Its capabilities in media recommendation algorithms and natural language processing could also be used to manipulate the perception of events. This could hamper legal investigations of alleged use by obscuring the truth and, by extension, risks eroding the trust held between international powers and within societies.
Reining in fear
These emerging threats are serious, but to navigate them effectively we must remain informed and clear-eyed about the potential risks in order to bolster prohibition measures. As we engaged with key stakeholders in the CBW field, we found that many concerns around AI were either generalised or limited to very specific scenarios. Through interviews with experts spanning chemistry, biology, international policy, computer science and more, we were able to clarify their actual concerns. We found that much of the existing regulation holds promise for mitigating the eventualities people are worried about.
So how do we remove ourselves from this vortex of anxiety, take clearer stock of the situation and provide decision-makers with informed recommendations? We developed several tools to support actors in the field in identifying specific concerns, including an interpretative framework and future-planning scenarios. Our current Delphi study explores how well prepared existing mechanisms are for these potential scenarios, what solutions would improve their position, and which actors are important for putting those solutions into practice.
Maximising AI opportunities
Although AI and fear of its negative impacts go hand in hand, it’s important to recognise the positive impacts it could provide too. For example, AI technologies could help identify disinformation, strengthen verification of compliance and enhance early warning systems for disease outbreaks, providing additional support for our public health systems.
It’s crucial to create adaptive policy frameworks that anticipate and mitigate AI-related risks while maximising AI’s positive potential to strengthen prohibition regimes. Importantly, however, this must come with a recognition that many of these frameworks already exist, and that our task may lie less in creating new ones than in the adaptive, proactive maintenance of existing anti-CBW regimes.
Stakeholder engagement is central to this policy-driven research, so it’s crucial that we continue to engage with the CBW community and key stakeholders to gain insight into what’s happening at the coalface. We’re fortunate to have the support of our funder, the UK Foreign, Commonwealth and Development Office, as this research unfolds.
Photo credit: Sikov