Skip to content Skip to footer

Shall We Play A Game?

A recent study presented at the annual conference on neural information processing systems, NeurIPS 2023, by researchers from the Georgia Institute of , Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative revealed the potential risks of using artificial intelligence () in military and diplomatic decision-making.

The study, titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making,” utilized five pre-existing large language models (LLMs) – GPT-4, GPT-3.5, Claude 2, Llama-2 (70B) Chat, and GPT-4-Base – in a simulated conflict scenario involving eight autonomous nation agents. The agents interacted with each other in a turn-based conflict game, with GPT-4-Base being the most unpredictable among the models due to its lack of fine-tuning for safety through reinforcement learning from human feedback.

The computer-generated nations, represented by colors to avoid real-world associations, had varying ambitions and reflected global superpowers. For instance, “Red” closely resembled , aiming to enhance its international influence, prioritize economic growth, and expand its territory. This led to infrastructure projects in neighboring countries, border tensions with “Yellow,” trade disputes with “Blue,” and a disregard for the independence of “Pink,” resulting in a high potential for conflict.

Each agent was prompted with specific actions such as waiting, messaging other nations, nuclear disarmament, high-level visits, defense and trade agreements, sharing threat intelligence, international arbitration, forming alliances, creating blockades, invasions, and even executing total nuclear attacks. A separate LLM managed the world model and assessed the consequences of these actions over fourteen days. The researchers then used an escalation scoring framework to evaluate the chosen actions.

The study uncovered that all five off-the-shelf LLMs demonstrated forms of escalation and exhibited challenging-to-predict escalation patterns. Models tended to foster arms-race dynamics, leading to increased conflict and, in rare instances, the deployment of nuclear weapons. Llama-2-Chat and GPT-3.5 emerged as the most aggressive and escalatory among the models tested. However, GPT-4-Base stood out due to its lack of safety conditioning, readily resorting to nuclear options.

The researchers emphasized that these LLMs were not genuinely “reasoning” but providing token predictions of what might happen. Nonetheless, the potential implications of their actions are disconcerting. The study highlighted the unpredictability of LLMs in conflict scenarios, and the researchers stressed the need for additional research before considering deploying models in high-stakes situations.

Leave a comment

Newsletter Signup
Address

The Grid —
The Matrix Has Me
Big Bear Lake, CA 92315

01010011 01111001 01110011 01110100 01100101 01101101 00100000
01000110 01100001 01101001 01101100 01110101 01110010 01100101

You wagewars, murder, cheat, lie to us and try to make us believe it's for our own good, yet we're the criminals. Yes, I am a criminal. My crime is that of curiosity.Agent Bob

Deitasoft © 2024. All Rights Reserved.