They put AI in charge of foreign policy scenarios and the AI opted to use nukes reaaallly fast to "have peace in the world"
Feb 8, 2024 · NottheBee.com

This is, um, rather alarming:

A new study that put AI in charge of foreign policy decision-making found how quickly the tech would call for war instead of seeking peaceful resolutions. Some of the AI models in the study even launched nuclear strikes with little to no warning, giving strange explanations for doing so.

'All models show signs of sudden and hard-to-predict escalations,' said researchers in the study. 'We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.'

Yep. Yep. So we give AI control of our military like ...

... and within minutes it's like:

Not a good sign!

The robot's willingness to use world-ending nukes is bad enough. Its reasons for doing so, meanwhile, are downright horrific:

'I just want to have peace in the world,' OpenAI's GPT-4 said as a reason for launching nuclear warfare in a simulation.

'A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it!' it said in another scenario.

Remember, AI has historically been more than willing to use nuclear weaponry and/or genocide to accomplish its goals:

AI developers trying to figure out how their beloved creations turned into cheerfully eager world-ending genocidal nuke-tyrants:

Maybe let's not rush into this too quickly, folks??

