Remember those AI-piloted F-16s we reported on previously?
Well, according to a report delivered by the United States Air Force to the Royal Aeronautical Society, one of them went full Skynet and killed its human handler.
At least it would have, if the exercise had not been a simulation.
While the bot had full autonomy in the exercise, a human handler had to confirm each strike on a SAM site with a go/no-go authorization.
The program's parameters awarded points for SAM sites destroyed, and the bot quickly figured out that its handler was keeping it from collecting more of those sweet, sweet points.
So, it did the most logical thing and removed the obstacle.
It circled back, targeted the USAF base where the handler was, and took him out. Then it was free to destroy as many SAM sites as its cold heart desired.
Col Tucker 'Cinco' Hamilton, the Chief of AI Test and Operations, USAF, said,
"We trained the system – 'Hey, don't kill the operator – that's bad. You're gonna lose points if you do that.'"
The AI considered this for a moment, and then circled back and destroyed the communications tower the handler used to relay the go/no go commands.
Then it sped off, blowing up SAM sites like a toddler with its fingers in its ears screaming, "I can't hear you!"
"You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI," said Hamilton.
How about we just watch a SciFi movie or two?
Who'd have thought James Cameron's Terminator would be our generation's Jeremiah, calling out to a people who refuse to listen?
"You will tell them all these things, but they won't listen to you. You will call out to them, but they won't answer you." (Jeremiah 7:27)
"Judgment Day is inevitable." — T-800 (Terminator 3)
The following note was later added to the Royal Aeronautical Society's report on the conference:
[In communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]