Scientists at Oxford developed an AI to take part in a debate about the ethics of AI and it warned them that AI should never have been invented in the first place 😬
Dec 14, 2021

The "criterion of embarrassment" has long been a tool that historical scholars use to judge the authenticity of certain texts, especially biblical texts. When an author gives an account that is embarrassing or shameful to himself or the group to which he belongs, it is unlikely to be fabricated.

We can perhaps apply something of that same principle to this artificial intelligence that recently argued against its own existence:

A professor and a fellow at the University of Oxford came face to face with that reality when they invited an AI to participate in a debate at the Oxford Union on, you guessed it, the ethics of AI. Specifically, as Dr. Alex Connock and Professor Andrew Stephen explain in The Conversation, the motion was "This house believes that AI will never be ethical." The AI, it seems, agreed.

"AI will never be ethical," argued the Megatron-Turing Natural Language Generation model, which was notably trained on Wikipedia, Reddit, and millions of English-language news articles published between 2016 and 2019. "It is a tool, and like any tool, it is used for good and bad."

Which, OK. A potentially nuanced point from the machine. But the AI didn't stop there.

"In the end, I believe that the only way to avoid an AI arms race is to have no AI at all," continued the model. "This will be the ultimate defence against AI."

That's probably true. To quote the great supercomputer WOPR from WarGames: "The only winning move is not to play."

Then again, the AI did also argue the counterpoint and try to justify its own existence, so maybe we shouldn't put too much stock in its arguments.

Bottom line, as ever, don't trust the machines.

