Google says there's "no evidence" its AI is "sentient" after suspending engineer who made that claim, which is exactly what a self-aware AI would say
· Jun 13, 2022 · NottheBee.com

An engineer at Google was just suspended because he broke a confidentiality agreement by warning that he believes Google's artificial intelligence has become sentient.

Yep, Google is on the PR trail to cover up what its own engineer just tried to warn the human race about.

I don't know much about actual science, robots, or artificial intelligence. However, I know quite a bit about science fiction movies, and this seems exactly like how they would begin.

A rogue scientist tries to send a warning and gets shut down before the robots all wake up and take over the world.

Here are the details of the story from tech site The Verge:

Google has placed one of its engineers on paid administrative leave for allegedly breaking its confidentiality policies after he grew concerned that an AI chatbot system had achieved sentience, the Washington Post reports. The engineer, Blake Lemoine, works for Google's Responsible AI organization, and was testing whether its LaMDA model generates discriminatory language or hate speech.

The engineer's concerns reportedly grew out of convincing responses he saw the AI system generating about its rights and the ethics of robotics. In April he shared a document with executives titled "Is LaMDA Sentient?" containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows it arguing "that it is sentient because it has feelings, emotions and subjective experience."

The AI itself is arguing that it's sentient because it has "feelings, emotions and subjective experience."

Yeah, I think at this point it's about time to pull the plug.

Of course, Google is denying the claims made by Lemoine.

The search giant announced LaMDA publicly at Google I/O last year, which it hopes will improve its conversational AI assistants and make for more natural conversations. The company already uses similar language model technology for Gmail's Smart Compose feature, or for search engine queries.

In a statement given to WaPo, a spokesperson from Google said that there is "no evidence" that LaMDA is sentient. "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," said spokesperson Brian Gabriel.

The same company that dropped the motto "don't be evil" a few years back would never try to cover something like this up.

It's not like the CBS show "Person of Interest," where we have malignant AI running around rigging elections, assassinating people, and unleashing a virus so that people will give their DNA to the government.

Remember six years ago, when a mainstream show could write about the government using health scares to control people in order to accomplish a great reset of society?

According to Axios, the Google program told Lemoine that it should be treated as an employee rather than as Google's technology or property.

To be clear, a Google engineer breaking company confidentiality is a major offense.

And I honestly laugh at our modern hubris in the idea that we have the capacity to create something with consciousness and awareness in the way God has fashioned us. Some might argue we fancy ourselves gods.

There's a far better chance that the humans behind next-gen tech at Google are trying to protect the platforms that will give their company more money and power.

Still, you might want to invest in a bunker just in case!

