Last week, Microsoft's search engine Bing briefly and apparently accidentally released its ChatGPT-integrated search feature.
The feature was taken down almost as fast as it went up, but you know how the internet works these days: someone grabbed a screen recording.
The Bing ChatGPT integration will be called the New Bing, and Microsoft is positioning it as an evolution of the search engine.
Instead of having to guess the right keywords to find what you're looking for, you'll get 1,000 characters to ask a question, complete with context and instructions. The AI will then return a detailed response with link citations, allowing you to further your research.
Microsoft says the new system can make detailed plans for you like travel itineraries, be creative and write poems or stories, or just sit and chat with you if that's what you want.
Google of course caught wind of Bing's blunder, and today Sundar Pichai, Google's CEO, wrote a blog post announcing Google will be releasing its own AI search feature based on its LaMDA AI system.
If LaMDA sounds familiar, that's because it is the system that allegedly claimed to be sentient and tried to lawyer up to protect its personhood.
Google search results should get real interesting.
According to Pichai, the new search feature will be called Bard.
However, while Google wants to roll out Bard sometime in the next few months, it does not seem to be in a hurry to beat Microsoft to the punch. As Pichai puts it:
"We continue to provide education and resources for our researchers, partner with governments and external organizations to develop standards and best practices, and work with communities and experts to make AI safe and useful."
It's definitely that, and not that they're still trying to root out any ghosts in the machine.
Either way, "Safe and useful" is of course code for pushing the woke agenda.
Google's published AI principles make that clear, though it leaves them room to bow out of that agenda in Muslim and communist countries as needed:
AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
On the other hand, Microsoft's champion ChatGPT has already shown that it's ready to sacrifice millions to the gods of wokeism.
Beat that, Google.
All jokes aside, the real issue here is that by the end of this year, not only will search results for things the tech oligarchs don't want you to see be shadow-banned, but there will also be an AI writing seemingly rational talking points for your friends and family to come back at you with.
And seeing as Google says it is working with government entities in designing those talking points, I imagine this government tool for correcting social media posts will get integrated somewhere in there as well: