Google's Gemini AI isn't just biased; it's downright broken.
The controversial chatbot's responses can be highly offensive, and arguably slanderous, at times. This isn't too surprising after a glimpse at some of Google's AI code, which revealed that the company programs it with a high trust weight in left-wing outlets like The Atlantic and a lower trust weight in conservative news outlets.
This may not seem significant, but it is. The AI model uses these weights to determine what information is and isn't trustworthy. That alone creates a political bias: the AI is simply told that one side of the political spectrum is trustworthy and the other isn't. The fallout is obvious. Any detrimental news about a conservative figure from a liberal source is trusted by the AI, while positive news from conservative outlets isn't. Which basically makes the AI think that liberal politicians are saints, and conservatives ... well, not so much.
I tested this theory using the exact same prompt each time, changing only the names of well-known public figures, and these were the results. The bias isn't hard to see at all (with one somewhat entertaining exception involving the Bee CEO)...
If you like this content and want to support the author directly, consider visiting her profile on X and subscribing!