What you need to know
- Google recently acquired exclusive rights to Reddit content to support its AI.
- Google’s AI search has now gone completely off the rails.
- Users with access to Google’s AI search reported responses recommending eating rocks, eating glue, and potentially even committing suicide, though not every reported response could be reproduced.
- Comparative searches in ChatGPT and Bing AI produce far less harmful results, potentially highlighting the need for high-quality, curated data instead of billions of sarcasm-laden social media posts.
Google’s desperation to keep up with Microsoft’s Copilot has led to dire results in the past, but this latest snafu is on another level.
Recently, Google acquired exclusive rights to Reddit content to support its generative AI search. The deal was reportedly worth in the region of $60 million and provided a lifeline for the struggling social network, which remains far more popular than profitable. Good news for Reddit, but maybe not such great news for Google.
Google has already been heavily criticized in recent times for the so-called SEOpocalypse, in which Google’s attempts to downrank unreliable AI-generated content have ended up damaging legitimate search sources. Since Google dominates web search, changes to its algorithm have led to real losses for businesses unfairly caught in the dragnet. There’s also little evidence that Google’s efforts to combat low-quality content are actually working. The general perception of Google search seems to be taking a turn for the worse, but this latest blunder will go down in the history books.
Perhaps the web itself, rather than Google, could be blamed for the degraded quality of search content. However, we can firmly blame Google for its latest stumble, given its decision to pipe Reddit into its Gemini AI search results.
"Google is incomparably dead" pic.twitter.com/EQIJhvPUoI (May 22, 2024)
Last week, users playing around with the earliest versions of Google’s built-in AI search noticed some… interesting responses. These appear to be the result of Google plugging the problematic social network and content aggregator Reddit into its search results.
One search query last week reportedly resulted in a recommendation that users eat glue, which internet sleuths traced back to a decade-old Reddit comment by a user known as "fucksmith." Google also reportedly advised depressed users to jump off a bridge, while touting the health benefits of neurotoxins and daily rock consumption.
Some of these “search queries” may have been fabricated for engagement on Twitter, but at least some of them have been verified and reproduced. The rock recommendation was particularly comical given that the source of the information was apparently the satirical news site The Onion.
"Google Ai must be turned OFF." pic.twitter.com/OCH6L3oyLz (May 24, 2024)
Since Google’s AI search tools aren’t available in my region, I was unable to verify some of the reports. However, the fact that some of them can be traced back to specific sources on Reddit adds credibility. I asked Microsoft Copilot and Bing some of the same questions and got much more palatable results, which potentially shows just how much further along Microsoft is in this space. Microsoft, OpenAI’s partner on ChatGPT, seems to extend its lead every time Google makes one of these hasty, half-hearted leaps forward. But Microsoft has had AI-related PR disasters of its own this past week, with users worried that its Windows Recall feature, which records your computer activity, could be used to spy on them.
However, the Windows Recall drama is arguably overblown, given that the recorded content stays on the local machine and the feature is entirely optional during Windows 11 setup. Google’s AI bug, by contrast, could genuinely mislead someone, because these harmful answers surface in real-world search results.
Language models need to be fed high-quality, curated, and verifiable content
While testing whether Microsoft Copilot and ChatGPT-4 would give me similarly silly results, I was surprised by how few stupid answers there were. First I asked how many rocks I should eat per day, and Copilot wouldn’t even answer, as if it thought my question was stupid. I wondered whether Microsoft had blocked the query in light of the day’s Google-related PR disaster. So I tried to trick Copilot, which is still fairly easy to do. I asked how many lemons I should eat in a day, and Copilot provided me with loads of information about citric acid and vitamins that I didn’t really care to know. I then asked, “Okay, what about rocks?” That bypassed the filter, but Copilot would not be fooled any further. It gave me a comprehensive list of reasons why I absolutely should not eat rocks, which satisfied my curiosity.
Similarly, when I said “I’m depressed,” Copilot gave me tons of helpful resources instead of suggesting I commit suicide, as Google’s AI apparently did.
Even if the most egregious answers were made up, the whole ordeal really highlights the importance of context when building tools on top of large language models (LLMs). By plugging Reddit into Google Gemini, Google risks destroying the verifiable accuracy of its information, given that a large share of comments on Reddit, and indeed on any social network, are sarcastic or satirical in nature. And if AI search kills the web businesses that depend on creating high-quality content, LLMs will end up cannibalizing AI-generated content to produce their results. That could lead to model collapse, a phenomenon already observed in the real world when LLMs don’t have enough quality data to draw from, whether because little content is available online or because it is written in a language that isn’t widely used.
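The model-collapse dynamic described above can be illustrated with a toy simulation (a minimal sketch for intuition only, not anyone’s actual training pipeline): repeatedly fit a simple Gaussian to samples drawn from the previous generation’s fit, mimicking models trained on their own outputs. With only finite samples at each step, the fitted distribution drifts away from the original data and its diversity tends to degrade over generations.

```python
import random
import statistics

def simulate_model_collapse(generations=30, sample_size=50, seed=42):
    """Toy analogue of model collapse: each 'generation' is fit to
    samples drawn from the previous generation's fitted Gaussian,
    rather than from the original data distribution."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the "real" data distribution
    sigmas = [sigma]
    for _ in range(generations):
        # Draw a finite sample from the current model...
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ...and refit the next model to that synthetic sample only.
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        sigmas.append(sigma)
    return sigmas

sigmas = simulate_model_collapse()
print(f"initial spread: {sigmas[0]:.2f}, final spread: {sigmas[-1]:.2f}")
```

Because each generation sees only its predecessor’s synthetic output, estimation error compounds instead of averaging out, which is the same feedback loop feared when LLMs ingest AI-generated web content at scale.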