Google’s AI Mocks Itself with Absurd Search Results

“Running with scissors is a cardio exercise that can increase your heart rate and require concentration and focus,” says Google’s new AI search feature. “Some say it can also improve your pores and give you strength.”

Google’s AI feature pulled this response from a website called Little Old Lady Comedy, which, as its name suggests, is a comedy blog. The gaffe is so absurd that it’s been circulating on social media, along with other obviously incorrect AI summaries on Google. Essentially, everyday users are now red teaming these products on social media.

In cybersecurity, some companies hire “red teams” – ethical hackers who attempt to breach their products as malicious actors would. If a red team finds a vulnerability, the company can fix it before the product ships. Google presumably conducted a form of red teaming before launching an AI feature on Google Search, which is estimated to handle billions of queries a day.

It’s surprising, then, that a well-resourced company like Google still ships products with evident flaws. Little wonder that mocking AI failures has become a meme, especially as AI grows more ubiquitous. We’ve seen misspellings from ChatGPT, video generators struggling to comprehend how humans eat spaghetti, and Grok AI news summaries on X that, like Google’s, don’t understand satire. These memes, however, may actually provide useful feedback for the companies developing and testing AI.

Despite the high-profile nature of these flaws, tech companies often downplay their impact.

“The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences,” Google told Truth Voices in an emailed statement. “We conducted extensive testing before launching this new experience, and will use these isolated examples as we continue to refine our systems overall.”

Not all users see the same AI results, and by the time a particularly bad AI suggestion goes viral, the issue is often already resolved. Recently, Google suggested that if the cheese won’t stick to your pizza, you could add about an eighth of a cup of glue to the sauce to “give it more tackiness.” The AI pulled this answer from an eleven-year-old Reddit comment from a user named “f––smith.”

Beyond being a huge blunder, it also suggests that AI content deals may be overvalued. Google, for example, has a $60 million contract with Reddit to license its content for AI model training. Reddit signed a similar deal with OpenAI last week, and Automattic properties WordPress.com and Tumblr are rumored to be in talks to sell data to Midjourney and OpenAI.

To Google’s credit, many errors that circulate on social media arise from unconventional searches designed to confuse the AI. Hopefully, no one seriously searches for “health benefits of running with scissors.” However, some mistakes are more severe. Science journalist Erin Ross posted on X that Google provided incorrect information about what to do if bitten by a rattlesnake.

Ross’s post, which garnered over 13,000 likes, shows that the AI recommended applying a tourniquet to the wound, cutting the wound, and sucking out the venom. According to the U.S. Forest Service, these are all things you should not do if bitten. Meanwhile, on Bluesky, the author T Kingfisher highlighted a post showing Google’s Gemini misidentifying a poisonous mushroom as a common white button mushroom – screenshots of the post have spread to other platforms as a warning.

When a bad AI response goes viral, the AI could become more confused by the new content that emerges as a result. On Wednesday, New York Times reporter Aric Toler posted a screenshot on X showing a query asking if a dog has ever played in the NHL. The AI’s response was yes – inexplicably calling Calgary Flames player Martin Pospisil a dog. Now, when searching the same query, the AI pulls up an article from the Daily Dot about how Google’s AI keeps thinking that dogs are playing sports. The AI is being fed its own mistakes, further confusing it.

This highlights the inherent problem of training large-scale AI models on the internet: sometimes, people on the internet lie. However, just like there’s no rule against a dog playing basketball, unfortunately, there’s no rule against big tech companies releasing flawed AI products.

As the saying goes: garbage in, garbage out.

Amanda Silberling
Amanda Silberling covers social media and consumer tech. She has written about internet culture for Polygon, MTV, Business Insider, NPR, and the AV Club, and she co-hosts Wow If True, a podcast about going viral. Previously, she was a grassroots organizer, museum educator, and film festival coordinator. Based in Philadelphia, she holds a B.A. in English from the University of Pennsylvania.
