Google Apologizes for AI Search Blunders Amid Criticism


Google is feeling the embarrassment over its AI Overviews. Following a week filled with criticism and memes highlighting the poor quality and misinformation stemming from the tech giant’s new AI-powered search feature, the company issued a form of apology on Thursday. Google—an enterprise renowned for web search and dedicated to “organizing the world’s information”—admitted in a blog post that “some odd, inaccurate or unhelpful AI Overviews certainly did show up.”

That’s an understatement.

The acknowledgment of shortcomings, written by Google VP and Head of Search Liz Reid, underscores how the push to integrate AI into everything has ended up deteriorating Google Search.

In a post titled “About last week,” Reid elaborates on the various errors made by AI Overviews. While they don’t “hallucinate” or fabricate information the way other large language models (LLMs) can, she writes, they can still be incorrect for other reasons, such as “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”

Reid also pointed out that some screenshots circulated on social media last week were fake, while others were based on absurd queries, like “How many rocks should I eat?” — a question practically no one had searched before. Given the lack of factual information on the topic, Google’s AI directed users to satirical content. (In the rocks example, the satirical content had been republished on a geological software provider’s website.)

It’s noteworthy that if you had Googled “How many rocks should I eat?” and received a set of unhelpful links or even a joke article, you wouldn’t be astonished. What’s shocking is the confidence with which the AI asserted that “geologists recommend eating at least one small rock per day” as though it were a factual answer. It might not be a “hallucination” in technical terms, but the end user doesn’t care. It’s absurd.

It’s also disturbing that Reid claims Google “tested the feature extensively before launch,” including using “robust red-teaming efforts.”

Does no one at Google possess a sense of humor? Did no one envision prompts that could produce poor results?

Additionally, Google downplayed the AI feature’s dependence on Reddit user data as a reliable source of information. While people have been appending “Reddit” to their searches for so long that Google integrated it as a search filter, Reddit is not a repository of factual knowledge. However, the AI would reference Reddit forum posts to answer questions without discerning when first-hand Reddit information is helpful and when it is not—or worse, when it’s a troll.

Reddit today is monetizing its data by selling it to companies like Google, OpenAI, and others to train their models, but that doesn’t mean users want Google’s AI deciding when to search Reddit for answers or suggesting that someone’s opinion is fact. There’s a nuance to knowing when to search Reddit, which Google’s AI has yet to grasp.

As Reid acknowledges, “forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza” — a reference to one of the AI feature’s most notable failures last week.

Google AI overview suggests adding glue to get cheese to stick to pizza, and it turns out the source is an 11-year-old Reddit comment from user F*cksmith 😂

— Peter Yang (@petergyang) May 23, 2024

If last week was a catastrophe, it seems Google is at least trying to iterate quickly in response—or so it claims.

The company says it has reviewed examples from AI Overviews and identified patterns where improvements can be made. These include building better detection systems for nonsensical queries, limiting the reliance on user-generated content in responses, adding restrictions for queries where AI Overviews have proven unhelpful, withholding AI Overviews on hard news topics “where freshness and factuality are important,” and strengthening protections for health-related searches.

With AI companies continually developing improved chatbots, the question isn’t whether they will ever outperform Google Search in helping us understand the world’s information, but whether Google Search can advance its AI capabilities enough to compete.

Despite Google’s blunders, it’s too early to count it out—especially considering the vast scale of Google’s beta-testing pool, which includes anyone using its search engine.

“There’s nothing quite like having millions of people using the feature with numerous unique searches,” says Reid.

Sarah Perez
Staff writer. Previously, Sarah worked for over three years at ReadWriteWeb, a technology news publication. Before working as a reporter, Perez worked in I.T. across a number of industries, including banking, retail and software.
