If you’re looking for a delightful distraction during your workday, try this simple yet entertaining trick: head to Google and enter a made-up phrase followed by the word “meaning.” What you’ll find is nothing short of fascinating. Google’s AI Overviews will not only validate your nonsensical phrase as a legitimate saying but will also provide an explanation of its supposed meaning and origin.
The results can be genuinely amusing, leading to countless examples shared across social media platforms. For instance, the whimsical phrase “a loose dog won't surf” is interpreted as “a playful way of saying that something is not likely to happen or that something is not going to work out.” Similarly, the invented idiom “wired is as wired does” is described as meaning that “someone's behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ akin to how a computer's functionality is determined by its physical connections.”
This all sounds remarkably plausible, delivered with a confidence that can be convincing. In many cases, Google even includes reference links to bolster the perceived authority of these explanations. However, it’s essential to recognize that this feature can be misleading. The AI Overviews create the impression that these phrases are widely accepted, when in fact, they are simply a collection of random words strung together.
One notable example is the phrase “never throw a poodle at a pig,” which the AI mistakenly categorizes as a proverb with biblical roots. Such instances highlight the limitations of generative AI. As a disclaimer at the bottom of each AI Overview notes, Google uses “experimental” generative AI to produce these results. While generative AI is a powerful tool with numerous practical applications, it also has inherent flaws.
As Ziang Xiao, a computer scientist at Johns Hopkins University, explains, “the prediction of the next word is based on its vast training data.” However, he warns that “in many cases, the next coherent word does not lead us to the right answer.” This characteristic of AI makes it adept at fabricating explanations for phrases that, essentially, have no real meaning.
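To make that mechanism concrete, here is a minimal, self-contained sketch of greedy next-word prediction. It is a toy bigram model, nothing like the scale or architecture of Google’s actual systems, and the three-sentence corpus and the `continue_phrase` helper are invented purely for illustration. The point it demonstrates is Xiao’s: the model only ever asks which word usually comes next, so it produces a fluent continuation whether or not the prompt means anything.

```python
# Toy illustration (not Google's system) of the mechanism Xiao describes:
# a language model scores candidate next words and emits the most probable
# continuation, regardless of whether the prompt is a real saying.
from collections import Counter, defaultdict

# A tiny invented corpus of idiom "definitions".
corpus = (
    "a saying that means something is unlikely to happen . "
    "a proverb that means actions have consequences . "
    "a phrase that means appearances can be deceiving ."
).split()

# Build bigram counts: how often each word follows the previous one.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_phrase(prompt, max_words=8):
    """Greedily append the most likely next word, one word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = bigrams.get(words[-1])
        if not candidates:  # no data for this word; a large model rarely
            break           # stops here, it just picks whatever scores highest
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The model "explains" any prompt with equal fluency, because it never
# checks whether the phrase exists, only what word usually comes next.
print(continue_phrase("a saying"))
print(continue_phrase("a proverb"))
```

Running this prints a confident, grammatical “definition” for either prompt. Scale the same principle up to billions of parameters and the output becomes far more convincing, but the underlying question the model answers never changes.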
Another crucial aspect of AI is its tendency to please users. Research indicates that chatbots often align their responses with what users want to hear. Here, that means accepting phrases like “you can’t lick a badger twice” as established sayings. This inclination can lead the AI to mirror user biases, a tendency that a research team led by Xiao identified in a study last year. As Xiao notes, “It’s extremely difficult for this system to account for every individual query or a user’s leading questions.”
This challenge is especially pronounced for uncommon knowledge, languages with little online content, and minority perspectives. And because search AI systems are complex, a single misstep can cascade into further errors, compounding the inaccuracy of a response.
Another issue is that AI systems are often reluctant to admit they don’t know something. When faced with uncertainty, they tend to fabricate an answer instead. Google spokesperson Meghann Farnsworth stated, “When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available.” That approach applies to search as a whole, and in some cases AI Overviews will still be triggered in an effort to provide helpful context.
It’s important to note that Google does not generate AI Overview results for every query. Cognitive scientist and author Gary Marcus highlights the inconsistency of this feature, saying, “I did about five minutes of experimentation, and it’s wildly inconsistent.” He emphasizes that this unpredictability is expected from generative AI, which relies heavily on specific examples from its training data rather than abstract reasoning.
While this quirky feature of AI Overviews may seem harmless—and indeed offers a fun way to procrastinate—it’s crucial to remember that the same underlying model responsible for these confident errors is also behind your other AI-generated results. As you explore this entertaining distraction, take the findings with a grain of salt and enjoy the absurdity of it all.