Search Giant Takes Steps to Fix Google AI Overview Feature Issues
Last week, Google’s AI search feature was trending on social media for the wrong reason: generating misleading responses to search queries. Initially, the tech giant downplayed the problems with its AI Overview feature. But according to Wired, that stance didn’t last long.
By late Thursday, Liz Reid, the company’s Head of Search, admitted that there were issues needing improvement. AI Overviews are designed to give users AI-generated responses to queries, drawn from information that already exists online. Google presents them in a way that gives users the answers they seek without having to click web links or look up publications themselves.
Inaccurate Search Results
Reports indicate that Reid’s post referenced at least two viral inaccurate results generated by AI Overviews. In one, Google’s algorithms suggested that ‘eating rocks can be good for you’. The result is notable because rock eating isn’t a topic many people ask or write about, which means there are few online sources for Google to draw on.
In her post, Reid said Google’s AI tool drew the result from an article on the satirical website The Onion but misinterpreted the content as factual. Another search result suggested thickening pizza sauce with non-toxic glue; Reid attributed that error to the tool missing the humor in a forum post.
“We saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza,” she said in her post.
Unfair Judgment
Reid said it was unfair to judge the quality of AI Overviews based on the screenshots that were shared widely. She said the search engine giant tested the AI tool extensively before releasing it and had data demonstrating the value users place on it. The company also holds that some of the viral AI Overview screenshots circulating online were fake.
Some of the screenshots posted on social media have indeed been examined, and their formats don’t match Google’s AI Overviews. Misleading claims weren’t confined to social media, either. The New York Times was forced to issue a correction after reporting erroneously about AI Overview feature issues, clarifying that AI Overviews never suggested jumping off the Golden Gate Bridge as a remedy for depression.
While acknowledging that the mistakes that captured users’ attention were unintended, Reid argued that the criticism was being taken too far.
“There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results. Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression. Those AI Overviews never appeared,” Reid said in her post.
Continuous Improvement
Although the AI Overview feature’s issues may have been amplified on social media, the company acknowledged making a significant number of technical improvements to the tool. The tech giant said it will continue to track user feedback and improve AI Overviews as the need arises.