18 June 2025

Warning to people using Google Search in South Africa

South Africans who frequently use Google Search to gather information should not simply accept its Gemini-based AI Overviews as fact, especially when it comes to searches about serious topics.

As is the case with all generative AI systems, this feature can confidently present false information due to unreliable sources, misinterpretation, or AI “hallucination.”

Google first added its Gemini-powered AI Overviews to Search in May 2024. The company said the feature would simplify searching by providing a snapshot with key information and links.

However, a few days into the rollout, multiple publications reported that the feature was returning incorrect and misleading answers to some queries.

In one infamous example, it advised users to try non-toxic glue to make cheese stick to pizza. It also told some users that geologists recommended eating at least one small rock per day.

In a statement two weeks after the launch, Google attributed many of the errors to queries deliberately crafted to mislead the system into hallucinating nonsensical answers.

It acknowledged that some “odd, inaccurate, or unhelpful” AI Overviews had shown up, but said these were for queries that people did not commonly perform.

However, in recent months, there have been several examples where AI Overviews presented false information for searches on newsworthy topics.

Earlier this month, AI Overviews claimed that an Airbus plane was involved in the fatal Air India Flight 171 crash that killed 242 people on 12 June 2025.

Although it was widely reported, and later confirmed, that the aircraft involved was a Boeing 787, AI Overviews said it was an Airbus A330-243.

Google had to manually remove the response from AI Overviews after Ars Technica flagged it.

“As with all Search features, we rigorously make improvements and use examples like this to update our systems. This response is no longer showing,” Google said.

“We maintain a high-quality bar with all Search features, and the accuracy rate for AI Overviews is on par with other features like Featured Snippets.”

Google’s AI Overviews feature made a big mistake regarding the type of plane involved in the recent Air India crash.

Hit and miss — but not a train smash

MyBroadband recently ran dozens of queries through Google Search regarding local technology developments and general knowledge of South Africa.

We found that Google Search’s AI Overviews were correct in most cases. However, there were also a few misfires.

For example, when we asked “When did 5G first launch in South Africa?” AI Overviews said 5G services first launched in May 2020 with Vodacom’s commercial rollout.

While Vodacom had the first mobile 5G network in South Africa, Rain had already rolled out commercial fixed-wireless 5G services the year before.

For a general knowledge question, we asked how old South Africa’s democracy was. Google AI Overviews said it was 30 years old and linked to a search result from 2024.

This is a good example of how generative AI can struggle to interpret and contextualise a result. Had the tool accounted for that article’s 2024 publication date, it would have adjusted the answer to 31 years in 2025.
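To see the missing step, consider that the correct answer is a simple calculation from a fixed starting date, not a figure to be copied from a dated source. The following minimal Python sketch (our own illustration, not anything Google’s systems actually use) shows the arithmetic:

from datetime import date

# South Africa's first democratic election was held on 27 April 1994.
DEMOCRACY_START = date(1994, 4, 27)

def democracy_age(on: date) -> int:
    """Full years of democracy as of the given date."""
    years = on.year - DEMOCRACY_START.year
    # Subtract a year if the anniversary has not yet passed in that year.
    if (on.month, on.day) < (DEMOCRACY_START.month, DEMOCRACY_START.day):
        years -= 1
    return years

print(democracy_age(date(2024, 6, 18)))  # 30 -- the figure in the 2024 article
print(democracy_age(date(2025, 6, 18)))  # 31 -- the correct answer at the time of writing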

A more complex query was whether solar power systems in South Africa required sign-off by an electrical engineer. AI Overviews said that this was a requirement.

We then asked whether an electrician could sign off rather than an engineer, which it also deemed acceptable.

The truth is that approval requirements can vary greatly depending on your electricity distributor. In some cases, finding out the exact criteria may even require a visit to municipal offices.

All AI tools make up stuff

Considering the above, criticism of Google’s AI Overviews may be disproportionate simply because of Google Search’s ubiquity and overall importance in information sourcing.

Online users should exercise similar caution when using other AI-based tools like ChatGPT as search engines.

While the accuracy of information on the web has been a major issue for many years, AI tools can muddy the waters further with hallucinations if reliable information is limited or difficult to interpret.

Regardless of the query, it is best to carefully study the sources AI tools cite for their answers and to check multiple articles or websites among the top search results to ensure the response is accurate.

You may also need to consider that some of these sources could be inaccurate and may be littered with AI hallucinations themselves.

Failing to do so could have severe consequences. For example, one user on Reddit recently complained that AI Overviews mistakenly said that various highly toxic plants were safe for use in areas with dogs.
