Google Search AI giving disastrously bad answers

Hanno Labuschagne (Journalist, Staff member)
The question one has to ask is: why do some AI companies get it right, while Google, the largest search provider, has failed?

Have they lost the required skills because race and other woke policies are valued over actual experience and knowledge?

Is this why Google keeps failing with the basics of AI? This seems to happen regularly with them now.
 
I don't think they do, but other companies are just craftier at sifting the more commonly used content, and they've been at it for longer. Give any AI an obscure question where there isn't a lot of data and it's hit and miss. Any AI can be poisoned, too.
 
The question one has to ask is: why do some AI companies get it right, while Google, the largest search provider, has failed?
Exactly. No real innovation from them in the last decade-plus. Things have been going too well with the ads cash cow…
 
I don't think they do, but other companies are just craftier at sifting the more commonly used content, and they've been at it for longer.
Really? Google started with machine learning in 2001.

When did the other companies start?

Not long after that, they started with language models, which are basically what our current AI is based on: LLMs.
 
People in the SEO community are complaining that Google's AI Overviews are hurting their search results. According to Google, SGE (the Search Generative Experience) is supposed to enhance the SERP, but from my own assessment it is actually degrading its quality. Google does have a framework that suggests how to implement structured data. The focus has now shifted to GEO, but the algorithm is very selective about impression metrics. That is why I am saying all of this, and it can be seen in the results of SGE/AI Overviews.
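For anyone unfamiliar with the structured data mentioned above: Google's guidelines are typically implemented as a small JSON-LD block embedded in the page. A minimal sketch, assuming a schema.org Article type with placeholder values (none of them taken from this thread):

```python
# Minimal sketch of schema.org "Article" structured data (JSON-LD).
# All field values below are placeholders for illustration only.
import json

article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2024-01-01",
}

# A page would embed the output inside:
# <script type="application/ld+json"> ... </script>
print(json.dumps(article_jsonld, indent=2))
```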
 
Really? Google started with machine learning in 2001. When did the other companies start?
I don't think we can compare the past to LLMs. The principles are there, but it's like comparing a bicycle to a motorbike. These issues are only surfacing now that the scale has been turned up. Besides, when AI continually needs human intervention for the popular stuff, it's essentially cheating. There's an Afrikaans saying: "bo blink, onder stink" (it shines on top but stinks underneath).
 
Google started its modern AI program in 2015, as per the link provided.

OpenAI was only founded at the end of 2015, with not even 0.00001% of Google's budget, if not less.

Which brings me back to my question: why is OpenAI so successful while Google is lacking? In 2020 Google started with race-based policies, and they have recently reaffirmed their commitment to promoting based on race and wokeness rather than actual knowledge and experience.

If it isn't their woke, hateful policies, it isn't how long they have been at it (your first attempt to defend them), and it isn't their budget, then what else do you think may have caused it?
 
AI should be stopped, shut down, and prevented by law. Maybe in a million years from now, if intelligence among humans starts evolving again, they should look at it again. AI seems to be as dumb as those programming it, and it will always be dangerous in any sense.
 
You forgot about the guy we helped with his fibre speed issue. After we helped him, his ISP cancelled his contract and service, he came back to complain, and then we all sided with the ISP.
True.

We helped many people. People from all over. People say that we are the best helpers.
Some people say we are the greatest helpers in the world. Maybe the best helpers of all time.
 
AI should be stopped, shut down, and prevented by law.

GitHub Copilot is kick-ass.

Of course, it was trained on the code of the programmers who ultimately use it :whistling:
 