Racing towards AI superintelligence

Daniel Puchert

Journalist

In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence (AI) with the ominous title Superintelligence: Paths, Dangers, Strategies.

It proved highly influential in promoting the idea that advanced AI systems (“superintelligences” more capable than humans) might one day take over the world and destroy humanity.
 
That's philosophical, I guess; it depends on what you think consciousness is.
I believe it is just a matter of complexity and design. Specifics like consciousness, awareness, abstraction, reasoning, etc. will emerge once the complexity is above a certain level and the correct design constraints and starting conditions are figured out.

“The miracle of nature” that is us, I believe, is really just the outcome of a biological machine being trained massively in parallel for billions of years.
 
There isn’t a roadmap or timeline you could put on general AI or superintelligence, because what we have at the moment isn’t a stepping stone towards that and there is no clear path to it. How to even begin working towards that remains unsolved.

The reason the output of LLMs resembles advanced pattern matching is that that's exactly how they work. This is why context window size is so important: the larger the context window, the more material the model has to match against when predicting the next token.
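
As a toy illustration of "pattern match and predict the next text", here is a minimal bigram model in Python. It is a deliberately crude sketch: real LLMs learn neural weights over vast corpora and far longer contexts, but the predict-the-most-likely-next-token framing is the same. The corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each single-word context (a bigram table).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(context_word):
    """Predict the next word purely by matching patterns seen in the corpus."""
    followers = bigrams[context_word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat": it follows "the" most often above
```

A bigger "context window" here would mean keying the table on two or three preceding words instead of one, which makes the match more specific; that is the same trade-off, in miniature, that the post describes.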

The breakthrough is how far this could be pushed with modern hardware and data sizes to feign intelligence and reasoning. But I think we have already started to hit diminishing returns, and there isn't going to be a significant amount of new, good data flowing in that's untainted by AI.

I think everyone has this experience with LLMs: if you ask it questions about things you have shallow knowledge of, the answers seem impressive. If you ask it questions about things you do know about, the answers often have the “shape” of a correct answer, but are all over the place, especially where non-trivial reasoning and deduction are required.

I’m not saying this to be “down” on AI and its accomplishments, just to try to bring the hype back out of orbit. Understand the limitations, what has actually been achieved and how far it can be taken (ignore the over-the-top marketing), and what the practical real-world use cases are given those limitations.
 
Can't think for itself, despite the hype, and never will.

Why would it need to? If it's got a purpose, all it needs to do is constantly try to achieve it.
IMO very few humans think for themselves anyway.
 
There isn’t a roadmap or timeline you could put on general AI or superintelligence, because what we have at the moment isn’t a stepping stone towards that and there is no clear path to it. How to even begin working towards that remains unsolved.
I think that attention, tensor cores, high speed interconnects, fast quantized features, improved (mathematical) optimization methodologies, the mapping of neural structures, etc., are huge steps towards this.

The reason the output of LLMs resembles advanced pattern matching is that that's exactly how they work. This is why context window size is so important: the larger the context window, the more material the model has to match against when predicting the next token.
This is true for people too.

The breakthrough is how far this could be pushed with modern hardware and data sizes to feign intelligence and reasoning. But I think we have already started to hit diminishing returns, and there isn't going to be a significant amount of new, good data flowing in that's untainted by AI.

I think everyone has this experience with LLMs: if you ask it questions about things you have shallow knowledge of, the answers seem impressive. If you ask it questions about things you do know about, the answers often have the “shape” of a correct answer, but are all over the place, especially where non-trivial reasoning and deduction are required.
Agreed. I think that to take it further will require additional algorithmic and/or conceptual jumps, not just more GPUs and more weights/parameters.

I’m not saying this to be “down” on AI and the accomplishments, just to try bring the hype back out of orbit. Understand the limitations, what has actually been achieved and how far it can be taken (ignore the over the top marketing), and what the practical real world use cases are given those limitations.
While I do agree that the hype is pushing the idea that true intelligence is just a matter of scaling what we have now, I expect that we aren’t as many conceptual jumps away from this actually being true as many skeptics may think.
 
This is true for people too.
Yes, it's a big part of how human thought works, but not the only part; that's the distinction.

While I do agree that the hype is pushing the idea that true intelligence is just a matter of scaling what we have now, I expect that we aren’t as many conceptual jumps away from this actually being true as many skeptics may think.
We could be just a single jump away from it, but that's the position we've always been in. The truth is, we don't know how far we are from it. We do know the approaches we currently have aren't the path towards it, so it would need to be a novel approach. Since we don't know what that approach would even look like, predicting the hardware requirements, or the hardware advances needed to make it feasible, is impossible.

Further, you can't predict a breakthrough, nor can you force it. People have, not unreasonably, thought many breakthroughs were around the corner given what they knew at the time (e.g. the cure for cancer 50+ years ago), but they never materialised.

Now with the amount of resources being poured into it, we are more likely than before to stumble on a solution.
BUT....
It could happen today, it could never happen, or anywhere in between.
 
At our company we have built a multi-agent workflow that is so damn close to agentic AI.
Agents checking the previous agents' work, and so on.
It's pretty mindblowing.
However, for all its power and ability, it still relies on proper prompting and access to correct data, and it had to actually be built.
So while its output is indistinguishable from a human's, it could not have built itself in such a manner.
True autonomy will occur when AI just decides to build something itself, in response to an event, or because it damn well feels like it.
That day is coming.
And it's going to be very interesting indeed.
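
The agents-checking-agents loop described above can be sketched roughly like this. It is a hypothetical Python outline, not the actual system: `call_model` is a stub standing in for whatever LLM API the real pipeline uses, and the role names and tasks are invented for the example.

```python
# Hypothetical sketch of a worker/reviewer agent loop.
def call_model(role, task, draft=None):
    # Stub: a real system would prompt an LLM here.
    if role == "worker":
        return f"draft answer for: {task}"
    # The "reviewer" agent checks the worker's draft against the task.
    return "ok" if draft and task in draft else "revise"

def run_pipeline(task, max_rounds=3):
    """Worker produces a draft; reviewer either approves it or sends it back."""
    draft = call_model("worker", task)
    for _ in range(max_rounds):
        if call_model("reviewer", task, draft) == "ok":
            return draft
        draft = call_model("worker", task)  # retry on "revise"
    return draft  # give up after max_rounds and return the last draft

print(run_pipeline("summarise Q3 sales"))
```

The point the post makes survives in the sketch: the loop, the roles, and the stopping rule all had to be designed and built by people; nothing here decides on its own to start a new pipeline.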
 
I believe it is just a matter of complexity and design. Specifics like consciousness, awareness, abstraction, reasoning, etc., will be emergent once the complexity is above a certain level, and the correct design constraints and start conditions are figured out.

“The miracle of nature” that is us, I believe, is really just the outcome of a biological machine being trained massively in parallel for billions of years.

From that point of view, it's still unlikely, IMO.

"Just the outcome of": those billions of years can't be rushed. Humans keep rushing things, and keep making massive mistakes.

Then there's how much we don't get right at all. The parts of the planet humans are involved in, and how we use them, are in a mess. More likely we'll take a nosedive first, again; throughout history no civilisation has survived long term, not even the big ones.
 
We could be just a single jump away from it, but that's the position we've always been in. The truth is, we don't know how far we are from it. We do know the approaches we currently have aren't the path towards it, so it would need to be a novel approach. Since we don't know what that approach would even look like, predicting the hardware requirements, or the hardware advances needed to make it feasible, is impossible.

Further, you can't predict a breakthrough, nor can you force it. People have, not unreasonably, thought many breakthroughs were around the corner given what they knew at the time (e.g. the cure for cancer 50+ years ago), but they never materialised.

Now with the amount of resources being poured into it, we are more likely than before to stumble on a solution.
BUT....
It could happen today, it could never happen, or anywhere in between.

That makes a lot more sense.
Anyway, every generation gets excited at its wondrous developments, but just a few years later it gets to watch the next generation come along, laugh at its backwards ways, and take the lead. We even see newbies here laughing at ancient satellite technology now.

But also look at our progress in space, for instance: until recently it was funded almost exclusively through political games. We're never as advanced as we think we are, just someone's power games in the moment, but the potential is always there.
 