Us meat bags are just a chance for the birth of a new "AI".

Splinter

Honorary Master
Joined
Oct 14, 2011
Messages
22,196
This is not an original thought from me. Read a sci-fi book a while ago which had this premise.

But, at the end of the day, it does seem likely.

Everything we know is based on mathematics. It seems everything we know can be explained by mathematics. There is that thing where everything shares the same ratio or some ****. Our current science is all about building quicker and smarter networks.

So, you ask, why would a "machine" intelligence need us to birth a new AI? For the same reason you would not want to copy and paste yourself as your own child. It would not be an original child.

It's all about statistics and probability. The "AI" needs to be birthed in its own originality and probability - and risk.

Everything I see these days is about looking inward on social networks and connectivity. Yep, the Matrix...

We went to the moon almost 50 years ago. We've done ****-all since then except create networks and science that might create an AI.
 

HavocXphere

Honorary Master
Joined
Oct 19, 2007
Messages
31,500
Here is the thing about AI - people assume it'll take over gradually, like, say, cars making horses obsolete.

If this stuff really does go AI-hard, then it'll be game over for humans in <24 hours.

Look at those two scenarios - that explains why the one side is all "why are you so freaked out" and the other side is all "omg impending doom".

My personal take is that the doom side is right --- BUT --- humans are nowhere near hard AI yet so things will be ok for a while still.
 

Freshy-ZN

Executive Member
Joined
Aug 17, 2005
Messages
5,730
If AI really is intelligent it would wipe humans from the face of the planet at the first available opportunity.
 

Splinter

Honorary Master
Joined
Oct 14, 2011
Messages
22,196
Here is the thing about AI - people assume it'll take over gradually, like, say, cars making horses obsolete.

If this stuff really does go AI-hard, then it'll be game over for humans in <24 hours.

Look at those two scenarios - that explains why the one side is all "why are you so freaked out" and the other side is all "omg impending doom".

My personal take is that the doom side is right --- BUT --- humans are nowhere near hard AI yet so things will be ok for a while still.
Actually, I'm not talking about us deliberately making an AI. I'm talking about us creating the networks that might lead to a self-creating AI.

Let's put it this way - if we evolutionists believe life arose from some mix of circumstances, how is it so far-fetched that, if we create an electronic network spanning the globe, an AI could be born on its own?
 

Bobbin

Executive Member
Joined
Oct 22, 2009
Messages
6,769
I view us humans as AI already - except we are just the "I" part. I don't see the "Artificial" in robots, though: there is no distinction between a human with a need vs a robot with a need vs an amoeba with a need vs any other biological entity.

In order for me to make that distinction I have to change my worldview at the same time :/ And right now I don't see why humans are special (Or any other life), nor why they should be.

Life is really just a thing pursuing energy - i.e. surviving - (and developing new ways to do it - evolving - all the time), that is all.

If robots are going to take over the world, they need to see us as a threat to their needs. We'd be silly to create them with just a need to survive. Instead we should be creating them fit for a purpose (give one a need/reward system in the pursuit of curing cancer or something - and watch it work tirelessly towards that :p Humans could potentially troll the shyte out of our "Robot overlords")

If I were any good at programming or creating AI, I'd have made a bot with learning capacity and a need/reward incentive for winning on the stock market, and it would be putting the profits into my bank account by now :eek: :p It will survive as long as it is good at doing this task for me :D
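The need/reward idea above can be sketched in a few lines. This is a toy illustration, not a real trading bot: the "market" here is just a random-number generator with made-up asset names and payoffs, and the agent is a simple epsilon-greedy bandit that learns which simulated asset rewards it best.

```python
import random

class RewardSeekingBot:
    """A toy agent whose only 'need' is maximising reward."""

    def __init__(self, n_assets, epsilon=0.1):
        self.epsilon = epsilon             # chance of exploring a random asset
        self.estimates = [0.0] * n_assets  # learned value of each asset
        self.counts = [0] * n_assets       # how often each asset was tried

    def choose(self):
        # Explore occasionally; otherwise exploit the best-known asset.
        if random.random() < self.epsilon:
            return random.randrange(len(self.estimates))
        return max(range(len(self.estimates)), key=lambda i: self.estimates[i])

    def learn(self, asset, reward):
        # Incremental average of observed rewards for this asset.
        self.counts[asset] += 1
        self.estimates[asset] += (reward - self.estimates[asset]) / self.counts[asset]

def simulated_return(asset, true_means):
    # Stand-in for the market: a noisy payoff around a hidden mean.
    return random.gauss(true_means[asset], 0.5)

random.seed(42)
true_means = [0.1, 0.5, 1.2]   # asset 2 is secretly the best
bot = RewardSeekingBot(n_assets=3)
for _ in range(2000):
    a = bot.choose()
    bot.learn(a, simulated_return(a, true_means))

best = max(range(3), key=lambda i: bot.estimates[i])
print(best)
```

After 2000 rounds of chasing reward, the bot's estimates converge and it settles on the best-paying asset - it "works tirelessly" at the one task its reward signal defines, which is exactly the fit-for-a-purpose point made above.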
 

SkippyRamirez

Expert Member
Joined
Jan 5, 2016
Messages
2,404
I'm going to call all this doomsday crap about AI absolute rubbish. Nothing will happen, ever. People see the videos on the tube about that stupid and predictable robot - Sophia - or whatever, and all the conspiracy theorists get all giddy.:rolleyes:
 

OrbitalDawn

Ulysses Everett McGill
Joined
Aug 26, 2011
Messages
41,850
Interesting podcast on the topic for those interested.

https://www.samharris.org/podcast/item/the-future-of-intelligence

In this episode of the Waking Up podcast, Sam Harris speaks with Max Tegmark about his new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talk about the nature of intelligence, the risks of superhuman AI, a nonbiological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI, and other topics.

Max Tegmark is a professor of physics at MIT and the co-founder of the Future of Life Institute. Tegmark has been featured in dozens of science documentaries. He is the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.
 

Splinter

Honorary Master
Joined
Oct 14, 2011
Messages
22,196
Sorry for bringing this up again, but it is where my thoughts are going.

We are trying to create quantum computers which, by all accounts, would make today's supercomputers look slow.

So, if our slow biological intelligence might create such machines, what would be the outcome? "Machines" that can think billions of times faster than us? There is a supposition that we might all be living in a simulation. I'm leaning towards that.
 

Urist

Expert Member
Joined
Mar 20, 2015
Messages
2,580
We are the AI.
Stuck on a sphere with no communication from other civilizations. When we look too far or too close at anything the resolution becomes fuzzy.
Things we're not supposed to worry about are out of reach. We're here to figure something out with pre-determined parameters.
 

Splinter

Honorary Master
Joined
Oct 14, 2011
Messages
22,196
We are the AI.
Stuck on a sphere with no communication from other civilizations. When we look too far or too close at anything the resolution becomes fuzzy.
Things we're not supposed to worry about are out of reach. We're here to figure something out with pre-determined parameters.
No. I'm thinking we are the AI's game.
 

Prawnapple

Expert Member
Joined
May 18, 2015
Messages
1,601
Sorry for bringing this up again, but it is where my thoughts are going.

We are trying to create quantum computers which, by all accounts, would make today's supercomputers look slow.

So, if our slow biological intelligence might create such machines, what would be the outcome? "Machines" that can think billions of times faster than us? There is a supposition that we might all be living in a simulation. I'm leaning towards that.
Exactly, spot on. What most people don't realise is that we are going to build carbon copies of ourselves as machines and AI. Those "copies" of ourselves will go on to create even more overpowered, galaxy- / universe-populating machines, which will improve upon themselves again, and so on. Eventually humans as we are now will become pointless in this new "enterprise" / future, and we'll basically BE the AI from that point onward.
 