Shut commercial AI development down now or risk destroying all life on Earth — machine intelligence researcher

The rapid and unchecked development of artificial intelligence (AI) is irresponsible and should be curbed to avoid a superhumanly intelligent AI wiping out all sentient life on Earth.

That is according to Machine Intelligence Research Institute decision theorist Eliezer Yudkowsky, who recently wrote an ominous article for Time about the potentially disastrous consequences of the current AI race among major tech players.

He is perhaps best known for popularising the idea of a “friendly” AI, but his current outlook on the future of AI sounds just about as dystopian as the worlds of Terminator or The Matrix.

Yudkowsky’s piece follows the Centre for Artificial Intelligence and Digital Policy’s letter urging regulators to halt further commercial deployment of new generations of the GPT language model created by OpenAI.

The letter carried 1,000 signatures from prominent technology figures and experts — including Elon Musk — and called for a six-month pause on GPT-4’s commercial activities.

The organisation also plans to ask the United States Federal Trade Commission (FTC) to investigate whether the commercial release of GPT-4 violated US and global regulations.

While Yudkowsky commended the call for a moratorium and said he respected those who had signed it, he believes the letter understated the seriousness of the situation.

“The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence,” said Yudkowsky.

“Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.”

Precision and scientific insights needed to ensure AI “cares” for biological life

Yudkowsky said humanity was neither prepared nor “on course to be prepared” for AI’s capabilities within any reasonable time window.

“There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems.”

Yudkowsky said he and many other researchers steeped in these issues expected that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that “literally everyone” on Earth would die.

“If we actually do this, we are all going to die,” he said.

“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”

Yudkowsky said that surviving AI would require precision, preparation, new scientific insights, and not having AI systems consisting of “giant inscrutable arrays of fractional numbers”.

Without that precision, AI will not do what humans want — including “caring” about humans or sentient life in general.

He said that “caring” could, in principle, be built into AI, but it was not currently understood how this could be achieved.

The lack of this caring factor would result in AI neither loving nor hating humans, but rather seeing them as consisting of atoms that could be used for something else.

Yudkowsky said the likely result of humanity facing down a superhuman intelligence was a “total loss”.

“Valid metaphors include ‘a 10-year-old trying to play chess against Stockfish 15’, ‘the 11th century trying to fight the 21st century,’ and ‘Australopithecus trying to fight Homo sapiens’,” Yudkowsky said.
