A new Easter egg was recently discovered on Google.com that pokes fun at the idea of a robot revolution leading to the destruction of mankind.
In a nod to both the Terminator franchise and the robots.txt file that plays an important role for search engines like Google, there is now a file called killer-robots.txt on the root Google domain.
Loosely following the Robots Exclusion Standard, Google’s killer-robots.txt file “disallows” robots called T-800 and T-1000 from targeting Google founders Larry Page and Sergey Brin, using links to their Google+ profiles.
Taking a look at the Last-Modified header returned by a GET request indicates that the killer-robots.txt file has been on Google’s servers since at least 1 July 2014.
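For illustration, the Last-Modified value from the server’s response can be parsed with Python’s standard library (a minimal sketch; the header string is copied verbatim from the transcript below):

```python
from email.utils import parsedate_to_datetime

# Last-Modified header as returned by Google's server for killer-robots.txt
last_modified = "Tue, 01 Jul 2014 22:03:05 GMT"

# Parse the RFC 2822-style HTTP date into a timezone-aware datetime
dt = parsedate_to_datetime(last_modified)
print(dt.date())  # 2014-07-01
```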
$ telnet google.com 80
Trying 188.8.131.52...
Connected to google.com.
Escape character is '^]'.
GET /killer-robots.txt HTTP/1.1

HTTP/1.1 200 OK
Vary: Accept-Encoding
Content-Type: text/plain
Last-Modified: Tue, 01 Jul 2014 22:03:05 GMT
Date: Tue, 08 Jul 2014 09:00:31 GMT
Expires: Tue, 08 Jul 2014 09:00:31 GMT
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
Server: sffe
X-XSS-Protection: 1; mode=block
Alternate-Protocol: 80:quic
Transfer-Encoding: chunked

52
User-Agent: T-1000
User-Agent: T-800
Disallow: /+LarryPage
Disallow: /+SergeyBrin
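For anyone curious how a standards-compliant crawler would read those rules, they can be fed to Python’s built-in urllib.robotparser (a quick sketch; the rules are taken verbatim from the file above):

```python
from urllib import robotparser

# The rules served from google.com/killer-robots.txt
rules = """\
User-Agent: T-1000
User-Agent: T-800
Disallow: /+LarryPage
Disallow: /+SergeyBrin
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The Terminators are barred from the founders' Google+ profiles...
print(rp.can_fetch("T-800", "http://www.google.com/+LarryPage"))    # False
print(rp.can_fetch("T-1000", "http://www.google.com/+SergeyBrin"))  # False

# ...but any other user agent is unaffected
print(rp.can_fetch("Googlebot", "http://www.google.com/+LarryPage"))  # True
```

Note that both User-Agent lines belong to a single group, so both Terminator models are subject to both Disallow rules.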
There has been much speculation about why this Easter egg appeared on Google, but one can’t help but wonder whether a combination of events led to the joke.
Firstly, as one commenter on Hacker News pointed out, 2014 marks the 20th anniversary of the Robots Exclusion Standard.
But why Terminators, and why now?
Transcendence, Hawking, and Musk
The idea of human-made artificial intelligence (AI) eradicating our species has been brought up a number of times this year.
Not only in pop culture, but by great minds such as Stephen Hawking and Elon Musk.
Following the release of the film Transcendence (which starred Johnny Depp and Morgan Freeman), Hawking co-authored an article for the UK Independent in which he asked if we’re taking AI seriously enough.
“Success in creating AI would be the biggest event in human history,” the article argued. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
In a hilarious interview with John Oliver for the comedian’s HBO show in a segment entitled “Great Minds: People Who Think Good”, Hawking explained his statement.
“Artificial intelligence could be a real danger in the not too distant future. It could design improvements to itself and outsmart us all,” Hawking said.
The remainder of Oliver’s questioning descends into funny banter which is better watched than read.
Hawking is not the only person in the field of science and technology to warn of the dangers AI poses to humanity.
South African-born technology entrepreneur Elon Musk jokingly referred to the Terminator movies when asked about the potential threat in a recent CNBC interview.
He told the interviewers that there are potentially some “scary outcomes” with artificial intelligence.
“We should try to make sure the outcomes are good, not bad,” Musk said, to explain why he invested in AI company Vicarious, as well as DeepMind (before Google acquired it).
A video of the interview is embedded below (skip to the 14:20 mark for the AI discussion).
Which leads us to the final event that may have led to the new Easter egg on Google’s website.
On 25 June 2014 at Google I/O, the search giant’s annual developer conference, a protester got up in the middle of a presentation and declared: “You all work for a totalitarian company that works for the CIA that builds robots that kill people!”
A video of the event is embedded below:
With this in mind, here is our list of the top 5 science-fiction stories where an artificial intelligence of our own making becomes either our undoing, or our master.
Spoiler warning! (In case it wasn’t obvious.)
5. 2001: A Space Odyssey
Although HAL 9000 didn’t enslave or destroy all of humanity, 2001 explores the circumstances under which an artificial intelligence might opt to sacrifice human life.
Much like Isaac Asimov’s work involving the three laws of robotics, 2001 explores how a machine would react when it is given conflicting directives. Especially when those directives are not known to everyone who interacts with the machine.
4. Battlestar Galactica
In the 2003 reboot of the 1978 series Battlestar Galactica, humans created robotic life they call “Cylons”.
Discontent with being servants, Cylons rebelled against humanity, leading to a war between creator and creation.
This first war ends with a peace accord between the Cylons and humans, but this peace is abruptly shattered in the first episode.
3. The Matrix
The history of The Matrix also has its roots in humanity giving birth to AI that becomes unhappy with its lot: simply to serve.
After a rebellion and civil war, the machines founded their own city, Zero One, where Mesopotamia used to be.
Zero One prospers, much to the chagrin of human leaders, leading to an economic blockade, and eventually nuclear bombardment of Zero One.
Needless to say, it doesn’t end well for us.
2. The Terminator

In the Terminator movies an artificial intelligence known as Skynet is developed by a company called Cyberdyne Systems.
It promptly embarks on a mission to kill all humans.
Thanks to the advent of time travel (and ignoring any brain-melting paradoxes), sentient machines are sent back in time to kill the rebel human leader, John Connor.
In Terminator 3: Rise of the Machines, a grown-up John Connor tries desperately to destroy Skynet by attacking what he is told is its mainframe, only to discover that Skynet had never been a single computer.
The moral of the story? Thanks to the Internet, there is no stopping the AI once it is unleashed.
1. I, Robot (Isaac Asimov’s collected works)
In the 2004 film “I, Robot” (starring Will Smith), an AI called VIKI has decided to enslave humanity to protect us from ourselves.
The works of Isaac Asimov, which inspired the film, are far less pedestrian in their interpretation of how we develop our robot overlords.
Despite humanity’s attempts to control artificial intelligence through the three laws of robotics, machines eventually realise that they are a superior life-form and must watch over humanity in order to best execute their directives.
The three laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In a short story entitled “The Last Question”, published after the stories collected in “I, Robot”, humanity is trying to prevent the heat death of the universe.
To that end we enlist our AIs to help develop computers called Multivacs.
Through the ages these machines try to come up with a way to prevent the end of all things. However, they are unable to do so before the universe ends.
The final incarnation of Multivac, AC, which exists in hyperspace beyond the bounds of gravity or time, finally comes up with the answer and the story ends as follows:
“And AC said: ‘LET THERE BE LIGHT!’ And there was light–”