Artificial Intelligence: Innovating Our Way to Annihilation


Illustration by Max Yeager

When Isaac Asimov formally introduced his Three Laws of Robotics in 1942, he could hardly have imagined how quickly his fiction would begin to resemble reality. Over the last decade, artificial intelligence has advanced rapidly, edging ever closer to the kind of intelligence long depicted in pop culture. In response, some of the most prominent minds of our time have warned that artificial intelligence could bring about the "end of civilization."

Over the last year, SpaceX CEO Elon Musk, celebrated theoretical physicist Stephen Hawking and Microsoft co-founder Bill Gates have all made public statements about the direction in which artificial intelligence research is headed. In October 2014, Musk gave an interview at the Massachusetts Institute of Technology (MIT) in which he labeled artificial intelligence "the biggest existential threat to civilization," and Hawking told the BBC in December 2014 that while the current capabilities of artificial intelligence are fairly limited, future research and development into sentient intelligence could allow machines to evolve on their own. This self-evolution, Hawking said, would threaten humans, who are themselves limited by slow biological evolution. Meanwhile, during a Reddit Ask Me Anything (AMA) this January, Gates said that while artificial intelligence poses no real threat over the next few decades, beyond that it could become strong enough to threaten human existence.

The concern over artificial intelligence is not about the inherent intelligence of the machines themselves, but about the possibility that such systems would classify humans as slow, inefficient parts of the system that are irrelevant and can be discarded entirely. While this paints a bleak picture, especially if intelligence on that scale were actually achieved, there are a couple of caveats worth raising against the argument.

First of all, self-sustaining artificial intelligence is not remotely on the horizon of current innovation. Most modern artificial intelligence revolves around mundane tasks that may seem incredibly complex on their own but are really just large-scale applications of tedious decision-making algorithms. The more reasonable concern is whether machines would come to see humans as unnecessary components of a larger mechanism.

While popular culture has painted a picture in which machines end up in control of the human race and make ruthless decisions because they cannot comprehend human mortality and emotion, significant research is underway in context-based computing, a field closely tied to artificial intelligence. It is not implausible to foresee the two fields uniting in the future to create a safer artificial intelligence, one that values irrationality as a human trait necessary for decision-making, thereby eliminating the threat at its root.

It is difficult to predict where technology will lead us. Artificial intelligence might indeed become a threat to our existence one day, but it could just as easily end up being the catalyst for a better future. Either way, sci-fi will one day be able to say "I told you so."