"Alexa? You're Scaring Me"


Illustration by Lily Garnaat

Not long ago, a personal assistant was a person whose main objective was to make life easier for someone in charge. The job revolved around organizing schedules, proofreading documents, taking phone calls, fetching the occasional double soy latte with half the milk and, sometimes, providing moral support when the universe seemed to have a field day ruining lives.

Only recently has the concept of a personal assistant changed from a devoted human to a smart speaker. Today, we call our assistants Siri, Cortana and Alexa. Assistants in the past had a larger variety of names.

Recently, several owners of the Echo and Echo Dot reported that Alexa had started to laugh unexpectedly. The glitch was ultimately resolved, but it gave rise to consumer fears about technological advancement.

Please be a Glitch

When Alexa is asked to laugh today, she responds with a playful "tee-hee." Her recent behavior was not so harmless. Several YouTube videos have surfaced of the device laughing without being asked to, with a laugh some would construe as creepy.

"I was not surprised that Alexa had an incident. In fact, I was more surprised it took this long for a common incident to occur," said Ernest Roszkowski, a senior lecturer for Visual Communication Studies at NTID.

Ernest Fokoué, a professor in the Center for Quality and Applied Statistics at RIT, was excited about the incident.

"We could call this a euphoria that was built by users based on patterns of interaction," he said.

Fokoué pointed out two important ways to perceive the incident. It can either be written off as a glitch, which is exactly how Amazon responded, or taken to mean that some part of Alexa was programmed to learn how to joke independently for no apparent reason.

"Machines are emotionless devices that respond or react through a series of artificial intelligence algorithms. Their reactions and responses are pre-calculated and certain statements could be misinterpreted," said Roszkowski.

That being said, Alexa's unexpected laugh could be written off as a misinterpretation of something she thought she heard. Still, the glitch provokes both interest in and fear of technology developing a mind of its own.

A Glimpse through a Glitch

It might be premature to visualize a machine empire displacing the human population because we've outlived our usefulness. Even so, we cannot deny the rapid pace of growth in machine learning. Devices are increasingly built to reason in ways that require little human guidance.

"If the machine refuses to change its behavior, we're entering a state of singularity," said Fokoue. "That's the worry that Bill Gates and Elon Musk were talking about."

In such cases, it becomes an issue of morality and whether or not machines can be trusted with ethical decisions. Being an optimist, Fokoué raised an important question that could put users' minds at ease.

"What is the programming within the machine that led them to oppose human commands and how do we study the process of how they got there so we can reverse it?" he asked. 

What matters is not what smart devices might one day do to displace human society, but understanding the process by which they get there.

"When strong A.I., which is when machines begin to learn like humans, comes into the picture, this won't be surprising," said Fokoue.

Drawing the Line

Alexa's current state, for the most part, may be the most rudimentary phase of what could eventually become a companion or assistant that mirrors human abilities. That prospect divides opinion on whether Alexa should stay where she is or whether she is ready to evolve.

"I would prefer if they stayed voice-commanded by humans. I love how technology makes our lives easier, but I don’t fully trust their artificial intelligence capabilities," said Roszkowski.

Roszkowski reinforced his belief with an example of how artificial intelligence that mimics human emotion could lead to disastrous results.

"Let's say an owner of a sentient device complained in the presence of the device about how they were broke. Earlier in the week, the sentient device owner's friend came over and happened to share their bank info over the phone. The device happened to pick up personal information from the phone call conversation. So, after hearing the owner complain once again, the sentient device with computer knowledge (from overhearing its owner), exposure to deviant attitudes and morals, as well as artificial emotions, decides to hack the friend’s bank and transfer money into its owner's bank," he recalled. 

This illustrates the dangers of giving a smart device the freedom to act on its own perception of rationality or morality. Then again, there is an equally optimistic outcome in which machines help humanity sustain itself. Only time will tell.