Google recently unveiled to the world a new line of products under the moniker “Made by Google.” The tech giant formally introduced Google Home, a device suspiciously reminiscent of Amazon’s Echo; Allo, a messaging app; Daydream, a virtual reality headset; and the company’s new Pixel phone.

To many, this might just seem like the internet giant trying to jump into every tech market that’s popular today, but there is a common thread linking many of these products. The Pixel, Google Home and Allo all make use of the company’s new, very literally named digital assistant: Google Assistant.

Like Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa, Google Assistant uses artificial intelligence (AI) to help users do just about anything they want in response to simple commands. To serve them best, Assistant tries to get to know its users. Within Allo, the “smart messaging app,” Assistant not only provides fun stickers to spice up your texts but also becomes another member of the conversation.

As you message your friends, Allo might notice you mention dinner and recommend some nearby restaurants within the conversation. Even when it isn’t actively sending messages of its own, the app is reading your texts in order to learn your texting style. Its “Smart Reply” system suggests responses to incoming texts based on how it thinks you personally might reply, using data from your previous conversations.

If all this sounds a bit too creepy, Google does offer an “incognito” mode much like that of its popular internet browser, Google Chrome. Messages sent while incognito are not saved by Google Assistant. Those willing to fully embrace the app's AI can have a text conversation with Assistant itself and let it get to know them better.

While this sort of technology isn’t necessarily brand new, it is far more capable than ever before. The future predicted by countless science fiction stories, with that ever-present robot voice responding to our every word, can no longer be considered fiction. With a company as huge and widespread as Google putting its effort into such technologies, one might expect them to play a larger role in our lives sometime soon.

Behind the Tech

Dr. Ray Ptucha, an assistant professor of computer engineering here at RIT, spoke about the technological systems and processes behind such products. Ptucha was part of an Imagine RIT exhibit last May called “Taking AI Further: Deep Learning,” which focused on the same tech used by digital assistants like Google’s.

Ptucha explained that digital assistants work using a process called deep learning.

“Let’s just say it’s a neural network on steroids,” he said. “A neural network with many, many layers. The network is constructed in a hierarchical fashion such that as we go from one layer to the next, it's a more general abstract of the layer beneath. When we reach the final layer, we will ultimately have a better understanding of what's happening.” In other words, the AI takes in input like speech or an image and interprets it over and over again. Each pass notes more specific details and puts together new puzzle pieces that eventually create an understanding of what the original input actually represents, in a process inspired by that of the human brain.
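To make the idea concrete, here is a minimal sketch in Python using NumPy. The layer sizes, random weights and fake input are invented purely for illustration; a real network learns its weights from training data rather than leaving them random.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, in_size, out_size):
    """One layer: a linear transform plus a nonlinearity. Real networks
    learn these weights from data; random ones just show the structure."""
    weights = rng.standard_normal((in_size, out_size)) * 0.1
    return np.tanh(x @ weights)

# Stand-in input: a tiny "image" flattened into 64 numbers.
x = rng.standard_normal(64)

# A deep network is many such layers stacked in a hierarchy; each
# pass produces a smaller, more abstract summary of the one before.
for in_size, out_size in [(64, 32), (32, 16), (16, 8), (8, 4)]:
    x = layer(x, in_size, out_size)

print(x)  # four numbers standing in for the network's "understanding"
```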

Ptucha pointed out that these neural networks are nowhere close to matching the complexity of the human brain. That said, they do learn; to teach an AI the basics it needs to interact with users as advertised, Ptucha explained, it is first put through something called supervised learning.

Texts from millions of users, when fed into the system, form a “ground truth” for the AI to learn from. The system then uses a probability-based technique to see what the average person in the United States might say in any given scenario and provides the user suggestions accordingly.
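A heavily simplified sketch of that idea in Python follows; the three-message corpus stands in for the millions of real texts, and the plain next-word frequency count is an illustrative stand-in, not the technique Google actually uses.

```python
from collections import Counter, defaultdict

# Stand-in for the "ground truth": messages from many users.
corpus = [
    "want to get dinner tonight",
    "want to get coffee later",
    "want to see a movie",
]

# Supervised step: count which word tends to follow each word.
follow_counts = defaultdict(Counter)
for message in corpus:
    words = message.split()
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

def suggest(word):
    """Suggest the statistically most likely next word, if any."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(suggest("to"))  # -> 'get' (seen twice, versus 'see' once)
```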

This is just how an assistive AI starts out. Once it gets to work on your phone, it adds your own data into the equation through a process called “Transfer Learning.” Such personal data is weighted more heavily than the original millions of samples.

“Think of that like learning aggressively when it's on your phone,” said Ptucha. “We have to initialize it; we transfer that baseline model over from the average user to your phone before it can start working.” A “recurrent neural network” then uses your history and the last word you typed to guess what you might type next.
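Returning to the toy frequency model from above, a rough sketch of that weighting might look like the following; the 5x factor is an arbitrary illustrative choice, not Google's actual scheme, and a real recurrent network is far more sophisticated than simple counts.

```python
from collections import Counter, defaultdict

follow_counts = defaultdict(Counter)

def learn(messages, weight=1):
    """Count next-word pairs, optionally weighting them more heavily."""
    for message in messages:
        words = message.split()
        for current_word, next_word in zip(words, words[1:]):
            follow_counts[current_word][next_word] += weight

def suggest(word):
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

# Baseline model: what the average user tends to type.
learn(["want to get dinner", "want to get coffee", "want to see a movie"])

# Transfer step: this user's own messages, weighted 5x so that
# personal habits outweigh the average. (The factor is arbitrary.)
learn(["want to sleep early tonight"], weight=5)

print(suggest("to"))  # -> 'sleep': the personal data wins out
```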

Looking Forward

Ptucha can envision a future where everyone carries a computer wherever they go, a computer that listens, understands and is equipped with an intelligent digital assistant akin to Google's. "Not only can I ask it simple things, but I can start asking it some very specific information. Eventually it'll get to the point where this thing is so good I'll start confiding in it," he said. He even sees people becoming friendly with their assistants.

According to Ptucha, such tech might next be implemented as implants or as something wearable like 2013’s Google Glass. Whatever form it takes, the end goal is the same: “augmenting the human to make their lives richer and fuller.”

Obstacles to Overcome

"[We] have to come up with clever systems where humans interact with computers in simple and intuitive interfaces," said Ptucha.

A big concern that naturally arises from the concept of always-listening machines is privacy. While Google does promise options like the aforementioned incognito mode, some may still be hesitant to trust the company.

“Whenever you use an incognito mode, you can be pretty sure that none of that is saved," Ptucha said. "It's such a violation of privacy that Google doesn't even want to attempt to be accused of saving any of that information. They have so many ways of getting non-incognito data, that why would [they] take the chance of doing something like that?"

For Ptucha, the potential positives far outweigh the privacy concerns. He noted, for example, that just about everyone has a relative still struggling to figure out smartphones and the internet, and that such a person could perhaps benefit from assistive AI.

"[We] have to come up with clever systems where humans interact with computers in simple and intuitive interfaces. Otherwise it's going to make our lives more stressful, not more enjoyable," he said. Google's Assistant could help by responding easily to simple, spoken commands and interpreting what functionality is best suited for the needs of those less adept with technology, without expecting them to know all the terminology.

This is possible because voice recognition has reached impressive heights in recent years. Ptucha cited an August 2016 Stanford University study showing smartphone speech recognition to be as much as three times faster than typing the same words in English. Not only is it getting faster, it's also getting better at understanding words that aren't spoken clearly. As a result, you can talk in a natural way, the way you talk to people, and still be understood by the technology.

"This is the kind of interface that humans will warmly open their arms to," said Ptucha. Google’s new products could be one of the first steps in the direction of a future where such technology has greatly proliferated.