Bigotry Encoded: Racial Bias in Technology

illustration by Aria Dines

“Is this soap dispenser racist?” was the question that became an internet sensation. In a video filmed at a Marriott hotel, an automatic soap dispenser fails to detect a black customer’s hand.

According to an article on Mic, the dispenser used near-infrared technology to detect hand motions: invisible light reflects off the skin and triggers the sensor. Darker skin tones absorb more of that light, so not enough is reflected back to activate the dispenser. That means dark-skinned restroom users have to skip washing their hands with this not-so-sensitive soap dispenser.
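The failure mode can be sketched in a few lines of code. The threshold and reflectance numbers below are purely hypothetical illustrations, not Technical Concepts' actual design values:

```python
# Illustrative sketch of a fixed-threshold near-infrared sensor.
# All numbers are hypothetical, chosen only to show the failure mode.

TRIGGER_THRESHOLD = 0.5  # assumed: fraction of emitted IR that must return


def sensor_triggers(skin_reflectance: float) -> bool:
    """Return True if enough near-infrared light bounces back to fire the sensor."""
    return skin_reflectance >= TRIGGER_THRESHOLD


# Lighter skin reflects more near-infrared light than darker skin.
light_skin = 0.7   # hypothetical reflectance
dark_skin = 0.35   # hypothetical reflectance

print(sensor_triggers(light_skin))  # True: soap dispenses
print(sensor_triggers(dark_skin))   # False: the sensor never fires
```

A threshold calibrated only on testers with lighter skin will sit above the reflectance of darker skin, so the device silently fails for those users, which is exactly why testing across skin tones matters.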

This epic design flaw may seem hilarious on the internet, but it demonstrates a major issue within the tech industry: a lack of diversity. The dispenser was created by a company called Technical Concepts, which unintentionally made a discriminatory product because no one at the company thought to test it on dark skin.

According to Alec Harris, a fourth-year Manufacturing Engineering Technology major and the Pre-Collegiate Initiative Chair for the National Society of Black Engineers (NSBE) chapter, this is an endemic problem within the tech industry.

“If you have an office full of white people, whatever products that come out of that office are more likely to be geared more towards white people. The less diversity there is in a workplace environment, the more likely major design flaws will be present that only affect people of color,” Harris said.

“The less diversity there is in a workplace environment, the more likely major design flaws will be present that only affect people of color.”

Racial Bias in Tech

Silicon Valley, located in the southern San Francisco Bay Area, is a global epicenter of innovation and prestige in technology. It is also infamous for its stark lack of diversity, nicknamed its “Achilles’ heel,” according to CNBC. The issue extends throughout many facets of technology.

Facial recognition software, specifically, has consistently shown racial bias when identifying faces. From the iPhone’s Face ID failing to differentiate between two colleagues in China to Google Photos mistakenly tagging two black friends in a selfie as gorillas, algorithms still fail to recognize and distinguish people of color. I’m a black woman, and I’ve even been locked out of my Surface because Windows Hello couldn’t detect my face. Yet the ramifications run much deeper than shoddy cell phones now that facial recognition software is being introduced into policing.

“Facial-recognition systems are more likely either to misidentify or fail to identify African Americans than other races, errors that could result in innocent citizens being marked as suspects in crimes,” according to The Atlantic.

As technology becomes more advanced, so do the scope and harm of these blunders. An essay in the State of Black America 2018 noted how the digital revolution is increasingly leaving black consumers behind, especially on social media platforms like Twitter and Instagram, where black people make up a quarter of users but less than five percent of employees.

The consequences are clear: technology is becoming a white people’s brand. Caitlin Pope, a third-year Applied Arts and Sciences student, noted how tech companies run the risk of alienating their diverse users.

“If there was more diversity in the tech industry, then not only would there be more products that suit people of every skin tone, but it will develop each company as a brand to unify themselves instead of having consumers say, ‘That’s a white brand,’” Pope said.

The Fuel for Hatred

Can technology be racist? Absolutely, if we allow it to be. A dangerous assumption is that technology is neutral; in reality, behind every piece of software is a programmer who has built in their own perspective and biases. Something as innocent as Google search not only reflects algorithms’ racial bias, but can also reinforce its users’ bigotry. Due to a since-fixed Google search bug, typing “Jews are” produced the autocomplete suggestion “evil,” “Blacks are” suggested “not oppressed” and “Hitler was” suggested “my hero.” Technology runs the risk of adding to the growing wave of online bigotry.

“The technology industry can be looked at as focused around solving problems, and if only the problems of one group of people are being considered, there are countless other problems and potential solutions not being looked into,” Harris said.

Online radicalization is a growing problem. Dylann Roof, the mass shooter who killed nine black churchgoers in a racist hate crime, radicalized himself online. This happened in part because many online companies were ill-prepared for the extremist content, fake news and bots that flood the internet. Unfortunately, many algorithms are set up to automatically promote content without screening it, as in the YouTube autoplay scandal, in which the site recommended pro-Nazi videos on unaffiliated searches.

Companies like Microsoft have tried to combat online divisions with artificial intelligence (AI) technology. However, these efforts have backfired spectacularly. Enter “Tay,” an innocent AI bot that was programmed to engage different audiences in fun, friendly conversation on Twitter. In less than a day, Tay went from claiming “Humans are cool” to “Hitler was right I hate the Jews.”

A Human Solution

Microsoft’s strategy, and others like it, often receives backlash. The issue isn’t necessarily with the technology itself, but with the makers’ inability to predict the social consequences of their designs. According to Medium, the tech industry’s main problem is a lack of emotional intelligence and an apathy toward its effect on the community. “Smarter” tech won’t solve what is, essentially, a human problem. This is why Apple’s diverse emojis still have the wrong palm skin colors and why algorithms suspend people of color’s social media accounts more often than Nazis’, as reported by ProPublica: no one was there to tell those tech companies that something was wrong.

Medium continues that the easiest way to solve the tech industry’s woes is simply to hire more people of color in STEM and provide a more nurturing environment for growing emotional intelligence. It’s not just companies doing the hiring, either: RIT also has a responsibility to its students of color in STEM fields.

“I think RIT should shed light on the marginalization of students in tech to show support of people of color in that industry or people who want to pursue that industry,” Pope said.

Harris agrees and suggests that RIT can do this by supporting initiatives like NSBE that fund more resources, mentorship and scholarships for students. There are already several diverse tech organizations on campus, like NSBE, that help balance the odds for more people of color to succeed in STEM. With more community building and support, maybe one day we can all keep our hands clean.