How Silicon Valley's sexism affects your life

The Washington Post

Sara Wachter-Boettcher

13 August 2017

It was a rough week at Google. On Aug. 4, a 10-page memo titled "Google's Ideological Echo Chamber" started circulating among employees. It argued that the disparities between men and women in tech and leadership roles were rooted in biology, not bias. On Monday, James Damore, the software engineer who wrote it, was fired; he then filed a labor complaint to contest his dismissal.

We've heard lots about Silicon Valley's toxic culture this summer - venture capitalists who proposition female start-up founders, man-child CEOs like Uber's Travis Kalanick, abusive nondisparagement agreements that prevent harassment victims from describing their experiences. Damore's memo added fuel to the fire, arguing that women are more neurotic and less stress-tolerant than men, less likely to pursue status, and less interested in the "systemizing" work of programming. "We need to stop assuming that gender gaps imply sexism," he concludes.

Like the stories that came before it, coverage of this memo has focused on how a sexist tech culture harms people in the industry - the women and people of color who've been patronized, passed over, and pushed out. But what happens in Silicon Valley doesn't stay in Silicon Valley. It comes into our homes and onto our screens, affecting all of us who use technology, not just those who make it.

Take Snapchat. Last year, on April 20 (also known as 4/20, a holiday of sorts for marijuana fans), the app launched a new photo filter: "Bob Marley," which applied dreadlocks and darker skin tones to users' selfies. The filter was roundly criticized as "digital blackface," but Snapchat refused to apologize. In fact, a few months later, it launched another racially offensive filter - this one morphing people's faces into Asian caricatures complete with buckteeth, squinty eyes and red cheeks.

Then there's Apple Health, which promised to monitor "your whole health picture" when it launched in 2014. The app could track exercise habits, blood alcohol content and even chromium intake. But for a full year after its launch, it couldn't track menstruation, which affects roughly half the population at some point in their lives.

And consider smartphone assistants such as Cortana and Siri. In 2016, researchers noted in JAMA Internal Medicine that these services couldn't understand phrases such as "I was raped" or "I was beaten up by my husband." They often responded to such queries with jokes.

In many cases, sexist or racist biases are also embedded in the powerful (yet invisible) algorithms behind much of today's software.

Look at FaceApp, which came under fire this spring for its "hotness" photo filter. The filter smoothed wrinkles, slimmed cheeks - and dramatically lightened skin. The company behind the app acknowledged that the filter's algorithm had been trained using a biased data set - meaning it learned what "beauty" was from faces that were predominantly white.

Likewise, in 2015, Google launched a new image-recognition feature for its Photos app. The feature would trawl users' photos, identify their contents and automatically add labels to them - such as "dog," "graduation" or "bicycle." Brooklyn resident Jacky Alciné noticed a more upsetting tag: A series of photos of him and a friend, both black, was labeled with the word "gorillas." The racial slur wasn't intentional. The system simply wasn't as good at identifying black people as it was white people. After the incident, Google engineers acknowledged this, promising improvements focused on "better recognition of dark-skinned faces."

Then there's Word2vec, a neural network Google researchers created in 2013 to assist with natural language processing - that is, computers' ability to understand human speech. Word2vec combs through Google News articles to learn about the relationships between words. The program can complete analogies such as "Paris is to France as Tokyo is to _____." But Word2vec also concluded that "man is to woman as computer programmer is to homemaker" and "man is to architect as woman is to interior designer."

These pairings aren't surprising - they simply reflect the Google News data set the network was built on. But in an industry where white men are the norm and "disruption" trumps all else, technology such as Word2vec is often assumed to be objective and then embedded into all sorts of other software - recommendation engines, job-search systems. Kathryn Hume, of artificial-intelligence company Integrate.ai, calls this the "time warp" of AI: "Capturing trends in human behavior from our near past and projecting them into our near future." The effects are far-reaching. Studies show that biased machine-learning systems cause problems ranging from job-search results that show women lower-paying positions to predictive-policing software that perpetuates disparities in communities of color.
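You can see how such analogies surface with just a few lines of code. The sketch below is an illustration, not Google's own pipeline: it assumes the open-source gensim library and the publicly released Google News word vectors, and the file path and number of results shown are just example choices.

    from gensim.models import KeyedVectors

    # Load the published Google News embeddings (about 3 million words and phrases).
    # The file name below is the standard public release; adjust the path as needed.
    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin.gz", binary=True)

    # Solve the analogy "man is to computer_programmer as woman is to ___"
    # by computing vector("computer_programmer") - vector("man") + vector("woman")
    # and listing the nearest words.
    print(vectors.most_similar(
        positive=["woman", "computer_programmer"],
        negative=["man"],
        topn=3))

In published analyses of these vectors, "homemaker" ranks at or near the top of that list - exactly the kind of learned association described above.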

Some of these flaws might seem small. But together, they paint a picture of an industry that's out of touch with the people who use its products. And without a fundamental overhaul in the way Silicon Valley works - who is funded, who is hired, who is promoted and who is believed when abuses happen - it's going to stay that way. That's why calls to kill tech diversity initiatives are so misguided. The sooner we stop letting tech get away with being insular, inequitable and hostile to diversity, the sooner we'll start building technology that works for all of us.

Sara Wachter-Boettcher is a Web consultant and the author of the forthcoming book "Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech." Twitter: @sara_ann_marie
