Posts in Gender Bias
How Silicon Valley's sexism affects your life - Washington Post

It was a rough week at Google. On Aug. 4, a 10-page memo titled "Google's Ideological Echo Chamber" started circulating among employees. It argued that the disparities between men and women in tech and leadership roles were rooted in biology, not bias. On Monday, James Damore, the software engineer who wrote it, was fired; he then filed a labor complaint to contest his dismissal.

We've heard lots about Silicon Valley's toxic culture this summer - venture capitalists who proposition female start-up founders, man-child CEOs like Uber's Travis Kalanick, abusive nondisparagement agreements that prevent harassment victims from describing their experiences. Damore's memo added fuel to the fire, arguing that women are more neurotic and less stress-tolerant than men, less likely to pursue status, and less interested in the "systemizing" work of programming. "We need to stop assuming that gender gaps imply sexism," he concluded.

Like the stories that came before it, coverage of this memo has focused on how a sexist tech culture harms people in the industry - the women and people of color who've been patronized, passed over, and pushed out. But what happens in Silicon Valley doesn't stay in Silicon Valley. It comes into our homes and onto our screens, affecting all of us who use technology, not just those who make it.

Read More
We tested bots like Siri and Alexa to see who would stand up to sexual harassment - Quartz

Women have been made into servants once again. Except this time, they’re digital.

Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Google Home peddle stereotypes of female subservience—which puts their “progressive” parent companies in a moral predicament.

People often comment on the sexism inherent in these subservient bots’ female voices, but few have considered the real-life implications of the devices’ lackluster responses to sexual harassment. By letting users verbally abuse these assistants without consequence, their parent companies allow harmful behavioral stereotypes to persist. Everyone has an ethical imperative to help prevent abuse, but companies producing digital female servants warrant extra scrutiny, especially when their products unintentionally signal that abusive behavior is normal or acceptable.
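
For illustration, here is a minimal sketch of what a firmer policy could look like in code. It is not any vendor's actual logic: the keyword list and canned responses are invented, and a real assistant would rely on a learned abuse classifier rather than keyword matching.

```python
# Minimal sketch (invented, not any vendor's implementation) of an assistant
# that refuses abusive input firmly instead of deflecting playfully.
# FLAGGED_TERMS stands in for a real learned abuse classifier.

FLAGGED_TERMS = ("sexy", "slut", "stupid", "shut up")
FIRM_RESPONSE = "That's not an appropriate way to talk to me."

def handle_request(utterance: str) -> str:
    # Placeholder for the assistant's normal intent-handling pipeline.
    return f"Working on it: {utterance!r}"

def respond(utterance: str) -> str:
    """Refuse abusive input firmly; otherwise handle the request normally."""
    lowered = utterance.lower()
    if any(term in lowered for term in FLAGGED_TERMS):
        return FIRM_RESPONSE
    return handle_request(utterance)

if __name__ == "__main__":
    print(respond("You're sexy"))                   # firm refusal
    print(respond("Set a timer for five minutes"))  # normal handling
```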

Read More
Look Who’s Still Talking the Most in Movies: White Men - New York Times

With “Wonder Woman” and “Girls Trip” riding a wave of critical and commercial success at the box office this summer, it can be tempting to think that diversity in Hollywood is on an upswing.

But these high-profile examples are not a sign of greater representation in films over all. A new study from the University of Southern California’s Viterbi School of Engineering found that films were likely to feature fewer female and minority characters than white male ones, and that when those characters did appear, they were portrayed in ways that reinforced stereotypes. Female characters, in particular, were generally less central to the plot.
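
To make the methodology concrete, here is a toy version of that kind of dialogue audit, assuming script lines pre-annotated with speaker gender. The sample lines are invented; the USC study ran automated analysis over a large corpus of real screenplays.

```python
# Toy dialogue audit: given script lines annotated with speaker gender,
# compute each group's share of the spoken words. Sample data is invented.

from collections import Counter

script = [
    ("M", "We need to move now."),
    ("F", "Wait."),
    ("M", "There's no time to wait. Trust me on this one."),
    ("F", "Fine."),
]

words = Counter()
for gender, line in script:
    words[gender] += len(line.split())

total = sum(words.values())
for gender, count in sorted(words.items()):
    print(f"{gender}: {count}/{total} words ({count / total:.0%})")
```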

Read More
The Global Search for Education: How Are We Doing On The Gender Agenda? – Millennials Weigh In

Posted By C. M. Rubin on Jul 27, 2017

In an interview with CMRubinWorld, Dr. Linda Scott, Emeritus DP World Chair for Entrepreneurship and Innovation at the Saïd Business School, University of Oxford, reminds us that the gender gap is everywhere, “real and measurable” and not a “figment of some feminist’s imagination.” The global research confirms that gender inequality “retards economic growth, perpetuates poverty,” and “is bad for families.”

Read More
Diversity in the Robot Reporter Newsroom

The Associated Press recently announced a big new hire: A robot reporter from Automated Insights (AI) would be employed to write up to 4,400 earnings report stories per quarter. Last year, that same automated writing software produced over 300 million stories — that’s some serious scale from a single algorithmic entity.
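
For readers curious how a “robot reporter” turns numbers into prose, here is a minimal sketch of template-based generation, the general technique behind automated earnings coverage. It is illustrative only, with an invented template and field names, not Automated Insights' actual system.

```python
# Minimal sketch of template-based story generation. The template and field
# names are invented for illustration; this is not Automated Insights' system.

TEMPLATE = (
    "{company} reported quarterly earnings of ${eps:.2f} per share on {date}, "
    "{direction} analysts' expectations of ${expected:.2f}."
)

def earnings_story(report: dict) -> str:
    """Fill the template from one structured earnings report."""
    if report["eps"] > report["expected"]:
        direction = "beating"
    elif report["eps"] < report["expected"]:
        direction = "missing"
    else:
        direction = "matching"
    return TEMPLATE.format(direction=direction, **report)

print(earnings_story(
    {"company": "Acme Corp", "date": "Aug. 8", "eps": 1.42, "expected": 1.30}
))
```

Once the template is written, producing thousands of stories per quarter is just a loop over structured data feeds, which is how a single system reaches the scale described above.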

So what happens to media diversity in the face of massive automated content production platforms like the one Automated Insights created? The news media has done pretty abysmally at incorporating a balance of minority and gender perspectives, but I think we’d all like to believe that by including diverse perspectives in the reporting and editing of news we fly closer to the truth. A silver lining to the newspaper industry crash has been a profusion of smaller, more nimble media outlets, allowing for far more variability and diversity in the ideas we’re exposed to.

Read More
We Recorded VCs' Conversations and Analyzed How Differently They Talk About Female Entrepreneurs - HBR

When venture capitalists (VCs) evaluate investment proposals, the language they use to describe the entrepreneurs who write them plays an important but often hidden role in shaping who is awarded funding and why. But it’s difficult to obtain VCs’ unvarnished comments, given that they are uttered behind closed doors. We were given access to government venture capital decision-making meetings in Sweden and were able to observe the types of language that VCs used over a two-year period. One major thing stuck out: The language used to describe male and female entrepreneurs was radically different. And these differences have very real consequences for those seeking funding — and for society in general.
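
As a rough illustration of the kind of comparison involved, the toy sketch below tallies descriptors by entrepreneur gender. The sample remarks paraphrase examples quoted in the article (“young and promising” for men, “young, but inexperienced” for women); a real analysis would run over full meeting transcripts.

```python
# Toy comparison of VC language by entrepreneur gender: tally the words used
# about each group and list the most common. Sample remarks paraphrase the
# article's quoted examples; real data would be full transcripts.

from collections import Counter

remarks = [
    ("male", "young and promising"),
    ("male", "arrogant, but a very impressive entrepreneur"),
    ("female", "young, but inexperienced"),
    ("female", "enthusiastic, but weak on the numbers"),
]

counts = {"male": Counter(), "female": Counter()}
for gender, remark in remarks:
    counts[gender].update(remark.lower().replace(",", "").split())

for gender, counter in counts.items():
    top = ", ".join(word for word, _ in counter.most_common(4))
    print(f"{gender}: {top}")
```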

Read More
AI programs exhibit racial and gender biases, research reveals - Guardian

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say.

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
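
A simplified version of such an association probe can be run against publicly available word vectors. The sketch below uses gensim's downloadable GloVe vectors (a real dataset, fetched over the network on first run) with shortened word lists; it is a rough stand-in for the study's full statistical test, not a reproduction of it.

```python
# Simplified word-embedding bias probe: measure whether a word's vector sits
# closer to male or female context words. Word lists are shortened for
# illustration; the original study used a fuller statistical test (WEAT).

import gensim.downloader
import numpy as np

vectors = gensim.downloader.load("glove-wiki-gigaword-50")

MALE = ["he", "him", "his", "man"]
FEMALE = ["she", "her", "hers", "woman"]

def gender_lean(word: str) -> float:
    """Positive means closer to male terms; negative, closer to female terms."""
    sim_m = np.mean([vectors.similarity(word, m) for m in MALE])
    sim_f = np.mean([vectors.similarity(word, f) for f in FEMALE])
    return float(sim_m - sim_f)

for word in ["engineer", "nurse", "programmer", "receptionist"]:
    print(f"{word:>12}: {gender_lean(word):+.3f}")
```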

Read More
Better, Less-Stereotyped Word Vectors - ConceptNet Blog

Bias and Disenfranchisement

Conversational interfaces learn from the data they are given, and all datasets based on human communication encode bias. In 2016, researchers at Boston University and Microsoft discovered what they characterized as “extremely sexist” patterns in Word2Vec, a widely used set of word vectors released by Google in 2013 and trained on Google News text, with a vocabulary of roughly three million words and phrases. They found, among other things, that occupations the model inferred to be “male” included Maestro, Skipper, Protégé and Philosopher, while those inferred to be “female” included Homemaker, Nurse, Receptionist and Librarian.

This is more than a hypothetical risk for organizations: Word2Vec is used to train search algorithms, recommendation engines, and other common applications related to ad targeting and audience segmentation. Organizations building chatbots on common datasets must investigate potential bias and design for it up front, to avoid alienating and disenfranchising customers and consumers. The good news is that these and other researchers are working on methods to audit predictive models for bias.
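
The finding is easy to reproduce in outline. The sketch below asks the Google News word2vec vectors to complete the researchers' famous analogy, “man is to computer programmer as woman is to ?”; the gensim dataset name is real, but note the download is large (about 1.6 GB).

```python
# Analogy probe behind the "extremely sexist" finding: vector arithmetic
# computer_programmer - man + woman, then nearest neighbors.
# "computer_programmer" is the underscore-joined phrase form used in this
# vocabulary; the dataset download is large (~1.6 GB).

import gensim.downloader

vectors = gensim.downloader.load("word2vec-google-news-300")

for match, score in vectors.most_similar(
    positive=["computer_programmer", "woman"], negative=["man"], topn=3
):
    print(f"{match}: {score:.3f}")
```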

Read More