Posts in Stereotypes
AI tool quantifies power imbalance between female and male characters in Hollywood movies - Technology Breaking News

At first glance, the movie “Frozen” might seem to have two strong female protagonists — Elsa, the elder princess with unruly powers over snow and ice, and her sister, Anna, who spends much of the film on a quest to save their kingdom.

But the two princesses actually exert very different levels of power and control over their own destinies, according to new research from University of Washington computer scientists.

The team used machine-learning-based tools to analyze the language in nearly 800 movie scripts, quantifying how much power and agency those scripts give to individual characters. In their study, recently presented in Denmark at the 2017 Conference on Empirical Methods in Natural Language Processing, the researchers found subtle but widespread gender bias in the way male and female characters are portrayed.
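
The tool scores characters by the verbs they perform and receive, using a lexicon of verbs annotated for the power and agency they imply. The sketch below is only an illustration of that general idea, in Python: the verbs, scores, and character-verb pairs are invented for the example and are not the authors' lexicon or code.

```python
# Minimal sketch of the general idea behind the UW tool (not the authors' code):
# score each character by averaging verb-level "agency" annotations from a
# small, hypothetical lexicon over the events in which they are the subject.

from collections import defaultdict

# Hypothetical lexicon: +1 if the verb grants its subject agency, -1 if it
# denies it, 0 if neutral. The published work uses a much larger annotated
# verb lexicon ("connotation frames").
AGENCY_LEXICON = {
    "decides": 1, "builds": 1, "rescues": 1, "commands": 1,
    "waits": -1, "needs": -1, "begs": -1, "stumbles": -1,
    "says": 0, "walks": 0,
}

# (character_as_subject, verb) pairs, as they might come out of a dependency
# parse of a script. These example tuples are illustrative only.
events = [
    ("Elsa", "decides"), ("Elsa", "builds"), ("Elsa", "commands"),
    ("Anna", "waits"), ("Anna", "needs"), ("Anna", "stumbles"), ("Anna", "rescues"),
]

def agency_scores(events):
    """Average the lexicon value of every verb a character is the subject of."""
    totals, counts = defaultdict(float), defaultdict(int)
    for character, verb in events:
        if verb in AGENCY_LEXICON:
            totals[character] += AGENCY_LEXICON[verb]
            counts[character] += 1
    return {c: totals[c] / counts[c] for c in totals}

print(agency_scores(events))  # e.g. {'Elsa': 1.0, 'Anna': -0.5}
```

Aggregated over a whole script, per-character averages like these are what let the researchers compare, say, Elsa's agency with Anna's.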

“‘Frozen’ is an interesting example because Elsa really does make her own decisions and is able to drive her own destiny forward, while Anna consistently fails in trying to rescue her sister and often needs the help of a man,” said lead author and Paul G. Allen School of Computer Science & Engineering doctoral student Maarten Sap, whose team also applied the tool to Wikipedia plot summaries of several classic Disney princess movies.

“Anna is actually portrayed with the same low levels of power and agency as Cinderella, which is a movie that came out more than 60 years ago. That’s a pretty sad finding,” Sap said.

Read More
Understanding Bias in Algorithmic Design - ASME Demand

In 2016, The Seattle Times uncovered an issue with a popular networking site’s search feature. When the investigative reporters entered female names into LinkedIn’s search bar, the site asked if they meant to search for similar-sounding male names instead — “Stephen Williams” instead of “Stephanie Williams,” for example. According to the paper’s reporting, however, the pattern did not hold in reverse when a user searched for male names.

Within a week of The Seattle Times article’s release, LinkedIn introduced a fix. Spokeswoman Suzi Owens told the paper that the search algorithm had been guided by “relative frequencies of words” from past searches and member profiles, not by gender. Her explanation suggests that LinkedIn’s algorithm was not intentionally biased. Nevertheless, using word frequency — a seemingly objective variable — as a key parameter still generated skewed results. That could be because American men are more likely than American women to have a common name, according to Social Security data. Thus, a search function built on frequency criteria alone would be more likely to boost visibility for Stephens than for Stephanies.
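
To see how a frequency-only rule can produce that skew, here is a deliberately simplified sketch: a hypothetical "did you mean?" helper that suggests any similarly spelled name with a higher count. The names, counts, and similarity threshold are invented for the example; this is not LinkedIn's data or algorithm.

```python
# Illustrative sketch of how a "did you mean?" feature driven only by relative
# name frequency can skew suggestions. Frequencies and the similarity
# threshold are invented; this is not LinkedIn's actual system.

from difflib import SequenceMatcher

# Hypothetical frequency counts (e.g., from past searches or profiles).
NAME_FREQUENCY = {
    "stephen williams": 9200,
    "stephanie williams": 4100,
    "andrea jones": 5300,
    "andrew jones": 8800,
}

def suggest(query, min_similarity=0.8):
    """Return a more frequent, similarly spelled name, if one exists."""
    query = query.lower()
    best = None
    for name, freq in NAME_FREQUENCY.items():
        if name == query:
            continue
        similar = SequenceMatcher(None, query, name).ratio() >= min_similarity
        if similar and freq > NAME_FREQUENCY.get(query, 0):
            if best is None or freq > NAME_FREQUENCY[best]:
                best = name
    return best

print(suggest("stephanie williams"))  # -> 'stephen williams'
print(suggest("stephen williams"))    # -> None: no similar name is more frequent
```

Nothing in the rule mentions gender, yet whichever name happens to be more common in the underlying counts is the one that gets surfaced.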

Examples like this demonstrate how algorithms can unintentionally reflect and amplify common social biases. Other recent investigations suggest that such incidents are not uncommon. In a more serious case, the investigative news organization ProPublica uncovered a correlation between race and criminal recidivism predictions in so-called “risk assessments” — predictive algorithms used by courtrooms to inform terms for bail, sentencing, or parole. The algorithm’s recidivism predictions produced a higher rate of false negatives for white offenders (labeled low risk but later reoffending) and a higher rate of false positives for black offenders (labeled high risk but not reoffending), even though overall error rates for the two groups were roughly the same.
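
The asymmetry is easiest to see with a small worked example. The sketch below uses invented toy labels (not the COMPAS data) to show how two groups can share the same overall error rate while one absorbs mostly false positives and the other mostly false negatives.

```python
# Toy illustration of the error-rate asymmetry ProPublica described: two groups
# with the same overall error rate but different kinds of errors.
# The records below are invented, not the COMPAS dataset.

def rates(records):
    """records: list of (predicted_high_risk, actually_reoffended) booleans."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    negatives = sum(1 for _, actual in records if not actual)  # did not reoffend
    positives = sum(1 for _, actual in records if actual)      # did reoffend
    return {"false_positive_rate": fp / negatives,
            "false_negative_rate": fn / positives}

group_a = [(True, False)] * 4 + [(True, True)] * 4 + [(False, False)] * 6 + [(False, True)] * 1
group_b = [(True, False)] * 1 + [(True, True)] * 6 + [(False, False)] * 4 + [(False, True)] * 4

print(rates(group_a))  # higher false-positive rate (0.4) than false-negative rate (0.2)
print(rates(group_b))  # the reverse, with the same number of total errors (5 of 15)
```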

Read More
Biases in Algorithms - Cornell University Blog

http://www.pewinternet.org/2017/02/08/theme-4-biases-exist-in-algorithmically-organized-systems/

In class we recently discussed how Google’s search algorithm works. From the basic material we covered, the algorithm seems resistant to failure because of its very systematic way of organizing websites. But after considering how it works, is it possible that the algorithm is flawed? More specifically, how might it be flawed from a social perspective?

As it turns out, many algorithms are indeed flawed, including the search algorithm. The reason is that algorithms are ultimately coded by individuals who inherently have biases. And although there continues to be a push to promote people of color in STEM fields, the reality at the moment is that the majority of people in charge of designing algorithms are white males.

Read More
She Giggles, He Gallops - The Pudding

Analyzing gender tropes in film with screen direction from 2,000 scripts.

By Julia Silge

In April 2016, we broke down film dialogue by gender. The essay presented an imbalance in which men delivered more lines than women across 2,000 screenplays. But quantity of lines is only part of the story. What characters do matters, too.

Gender tropes (e.g., women are pretty/men act, men don’t cry) are just as important as dialogue in understanding how men and women are portrayed on-screen. These stereotypes result from many components, including casting, acting, and directing.

Read More
Alexa, Siri, Cortana: Our virtual assistants say a lot about sexism - Science Friction

OK, Google. We need to talk. 

For that matter — Alexa, Siri, Cortana — we should too.

The tech world's growing legion of virtual assistants added another to its ranks last month, with the launch of Google Home in Australia.

And like its predecessors, the device speaks in dulcet tones and with a woman's voice. She sits on your kitchen table — discreet, rotund and white — at your beck and call and ready to respond to your questions.

But what's with all the obsequious, subservient small talk? And why do nearly all digital assistants and chatbots default to being female?

A handmaid's tale

Feminist researcher and digital media scholar Miriam Sweeney, from the University of Alabama, believes the fact that virtual agents are overwhelmingly represented as women is not accidental.

"It definitely corresponds to the kinds of tasks they carry out," she says.

Read More
We tested bots like Siri and Alexa to see who would stand up to sexual harassment - Quartz

Women have been made into servants once again. Except this time, they’re digital.

Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Google Home peddle stereotypes of female subservience—which puts their “progressive” parent companies in a moral predicament.

People often comment on the sexism inherent in these subservient bots’ female voices, but few have considered the real-life implications of the devices’ lackluster responses to sexual harassment. By letting users verbally abuse these assistants without ramifications, their parent companies are allowing certain behavioral stereotypes to be perpetuated. Everyone has an ethical imperative to help prevent abuse, but companies producing digital female servants warrant extra scrutiny, especially if they can unintentionally reinforce their abusers’ actions as normal or acceptable.

Read More
You may be stereotyped if you use these words - Business Insider

Stereotyping happens. A new study helps identify how it happens, and what it gets wrong, by asking participants to make predictions about people based on their tweets.

Researchers at the University of Pennsylvania and other institutions had subjects read sets of 20 tweets and predict the writer's gender, age, political orientation, and education level based on the words used. Subjects' guesses were fairly accurate — they guessed right 76% of the time on gender, 69% on whether the person was older or younger than 24, and 82% on liberal versus conservative. They were right in only 46% of cases, however, when predicting whether the tweet writers had no bachelor’s degree, a bachelor’s degree, or an advanced degree.

Read More
We Recorded VCs' Conversations and Analyzed How Differently They Talk About Female Entrepreneurs - HBR

When venture capitalists (VCs) evaluate investment proposals, the language they use to describe the entrepreneurs who write them plays an important but often hidden role in shaping who is awarded funding and why. But it’s difficult to obtain VCs’ unvarnished comments, given that they are uttered behind closed doors. We were given access to government venture capital decision-making meetings in Sweden and were able to observe the types of language that VCs used over a two-year period. One major thing stuck out: The language used to describe male and female entrepreneurs was radically different. And these differences have very real consequences for those seeking funding — and for society in general.

Read More
AI programs exhibit racial and gender biases, research reveals - Guardian

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say.

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
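
The kind of measurement behind these findings is an association test over word embeddings: compare how close a target word sits to one set of attribute words versus another, using cosine similarity. The sketch below runs that comparison on tiny invented vectors purely for illustration; the published results come from embeddings trained on large web and news corpora.

```python
# Simplified word-association measure: how much more strongly a word associates
# with one attribute set than another, by cosine similarity. The 3-dimensional
# vectors here are invented for illustration only.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    return (np.mean([cosine(word_vec, a) for a in attr_a])
            - np.mean([cosine(word_vec, b) for b in attr_b]))

# Toy vectors (invented): gendered attribute words and two occupation words.
vectors = {
    "she":      np.array([0.9, 0.1, 0.0]),
    "woman":    np.array([0.8, 0.2, 0.1]),
    "he":       np.array([0.1, 0.9, 0.0]),
    "man":      np.array([0.2, 0.8, 0.1]),
    "nurse":    np.array([0.7, 0.2, 0.5]),
    "engineer": np.array([0.2, 0.7, 0.5]),
}

female = [vectors["she"], vectors["woman"]]
male = [vectors["he"], vectors["man"]]

for word in ("nurse", "engineer"):
    print(word, round(association(vectors[word], female, male), 3))
# A positive value means the word sits closer to the "female" attribute words;
# in these toy vectors "nurse" scores positive and "engineer" negative, the
# same direction of skew the research reports in real trained embeddings.
```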

Read More
Better, Less-Stereotyped Word Vectors - ConceptNet Blog

Bias and Disenfranchisement

Conversational interfaces learn from the data they have been given, and all datasets based on human communication encode bias. In 2016, researchers at Boston University and Microsoft discovered what they characterized as “extremely sexist” patterns in “Word2Vec,” a commonly used set of word vectors covering roughly three million words and phrases drawn from Google News text. They found, among other things, that occupations inferred to be “male” included Maestro, Skipper, Protégé and Philosopher, while those inferred to be female included Homemaker, Nurse, Receptionist and Librarian.

This is more than a hypothetical risk for organizations; Word2Vec is used to train search algorithms, recommendation engines, and other common applications related to ad targeting or audience segmentation. Organizations building chatbots based on common datasets must investigate potential bias and design for it upfront to prevent alienating and disenfranchising customers and consumers. The good news is that these and other researchers are working on methods to audit predictive models for bias.
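
Anyone with the pretrained Google News vectors can probe this skew directly. The sketch below assumes the commonly distributed GoogleNews-vectors-negative300.bin.gz file is on disk (adjust the path for your setup) and uses the gensim library to measure whether each occupation word sits closer to "she" or to "he" in the embedding space.

```python
# Probe gendered occupation associations in the pretrained Google News
# word2vec vectors. Assumes the standard binary vector file is available
# locally; requires the gensim library.

from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True)

occupations = ["homemaker", "nurse", "receptionist", "librarian",
               "maestro", "skipper", "philosopher"]

for word in occupations:
    # Positive: the occupation is closer to "she" than to "he" in the
    # embedding space; negative: the reverse.
    lean = kv.similarity(word, "she") - kv.similarity(word, "he")
    print(f"{word:>14}  {lean:+.3f}")
```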

Read More