AI tool quantifies power imbalance between female and male characters in Hollywood movies - Technology Breaking News

At first glance, the movie “Frozen” might seem to have two strong female protagonists — Elsa, the elder princess with unruly powers over snow and ice, and her sister, Anna, who spends much of the film on a quest to save their kingdom.

But the two princesses actually exert very different levels of power and control over their own destinies, according to new research from University of Washington computer scientists.

The team used machine-learning tools to analyze the language in nearly 800 movie scripts, quantifying how much power and agency those scripts give to individual characters. In their study, presented at the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Copenhagen, Denmark, the researchers found subtle but widespread gender bias in how male and female characters are portrayed.
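To make the idea concrete, here is a minimal sketch of lexicon-based agency scoring, in the spirit of the approach described above: each verb is annotated with the level of agency it connotes for its subject, and a character's score is the average over the verbs they perform. The verb scores, event tuples, and function names below are invented for illustration; they are not the researchers' actual lexicon or pipeline.

```python
from collections import defaultdict

# Hypothetical verb -> agency lexicon (+1.0 high agency, -1.0 low agency).
# The real study uses a much larger set of crowd-annotated connotation frames.
AGENCY_LEXICON = {
    "decides": 1.0, "confronts": 1.0, "rescues": 1.0,
    "waits": -1.0, "begs": -1.0, "stumbles": -1.0,
}

def score_characters(events):
    """Average agency score per character.

    events: iterable of (character, verb) pairs, as might be extracted
    from a parsed movie script.
    """
    scores = defaultdict(list)
    for character, verb in events:
        if verb in AGENCY_LEXICON:
            scores[character].append(AGENCY_LEXICON[verb])
    return {c: sum(s) / len(s) for c, s in scores.items()}

# Toy example: one character performs high-agency verbs, the other low-agency.
events = [
    ("Elsa", "decides"), ("Elsa", "confronts"),
    ("Anna", "waits"), ("Anna", "begs"), ("Anna", "stumbles"),
]
print(score_characters(events))  # {'Elsa': 1.0, 'Anna': -1.0}
```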

“‘Frozen’ is an interesting example because Elsa really does make her own decisions and is able to drive her own destiny forward, while Anna consistently fails in trying to rescue her sister and often needs the help of a man,” said lead author and Paul G. Allen School of Computer Science & Engineering doctoral student Maarten Sap, whose team also applied the tool to Wikipedia plot summaries of several classic Disney princess movies.

“Anna is actually portrayed with the same low levels of power and agency as Cinderella, which is a movie that came out more than 60 years ago. That’s a pretty sad finding,” Sap said.

Machines trained on photos learn to be sexist towards women - Wired

Last autumn, University of Virginia computer-science professor Vicente Ordóñez noticed a pattern in some of the guesses made by image-recognition software he was building. “It would see a picture of a kitchen and more often than not associate it with women, not men,” he says.

That got Ordóñez wondering whether he and other researchers were unconsciously injecting biases into their software. So he teamed up with colleagues to test two large collections of labeled photos used to “train” image-recognition software.

Their results are illuminating. Two prominent research image collections—including one supported by Microsoft and Facebook—display a predictable gender bias in their depiction of activities such as cooking and sports. Images of shopping and washing are linked to women, for example, while coaching and shooting are tied to men. Machine-learning software trained on the datasets didn't just mirror those biases; it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
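That amplification effect can be illustrated with a small back-of-the-envelope calculation. The sketch below, using invented counts, measures bias as the fraction of an activity's images labeled with women and compares that fraction in the training labels against the trained model's predictions; a positive difference means the model amplified the dataset's bias.

```python
def gender_bias(woman_count, man_count):
    """Fraction of an activity's instances associated with women."""
    return woman_count / (woman_count + man_count)

# Hypothetical counts for a single activity, e.g. "cooking".
train_bias = gender_bias(woman_count=66, man_count=34)  # bias in training labels
pred_bias = gender_bias(woman_count=84, man_count=16)   # bias in model predictions

print(f"training-set bias:     {train_bias:.2f}")   # 0.66
print(f"model-prediction bias: {pred_bias:.2f}")    # 0.84
print(f"amplification:         {pred_bias - train_bias:+.2f}")  # +0.18: amplified
```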

Mark Yatskar, a researcher at the Allen Institute for Artificial Intelligence, says that phenomenon could also amplify other biases in data, for example related to race. “This could work to not only reinforce existing social biases but actually make them worse,” says Yatskar, who worked with Ordóñez and others on the project while at the University of Washington.
