Posts in Sexism
The world is relying on a flawed psychological test to fight racism - Quartz Media

In 1998, the incoming freshman class at Yale University was shown a psychological test that claimed to reveal and measure unconscious racism. The implications were intensely personal. Even students who insisted they were egalitarian were found to have unconscious prejudices (or “implicit bias” in psychological lingo) that made them behave in small, but accumulatively significant, discriminatory ways. Mahzarin Banaji, one of the psychologists who designed the test and leader of the discussion with Yale’s freshmen, remembers the tumult it caused. “It was mayhem,” she wrote in a recent email to Quartz. “They were confused, they were irritated, they were thoughtful and challenged, and they formed groups to discuss it.”

Finally, psychologists had found a way to crack open people’s unconscious, racist minds. This apparently incredible insight has taken the test in question, the Implicit Association Test (IAT), from Yale’s freshmen to millions of people worldwide. Referencing the role of implicit bias in perpetuating the gender pay gap or racist police shootings is widely considered woke, while IAT-focused diversity training is now a litmus test for whether an organization is progressive.

This acclaimed and hugely influential test, though, has repeatedly fallen short of basic scientific standards.

Full article: https://qz.com/1144504/the-world-is-relying-on-a-flawed-psychological-test-to-fight-racism/

Read More
AI tool quantifies power imbalance between female and male characters in Hollywood movies - Technology Breaking News

At first glance, the movie “Frozen” might seem to have two strong female protagonists — Elsa, the elder princess with unruly powers over snow and ice, and her sister, Anna, who spends much of the film on a quest to save their kingdom.

But the two princesses actually exert very different levels of power and control over their own destinies, according to new research from University of Washington computer scientists.

The team used machine-learning-based tools to analyze the language in nearly 800 movie scripts, quantifying how much power and agency those scripts give to individual characters. In their study, recently presented in Denmark at the 2017 Conference on Empirical Methods in Natural Language Processing, the researchers found subtle but widespread gender bias in the way male and female characters are portrayed.
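The core idea behind such a tool can be sketched in a few lines: score each character by how often the verbs they perform convey agency. The mini-lexicon, character names, and verb lists below are invented for illustration; the researchers' actual system uses a large annotated verb lexicon applied to parsed scripts.

```python
# Toy lexicon-based agency scoring (hypothetical mini-lexicon, not the
# study's data). Each verb is marked 1 if its subject shows agency,
# 0 if not; a character's score is the fraction of agentive verbs
# they perform.
AGENCY = {"decides": 1, "commands": 1, "builds": 1,
          "waits": 0, "needs": 0, "fails": 0}

def agency_score(verbs):
    """Fraction of a character's verbs that convey agency."""
    known = [AGENCY[v] for v in verbs if v in AGENCY]
    return sum(known) / len(known) if known else 0.0

# Invented verb lists for two characters:
elsa = ["decides", "commands", "builds"]
anna = ["waits", "needs", "fails", "decides"]

print(agency_score(elsa))  # 1.0
print(agency_score(anna))  # 0.25
```

Aggregating such scores by character gender across hundreds of scripts is what lets the researchers quantify the bias statistically.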

“‘Frozen’ is an interesting example because Elsa really does make her own decisions and is able to drive her own destiny forward, while Anna consistently fails in trying to rescue her sister and often needs the help of a man,” said lead author and Paul G. Allen School of Computer Science & Engineering doctoral student Maarten Sap, whose team also applied the tool to Wikipedia plot summaries of several classic Disney princess movies.

“Anna is actually portrayed with the same low levels of power and agency as Cinderella, which is a movie that came out more than 60 years ago. That’s a pretty sad finding,” Sap said.

Read More
Hatier publishes the first school textbook in inclusive writing - HuffPost

FEMINISM - "Thanks to the agriculteur.rice.s (farmers), artisan.e.s (artisans), and commerçant.e.s (shopkeepers), Gaul was a rich country." It is with sentences like this that CE2 (third-grade) children will learn history from a Hatier textbook for the 2017-2018 school year, as Le Figaro noted on Friday, September 22.

The publishing house, which notably publishes the Bescherelle, released this first textbook in "inclusive writing" last March, a style that feminizes words by inserting the feminine ending between periods.

Titled "Questionner le Monde" ("Questioning the World"), the book was spotted by a physics and chemistry teacher, and an image of it was shared in a teachers' group on Facebook, as Le Figaro explains.

Read More
Artificial Intelligence: Making AI in our Images - Savage Minds

Savage Minds welcomes guest blogger Sally Applin

Hello! I’m Sally Applin. I am a technology anthropologist who examines automation, algorithms and Artificial Intelligence (AI) in the context of preserving human agency. My dissertation focused on small independent fringe new technology makers in Silicon Valley, what they are making, and most critically, how the adoption of the outcomes of their efforts impacts society and culture, locally and globally. I’m currently spending the summer in a corporate AI Research Group where I contribute to anthropological research on AI. I’m thrilled to blog for the renowned Savage Minds this month and hope many of you find value in my contributions.

Read More
She Giggles, He Gallops - The Pudding

Analyzing gender tropes in film with screen direction from 2,000 scripts.

By Julia Silge

In April 2016, we broke down film dialogue by gender. The essay presented an imbalance in which men delivered more lines than women across 2,000 screenplays. But quantity of lines is only part of the story. What characters do matters, too.

Gender tropes (e.g., women are pretty/men act, men don’t cry) are just as important as dialogue in understanding how men and women are portrayed on-screen. These stereotypes result from many components, including casting, acting, directing, etc.

Read More
Words ascribed to female economists: 'Hotter,' 'feminazi.' Men?: 'Goals,' 'Nobel.' - The Washington Post

In 1970, the economics department at the University of California at Berkeley hired three newly minted economics PhDs from the Massachusetts Institute of Technology. Two - both men - were hired as assistant professors. But a woman, Myra Strober, was hired as a lecturer, a position of inferior pay and status and no possibility of tenure. When she asked the department chairman why she was denied an assistant professorship, he put her off with excuses. She kept pressing him until he gave a frank answer: She had two young children; the department couldn't possibly put her on the tenure track.

So Strober took another offer. In 1972, she became the first female economist at Stanford's Graduate School of Business. "They didn't know what to make of me," she said. The faculty retreat, which had been held every year at a men's club, had to be moved. There were jokes about putting a bag over her head so they could keep going to the club.

"It was like trying to run a race with one of your legs tied behind you," Strober said of the culture.

Read More
Read YouTube CEO Susan Wojcicki’s Response to the Controversial Google Anti-Diversity Memo - Fortune

Yesterday, after reading the news, my daughter asked me a question. “Mom, is it true that there are biological reasons why there are fewer women in tech and leadership?”

That question, whether it’s been asked outright, whispered quietly, or simply lingered in the back of someone’s mind, has weighed heavily on me throughout my career in technology. Though I’ve been lucky to work at a company where I’ve received a lot of support—from leaders like Larry Page, Sergey Brin, Eric Schmidt, and Jonathan Rosenberg to mentors like Bill Campbell—my experience in the tech industry has shown me just how pervasive that question is.

Read More
Machines trained on photos learn to be sexist towards women - Wired

Last Autumn, University of Virginia computer-science professor Vicente Ordóñez noticed a pattern in some of the guesses made by image-recognition software he was building. “It would see a picture of a kitchen and more often than not associate it with women, not men,” he says.

That got Ordóñez wondering whether he and other researchers were unconsciously injecting biases into their software. So he teamed up with colleagues to test two large collections of labeled photos used to “train” image-recognition software.

Their results are illuminating. Two prominent research-image collections—including one supported by Microsoft and Facebook—display a predictable gender bias in their depiction of activities such as cooking and sports. Images of shopping and washing are linked to women, for example, while coaching and shooting are tied to men. Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
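The amplification effect can be illustrated with a toy simulation (invented numbers, not the study's method): when a model has only weak visual evidence, it falls back on the gender prior it learned from the skewed training labels, so the majority label is predicted even more often than it appears in the data.

```python
import random

random.seed(0)

# Toy training set: 67% of "cooking" images are labeled "woman".
train = ["woman"] * 67 + ["man"] * 33
prior = train.count("woman") / len(train)  # 0.67

def predict(has_clear_cue, cue_label):
    # With a clear visual cue the model uses it; otherwise it falls
    # back on the learned prior, i.e. always the majority label.
    if has_clear_cue:
        return cue_label
    return "woman" if prior > 0.5 else "man"

# Test images drawn from the same 67/33 split, but only half of them
# carry a clear cue the model can read.
test = ["woman" if random.random() < 0.67 else "man" for _ in range(1000)]
preds = [predict(random.random() < 0.5, label) for label in test]

print(prior)                              # training skew: 0.67
print(preds.count("woman") / len(preds))  # predicted skew exceeds 0.67
```

Every ambiguous image resolved to the majority label pushes the predicted distribution past the training distribution, which is the amplification the researchers observed.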

Mark Yatskar, a researcher at the Allen Institute for Artificial Intelligence, says that phenomenon could also amplify other biases in data, for example related to race. “This could work to not only reinforce existing social biases but actually make them worse,” says Yatskar, who worked with Ordóñez and others on the project while at the University of Washington.

Read More
Alexa, Siri, Cortana: Our virtual assistants say a lot about sexism - Science Friction

OK, Google. We need to talk. 

For that matter — Alexa, Siri, Cortana — we should too.

The tech world's growing legion of virtual assistants added another to its ranks last month, with the launch of Google Home in Australia.

And like its predecessors, the device speaks in dulcet tones and with a woman's voice. She sits on your kitchen table — discreet, rotund and white — at your beck and call and ready to respond to your questions.

But what's with all the obsequious, subservient small talk? And why do nearly all digital assistants and chatbots default to being female?

A handmaid's tale

Feminist researcher and digital media scholar Miriam Sweeney, from the University of Alabama, believes the fact that virtual agents are overwhelmingly represented as women is not accidental.

"It definitely corresponds to the kinds of tasks they carry out," she says.

Read More
We tested bots like Siri and Alexa to see who would stand up to sexual harassment - Quartz

Women have been made into servants once again. Except this time, they’re digital.

Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Google Home peddle stereotypes of female subservience—which puts their “progressive” parent companies in a moral predicament.

People often comment on the sexism inherent in these subservient bots’ female voices, but few have considered the real-life implications of the devices’ lackluster responses to sexual harassment. By letting users verbally abuse these assistants without ramifications, their parent companies are allowing certain behavioral stereotypes to be perpetuated. Everyone has an ethical imperative to help prevent abuse, but companies producing digital female servants warrant extra scrutiny, especially if they can unintentionally reinforce their abusers’ actions as normal or acceptable.

Read More
The Global Search for Education: How Are We Doing On The Gender Agenda? – Millennials Weigh In

Posted By C. M. Rubin on Jul 27, 2017

In an interview with CMRubinWorld, Dr. Linda Scott, Emeritus DP World Chair for Entrepreneurship and Innovation at the Said Business School, University of Oxford, reminds us that the gender gap is everywhere, “real and measurable” and not a “figment of some feminist’s imagination.” The global research confirms that gender inequality “retards economic growth, perpetuates poverty,” and “is bad for families.”

Read More
AI robots learning racism, sexism, and other prejudices from humans, study finds - The Independent

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how closely different terms were to each other in the text – the same way that automatic translators use “machine learning” to establish what language means.
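The "closeness" such studies measure is typically cosine similarity between word vectors. Here is a minimal sketch with invented 3-dimensional vectors (real embeddings have hundreds of dimensions learned from billions of words; the words and numbers below are purely illustrative):

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 means same direction, 0 means unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical toy vectors, not real embedding values.
vectors = {
    "nurse":    [0.9, 0.1, 0.2],
    "engineer": [0.1, 0.9, 0.3],
    "she":      [0.8, 0.2, 0.1],
    "he":       [0.2, 0.8, 0.2],
}

def association(word):
    # Positive: the word sits closer to "she"; negative: closer to "he".
    return cosine(vectors[word], vectors["she"]) - cosine(vectors[word], vectors["he"])

print(association("nurse"))     # positive: closer to "she"
print(association("engineer"))  # negative: closer to "he"
```

When the vectors come from text written by humans, these similarity gaps mirror the prejudices in that text, which is how the study detects them.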

Some of the results were stunning.

Read More