Posts in Gender Bias
Delivering through diversity - McKinsey

Our latest research reinforces the link between diversity and company financial performance—and suggests how organizations can craft better inclusion strategies for a competitive edge.

Awareness of the business case for inclusion and diversity is on the rise. While social justice typically is the initial impetus behind these efforts, companies have increasingly begun to regard inclusion and diversity as a source of competitive advantage, and specifically as a key enabler of growth. Yet progress on diversification initiatives has been slow. And companies are still uncertain about how they can most effectively use diversity and inclusion to support their growth and value-creation goals.

Read More
Studies show facial recognition software almost works perfectly – if you’re a white male - Global News

Recent studies indicate that the face recognition technology used in consumer devices can discriminate based on gender and race.

A new study out of the MIT Media Lab indicates that when certain face recognition products are shown photos of a white man, the software can correctly guess the gender of the person 99 per cent of the time. However, the study found that for subjects with darker skin, the software made more than 35 per cent more mistakes.

As part of the Gender Shades project, 1,270 photos of individuals from three African countries and three European countries were chosen and evaluated with artificial intelligence (AI) products from IBM, Microsoft and Face++. The photos were further classified by gender and by skin colour before being tested on these products.

The study notes that while each company appears to have a relatively high overall rate of accuracy, between 87 and 94 per cent, there were noticeable differences in the misidentified images across different groups.
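The disparity the study describes is easy to see once accuracy is broken out per demographic group rather than reported as a single overall number. The sketch below is purely illustrative (the group names and toy records are invented, not the study's data) and shows how an aggregate accuracy figure can hide a large per-group gap:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    `records` is a list of (group, predicted_gender, true_gender) tuples.
    Returns a dict mapping each group to its accuracy (0.0 to 1.0).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data illustrating the kind of disparity the study reports:
# perfect accuracy on one group, frequent errors on another.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),  # misclassification
    ("darker-skinned female", "female", "female"),
]
print(accuracy_by_group(records))
```

Averaged together, these toy records give 75 per cent accuracy, which looks respectable until the per-group breakdown reveals the errors are concentrated in one group.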

Full article:

https://globalnews.ca/news/4019123/facial-recognition-software-work-white-male-report/

Read More
DeepMind's Mustafa Suleyman: In 2018, AI will gain a moral compass - Wired

Humanity faces a wide range of challenges that are characterised by extreme complexity, from climate change to feeding and providing healthcare for an ever-expanding global population. Left unchecked, these phenomena have the potential to cause devastation on a previously untold scale. Fortunately, developments in AI could play an innovative role in helping us address these problems.

At the same time, the successful integration of AI technologies into our social and economic world creates its own challenges. They could either help overcome economic inequality or they could worsen it if the benefits are not distributed widely. They could shine a light on damaging human biases and help society address them, or entrench patterns of discrimination and perpetuate them. Getting things right requires serious research into the social consequences of AI and the creation of partnerships to ensure it works for the public good.

Read More
AI reveals, injects gender bias in the workplace - BenefitsPro

While lots of people worry about artificial intelligence becoming aware of itself, then running amok and taking over the world, others are using it to uncover gender bias in the workplace. And that’s more than a little ironic, since AI actually injects not just gender, but racial bias into its data—and that has real-world consequences.

A Fox News report highlights the research with AI that reveals workplace bias, uncovered by research from Boston-based Palatine Analytics. The firm, which studies workplace issues, “analyzed a trove of data—including employee feedback and surveys, gender and salary information and one-on-one check-ins between managers and employees—using the power of artificial intelligence.”

Read More
The world is relying on a flawed psychological test to fight racism - Quartz Media

In 1998, the incoming freshman class at Yale University was shown a psychological test that claimed to reveal and measure unconscious racism. The implications were intensely personal. Even students who insisted they were egalitarian were found to have unconscious prejudices (or “implicit bias” in psychological lingo) that made them behave in small, but accumulatively significant, discriminatory ways. Mahzarin Banaji, one of the psychologists who designed the test and leader of the discussion with Yale’s freshmen, remembers the tumult it caused. “It was mayhem,” she wrote in a recent email to Quartz. “They were confused, they were irritated, they were thoughtful and challenged, and they formed groups to discuss it.”

Finally, psychologists had found a way to crack open people’s unconscious, racist minds. This apparently incredible insight has taken the test in question, the Implicit Association Test (IAT), from Yale’s freshmen to millions of people worldwide. Referencing the role of implicit bias in perpetuating the gender pay gap or racist police shootings is widely considered woke, while IAT-focused diversity training is now a litmus test for whether an organization is progressive.

This acclaimed and hugely influential test, though, has repeatedly fallen short of basic scientific standards.

Full article: https://qz.com/1144504/the-world-is-relying-on-a-flawed-psychological-test-to-fight-racism/

 

Read More
Unconscious Bias Training Isn't the Silver Bullet For a Biased Hiring Process - Elevate Blog

The latest fashion trend with most of my clients is Unconscious Bias Training. While those trainings are interesting and engaging, and may raise awareness about various biases, there is little evidence of their effectiveness in eliminating those biases. This is well explained in Diversity and Inclusion specialist Lisa Kepinski's article, Unconscious Bias Awareness Training is Hot, But the Outcome is Not: So What to Do About It?

Lisa outlines two problems with these trainings:

  1. The "So What?" effect: having done the training, leaders and HR professionals alike remain at a loss for the next steps that could deliver sustainable cultural change, and
  2. The training may backfire by encouraging more biased thinking and behaviors (by conditioning the stereotypes). Moreover, "by hearing that others are biased and it's ‘natural’ to hold stereotypes, we feel less motivated to change biases and stereotypes are strengthened (‘follow the herd’ bias)."
Read More
Microsoft Researcher Details The Real-World Dangers Of Algorithm Bias

However quickly artificial intelligence evolves, however steadfastly it becomes embedded in our lives -- in health, law enforcement, sex, etc. -- it can't outpace the biases of its creators, humans. Microsoft researcher Kate Crawford delivered an incredible keynote speech, titled "The Trouble with Bias", at Spain's Neural Information Processing Systems conference on Tuesday.

Read More
Garbage in. Garbage Out. - NEVERTHELESS

One afternoon in Florida in 2014, 18-year-old Brisha Borden was running to pick up her god-sister from school when she spotted an unlocked kid's bicycle and a silver scooter. Brisha and a friend grabbed the bike and scooter and tried to ride them down the street. Just as the 18-year-old girls were realizing they were too big for the toys, a woman came running after them saying, "That's my kid's stuff." They immediately dropped the stuff and walked away. But it was too late — a neighbor who witnessed the event had already called the police. Brisha and her friend were arrested and charged with burglary and petty theft for the items, valued at a total of $80.

The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store. He had already been convicted of several armed robbery charges and had served 5 years in prison. Borden, the 18-year-old, had a record too — but for juvenile misdemeanors.

 

For the full transcript and podcast: 

https://medium.com/nevertheless-podcast/transcript-garbage-in-garbage-out-78b74b08f16e

Read More
A Study Used Sensors to Show That Men and Women Are Treated Differently at Work - HBR

Gender equality remains frustratingly elusive. Women are underrepresented in the C-suite, receive lower salaries, and are less likely to receive a critical first promotion to manager than men. Numerous causes have been suggested, but one argument that persists points to differences in men's and women's behavior.

Which raises the question: Do women and men act all that differently? We realized that there’s little to no concrete data on women’s behavior in the office. Previous work has relied on surveys and self-reported assessments — methods of data collecting that are prone to bias. Fortunately, the proliferation of digital communication data and the advancement of sensor technology have enabled us to more precisely measure workplace behavior.

We decided to investigate whether gender differences in behavior drive gender differences in outcomes at one of our client organizations, a large multinational firm, where women were underrepresented in upper management. In this company, women made up roughly 35%–40% of the entry-level workforce but a smaller percentage at each subsequent level. Women made up only 20% of people at the two highest seniority levels at this organization.

Read More
Are algorithms making us W.E.I.R.D.? - alphr

Western, educated, industrialised, rich and democratic (WEIRD) norms are distorting the cultural perspective of new technologies

From what we see in our internet search results to deciding how we manage our investments, travel routes and love lives, algorithms have become a ubiquitous part of our society. Algorithms are not just an online phenomenon: they are having an ever-increasing impact on the real world. Children are being born to couples who were matched by dating site algorithms, whilst the navigation systems for driverless cars are poised to transform our roads.

Read More
Biases in Algorithms - Cornell University Blog

http://www.pewinternet.org/2017/02/08/theme-4-biases-exist-in-algorithmically-organized-systems/

In class we have recently discussed how the search algorithm for Google works. From the very basic material that we learned about the algorithm, it seems like the algorithm is resistant to failure due to its very systematic way of organizing websites. However, after considering how it works, is it possible that the algorithm is flawed? More specifically, how so from a social perspective?

Well, as it turns out, many algorithms are indeed flawed, including the search algorithm. The reason is that algorithms are ultimately coded by individuals who inherently have biases. And although there continues to be a push for the promotion of people of color in STEM fields, the reality at the moment is that the majority of people in charge of designing algorithms are White males.

Read More
Start-Ups Use Technology to Redesign the Hiring Process - NY Times

Iris Bohnet, a behavioral economist and professor at the Harvard Kennedy School, spoke to the founders of two behavioral design start-ups, Kate Glazebrook of Applied and Frida Polli of Pymetrics, for the latest on the algorithmic design revolution that is transforming hiring practices.

Read More
Hatier publishes the first school textbook in inclusive writing - HuffPost

FEMINISM - "Grâce aux agriculteur.rice.s, aux artisan.e.s et aux commerçant.e.s, la Gaule était un pays riche." ("Thanks to the farmers, artisans and merchants, Gaul was a rich country.") It is with sentences like this that CE2 (third-grade) children will learn history from a Hatier textbook for the 2017-2018 school year, as Le Figaro noted on Friday 22 September.

The publishing house, which notably publishes the Bescherelle, released this first textbook in "inclusive writing" last March, a style of writing that feminises words by placing the feminine ending between periods.

Entitled "Questionner le Monde", the book was spotted by a physics and chemistry teacher, and an image of it was shared in a teachers' group on Facebook, as Le Figaro explains.

Read More
Artificial Intelligence: Making AI in our Images - Savage Minds

Savage Minds welcomes guest blogger Sally Applin

Hello! I’m Sally Applin. I am a technology anthropologist who examines automation, algorithms and Artificial Intelligence (AI) in the context of preserving human agency. My dissertation focused on small independent fringe new technology makers in Silicon Valley, what they are making, and most critically, how the adoption of the outcomes of their efforts impact society and culture locally, and/or globally. I’m currently spending the summer in a corporate AI Research Group where I contribute to anthropological research on AI. I’m thrilled to blog for the renowned Savage Minds this month and hope many of you find value in my contributions.

Read More
Debiasing AI Systems- Luminoso Blog

One of the most-discussed topics in AI recently has been the growing realization that AI-based systems absorb human biases and prejudices from training data. While this has only recently become a hot news topic, AI organizations, including Luminoso, have been focused on this issue for a while. Denise Christie sat down with Luminoso’s Chief Science Officer, Rob Speer, to talk about how AI becomes biased in the first place, the impact such bias can have, and - more importantly - how to mitigate it.

Read More
She Giggles, He Gallops - The Pudding

Analyzing gender tropes in film with screen direction from 2,000 scripts.

By Julia Silge

In April 2016, we broke down film dialogue by gender. The essay presented an imbalance in which men delivered more lines than women across 2,000 screenplays. But quantity of lines is only part of the story. What characters do matters, too.

Gender tropes (e.g., women are pretty/men act, men don't cry) are just as important as dialogue in understanding how men and women are portrayed on-screen. These stereotypes result from many components, including casting, acting, directing, etc.

Read More
Read YouTube CEO Susan Wojcicki’s Response to the Controversial Google Anti-Diversity Memo - Fortune

Yesterday, after reading the news, my daughter asked me a question. “Mom, is it true that there are biological reasons why there are fewer women in tech and leadership?”

That question, whether it’s been asked outright, whispered quietly, or simply lingered in the back of someone’s mind, has weighed heavily on me throughout my career in technology. Though I’ve been lucky to work at a company where I’ve received a lot of support—from leaders like Larry Page, Sergey Brin, Eric Schmidt, and Jonathan Rosenberg to mentors like Bill Campbell—my experience in the tech industry has shown me just how pervasive that question is.

Read More
AI May Soon Replace Even the Most Elite Consultants - HBR

Amazon's Alexa just got a new job. In addition to her other 15,000 skills, like playing music and telling knock-knock jokes, she can now also answer economic questions for clients of the Swiss global financial services company UBS Group AG.

According to the Wall Street Journal (WSJ), a new partnership between UBS Wealth Management and Amazon allows some of UBS's European wealth-management clients to ask Alexa certain financial and economic questions. Alexa will then answer their queries with the information provided by UBS's chief investment office, without their even having to pick up the phone or visit a website. And this is likely just Alexa's first step into offering business services. Soon she will probably be booking appointments, analyzing markets, maybe even buying and selling stocks. While the financial services industry has already begun the shift from active management to passive management, artificial intelligence will move the market even further, to management by smart machines, as in the case of BlackRock, which is rolling computer-driven algorithms and models into more traditional actively managed funds.

Read More
Machines trained on photos learn to be sexist towards women - Wired

Last Autumn, University of Virginia computer-science professor Vicente Ordóñez noticed a pattern in some of the guesses made by image-recognition software he was building. “It would see a picture of a kitchen and more often than not associate it with women, not men,” he says.

That got Ordóñez wondering whether he and other researchers were unconsciously injecting biases into their software. So he teamed up with colleagues to test two large collections of labeled photos used to “train” image-recognition software.

Their results are illuminating. Two prominent research-image collections—including one supported by Microsoft and Facebook—display a predictable gender bias in their depiction of activities such as cooking and sports. Images of shopping and washing are linked to women, for example, while coaching and shooting are tied to men. Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
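The amplification effect described above can be made concrete with a simple comparison: measure how often an activity is labeled with each gender in the training set, then measure the same ratio in the trained model's predictions. The sketch below is a simplified illustration of that idea, with invented counts (not the researchers' actual figures or code):

```python
def gender_ratio(pairs, activity):
    """Fraction of (activity, gender) pairs for `activity` labeled 'woman'."""
    labels = [gender for act, gender in pairs if act == activity]
    return labels.count("woman") / len(labels)

# Hypothetical counts: suppose the training set links "cooking" to women
# 66% of the time, while the trained model's predictions push that to 84%.
train_pairs = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34
model_pairs = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

train_bias = gender_ratio(train_pairs, "cooking")
model_bias = gender_ratio(model_pairs, "cooking")
amplification = model_bias - train_bias  # positive => the model amplified the bias
print(f"dataset bias: {train_bias:.2f}, model bias: {model_bias:.2f}, "
      f"amplification: {amplification:+.2f}")
```

A positive gap between the model's ratio and the dataset's ratio is exactly the "didn't just mirror those biases, it amplified them" pattern the article describes.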

Mark Yatskar, a researcher at the Allen Institute for Artificial Intelligence, says that phenomenon could also amplify other biases in data, for example related to race. “This could work to not only reinforce existing social biases but actually make them worse,” says Yatskar, who worked with Ordóñez and others on the project while at the University of Washington.

Read More
Alexa, Siri, Cortana: Our virtual assistants say a lot about sexism - Science Friction

OK, Google. We need to talk. 

For that matter — Alexa, Siri, Cortana — we should too.

The tech world's growing legion of virtual assistants added another to its ranks last month, with the launch of Google Home in Australia.

And like its predecessors, the device speaks in dulcet tones and with a woman's voice. She sits on your kitchen table — discreet, rotund and white — at your beck and call and ready to respond to your questions.

But what's with all the obsequious, subservient small talk? And why do nearly all digital assistants and chatbots default to being female?

A handmaid's tale

Feminist researcher and digital media scholar Miriam Sweeney, from the University of Alabama, believes the fact that virtual agents are overwhelmingly represented as women is not accidental.

"It definitely corresponds to the kinds of tasks they carry out," she says.

Read More