GENDER BIAS

We share these concerns and see both a challenge and opportunity in the growing reliance on algorithmic decision making. The challenge is that this approach can embed and perpetuate unfair, harmful, and unwanted biases.
— Andrew McAfee and Erik Brynjolfsson, MIT, Machine, Platform, Crowd: Harnessing Our Digital Future
AI is just an extension of our existing culture.
— Joanna Bryson, computer scientist at the University of Bath and co-author of Semantics derived automatically from language corpora contain human-like biases

Gendered Content

AI will be used to create, curate, personalise, or translate stories, fiction, movie scripts, textbooks, blogs, social posts, knowledge hubs, and adverts, using algorithms and data models rooted in gendered representations of the world.
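The paper quoted above measured exactly this: word embeddings trained on ordinary web text carry human-like gender associations, and content generated or translated with them inherits those associations. A minimal probe in that spirit, assuming the gensim library and its downloadable GloVe vectors (both illustrative choices, not anything the text prescribes):

```python
# Illustrative probe of gender associations in pretrained word embeddings,
# in the spirit of Caliskan, Bryson & Narayanan (2017). Assumes gensim is
# installed; the vectors are fetched on first use.
import gensim.downloader as api

# 50-dimensional GloVe vectors trained on Wikipedia and Gigaword text.
vectors = api.load("glove-wiki-gigaword-50")

# Occupation words sit measurably closer to one gendered pronoun than the other.
for word in ["nurse", "engineer", "homemaker", "programmer"]:
    she = vectors.similarity(word, "she")
    he = vectors.similarity(word, "he")
    print(f"{word:>12}  she={she:.3f}  he={he:.3f}")

# The classic analogy query: which words relate to "she" as "doctor" does to "he"?
print(vectors.most_similar(positive=["doctor", "she"], negative=["he"], topn=3))
```

A translation or text-generation model built on such vectors has no way to distinguish these learned associations from legitimate meaning.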

Gendered Assistants

AI increasingly powers virtual assistants that face consumers, customers, and workers. Many bots, virtual assistants, and domestic robots have an implied feminine gender, perpetuating gender stereotypes. Most virtual assistants are developed and trained by white men.

We want our technology to help us, but we want to be the bosses of it, so we are more likely to opt for a female interface.
— Clifford Nass, Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship (2005)

Gendered Decisions

AI-powered applications will execute decisions as well as propose them. Systemic bias and stereotypes risk being inherited from data sets, embedded deep in code, and reinforced by learning loops, perpetuating or, worse, amplifying gender bias; a minimal sketch of this mechanism follows the list below. Applications at risk:

- recommending a career or an education,
- selecting or hiring candidates or contributors,
- managing or predicting performance,
- creating organisational design,
- recommending a provider,
- personalising interactions,
- funding or conducting scientific research,
- crafting communications,
- creating stories, movies, art...
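To make that inheritance concrete, here is a minimal sketch with entirely synthetic, hypothetical data: a hiring model trained on historically skewed decisions reproduces the skew for identically qualified candidates (scikit-learn is an assumed, interchangeable choice):

```python
# Minimal sketch: a model trained on historically biased hiring decisions
# reproduces the bias. All data is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Qualification is identically distributed across genders and independent of gender.
qualification = rng.normal(0.0, 1.0, n)
gender = rng.integers(0, 2, n)  # 0 or 1

# Historical labels: past decisions favoured gender == 1 at equal qualification.
hired = qualification + 0.8 * gender + rng.normal(0.0, 0.5, n) > 0.9

X = np.column_stack([qualification, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical qualifications, differing only in gender.
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The model assigns a markedly higher hiring probability to gender == 1,
# having learned the historical preference as if it were a real signal.
```

If the model's own outputs were later fed back as fresh training labels, each retraining pass would widen the gap: the "learning loop" described above.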

Machine-learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms. Rather, as long as they are trained using data from society, and as long as society exhibits biases, these methods will likely reproduce these biases.
— Hanna Wallach, Microsoft Research.