AI programs exhibit racial and gender biases, research reveals - Guardian

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say.

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
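The article does not spell out how such biases are measured, but a common way to see them is to compare cosine similarities between word vectors: words that co-occur in similar contexts end up close together in the embedding space, so prejudiced usage patterns show up as measurable associations. Below is a minimal sketch of that idea, assuming the pretrained Google News word2vec vectors and the gensim library are available locally; the file name, word lists, and scoring function are illustrative assumptions, not details taken from the research the article reports on.

```python
# Minimal sketch: probing a word-embedding space for learned associations.
# Assumes the pretrained Google News word2vec vectors have been downloaded;
# the attribute and target word lists below are purely illustrative.
import numpy as np
from gensim.models import KeyedVectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

pleasant = ["love", "peace", "wonderful", "friend"]
unpleasant = ["hatred", "war", "terrible", "enemy"]

def association(word):
    # Mean similarity to "pleasant" words minus mean similarity to
    # "unpleasant" words: a positive score means the word sits closer
    # to the pleasant terms in the embedding space.
    v = vectors[word]
    return (np.mean([cosine(v, vectors[w]) for w in pleasant])
            - np.mean([cosine(v, vectors[w]) for w in unpleasant]))

for name in ["Emily", "Ebony"]:   # illustrative target words only
    print(name, round(association(name), 3))
```

Differences in such scores between groups of words (for example, different sets of first names) are one way researchers quantify the kinds of bias described above.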

Read More
Better, Less-Stereotyped Word Vectors - ConceptNet Blog

Bias and Disenfranchisement

Conversational interfaces learn from the data they have been given, and all datasets based on human communication encode bias. In 2016, researchers at Boston University and Microsoft described what they characterized as "extremely sexist" patterns in Word2Vec, a widely used set of word embeddings trained on Google News text with a vocabulary of roughly three million words and phrases. They found, among other things, that occupations inferred to be "male" included Maestro, Skipper, Protégé and Philosopher, while those inferred to be "female" included Homemaker, Nurse, Receptionist and Librarian.

This is more than a hypothetical risk for organizations: Word2Vec is used to train search algorithms, recommendation engines, and other common applications related to ad targeting and audience segmentation. Organizations building chatbots on common datasets must investigate potential bias and design for it up front to avoid alienating and disenfranchising customers and consumers. The good news is that these and other researchers are working on methods to audit predictive models for bias.
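For a concrete picture of how an occupation word can be "inferred" to be male or female, the sketch below projects occupation vectors onto a crude he-minus-she direction in the embedding space. This is a simplified illustration of the kind of probe such researchers describe, not their exact method; the file name and word list are assumptions made for the example.

```python
# Sketch: flagging "he"- vs "she"-leaning occupation words in word2vec-style
# embeddings by projecting onto a simple gender direction. Illustrative only.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Crude gender direction: the difference between the "he" and "she" vectors.
gender_direction = vectors["he"] - vectors["she"]
gender_direction /= np.linalg.norm(gender_direction)

occupations = ["maestro", "skipper", "philosopher",
               "homemaker", "nurse", "receptionist", "librarian"]

for job in occupations:
    # Positive projection leans toward "he", negative toward "she".
    score = float(np.dot(vectors[job], gender_direction)
                  / np.linalg.norm(vectors[job]))
    print(f"{job:>12}: {score:+.3f}")
```

Auditing approaches of this sort are also the starting point for the debiasing methods the researchers mention: once a bias direction can be measured, embeddings can be adjusted or flagged before they are used in downstream systems.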

Read More