Artificial Intelligence—With Very Real Biases

According to AI Now co-founder Kate Crawford, digital brains can be just as error-prone and biased as ours.

What do you imagine when someone mentions artificial intelligence? Perhaps it’s something drawn from science-fiction films: HAL’s glowing eye, a shape-shifting Terminator or the sound of Samantha’s all-knowing voice in the movie “Her.”

As someone who researches the social implications of AI, I tend to think of something far more banal: a municipal water system, part of the substrate of our everyday lives. We expect these systems to work—to quench our thirst, water our plants and bathe our children. And we assume that the water flowing into our homes and offices is safe. Only when disaster strikes—as it did in Flint, Mich.—do we realize the critical importance of safe and reliable infrastructure.

Artificial intelligence is quickly becoming part of the information infrastructure we rely on every day. Early-stage AI technologies are filtering into everything from driving directions to job and loan applications. But unlike our water systems, AI comes with no established methods for testing its safety, fairness or effectiveness. Error-prone or biased artificial-intelligence systems have the potential to taint our social ecosystem in ways that are initially hard to detect, harmful in the long term and expensive—or even impossible—to reverse. And unlike public infrastructure, AI systems are largely developed by private companies and governed by proprietary, black-box algorithms.
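
To make “testing for fairness” concrete, here is a minimal sketch (my own illustration, not anything from the article or any regulator’s actual procedure) of one widely cited heuristic from U.S. employment law, the “four-fifths rule,” which compares a system’s selection rates across demographic groups. The data and group labels below are hypothetical:

```python
# Sketch of a disparate-impact audit on a model's hiring decisions.
# Hypothetical records: (group_label, decision), decision = 1 if the
# model recommends hiring. Names and data are illustrative only.
from collections import defaultdict

def selection_rates(records):
    """Fraction of positive decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def adverse_impact_ratio(records):
    """Lowest group selection rate divided by the highest.
    Values below 0.8 fail the 'four-fifths rule' heuristic."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

ratio = adverse_impact_ratio(decisions)
print(f"selection rates: {selection_rates(decisions)}")
print(f"adverse-impact ratio: {ratio:.2f} "
      f"({'fails' if ratio < 0.8 else 'passes'} the four-fifths rule)")
```

A check like this only catches disparities an auditor thinks to measure, and it requires access to decisions and group labels in the first place—access that proprietary, black-box systems rarely grant.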

A good example is today’s workplace, where hundreds of new AI technologies are already influencing hiring processes, often without proper testing or notice to candidates. New AI recruitment companies offer to analyze video interviews of job candidates so that employers can “compare” an applicant’s facial movements, vocabulary and body language with the expressions of their best employees. But with this technology comes the risk of invisibly embedding bias into the hiring system by choosing new hires simply because they mirror the old ones. What if Uber, with its history of poorly behaved executives, used a system like this? And attempting to replicate the perfect employee is an outdated model of management science: Recent studies have shown that monocultures are bad for business and that diverse workplaces outperform more homogeneous ones.
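
A minimal sketch (my own construction, not any vendor’s actual method) shows the mechanics of how this goes wrong: if candidates are ranked by similarity to a template averaged from current top performers, whoever most resembles the incumbents wins, regardless of ability. All feature values here are hypothetical stand-ins for measurements like vocabulary or facial movement:

```python
# Illustrative similarity-based screening: rank candidates by how closely
# their feature vector matches the average of existing "best" employees.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def incumbent_template(top_employees):
    """Average the feature vectors of current top performers."""
    n = len(top_employees)
    return [sum(col) / n for col in zip(*top_employees)]

# Hypothetical, homogeneous current top performers.
top_employees = [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9], [0.85, 0.15, 0.85]]
template = incumbent_template(top_employees)

candidates = {
    "mirrors incumbents": [0.88, 0.12, 0.84],
    "equally able, different style": [0.2, 0.9, 0.3],
}
for name, features in candidates.items():
    print(f"{name}: score {cosine_similarity(features, template):.2f}")
```

Nothing in the score measures job performance; it measures resemblance to the people already there, which is exactly how yesterday’s hiring patterns get carried into tomorrow’s.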

New systems are also being advertised that use AI to analyze young job applicants’ social media for signs of “excessive drinking” that could affect workplace performance. This is unscientific correlation-chasing: it stigmatizes particular types of self-expression without any evidence that such screening detects real problems. Even worse, it normalizes the surveillance of job applicants without their knowledge before they get in the door.
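
To see how crude this kind of correlation-chasing can be, here is a minimal sketch (entirely my own illustration, with hypothetical keywords and posts, not any actual product’s logic) of a naive screener that flags any mention of alcohol, with no notion of context or frequency:

```python
# Sketch of a naive social-media screener of the kind criticized above.
# Keyword list and posts are hypothetical; simple keyword matching
# measures self-expression, not workplace performance.
DRINKING_KEYWORDS = {"beer", "wine", "champagne", "cocktail", "hungover"}

def flag_posts(posts):
    """Return posts containing any 'risk' keyword, however innocuous."""
    return [p for p in posts if DRINKING_KEYWORDS & set(p.lower().split())]

posts = [
    "Celebrated my sister's wedding with a champagne toast",
    "Tried a new craft beer place for a friend's birthday",
    "Big presentation tomorrow, early night for me",
]
for post in flag_posts(posts):
    print("flagged:", post)
```

Both flagged posts describe ordinary social occasions; the screener detects only the presence of a word, not drinking habits, impairment or anything about job performance.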

Full article: https://www.wsj.com/articles/artificial-intelligencewith-very-real-biases-1508252717?mod=e2tw