Can A.I. Be Taught to Explain Itself? (The New York Times)

By Cliff Kuang, Nov. 21, 2017

As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it.

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: “Advances in A.I. Are Used to Spot Signs of Sexuality.” But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski’s work “dangerous” and “junk science.” (They claimed it had not been peer reviewed, though it had.) Within the next week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: “The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse.”

Kosinski has made a career of warning others about the uses and potential abuses of data. Four years ago, he was pursuing a Ph.D. in psychology, hoping to create better tests for signature personality traits like introversion or openness to change. But he and a collaborator soon realized that Facebook might render personality tests superfluous: Instead of asking if someone liked poetry, you could just see if they “liked” Poetry Magazine. In 2014, they published a study showing that if given 200 of a user’s likes, they could predict that person’s personality-test answers better than their own romantic partner could.

Full article: https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html