WHY AI IS STILL WAITING FOR ITS ETHICS TRANSPLANT - WIRED

SCOTT ROSENBERG

There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results.

The report, released two weeks ago, is the brainchild of Kate Crawford and Meredith Whittaker, cofounders of AI Now, a new research institute based out of New York University. Crawford, Whittaker, and their collaborators lay out a research agenda and a policy roadmap in a dense but approachable 35 pages. Their conclusion doesn’t waffle: Our efforts to hold AI to ethical standards to date, they say, have been a flop.

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI,” they write. When tech giants build AI products, too often “user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles…” Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life. Is there anything we can do? Crawford sat down with us this week for a discussion of why ethics in AI is still a mess, and what practical steps might change the picture.

 

Scott Rosenberg: Towards the end of the new report, you come right out and say, “Current framings of AI ethics are failing.” That sounds dire.

Kate Crawford: There’s a lot of talk about how we come up with ethical codes for this field. We still don’t have one. We have a set of what I think are important efforts spearheaded by different organizations, including IEEE, Asilomar, and others. But what we’re seeing now is a real air gap between high-level principles—that are clearly very important—and what is happening on the ground in the day-to-day development of large-scale machine learning systems.

We read all of the existing ethical codes that have been published in the last two years that specifically consider AI and algorithmic systems. Then we looked at the difference between the ideals and what was actually happening. What is most urgently needed now is that these ethical guidelines are accompanied by very strong accountability mechanisms. We can say we want AI systems to be guided by the highest ethical principles, but we have to make sure that there is something at stake. Often when we talk about ethics, we forget to talk about power. People will often have the best of intentions. But we’re seeing a lack of thinking about how real power asymmetries are affecting different communities.

The underlying message of the report seems to be that we may be moving too fast—we’re not taking the time to do this stuff right.

I would probably phrase it differently. Time is a factor, but so is priority. If we spent as much money and hired as many people to think about and work on and empirically test the broader social and economic effects of these systems, we would be coming from a much stronger base. Who is actually creating industry standards that say, ok, this is the basic pre-release trial system you need to go through, this is how you publicly show how you’ve tested your system and with what different types of populations, and these are the confidence bounds you are prepared to put behind your system or product?

These are things we’re used to in the domains of drug testing and other mission-critical systems, even in terms of things like water safety in cities. But it’s only when we see them fail, for example in places like Flint, Michigan, that we realize how much we rely on this infrastructure being tested so it’s safe for everybody. In the case of AI, we don’t have those systems yet. We need to train people to test AI systems, and to create these kinds of safety and fairness mechanisms. That’s something we can do right now. We need to put some urgency behind prioritizing safety and fairness before these systems get deployed on human populations.

You want to get this stuff in place before there’s the AI equivalent of a Flint disaster.

I think it’s essential that we do that.

The tech landscape right now is dominated by a handful of gigantic companies. So how is that going to happen?

This is the core question. As a researcher in this space, I go to the tools that I know. We can actually do an enormous amount by increasing the level and rigor of research into the human and social impacts of these technologies. One place we think we can make a difference: Who gets a seat at the table in the design of these systems? At the moment it’s driven by engineering and computer science experts who are designing systems that touch everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimize a neural network, we shouldn’t be expecting an engineer to understand the workings of the criminal justice system.

So we have a very strong recommendation that the AI industry should be hiring experts from disciplines beyond computer science and engineering and ensuring that those people have decision-making power. What’s not going to be sufficient is bringing in consultants at the end, when you’ve already designed a system and you’re already about to deploy it. If you’re not thinking about the way systemic bias can be propagated through the criminal justice system or predictive policing, then it’s very likely that, if you’re designing a system based on historical data, you’re going to be perpetuating those biases.

Addressing that is much more than a technical fix. It’s not a question of just tweaking the numbers to try and remove systemic inequalities and biases.

 

For the full article: 

https://www.wired.com/story/why-ai-is-still-waiting-for-its-ethics-transplant/

For the AI Now report: 

https://assets.contentful.com/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf