Posts in AI
AI May Soon Replace Even the Most Elite Consultants - HBR

Amazon’s Alexa just got a new job. In addition to her other 15,000 skills like playing music and telling knock-knock jokes, she can now also answer economic questions for clients of the Swiss global financial services company, UBS Group AG.

According to the Wall Street Journal (WSJ), a new partnership between UBS Wealth Management and Amazon allows some of UBS’s European wealth-management clients to ask Alexa certain financial and economic questions. Alexa will then answer their queries with the information provided by UBS’s chief investment office, so clients never even have to pick up the phone or visit a website. And this is likely just Alexa’s first step into offering business services. Soon she will probably be booking appointments, analyzing markets, maybe even buying and selling stocks. While the financial services industry has already begun the shift from active management to passive management, artificial intelligence will move the market even further, to management by smart machines, as in the case of BlackRock, which is rolling computer-driven algorithms and models into more traditional actively managed funds.

Read More
How to make a racist AI without really trying - ConceptNet Blog

Rob Speer

Perhaps you heard about Tay, Microsoft’s experimental Twitter chat-bot, and how within a day it became so offensive that Microsoft had to shut it down and never speak of it again. And you assumed that you would never make such a thing, because you’re not doing anything weird like letting random jerks on Twitter re-train your AI on the fly.

My purpose with this tutorial is to show that you can follow an extremely typical NLP pipeline, using popular data and popular techniques, and end up with a racist classifier that should never be deployed.

There are ways to fix it. Making a non-racist classifier is only a little bit harder than making a racist classifier. The fixed version can even be more accurate at evaluations. But to get there, you have to know about the problem, and you have to be willing to not just use the first thing that works.
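To make that concrete, here is a minimal sketch, in the spirit of the tutorial but not its actual code, of just such a pipeline: pretrained word embeddings, a tiny sentiment lexicon, and an off-the-shelf classifier, with sentences scored by averaging word scores. The embeddings path and the toy lexicon are placeholders.

```python
# Minimal sketch (not the post's actual code) of a very ordinary sentiment
# pipeline: pretrained word embeddings + a small opinion lexicon + an
# off-the-shelf classifier, then sentence scoring by averaging word scores.
# The embeddings path and the toy lexicon below are placeholders.
import numpy as np
from gensim.models import KeyedVectors
from sklearn.linear_model import LogisticRegression

# Any pretrained embeddings in word2vec text format (hypothetical file name).
vectors = KeyedVectors.load_word2vec_format("embeddings.txt", binary=False)

# Toy stand-in for a real sentiment lexicon with thousands of labeled words.
positive = ["good", "great", "excellent", "happy", "love"]
negative = ["bad", "awful", "terrible", "sad", "hate"]

words = [w for w in positive + negative if w in vectors]
X = np.array([vectors[w] for w in words])
y = np.array([1 if w in positive else 0 for w in words])
clf = LogisticRegression(max_iter=1000).fit(X, y)

def sentence_score(text):
    """Average the classifier's positive-class probability over known words."""
    toks = [t for t in text.lower().split() if t in vectors]
    if not toks:
        return 0.0
    probs = clf.predict_proba(np.array([vectors[t] for t in toks]))[:, 1]
    return float(probs.mean())

# Neutral sentences that differ only in a first name can score very
# differently, because the embeddings carry associations about the names.
print(sentence_score("my name is emily"))
print(sentence_score("my name is shaniqua"))
```

Nothing in the pipeline looks unusual; the bias arrives with the pretrained vectors, and the classifier simply inherits it.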

Read More
Machines trained on photos learn to be sexist towards women - Wired

Last Autumn, University of Virginia computer-science professor Vicente Ordóñez noticed a pattern in some of the guesses made by image-recognition software he was building. “It would see a picture of a kitchen and more often than not associate it with women, not men,” he says.

That got Ordóñez wondering whether he and other researchers were unconsciously injecting biases into their software. So he teamed up with colleagues to test two large collections of labeled photos used to “train” image-recognition software.

Their results are illuminating. Two prominent research-image collections—including one supported by Microsoft and Facebook—display a predictable gender bias in their depiction of activities such as cooking and sports. Images of shopping and washing are linked to women, for example, while coaching and shooting are tied to men. Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.

Mark Yatskar, a researcher at the Allen Institute for Artificial Intelligence, says that phenomenon could also amplify other biases in data, for example related to race. “This could work to not only reinforce existing social biases but actually make them worse,” says Yatskar, who worked with Ordóñez and others on the project while at the University of Washington.
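As a rough illustration of what "amplified them" means in practice, the sketch below (not the researchers' code; all counts are invented) compares the share of "cooking" images labeled with a woman in the training data against the same share in a model's predictions.

```python
# Illustrative sketch (not the study's code) of measuring dataset bias and
# model "amplification" from (activity, gender) label counts. All counts here
# are invented.
from collections import Counter

def woman_share(pairs, activity):
    """Fraction of images of `activity` whose labeled agent is a woman."""
    counts = Counter(gender for act, gender in pairs if act == activity)
    total = counts["woman"] + counts["man"]
    return counts["woman"] / total if total else 0.0

training_labels = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34
model_predictions = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

train_bias = woman_share(training_labels, "cooking")    # 0.66 in training data
pred_bias = woman_share(model_predictions, "cooking")   # 0.84 in predictions
print(f"training: {train_bias:.2f}  predicted: {pred_bias:.2f}  "
      f"amplification: {pred_bias - train_bias:+.2f}")
```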

Read More
Alexa, Siri, Cortana: Our virtual assistants say a lot about sexism - Science Friction

OK, Google. We need to talk. 

For that matter — Alexa, Siri, Cortana — we should too.

The tech world's growing legion of virtual assistants added another to its ranks last month, with the launch of Google Home in Australia.

And like its predecessors, the device speaks in dulcet tones and with a woman's voice. She sits on your kitchen table — discreet, rotund and white — at your beck and call and ready to respond to your questions.

But what's with all the obsequious, subservient small talk? And why do nearly all digital assistants and chatbots default to being female?

A handmaid's tale

Feminist researcher and digital media scholar Miriam Sweeney, from the University of Alabama, believes the fact that virtual agents are overwhelmingly represented as women is not accidental.

"It definitely corresponds to the kinds of tasks they carry out," she says.

Read More
How Silicon Valley's sexism affects your life - Washington Post

It was a rough week at Google. On Aug. 4, a 10-page memo titled "Google's Ideological Echo Chamber" started circulating among employees. It argued that the disparities between men and women in tech and leadership roles were rooted in biology, not bias. On Monday, James Damore, the software engineer who wrote it, was fired; he then filed a labor complaint to contest his dismissal.

We've heard lots about Silicon Valley's toxic culture this summer - venture capitalists who proposition female start-up founders, man-child CEOs like Uber's Travis Kalanick, abusive nondisparagement agreements that prevent harassment victims from describing their experiences. Damore's memo added fuel to the fire, arguing that women are more neurotic and less stress-tolerant than men, less likely to pursue status, and less interested in the "systemizing" work of programming. "We need to stop assuming that gender gaps imply sexism," he concludes.

Like the stories that came before it, coverage of this memo has focused on how a sexist tech culture harms people in the industry - the women and people of color who've been patronized, passed over, and pushed out. But what happens in Silicon Valley doesn't stay in Silicon Valley. It comes into our homes and onto our screens, affecting all of us who use technology, not just those who make it.

Read More
We tested bots like Siri and Alexa to see who would stand up to sexual harassment - Quartz

Women have been made into servants once again. Except this time, they’re digital.

Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Google Home peddle stereotypes of female subservience—which puts their “progressive” parent companies in a moral predicament.

People often comment on the sexism inherent in these subservient bots’ female voices, but few have considered the real-life implications of the devices’ lackluster responses to sexual harassment. By letting users verbally abuse these assistants without ramifications, their parent companies are allowing certain behavioral stereotypes to be perpetuated. Everyone has an ethical imperative to help prevent abuse, but companies producing digital female servants warrant extra scrutiny, especially if they can unintentionally reinforce their abusers’ actions as normal or acceptable.

Read More
Why we desperately need women to design AI - Medium

At the moment, only about 12–15% of the engineers who are building the internet and its software are women.

Here are a couple of examples that illustrate why this is such a big problem:

  • Do you remember when Apple released its health app a few years ago? Its purpose was to offer a ‘comprehensive’ access point to health information and data. But it left out a large health issue that almost all women deal with, and then took a year to fix that hole.
  • Then there was that frustrated middle-school-aged girl who enjoyed gaming but couldn’t find an avatar she related to. So she analyzed 50 popular games and found that 98% of them had male avatars (mostly free!), while only 46% of them had female avatars (mostly available for a charge!). That’s even more askew when you consider that almost half of gamers are women.

We don’t want a repeat of these kinds of situations. And we’ve been working to address this at Women 2.0 for over a decade. We think a lot about how diversity, or the lack of it, has affected and is going to affect the technology outputs that enter our lives. These technologies engage with us. They determine our behaviors, thought processes, buying patterns, world views… you name it. This is part of the reason we recently launched Lane, a recruitment platform for female technologists.

Read More
Look Who’s Still Talking the Most in Movies: White Men - New York Times

With “Wonder Woman” and “Girls Trip” riding a wave of critical and commercial success at the box office this summer, it can be tempting to think that diversity in Hollywood is on an upswing.

But these high-profile examples are not a sign of greater representation in films over all. A new study from the University of Southern California’s Viterbi School of Engineering found that films were likely to contain fewer women and minority characters than white men, and when they did appear, these characters were portrayed in ways that reinforced stereotypes. And female characters, in particular, were generally less central to the plot.

Read More
Why AI Needs a Dose of Design Thinking - Deloitte/WSJ.com

Artificial intelligence technologies could reshape economies and societies, but more powerful algorithms do not automatically yield improved business or societal outcomes. Human-centered design thinking can help organizations get the most out of cognitive technologies.

Today’s artificial intelligence (AI) revolution has been made possible by the big data revolution. The machine learning algorithms researchers have been developing for decades, when cleverly applied to today’s web-scale data sets, can yield surprisingly good forms of intelligence. For instance, the United States Postal Service has long used neural network models to automatically read handwritten zip code digits. Today’s deep learning neural networks can be trained on millions of electronic photographs to identify faces, and similar algorithms may increasingly be used to navigate automobiles and identify tumors in X-rays. The IBM Watson information retrieval system could triumph on the game show “Jeopardy!” partly because most human knowledge is now stored electronically.
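As a deliberately tiny illustration of the digit-recognition task mentioned above, the sketch below trains a small neural network on scikit-learn's bundled 8x8 digits dataset; it is a toy stand-in, not the Postal Service's zip-code system.

```python
# A deliberately tiny stand-in for the digit-recognition task described above:
# a small neural network trained on scikit-learn's bundled 8x8 digits dataset
# (not the Postal Service's zip-code system).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```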

Read More
Diversity in the Robot Reporter Newsroom

The Associated Press recently announced a big new hire: A robot reporter from Automated Insights (AI) would be employed to write up to 4,400 earnings report stories per quarter. Last year, that same automated writing software produced over 300 million stories — that’s some serious scale from a single algorithmic entity.

So what happens to media diversity in the face of massive automated content production platforms like the one Automated Insights created? Despite the fact that we’ve done pretty abysmally at incorporating a balance of minority and gender perspectives in the news media, I think we’d all like to believe that by including diverse perspectives in the reporting and editing of news we fly closer to the truth. A silver lining to the newspaper industry crash has been a profusion of smaller, more nimble media outlets, allowing for far more variability and diversity in the ideas that we’re exposed to.
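For readers wondering what "automated writing software" looks like under the hood, the general approach is template filling over structured data. The sketch below is an invented illustration, not Automated Insights' system; the template, field names, and figures are made up.

```python
# Invented illustration of template-driven story generation, the general
# approach behind automated earnings recaps. Not Automated Insights' system;
# the template, field names, and figures are made up.
TEMPLATE = (
    "{company} reported quarterly earnings of {eps:.2f} per share, "
    "{direction} analysts' expectations of {expected:.2f}. "
    "Revenue came in at ${revenue_m:,.0f} million."
)

def earnings_story(company, eps, expected, revenue_m):
    direction = ("beating" if eps > expected
                 else "missing" if eps < expected
                 else "matching")
    return TEMPLATE.format(company=company, eps=eps, expected=expected,
                           direction=direction, revenue_m=revenue_m)

print(earnings_story("Acme Corp", eps=1.32, expected=1.25, revenue_m=842))
```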

Read More
Inspecting Algorithms for Bias - MIT Technology Review

Courts, banks, and other institutions are using automated data analysis systems to make decisions about your life. Let’s not leave it up to the algorithm makers to decide whether they’re doing it appropriately.

It was a striking story. “Machine Bias,” the headline read, and the teaser proclaimed: “There’s software used across the country to predict future criminals. And it’s biased against blacks.”


Read More
AI robots learning racism, sexism, and other prejudices from humans, study finds - The Independent

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how close different terms were to each other in the text – the same way that automatic translators use “machine learning” to establish what language means.
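The basic measurement behind such studies is distance in a word-embedding space, typically the cosine similarity between word vectors. The sketch below uses tiny made-up vectors purely to show the calculation; real studies use embeddings trained on billions of words.

```python
# Sketch of the basic measurement: how "close" two terms are in an embedding
# space, via cosine similarity. The 3-dimensional vectors here are made up
# purely to show the calculation; real studies use embeddings trained on
# billions of words.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vectors = {
    "flowers":    np.array([0.9, 0.1, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "insects":    np.array([0.1, 0.9, 0.0]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

print(cosine(vectors["flowers"], vectors["pleasant"]))   # relatively high
print(cosine(vectors["insects"], vectors["pleasant"]))   # relatively low
```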

Some of the results were stunning.

Read More
AI programs exhibit racial and gender biases, research reveals - Guardian

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say.

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Read More
Better, Less-Stereotyped Word Vectors - ConceptNet Blog

Bias and Disenfranchisement

Conversational interfaces learn from the data they have been given, and all datasets based on human communication encode bias. In 2013, researchers at Boston University and Microsoft discovered what they characterized as “extremely sexist” patterns in “Word2Vec,” a commonly used set of data based upon three million Google News stories. They found, among other things, that occupations inferred to be “male” included Maestro, Skipper, Protégé and Philosopher, while those inferred to be “female” included Homemaker, Nurse, Receptionist and Librarian.

This is more than a hypothetical risk for organizations; Word2Vec is used to train search algorithms, recommendation engines, and other common applications related to ad targeting or audience segmentation. Organizations building chatbots based on common data sets must investigate potential bias and design for it upfront to prevent alienating and disenfranchising customers and consumers. The good news is that these and other researchers are working on methods to audit predictive models for bias.
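One simple audit of the kind the researchers describe, sketched below as an illustration rather than their actual method, is to project occupation words onto a gender direction (for example, the vector for “he” minus the vector for “she”) and see which way each word leans. The embedding file path here is a placeholder.

```python
# Illustration of one simple audit (not the cited researchers' exact method):
# project occupation words onto a "he minus she" direction in a pretrained
# embedding space and see which way they lean. The file path is a placeholder.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("GoogleNews-vectors.bin", binary=True)

gender_direction = vectors["he"] - vectors["she"]
gender_direction = gender_direction / np.linalg.norm(gender_direction)

for word in ["maestro", "philosopher", "homemaker", "nurse", "receptionist"]:
    v = vectors[word] / np.linalg.norm(vectors[word])
    lean = float(np.dot(v, gender_direction))
    print(f"{word:>12s}: {lean:+.3f} ({'male' if lean > 0 else 'female'}-leaning)")
```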

Read More