Posts in AI
Studies show facial recognition software almost works perfectly – if you’re a white male - Global News

Recent studies indicate that the face recognition technology used in consumer devices can discriminate based on gender and race.

A new study out of the MIT Media Lab indicates that when certain face recognition products are shown photos of a white man, the software correctly guesses the person's gender 99 per cent of the time. However, for subjects with darker skin, the study found error rates of up to nearly 35 per cent.

As part of the Gender Shades project, 1,270 photos were chosen of individuals from three African countries and three European countries and evaluated with artificial intelligence (AI) products from IBM, Microsoft and Face++. The photos were further classified by gender and by skin colour before being tested on these products.

The study notes that while each company's product has a relatively high overall accuracy, between 87 and 94 per cent, there were noticeable differences in error rates across demographic groups.
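The shape of that finding is easy to reproduce in code. A minimal sketch of a per-group audit, with invented counts standing in for the study's data, showing how a high overall accuracy can hide a much worse error rate for one subgroup:

```python
# Hypothetical per-group results for a gender classifier, in the spirit of the
# Gender Shades audit. All counts are invented for illustration only.
groups = {
    # group: (correct predictions, total photos)
    "lighter-skinned men": (299, 300),
    "lighter-skinned women": (294, 300),
    "darker-skinned men": (282, 300),
    "darker-skinned women": (231, 350),
}

correct = sum(c for c, _ in groups.values())
total = sum(t for _, t in groups.values())
print(f"overall accuracy: {correct / total:.1%}")  # 88.5%: looks respectable

for name, (c, t) in groups.items():
    # The per-group error rate is what the headline disparity refers to.
    print(f"{name}: error rate {(t - c) / t:.1%}")
```

With these made-up numbers, the overall figure sits in the high eighties while the error rate for darker-skinned women approaches 35 per cent, which is the pattern of disparity the study reports.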

Full article:

https://globalnews.ca/news/4019123/facial-recognition-software-work-white-male-report/

Read More
Kriti Sharma: making artificial intelligence more ethical - Business au Feminin

Vice-President of Bots and Artificial Intelligence at Sage, Kriti Sharma is a pioneer in the development of intelligent machines capable of functioning and reacting like human beings to simplify companies' administrative tasks. She is also the creator of Pegg, the world's first accounting chatbot, which will be launched in France in 2018 and has already been adopted in 135 countries.

Artificial intelligence is one of the great revolutions of our time, one that could endanger human power and human work. What is your view?

Kriti Sharma: Artificial intelligence is like any other major technological revolution; it will have both positive and negative implications. What matters now is making sure it is used for good ends. For example, for small businesses that do not have large technology teams, artificial intelligence can help automate a number of processes.

Moreover, technology is attracting an increasingly diverse workforce, which was not the case before. Artificial intelligence can also automate itself. Creating software used to take time; now AI is starting to write its own code. It can, to a certain extent, automate the work of the software engineer. So we now need people with creative skills: no longer only engineers, but a combination of Art and Science profiles. In other words, you do not need to be an engineer or a data scientist with a master's degree to work in artificial intelligence.

In "The Future of the Professions", Richard and Daniel Susskind discuss professions, such as lawyers, that will be affected by automation and artificial intelligence. Don't you think this will increase inequality on a global scale?

Read More
DeepMind's Mustafa Suleyman: In 2018, AI will gain a moral compass - Wired

Humanity faces a wide range of challenges that are characterised by extreme complexity, from climate change to feeding and providing healthcare for an ever-expanding global population. Left unchecked, these phenomena have the potential to cause devastation on a previously untold scale. Fortunately, developments in AI could play an innovative role in helping us address these problems.

At the same time, the successful integration of AI technologies into our social and economic world creates its own challenges. They could either help overcome economic inequality or they could worsen it if the benefits are not distributed widely. They could shine a light on damaging human biases and help society address them, or entrench patterns of discrimination and perpetuate them. Getting things right requires serious research into the social consequences of AI and the creation of partnerships to ensure it works for the public good.

Read More
An investigation into the heart of artificial intelligence, its promises and its perils - Le Monde

Are humans threatened by technology? Could machines come to dominate us? Our special report separates fantasy from reality.

Artificial intelligence (AI) is in vogue. In Le Monde and on Lemonde.fr alone, the subject came up in 200 articles in 2017, almost 15% more than in 2016. It has been discussed in every field: economics, science and even politics, since Prime Minister Edouard Philippe entrusted a mission on the question to the mathematician and member of parliament (LRM) Cédric Villani, whose conclusions are expected in January.

It remains to be seen what this term actually covers. Of course, there are the spectacular breakthroughs showing that machines now surpass humans at specific tasks. In healthcare, they spot melanomas or breast tumours on medical images better than doctors do. In transport, they cause fewer accidents than human drivers. Not to mention other advances: speech recognition, mastery of games (poker, Go), writing, painting and music. Behind the scenes of this singular world are the digital giants (Google, Facebook, Amazon, Microsoft, IBM, Baidu…) and start-ups eager to steal the spotlight.

Read More
AI reveals, injects gender bias in the workplace - BenefitsPro

While lots of people worry about artificial intelligence becoming aware of itself, then running amok and taking over the world, others are using it to uncover gender bias in the workplace. And that’s more than a little ironic, since AI actually injects not just gender, but racial bias into its data—and that has real-world consequences.

A Fox News report highlights the research with AI that reveals workplace bias, uncovered by research from Boston-based Palatine Analytics. The firm, which studies workplace issues, “analyzed a trove of data—including employee feedback and surveys, gender and salary information and one-on-one check-ins between managers and employees—using the power of artificial intelligence.”

Read More
Artificial intelligence could hardwire sexism into our future. Unless we stop it - WEF Blog

In five years' time, we might travel to the office in driverless cars, let our fridges order groceries for us and have robots in the classroom. Yet, according to the World Economic Forum's Global Gender Gap Report 2017, it will take another 100 years before women and men achieve equality in health, education, economics and politics.

What’s more, it's getting worse for economic parity: it will take a staggering 217 years to close the gender gap in the workplace.

How can it be that the world is making great leaps forward in so many areas, especially technology, yet it's falling backwards when it comes to gender equality?

Read More
Microsoft Researcher Details The Real-World Dangers Of Algorithm Bias

However quickly artificial intelligence evolves, however steadfastly it becomes embedded in our lives -- in health, law enforcement, sex, etc. -- it can't outpace the biases of its creators: humans. Microsoft researcher Kate Crawford delivered an incredible keynote speech, titled "The Trouble with Bias," at Spain's Neural Information Processing Systems Conference on Tuesday.

Read More
AI tool quantifies power imbalance between female and male characters in Hollywood movies - Technology Breaking News

At first glance, the movie “Frozen” might seem to have two strong female protagonists — Elsa, the elder princess with unruly powers over snow and ice, and her sister, Anna, who spends much of the film on a quest to save their kingdom.

But the two princesses actually exert very different levels of power and control over their own destinies, according to new research from University of Washington computer scientists.

The team used machine-learning-based tools to analyze the language in nearly 800 movie scripts, quantifying how much power and agency those scripts give to individual characters. In their study, recently presented in Denmark at the 2017 Conference on Empirical Methods in Natural Language Processing, the researchers found subtle but widespread gender bias in the way male and female characters are portrayed.
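The paper's actual models learn "connotation frames" from parsed scripts; as a hedged illustration of the general idea only, here is a toy agency scorer built from a hand-made verb lexicon and invented example events, not the researchers' method or data:

```python
from collections import defaultdict

# Toy lexicon scoring the verbs a character performs for agency. The real
# study learned such frames from data; these values are invented.
agency_lexicon = {
    "decides": 1, "builds": 1, "leaves": 1, "rescues": 1,   # high agency
    "waits": -1, "needs": -1, "fails": -1, "is saved": -1,  # low agency
}

# (character, verb) pairs, as if extracted from parsed script sentences.
events = [
    ("Elsa", "decides"), ("Elsa", "builds"), ("Elsa", "leaves"),
    ("Anna", "fails"), ("Anna", "needs"), ("Anna", "is saved"),
]

scores = defaultdict(int)
for character, verb in events:
    scores[character] += agency_lexicon.get(verb, 0)

print(dict(scores))  # {'Elsa': 3, 'Anna': -3}
```

Aggregating such scores over hundreds of scripts is what lets the researchers compare how much power and agency male and female characters are given.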

“‘Frozen’ is an interesting example because Elsa really does make her own decisions and is able to drive her own destiny forward, while Anna consistently fails in trying to rescue her sister and often needs the help of a man,” said lead author and Paul G. Allen School of Computer Science & Engineering doctoral student Maarten Sap, whose team also applied the tool to Wikipedia plot summaries of several classic Disney princess movies.

“Anna is actually portrayed with the same low levels of power and agency as Cinderella, which is a movie that came out more than 60 years ago. That’s a pretty sad finding,” Sap said.

Read More
Can A.I. Be Taught to Explain Itself? - New York Times

As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it.

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: "Advances in A.I. Are Used to Spot Signs of Sexuality." But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski's work "dangerous" and "junk science." (They claimed it had not been peer reviewed, though it had.) In the next week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: "The Invention of A.I. 'Gaydar' Could Be the Start of Something Much Worse."

Read More
Something really is wrong on the Internet. We should be more worried. - The Washington Post

“Something is wrong on the internet,” declares an essay now trending in tech circles. But the issue isn’t Russian ads or Twitter harassers. It’s children’s videos.

The piece, by tech writer James Bridle, was published on the heels of a report from the New York Times that described disquieting problems with the popular YouTube Kids app. Parents have been handing their children an iPad to watch videos of Peppa Pig or Elsa from “Frozen,” only for the supposedly family-friendly platform to offer up some disturbing versions of the same. In clips camouflaged among more benign videos, Peppa drinks bleach instead of naming vegetables. Elsa might appear as a gore-covered zombie or even in a sexually compromising position with Spider-Man.

The phenomenon is alarming, to say the least, and YouTube has said that it’s in the process of implementing new filtering methods. But the source of the problem will remain. In fact, it’s the site’s most important tool — and increasingly, ours.

YouTube suggests search results and “up next” videos using proprietary algorithms: computer programs that, based on a particular set of guidelines and trained on vast sets of user data, determine what content to recommend or to hide from a particular user. They work well enough: the company claims that in the past 30 days, only 0.005 percent of YouTube Kids videos were flagged as inappropriate. But as these latest reports show, no piece of code is perfect.

Read More
Garbage In, Garbage Out - NEVERTHELESS

One afternoon in Florida in 2014, 18-year-old Brisha Borden was running to pick up her god-sister from school when she spotted an unlocked kid’s bicycle and a silver scooter. Brisha and a friend grabbed the bike and scooter and tried to ride them down the street. Just as the two 18-year-old girls were realizing they were too big for the toys, a woman came running after them saying, “That’s my kid’s stuff.” They immediately dropped the stuff and walked away. But it was too late: a neighbor who witnessed the event had already called the police. Brisha and her friend were arrested and charged with burglary and petty theft for the items, valued at a total of $80.

The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store. He had already been convicted of several armed robbery charges and had served five years in prison. Borden, the 18-year-old, had a record too, but for juvenile misdemeanors.

For the full transcript and podcast:

https://medium.com/nevertheless-podcast/transcript-garbage-in-garbage-out-78b74b08f16e

Read More
Understanding Bias in Algorithmic Design - ASME Demand

In 2016, The Seattle Times uncovered an issue with a popular networking site’s search feature. When the investigative reporters entered female names into LinkedIn’s search bar, the site asked if they meant to search for similar-sounding male names instead: “Stephen Williams” instead of “Stephanie Williams,” for example. According to the paper’s reporting, however, the trend wouldn’t happen in reverse when a user searched for male names.

Within a week of The Seattle Times article’s release, LinkedIn introduced a fix. Spokeswoman Suzi Owens told the paper that the search algorithm had been guided by “relative frequencies of words” from past searches and member profiles, not by gender. Her explanation suggests that LinkedIn’s algorithm was not intentionally biased. Nevertheless, using word frequency, a seemingly objective variable, as a key parameter still generated skewed results. That could be because American men are more likely than American women to have a common name, according to Social Security data. Thus, a search function built on frequency criteria alone would be more likely to boost visibility for Stephens than for Stephanies.
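To see how a frequency-only rule produces that skew, consider a minimal sketch of a "did you mean" feature. This is not LinkedIn's actual code, and the name counts and threshold are invented:

```python
from difflib import get_close_matches

# Invented frequencies of names seen in past searches and member profiles.
name_counts = {"stephen": 120_000, "stephanie": 45_000, "michael": 200_000}

def suggest(query, boost=2.0):
    """Offer a similar name only when it is much more frequent than the query."""
    query = query.lower()
    for candidate in get_close_matches(query, name_counts, n=3, cutoff=0.7):
        if candidate != query and name_counts[candidate] > boost * name_counts.get(query, 0):
            return candidate
    return None

print(suggest("stephanie"))  # 'stephen': the rarer name gets "corrected"
print(suggest("stephen"))    # None: the more common name is left alone
```

Gender never appears in the rule, yet because the male spelling is more frequent in these hypothetical counts, the correction only ever fires in one direction.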

Examples like this demonstrate how algorithms can unintentionally reflect and amplify common social biases. Other recent investigations suggest that such incidents are not uncommon. In a more serious case, the investigative news organization ProPublica uncovered a correlation between race and criminal recidivism predictions in so-called “risk assessments” — predictive algorithms that are used by courtrooms to inform terms for bail, sentencing, or parole. The algorithmic predictions for recidivism generated a higher rate of false negatives for white offenders and a higher rate of false positives for black offenders, even though overall error rates were roughly the same.
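The distinction between overall error rate and error type is easy to miss, so here is a toy numerical example with invented confusion matrices, not ProPublica's data, showing how two groups can share an error rate while the kinds of mistakes diverge:

```python
# Toy illustration: equal overall error rates, very different error types.

def rates(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "error_rate": (fp + fn) / total,
        "false_positive_rate": fp / (fp + tn),  # labeled high-risk, did not reoffend
        "false_negative_rate": fn / (fn + tp),  # labeled low-risk, did reoffend
    }

# Hypothetical confusion matrices for two groups of 1,000 defendants each.
group_a = rates(tp=300, fp=300, tn=300, fn=100)  # errors skew toward false positives
group_b = rates(tp=100, fp=100, tn=500, fn=300)  # errors skew toward false negatives

print(group_a)  # error 0.40, FPR 0.50, FNR 0.25
print(group_b)  # error 0.40, FPR ~0.17, FNR 0.75
```

Both groups are misclassified 40 per cent of the time in this sketch, yet one group bears most of the wrongful "high-risk" labels, which is the asymmetry ProPublica reported.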

Read More
WHY AI IS STILL WAITING FOR ITS ETHICS TRANSPLANT - WIRED

There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results.

The report, released two weeks ago, is the brainchild of Kate Crawford and Meredith Whittaker, cofounders of AI Now, a new research institute based out of New York University. Crawford, Whittaker, and their collaborators lay out a research agenda and a policy roadmap in a dense but approachable 35 pages. Their conclusion doesn’t waffle: Our efforts to hold AI to ethical standards to date, they say, have been a flop.

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI,” they write. When tech giants build AI products, too often “user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles…” Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life. Is there anything we can do? Crawford sat down with us this week for a discussion of why ethics in AI is still a mess, and what practical steps might change the picture.

Read More
Your Data Are Probably Biased And That's Becoming A Massive Problem. Beware of black boxes - INC

Nobody sets out to be biased, but bias is harder to avoid than you would think. Wikipedia lists over 100 documented biases, from authority bias and confirmation bias to the Semmelweis effect. We have an enormous tendency to let things other than the facts affect our judgments. All of us, as much as we hate to admit it, are vulnerable.

Machines, even virtual ones, have biases too. They are designed, necessarily, to favor some kinds of data over others. Unfortunately, we rarely question the judgments of mathematical models and, in many cases, their biases can pervade and distort operational reality, creating unintended consequences that are hard to undo.

Yet the biggest problem with data bias is that we are mostly unaware of it, because we assume that data and analytics are objective. That's almost never the case. Our machines are, for better or worse, extensions of ourselves and inherit our subjective judgments. As data and analytics increasingly become a core component of our decision making, we need to be far more careful.

Read More
Employment: reconciling humans and machines for ethical AI - Silicon

Ariane Beky, 27 October 2017, 17:54

The think tank Renaissance Numérique and the Randstad France group advocate close collaboration between humans and artificial intelligence technologies.

The public debate coordinated by the Commission nationale informatique et libertés (CNIL) on the issues raised by algorithms and artificial intelligence (AI) continues.

Conversational agents, automated systems, translation tools, recommendation, geolocation… Artificial intelligence technologies have a growing influence on society, work and employment. The think tank Renaissance Numérique and the temporary-staffing and HR services group Randstad have examined the issue. On Thursday, they published their contribution, entitled "Ethics in employment in the age of artificial intelligence".

In France, one job in two will be transformed by the combined effects of automation and digitisation. Some tasks, and not only the most arduous and repetitive ones, will no longer be performed by humans. The hypothesis of massive job destruction at the hands of artificial intelligence technologies is, however, ruled out.

Read More
Are algorithms making us W.E.I.R.D.? - alphr

Western, educated, industrialised, rich and democratic (WEIRD) norms are distorting the cultural perspective of new technologies

From what we see in our internet search results to how we manage our investments, travel routes and love lives, algorithms have become a ubiquitous part of our society. Algorithms are not just an online phenomenon: they are having an ever-increasing impact on the real world. Children are being born to couples who were matched by dating-site algorithms, whilst the navigation systems for driverless cars are poised to transform our roads.

Read More
Who Controls Our Algorithmic Future? - Datanami

Alex Woodie

The accelerating pace of digitization is bringing real, tangible benefits to our society and economy, which we cover daily in the pages of this site. But increased reliance on machine learning algorithms brings its own unique set of risks that threaten to unwind that progress and turn people against one another. Three speakers at last week's Strata Data Conference in New York put it all in perspective.

Read More
Examining Gender and Emotion in Political News Debates - Affectiva Blog

Blog post by: Juliana Viola, Intern at Affectiva

Today in the US and around the world, women are undeniably underrepresented in politics. American women make up just 19.4% of Congress and 24.9% of state legislators. Globally, just ten women serve as head of state and nine as head of government. This lack of diversity brings huge consequences; time and time again, studies have documented how diversity can spur workplace innovation and boost productivity. Therefore, increasing the representation of women, specifically women of color, in government offices would likely lead to a more effective government.

In the same vein, as an avid news junkie, I have often noticed homogeneity in the panel discussions I watch on TV. Political panels in particular are often composed mostly of men. I wondered how I could capture metrics about how panel members emote and participate in the discussion, and how these metrics might vary by gender. For example, how is airtime split between men and women? Since the American public relies on political talk shows for perspective, these panels would ideally represent a diversity of voices interpreting objective information.
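As one hedged example of what such a metric could look like, here is a sketch that computes airtime share by gender from hand-annotated talk segments. The segments and labels are invented for illustration, not Affectiva's data or pipeline:

```python
from collections import defaultdict

# Hypothetical annotated segments: (speaker, gender, seconds of speech).
segments = [
    ("A", "male", 95), ("B", "female", 40), ("C", "male", 120),
    ("B", "female", 35), ("D", "female", 25), ("A", "male", 60),
]

def airtime_share(segments):
    """Return each gender's share of total speaking time."""
    totals = defaultdict(float)
    for _, gender, seconds in segments:
        totals[gender] += seconds
    grand_total = sum(totals.values())
    return {g: t / grand_total for g, t in totals.items()}

print(airtime_share(segments))  # e.g. {'male': 0.733..., 'female': 0.266...}
```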

Read More
Artificial Intelligence: Making AI in our Images - Savage Minds

Savage Minds welcomes guest blogger Sally Applin

Hello! I’m Sally Applin. I am a technology anthropologist who examines automation, algorithms and Artificial Intelligence (AI) in the context of preserving human agency. My dissertation focused on small, independent, fringe new-technology makers in Silicon Valley: what they are making and, most critically, how the adoption of the outcomes of their efforts impacts society and culture locally and/or globally. I’m currently spending the summer in a corporate AI research group, where I contribute to anthropological research on AI. I’m thrilled to blog for the renowned Savage Minds this month and hope many of you find value in my contributions.

Read More
Debiasing AI Systems - Luminoso Blog

One of the most-discussed topics in AI recently has been the growing realization that AI-based systems absorb human biases and prejudices from their training data. While this has only recently become a hot news topic, AI organizations, including Luminoso, have been focused on this issue for a while. Denise Christie sat down with Luminoso's Chief Science Officer, Rob Speer, to talk about how AI becomes biased in the first place, the impact such bias can have and, more importantly, how to mitigate it.

Read More