If you’re a fan of Black Mirror or Orwell’s 1984 (or even if you’re not), don’t miss Training Humans, the exhibition organized and curated by AI researcher Kate Crawford and artist and researcher Trevor Paglen at Osservatorio Fondazione Prada in Milan. On view until February 24, this massive show offers unparalleled insight into how AI systems are trained, presenting the photographs scientists use to teach these systems how to analyse, see, observe and, ultimately, judge.

Questioning the current status of the image in artificial intelligence and algorithmic systems, from education and healthcare to military surveillance, law enforcement, hiring and the criminal justice system, Training Humans explores two fundamental issues: how humans are represented, interpreted and codified through training datasets, and how technological systems harvest, label and use this material. Through this exploration, the exhibition makes plain how biased these systems are, exposing AI technologies’ errors, ideological positions and assumptions based on, among other things, race, gender, age and emotion.

“When we first started conceptualizing this exhibition over two years ago, we wanted to tell a story about the history of images used to ‘recognize’ humans in computer vision and AI systems”, explains Trevor Paglen, one of the organizers. “We weren’t interested in either the hyped, marketing version of AI nor the tales of dystopian robot futures.” However, the forms of measurement these technologies are taught often turn into moral judgments that perpetuate a long (and dark) history of colonial and racist systems of population segmentation, as well as other forms of discrimination: ableism, homophobia, sexism and so on.

Kate Crawford, the other organizer, states: “There is a stark power asymmetry at the heart of these tools. What we hope is that Training Humans gives us at least a moment to start to look back at these systems, and understand, in a more forensic way, how they see and categorise us.” Indeed, with the rise of social media, companies and institutions working with AI and facial recognition technologies have been harvesting the thousands of images created and posted every day on platforms like Instagram, Facebook and YouTube. As Paglen explains, “this exhibition shows how these images are part of a long tradition of capturing people’s images without their consent, in order to classify, segment, and often stereotype them in ways that evoke colonial projects of the past.”

All in all, the show is a powerful wake-up call about the dystopian reality we’re entering, and it makes us wonder where we should draw the line, if we still can. With growing news of governments surveilling their citizens, the increasing number of ‘security’ cameras on every corner of every city, and facial recognition systems implemented everywhere, Training Humans raises two essential questions: where are the boundaries between science, history, politics, prejudice and ideology in artificial intelligence? And who has the power to build and benefit from these systems?
The exhibition Training Humans is on view until February 24 at Osservatorio Fondazione Prada, Galleria Vittorio Emanuele II, Milan.

Words
David Valero
Photos
Marco Cappelletti. Courtesy of Fondazione Prada
