If you’re a fan of Black Mirror or Orwell’s 1984 – or even if you’re not – don’t miss Training Humans, the exhibition organized and curated by AI researcher Kate Crawford and artist and researcher Trevor Paglen at Osservatorio Fondazione Prada in Milan. On view until February 24, the massive show gives unparalleled insight into how AI software is trained by showing the audience the photographs scientists use to teach these systems to see, analyse, observe and, ultimately, judge.
Questioning the present status of the image in artificial intelligence and algorithmic systems – from education and healthcare to military surveillance, from law enforcement and hiring to the criminal justice system – Training Humans explores two fundamental issues: how humans are represented, interpreted and codified through training datasets, and how technological systems harvest, label and use this material. In doing so, the exhibition makes plain how biased this software is and exposes these AI technologies’ errors, ideological positions and assumptions – based on race, gender, age or emotion, among other factors.

“When we first started conceptualizing this exhibition over two years ago, we wanted to tell a story about the history of images used to ‘recognize’ humans in computer vision and AI systems”, explains Trevor Paglen, one of the organizers. “We weren’t interested in either the hyped, marketing version of AI or the tales of dystopian robot futures.” However, the forms of measurement these technologies are taught often turn into moral judgments, which perpetuate a long (and dark) history of colonial and racist systems of population segmentation, as well as other forms of discrimination – ableism, homophobia, sexism and more.

Kate Crawford, the other organizer, states: “There is a stark power asymmetry at the heart of these tools. What we hope is that Training Humans gives us at least a moment to start to look back at these systems, and understand, in a more forensic way, how they see and categorise us.” With the rise of social media, companies and institutions working with AI and facial recognition technologies have been harvesting the thousands of images created and posted every day on platforms like Instagram, Facebook or YouTube. As Paglen explains, “this exhibition shows how these images are part of a long tradition of capturing people’s images without their consent, in order to classify, segment, and often stereotype them in ways that evoke colonial projects of the past.”

In all, the show is a powerful wake-up call about the dystopian reality we’re entering and makes us wonder where we should draw the line – if we still can. With growing news of governments surveilling their citizens, the increasing number of ‘security’ cameras on every corner of every city, and facial recognition systems being implemented everywhere, Training Humans raises two essential questions: where are the boundaries between science, history, politics, prejudice and ideology in artificial intelligence? And who has the power to build and benefit from these systems?
The exhibition Training Humans is on view until February 24 at Osservatorio Fondazione Prada, Galleria Vittorio Emanuele II, Milan.
"Sdumla-Hmt". Yilong Yin, Lili Liu, Ximei Sun, 2011.
From left to right: “Development in the visual cortex”, Colin Blakemore, 1973; “CASIA gait and cumulative foot pressure”, Shuai Zheng, Kaiqi Huang, Tieniu Tan, Dacheng Tao, 2001; “Columbia Gaze”, Brian A. Smith, Qi Yin, Steven K. Feiner, Shree K. Nayar, 2013; “Multiple encounters dataset (MEDS-II)”, National Institute of Standards and Technology, 2011; “Extended Yale face database B”, Athinodoros Georghiades, Peter Belhumeur, David Kriegman, 2001; “FERET dataset”, National Institute of Standards and Technology, 1993-96.
On the table: “UTKFace”, Zhifei Zhang, Yang Song, Hairong Qi, 2017. On the wall: “Selfie dataset”, Mahdi M. Kalayeh, Misrak Seifu, Wesna LaLanne, Mubarak Shah, 2015.
On the screen: “Labeled faces in the wild”, Gary B. Huang, Manu Ramesh, Tamara Berg, Erik Learned-Miller, 2007. On the wall: “Cross-age celebrity dataset (CACD)”, Bor-Chun Chen, Chu-Song Chen, Winston H. Hsu, 2014.
From left to right: “Age, gender, emotions in the wild”, Trevor Paglen Studio, 2019; “ImageNet Roulette”, Trevor Paglen Studio, 2019.