Questioning the present status of the image in artificial intelligence and algorithmic systems – from education and healthcare to military surveillance, from law enforcement and hiring to the criminal justice system – Training Humans explores two fundamental issues: how humans are represented, interpreted and codified through training datasets, and how technological systems harvest, label and use this material. Through this exploration, the exhibition makes plain how biased this software is, exposing the errors, ideological positions and assumptions of AI technologies – based on, among other things, race, gender, age and emotion.
“When we first started conceptualizing this exhibition over two years ago, we wanted to tell a story about the history of images used to ‘recognize’ humans in computer vision and AI systems”, explains Trevor Paglen, one of the organizers. “We weren’t interested in either the hyped, marketing version of AI or the tales of dystopian robot futures.” However, the forms of measurement these technologies are taught often turn into moral judgments, perpetuating a long (and dark) history of colonial and racist systems of population segmentation, as well as other forms of discrimination – ableism, homophobia, sexism, and more.
Kate Crawford, the other organizer, states: “There is a stark power asymmetry at the heart of these tools. What we hope is that Training Humans gives us at least a moment to start to look back at these systems, and understand, in a more forensic way, how they see and categorise us.” With the rise of social media, companies and institutions working with AI and facial recognition technologies have been harvesting the thousands of images created every day and posted on platforms like Instagram, Facebook, and YouTube. As Paglen explains, “this exhibition shows how these images are part of a long tradition of capturing people’s images without their consent, in order to classify, segment, and often stereotype them in ways that evoke colonial projects of the past.”
In all, the show is a powerful wake-up call about the dystopian reality we’re entering, and it makes us wonder where we should draw the line – if we still can. With growing news of governments surveilling their citizens, the increasing number of ‘security’ cameras on every corner of every city, and facial recognition systems implemented everywhere, Training Humans raises two essential questions: where are the boundaries between science, history, politics, prejudice and ideology in artificial intelligence? And who has the power to build and benefit from these systems?