Anatomy of an AI System
Vladan Joler (RS) & Kate Crawford (AU)
Anatomy of an AI System is a large-scale map and long-form essay investigating the human labor, data, and planetary resources required to build and operate an Amazon Echo. The exploded view diagram combines and visualizes three central, extractive processes that are required to run a large-scale artificial intelligence system: material resources, human labor, and data. The map and essay consider these three elements across time—represented as a visual description of the birth, life, and death of a single Amazon Echo unit.
At this moment in the 21st century, we see a new form of extractivism that is well underway: one that reaches into the furthest corners of the biosphere and the deepest layers of human cognitive and affective being. Many of the assumptions about human life made by machine learning systems are narrow, normative, and laden with error. Yet they are inscribing and building those assumptions into a new world, and will increasingly play a role in how opportunities, wealth, and knowledge are distributed.
The stack that is required to interact with an Amazon Echo goes well beyond the multi-layered technical stack of data modeling, hardware, servers, and networks. The full stack reaches much further into capital, labor, and nature, and demands an enormous amount of each. The true costs of these systems—social, environmental, economic, and political—remain hidden and may stay that way for some time.
We offer up this map and essay as a way to begin seeing across a wider range of system extractions. The scale required to build artificial intelligence systems is too complex, too obscured by intellectual property law, and too mired in logistical complexity to fully comprehend in the moment.
Kristina Tica (RS)
FUTUREFALSEPOSITIVE is a project based on StyleGAN and object recognition algorithms applied to the ritual of Turkish coffee mug reading. A collection of these pictures composed the initial database out of which new images were generated. 15,000 real-life and generated images are then morphed into an animation and used to train the algorithm to recognize objects in the random shapes created by the coffee stains and generated noise. The algorithm performs this continuous object recognition process in real time—reading the mug—while producing new visual narratives in a loop. In this process, a relation is established between false positives in computer vision and the psychological phenomena of pareidolia and apophenia. The interplay between prediction as a false positive and prophecy as apophenia—the tendency to perceive meaningful connections between seemingly unrelated things—does not only focus on absurdity but on the possibilities of creative interpretation when trying to understand the technical processes behind them.
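The loop described above, in which frames of the morphing animation are fed to a recognition model and every above-threshold detection becomes part of the visual narrative, can be sketched in miniature. Everything here is an illustrative assumption: the `detect_objects` stand-in uses seeded random scores in place of the project's actual StyleGAN and object-recognition models, and the symbol list and threshold are invented for the sketch.

```python
import random

# Illustrative symbol set: objects a detector might "see" in coffee stains.
SYMBOLS = ["bird", "snake", "road", "ring", "mountain"]

def detect_objects(frame, rng, threshold):
    """Stand-in for the real recognition model: returns (label, confidence)
    pairs above the threshold. A real system would run a neural network on
    the frame; here seeded random scores play the part of false positives."""
    return [(label, conf)
            for label in SYMBOLS
            if (conf := rng.random()) >= threshold]

def read_the_mug(frames, threshold=0.6):
    """'Read' each frame of the morphing animation: every detection above
    the threshold becomes a line of the narrative. False positives are not
    filtered out; they are the point."""
    rng = random.Random(42)  # seeded so the sketch is reproducible
    narrative = []
    for i, frame in enumerate(frames):
        for label, conf in detect_objects(frame, rng, threshold):
            narrative.append(f"frame {i}: a {label} appears ({conf:.2f})")
    return narrative

print("\n".join(read_the_mug(frames=range(3))))
```

The stand-in makes the structural point visible: the "reading" is just whatever clears the confidence threshold, whether or not anything is actually there.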
PERCEPTION iO (Documentation)
Karen Palmer (GB)
A documentation of the artwork will be exhibited as part of the symposium.
Perception iO (Input Output) is the future of law enforcement: an artificial intelligence that is emotionally responsive to the participant and, potentially, to their bias.
The participant will assume the role of a police officer watching an interactive training video of an escalating, volatile situation. They will experience the interaction from the perspective of a cop’s body camera and come into contact (separately) with a Black protagonist and a white protagonist. Each protagonist plays either the role of a criminal or of a person with mental health issues. The Perception iO system will track the participant’s facial expressions. How they respond emotionally to the scene will have consequences for the characters: it will influence the branching narrative, prompting the cop to either arrest, assist, or shoot the character on the screen.
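The branching mechanism described above, where a detected facial expression steers the story toward arrest, assist, or shoot, can be sketched as a small decision function. The emotion labels, confidence thresholds, and branch rules here are illustrative assumptions for the sketch, not the artwork's actual mapping.

```python
def next_branch(emotion_scores):
    """Choose the narrative branch (arrest / assist / shoot) from a dict of
    emotion -> confidence, of the kind a facial-expression-recognition
    model might produce for one reaction.

    NOTE: labels, thresholds, and rules are hypothetical, invented to
    illustrate how a bias in the input can propagate into the outcome."""
    dominant = max(emotion_scores, key=emotion_scores.get)
    if dominant in ("anger", "fear"):
        # A high-confidence hostile reading escalates the scene.
        return "shoot" if emotion_scores[dominant] > 0.8 else "arrest"
    if dominant in ("calm", "sadness"):
        return "assist"
    return "arrest"  # default branch for ambiguous readings

print(next_branch({"anger": 0.9, "calm": 0.05, "sadness": 0.05}))  # shoot
```

Even this toy version shows the installation's point: whatever bias shapes the detected emotion is carried straight through into which branch, and which fate, the on-screen character receives.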
The Perception iO immersive experience is a convergence of neuroscience, behavioral psychology, film, AI, facial emotion detection, eye tracking, bias, and social justice. It reveals how a person’s emotions (and, via eye-tracking functionality currently in development, their gaze) influence their perception of reality through an immersive storytelling experience.
Built on extensive research in artificial intelligence, this installation will enable participants to experience how easily human bias can be integrated into networks by humans, and therefore to understand the necessity for transparency and regulation in AI. The immersive experience generates self-reflection and discussion on issues of bias, ethics, and accountability, both for the participants and for the people creating these types of systems.
This is a collaboration of art (Cooper Hewitt), tech (ThoughtWorks Arts), and science (NYU) to create the installation and undertake the ethical, academic, and philosophical R&D. EmoPy is a bespoke, open-source deep neural net toolkit for emotion analysis via Facial Expression Recognition (FER), created by ThoughtWorks Arts and Karen Palmer. Perception iO was commissioned by the Cooper Hewitt Smithsonian Design Museum, NYC.