Home

‘What the Robot Saw’ (v 0.9) is an endless documentary by a social media robot. The Robot depicts the images of people and scenes it encounters — as its visual algorithms perceive them. It’s also a massively durational online robo-performance and archive.

A social media AI turned documentary filmmaker, the Robot continuously makes its way through the world of low-engagement online video, carefully organizing and labeling the people and scenes it features in its documentary. The subjects speak and behave in parallel to, and in defiance of, the Robot’s surveillance-centric and consumerist perceptions. The film is constantly curated, edited, titled, and archived algorithmically from among the least-viewed and least-subscribed YouTube videos uploaded over the past several hours — videos whose main audience may be online robots. At once a robo-documentary and a generative durational performance, ‘What the Robot Saw’ also documents the complicated relationship between the world’s surveillant and curatorial AI robots and the humans who are both their subjects and their stars.



‘What the Robot Saw’ streams at 1080p, so please check that the YouTube player is set to the highest resolution your network connection will handle (gear icon at the lower right of the player). Fullscreen or Theatre Mode is recommended (net conditions permitting). If the stream isn’t live, you can find recent archives here.

Humans have a complicated relationship with robots. Especially the ones that work in the media. An invisible audience of software robots continually analyzes content on the Internet. Videos by non-“YouTube stars” that algorithms don’t promote to the top of the search rankings or the “recommended” sidebar may be seen by few or no human viewers. For these videos, robots may be the primary audience. In ‘What the Robot Saw,’ the Robot is an AI voyeur turned director: classifying and magnifying the online personas of the subjects of a never-ending film.

Using computer vision, neural networks, and other robotic ways of seeing, hearing, and understanding, the Robot continually selects, edits, and identifies recently uploaded public YouTube clips from among those with low subscriber and view counts, focusing on personal videos. A loose, stream-of-AI-“consciousness” narrative is developed as the Robot drifts through neural network-determined groupings. As the Robot scans and magnifies the clips, it generates the film in a style fitting its own obsessions, inserting titles for sections and “interviewees,” and streams it live back to YouTube for public viewing.
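The selection step can be approximated with the public YouTube Data API. The sketch below (Python; not the project’s actual code) searches for videos uploaded in the past few hours and keeps only those with very low view counts. The YT_API_KEY environment variable, the six-hour window, and the ten-view cutoff are illustrative assumptions, not the project’s settings.

```python
# Minimal sketch (not the project's actual code) of the selection step:
# find recently uploaded public videos, keep only the least viewed.
import os
from datetime import datetime, timedelta, timezone

from googleapiclient.discovery import build  # pip install google-api-python-client

MAX_VIEWS = 10  # illustrative "low engagement" threshold

youtube = build("youtube", "v3", developerKey=os.environ["YT_API_KEY"])

# Search for videos published within the window, newest first.
published_after = (datetime.now(timezone.utc) - timedelta(hours=6)).isoformat()
search = youtube.search().list(
    part="snippet",
    type="video",
    order="date",
    publishedAfter=published_after,
    maxResults=50,
).execute()
video_ids = [item["id"]["videoId"] for item in search.get("items", [])]

# Look up view counts and keep only the least-viewed uploads.
if video_ids:
    stats = youtube.videos().list(
        part="snippet,statistics", id=",".join(video_ids)
    ).execute()
    for video in stats.get("items", []):
        if int(video["statistics"].get("viewCount", 0)) <= MAX_VIEWS:
            print(video["id"], "-", video["snippet"]["title"])
```

In practice a filter like this would also need to consider subscriber counts and whether a video looks “personal,” but the view-count query above captures the basic idea of surfacing uploads that almost no humans have watched.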

[Screenshot gallery: WTRS Screenshot 01 (image 1 of 15)]

Robot meets Resting Bitch Face and Other Adventures. As it makes its way through the film, the Robot adds lower-third supers: periodic section titles, derived from its image recognition-based groupings and interpreted through the Robot’s vaguely poetic perspective; and frequent identifiers for the many human interviewees in its documentary. The identifiers — talking-head-style descriptions like “Confused-Looking Female, age 22-34” — are generated using Amazon Rekognition, a popular commercial face detection/recognition service. (The Robot precedes descriptions that have somewhat weaker confidence scores with qualifiers like “Arguably.”)

Rekognition’s feature set offers a glimpse into how computer vision robots, marketers, and others drawn to off-the-shelf face detection and recognition products commonly choose to categorize humans — to the point where these attributes might better identify us than our names. (Of course, Amazon itself is quite a bit more granular in its techniques.) While attempting to follow Rekognition’s guidelines that differentiate a person’s appearance from their actual internal emotional state, the Robot titles each person as it analyzes/perceives them — as marketers and others using similar software do. When you’re a computer vision robot, appearance is everything. Pixels don’t have internal states.
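For illustration, here is a minimal sketch (Python with boto3; not the project’s actual code) of how such an identifier might be assembled from Rekognition’s DetectFaces response. AWS credentials are assumed to be configured, and the 90% confidence cutoff for prepending “Arguably” is an illustrative guess, not the project’s real threshold.

```python
# Minimal sketch (not the project's code) of a Rekognition-generated
# identifier like "Confused-Looking Female, age 22-34". Assumes AWS
# credentials are configured; the 90% cutoff for "Arguably" is a guess.
import boto3

rekognition = boto3.client("rekognition")

def identifier_for_frame(jpeg_bytes: bytes) -> str | None:
    """Build a lower-third identifier for the first face in a video frame."""
    response = rekognition.detect_faces(
        Image={"Bytes": jpeg_bytes},
        Attributes=["ALL"],  # include age range, gender, emotions, etc.
    )
    faces = response["FaceDetails"]
    if not faces:
        return None
    face = faces[0]

    # Use the highest-confidence emotion, phrased as an appearance
    # ("-Looking"), per Rekognition's caveat that it reads expressions,
    # not internal states.
    emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    gender = face["Gender"]
    age = face["AgeRange"]
    label = (f'{emotion["Type"].capitalize()}-Looking {gender["Value"]}, '
             f'age {age["Low"]}-{age["High"]}')

    # Hedge weaker predictions, as the Robot does.
    if min(emotion["Confidence"], gender["Confidence"]) < 90:
        label = "Arguably " + label
    return label
```

Phrasing the emotion as “-Looking” is the point: the service reports only what a face appears to express, which is exactly the distinction the Robot’s titles dramatize.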

Dated archives are generated on YouTube for each daypart livestream, offering a theoretically endless, on-demand archive of the videos few humans get to see, as robots might, and sometimes do, see them.

The live streams run throughout the day; there are brief “intermissions” every four hours (and as needed for maintenance). An extensive archive auto-documenting previous streams — and thus a piece of YouTube in real time — is available on the Videos page and the YouTube Channel.

Although the YouTube live stream is central to the project, the technical limitations of live streaming mean that image and sound quality are not ideal and may vary with network conditions. A high-quality stream can be generated locally for art installations and screenings.

‘What the Robot Saw’ is a non-commercial project by Amy Alexander.