Entoptic Field Camera

Part of Post-Phenomenological AI Studies and Entoptic Media

Inspired by the appearance of skies in photos of the 2020 California wildfires (i.e., deep reds rendered as bluish-gray), I have prototyped a GAN-driven camera web application that makes this ‘reality-autocorrect’ explicit. Photos taken by a user are sent to an API for ‘development,’ and a synthetic output image is returned. Currently, I am conducting a field study with collaborators to probe (i) how an explicit ‘reality-autocorrect’ shapes our experience of the world and what this tells us about the status quo of ML-driven imaging technologies, and (ii) how ML technologies can be defined as a uniquely generative design material. Further details below the image gallery.

Publications

Benjamin, Jesse Josua. Forthcoming. ‘Three Post-Phenomenological Design Projects on and with ML Technologies as Philosophy-in-Practice’. Accepted to IASDR ’22.

Interaction Flow of the Entoptic Field Camera. Left: The Entoptic Field Camera prototype. Selecting the red button triggers the user agent’s camera interface. The toggle switch at the bottom lets users choose between a full GAN reinterpretation (Automatic) and selective GAN inpainting (Manual). Middle: Image taken using the user agent camera. Selecting “Use Photo” sends this image as input to the GAN API. Right: The generated image returned by the API, displayed in the green square of the left panel.

Adopting the metaphor of entoptics in both its medical (cf. ‘entoptic phenomena’, Oxford University Press’ Concise Medical Dictionary, 2009) and archaeological (cf. Lewis-Williams and Dowson, 1988) senses, I use the term to denote phenomena that arise in the interplay of the data processing and model inference of ML technologies (akin to ‘pattern leakage’). Accordingly, this speculative design project imagines a future type of everyday imaging apparatus powered by ML technologies (the Entoptic Field Camera) that treats pattern leakage not as an annoying bug but as a desirable design feature. In this project, I seek to investigate, through auto-ethnographic methods as well as a limited field study, what kinds of human-technology-world relations such an artefact constitutes, starting from the assumption that something like the Entoptic Field Camera is simply a given, everyday artefact.

Practically, the prototype (see above) is built using custom JS scripts and makes API calls to a GAN model hosted on the RunwayML platform. Triggering the red button calls up the user’s camera interface, where they take an image. When they choose to “Use Photo,” the image is encoded as a base64 string and sent to the RunwayML API. The hosted GAN then reproduces the image; more precisely, the GAN’s generator attempts a reconstruction from probabilistically inferred patterns until the GAN’s discriminator can no longer distinguish between the original and the generated image. The generated image is then fed back into the web prototype. In effect, the Entoptic Field Camera enables users to take a picture of their surroundings and receive a sometimes vaguely representative, sometimes oddly faithful, sometimes bizarrely otherworldly ‘reality autocorrect.’
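To make the round trip concrete, below is a minimal sketch in plain browser JavaScript of how such a prototype could capture a photo, encode it as base64, and exchange it with a hosted GAN. The endpoint URL, token, the ‘image’ request/response fields, and the element IDs are illustrative assumptions for this sketch, not the actual RunwayML hosted-model schema or the project’s source code.

// Minimal sketch of the photo round trip: capture, base64-encode, send to the
// hosted GAN, and display the returned image. ENTOPTIC_API_URL, API_TOKEN, and
// the { image } request/response fields are placeholders, not RunwayML's
// documented schema.
const ENTOPTIC_API_URL = "https://example.hosted-models.runwayml.com/v1/query"; // hypothetical
const API_TOKEN = "<your-token>";

// Read the photo taken via <input type="file" accept="image/*" capture>
// as a base64-encoded data URL.
function encodePhoto(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result); // "data:image/jpeg;base64,..."
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
}

// Send the encoded photo for 'development' and return the generated image.
async function developPhoto(file) {
  const image = await encodePhoto(file);
  const response = await fetch(ENTOPTIC_API_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ image }), // field name is an assumption
  });
  if (!response.ok) throw new Error(`API error: ${response.status}`);
  const result = await response.json();
  return result.image; // assumed: generated image returned as a base64 data URL
}

// Wire it up: the file input stands in for the red button, the <img> for the
// green square that displays the 'developed' photo.
document.querySelector("#camera-input").addEventListener("change", async (e) => {
  const generated = await developPhoto(e.target.files[0]);
  document.querySelector("#developed-image").src = generated;
});

In the actual prototype, the Automatic/Manual toggle would additionally parameterize the request (full reinterpretation vs. selective inpainting); that detail is left out of this sketch.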