
For the official version of record, see here:
Blom, I., & Fuller, M. (2024). A Filter Theory of Photography. Media Theory, 8(1), 107–132. Retrieved from https://journalcontent.mediatheoryjournal.org/index.php/mt/article/view/1070
A Filter Theory of Photography
INA BLOM
University of Oslo, NORWAY
University of Chicago, USA
MATTHEW FULLER
Goldsmiths, University of London, UK
Abstract
Filters – both technically and in a wider sense of the term – provide a new way of theorizing photography. Most obviously, filters sieve things, persons and data out of flows; more specifically, they are key to the non-optical formations at work in image production based on neural networks in machine learning. In this essay, we argue that theorizing photography from the perspective of filtering reconfigures photography as the entire field of material elements and processes involved in the production of an image and presents the photographic image itself as a distribution across a field of perception that includes various forms of technical sensing and formation. Ultimately, attending to filtering as the technical realities as well as aesthetic propositions in recent art projects allows us to understand photographic conditions as a wider and more general set of environments in which image-events occur with many kinds of causality and manifestation.
Keywords
filters, convolution, machine learning, Walead Beshty, photography
1. Travel Pictures
Tschaikowskistrasse 17 in former East Berlin is an architectural site that is best described as a geopolitical limbo. In 1969, Iraq’s Ba’ath Party Congress decided, as the first country outside the Eastern Bloc, to officially recognize the German Democratic Republic and open diplomatic relations between the two countries. In response, and through an unusual contractual agreement that was never to be repeated, the GDR gave Iraq a parcel of Berlin land and ownership in perpetuity of any building raised on the site. In 1973, an Iraqi embassy was formally opened. The special rights to the site and building were reaffirmed during the German unification process in 1990, even as Iraq established an embassy in the newly formed Federal Republic of Germany and vacated the former GDR site. If the Vienna Convention defines any diplomatic mission building as the inviolable sovereign property of the guest state, Tschaikowskistrasse 17 is that exceptional thing: a sovereign territory devoid of the presence of sovereign power. No wonder it would be invaded by individuals who exploit the real or perceived loopholes of legal ownership, also known as squatters.
No wonder, also, that this exceptional site might be invaded by camera eyes normally prohibited inside embassy buildings. In 2006, the artist Walead Beshty visited the site, photographing its interiors and exteriors. The images, whose focus appears careless or random, as if shot in a great hurry, document general disarray: Books, papers and debris strewn on floors, broken windows, piled-up furniture, sagging shelving systems, luxuriant plant growth encasing external walls. But this is just one representational layer in a series of images that are technically defined as “multiple exposures”. Left in his suitcase during Beshty’s return flight to Los Angeles, the films were exposed to the forceful X-rays of airport security baggage scanners. As a result, the visual record of the vacated embassy was overlaid with fogging, striations and haze – atmospheric effects that, in addition to being rather attractive formal features, intensified the geopolitical contexts of these photographic events (see Fig. 1). For it could in fact be argued that the strange state of exception in which the sovereign-yet-invaded embassy site finds itself is not entirely dissimilar to the softer suspensions of sovereignty that have been on the increase since the growth in international air travel met the expanding security demands of the fight against terror. During those moments of passage, states of exception proliferate: Bodies of all kinds are subjected to invasive scanning, surveillance and silencing procedures that would otherwise be deemed an attack on individual rights. Airport scanning systems are perceptual apparatuses whose registrations are rarely directly felt or seen by those subjected to them (Parks, 2009). Since there are seemingly no representations, the systems take on the deceptively neutral authority of the ubiquitous and self-evident (Kafer, 2023).
As in the intertwining of random human mess and equally random X-ray tracings in Beshty’s embassy images, what is produced is essentially a vast continuum of inchoate information or noise – but out of this continuum, nuggets of meaning are constantly sifted and sorted.[1]

Figure 1: See endnote [2]
In the aftermath of the so-called Travel Pictures (2006–2008), Beshty created a series of photographs consisting simply of the effects of airport security X-rays on clean, otherwise unexposed, photographic film. Referred to as Transparencies (2008–2014), these hauntingly beautiful abstract works seem to directly evoke the dialectics of transparency and obliqueness that have informed critical discourses on modern state surveillance, as well as important discussions in 20th century art and architecture. This is the framework outlined in Noam Elcott’s recent essay on Beshty’s work, and it is hard to deny its relevance (Elcott, 2019). If anything, it finds support in the idiosyncratic titling convention Beshty uses for individual works. “Travel Picture Rose [Tschaikowskistrasse 17 in multiple exposures* (LAXFRATHF/TXLCPHSEALAX) March 27–April 3, 2006]*Contax G-2, L-3 Communications eXaminer 3DX 6000, and InVision Technologies CTX 5000” is all at once a (for non-experts) barely readable chain of technical codes – and a precise denotation of the sites and dates of photographic recordings, the airports and brand of airport scanners through which the photographic film travelled, as well as the dominant colour code of the image as referenced in the ISCC-NBS Synthetic Dye System.
Even so, Beshty’s photographic work might call out for an alternative critical framework. Ultimately, the focus on transparency and its negations tends to wed the works to questions of representation. But there are good reasons to approach them in more emphatically procedural terms – for instance through the concept of filters and filtering. Of course, the association between photography and filters is so commonplace as to seem to warrant no special attention: from the tint of film stock to the placement of disks of coloured glass or gelatine in front of the camera lens to the range of transformative filters available to Instagram, Snapchat and TikTok users, photographic seeing was always emphatically filtered. And yet the question of filtering has not really been brought to bear on photographic theory – a task that seems particularly urgent at a time when the habitual emphasis on the photographic frame is displaced by an emerging understanding of the photograph as a sample produced at highly diffused inter-machinic conjunctures. Ever attentive to the procedural aspects of things, Beshty’s work and writing might in fact be seen as sharp mediations of such new photographic realities, and thus also act as an opener toward such a theory.
The Travel Pictures are a case in point. On the one hand they are instances of filtering in the obvious sense of layered effects that function as visual gateways: The messed-up embassy spaces can only be seen through the delicate skein of lines and colour splotches produced by X-ray exposure. But other modes of filtering are equally essential. For the material production of these works hinges on the highly unusual “access settings” of a sovereign territory, as well as their passage through discriminatory channels that determine the right of any object or person to travel from one territory to another. Airport security systems and embassies are key filtering devices in the sense that they regulate flow and access, sorting wanted from unwanted and visible from invisible while quite literally placing their stamp on anything they touch. The visual outcome of these various filtering processes can no longer be understood as the result of a single, determining event of photographic capture, however short or long. There is no “past moment” pointed to in these works, and for this reason a semiotically oriented reading of them as indexical traces will only get you so far. Instead, the works emphatically present themselves as chained instances of the visual registration of events that are both optical and non-optical, and that all, in their various ways, function as transformative gateways. The power of photographic vision is no longer allocated in ‘a’ machine related (or not) to ‘a’ human eye, but in a concatenation of technical affordances (in the limited sense of the term) and social and political machineries (in the wide sense of the term). To ask why these perceptual chains should even bear the name “photography” is to re-open the question of the relation between photographic vision and all the other seeing and sensing capacities that make up our reality.
Beshty’s suggestion might be to start by understanding the photographic image itself as a distribution across a field of perception that includes various forms of technical sensing and formation.
However, to understand the implications of this idea for those aspects of photography that are not necessarily (or not only) located within digital environments, a more precise account of filters is needed. And by the same token, we need to clarify the ways in which a filter-oriented understanding of photographic seeing breaks with the strong model of causality implicit in the concept of photographic indexicality – a model that seems to have survived the technical transition from chemical to pixel-based photography, despite some initial misgivings (Marks, 2002). In what follows, photographic filtering will be understood as a series of events or effects that are related not through causal chains – predicated, for instance, on indexicality or pointing to a thing that was there and that leaves a trace in the image – but rather through quasi-causal correspondences or resonances. The theoretical model for this perspective is found in Gilles Deleuze’s engagement with Stoic thought in The Logic of Sense (1990) – a form of thought which impresses him precisely due to its bold splitting of the relation between causes and their effects. Destiny for the Stoics does not entail necessity. Causes (or destiny) are primarily part of a unity that is proper to them, an ideal physical reality that the soul can only attend to as a form of interiority. In contrast, effects are all at once the effects of these causes and something distinct, part of an entirely different set of relations, since they differ in nature from the causes themselves. For effects are a form of pure exteriority: they are the expressions of the soul, with all the freedom that this entails. In The Logic of Sensation, where Deleuze (2003) revisits these themes via the work of Francis Bacon, he differentiates the scream, for instance, from its cause, the horror. 
It is the scream that Bacon paints, a different event to the horror and one that, in painting, in being screamed, may even allow one to wrestle with that horror. This particular example is one that involves something, a painting, that might be interpreted as a representation. What it points to however is a wider process of dynamic formation.
This is why causes and their effects are related only through an incorporeal quasi-cause, such as the scream, or the image, something that only comes into play via the event of the effect and the new perspective it entails, an instance of independence which unchains destiny from necessity. From this point onward, the realm of causes is less interesting than the complicated realm of expressive events, since different expressions can be compatible as well as incompatible. The effects of quasi-cause are therefore best described as aggregates of echoes, resumptions and resonances that fall into conjunctive and disjunctive series. At this point, Deleuze maps Stoic thought onto Leibnizian concepts of compossibility and incompossibility, which are sharply distinguished from the purely logical and oppositional relation between the possible and the impossible (Tissandier, 2018). For the compatibility or incompatibility of expressive events have nothing to do with logic, only with the differentiating tendencies and trajectories of series of expressions that constitute an emergent reality (Deleuze, 1990). If photography is something more or other than the index, the death mask and the irretrievable past moment, expressive quasi-causality and in/compossibility are useful devices for describing its real-world effects: its radically distributed field of perceptions as well as, relatedly, the techniques of filtering that are inseparable from whatever bears the name ‘photo-graphy’.
2. A typology of filters
A provisional description of filters may single out two broad categories: namely one-layer filters and thick filters. This is in no way a canonical definition, only one that is useful for this project. By describing a move from simpler to more complex forms, it has aspects of a historical trajectory. Still, it is not a chronicle, but a contribution to a developing understanding of the new formalisms that appear in a widening gyre of instauration as computational forms mix with non-computational realities.
Thin, uniform and applied equally across the incoming light or data, one-layer filters simply take a source and apply a process to it. They may operate on electromagnetic waves in the visible and non-visible spectrum or, in the case of computational filters, do so by mathematical and electronic means. The first kind is the historically precedent one: it takes an incoming light source and reduces it, acting as a direct sieve for the elements it absorbs. Computational one-layer filters are less direct, since they keep a memory cache of the transformations they produce and therefore establish a feedback loop between operations and updates. However, whether acting on a physical source or on data, both types reduce their source in some way. This differentiates one-layer filters from the more recent thick filter types of many multiples of layers, which typically work by adding data in various forms.
Wratten filters are typical examples of one-layer filters that act directly on light before it hits the lens and the film itself. Named for their inventor, the early photographic innovator Frederick Wratten, they are used in scientific and technical imaging processes and also to achieve different kinds of spectral effects in black-and-white and colour film photography. Such filters ‘absorb’ specific parts of the electromagnetic spectrum: by absorbing ultraviolet rays, you may for instance limit the perceived ‘coldness’ of an image that contains more blue tones than the naked eye can see. In black-and-white photography, yellow filters might be used to darken a bright sky blue for atmospheric or detail purposes, or to compensate for daylight or flash exposures. With colour film, filters may introduce a haze, intensify contrast or change a specific colour-value by a particular amount (Kodak, 1969). During the age of film photography, expertise in the use of such filters was a key part of a photographer’s skill set.
In contrast, digital one-layer filters are typically used in postproduction – although they may also be part of the photographic event and even provide the context in which the event takes place. Such filters may govern the passage or non-passage of light, but they may also highlight peculiarities in the composition of an image, absences that become visible through excision, or abrupt jumps between colours. However, in the late 20th century, programs such as Photoshop and GIMP introduced yet another kind of one-layer filter. Filters were added to Photoshop in version 0.87 of 1988, and with the advent of version 1.0 in 1990 filtering had become key to what was meant by the verb ‘to photoshop’, part of the working vocabulary of most people dealing with computer graphics. At this point, the effects of the original Wratten filters were simulated by mapping colours across 256 levels (0–255) of red, green and blue and manipulating each of these as ‘channels’ that were subject to different levels of hue and saturation.[3] Basic filters such as Gaussian Blur and Sharpen, different kinds of stylization (such as Blur, Blur More, Diffuse, Mosaic and Motion Blur) as well as mechanisms for dealing with interference patterns (such as Despeckle) were incorporated at this point. Rather than acting on single pixels, filters operate on clusters of them, establishing relations between their values depending on the pattern of the filter. In addition to evoking the now-obsolete darkroom phase of photography, as Lev Manovich (2011) notes in an overview of this transition, such software also added novel computational capacities, new ways of analyzing and manipulating the image. Crucially though, just like the physical filters that predated them, Photoshop and, later, GIMP filters acted on the image through a single transformation at a time.
In everyday photographic practices these can be pipelined in sequence to create new kinds of images, but they essentially stay in the realm of the one-layer filter in the sense that a set of numerical functions is applied to a given matrix of values. While such techniques set up the possibility for more complex filter types, they differ from images produced in photographic approaches based on machine learning, where filters become denser and more active.
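The mechanics of such a one-layer filter can be sketched in a few lines of Python. This is a schematic illustration only, with invented pixel values and a standard “sharpen” kernel, not a reconstruction of Photoshop’s or GIMP’s actual code: a single numerical function is applied across a matrix of values, each output pixel computed from a small cluster of its neighbours.

```python
# A minimal sketch of a one-layer computational filter: one 3x3
# convolution kernel swept across a grayscale image. The image values
# and the "sharpen" kernel are illustrative examples.

def convolve2d(image, kernel):
    """Apply a 3x3 kernel to each interior pixel of a 2-D image."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            # Clamp to the 0-255 range of an 8-bit channel.
            out[y][x] = max(0, min(255, acc))
    return out

SHARPEN = [[0, -1, 0],
           [-1, 5, -1],
           [0, -1, 0]]

image = [[100, 100, 100, 100],
         [100, 150, 150, 100],
         [100, 150, 150, 100],
         [100, 100, 100, 100]]

sharpened = convolve2d(image, SHARPEN)
```

Pipelining such filters, in the sense described above, is simply a matter of feeding the output matrix of one function into the next: each remains a single transformation at a time.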
If one-layer filters rely on single transformations, alone or in series, such thick filters operate by holding together and weighing divergent processes. They may have the absorptive functions we find in one-layer filters but should above all be understood as fields of complex interrelations. Some of the most interesting conditions of thick filters have been outlined in Naja Grundtmann’s sophisticated cultural and philosophical analysis of the perceptual logic of machine learning systems. Grundtmann (2023) suggests that we understand convolutional neural networks as examples of a distinct ‘convolutional aesthetics’ that is all at once a feature of a specific type of network, and – potentially – a perspective that concerns a contemporary culture that is increasingly defined by expansive, inter-machinic filtering formations.
As the term indicates, thick filters contain many layers of filtering, and while some of them are combined in one mechanism, such as, for instance, a specific neural network, others are composed of more distributed and non-synchronized processes of filtering. Drawing on Beatrice Fazi’s (2018) emphasis on the generative complexity that exists within the discrete operations of computational processing, as well as Winnie Soon’s (2016) perspectives on computational processuality, Grundtmann argues that images produced by convolutional neural networks are the result of a “dynamic environment that modifies itself via coded interactions with de-ocularized (i.e. numerical) input data” (2023: 116). A photograph, in this context, is best understood as a complex of samples, and convolution is one way of describing the inter-machinic conjunctures in which it is produced. In mathematical tradition, convolution describes a process by which two or more mathematical functions are combined. A value held in a memory, such as a set of numbers describing the colour and luminance values of a pixel, may be either modified or originated in convolution. In this context, digital images are defined as matrices of values that may be produced, analyzed and reworked by convolutional actions.
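The mathematical sense of convolution invoked here can be sketched in one dimension. The sequences below are arbitrary examples standing in for two functions; the point is only the general form of the operation, in which two series are combined into a third by sliding one across the other and summing the products:

```python
# Discrete convolution: (f * g)[n] = sum over k of f[k] * g[n - k].
# Two sequences (standing in for two mathematical functions) are
# combined into a new, third sequence.

def convolve(f, g):
    """Return the discrete convolution of sequences f and g."""
    n = len(f) + len(g) - 1
    out = [0] * n
    for i, fv in enumerate(f):
        for j, gv in enumerate(g):
            out[i + j] += fv * gv
    return out

# An example signal combined with a simple smoothing function.
signal = [1, 2, 3]
smoother = [0.5, 0.5]
combined = convolve(signal, smoother)
```

The resulting values belong fully to neither source sequence: convolution originates a new set of values out of the relation between the two, which is one way of making concrete the claim that pixel values may be “either modified or originated in convolution”.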
As Grundtmann argues, in dialogue with Adrian Mackenzie’s Machine Learners (2017), the learning part of a neural network consists of the “discovery of the mathematical function” that is able to map an input onto an output (2023: 133). Such learning consists of the formation of a specific relation, which, once learned, can then be transferred to interactions with other kinds of input. The power of machine learning is precisely the capacity for generating new relations that can subsequently be given a wider application. The discovery and application of such new relations may simply add new information, but they may also open onto significant features, for instance latent formations within data. Once a function is discovered, it is independent of its terms (that is, the specific numbers, or algebraic symbols for them, between which it arranges a relation). In this state, the function is unsaturated, as Frege (1980) puts it.[4] As a relational entity, the unsaturated function has its own modality of mediatic existence, with various potential scales of articulation: It could be a notation in a formula, an arithmetical operation, or an encoding within computational abstraction layers. Such scales of articulation all have their proper or idiomatic sets of relations and may even change register or be reinterpreted at certain moments of accident, inscription or parsing within a system, or by being subjected to interpretation by another system, so that they take on new forms of saturation or migrate to different contexts. Even though they may treat data matrices as seemingly incompatible as a set of medical samples and a series of holiday snaps, the mathematical expressions of the same relation or concatenation of relations may establish a processual kinship across different kinds of images and image functions.
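The idea of learning as the “discovery of the mathematical function” that maps input onto output can be illustrated with a deliberately toy example. The data and learning rate below are invented for the sketch; an actual neural network discovers vastly more complex, many-parameter functions, but the principle of iteratively adjusting a relation until it fits is the same:

```python
# A toy illustration of learning as function-discovery: gradient
# descent finds the weight w such that output = w * input. The data
# here are generated by the hidden relation y = 2x.

inputs = [1.0, 2.0, 3.0, 4.0]
targets = [2.0, 4.0, 6.0, 8.0]

w = 0.0                           # initial guess for the relation
for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x
               for x, y in zip(inputs, targets)) / len(inputs)
    w -= 0.01 * grad              # step against the gradient

# Once discovered, the function is independent of its original terms:
# it can be applied to inputs it was never trained on.
prediction = w * 10.0
```

The last line makes the “unsaturated” character of the discovered relation concrete: the function `w * x` has detached itself from the specific numbers between which it was first arranged and can be transferred to new input.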
What Grundtmann derives from Soon’s work on the aesthetics of constantly ongoing computational processuality is the all-important point that unsaturated numerical functions are by definition turned outward – toward the world. Scaled up, they alert us to the political significance of the synthetic terrains that emerge from machine learning systems and their ongoing filtering operations. As thick filters are increasingly widely dispersed in contemporary society, they ground new types of social and aesthetic experience, and even new ways of inhabiting life. Among other reasons, the widespread concern with new forms of bias and discrimination that may result from machine learning systems makes critical engagement with them especially pressing. When calculated and arranged by different kinds of neural networks and neural network contexts, they form diverging series of compossibility and incompossibility whose specific tendencies may come to determine life in automated ways – forever transforming our understanding of the powers of photography’s automatic images.
Still, as demonstrated by the example of Walead Beshty’s Travel Pictures, a critical approach to filters and filtering may extend beyond the mechanisms most immediately associated with photographic technologies “proper”, in their various phases, forms and levels of complexity. From such perspectives, the question of filters can also work as an approach to the wider media ecologies of photography, suggesting new stakes for the theorization of photography more generally.
3. Image space as phase spaces
In the convolutional processes that create thick filters, the difference between any one image and any other image is traversed by the calculation of the various parameters that compose a given matrix of values. Here it is useful to understand image formation in terms of a phase space, a map of all the states in which any particular system can be. An imaginary phase space of all images, all possible image dimensions and all possible variations of colour and luminance at each pixel is implicated whenever you single out any image or sample from this meta-population of potential images. This imaginary is part of what is at play in the sense of a plenitude of images being generated as probabilistic calculations of possible images. It forms a radical break with the optical coordinates of image spaces understood as distinct forms of visual representation – but it is congenial with a range of aesthetic experiments and expressions. With Travel Pictures, Beshty outlined precisely such a conception of image space as a potentially unlimited phase space. And while the description above, drawn from Grundtmann, pertains to raster images, other techniques may indicate other types of phase spaces. Vector graphics in computer imaging may be one example; photographic image conventions or genres such as portraiture, landscape or pack-shots are perhaps other ways of producing such a phase space with a different kind of logic of inclusion and implication.[5] But, as Beshty’s work shows, phase spaces can also emerge through other kinds of latent conjuncture, in the sense that photographic means are gained in equipment that is not intended as such – for instance when luggage passing through airport x-ray machines functions as a camera making a recording of its own passage.
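The scale of such a phase space can be made vivid with back-of-the-envelope arithmetic. The resolutions below are arbitrary examples; the calculation simply counts the distinct states available to an 8-bit RGB raster of a given size:

```python
# Counting the "phase space of all images": the number of distinct
# 8-bit RGB images at a given resolution is levels^(channels * pixels).
# The example dimensions are arbitrary.

def image_phase_space(width, height, levels=256, channels=3):
    """Number of possible images at the given resolution."""
    return levels ** (channels * width * height)

# A single pixel already admits over 16 million states...
one_pixel = image_phase_space(1, 1)

# ...and an 8x8 thumbnail yields a count hundreds of digits long,
# dwarfing the roughly 10^80 atoms of the observable universe.
thumbnail_digits = len(str(image_phase_space(8, 8)))
```

Any actual photograph, on this view, is a single point sampled from this combinatorially vast meta-population, which is what gives the probabilistic generation of “possible images” its purchase.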
Within the potentially infinite framework of such image phase spaces, the question of what may, at any one time, constitute a distinct or significant entity becomes critical, both technically and aesthetically. One of the most telling features of Beshty’s filter-oriented approach to photographic imaging is the way in which it foregrounds new questions regarding the very relationship between the continuous and the discrete. In analogue photography, the traces of light on the photosensitive surface were understood as continuous, while the film frame defined each shot as a discrete entity. The long tradition of photomontage did little to change these ideas, since the interesting tensions produced in montage forms derived from the fact that each photographic fragment would point back to the original constellation of continuous inscription and discrete frame from which it had been cut. And while the composite nature of digital photographs obviously displaced the notion of a continuous image space, cultural concerns about the autonomy and evidentiary value of whatever was thought to be contained “within” the frame of the single shot tended to obscure the ways in which the filter automatisms of digital image editing programs might produce very different forms of discreteness and continuity. For better or worse, the photographic frame long remained the dominant point of reference for understanding the photographic image.
And yet, this is precisely the mode of understanding that is most emphatically challenged in Beshty’s work. Invited to curate an edition of the photography magazine Blind Spot (#46, 2013), he chose for the cover image Morgan Fisher’s Actual Size – a photograph of a pair of glasses against a white background that filled the entire front page. As the title indicates, the size of the image should always be formatted so that any print version accurately mimics, in a 1:1 ratio, the actual size of the object it depicts. The effect is striking: the flat white background makes it look as though the glasses are placed on top of the magazine, rather than being represented in the cover image. The cover image had, in other words, become an edge within the overall frame of the cover format, expressing the uncertain boundary between object space and image space.
A key function in computational one-layer filters, as well as in the broad category of thick filters, is notably the automated capacity to identify and calculate entities such as edges, so as to single out a part of an image as a new digital object. Already Photoshop 1.0 included a “Find Edges” tool, which identified distinct areas of difference in colour or luminance value. The specific modalities of edge-detection also have their own texturing capacities, where different kinds of diffusion and separation effects come into play. A ‘Magic Wand’ tool for instance includes settings that allow the user to determine the selection of pixels by adjacency and value, or the different levels of ‘dithering’ that can be applied to the boundary of a selected part of an image. Such patterns of differentiation also echo in wider definitional issues: When a continuous entity, such as a curve, appears or is constructed in a digital context, various forms of quantization are deployed to negotiate processes of regression to and progression of finite differences of the infinitely small. A number of different techniques have been generated to deal with continuous objects by digital means such as Bézier curves, NURBS (non-uniform rational B-splines) or sine waves, amongst others. In all of these cases, an entity may be represented in discrete terms in order to act as a manipulable interface element to an entity that also exists as the mathematical description of a continuous curve. In this play between the discrete and the continuous, techniques for the expansion or contraction of resolution may be deployed, but these also tend to hit boundaries of the infinitesimal, making cuts either arbitrary or meaningful in their negotiation of the jagged and the smooth. Here, the very formulation of specific objects or problems in terms of the discrete may be decisive – as in the national boundaries enacted in airport security.
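The elementary logic of edge-finding can be sketched as a finite difference between neighbouring pixel values, thresholded to mark a jump as an edge. The scan line and threshold below are invented for illustration; Photoshop’s actual “Find Edges” operator is considerably more elaborate, but it rests on the same principle of detecting sharp local differences in value:

```python
# A minimal sketch of edge detection: mark positions along a scan line
# where adjacent luminance values jump by more than a threshold. The
# values and threshold are illustrative examples.

def horizontal_edges(row, threshold=50):
    """Return True at each position where adjacent values differ sharply."""
    return [abs(b - a) > threshold for a, b in zip(row, row[1:])]

# A scan line crossing from a dark region into a bright one.
scanline = [20, 22, 21, 200, 198, 199]
edges = horizontal_edges(scanline)
```

The choice of threshold is itself a small act of quantization: it decides at what magnitude a continuous gradation of values is cut into the discrete categories of “edge” and “not edge”.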
Although crucial, edges are still relatively simple forms. More complex sets of features may involve the multiple edges or dynamic geometries at work in facial recognition techniques, amongst others. Finding such entities in an image depends on transposing an object from one set of images onto that from another – for instance by recognizing a pattern of data corresponding to a hat and moving this onto the head of an entity drawn from a set of sources corresponding to an identified person (Nguyen et al., 2017). An extrapolation of this approach was among the issues behind the recent actors’ strike in the US, where data from footage of an actor may be combined with that of a body double so as to mimic their presence, allowing their image to become a more plastic and deployable asset.
But there are more complex forms to be found, some of which are not even immediately discernible. Multi-layer neural networks tend to produce so-called ‘latent spaces’ – abstract multi-dimensional formations that generate much of the novelty which such systems are sometimes capable of producing. Latent spaces contain linked sets of feature values that may not always be directly interpreted, but that may still encode a meaningful internal representation. Latent spaces emerge in certain kinds of machine learning when data that the network has interpreted as being similar are embedded or compressed by using dimensionality reduction – that is, by interpreting certain aspects of the data as being more significant than others. Often usefully, but sometimes uncannily, the specific dimensions that are elevated in this way may not map onto phenomena that can be perceived by humans. Latent spaces also traverse media forms in ways that are particular to those forms despite their general digitalization: for instance, word embeddings describe those formed in text; image feature spaces are composed in relation to images. Rather than relying on the absolute logical determination and rigid conditions of compossibility established by fixed vectors, such configurations may generate and foreground contingency as a structural presence. In neural networks, contingency may appear in latent spaces in the forms of pareidolia (seeing concrete entities in nebulous sources), apophenia (interpretatively connecting the disconnected) and other non-standard forms of machine perception. Such effects and forms may be seen as obtuse, unexpected, insightful or perverse, and may ground claims of novelty, relevance, creativity or predictability; they may also be the nesting ground of biases. In short: The latent spaces that populate neural networks are a crucial aspect of their generative capacities, as well as the various forms of power that are attributed to them.
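The principle of dimensionality reduction can be caricatured in a few lines. The keep/drop rule below, which ranks dimensions by their variance across the data and discards the least variable, is only a stand-in for the learned compressions (autoencoders and the like) that produce actual latent spaces; the data are invented for the example:

```python
# A schematic sketch of dimensionality reduction: dimensions that
# barely vary across the data are treated as insignificant and
# dropped, leaving a compressed "latent" representation. The data
# and the variance-ranking rule are illustrative stand-ins.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def reduce_dimensions(data, keep=1):
    """Keep only the `keep` dimensions with the highest variance."""
    dims = range(len(data[0]))
    ranked = sorted(dims,
                    key=lambda d: variance([row[d] for row in data]),
                    reverse=True)
    kept = sorted(ranked[:keep])
    return [[row[d] for d in kept] for row in data]

# Four data points: the middle dimension barely varies, the last
# varies most, and is the one the reduction elevates as significant.
data = [[0.0, 5.0, 10.0],
        [2.0, 5.1, 30.0],
        [4.0, 5.0, 50.0],
        [6.0, 5.1, 70.0]]

latent = reduce_dimensions(data, keep=1)
```

Which dimension survives is decided entirely by the internal statistics of the data, not by anything a human observer would necessarily recognize as the “meaning” of that dimension, which is one source of the uncanniness noted above.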
As seen in the above descriptions of image phase space, submerged latent spaces and negotiations of discreteness and continuity, filtering operations in general and thick filters in particular introduce entirely new levels of abstraction to our engagement with photographic imagery. Abstraction, in this context, is neither the refusal of representation nor the negation of concrete experience, but, as Alberto Toscano (2008) underscores, a process that is immanent in any construction of perception and experience as well as in the various mechanisms that extract surplus value from such construction. Filters are themselves composed in a movement between anticipatory abstraction and immanent formation that in turn shifts the grounds of abstraction. Working with or through these conditions is a crucial aspect of contemporary cultural practices.
In fact, Beshty’s consistent emphasis on procedure demonstrates a willingness to engage with and even exacerbate the new abstractions of photography, both technically and formally. Behind its edge-oriented cover, Blind Spot #46 articulates further mediations of photographic phase space. What you expect from a curated magazine presentation of photographic works is typically a continuity of pages in which separate images are formatted and sequenced according to some overarching principles of narration and design. What you got in this case was something entirely different: a visible layout grid in which the selected images primarily appeared as a population of weighted items, determined by file size. Inside the layout grid, all images were reproduced at a single relative scale of 1:6, with the result that some of them were barely legible, while others would stretch, absurdly, across several pages. Images here were quite simply defined as entities available for techno-mathematical operation and filtering, rather than human-type filtering (reader/viewer attention).
To see photographic images as clusters or populations inhabiting the distributive space of modern media networks of course has precursors. One aspect of André Malraux’s (1967) concept of the ‘museum without walls’ was notably the free-flowing wealth of photographic reproductions of artworks; more recently David Joselit (2013) has addressed a Google-driven visual culture in which images operate as swarms, literally creating a ‘buzz’. However, such ideas get a different inflection once you pay attention to some of the most consequential features of the computational life of images. For it is precisely in the potential (but technically real) phase space of all possible images, image dimensions and variations down to the level of the single pixel that the concept of the distinct photographic event – the ‘death mask’ theory of photography – truly comes undone, along with the still-habitual distinction between image production and image distribution. In its place, Beshty presents a different theory of photographic ‘death’, in which it is precisely the negotiation between continuity and discreteness that is at stake. In the essay ‘Against Distinction: Photography and Legendary Psychasthenia’ (2016), Beshty uses Roger Caillois’ 1935 discussion of animal mimicry as a template for an account of photography that is adequate to his own practice. Caillois famously countered the idea that morphological mimicry in animals – for instance, a butterfly mimicking a leaf – was a defence strategy, a way of hiding from predators. Too many examples showed it to be not just ineffective but even actively counterproductive, and so mimicry should rather be understood as a dangerous luxury, the result of a mimetic desire that might be a residue from an evolutionary stage when bodies were more plastic than today.
Importantly, the desire to literally shed your distinctions and disappear into the fabric of the surrounding environment was defined by Caillois as “actual photography” – an automated “temptation by space” resulting in a three-dimensional photographic image-sculpture (Caillois, 1984: 28). However, what was a metaphor for Caillois is, in Beshty, turned back onto the technical and aesthetic reality of photography itself. The tendency of photographic objects to ‘play’ or mimic other objects ultimately turns the question of photographic distinctness into a matter of edge detection. For vision machines, there are no edges ‘in’ the world – they are simply functions of the way in which their perceptual systems encode differentiating signals (Zylinska, 2017). By the same token, photography is posited in a wider perceptual space of ongoing adaptation, infiltration and convolution, a space where vectors of attraction – like temptation by space – move through and across image-bodies in an ongoing production of new indistinctions and distinctions. From the procedural perspective of filter technologies, photography simply seems to enact Caillois’ description of an organism that is not just “subject to depersonalization by assimilation to space” but also, significantly, “no longer the origin of coordinates, but one point among others” (Caillois, 1984: 30).
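The point that for vision machines there are no edges ‘in’ the world – only responses of a perceptual system encoding differentiating signals – can be made concrete with a small, generic convolution sketch. The kernel and the toy image below are illustrative assumptions (a standard Sobel-style horizontal-gradient filter, not any specific system discussed above):

```python
import numpy as np

# A 6x6 'image': dark left half, bright right half
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A horizontal-gradient kernel (Sobel-style). The 'edge' is nothing
# but this kernel's response to differences between neighbours.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

def convolve2d(img, k):
    """Valid-mode sliding-window filtering, written out explicitly
    (cross-correlation, as the term 'convolution' is used in CNNs)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (img[i:i+kh, j:j+kw] * k).sum()
    return out

response = convolve2d(image, kernel)
print(response)
# Non-zero responses appear only along the boundary columns: the
# 'edge' is a property of the kernel's encoding, not of the scene.
```

Applying the transposed (vertical-gradient) kernel to the same image would report no edge anywhere: what counts as an edge is entirely a function of the filter.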
4. Anisotropic images
If the invention of the one-point perspective was a major event in terms of suturing the coordinates of images to a new ordering of the relationship between subjectivity and infinite space, what are we to think of a reality in which photographs refuse to act as the origin of coordinates? As Hubert Damisch (1994) has argued, the cultural powers of one-point perspectival depictions are reasserted in every camera-based image. But on the other hand, photography always had a life beyond the confines of the camera. The black-and-white raster pattern produced by light filtered through a piece of lace placed on a photosensitive surface became, in the early 19th century dialogue between Henry Fox Talbot, Charles Babbage and Ada Lovelace, a source in the development of the first working binary computer (Batchen, 2006). For numerous early 20th century artists, camera-less photography was a way to create images in which unknown spatial formations would automatically reveal themselves – and yet such photograms still tend to retain a sense of space contained within the frame of the image. Contesting this, Moholy-Nagy’s Light-Space Modulator (1922–1930) might be seen as an attempt to move the principles of his photogram-based work into a dynamic arrangement of space and time that immerses the viewer. More procedurally and dramatically, Beshty consistently arranges photographic works that can only be understood as three-dimensional entities subjected to radical spatial confusion or depersonalization – image-bodies that are truly no longer the origin of their own coordinates.

Figure 2: See endnote [6]
As it happens, such image-realities present themselves in works that may on first impression appear to be rather conventional (if spectacularly beautiful) examples of non-representational photography, neatly framed and exhibited. What you see in the 2007 series called Multi-Sided Pictures are crystalline, coloured shapes whose exceptionally complex gradations of hue and light defy any attempt to find your spatial bearings in the image. But this is in itself no big deal: it is a mainstay of artistic abstraction from Cubism onward. What matters is the fact that these photographs are in no way abstract, but rather aim at a radical immanence which produces an abstraction (see Fig. 2).

Figure 3: See endnote [7]
First of all, Beshty lets the very shape of the photographic paper define the form of the image, rejecting a distinction between depiction and material support that has actually been a far more stable convention in photographic (re)production than in painting. Each work is actually a unique photogram that is created by folding sheets of light-sensitive photographic paper into many-sided three-dimensional forms, and having each side exposed to a specific colour of light – the additive hues of human colour vision (red, green and blue) as well as the subtractive hues of offset and inkjet colour printing (cyan, magenta and yellow). This results in the crystalline forms that can only be fully appreciated once the paper is folded out and flattened into a two-dimensional image-surface. Yet through it all, the precise lines made by the creases in the paper remain visible. And while they support the visual play of fleeting crystalline forms, they also constantly force these merely optical effects to collapse onto an actual world of three-dimensional paper folds. These are images that insist on their spatial actuality. In RA4 Color Relief Works (2022) the same photographic procedure is used, but with the difference that in this case the photographic paper is allowed to retain its three-dimensional shape, like a butterfly becoming a photographic sculpture-image of a leaf (see Fig. 3).
Image-objects like these are not only singular entities: they are also anisotropic, meaning that their properties vary with direction, with no privileged axis or orientation. As they are products of the circumstantial constraints of the situation in which they happened to find themselves, they display no stable relations of scale or position that can be used to map a world or be transferred wholesale to other images. If we tend to look to images for guidance, these are guides to nothing in particular: just samples, points among points, image-potentialities in a phase space where filtering procedures above all alert you to the problems that arise due to the immense variability of values that may be ascribed to each combination of image elements. Relatedly, as Grundtmann (2023: 141-2) points out, if you want convolutional neural networks to produce human-readable images through feature-detection, strong overfitting of the training data may be needed. A lot of work is needed for a neural network to read a constellation of pixels as ‘an apple’. To the extent that the picture frames encasing many of Beshty’s image-objects may also be understood as a type of filter, they are perhaps instances of a kind of ‘overfitting’ that facilitates the interpretability of these strange photographic realities within the specific mediatic contexts of art galleries and museums. It is, however, an illusion of interpretability: from their shape-shifting, irregular structures, other types of perceptual tendencies or directionalities might equally well have been singled out.
To propose a filter theory of photography might appear as an obvious response to a contemporary photographic culture where machine learning is increasingly ubiquitous and where we are faced with the fact that “since there is no clear separation between the perceptual logic of the system and the data whose patterns are being analysed”, photography no longer gives us single, separate sites of aesthetic discernment (Grundtmann, 2023: 149). This is of course entirely true. However, historical perspectives on the genealogy of the photographic image might also give such a theory a more general foundation. When discussing the different ways in which images might be understood as protagonists or active agents in the world, Horst Bredekamp (2018) presents, amongst others, the category of ‘the substitutive image’: a long tradition in which images and bodies constantly stand in for each other through forms of imprinting which in many cases are quite emphatically processes of filtering. And what is particularly striking about this account of images treated as bodies, and bodies as images, is the way in which this tradition is linked, both epistemologically and materially, to the historical emergence of photographic technologies. Starting with the concept of the vera icon – the mythology and reproductive practice surrounding pieces of cloth that were said to carry the imprint of the face of Christ – Bredekamp shows how such textiles, which were at once body and image, could at times even appear in paintings as a transparent grid superimposed on a section of the underlying image, mimicking precisely the type of grid filter used to produce or evaluate the construction of perspectival space in post-renaissance paintings and drawings. 
Beyond the religious context, the vera icon tradition reappeared in a scientific context where ‘nature printing’ – the direct imprint of perishable natural objects on certain types of fabric or paper – was deemed all-important empirical material for natural history; in an economic context, the imprint on bank notes of the extremely complex and therefore hard-to-copy veining patterns of an individual leaf specimen was used as a protection against forgery. Nicéphore Niépce’s experiments with photography in the early 1820s were a direct extension of such traditions: An old engraving, rendered translucent due to having been covered with wax, functioned as a filter that allowed rays of sunlight to pass through the unmarked areas, leaving a hardened pattern on a bitumen-covered glass plate placed under the engraving. When the non-hardened parts were washed away, the remaining pattern replicated the lines and shapes of the picture (Bredekamp, 2018: 148-50). From the outset, photographic images emerged out of a world of distributed perceptions, where bodies and images were interchangeable entities that acted on each other in progressive series of substitutions, distillations and transformations.
5. Evidence and experience
A filter theory of photography thus complicates the common view of photographic images as indexical signs. Emphasis on the relatively straightforward causal relationship between the image and its source gives way to a preoccupation with the different forms of quasi-causality at work when photographic images exist as specific occurrences in a vast swath of spaces of formation. This aggregate space is non-uniform and patchily constructed, made up of many different techniques with inherent material differences. It is discontinuous, yet cannot but continue to grow. In fact, it can only be figured as the space of all possible images – a term which is in itself somewhat delusional. The problems of edge detection show the difficulty of deciding what does and does not belong to such a set, for the key terms at work here – ‘all’, ‘possible’ and ‘images’ – are dubious categories in the first place, implying a position from which a summative view can be produced. No such position exists or can exist, yet our imagination may still address ideas of its motley vastness as one of myriad inter-machinic conjunctures.
A compelling question is then what this discontinuous space of formation and its accompanying imaginaries does for evidentiary, documentary or investigative modes or uses of photography. Here, a filter theory of photography can point to two tendencies that occur together and, in their interaction and mutual interference, provide capacities for the formation of claims to truth. If we see convolutional procedures as an interaction between, on the one hand, the movement between immanences and abstractions, and, on the other, the unfolding effects of compossible and incompossible tendencies, we can start to understand the inevitable struggle for the very formation of grounds from which to assemble putative facts in the regime of filters.
At this point, it’s worth taking a clue from the distinction between minimal causation and field causality that has been presented in the context of a form of investigative work where claims to truth depend on a careful construction of large, complex and atypical series of material traces and events. Minimal causation is the pragmatic model for truth claims in the legal system. Here, focus narrows down to the question of movers in the last instance, such as ‘who pulled the trigger’ – which, translated to the context of photography, equals the truth indexed by the isolated moment of ‘the shot’. In contrast, field causality is based on a more expansive, environmental approach that follows the various threads leading outward from the minimal cause of an incident to the wider world of which it is part (Fuller and Weizman, 2021). Such work, which involves establishing relations between a vast, ‘dirty’ and in principle infinite jumble of disparate phenomena, is the mark of the counter-investigative practices of groups such as Forensic Architecture: In their work, new political-ethical and/or legal truth claims may be established by repositioning minimal causation incidents within larger and more complex patterns and temporalities of causation.
To argue that there is evidentiary value in photography viewed from the perspective of filtering in general and convolutional procedures more specifically is to call attention to the fact that any grain of truth contained within a single photographic shot is in reality always positioned at the crosshairs of compossible and incompossible series. At the levels of form, technology, materiality, iconography, theme, narrative, context, politics and culture, a single image may be an addition to a range of expressive series to which it already conforms at the same time as its expressive potentials may generate any number of divergent, or incompossible, series. The minimal cause of the single photographic event is by definition nested inside larger fields of quasi-causality. And while this is true of photographic images in general, one of the important consequences of the convolutional filtering procedures at work in machine learning is that they make such perspectives tangible as techno-mathematical realities that might, under certain conditions, such as those of algorithm audit or reverse engineering, be partially interrogable.[8] To pay attention to the life of photographic images in machine learning is to get a hands-on experience of the vast multiplicity of potential directionalities and tendencies that extend from any given image or image unit at a purely technical level, as well as the cultural and political stakes at work in the technical efforts to select and refine certain types of tendencies over others, for instance through the overfitting of training data. The tensions at work in the interaction between instances of immediate causation and the more ambient regimes of causation we see here have evidentiary value in and of themselves, in the sense that they may highlight symptoms and clues of what takes place in the cross-contamination between image-formation techniques and other sets of material relations.
These intersections between minimal causality and the wider domains of field causality can also open the question of how a filter theory of photography might provide new accounts of experience. What is, for instance, the texture of life at such intersections, as their various ‘moments’ and tendencies are created, tweaked, realigned, missed or recorded by image formation? What capacities for play and for disillusion does it produce? To recognize the convolutions of experience might be a way of attuning to or inhabiting life – similar to how you may feel the photons of light pass through you and age your tissues, while also experiencing the sensual delight of swirling some of their dregs round the cup of the retina. From this perspective, certain conjugations of the inter-machinic phase spaces of images might even present an involuted version of the familiar, anxiety-ridden accounts of life among images, in particular the spell-binding image-screens of modern information societies that often stand accused of blocking access to the fissures and contradictions of social reality. If so, this would be a version in which such imaging formations would gaze at themselves as if entranced, but without finding a final truth about their own powers – for the simple reason that the normal coordinates that map them onto specific notions of social space would be all off. As anisotropic algorithmic realities, they could no longer simply be classified as realistic or stagey, passive or aggressive, truthful or obfuscating – only as instances of unwieldy generative procedures whose various tendencies or inflections (good or bad) were yet to be determined.
Beshty’s photographic works provide examples of such evidentiary and experiential realities, alerting us to the way in which convolutional procedures unfold more widely into and as our life environments. Negotiations between immanence and abstraction take place in thick filtering formations that in turn produce a world, or worlds. If his complex but technically transparent photographic inventions lend themselves to a filter theory of photography, it is because they demonstrate how image-objects by definition emerge in translation from one spatial register or matrix to another. And by the same token, they gesture towards a vast inter-machinic space defined by rigorously wild conjecture that inserts differentiation into uniformity of expression, while also – crucially – producing aporias and places of respite.
References
Batchen, G. (2006) ‘Electricity made Visible’, in W. H. K. Chun and T. Keenan (eds.), New Media Old Media: A History and Theory Reader. London: Routledge, 27-44.
Beshty, W. (2016) ‘Against Distinction: Photography and Legendary Psychasthenia’, October 158(Fall): 67-88.
Bredekamp, H. (2018) Image Acts: A Systematic Approach to Image Agency, trans. E. Clegg. Boston: de Gruyter.
Caillois, R. (1984) ‘Mimicry and Legendary Psychasthenia’, trans. J. Shepley, October 31(Winter 1984): 16-32.
Damisch, H. (1994) The Origin of Perspective, trans. J. Goodman. Cambridge: MIT Press.
Deleuze, G. (1990) The Logic of Sense. New York: Columbia University Press.
Deleuze, G. (2003) Francis Bacon: The Logic of Sensation, trans. D. W. Smith. London: Continuum.
Elcott, N. (2019) ‘Walead Beshty: The Aesthetics and Ethics of Materialist Transparency’, in L. Kost (ed.) Walead Beshty. Work in Exhibition 2011-2020. London: Koenig Books, pp.48-70.
Fazi, B. (2018) Contingent Computation: Abstraction, Experience and Indeterminacy in Computational Aesthetics. London: Rowman & Littlefield.
Frege, G. (1980) ‘What is a Function?’, in P. Geach and M. Black (eds.) Translations from the Philosophical Writings of Gottlob Frege. Oxford: Basil Blackwell, pp.107-116.
Fuller, M. and E. Weizman (2021) Investigative Aesthetics: Conflicts and Commons in the Politics of Truth. London: Verso.
Grundtmann, N. (2023) ‘Convolutional Aesthetics, a cultural and philosophical analysis of the perceptual logic of machine learning systems’, PhD Thesis, University of Copenhagen.
Joselit, D. (2013) After Art. Princeton: Princeton University Press.
Kafer, G. (2023) ‘After Ubiquity: Surveillance Media and the Technics of Social Difference in Twenty-First Century United States’, PhD Thesis, University of Chicago.
Kodak. (1969) Kodak Wratten Filters and Other Filters Manufactured by Kodak Limited, London. London: Kodak.
Mackenzie, A. (2017) Machine Learners: Archaeology of a Data Practice. Cambridge: MIT Press.
Malraux, A. (1967) Museum Without Walls. New York: Doubleday & Co.
Manovich, L. (2011) ‘Inside Photoshop’, Computational Culture 1(November). Available at: http://computationalculture.net/inside-photoshop/ (Accessed: 09 May, 2024).
Marks, L. U. (2002) ‘How Electrons Remember’, in Touch: Sensuous Theory and Multisensory Media. Minneapolis: University of Minnesota Press, pp.161-176.
Nguyen, A., J. Clune, Y. Bengio, A. Dosovitskiy, and J. Yosinski (2017) ‘Plug and Play Generative Networks: Conditional Interactive Generation of Images in Latent Space’, arXiv. Available at: https://arxiv.org/abs/1612.00005 (Accessed: 09 May, 2024).
Parks, L. (2009) ‘Points of Departure: The Culture of US Airport Screening’, in R. Braidotti, C. Colebrook, P. Hanafin (eds) Deleuze and Law. London: Palgrave Macmillan, pp.163-178. doi: 10.1057/9780230244771_10
Soon, W. (2016) Executing Liveness: An Examination of the Live Dimensions of Code Inter-actions in Software (Art) Practices, PhD Thesis, Aarhus University.
Tissandier, A. (2018) Affirming Divergence: Deleuze’s Reading of Leibniz. Edinburgh: Edinburgh University Press.
Toscano, A. (2008) ‘The Culture of Abstraction’, Theory, Culture and Society 25(4): 57-75.
Zylinska, J. (2017) Nonhuman Photography. Cambridge: MIT Press.
Notes
[1] Beshty maintains a meticulously documented website available at https://www.actionstakenunderthefictitiousnamewaleadbeshtystudiosinc.com/
[2] Walead Beshty, Travel Picture Fog [Tschaikowskistrasse 17 in multiple exposures* (LAXFRATHF/TXLCPHSEALAX) March 27-April 3, 2006] *Contax G-2, L-3 Communications eXaminer 3DX 6000, and InVision Technologies CTX 5000. 2006/2008. Chromogenic print. 51 1/2 x 90 3/8 inches (130.8 x 229.6 cm). Edition of 5 (2 AP). Photo: Richard Ivey. Courtesy: the artist; Regen Projects, Los Angeles; Petzel, New York; and Thomas Dane Gallery, London.
[3] See a retrospective demo of Photoshop One by John Knoll, the program’s co-developer: Photoshop: The First Demo, available at https://adobe.fandom.com/wiki/Adobe_Photoshop_1?file=Photoshop_The_First_Demo
[4] We understand Frege to mean ‘unsaturation’ in the sense of a liquid acting as the solvent for a solution, in which, when unsaturated, it still has the capacity to take on more of the solute, the additional chemical being dissolved in it.
[5] Here, the various kinds of typological photographic practices that work by amassing examples of a genre of a kind of image or of a kind of thing represented in that image are pertinent.
[6] Walead Beshty, Six-Sided Picture (RBGCMY), March 23, 2010, Irvine, California, Fujicolor Crystal Archive Super Type C. 2012. Color photographic paper. 30 5/8 x 40 5/8 inches (77.8 x 103.2 cm). Photo: Richard Ivey. Courtesy: the artist; Regen Projects, Los Angeles; Petzel, New York; and Thomas Dane Gallery, London
[7] Walead Beshty, Single-Sided RA4 Full-Spectrum Color Relief (D-Max), Los Angeles, California, January 12, 2022, Fujicolor Crystal Archive Type II, Em. No. 859809B217, 01022. 2022. Color photographic paper. 21 3/4 x 20 x 1 inches (55.2 x 50.8 x 2.5 cm). Photo: Walead Beshty Studios, Inc. Courtesy: the artist; Regen Projects, Los Angeles; Petzel, New York; and Thomas Dane Gallery, London
[8] Interrogability in this context means the affordances a software avails, directly or not, for having its underlying functionality understood. For the notion of interrogability, see the pamphlet Fuller, M. (2006) ‘Softness, interrogability, general intellect’, available at http://reader.lgru.net/texts/softness-interrogability-general-intellect-art-methodologies-in-software/
Ina Blom is a professor at the Department of Philosophy, Classics, History of Art and Ideas, University of Oslo and Visiting Professor at the Department of Art History, University of Chicago. She is the author of, among others, The Autobiography of Video. The Life and Times of a Memory Technology (2016), On the Style Site. Art, Sociality and Media Culture (2007; 2009) and Houses to Die In and Other Essays on Art (2022). Edited volumes include Memory in Motion. Archives, Technology and the Social (2017).
Email: ina.blom@ifikk.uio.no
Matthew Fuller is professor at Goldsmiths, University of London. His books include How to Sleep: The Art, Biology and Culture of Unconsciousness (2018), How to Be a Geek: Essays on the Culture of Software (2017), with Olga Goriunova, Bleak Joys: Aesthetics of Ecology and Impossibility (2019) and with Eyal Weizman, Investigative Aesthetics: Conflicts and Commons in the Politics of Truth (2021). He is a member of the editorial collective of Computational Culture – a journal of software studies (http://www.computationalculture.net/).
Email: m.fuller@gold.ac.uk