Alan Díaz Alva: Technics and Contingency

For the official version of record, see here:

Díaz Alva, A. (2023). Technics and Contingency: Ontological Productivity in Computation. Media Theory, 7(2), 37–76. Retrieved from https://journalcontent.mediatheoryjournal.org/index.php/mt/article/view/587

Technics and Contingency: Ontological Productivity in Computation

ALAN DÍAZ ALVA

Leuphana Universität Lüneburg, GERMANY

Abstract

This text aims to explore the relationship between technics and contingency within a context where computational technologies often have the effect of domesticating the latter, leading to its statistical reduction to the realm of the probable. I will argue that the post-prosthetic and non-correlational character of these technologies prompts us to pursue a line of inquiry that calls for a different understanding of contingency. Instead of solely assigning it to domains external to computational technologies (the ‘externality thesis’), this perspective identifies a form of contingency inherent in these technologies themselves (the ‘internality thesis’). I will develop this approach through a comparative analysis of the work of M. Beatrice Fazi and Mark B.N. Hansen, authors who both draw from Alfred N. Whitehead’s process ontology. I will underscore the tensions between their proposals, trying to identify the main divergences within their speculative ontologies of computation. However, I will also propose that we can read them as complementary propositions that account for the ontological productivity of computational media and operations, allowing us to theorise a form of computational contingency independent from the tendencies towards algorithmic closure and determinism present in contemporary technical systems.

Keywords

computation, digital media, contingency, digital ontology, Whitehead

Introduction

In Recursivity and Contingency, Yuk Hui (2019:218) argues that “the relation between technics and contingency must be analyzed materially and historically.” Understanding technics broadly as the anthropologically universal disposition of relating to the world through technical artefacts, he identifies what is arguably a momentous historical shift in the way this relation operates today: “Technics in general is that which attempts to eliminate contingency, but in comparison with technical objects based on linear causality and hence susceptible to contingencies, the recursive mode can effectively integrate contingency in order to produce something new; in other words, it demands constant contingencies” (ibid.: 138). Here, Hui is referring to the algorithmic techniques of machine learning and predictive analytics which have progressively come to constitute the backbone of our technological systems.

Leaving aside the suggestive thesis that the overcoming of contingency might be the aim of technics as such,[i] Hui’s diagnosis correctly identifies how the tendency towards probabilistic prediction that runs through contemporary computational technologies is grounded in the instrumentalisation and domestication of contingency. According to him, the recursive algorithms that we can find in machine learning technologies can “domesticate different forms of contingency in order to render them useful” (ibid.: 218); a form of statistical domestication that implies “a reduction of the contingent to the most probable” (ibid.: 211). The increasing systematisation and synchronisation of our globalised culture and its digital technosphere (Haff, 2014) attest to the ongoing presence of this tendency towards the domestication of contingency and its statistical reduction to the domain of probability.[ii] Many of our technological systems are being designed along a vector of prediction, igniting discussions about its implications for problems such as the construction of futurity, the circumscription of the field of the possible, and forms of control which have been characterised as constituting a new regime of ‘algorithmic governmentality’ (Rouvroy & Berns, 2013).
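
Hui’s point about statistical domestication can be made concrete with a minimal illustrative sketch (my own construction, with hypothetical values, not drawn from Hui): a predictive system models the future as a probability distribution and then acts only on its argmax, so that the least probable outcome – the contingent in Hui’s sense – never surfaces in its output.

```python
# Illustrative sketch (hypothetical values): the "reduction of the contingent
# to the most probable" as an argmax over a predicted distribution.

predicted_next_action = {"stay": 0.72, "click": 0.25, "leave": 0.03}

def domesticate(distribution):
    # Prediction collapses the space of outcomes onto its most probable member;
    # the improbable outcome ("leave") is never fed forward into the system.
    return max(distribution, key=distribution.get)

print(domesticate(predicted_next_action))  # -> 'stay'
```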

No doubt partly due to the prevalence of this tendency, in both popular and academic discourse, computational technologies are often regarded as deterministic techniques that “quash contingency and erod[e] the conditions for novelty, creativity, and surprise” (Evens, 2023: 36).[iii] In response, recent scholarship in various subfields concerned with digitality has tried to counter this opinion, providing a more nuanced and complex picture that attempts to show how these technological systems actually involve contingency in one way or another. The broad parameters of this discussion will be presented in the next section. I will also try to unpack the epistemological and ontological presuppositions underpinning the image of computation as fundamentally antagonistic to contingency. However, before doing so, let’s take a closer look at the notion of contingency at play here.

There are at least three senses in which one can talk about computational contingency or contingency manifesting in computation: (1) as operational instability, (2) as unpredictability or unknowability, or (3) as the ontological production of the new (Lushetich et al., 2023: xi). Aligned squarely with the second option, Hui writes that “the contingent is possible, but it is not the most or highly probable. The contingent is the least probable, or even the improbable” (2019: 211).[iv] In this account, the contingent is understood as that which is grasped by thought as being unforeseeable or entirely improbable, and its domestication by predictive analytics is regarded as its inscription and subsumption within a predetermined matrix of possibilities. In other words, the domain of the contingent qua improbable is progressively being substituted by a future circumscribed by possibilities preestablished through predictive infrastructures. Such a standpoint assumes the externality of contingency to computation: contingency is regarded as pertaining to the domain of lived experience, and digital technologies, in virtue of their ambivalent or pharmacological relationship to us, can either hamper or enable it. This is a form of contingency that goes hand in hand with the continuation of an understanding of technics as the domain of artefacts that stand in a relationship of prosthetic interdependency with the human and its lived experience.[v]

Heeding Hui’s injunction to historicise the relationship between technics and contingency, in the following pages I will attempt to sketch a different way to understand this relation – as well as the meaning of these concepts – in light of our technological present. Several authors have argued that today we find ourselves amid a post-phenomenological, non-prosthetic and non-correlational (Scannel, 2022) phase of the history of technology in which such an understanding of technicity is no longer tenable. Besides the existence of artificial cognitive agents that can hardly be described as technological extensions or prostheses of the human once their formalised decision-making procedures attain a certain degree of autonomy (Fazi, 2019b), new computational technologies operate in microtemporal regimes – or through ‘time-critical’ processes (Ernst, 2016) – that take place below the threshold of conscious experience, which in turn implies a temporal and operational disjunction between the human and twenty-first-century media (Hansen, 2015: 73). I argue that this context demands that we rethink the relationship between technics and contingency. The domain of technical objects is saturated by non-prosthetic processes that operate alongside us insofar as “they function both in proximity to us, but also in autonomy from us” (Fazi, 2019b: 94). Admittedly, this is a context which stretches the limits of the concept of technics itself – a concept which, while not necessarily anthropocentric, is nevertheless focused on the co-determining rapport between human and artefact. Moreover, inquiring about what contingency might mean from such a standpoint leads us to investigate the ontological specificity of computational technologies themselves.

I will approach this problem through the work of two contemporary authors, M. Beatrice Fazi and Mark B. N. Hansen, who present two novel ontologies of digital computation that aim to show how computational technologies themselves can be thought of as ontologically productive or ontogenetic,[vi] thus establishing a relationship with contingency and novelty that exceeds the tendencies towards closure and determination present in contemporary technical systems. While their proposals present a shared conceptual armature provided by Alfred N. Whitehead’s speculative metaphysics, their respective interpretations of the latter also diverge in significant ways. I will attempt to foreground these differences while ultimately arguing in favour of their complementarity. I hold that in both Hansen’s and Fazi’s work, we encounter – overtly in the latter while somewhat implicitly in the former – compelling arguments in favour of the internality of contingency (Lushetich et al., 2023: xv) to computational technologies. This ‘internality thesis’ posits the existence of computational contingency in a stronger sense than its counterpart – the ‘externality thesis’ – discussed above. The computational contingency at play here is not the kind that inevitably enters the scene once these technologies are contextualised within the “domain of history and materiality, a feedback loop between people and machines” (Evens, 2023: 43). Nor is it the potential result of a pharmacological overturning of the regime of digital automatization by a “supplementary invention” (Stiegler, 2016: 117) whereby contingency – qua ‘the improbable’ (ibid.: 115) – could emerge. At play in the internality thesis is contingency understood in the last of the senses mentioned above, that is, as the ontological production of the new.

The locus of contingency and its ‘degree’ of internality vis-à-vis computational technologies is fundamentally different in the work of the two authors that I will focus on – a difference in approach that, as we will see below, becomes strikingly clear in the Whiteheadian concepts that each of them chooses to foreground. However, I argue that it is precisely due to this shared conceptual background that these differences and tensions can be rendered complementary rather than mutually exclusive. Anticipating part of the argument that will be fleshed out in the following pages, I claim that by reading Hansen and Fazi together we can account for the internality of contingency to both computational media and computational operations through the Whiteheadian concepts of real and pure potentiality, respectively. However, before presenting these two thinkers in more detail I will first contextualise their intervention by briefly explaining the prevalent view which negates the internality of contingency to computation due to the latter’s formal, axiomatic, and deductive character.

The denial of computational contingency

In contemporary media studies and critical theory more generally, digital computation is usually assumed to be antithetical to the production of the new. Understood broadly as “a method of organising reality through logico-quantitative means” (Fazi in Beer, 2021: 290), its systematisations are often seen as reductive attempts to capture or represent the complexity of the world. As Fazi explains, from this perspective “computation is assumed to merely appropriate reality”, and hence there is “no novelty in computation, but only the repetition of the pre-programmed” (Fazi in Beer, 2021: 290). Let’s take a quick look at a couple of examples. Emma Stamm, for instance, presents the digital as primarily a means of representation whose veridicality mandates are only able to reproduce the known and predetermined at the expense of the unknown or the new. Moreover, this process of discretisation also renders various domains capitalizable: “To be digitized is to assume the standard form of symbols intended to transmit meaning rather than create it anew [. . .] computational outputs cannot supersede the predetermined structure of their encoding. The digital mandate to representation seals off all terra incognita” (Stamm, 2022: 12). In a similar vein, Brian Massumi posits the superiority of the analogue over the digital – or of sensation over formalisation – by claiming that all novelty arises within the former. In contrast, the digital is reduced to a form of ‘machinic habit’: “The digital is a numerically based form of codification (zeros and ones). As such, it is a close cousin to quantification. Digitization is a numeric way of arraying alternative states so that they can be sequenced into a set of alternative routines. Step after ploddingly programmed step. Machinic habit” (2002: 137). Massumi’s position draws from Deleuze, a figure who has exerted a considerable influence on media studies by strictly distinguishing the virtual – his understanding of potentiality – from the digital, arguing that the latter is unable to produce novelty. Fazi summarises this position in the following manner:

The formal and symbolic logic of the digital machine is a cognitive abstraction that returns, as output, only what one has put in as an input. Aesthetically, novelty is instead produced in the matter-flow of sensibility [. . .] the digital would seem to be excluded from the production of the new on the basis of its automated repetition of the preprogrammed. Working through possibilities and probabilities, the digital is a way of recognising through prediction. Supposedly no new thought as well as no new being can be produced, because everything is already set up; in the digital machine, there is a lot of iteration and repetition, but no differentiation. For this reason, when seen from a Deleuzian perspective, the digital has no potentiality (Fazi, 2018: 32–33).

The purpose of this text is to question the purported ‘sealing off all terra incognita’ by the ‘machinic habit’ of the digital. As mentioned in the introduction, the exclusion of the digital from the production of the new can be partly justified if one considers its actual instantiations in procedures of probabilistic inference and algorithmic governmentality which constitute a tendency towards determinism and ‘closure’. However, beyond these real-world implementations, the antagonistic relation between contingency and computation can also be grounded in the nature of computational axiomatics itself. Critical scholarship on the subject often ascribes to computational and algorithmic procedures the sole purpose of establishing a reductive, imposing, or even deterministic relationship with the world due to their formal and seemingly closed nature vis-à-vis experience.[vii] As procedures which operate through a series of rule-guided inferential steps that start from self-evident premises, a certain “striving for conclusion is an internal feature of deductive systems [. . .] in both a computer program and an axiomatic system, everything is closed because everything is inferentially connected” (Fazi, 2018: 97–98).
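
The ‘inferential connectedness’ at stake here can be made concrete with a minimal illustrative sketch (my own construction, not Fazi’s): a toy rule-guided system whose deductive closure contains nothing that was not already entailed by its premises – every output is ‘closed’ in the sense of being derivable from the axioms.

```python
# A toy deductive system (hypothetical axioms and rules, for illustration only).
# Forward-chaining modus ponens: from A and the rule "A entails B", infer B.

axioms = {"P"}
rules = [("P", "Q"), ("Q", "R")]  # each pair (A, B) reads: from A, infer B

def deductive_closure(axioms, rules):
    """Return every statement derivable from the axioms via the rules."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# Nothing appears in the output that was not already entailed by the premises:
print(deductive_closure(axioms, rules))  # -> {'P', 'Q', 'R'} (set order may vary)
```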

According to Fazi, the negative view of computation as a means to pre-program, predict, and determine the totality of reality stems from an (ultimately justified) suspicion towards what she terms the ‘metacomputational approach’, i.e., “the belief that rational calculation can explain and encompass every element of actuality” (Fazi, 2016b). This paradigmatically Western endeavour has a long history, from Leibniz’s project of mathesis universalis to contemporary strands of cognitive science and theories of pancomputationalism, all of them attempting to explain the domains of mind and nature as reducible to computational processes. Such an approach universalises the computational method qua abstractive technique of deductive reasoning and projects it to a metaphysical register. Computational abstraction acquires an ontological status of transcendence in relation to the empirical world, while also claiming to possess the capacity to make it intelligible and operable. It is from such an approach that formal axiomatic systems in their various implementations “could all be seen to serve the agenda of a logos that aims to assign, through chains of sequential operations, an inferential and procedural form to the real [. . .] when seen in this light [computational axiomatics] transfers its deductive and determinist structure from the mathematical and technological to the societal, the economic and the political” (Fazi, 2018: 98–99). Moreover, this understanding of formal entities underpins an idealist view of computation or a ‘computational idealism’, which Fazi describes as “the technocultural view that would maintain that computational structures are ideal forms, insofar as they are immutable entities, independent from matter as well as from contingency” (Fazi, 2016b). This idealist view of computational structures as abstract and self-enclosed formal systems – transcendent to and yet able to act upon the world – grounds the “all-too-human faith in a computing machine that can do no wrong, as well as the technosocial construction of an image of infallible – and fundamentally impenetrable – algorithmic rationality” (Fazi, 2016b). Reiterating the Platonic metaphysical schema wherein the intelligible forms the sensible, such an algorithmic rationality can be seen as a form of ‘machinic neoplatonism’ (McQuillan, 2018).[viii]

The onto-epistemological primacy and instrumental competence granted to computation are thus predicated on a misguided idealisation of its formal, axiomatic, and deductive character. In very broad terms, axiomatisation can be understood as the process which, by deploying techniques of symbolic formalisation that eliminate any reference to content or direct empirical observation, transforms a theory into a system of deductively linked theorems. Explaining the axiomatic and formal character of computation, Fazi writes: “The isomorphism between the computational and the formal provides additional evidence of the closed character of deductive forms. Formalism itself, as a philosophical, historical and mathematical enterprise, is in the business of conclusion; it constructs conclusive systems through the combination and manipulation of expressions that are ‘closed’ because they are derived from the premises of the system itself” (2016b).

Recent interdisciplinary approaches in the humanities and social sciences have tried to resituate or contextualise these logico-mathematical formalisms within real-world computational procedures which – entangled as they are in a broader set of cultural, material, and social relations – cannot be reduced to idealised self-enclosed systems nor simply to “a question of realized instrumentality” (Fuller, 2008: 3). This is because computation is “not bounded to preprogramming, since it participates in world-making processes” (Fazi, 2018: 108).[ix] Such proposals present valuable contributions which provide us with a much-needed counterweight to the stereotyped view of algorithmic rationality which extrapolates from the formal and axiomatic nature of computation to posit an inherent tendency and vocation of digital technologies towards closure, determinism, control and prediction – a narrative which often serves as an ancillary to both technocratic praise and some of the critical scholarship addressing contemporary digital culture. Despite their usefulness, however, such perspectives are still circumscribed to the idea that contingency is external to computation. In other words, contingency can be ascribed to computational technologies insofar as they participate in the complex unfolding of sociocultural dynamics. In what way can we also posit the internality of contingency to computational technologies?

One could try to locate the emergence of ‘internal’ contingency in computational systems through the kind of glitches that can result from eventualities such as hardware malfunction, software imperfection, or faulty input data. Betty Marenko, for instance, conceptualises the glitch as “the tangible and visual manifestation of something unexpected: the irruption of the unplanned” (Marenko, 2015: 112). However, I do not intend to pursue such a path of inquiry, concerned as it is with instances of computational blunder, operational instability, or ‘algorithmic catastrophe’ (Hui, 2015). The reasoning behind this is that, by investigating contingency through notions such as that of error and the glitch, we risk remaining tethered to the externality thesis. While one could develop the argument that a glitchy computational process can be ontologically productive, the very notion of the glitch can lead us to the second understanding of contingency mentioned above, i.e., contingency as the unplanned, unpredictable or improbable. Thus, following Fazi closely on this point, my attempt to elaborate the relationship between contingency and computational technologies is not focused on those eventualities when the latter can malfunction or break down – prompted by either internal or external causes – but rather when, as formal and axiomatic procedures, they are doing exactly what they are supposed to do (2018: 85). I argue that such an approach is better suited to investigating the thesis of the internality of contingency to computation that we are trying to explore.

In short, the sense of contingency that I want to elaborate on here goes beyond the observation that computation can be considered contingent (or non-necessary) since its actual real-world performance cannot be reduced to the necessity of the inferential automatism that characterises its procedures. Instead, I intend to transpose the inquiry into the idea of computational contingency to an ontological register. In other words, I will argue that digital computation can be considered as ontogenetic or ontologically productive. I will do this by delving into the work of two contemporary authors, M. Beatrice Fazi and Mark B. N. Hansen, who turn towards Alfred N. Whitehead’s metaphysics to articulate the speculative claim that computational technologies themselves are capable of introducing novelty and contingency into the world.

Computational contingency I: Twenty-first-century media and worldly sensibility

As already mentioned in passing, Hansen is among the contemporary authors critical of the theoretical gesture that circumscribes digital technologies to a prosthetic role vis-à-vis the domain of human-centred lived experience. Against this, he defends the thesis that the temporal and cognitive disjunction between human perception and contemporary data-driven computational technologies signals a post-prosthetic age in the history of media, one which opens a new domain of experience which exceeds – but also implicates and conditions – intentionality and conscious thought. This fact, he argues, demands a different theoretical focus, one which avoids placing human consciousness at the centre and is instead able to address this domain of experience on its own terms. Whereas previous forms of technical objects mediated human experience directly by functioning as externalised memory, ‘twenty-first-century media’ require a supplement (or interface) for machine-human interaction to happen – or, as he writes, to compose “relations between technical circuits and human experience” (Hansen, 2015: 43).[x] Eschewing this direct mediation, however, does not mean that computational technologies are entirely untethered from human experience. Instead, their processes operate on a level and scale that can affect and mould human experience while entirely bypassing our awareness. Hansen is staunchly critical of the implications of such infra-sensorial affordances when these are mobilised for the purposes of algorithmic governmentality and/or capital accumulation. Capitalism’s “micro-sociological turn”, predicated on a “postnormative” and “nonrepresentational” orientation towards the biopolitical and “ecological” modulation of bodies and populations through data and the algorithmic shaping of the future (Haber, 2016), acts on the conditions of sensibility, on the ‘operational present of sensibility’ (Hansen, 2016).

Besides a critique of the foreclosing effects of algorithmic governmentality and an acute attunement to capitalism’s advances in this domain, in Feed-Forward Hansen also presents us with a novel framework for thinking about the role that new media technologies play in human experience. Through a meticulous reinterpretation of Whitehead’s speculative metaphysics, he crafts a rich and multilayered conception of media-enhanced experience, agency, and subjectivity. I argue that the theoretical upshot of his proposal is two-sided. On the one hand, he mobilises this imposing framework to propose, in an avowedly Stieglerian spirit, a ‘new media pharmacology’ (Schneider, 2019) aimed at identifying the novel experiential affordances that we encounter precisely due to (rather than despite) the temporal and operational disjunction between computational processes and human experience. While I will briefly delineate this side of his proposal, the focus of my reading lies elsewhere. The other side of his project is centred on explaining how computational technologies can effectively expand and intensify the domain of ‘worldly sensibility’, i.e., a non-human-centred (albeit human-implying) domain of sensibility which involves the whole cosmos. I argue that this side of his project can be interpreted as an argument in favour of what I have called the internality thesis. Before doing this, however, I will try to unpack some of the essential features of Whitehead’s metaphysics without going into too much detail – an arduous task which would take us deep into exegetical thickets and away from the broader issues that concern us here – to explain why Hansen depicts Whitehead as “the philosopher par excellence of twenty-first-century media” (2015: 30).

Whitehead remained a mostly marginal figure until the last couple of decades; today, his speculative metaphysics attracts the attention of scholars across a wide variety of fields, from philosophy and media theory to ecology and science and technology studies. One aspect of his thought that has proved particularly fruitful (and which is a useful entry point for our discussion) is his critique of the ‘bifurcation of nature’, i.e., the constitutive operation underlying the modern conception of nature which consisted in the separation of nature into two divisions: causal and apparent, or “the nature apprehended in awareness and the nature which is the cause of awareness” (Whitehead, 1964: 21). The bifurcation of nature implies a disjunction between human experience – the scent of the rose, the warmth of the sun – and the scientifically postulated entities – molecules and electrons – which cause it. Crystallised most systematically in his 1929 masterwork Process and Reality, Whitehead’s response to the bifurcation of nature is a metaphysical framework which attempts to give an account of how the world must be for experience – in all its multifarious diversity – to be what it is. The result is a speculative metaphysics of the patterned and essentially intertwined becoming of all things – both human and non-human – which he christened the ‘philosophy of organism’. This philosophical framework is constituted by an ontology of ‘actual occasions’, or entities mutually imbricated in a perpetual becoming through dynamics of prehension and concrescence, forming groups (or ‘societies’) with scales and complexities ranging from the microphysical to the mental.

In Process and Reality, Whitehead writes that “the whole universe consists of elements disclosed in the analysis of the experiences of subjects” (1979: 166). Crucially for us, in this “metaphysics of experience” (Kraus, 1998), experience is understood as ontologically ‘neutral’ and ‘environmental’ insofar as it eschews familiar dualisms such as human and nonhuman, living and inanimate, and subject and object. In Whitehead’s speculative metaphysics, “[e]xperience and subjectivity range from one to the other end [. . .] all entities are subjects that/who ‘have’ or ‘feel’ experience” (Schneider, 2019: 137). It is by granting such a central place to an expanded category of experience – a position described by Fazi as a “panexperientialism” (2018: 13) – that Didier Debaise (2017: 58) can claim that, in Whitehead’s metaphysics, “the aesthetic becomes the site of all ontology”.

Admittedly, Whitehead’s ‘neutral’ understanding of experience could be read along panpsychist lines or, as has often been done in recent years, as a kind of ‘flat ontology’, i.e., “one in which entities on different scales, and of different levels of reflexivity and complexity, are all treated in the same manner” (Shaviro, 2009: 27–28). Hansen’s own painstakingly expounded reading of Whitehead carefully distinguishes the latter’s process ontology from these interpretations, as well as from other contemporary authors who have mostly tended to read Whitehead through Deleuze in an attempt to recruit him within the ranks of the great ‘philosophers of becoming’ (see Shaviro, 2009).[xi] Hansen dislodges human consciousness from the prominent place that it has occupied in previous forms of media theory and philosophy of technology. However, this act of decentering does not lead him to espouse an ontology in which the human-machine distinction is blurred to the point of dissolution. In other words, Whitehead’s neutral theory of experience is deployed “in order to decenter – but not to dispense with – the perspective of the human” (Hansen, 2015: 15). Hansen argues that the human can only provide a partial perspective on experience, but also acknowledges the enduring importance of human subjectivity as a crucial site of political contestation.

As already mentioned, Hansen sees the entanglement of human agency with computational networks and technologies as an opportunity to expand the domain of a non-human-centred sensibility and intensify our capacity to interact with it through technological mediations. In other words, his goal is to “decenter the human agent not by marginalizing consciousness but by enhancing it through a reengagement with the rich, distributed context from which it arises” (Haber, 2016: 162). By situating subjectivity within an “environmental outside,” Hansen wants to “complexify the human by multiplying its connections” (2015: 17). He argues that new possibilities for human agency are enabled by the capacity of new technologies to “gather data concerning aspects of experience that are not directly accessible to us qua individual agents, that we simply cannot experience through consciousness and perceptual awareness” (Hansen, 2016: 39). Contemporary computational technologies of data gathering and sensing are capable of accessing these precognitive processes “in their own ‘operational’ timeframe” (ibid.: 41), mediating human consciousness through various kinds of interfaces and devices. This enables the technical ‘feeding-forward’ to our consciousness of otherwise imperceptible events, making us aware of occurrences that have already taken place (or are constantly taking place) at an infra-perceptual level and thus allowing us to act upon or with them. Identifying these technical affordances allows Hansen to gesture towards a pharmacology of twenty-first-century media, showing how the temporal gap between human consciousness and the micro-temporal operationality of media – the so-called ‘missing half-second’ instrumentalised by the institutions that profit from algorithmic governmentality – can be mediated and yield potentially positive results.

The novelty, relevance, and urgency of this proposal notwithstanding, this is not the side of Hansen’s project that I want to focus on here. As already mentioned, I want to argue that the upshot of his project is not only a theory of the micro-computational expansion of human awareness, aesthetic experience, and agency, and a consideration of what such affordances might entail for an “ecological pharmacology of twenty-first-century media” (Hansen, 2016: 48). I hold that his reading of Whitehead’s metaphysics of experience also permits us to affirm the internality thesis and thus attribute an ontological productivity to computational technologies in themselves. The first side of his project can certainly be read as affirming a computational contingency in the external sense, i.e., the unpredictable or improbable effects that computational technologies can have by allowing us to consciously access and intervene in the microtemporal and precognitive domains of our lived experience. However, it is the second side of his project that, I argue, can be put forward as an affirmation of the internality of contingency to computation. At the centre of this lies the concept of worldly sensibility.

A good entryway into Hansen’s concept of worldly sensibility can be found in what he calls the ‘claim for inversion’ (CFI), one of Feed-Forward’s core technical interventions in Whiteheadian scholarship. The core feature of Whitehead’s speculative metaphysics is an account of actual occasions – the ‘basic’ or ‘atomic’ events or occurrences that lie at the basis of all experience – which come into being through processes of concrescence. A process of concrescence consists of the two-fold ‘grasping’ of disparate features and elements of the settled world of ‘attained actualities’ (i.e., ‘physical prehensions’), along with the ‘grasping’ of timeless and purely ideal ‘eternal objects’ (i.e., ‘conceptual prehensions’) which provide a source of indetermination and novelty to the whole process. Whitehead characterises this process in which actuality comes into being as a process of determination which requires the ingression of indeterminacy: it is “a process of transition from indetermination towards terminal determination” (Whitehead, 1979: 45). The ‘terminal’ character of an actual occasion’s subjective experience implies that it is fleeting. It ‘perishes’ as soon as its prehensions are completed, subsequently becoming part of the settled world as an objective ‘datum’. The settled world is made from actual entities which have transitioned from their ‘creative phase’ into their ‘dative phase’. By becoming a ‘data-fied’ entity, each adds something new to the multiplicity of the world which can then participate in new processes of becoming, i.e., the genesis of new actual occasions through processes of prehension-concrescence (Whitehead, 1979: 156). Unlike concrescent entities, whose prehensions are guided by a ‘subjective aim’, these dative entities are objective or ‘superjectal’, which means that they “become passive and inert and can only become creative again if they are taken up by future concrescences of new actual entities” (Hansen, 2015: 13).

This brief outline roughly encapsulates what Hansen describes as the ‘canonical picture’ of Whitehead’s process ontology. It is by reading him in this way that several commentators have tended to portray Whitehead as another one of the great ‘philosophers of becoming’. By reading him in a Deleuzian key, Hansen argues, these critics borrow piecemeal from Whitehead’s metaphysics to construct an “ontological ground for some account of experience as becoming” (ibid.: 18). More specifically, Hansen mentions the work of scholars such as Brian Massumi, Erin Manning, Steven Shaviro, Luciana Parisi, and Stamatia Portanova. Such a reading, he argues, emphasises the process of concrescence and the subjective aim of actual occasions as the main (or rather sole) agents of becoming, to the detriment of the objective, the superjectal, and the settled world more generally. Contrary to this, Hansen urges us to invert the canonical picture in the following way:

the claim for inversion contends that we should invert the orthodox understanding of creativity provided by Whitehead and ratified by virtually all of his commentators: rather than looking to concrescences as the sole source of creativity, we must view them as vehicles for the ongoing production and expansion of worldly sensibility, as instruments for the expression of a creative power that necessarily involves the entirety of the superjective force of the world. Far from operating as the exclusive agents of the creative process, as they do for almost all of Whitehead’s commentators, concrescences on my understanding are nothing more nor less than a speculative means to explain the production of superjects or, even more precisely (keeping with Whitehead’s stress on process), of superjectal relationalities (webs of objectified, i.e., “data-fied” prehensions) that constitute worldly sensibility (ibid.: 11–12).

In contrast to the canonical picture, which views concrescence as the ontological pinnacle of the creation of the new in Whitehead’s philosophy, Hansen argues that we should instead regard the concrescence of actual occasions as “instruments for the production of worldly superjects, which are the true source of experiential creativity” (ibid.: 28). For him, the main site of ontological productivity is to be located in the creativity of the superjectal, i.e., in the actual entities that, having ‘perished’ and entered their dative phase, compose the multiplicity of the settled world. In other words, while the canonical picture sees data-fied entities as the passive and inert material for future concrescence, Hansen sees concrescences as being ‘sparked into becoming’ by the settled world.

in this [canonical] picture, transition (the objectification of an actual entity and its addition to the settled world) operates simply and solely to supply inert, passive source material for future concrescences. On the corrected account, by contrast, concrescence is literally sparked into becoming by an encounter with objective data of the settled world (the dative phase), and comes into being alongside, and indeed as a component in, the ongoing sensibility of the objective world [. . .] concrescence is not the privileged operation of Whitehead’s philosophy but merely a vehicle for the propagation of worldly sensibility (ibid.: 172–173).

To explain the creative potentiality of the settled or superjectal world – or what he calls real potentiality – in more detail, Hansen closely follows the work of Judith Jones (1998), another largely neglected Whiteheadian scholar who presents an interpretation of Process and Reality centred on the concept of intensity. In a way that supports Hansen’s CFI, she calls for an account which emphasises the sources of novelty located in the contrasts and intensities generated within ‘societies’ of dative entities or superjects.

Hansen’s reading of Whitehead is thorough and nuanced, revising and upturning many of the assumptions that have influenced the reception of his speculative metaphysics in contemporary thought. However, although one could regard these caveats as conceptual nitpicking or as an attempt to argue in favour of a ‘right’ version of process ontology, the stakes of Hansen’s project go well beyond the realm of Whiteheadian scholarship. Taken seriously, they effectively urge us to reconsider the way we understand our technologically mediated present. The concept of worldly sensibility is the culmination of the claim for inversion insofar as it foregrounds the creative potentiality of the settled world; a world which is composed, not of passive discrete elements waiting to be revivified by new prehensions-concrescences, but of a multiplicity of “superjectal, environmental micro-agencies” (Hansen, 2015: 165) which in themselves constitute the potentiality of the actual, or what he terms real potentiality. In contrast with ‘pure potentiality’, which refers to a “potentiality completely divorced from any actualization”, real potentiality designates a potentiality which “operates wholly within the domain of the actual”, one that “belongs to the actual but that is not relative to any given actuality-in-attainment.” Put in different words, what real potentiality postulates is the role of the objective datum as a “crucial and non-substitutable source of novelty that contributes to the creativity of the universe.”[xii] Moreover, and crucially for our present inquiry, Hansen coins the term data potentiality to name the “contemporary instantiation of real potentiality” (ibid.: 167–168). This instantiation combines the new forms of production of objective data via computational techniques of environmental sensing and data mining, with the unprecedented possibilities for human access to such data that certain forms of media, interfaces and software have allowed for. In other words, micro-computational sensors have the capacity to produce fine-grained “data-inscriptions of prehensions” which become “independently addressable” (ibid.: 168–169) by the human through various techniques of data analysis and through their re-presentation via digital interfaces.

As already mentioned above, I claim that Hansen’s interpretation of Whitehead’s metaphysics of experience can be read as an affirmation of the internality thesis and thus of the ontological productivity of computational technologies in themselves. To sharpen this argument, perhaps it is useful to compare it to Fazi’s own critique and appraisal of Hansen’s project. In contrast to the argument put forward here, in her review of Feed-Forward Fazi claims that the theorisation of worldly sensibility is first and foremost a phenomenological issue. According to her, Hansen uses Whitehead’s speculative metaphysics primarily as a corrective to an anthropocentric phenomenological tradition insufficiently equipped to engage with the environmentally distributed character of experience in our contemporary technological condition. She argues that Hansen’s reading “aims to assess the reality of techno-human experience rather than that of twenty-first-century media per se” and consequently, that “what the book affords is less a Whiteheadian ontology of media technology than a Whiteheadian phenomenology of contemporary media situations” (Fazi, 2016a: 66). Sharp and insightful as they are, these are claims that I am only in partial agreement with.  

Certainly, the technologically enhanced possibilities of human access to the imperceptible or ‘precognitive’ domain of worldly sensibility are among Hansen’s central concerns. This is nowhere more evident than in his ‘claim for access to the data of sensibility’ (CADS). This is the claim that certain forms of media “involve technical operations to which humans lack any direct access” (Hansen, 2015: 6), operations that tap into the domain of worldly sensibility, extracting data which can then be ‘forwarded’ to human awareness so that, in good Stieglerian fashion, we can reclaim the pharmacological affordances of twenty-first-century media. However, I believe that in Hansen’s development of the concept of worldly sensibility, we can find a vector which goes beyond the realm of phenomenology and into that of ontology. This comes forth in the bidirectionality (or ‘indirectionality’) of media invoked by the notion of “data sensing”:

In the figure of “data sensing” – the act of accessing sensibility that is itself the production of a new sensible – we encounter the perfect expression of the “indirection” of twenty-first-century media: twenty-first-century media come to bear on human experience as the simple result of the activity of accessing worldly sensibility. And it is, moreover, potential in its mode of being, or, more precisely, in relation to any events that may be constructed out of it: thus, the sheer activity of accessing worldly sensibility doesn’t directly or necessarily wield any impact on any given human experience; it simply furnishes an expanded sensibility, which is to say, a source of potentiality, that could lead to concrete effects at the level of human experience (ibid.: 234).

In this passage we can see how, besides transforming the possibilities of our media-expanded perceptual access to the world, the propagation of data in and about the world also entails an expanded capacity of the world to sense itself, to yield new sensibles. It is in this sense that Hansen “want[s] to claim that media impact the general sensibility of the world prior to and as a condition for impacting human experience” (ibid.: 6). I argue that, by bringing this to the forefront, we can interpret Hansen’s ‘inverted’ reading of Whitehead as providing us with an account of the ontological productivity of computational media.

As explained above, in Whitehead we encounter a ‘neutral’ category of experience which can be extended across the board. Everything – from atoms to biological organisms and digital artefacts – is constituted by actual occasions which come into being by grasping or ‘feeling’ the world. Despite Whitehead’s use of terms such as ‘subject’, ‘feeling’, and ‘grasping’ to describe the process whereby actual occasions come into being and perish, his expanded understanding of experience attempts to describe the most basic ontological process. It is in this sense that Debaise can claim that “the aesthetic [or experience] becomes the site of all ontology” (2017: 58). Reading Hansen’s application of this panexperientialism as a posthuman-cum-nonhuman “phenomenology of contemporary media situations” (Fazi, 2016a: 66), instead of as a Whiteheadian ontology of computational technologies capable of affirming the internality thesis, is, I would say, a partial reading of what the concept of worldly sensibility is doing here. By expanding and propagating worldly sensibility, understood as the domain of the real potentiality of the settled world that informs the genesis of new actualities, digital technologies can be regarded as having the capacity to foster the new and the contingent, not only in rapport with our technologically-enhanced subjective experience but also independently of it.

As a way to conclude this section and segue into an analysis of Fazi’s own project, it is worth going back to one important critique contained in her review of Hansen’s book. As already discussed, she argues that Hansen’s project is primarily phenomenological, and as such it aims “to assess the reality of techno-human experience rather than that of twenty-first-century media per se” (Fazi, 2016a: 66). I have already attempted to present the counterargument that in his concept of worldly sensibility there is a foray into the domain of ontology (and to that extent, it addresses these technologies ‘per se’). However, this incursion has certain limits upon which Fazi’s critique sheds a clear light. She takes Hansen to task for “not engag[ing] more directly with the calculative nature of twenty-first-century media” (ibid.). In other words, his account is circumscribed to the domain of computational media technologies, without attempting in any way to address the ontological specificity of the computational operations that underlie them. Addressing the calculative nature of computational operations is precisely Fazi’s starting point, and trying to affirm and elucidate a contingency inherent to them is her main objective.

Computational contingency II: Quantitative infinity and the incomputable

As already mentioned above, Fazi’s work departs from a critique of contemporary thinking about digital media, where “the digital would seem to be excluded from the production of the new on the basis of its automated repetition of the preprogrammed” (2018: 33). In Hansen’s reading of Whitehead’s process ontology as a metaphysics of worldly sensibility and real potentiality, we can get a glimpse of a proposal that shines a different light on digital technologies. Not only can they serve the purpose of domesticating contingency through the various techniques of algorithmic governmentality mobilised by capitalism’s affective or ‘micro-sociological turn’: technologies of environmental sensing are ontologically productive insofar as they expand the domain of worldly sensibility from where new actualities emerge, and techniques of data harvesting and analysis can provide us with ways to feed-forward information about this infra-perceptual realm into our conscious experience. Even though Fazi also develops her theoretical framework by drawing amply from Whitehead’s metaphysics of experience, she presents us with a radically different account of the way in which computation can foster contingency.

While Fazi recognises the importance of sensation, matter, and embodiment as domains of critical inquiry on the role that computational technologies play in contemporary culture and society, she argues that an exclusive focus on these is not enough. In addition to this, “digital technologies [also] employ strategies of formal abstraction, digital computation has a relation with the intelligible (i.e., what is apprehensible only through forms of abstractive activity)” (Fazi in Beer, 2021: 292). Thus, her aim is not that of addressing the contingency arising in “computation’s existence in its situated environmental complexity” (ibid.: 296) – i.e., a contingency external to computation – nor that of analysing how computational technologies expand the real potentiality contained in worldly sensibility. Instead, her purpose is to discover contingency in the logico-formal and axiomatic core of computational operations.

Fazi argues that it is not necessary to jettison the axiomatic and preprogrammed to inquire about contingency in computation. As mentioned above, the philosophical critiques of computation and the digital which claim that digital technologies cannot produce novelty often ground their arguments on the axiomatic character of computation. As Fazi explains, “according to this perspective, if computation cannot generate true novelty and can only produce an output in accordance with whatever has been provided as input, then this is due to its axiomatic nature: axioms are self-evident truths that confirm what is already known” (2018: 99). This exclusion of axiomatic procedures from the realm of the contingent is precisely the point of contention. Fazi’s central ontological wager is that “the reliance of computational systems on pre-programmed rules does not prevent, but in fact enables, the generation of novelty” (Fazi in Beer, 2021: 303). In other words, our attempt to attribute contingency to computation should not neglect or eschew the abstract formalisms of computation in favour of computation’s phenomenal and empirical engagements. To the contrary, “we should fully engage with formal abstraction and with the rational, discrete and axiomatic character of computation” (Fazi, 2016b).

Drawing from Whitehead’s ‘panexperientialist’ framework, Fazi also approaches the notion of experience in a way that can be called ‘neutral’ since it is not restricted to the realm of human intentionality nor to any specific kind of entity.[xiii] Despite this shared starting point, however, her account diverges in a significant way from Hansen’s. As we discussed above, Hansen grants computational technologies the capacity to experience insofar as they engage in processes of prehension-concrescence that contribute to the propagation of worldly sensibility and the real potentiality contained therein. This capacity to experience and to engage in prehensive processes, however, is that of computational media, and not of computation per se. In other words, it pertains to computational technologies in their rapport with the sensible world, rather than to the underlying computational processes in themselves. In a gesture that could be read as a direct reply to Hansen, Fazi takes a standpoint which is critical of “a sense-data-centred mode of empirical enquiry that overlooks computation’s conceptual capacity or [. . .] computation’s relation with intelligibility” (Fazi in Beer, 2021: 294). To address this ‘other side’ of computation, she effectively brackets out the domain of sensibility (human or otherwise) to inquire about the mode of experience of computational operations – or the experience of the computational – focusing on their strictly formal dimension, independently of their possible engagement with empirical reality. For her, “computation [possesses] a mode of experience, one that is not limited to the sensible input of empirical reality or the simulation thereof” (2018: 6) but instead pertains to its logical-mathematical dimension.[xiv]

Fazi invites us to rethink what computation is and how it relates to indeterminacy or contingency through a speculative account of computation’s experience. Drawing from Whitehead’s ontological schema, Fazi characterises the experience of the computational by rendering computational processes as actual occasions or “discrete processes of self-determination” (Fazi in Beer, 2021: 291). She argues that computational actual occasions – like all other actual occasions – should be understood as processes of determining indeterminacy. However, the indeterminacy in question here is logical rather than empirical. To move forward in this inquiry, she proposes a bold methodological move: to “divorce the concept of the contingent from that of the empirical” (ibid.: 296).[xv] In other words, when discussing computation as processes of determining indeterminacy we are not talking about an (ultimately futile) attempt to capture or reduce the contingency that characterises sensibility and lived experience. If “ontological indeterminacy is what brings about the new” (ibid.: 293), such indeterminacy ought to be located on the side of digital computation itself if we are to think of the latter as ontologically productive. Fazi’s core philosophical thesis is that this indeterminacy or contingency is resolutely non-empirical: it is “logical, quantitative and internal to computational processing. It is the indeterminacy that is inscribed into the formal and mathematical definition of an algorithmic procedure and that, as such, does not have to simulate the indeterminacies of life or lived experience” (ibid.: 291). She claims that “we should understand the contingency of computation as being predicated not upon the latter’s empirical dimension, but upon its formal one” (2018: 17). This is a contingency which “is not dependent on empirical mutability, but which instead works through computational processes of formal discretisation” (ibid.: 116).

How is indeterminacy inscribed in the formal definition of an algorithmic procedure? Addressing this crucial question is one of the central tasks that Fazi undertakes in her work. Her answer is developed through a novel philosophical interpretation of some of the foundational papers of computational theory which have questioned the limits of formal reasoning and the constraints of computational axiomatics, foregrounding the ontological stakes that can be found therein. More specifically, she mobilises Kurt Gödel’s incompleteness theorem and Alan Turing’s notion of incomputability to theorise the relationship between computation and contingency by highlighting the indeterminacy always already inscribed in formal computational operations. Fazi’s endeavour is, in one word, to “think processuality and axiomatic formalism together” (Fazi in Beer, 2021: 300).

In 1931, Gödel demonstrated the incomplete nature of formal axiomatic systems by showing that any consistent system rich enough to express arithmetic contains true statements that cannot be proved within the system itself. This spelt the demise of the Hilbertian project, predicated as it was on the search for solid foundations identified as consistency, completeness, and decidability. Gödel’s response to Hilbert demonstrated the existence of undecidable propositions, exposing the internal limits of formal systematisation. Writing in the wake of Gödel, in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem”, Turing rethought the problem of decidability from the standpoint of computability by inquiring about the possibility of establishing a standard for deductive procedures in terms of finite and mechanical sequences of steps and formal rules, i.e., in algorithmic terms. By addressing this problem with a formal method of computation and a hypothetical universal computing machine, Turing not only provided a mathematical model that would serve as the blueprint for the computing machines to come but also discovered the existence of problems that cannot be solved through finite algorithmic procedures – that is, incomputable functions.
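
The canonical example of such an incomputable function is the halting problem. The following minimal sketch (a rendering of Turing’s diagonal argument in contemporary notation, my own illustration rather than an excerpt from any of the texts discussed here) shows why no finite procedure can decide, for every program and input, whether the computation will halt:

```python
# Illustrative sketch of Turing's diagonal argument. We *assume* a total
# halting decider exists and derive a contradiction; `halts` is hypothetical.

def halts(program, data):
    """Hypothetical decider: True iff program(data) eventually halts.
    Turing's result is that no such total computable function can exist."""
    raise NotImplementedError("no finite algorithmic procedure can realise this")

def diagonal(program):
    # Do the opposite of whatever the decider predicts for `program`
    # when run on itself.
    if halts(program, program):
        while True:       # decider says "halts" -> loop forever
            pass
    return                # decider says "loops" -> halt immediately

# Running diagonal on itself is contradictory: if halts(diagonal, diagonal)
# returns True, then diagonal(diagonal) loops forever; if it returns False,
# it halts. Hence `halts` cannot be both total and correct: the halting
# function is incomputable.
```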

Fazi reads Gödel’s proof of incompleteness as providing the crucial starting point to transform our understanding of formalisms by showing the inherent limitations of axiomatics. Nonetheless, she argues that his Platonist appeal to rational intuition is still tethered to a certain anthropocentrism and subjectivism insofar as it presupposes a human subject capable of attaining intuitive insight into the transcendent realm of mathematical truths or essences that our formal systems cannot account for. This is the point where Fazi turns to Turing. In her reading, Turing’s formalisation of axiomatics in mechanical terms adds “an important non-existential ‘flavour’ to the open-endedness of axiomatics [. . .] incomputability allows me to look into the limits of formalism that Gödel envisaged, and to do so vis-à-vis its utmost mechanisation, i.e., via the computational automation of formal procedures” (2016b). In other words, while Gödel’s appeal to intuition posited a tie between the transcendent powers of human cognition and the infinite, “Turing’s analysis [. . .] turned the tables because nearly everything that used to belong eminently to the order of the organic – including order itself – has now been relegated to the order of the machine” (Pourciau, 2022: 253). The indeterminacy of formal systems is not to be found either in the order of the organic or in the intuitive capacities of the human mind, but rather in the axiomatic character of the computing machine itself.

Computational systems are formal and axiomatic by nature. A computer program functions axiomatically insofar as it operates as a system of pre-determined definitions and rules for deductive inference (a point made concrete in the short sketch following the quotation below). As mentioned above, Fazi argues that the apparently ‘closed’ or self-sufficient character of these formal procedures furnishes the metacomputational approach with its idealist claims to onto-epistemological primacy and infallibility. Gödel’s and Turing’s developments provide the resources to topple this image by proving the ultimate unrealisability of a fully closed axiomatic system. In other words, incompleteness and incomputability, in their respective ways, evidence the failures and limits of formal axiomatic systems, computation among them. In this way, Fazi contends that they also fundamentally undermine the very ontological pretensions of the metacomputational view:

[B]oth Gödel’s proof of incompleteness and Turing’s computational limits can be understood to have rephrased the difficulty faced by man-made formal encodings and procedures in attempting to mirror exactly the empirical world [. . .] Gödel and Turing profoundly upset the transcendentally closed depiction of formal axiomatic systems advanced by computational idealism. More precisely, they preclude the possibility that axiomatic formulation could be the method through which the metacomputation of the real is fulfilled. With Gödel and Turing, one discovers that formal axiomatic structures are not returning the transcendent blueprint to preform and regulate the sensible through deductive and logically necessary representational forms of intelligibility (Fazi, 2018: 119).
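
To see the sense in which a program ‘functions axiomatically’, consider a toy sketch (my own illustration, not Fazi’s): a fixed stock of facts plays the role of axioms, and inference rules are applied until the deductive closure is reached.

# A program as a miniature axiomatic system: pre-set facts (axioms)
# plus rules of inference, closed under deduction by forward chaining.
facts = {"A", "B"}                            # axioms
rules = [({"A", "B"}, "C"), ({"C"}, "D")]     # (premises, conclusion) pairs

changed = True
while changed:                                # iterate to a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)             # derive a new 'theorem'
            changed = True

print(sorted(facts))  # ['A', 'B', 'C', 'D'] - everything deducible, nothing more

Everything such a system can output is already implicit in its axioms and rules; this is precisely the ‘transcendentally closed’ image of formalisation that, on Fazi’s reading, incompleteness and incomputability unsettle.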

Through her rereading of Gödel and Turing, what Fazi proposes is to think of axiomatics differently, so that incompleteness and incomputability are deployed “not to undermine axiomatic formal systems, but rather to enhance the possibility of an ‘open-ended’ – or indeed, of a contingent – understanding of them” (2018: 116). While these mathematical breakthroughs have been read by some as ultimately implying the demise of scientific objectivity and heralding a form of postmodern relativism (Thomas, 1995), Fazi argues this is nothing but a misconstrual that contradicts Gödel’s views concerning the philosophical implications of incompleteness. Neither Gödel nor Turing intended to prove that deductive abstraction was mistaken or misguided, but only that logical-mathematical and deductive reasoning itself exceeds our finite rule-following techniques of formalisation.[xvi] If their work teaches us a lesson, she argues, it is that “formal axiomatic structures always tend towards their own infinity. Incompleteness means that more axioms can be added to the formal system; incomputability means that more digits and steps can be added to computation” (Fazi, 2018: 119). Fazi’s notion of non-empirical contingency is grounded precisely on this metaphysical problem of infinity – or more precisely, on the mechanical discretisation of infinity as it takes place in computational procedures as formalised by Turing.

According to Fazi, one of Turing’s central contributions was the formalisation of a procedure that presupposes a quantitative infinity, i.e., a discretised infinite or an infinite divided into finite parts.[xvii] Turing’s imagined computational machine is a discrete-state deterministic machine that computes real numbers by making binary inscriptions on an endless strip of paper on the basis of an enumerable set of instructions (or algorithms). Operationally it is finite and discrete, and its functionality coincides with the whole domain of finitely articulable – i.e., computable – mathematical operations. In other words, Turing understands the computable as the domain of that which can be calculated through such a mechanistic procedure (Turing, 2004: 58). However, he also discovered the existence of numbers and mathematical problems which are unsolvable by computing machines because they involve infinite sequences of digits that cannot be calculated by means of a finite set of instructions. In Fazi’s reading, the implicit ontological proposition found here is that computation is constituted by discrete quantities which cannot be fully counted (Fazi, 2018: 124). She neatly summarises this point of view claiming that “computing machines are constructed upon the twentieth-century discovery of the logical deadlock between finitude and infinity” (ibid.: 123).
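
Such a discrete-state machine is simple enough to sketch. The following is a minimal sketch, my own and modelled loosely on the first example machine in Turing’s 1936 paper: a finite table of rules drives a head that writes binary figures on an unbounded tape, printing the infinite sequence 0 1 0 1 . . . of which any actual run can only ever produce a finite prefix.

from collections import defaultdict

# Finite rule table: (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ("b", None): ("0", +1, "c"),
    ("c", None): ("1", +1, "b"),
}

def run(steps: int) -> str:
    tape = defaultdict(lambda: None)  # unbounded tape, blank by default
    state, head = "b", 0
    for _ in range(steps):
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(head))

print(run(8))  # '01010101' - a finite prefix of an infinite, computable sequence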

While the problem of infinity in computation is palpable in concrete cases such as the unavoidable presence of bugs in computer programs, Fazi is not so much interested in these operational manifestations as in the “revolutionary ontological proposition” that “computation is made up of quantities, yet these quantities cannot be fully counted” (ibid.: 124). Turing’s conception of computability is grounded on a ‘quantitative infinity’, i.e., on a division or discretisation of mathematical infinity into an endlessly receding sequence of finite steps, the totality of which is incomputable by actual time-bound computational procedures and devices. Crucially, although this quantitative infinity can be pragmatically ignored for practical purposes (as Turing himself seemingly did), Fazi argues that it constitutes an unavoidable element and an essential aspect of the actuality of computation that we have to reckon with. Moreover, she claims that it is precisely what defines computation’s capacity to relate to an indeterminacy located on a formal (and not only an empirical) level. To repeat, this is not the contingency of the empirical world as it entangles itself with the operationality of the algorithmic procedure, nor is it the result of unforeseen technical glitches, accidents or system failures. As Fazi clarifies,

Contingency is the ontological status of algorithmic operations of ordering and systematising the real through logico-quantitative means. The latter are algorithmic operations that are preset, but which are always ultimately contingent because of their internal indeterminacy. This contingency means that formalisation-as-discretisation never exhausts indeterminacy. Computation is made of quantities, yet these are quantities that cannot be fully counted; an ever-increasing number of axioms can be added to the system, and an ever-increasing number of steps can be added to the calculation [. . .] The contingency of computational systems should not be understood in terms of a capacity for accident, wherein the terms of the algorithmic process can deviate from the preset rule. The preset rule does not in fact have to be other than itself in order to be contingent: it does not have to deviate from formal logic or axiomatic deduction in order to embrace indeterminacy, because it is contingent already, at its logical, formal, and axiomatic level, thanks to the infinity that is inscribed in its being preprogrammed (ibid.: 129–130).
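
A deflationary, hands-on illustration of this ‘ever-increasing number of steps’ – mine, not Fazi’s – is the digit-by-digit expansion of an irrational number: each additional step of the calculation below yields one more binary digit of the square root of 2, and no finite run ever exhausts the expansion.

from math import isqrt

def sqrt2_bits(n: int) -> str:
    """First n binary digits of sqrt(2) after the point, via exact integer
    arithmetic: isqrt(2 * 4**n) equals floor(sqrt(2) * 2**n)."""
    scaled = isqrt(2 * 4**n)
    bits = bin(scaled)[2:]            # leading bit is the integer part
    return bits[0] + "." + bits[1:]

print(sqrt2_bits(16))  # '1.0110101000001001'
print(sqrt2_bits(32))  # a longer run adds further digits - never all of them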

Here, it is worth mentioning that other theorists have also engaged with the work of Turing to locate a contingency internal to computation via the notion of the incomputable. Most notably, Luciana Parisi has expanded on this idea through a speculative reading of mathematician Gregory Chaitin’s algorithmic information theory. Reworking the notion of incomputability through information theory, one of Chaitin’s central contributions is the idea of an ‘algorithmic randomness’, to which he gives the name Omega. In rough terms, the theory of algorithmic randomness postulates an entropic tendency in computational procedures, that is, a tendency of data to increase in volume from input to output. This irreversibility or non-correspondence between output and input instructions undermines the picture of computational processes as operations whose outcome is limited to what was pre-determined, and implies that “increasing yet unknown quantities of data […] characterize rule-based processing” (Parisi, 2015: 133). Chaitin explains algorithmic randomness in terms of the incomputable, and his work ought to be regarded, Parisi suggests, as “a continuation of Turing’s attempt to account for indeterminacy in computation” (ibid.). Parisi argues that Chaitin’s algorithmic randomness is a useful development of Turing’s incomputable insofar as it allows us to frame it as an indeterminacy inherent to every computational process, rather than as one that looms on the horizon and presents itself as a limit. In other words, whereas for Turing computation stops when the incomputable begins, “for Chaitin computation itself has an internal margin of incomputability insofar as rules are always accompanied and infected by randomness […] the incomputable [is] the unconditional of computation” (ibid.: 134). In a co-authored text, Parisi and Fazi put it in the following way:

For us, such a reworking of the incomputable is striking and speculatively productive, because what was conceived to be the external limit of computation (i.e., the incomputable) has now become internalized in the sequential arrangement of algorithms (randomness works within algorithmic procedures). One can thus even suggest that algorithmic randomness is not ‘outside’, but has become constitutive of the actuality of the procedure […] indeterminacy is always part and parcel of a determinate actual occasion/algorithm as it strives towards completion (Parisi & Fazi, 2014: 119).
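
Although neither quoted passage spells it out, Chaitin’s Omega has a compact standard definition which may help fix ideas. For a prefix-free universal machine U, the halting probability is

\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}

where p ranges over programs and |p| is the length of p in bits. Omega is a perfectly well-defined real number between 0 and 1, yet it is incomputable and algorithmically random: knowing its first n bits would settle the halting problem for every program of up to n bits, so no rule-based procedure can generate more than finitely many of them. Randomness is thus lodged in the very definition of rule-based computation, which is the sense in which the incomputable becomes ‘constitutive of the actuality of the procedure’.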

In this last quotation from Parisi and Fazi, we glimpse the attempt to assimilate actual occasions with algorithms/computational processes, a conceptual move that lies at the heart of Fazi’s project in Contingent Computation. As already mentioned above, actual occasions are the basic building blocks of Whitehead’s metaphysics of experience. They constitute the basic fleeting occurrences or events of self-actualisation through which the atomic elements of reality come into being by determining themselves. Following Whitehead, Fazi describes an actual occasion as “a process of self-determination that is ingressed by indeterminacy” which is “carried out both at the level of the sensible and at the level of the intelligible” and thus involves “both physical and conceptual operations, conveying distinct yet related ontological determinations” (Fazi in Beer, 2021: 297). These operations are prehensions, which are both ‘physical’ (the prehension of elements of the settled world) and ‘conceptual’. In Whitehead’s system, this latter form of prehension pertains to what he calls ‘eternal objects’, defined as “any entity whose conceptual recognition does not involve a necessary reference to any definite actual entities of the temporal world” (Whitehead, 1979: 44). However, unlike Platonic forms or the Kantian a priori categories, such ‘ideal entities’ cannot be conceived, known or represented by themselves independently of the actual occasions that they inform. It is due to this unknowability that “eternal objects are to be understood as the indeterminacy that determines the realisation of the actual occasion” (Fazi, 2018: 134). Eternal objects provide the actual occasion or event with a source of potentiality that is integral to its realisation.[xviii]

Interpreting computational experience through this Whiteheadian framework, Fazi characterises computational processes through the physical and conceptual prehensions they enact. Computational actual occasions “determine themselves via the physical manipulation of data (in other words, by physically prehending other computational actualities), but also because they address logically – and not affectively or empirically – their own logical indeterminacy”, two dimensions of the same process which are “related and yet not immanent to each other” (Fazi in Beer, 2021: 297). As we have seen, her central claim is that this second logico-quantitative dimension – of conceptual prehensions and logical indeterminacy – presents us with a source of indeterminacy which functions as a generative potential for the production of novelty in computation. The indeterminacy at work here is the incomputable, understood as the quantitative infinity inherent to computational axiomatics. Fazi argues that this quantitative infinity has the same function as Whitehead’s eternal objects insofar as it constitutes the ‘unknown condition’ or ‘pure potentiality’ integral to every computational process.

Having reached this point in our discussion, it is now that we can more clearly appreciate the differences and tensions between Hansen’s and Fazi’s projects. As stated at the beginning, the distinction between their approaches becomes sharply visible when standing at the crossroads between real potentiality and pure potentiality. The choice between these two forms of potentiality reflects two very different ways to affirm the internality thesis and, ultimately, two different ways to argue that computational technologies are ontologically productive. To make this clearer, let’s recapitulate some of the points developed so far.

As explained above, Whitehead characterises the concrescence of actual occasions as processes of determination, the potentiality of which is predicated on the ingression of indeterminacy into their constitution. Real potentiality is the potential that the settled world of dative entities offers for the creation of new actualities through physical prehensions. This form of potentiality is central to Hansen’s account of worldly sensibility and to the role that computational media plays within it. His ‘claim for inversion’ is precisely an attempt to place the real potentiality that results from the indeterminacy of physical prehensions at the forefront; a domain of potentiality which, he argues, is almost entirely neglected by most readers of Whitehead. However, alongside real potentiality, Whitehead also discusses the pure potentiality that arises from the conceptual prehension of indeterminate idealities or eternal objects. In Fazi’s speculative rereading of Turing, the pure potentiality of computational actual occasions is grounded on the incomputable, as the “residue of infinity” (2018: 134) that ingresses into the actuality of computational processes. It is in this sense that computational operations “confront indeterminacy in each and every iterative process” (ibid.: 130). Computational actual occasions prehend the infinite conceptually; they exhibit an intelligible or non-empirical (rather than sensible) relation with indeterminacy.[xix]

In these pages, I have attempted to present both of these proposals as arguments in favour of the thesis which postulates the internality of contingency to computation. This is Fazi’s overt and explicit aim from the get-go, while in the case of Hansen, it is an argument which has to be teased out from his concept of worldly sensibility and analytically separated from his post-phenomenological pharmacology of new media. However, if both can be read as affirmations of the internality thesis, in the introductory section I also mentioned that in each of them the locus of contingency, its ‘degree’ of internality, and its relationship to computational technologies are fundamentally different. On the one hand, Fazi’s staunch defence of a resolutely non-empirical concept of computational contingency signals a source of indeterminacy – the ingression of pure potentiality into conceptual prehensions – that is not related in any way to computation’s role in worldly sensibility. It posits the ontological productivity of computational processes in their formal and mathematical character.

In Hansen, on the other hand, we encounter an argument that posits the ontological productivity of computational media. This involves a source of indeterminacy – the ingression of real potentiality into physical prehensions – which can be said to be internal to computation insofar as, through technologies of sensing and data processing, computation directly contributes to its expansion and intensification. It is a contingency internal to computation, albeit not independent of its imbrication in the multifarious processes whereby the cosmos senses itself. In that sense, it can perhaps be distinguished as a particular kind of ontological productivity that can be called ontogenesis. Steeped in Deleuzian aesthetics, this is a form of ontological productivity concerned with the genesis of individuated entities from the sub-representational domain of micro-variations and affective intensities that constitute the continuum that is the sensible world; it is concerned, in short, with “the ontological inventiveness of the sensible” (Fazi, 2018: 35). In her own account, Fazi rejects certain attempts to include the digital within ontogenesis, arguing that these “break down at a conceptual level when trying to cope with the discreteness of the digital itself” (ibid.: 31). Trying to reconcile the continuous and the discrete, she sees such attempts as bound to fail due to the “incompatibility of two divergent understandings of the structure of the real” (ibid.: 32). This impasse is precisely the reason why she turns to Whitehead, arguing that his process ontology of actual occasions manages to surpass this deadlock. What we find there, Fazi argues, is an extensive structure of ‘atomic events’ (the ‘extensive continuum’) that allows us to think “the becoming of discreteness” (ibid.: 76). This attempt to think the becoming of discreteness lies at the heart of her focus on the pure potentiality that introduces indeterminacy through the non-empirical dimension of computational actual occasions.

Hansen does not discuss computational actual occasions per se, nor does his ontogenetic rendering of computational media account for the contingency attributable to the discreteness of computation’s quantitative and formal procedures in and of themselves. However, by also adopting a Whiteheadian framework concerned with the extensive (or ‘vibratory’) continuum as a “meshwork of real potentiality” (Hansen, 2015: 233) for the becoming of discreteness, his account could arguably be read alongside Fazi’s rather than in contradistinction to it.

Computational processes consist of abstract logical structures which also live a physical life (Possati, 2020: 10); algorithms are “objects that seem to occupy the middle zone or the gap, between […] abstraction and actuality” (Ikoniadou, 2014). As already mentioned above, these are two dimensions of the same process which are “related and yet not immanent to each other” (Fazi in Beer, 2021: 297).

Conclusion

A great deal of contemporary economic and social power is grounded on the pervasive mathematisation, datafication, and quantification of the world enabled by digital media and computational technologies. With this in mind, many critiques of digital technologies have focused on the abstractive maladies entailed by formalisation. And rightfully so: the discretisation inherent to digital representation, and the apparent seamlessness with which this renders things quantifiable and thus instrumentalisable for capital, have spurred several attempts to establish historical continuities and structural isomorphisms between digital abstraction and value-mediated social relations (Beller, 2021; Franklin, 2021). From such viewpoints, one could surmise that the vocation of digital technologies has been limited to an ancillary role in the totalising ambitions of algorithmic governmentality. As I have argued, the predictive vector present in the way our contemporary technosphere has been constructed and imagined partly justifies this prevalent fear of a tendency towards a form of algorithmic determinism, one which has been characterised as a statistical domestication of contingency through techniques of predictive analytics. Likewise, the (often subterranean) presence of the metacomputational ideal, which grants onto-epistemological primacy and instrumental infallibility to computational technologies, has also contributed to adding fuel to these concerns. It would seem, then, that computation and contingency are somewhat at odds with each other. We are told that, in itself, computation is a means to pre-program, predict, and algorithmically determine reality, and that its relationship to contingency can only be external: computation can be seen as operating in non-predetermined ways if we pay attention to its glitches and errors, or if we consider its concrete instantiations as always already being intertwined and embedded in a broader set of cultural, material, and social relations.

I started this text with the injunction to reconsider and rethink the concepts of technics and contingency (as well as their relationship) in light of this present situation. I argued that the externality thesis remains pertinent as a counterargument to the blanket denial of computational contingency. It permits us to cut through some of the tropes and imaginaries that are couched in the metacomputational ideal, thereby allowing us to mitigate, to a certain extent, both the technocratic dreams and the technophobic tendencies that computational technologies are often imbued with. However, it is also limited insofar as it reflects the continuation of an understanding of technics as the domain of artefacts engaged in a prosthetic interdependency or co-determining rapport with the human and its lived experience. Such an understanding of the relationship between technics and contingency is insufficient to address the specific post-prosthetic and non-correlational character of many of the computational technologies that we encounter today. Attending to this specificity means recognising the alterity or alienness of computation vis-à-vis human reality and experience. As mentioned above, contemporary computational technologies operate alongside us, insofar as “they function both in proximity to us, but also in autonomy from us” (Fazi, 2019b: 94). Faced with this crux between proximity and alterity, we must then strive to “open up a conceptual space that would allow us to inhabit this proximity, but at the same time also dwell and build on the alterity” (Fazi, 2019b: 98). In this paper, the attempt to wedge an opening of such a conceptual space has been approached mostly by dwelling on the alterity and doing so primarily on an ontological register. By thinking these technologies ontologically rather than phenomenologically, we stretch the notion of technics to its outer limit and end up with a different understanding of the contingency that technical artefacts involve.

Throughout these pages, I have extended a speculative thesis – contained in the work of Hansen and Fazi – which is grounded on an attempt to address computational systems in their ontological specificity. In a nutshell, this is what I have called the internality thesis: the claim that there is a contingency internal to computational technologies, and that this indeterminacy internal to their operation is what makes them ontologically productive. In virtue of the relationship that they establish with the real potentiality of worldly sensibility and the pure potentiality of quantitative infinity, computational actual occasions establish a prehensive relationship with indeterminacy. With Hansen, this allows us to rethink computation’s entanglement with the sensible world, alongside us but also in autonomy from us. With Fazi, “we are given the possibility of thinking again what formal abstraction is, ontologically, and of thus confronting the speculative richness of the formal axiomatic structure” (2016b). Their Whiteheadian accounts of computation, while divergent in emphasis and focus, allow us to reconceptualise computation so as to bring the contingency of its ontology to the forefront. In short, these authors allow us to understand computation differently. As Fazi writes, “considering computation as a process of determination (and not as a matrix of total determinism) points to an ontological struggle between determinacy and indeterminacy, according to which computational forms are not preformed and static but are instead always eventual, and in a process of becoming” (2018: 132).

Admittedly, ontology and metaphysics might seem somewhat removed from the concrete applications of computation and from the calculative and predictive infrastructures that underpin the technopolitical situation currently unfolding. However, as Fazi rightly points out, “the ontologies and epistemologies of technoscience are never neutral, but in fact often normative and ideological, insofar as they impose upon society and culture specific assumptions” (2019b: 92). Thus, I would argue that the potential outcome of what I have tried to do in this text is more than a mere “digital apologia” (Evens, 2023: 35), i.e., a theoretically-informed corrective to the image of computers as deterministic calculating machines. An ontology of computation which depicts it as a formal matrix of total determinism will likely translate into accounts – both critical and otherwise – which assume that the aim of computational technologies is to overcome contingency and indeterminacy. A speculative ontology of computation which highlights its contingent nature offers us a new perspective from which the relationship between the computational and the world can be imagined otherwise.

References

Bajohr, H. (forthcoming) ‘Writing at a Distance: Notes on Authorship and Artificial Intelligence’, German Studies Review.

Beer, D. (2021) ‘Explorations in the Indeterminacy of Computation: An Interview with M. Beatrice Fazi’, Theory, Culture & Society, 38(7–8): 289–308. https://doi.org/10.1177/0263276420957054

Beller, J. (2021) The World Computer: Derivative Conditions of Racial Capitalism. Durham: Duke University Press.

Blanchot, M. (1993) The Infinite Conversation (trans. S. Hanson). Minneapolis: University of Minnesota Press.

Bradley, A. (2011) Originary Technicity: The Theory of Technology from Marx to Derrida. Hampshire: Palgrave Macmillan.

Debaise, D. (2017) Nature as Event: The Lure of the Possible (trans. M. Halewood). Durham: Duke University Press.

Ernst, W. (2016) Chronopoetics: The Temporal Being and Operativity of Technological Media (trans. A. Ennis). London: Rowman & Littlefield.

Evens, A. (2023) ‘Digital Ontology and Contingency’, in N. Lushetich, I. Campbell, & D. Smith (Eds.) Contingency and Plasticity in Everyday Technologies. London: Rowman & Littlefield, pp. 35–51.

Fazi, M. B. (2016a) ‘Black-boxed’, Radical Philosophy, 197: 64–66.

Fazi, M. B. (2016b) ‘Incomputable Aesthetics: Open Axioms of Contingency’, Computational Culture, 5. http://computationalculture.net/incomputable-aesthetics-open-axioms-of-contingency/

Fazi, M. B. (2018) Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics. London: Rowman & Littlefield.

Fazi, M. B. (2019a) ‘Can a machine think (anything new)? Automation beyond simulation’, AI & Society, 34: 813–824.

Fazi, M. B. (2019b) ‘Distraction Machines? Augmentation, Automation and Attention in a Computational Age’, New Formations: A Journal of Culture, Theory, Politics, 98: 85–100.

Franklin, S. (2015) Control: Digitality as Cultural Logic. Cambridge: The MIT Press.

Franklin, S. (2021) The Digitally Disposed: Racial Capitalism and the Informatics of Value. Minneapolis: University of Minnesota Press.

Fuller, M. (Ed.) (2008) Software Studies: A Lexicon. Cambridge: The MIT Press.

Haber, B. (2016) ‘The Queer Ontology of Digital Method’, WSQ: Women’s Studies Quarterly, 44(3–4): 150–169. https://doi.org/10.1353/wsq.2016.0040

Haff, P. (2014) ‘Humans and technology in the Anthropocene: Six rules’, The Anthropocene Review, 1(2): 126–136.

Hansen, M. B. N. (2015) Feed-Forward: On the Future of Twenty-First Century Media. Chicago: The University of Chicago Press.

Hansen, M. B. N. (2016) ‘The Operational Present of Sensibility’, The Nordic Journal of Aesthetics, 24(47). https://doi.org/10.7146/nja.v24i47.23054

Hui, Y. (2015) ‘Algorithmic catastrophe: The revenge of contingency’, Parrhesia, 23: 122–143.

Hui, Y. (2019) Recursivity and Contingency. London: Rowman & Littlefield.

Ikoniadou, E. (2014) ‘Algorithmic Thought: A review of Contagious Architecture by Luciana Parisi’, Computational Culture, 4. http://computationalculture.net/algorithmic-thought-a-review-of-contagious-architecture-by-luciana-parisi/

Jones, J. (1998) Intensity: An Essay in Whiteheadian Ontology. Nashville: Vanderbilt University Press.

Kapp, E. (2018) Elements of a Philosophy of Technology: On the Evolutionary History of Culture (trans. L. K. Wolfe). Minneapolis: University of Minnesota Press.

Kraus, E. M. (1998) The Metaphysics of Experience: A Companion to Whitehead’s Process and Reality. New York: Fordham University Press.

Lushetich, N., Campbell, I., & Smith, D. (2023) ‘Prologue. Normalising Catastrophe or Revealing Mysterious Sur-Chaotic Micro-Worlds?’, in Contingency and Plasticity in Everyday Technologies. London: Rowman & Littlefield, pp. xi–xxx.

Marenko, B. (2015) ‘When making becomes divination: Uncertainty and contingency in computational glitch-events’, Design Studies, 41: 110–125.

Massumi, B. (2002) Parables for the Virtual: Movement, Affect, Sensation. Durham: Duke University Press.

McQuillan, D. (2018) ‘Data Science as Machinic Neoplatonism’, Philosophy & Technology, 31(2): 253–272. https://doi.org/10.1007/s13347-017-0273-3

Parisi, L. (2015) ‘Instrumental Reason, Algorithmic Capitalism and the Incomputable’, in M. Pasquinelli (Ed.) Alleys of Your Mind: Augmented Intelligence and Its Traumas. Lüneburg: meson press, pp. 125–138.

Parisi, L., & Fazi, M. B. (2014) ‘Do Algorithms Have Fun? On Completion, Indeterminacy and Autonomy in Computation’, in O. Goriunova (Ed.) Fun and Software: Exploring Pleasure, Paradox and Pain in Computing. New York: Bloomsbury, pp. 109–128.

Possati, L. M. (2020) ‘Algorithmic unconscious: Why psychoanalysis helps in understanding AI’, Palgrave Communications, 6(70). https://doi.org/10.1057/s41599-020-0445-0

Pourciau, S. (2022) ‘On the Digital Ocean’, Critical Inquiry, 48(2): 233–261.

Rouvroy, A., & Berns, T. (2013) ‘Algorithmic governmentality and prospects of emancipation: Disparateness as a precondition for individuation through relationships?’ Réseaux, 177(1): 163–196.

Scannel, R. J. (2022) ‘Terra Ignota: Noncorrelation and Computational Agency’, in D. Cecchetto (Ed.) My Computer Was a Computer (Catalyst: M. Beatrice Fazi). Noxious Sector Press, pp. 13–32.

Schneider, J. (2019) ‘New Media Pharmacology: Hansen, Whitehead and Worldly Sensibility’, Theory, Culture & Society, 36(1): 133–154.

Shaviro, S. (2009) Without Criteria: Kant, Whitehead, Deleuze, and Aesthetics. Cambridge: MIT Press.

Stamm, E. (2022) ‘The Digital Image of Thought’, La Deleuziana, 14: 7–18.

Stiegler, B. (2016) Automatic Society Vol.1: The Future of Work. Cambridge: Polity.

Thomas, D. W. (1995) ‘Gödel’s Theorem and Postmodern Theory’, PMLA, 110(2): 248–261.

Turing, A. M. (2004) ‘On Computable Numbers, with an Application to the Entscheidungsproblem (1936)’, in B. J. Copeland (Ed.) The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life: Plus The Secrets of Enigma. Oxford: Oxford University Press, pp. 58–90.

Whitehead, A. N. (1979) Process and Reality: An Essay in Cosmology. New York: The Free Press.

Notes


[i] This idea is discussed in more detail in Hui, 2015.

[ii] Where does this particular relationship between digital technologies, prediction and contingency come from? Several authors have turned towards the history of cybernetics to find an answer. In his incisive study about digitality and control, Seb Franklin investigates how these ideals of control that originated in cybernetics came to exceed the domain of engineering and biological sciences to become the ‘cultural logic’ that underlies contemporary forms of governance and technology. According to him, one can already identify this trend as early as the famous Macy conferences, where the rigorous formal limitations and domain specificity that scientists such as Wiener, McCulloch and Pitts established for the use of cybernetic methods were soon reasoned away when faced with the seductive ideal of forecasting human behaviour and society in general (Franklin, 2015: 15). Thus, the methods and principles of cybernetic control were extrapolated to wider socioeconomic applications from the get-go; a universalizing cybernetics which still prevails today as a generalised sociocultural logic (or episteme) which undergirds the way in which society is thought about and technical systems designed. As Franklin points out, “the vector along which the slippage of materialist cybernetics into socioeconomic applications appears to move is that of prediction or, more accurately, forecasting. It is this prospect of modelling and forecasting (and thus valorizing) social behaviour that drives the desire for a universally applicable cybernetics, and this prospect must be seen as grounded in the idealisation of the universal digital computer” (ibid.: 44).

[iii] Arguably, the field of AI can be seen as a notable exception to this trend. With the recent meteoric rise of large language models, adversarial neural networks, and various text-to-image/video generators, the sheer amount of discussion around the creativity and novelty of machines has reached an unprecedented level. However, I would argue that this raises very different considerations insofar as it always involves identifications and analogies (or the denial thereof) between computational performance and human intelligence/creativity. As Hannes Bajohr argues (Bajohr, forthcoming), in the current conversation about AI, notions of creativity go hand in hand with notions of anthropomorphic authorship and agency.

[iv] Hui borrows the use of the term ‘improbable’ in this context from Bernard Stiegler, who in turn finds it in Maurice Blanchot. In The Automatic Society (Stiegler, 2016) he cites the following passage: “The improbable escapes proof, not because it cannot be demonstrated for the time being but because it never arises in the region where proof is required. […] The improbable is not simply that which, remaining with the horizon of probability and its calculations, would be defined by a greater or lesser probability. The improbable is not what is only very slightly probable. It is infinitely more than the most probable” (Blanchot, 1993: 41).

[v] Hui’s understanding of technics can be situated within the lineage of the ‘originary technicity’ thesis (Bradley, 2011). With roots in the work of Heidegger, Leroi-Gourhan, and Derrida, this thesis goes one step beyond the view of technical objects as the “projection of organs” (Kapp, 2018) to argue that technology is not only constitutive of our physiology but also of our modes of cognition, sensibility, sociality and the way we experience time. It is in this sense that technology assumes a quasi-transcendental (Bradley, 2011: 127) or a-transcendental (Hui, 2019: 202) position, where – as a posteriori that becomes a priori – it is neither purely transcendental nor completely empirical. One of the most important authors within this lineage is the late Stiegler. His work highlighted the pharmacological dimension of technical objects, understanding their nature as originary mnemonic supplements or prostheses.

[vi] Although one can read these terms as equivalent, once they are placed in their philosophical context there is a significant conceptual difference between them which will be developed near the end of the text. I thank one of the anonymous reviewers for insisting on the importance of this distinction, allowing me to sharpen my argument on this particular point.

[vii] More broadly speaking, the critical stance towards ‘the instrumentalisation of abstraction’ (Fazi, 2019a: 818) has a long lineage which can be traced back to paradigmatic critics of modernity and instrumental reason such as Max Weber and the Frankfurt School. In their accounts, they warned against the way in which a form of thinking which acts through generalisations, and is thus detached from the particular and from lived experience, was mobilised by modern industrial society with totalising consequences. As Fazi points out, the strength of computation as formalised in the 20th century (by Turing and others) stems precisely from its capacity to posit a mechanical and rule-based procedure which is detached from the particular; a “procedural determinism of rules of inference” (ibid.) which is universal insofar as it is abstract: “Turing’s digital computers are machines that are arithmetic, highly formalised and formalising, and which pertain more to the laws of abstraction than to the laws of matter and life. As such, they are also machines that can be universal, for all the good and bad that this universalism implies” (ibid.: 822). If one assumes this standpoint, computational techniques can easily be depicted as a continuation, extension, or offshoot of instrumental rationality.

[viii] Fazi describes the explicative and regulative force of computational idealism by assimilating it to a form of Platonism: “from a philosophical perspective, it is based on the presupposition that the intelligible dimension of eternal ideality might offer a means of regulating the phenomenal and the material, and that this normative character might be expressed through the transcendent nature of the realm of logico-mathematical formulation in respect to its sensible instantiations [. . .] Computational idealism, in this sense, might be seen to reiterate the Platonic top-down approach of the intelligible informing the sensible” (Fazi, 2016b).

[ix] While sympathetic towards such “material ontologies of computation” and their rebuttal of metacomputationalism, Fazi is also critical of the way in which “such efforts to reconceptualise the actuality of algorithms can also be said to often flatten down the contingent dimension of these algorithms’ material reality onto the empirical variability of technosocial and technocultural assemblages” (Fazi, 2018: 109). As we will see in more detail below, the core aspect of Fazi’s proposal is a non-empirical conception of contingency which addresses computation specifically in its formal dimension.

[x] Hansen defines twenty-first-century media in the following way: “By twenty-first-century media, I mean to designate less a set of objects or processes than a tendency: the tendency for media to operate at microtemporal scales without any necessary – let alone any direct – connection to human sense perception and conscious awareness” (Hansen, 2015: 37).

[xi] Hansen is trenchant in his repudiation of any allegiance with contemporary theoretical currents that can be identified under the banner of new materialism and/or speculative realism. He writes: “To put it bluntly, I don’t believe such an aim is tenable, and I further believe that Whitehead’s work shows us why, insofar as it undertakes a radical deterritorialization of experience from the framework of human affairs that ultimately – and as it were, inexorably – yields a far more complex account of the human, and indeed of the universe itself as an entity (the supreme society) that necessarily includes (or as I prefer to say, implicates) humans within it and within each and every element comprising it” (Hansen, 2015: 15–16).

[xii] Hansen rejects “the orthodox picture that subordinates the settled world – or worldly sensibility – to the genesis of new actual entities” and argues that, “Once we correct this picture, we can appreciate that sensibility exercises its power not simply via its “incorporation” into new actual entities, but as catalyst for and environmental sculptor of the concrescences that yield new actualities” (2015: 172).

[xiii] Steering clear from the vitalist and panpsychist undertones often associated with Whitehead’s ‘panexperientialism’, Fazi opts for a more minimal definition of experience as the self-determination of actuality: “Whitehead’s notion of experience does not suggest any existential connotation, related to what is lived, nor is it circumscribed to the sensible either [. . .] the point that needs to be made is as follows: for Whitehead, all actuality experiences because experience, in his philosophy, is equivalent to the process of self-determination of actuality [. . .] I follow Whitehead’s characterisation of experience as self-actualisation to argue that computation’s own experience can be understood to correspond to computation’s own process of self-determination” (2018: 13–14).

[xiv] In an interview with David Beer, she succinctly expresses this crucial point in the following way: “I investigate the possibility for computation to hold a level of experienceability that is specific to its logico-quantitative character. In this sense, one of the central speculative operations that I carry out involves looking at how computation’s abstractive discretisations might construct the experience of computation (i.e., computation’s own experience) beyond or before the possibility of an associated milieu between not only machines and humans, but also between ontologies of the mechanical on the one hand, and ontologies of the lived, on the other” (Fazi in Beer, 2021: 296).

[xv] Another target of Fazi’s critique is what she calls “computational empiricism”, i.e., the non-standard technoscientific view that tries to make computational systems behave in more complex and less pre-determined ways by taking the behaviour of biological systems as their models. However, she argues that such computational endeavours (exemplified by genetic algorithms and evolutionary computation) still presuppose that indeterminacy is exclusively located in the mutability of the empirical world and not in the formal and quantitative aspect of computational procedures themselves. See Fazi (2018: 143–156).

[xvi] “Both Gödel and Turing never intended to abandon the ship of deductive reasoning, or sought to imply that that ship was sinking. Incompleteness and incomputability do not suggest an arbitrary ‘anything goes’ attitude or indicate that deductive abstraction is inescapably mistaken. On the contrary, they prove that logicomathematical reasoning cannot be contained within a finite formulation” (Fazi, 2018: 117).

[xvii] “What is formalised inside the computing machine? The operational answer would be that we formalise a procedure that works by dividing the infinite into finite parts” (Fazi, 2018: 123).

[xviii] Whitehead’s notion of eternal objects is a complex one to parse and understand, not least because of their seemingly paradoxical place in the context of a metaphysical system grounded in becoming, process, and events. Steven Shaviro gives a useful account of them, framing their attribution of ‘eternity’ precisely as a crucial aspect of Whitehead’s attempt to dethrone the realm of ideality from a transcendent status and include it as a ‘relational’ and ‘emergent’ aspect of the reality of experience. See Shaviro (2009: 37–45). His account, however, assimilates the realm of eternal objects with that of the Deleuzian virtual. Fazi is critical of this move, since she argues that it elides the fact that, while the indeterminacy of Deleuze’s virtual pertains to the domain of sensibility, the indeterminacy of conceptual prehensions pertains to the domain of the intelligible.

[xix] “The formal structure has its own way of being eventual, and therefore of ‘experiencing’ insofar as it determines itself vis-à-vis indeterminacy. Via the notions of incompleteness and incomputability one discovers that the inherent indetermination of computation pertains to its intelligible dimension, and that this indetermination is encountered by way of abstraction: this is a formal indeterminacy, not an empirical one” (Fazi, 2016b).

Alan Díaz Alva is a PhD student at Leuphana Universität Lüneburg, Germany. ORCID id: https://orcid.org/0000-0002-8192-1754

Email: alan.diaz.al@gmail.com / alan.g.diaz@stud.leuphana.de
