
For the official version of record, see here:
Fitzgerald, A. (2025). Death by Data: Abstraction and the Political Economy of Computationally Driven State Violence. Media Theory, 9(1), 169–200. https://doi.org/10.70064/mt.v9i1.1169
Death by Data:
Abstraction and the Political Economy of Computationally Driven State Violence
ANDREW FITZGERALD
Rensselaer Polytechnic Institute, USA
Abstract
Computational state violence has developed through the broader political economy of datafication, where increasing dimensions of life are “made productive” via the intensification of enclosure, data capture, and analysis for recursive operationalization. Through a Marxist conceptualization of abstraction, this article analyzes the U.S.’s use of metadata and geolocation algorithms for targeted killings, and the broader “platformization of the military” including Israel’s AI-powered bombing campaigns. It examines two interrelated aspects of the rhetoric legitimating this lethal labor, which compound abstraction and ethico-political mystification: first, “accountability” frameworks embracing illusions of human oversight for systems operating beyond human cognition and accelerating deadly action; second, the framing of the technology itself as both problem and solution, normalizing fallibility and the unknowability of lethal outcomes in the factory-like production of mass violence. Capitalism’s data-driven expansion extends responsibility for computational state violence to commercial users of datafied systems, obliging mass action to halt the scaling of these systems’ atrocities.
Keywords
Algorithmic Warfare, Artificial Intelligence, Datafication, Military Targeting, Political Economy
Unity remains the watchword from Parmenides to Russell.
All gods and qualities must be destroyed.
— Max Horkheimer & Theodor Adorno (2002), The Dialectic of Enlightenment
We kill people based on metadata.
— Former NSA director Gen. Michael Hayden (Cole, 2014)
It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.
— Analyst working in the Israeli military's New Targets Administrative Division, commenting on an AI-driven targeting system used in Gaza (Abraham, 2023)
On July 5, 2024, American whistleblower Daniel Hale was released from federal prison, having served 33 months of a 45-month sentence for leaking classified documents about the U.S. drone assassination program. In 2013, as an intelligence analyst deployed within Joint Special Operations Command (JSOC) at Bagram Airbase in Afghanistan, Hale assisted in the targeting of drone strikes within JSOC's wing of the targeted assassination program.[1] Specifically, Hale was tasked with supporting a targeting system utilizing cellphone metadata that was collected in-theater via pods mounted on the drones, supposedly affording "active" geolocation of targets on the kill list. While this practice afforded a veneer of accuracy, what drove Hale to leak the documents was how often they got it wrong and killed people who were not the target––a leaked Pentagon report (Scahill, 2015) found that nearly 90% of people killed during a five-month period were not the intended targets. Just before Hale's release from prison, another exposé highlighted a new frontier in data-driven state killing: Israel is using AI to generate targets in its mass bombing of Gaza (Abraham, 2023; 2024). In the decade since The Intercept published "the Drone Papers" (2015), computationally driven state violence has only scaled up, as has its entanglement with a broader political economy––and attendant epistemology and ethics––that stretches from everyday digital life to the obliteration of city blocks and the elimination of bloodlines.
This article outlines how such computationally driven state violence has developed in concert with and been legitimated by broader political economic shifts tied to datafication, including the expansion and increasing intimacy of capitalist enclosure as ever-greater swaths of life are "made productive" through data capture (Couldry and Mejias, 2018; Fitzgerald, 2019), and the intensification of abstraction through this data's analysis and recursive operationalization. Datafication's epistemology and ethics are biased towards action, based on logics of possibility (Amoore, 2013) and, in the specific context of state violence and state security apparatuses, preemption (Andrejevic, 2019; Massumi, 2015): influencing the future through ongoing analyses and decisions based on past-generated and decontextualized data. Through the drone program, and through practices of surveillance and "algorithmic warfare" more broadly, datafication has facilitated a "reinvention of accuracy," one that is "technologically-determined" and that serves "an apparatus designed to normalize unaccountable acts of extrajudicial assassination" (Suchman, 2020: 176), including indiscriminate mass violence.
Situating my argument in the longer nexus of (often racialized) violence, media technology, and capitalism, I examine the American use of metadata and geolocation algorithms for targeted killing, and the broader "platformization of the military" (Hoijtink and Planqué-van Hardeveld, 2022) including the integration of machine learning and invisual operations (Parikka, 2023) as seen in Israel's AI-fueled bombing campaigns. I then turn to the rhetoric legitimating this lethal labor, examining how the prospective ethics of these complex, bureaucratic, but semi-automated systems––before they are fully revealed and subjected to public criticism––relies on two interrelated factors that compound abstraction. The first factor includes "accountability" frameworks embracing anthropocentric illusions (Kittler, 1986; Packer and Reeves, 2020) premised on holding individual human actors accountable, part of the broader legitimation of datafication and automation that is increasingly operating beyond human sensory perception and cognition (Hansen, 2015; Hayles, 2017). The second factor is how the technology itself is rhetorically framed as the problem––a point I link to the racialized development of light-based media and its attendant industries and culture (Dyer, 1997)––such that an acceptance of failure and unknowability of lethal effects is baked into the adoption and deployment of such systems in the factory-like production of mass violence. This affords an ethical distancing that conforms to a logic of constant collection, analysis, targeting, and execution extending throughout the data-driven political economy from algorithmically targeted advertisements to the destruction of whole buildings in airstrikes.
From early capitalism to datafication: enclosure, extraction, and abstraction
I begin by tracing the relationship between capitalism, enclosure, extraction, and abstraction––and abstraction's connection to categorization, quantification, technical knowledge, and mediation––before turning to the intensification of abstraction in computational datafication and its relationship to state violence. In Marxism, capitalist abstraction entailed the decontextualization of social relations into "indirect, and impersonal" (Best, 2010: 11) relations mediated by commodities and values exchanged and circulated within the market (Marx, 1993). For labor, this abstraction related most immediately to the extraction of surplus value and accumulation of capital. It was also coeval with laborers' progressive alienation across levels of relation scaling from the micro to the macro: an alienation from the fruit of one's labor, with the product becoming that "alien, hostile, and powerful object"; alienation from the labor process, where one's own "activity" no longer belonged to them; alienation from one's "species-being," in short, the conversion of one's irreducible spirit or human qualities into instrumental "means"; and a more generalized social estrangement from others as relations among workers are abstracted into market logics and individualist ideologies of competition and self-interest (Marx, 1844). Alienation thus constituted a generalized process of worker dehumanization operative across the psychological, organizational, and social levels.
Over time, early capitalism's abstractions scaled into systems of knowledge including the "sciences" of economics and business management. With this scaling came the increasing quantification of labor in order to extract ever more surplus value, and an attendant detachment from skills or clear use values constituting what Marx (1992) termed abstract labor, "the abstracting of all individual and specific characteristics of actual, physical labor resulting in socially average labor-power" (Best, 2010: 14). Best (2010: 14) summarizes abstraction's centrality to capitalism: "the process of the abstraction and subsequent equalization of human labor is the movement of [the] economy" (emphasis added). Yet even as this movement totalizes, as Alberto Toscano (2019: 12) writes, "abstraction may be understood as the high-level 'logic' at work in capitalism at its most generic […] [it] also defines multiple and highly-specialized social practices in the domains of science, law or politics that involve a separation from concreteness and intensely formalized operations." To theorize abstraction as practices denaturalizes it and puts it in the realm of social action and construction––things could be thought and done otherwise. It also situates generic––if not global––actions of abstraction within scaling contexts of practice from their immediate application by practitioners to their broader organizational and ideological settings as well as their migrations, circulations, and imbrications.
These scaling contexts of abstraction developed in tandem with the broader historically contingent emergence of objectivity and attendant concepts of “fact”––a history usefully outlined by foundational work in science and technology studies (STS). Mary Poovey (1998) traces the emergence of the modern fact as a product of early modern scientific and economic practices. For example, merchants’ double-entry bookkeeping––recording every transaction as both credit and debit––naturalized value as essentially economic and thus intrinsic to the commodity while situating the practice within the domain of empirical facts, and by extension stable (economic and not social) reality. It also was a model for a self-legitimating locus of knowledge-power. As Poovey (1998: 30) writes, “the rhetorical apology for merchants embedded in the double-entry form drew additional support from the epistemological claim built into this system of writing: the double-entry system seemed to guarantee that the details it recorded were accurate reflections of the goods that had changed hands because the system was formally precise”––a haunting conflation of accuracy and precision when juxtaposed with contemporary algorithmic warfare.
Ian Hacking (1990; 2006; 2016), building on Foucault's biopolitics, notably argued that statistical thinking and numeracy emerged not only to describe and naturalize the world as it existed but to constitute and manage it through practices of classification and population-level governance––an institutionalization of abstraction within modern enclosure that I expand upon further below. Drawing on the work of Theodore Porter (1995), we can add that the embrace of and even demand for numerically tethered objectivity in emergent professional and bureaucratic fields developed as a legitimation strategy, securing authority when trust in these nascent fields was weak or contested. Numbers thus served not merely as an index of reality but as an apparatus producing and proliferating (seemingly) depersonalized and mechanical trust in practices and institutions.
Quantification proved so socially and politically useful that it led to what Hacking (2016) dubbed the "avalanche of printed numbers," increasing with the emergence of pre-electronic digital infrastructures. Hacking marks the 1890 US census as a pivotal moment with the adoption of punch cards as a means for storing and mechanically sorting the ever-growing mountains of "information"––a system developed by key census statistician Herman Hollerith, whose Tabulating Machine Company was later merged into the Computing-Tabulating-Recording Company, subsequently renamed IBM. Tracing the emergence of these information infrastructures in the context of classification systems and standards from the 19th and 20th centuries up to 1990s software, Bowker and Star (1999: 47) articulate how they worked as "scaffolding in the conduct of modern life." A key argument they make––invoking, briefly, Marx's notion of technology as "frozen labor"––is that "values, opinions, and rhetoric are frozen into codes, electronic thresholds, and computer applications" that classify and sort information in seemingly mundane practices but with relatively significant stakes (1999: 135). While "to classify is human," as Bowker and Star note, this is not to say that we could not classify otherwise––a point underscored by this broad literature on the politics of standards, accounting, metrics, and algorithms within STS and adjacent fields. This is critical, given the ethico-political stakes of quantification and some of its most violent legacies.
The Frankfurt School's critique of capitalist abstraction consistently emphasized its relationship to violence, in particular its dialectical generation of fascism.[2] Horkheimer and Adorno (2002) famously traced liberal capitalism's abstraction––and fascist mobilization in response to it––to enlightenment thought, which broadly entails collecting and analyzing quantitative data and then putting it to instrumental use. While ostensibly aimed at eliminating the tyrannical power of myth and superstition, this simultaneously mythologized calculability and technological language. The abstraction of quantification therefore coincided with mystification, and was, in Horkheimer and Adorno's words, "totalitarian," aiming to subject all objects to "means" of measurement, analysis, and manipulation towards instrumental ends.
Horkheimer and Adorno were more correct than they could have known. Just two decades after IBM's founding, the company infamously collaborated with Nazi Germany, facilitating what Kenneth Werbin (2017) dubs Nazi Governmentality. This began with the use of "list technologies" to enforce "caesuric fractures between 'normal' and 'abnormal' populations" based in the "pseudo-scientific articulations of biology and taxonomy" of Nazi race theory (2017: 17-18). The system of classification was then operationalized for a "final accounting of humans" (2017: 49-50), enabling the regime to manage populations under its control, assess and intervene against perceived risks, mark those categorized as abnormal for containment and elimination via sterilization or extermination, and facilitate the execution of those policies. Liberal enlightenment's universalizing ambition thus worked in concert with capitalist abstraction, and was consequently tied to the quantification, racial categorization, and operationalization of data for the violences of colonization, chattel slavery, and, in the case of 20th-century fascism, eliminationism (Browne, 2015; Parikka, 2023; Robinson, 2021).
Attendant upon early liberal abstraction was the expansionist enclosure of common space as private property, what Marx dubbed primitive accumulation—the first stage of establishing territory in which the masses could be dispossessed of ownership and control of their concrete labor, and in which abstract (and alienated) labor could therefore dominate. As noted above, the rise of modern liberal enclosure thus facilitated a further abstraction of power, and operation of power through abstraction, in the institutionalization and disciplining of knowledge, particularly through the emergence of technical knowledge: a syncretization and normalization of "the plural, polymorphous, multiple, and dispersed existence of different knowledges" by technical, more "industrialized knowledges […] that circulated more easily" (Foucault, 2003: 179). While disciplinary enclosure, surveillance, and sovereign violence persisted, the ever-present demand for increasing and increasingly uninhibited circulation of commodities and capital coincided with the emergence of governmentality and the biopolitical management of populations.
Key to this shift was the emergence of the security apparatus (Foucault, 2007). Amoore (2013: 65) describes how “the techniques of the disciplinary society and the exercise of sovereignty become correlated elements in a mobile and modulated approach to the norm […] [wherein] the differential curve of normality breaks subjects and objects into elemental degrees of risk such that the norm is always in a process of becoming.” These elemental degrees of risk and their ongoing calculation were reliant upon media technologies and their operationalization: mapping, filing, databasing, and disseminating directives or policies based on calculations of aggregated data, as well as their integration with techniques for identification and intervention––this interventionism accelerated with the emergence of visual technologies, the measurable image, and their operationalization within this apparatus (Parikka, 2023).
Alliez and Lazzarato (2018) argue that the security apparatus interlaced logics of war and capitalism, “militarizing” the economy and society through successive phases. These phases included the rise of mass production (accelerated by successive world wars and coinciding with further abstraction of labor in Taylorist “scientific management”), and neoliberalism’s public disinvestment, deregulation, and deterritorialization. This latter shift coincided with the rise of the euphemistic “police action” ensuring access for Western-concentrated Capital to markets, resources, and cheap labor globally, while necessitating computationally-enabled mass surveillance, incessant analysis, and the “self-deforming cast” of data-driven modulatory control (Alliez and Lazzarato, 2018: 365–368; Deleuze, 1992). As the security paradigm constituted, in Foucault’s (2007; 2008) terminology, milieux to maximize “good” circulation while isolating risks to it, it did so by abstracting people into these categories of degrees of risk.
When contemporary "terrorism" emerged as a central risk category in the 1960s and 1970s, the interests of Capital still determined when intervention was deemed worthwhile. After the hijacking of commercial airliners became a "viral" phenomenon in the late '60s, American airline companies lobbied extensively against the implementation of passenger screening and checkpoints, arguing that the lines and increased affective friction would lower sales (Koerner, 2014). The scales tipped as the trend persisted and hijackings turned deadly; when behavioral profiling and selective screening were implemented, many passengers who were wrongly flagged for screening nonetheless supported the practice, relieved that something was being done to address the spate of hijackings (2014: 67-69). After 9/11, profiling, categorizations, and "lists" proliferated further and became central to a new era of in-your-face securitization––with the "no-fly list" serving as both an exemplar of security theater and a cautionary tale of techno-solutionist mystification masking racial discrimination in policing "high-risk milieus of circulation" (Werbin, 2009: 614).
The “mobile norm” thus prefigured the serial scaling of various forms of capitalism—and with them, new circulatory milieus to securitize—from consumer capitalism to financial capitalism, and in turn, what has been variously dubbed “surveillance capitalism” (Zuboff, 2015), “platform capitalism” (Srnicek, 2016), or other terminological variations noting the centrality of datafication to this latest iteration of the political economy (Couldry and Mejias, 2019; Fitzgerald, 2019; Segura and Waisbord, 2019). Whatever one’s preferred terminology,[3] this new form of capitalism coincided with the emergence of what Mark Andrejevic (2007) calls the digital enclosure, extending Walter Benjamin’s (1982) study of the Paris shopping arcades, where consumers could leisurely wander and peruse displays all while producing the abstraction of commodity fetishism that further mystifies labor and its value. It is in this more intimate and mobile enclosure, where user data is shed and captured for Capital, and where surveillant assemblages (Haggerty and Ericson, 2003) aggregate and analyze data traces, that the groundwork was laid for today’s ever-increasing datafication and automation, and where a generalized ethic of incessant action and preemption could fully take hold (Andrejevic, 2019).
Semi-automated targeting and the slippery epistemology & ethics of datafication
Contemporary computationally-driven state violence traces a lineage to the pre-digital theories of cybernetics, like Norbert Wiener's iconic plans for a self-targeting anti-aircraft gun during World War II (Galison, 1994; Wiener, 1980). The cybernetic vision was more fully realized postwar in Cold War corporate architecture (Martin, 2003), particularly the overlapping aesthetic and technological applications that migrated outward from the military-industrial complex into broader workplace control and urban design shaped by the prospect of nuclear war, and manifested in air defense systems like the Semi-Automatic Ground Environment (SAGE). The dream of real-time information processing, feedback, and automatic action was also realized and deployed in "hot" conflicts of the Cold War, serving as an early prototype for data-driven drone strikes. Shaw (2016: 86) notes one such case with the United States's Operation Igloo White during the Vietnam War, where truck routes along the Ho Chi Minh Trail were seeded "with myriad electronic sensors delivered by airplane [allowing] bombers [to] then be directed toward the electronic signal producing an automated link between sensor and shooter." This semi-automated warfare afforded a perceived accuracy in the sense that it conserved bomber fuel and ordnance, and reduced the risk of being shot down, by reserving strikes for operational instances with a higher probability of hitting intended targets. At the same time, it also abstracted this revised meaning of accuracy from the real presence of the targets (substituting the sensor signal) and distanced the bomber crews from culpability for what was in fact destroyed and who was really killed or injured.
A similar dynamic unfolded after 9/11 with the early integration of "platform logics" into intelligence analysis and the rise of algorithmic targeting of drone strikes––both a migration of empirically dubious abstractions from the emergent digital political economy into "warfighting." In the former case, Werbin (2011) notes how Web 2.0's "bottom-up collective sensemaking" inspired intelligence agencies to adopt internal social media platforms utilizing affordances such as analyst "tags" and user-generated wikis that could be quantitatively analyzed. Once again, we see the self-legitimating move of categorization-as-quantification: "fundamentally qualitative decisions (e.g. one analyst's decision to tag a person as a 'security risk') can become a problematic form of quantified and distributed fact (e.g., x number of analysts confirm this person a 'security risk' […]), ultimately obscuring the trails of how the tag was established in the first place" (2011: 1257).
In the case of algorithmic targeting, the practice emerged as a relatively convenient media technological fix for a logistical (and political) limitation, what commanders involved in the program came to refer to as the tyranny of distance (Currier and Maas, 2015)—an inversion of the constructed "tyranny of convenience" that facilitated the social integration of highly intimate dataveillant technologies more generally (Andrejevic, 2007). As the United States wound down its more conventional occupations in Iraq and Afghanistan––shrinking its "footprint" and its publicly acknowledged involvement in increasingly unpopular and costly wars––the drones had to fly longer distances to reach targets within extraterritorial areas such as Waziristan (the tribal region along the Afghanistan-Pakistan border), Yemen, Somalia, and West Africa, far from formal American operating bases. Operating closer to the limit of the drone fleets' range, Western militaries and intelligence services lost the critical advantage of constant hovering––what Chamayou (2015) calls "the constant stare." In its place, signals intelligence (SIGINT), such as cell phone metadata collected in a potential target area, and the outputs of computational analysis, provided a convenient work-around for the tyranny of distance.
As Chamayou (2015: 42–44) notes, drone warfare, at least in its use for Western targeted killing programs, combines "the principle of data fusion" and "the principle of schematization of forms of life." These principles also mutually underpin and are spread by:
- new capitalist modes of digital enclosure, data extraction, and abstraction, such as secondary markets for data analysis and data brokers, and the algorithmic targeting of advertisements based on imprecise “measurable types” (Cheney-Lippold, 2017) and “data doppelgängers” (Harcourt, 2015)
- the surveillant assemblages of the security apparatus, including mass surveillance programs that create SIGINT databases leached from the circulation of digital communication within commercial media infrastructure
- the ongoing development and legitimation of abstracting practices through advancements in computational and data sciences that traverse both consumer/user-oriented markets and the military-police security apparatus
These twin principles fueled an increasing use of drones in protracted and proliferating "distant" conflicts. This included the adoption of signature strikes, where targets were chosen and the decision to strike made based on patterns of behavior suggestive of a "terrorist" (e.g., the infamous wedding caravan mistaken for a convoy of militants), and personality strikes, where individuals were targeted and struck because their data shadow appeared to match someone on the kill list. Abstraction in the drone assassination program thus wedded decontextualized visual and pseudo-visual information––such as infrared imagery on drones which "turns all bodies into indistinct human morphologies that cannot be differentiated according to conventional visible light indicators of gender, race, or class" (Parks, 2014: 2519)––with the emergent realm of invisual information, how platforms and automated computational systems "see" (Mackenzie and Munster, 2019; Parikka, 2023) through the processing of massive volumes of non-ocular data. These invisual abstractions could include the transformation of digital images into invisual information through "computer vision," which can then be operationalized by machines.
In the 2010s drone assassination program, humans like Hale were still an essential part of the sociotechnical assemblage. However, compounding practices of abstraction––not only the remote piloting of the drone but the increasing incorporation of invisual computational operations in targeting (e.g., geolocation algorithms processing SIGINT and predicting where a perceived target would possibly be)––led to an estrangement from the targets and the act of killing them. For example, The Intercept’s exposé on the US drone assassination program noted that for many analysts the targets become “just a ‘selector,’” the term used for information such as cell phone numbers or metadata used to “find, fix, and finish” the targets on the kill list. Scahill (2016) anonymously quotes the whistleblower:
It requires an enormous amount of faith in the technology that you’re using … It’s stunning the number of instances when selectors are misattributed to certain people. And it isn’t until several months or years later that you all of a sudden realize that the entire time you thought you were going after this really hot target, you wind up realizing it was his mother’s phone the whole time.
As Gusterson (2016: 91) writes of the layered abstraction in this assemblage, it is a "process of technical, organizational, and ethical slippage." The drone assassination program allowed the operation to get ahead of the analysis, with brutal results for the innocent victims of strikes as well as for some of the perpetrating drone operators and analysts, like Hale, who discovered their errors after the fact.
This privileging of operation and action, eclipsing, if not fully obviating, deliberative analysis, indexes the significant restructuring of phenomenology, epistemology, and ethics through datafication. Using the figure of the drone to illustrate this move across both military media technology and commercialized popular media ecosystems, Mark Andrejevic (2015: 214) writes:
Despite the rhetoric of personalization associated with data mining, it yields predictions that are probabilistic in character, privileging decision making at this level. Moreover, it ushers in the era of what might be called emergent social sorting: the ability to discern un-anticipatable patterns that can be used to make decisions that influence the life chances of individuals and groups […] At a deeper level, the big data paradigm proposes a post-explanatory pragmatics (available only to the few) as superior to the forms of comprehension that digital media were supposed to make more accessible to a greater portion of the populace.
While Andrejevic is correct to note datafication’s post-narratival pragmatics (and the parochialism of baseline technical literacy necessary to grasp what is occurring even abstractly), the paradigm is not so much probabilistic, but as Louise Amoore (2013: 70) convincingly argues, possibilistic. Amoore (2013: 74) traces the mutual adoption of computational “means of dividing, separating, and acting upon arrays of possible futures” in both the financial sector and the security sector. Probabilistic analysis and justification for action, which tended to correlate with high thresholds for evidence, “occludes the black swan event––the improbable, hi-impact occurrence” (2013: 74) such as a terrorist attack, the presence of a hunted target in a strikable location, or the “conversion” of a data body algorithmically targeted for an advertisement into a purchase. With the rise of machine learning, computational analysis has significantly “scaled the capacity to act on the basis of what is not known” (2013: 62) to consider a vast array of possible futures based on disparate data sets––even synthetic ones (Jacobsen, 2023; 2024)––in order to act on and therefore influence these possible futures.
The shift to possibilistic standards biases processes towards action, and in the same move preemptively justifies it, thus abstracting the action from meaningful ethical reflexivity and justification (Fitzgerald, 2024). This shift built on, but went beyond, the broader paradigm of preemption in post-9/11 Western military strategy (Andrejevic, 2019; Massumi, 2015)––an extension of the so-called "Bush Doctrine" that was further legitimized and institutionally entrenched within the "sovereign presidency" (Hiland, 2019) of the American executive branch by the Obama administration, as preemptive conventional war gave way to the more technocratic, "innovative," and legalistically justified distanced and "remote" warfare of the drone program and special operations raids. Preemption thus supplants the deterrence logic of the Cold War, and "imposes the imperative of ongoing, incessant and accelerated intervention" (Andrejevic, 2019: 76).
Massumi, Amoore, and Andrejevic all highlight that computationally driven state violence and security practices aim to simultaneously assess and act to shape and limit the emergence of possible futures through present recontextualizing (and abstracting) analysis of past-generated data. These systems are, as Mark B.N. Hansen (2015) describes, feed-forward, influencing the shape the future will take through a process of ongoing present, yet prospective and future-oriented, analysis and operationalization of the massive "data lakes" generated in the past and aggregated via diverse data streams. In contrast to the cybernetic "feedback" of Wiener's automated gun, predicting where an already known and acquired target will most likely be in order to accurately direct fire, a feed-forward warfare system––driven by machine learning and cloud computing provided by "Big Tech" firms––might use decontextualized data to generate a vast array of entirely new targets, and then direct various distributed units to asynchronously strike at times and in locations where these possible targets might possibly be.
Automating mass death: The imbrication of “big tech” and the military
Big Tech's involvement with state violence, particularly through the development and provision of AI and cloud computing services, has sparked controversy and organized opposition from tech workers and the public. Such pressure even led Google to cancel its contract for the U.S. military's Project Maven, a machine-learning and data fusion initiative designed to scale up threat assessment and target generation, acquisition, and engagement. Despite this public opposition and victory, other major tech companies, including Palantir, Amazon, and Microsoft, remained involved in Project Maven, and the uptake of machine learning and cloud computing in military intelligence and warfighting has nevertheless continued.
This ongoing trend has led to what Hoijtink and Planqué-van Hardeveld (2022: 1) conceptualize as a platformization of the military, a "growing involvement and permeation of the (technomaterial) ML [machine learning] platform as the infrastructure that enables new practices and experimental algorithm development across the military." Crucially, they argue this permeation involves and implicates not only Big Tech companies but "the open-source community that is organized around these platforms" (2022: 1). This community contributes to the collaborative development of ML algorithms as well as the training, continual refinement, and deployment of ML models that can then be adopted for military use via the platform, often through digital marketplaces and catalogues providing various services.
Even when agreements between cloud computing service companies and governments aim to limit the services' application to deadly violence, there are often loopholes. For example, Google's contract under the Israeli state's "Project Nimbus" (through which Google, alongside Amazon Web Services and Microsoft Azure, develops and provides cloud computing infrastructure and services) contains clauses that allow the state to unilaterally modify the terms. In Google's case, the contract includes language permitting Israel to "make any use of any service included in the supplier's catalog of services" (Biddle, 2024: para. 16). It also precludes any "restrictions on the part of the Provider as to the type of system and information that the Clients may migrate to the service, including vital systems of high sensitivity level" (Biddle, 2024: para. 17). Therefore, even if the initial governmental uses for which the services were procured were not kinetic or highly classified, nothing in the agreement prevents them from being put to those purposes in the future.
Examining TensorFlow, Google's open-source modular "platform" for developing, training, and deploying ML models, Hoijtink and Planqué-van Hardeveld (2022: 8) note that the platform includes published "premade ML layers and pretrained models" and APIs, a packaging that "makes ML less resource dependent and labor intensive." Such models are thus available to states, as clients, within the digital marketplaces and catalogues of a given cloud computing platform, and they play an increasingly central role because resource and labor management are essential for military-security applications processing ever-growing "data lakes": constantly expanding repositories of structured, semi-structured, and unstructured data pouring in from various data streams across the state's surveillant assemblage. Amoore (2020: 47) writes:
In the context of a security paradigm that seeks out the uncertain possible future threat, the volume of data in the lake––much of it transactions and social media data––is analyzed with machine learning techniques that promise to yield previously unseen patterns via processes of ‘knowledge discovery’ […] The machine learning algorithms deployed in the contemporary intelligence cloud are generative and experimental; they work to identify possible links, associations and inferences […] The algorithms modified through the patterns in data decide, at least in part, which fallible inferences to surface on the screen of intelligence analyst, drone pilot, or border guard.
Platformization thus further compounds abstraction, via both decontextualizing data analysis and the ever-greater abstraction of human labor from processual practices, reconstituting the war/workspace wherever the veining of the platforms' data streams and operationalizations winds––all the while baking in a preemptively accepted fallibility.
While the specific adoption and use of TensorFlow by militaries is unclear, ML algorithms and models often migrate across “domains”––the ML field’s term for contexts of development and commercial application, including a range of diverse areas such as fraud detection and medical imaging. This migration across various commercial and institutional domains, through academic disciplines, or from the open-source community into military applications has at least been publicly acknowledged, leading Hoijtink and Planqué-van Hardeveld (2022: 15) to conclude that the open-source community and these broader industries are all “to some extent, complicit in the making of military technology.” I extend this further, arguing that there is a broader scope of––and mystifying abstraction from––socially-connected responsibility (Young, 2010) mapping to the increasingly generalized adoption of feed-forward systems built on constant data collection, recursive analysis and operationalization through targeting and execution. These systems are legitimated by possibilistic ethics that extend throughout the data-driven political economy as well as the military-police security apparatus.
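To make this cross-domain migration concrete, consider a minimal and entirely hypothetical sketch of the kind of "transfer learning" workflow the platform documentation describes: a pretrained model published through TensorFlow is pulled off the shelf, its inherited layers are frozen, and a small new classification "head" is trained for a different domain. The model, data shapes, and task here are illustrative assumptions on my part, not a documented military pipeline.

```python
import tensorflow as tf

# Pull a general-purpose pretrained model published via the platform
# (MobileNetV2 trained on ImageNet); its weights arrive already "baked in."
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze inherited layers; their original training data can no longer be isolated

# Attach a small new "head" for a different domain's binary classification task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(new_domain_images, new_domain_labels)  # only the new head is trained
```

The point of the sketch is simply that the pretrained layers, and whatever data and labor produced them, travel with the model wherever it is redeployed; it is this portability that widens the scope of responsibility described above.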
This scope of responsibility stretches to include, even if at a remove, the legions of digital users who "engage" with recursively curated and targeted content, producing a generalized acceptance of possible error, and reinforcing this view as proper if not optimal development and practice from the standpoint of the tech workers and companies developing and deploying such systems. These users, myself and many of this article's readers included, often labor rather directly (if nonconsciously) in the testing of ML algorithms and the training of models that then circulate within the abstracted knowledges and practices of the computing industry and collaborating academic disciplines, potentially winding their way back into automated killing systems. Even seemingly innocuous consumer applications such as computer vision models used in video games can serve in the development of military technologies folded into weapons systems; if one's gaming "platform" shares use and performance data with the game's and the platform's developers, then through one's gameplay one is likely helping to further develop models similar to those increasingly used in the development of real-time, machine-vision-driven target identification and discrimination (Amoore, 2020; Fitzgerald, 2024; Parikka, 2023).
The United States military has publicly acknowledged the use of Project Maven in combat scenarios, including the withdrawal from Afghanistan, where it was used for threat assessment (Manson, 2024). It was also deployed for targeting airstrikes against Ansar Allah ("the Houthis"), a group attacking Israel and commercial shipping in the Red Sea with the aim of pressuring Israel––and by proxy the United States––into a ceasefire in Gaza (Moon, 2024). The greatest scaling of AI for state violence, though, is arguably by Israel itself.
Recent exposés in the Israeli magazine +972 (Abraham, 2023; 2024) and in The Guardian (McKernan and Davies, 2024) reveal how several AI systems are part of Israel's ongoing annihilatory urban bombing campaign in Gaza. One such system, named "Habsora" ("the Gospel" in English)––described as a mass assassination "factory" in the epigraph and the news article it is quoted from––was used to rapidly accelerate target generation for high-payload bombing of buildings or infrastructure, as the volume of Israel's bombing quickly overran the number of known targets (Abraham, 2023). These targets included "tactical targets" related to militant operations, "underground targets" like tunnels, and so-called "power targets" that are part of Palestinian civil society and infrastructure, including "high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices" (Abraham, 2023: para. 18).
A second system, “Lavender,” is an AI-powered database used to identify and dynamically generate a list of alleged Hamas and Palestinian Islamic Jihad operatives who are collectively marked for death, likely processing millions of Palestinians besieged in Gaza as possible targets. As described in the +972 article:
The Lavender software analyzes information collected on most of the 2.3 million residents of the Gaza Strip through a system of mass surveillance, then assesses and ranks the likelihood that each particular person is active in the military wing of Hamas or PIJ. According to sources, the machine gives almost every single person in Gaza a rating from 1 to 100, expressing how likely it is that they are a militant. Lavender learns to identify characteristics of known Hamas and PIJ operatives, whose information was fed to the machine as training data, and then to locate these same characteristics—also called “features”—among the general population, the sources explained. An individual found to have several different incriminating features will reach a high rating, and thus automatically becomes a potential target for assassination (Abraham, 2024: 23).
Other systems used in concert with Lavender include one perversely named “Where’s Daddy?” which was used “to track the targeted individuals and carry out bombings when they had entered their family’s residences.” One source in the same article claimed, “the system is built to look for them in these situations.” While the system supposedly tracked and considered how many civilians would be present and possibly killed, another source countered, “This model was not connected to reality. There was no connection between those who were in the home now, during the war, and those who were listed as living there prior to the war. [On one occasion] we bombed a house without knowing that there were several families inside, hiding together” (Abraham, 2024: 102).
Scholarship on the bureaucracy of the US's drone assassination program, including the working conditions of drone pilots and analysts, highlights the intense pace and the emotional, moral, cognitive, and bureaucratic pressures of processing and operationalizing information to successfully target and kill individuals or groups (Asaro, 2017; Gregory, 2011). Asaro (2017: 292) describes a "fast-paced multimedia and social media environment of intelligence gathering and killing," a phrase which, with the word "killing" removed, could easily describe a job in digital advertising, marketing, media, or tech––accentuating the persistence of capitalist abstraction's scaling. Reports on Israel's use of AI systems describe a similar intensification and massive scaling in these "work" environments. One source for +972 (Abraham, 2024: 40) described the incessant demand to produce: "We were constantly being pressured: 'Bring us more targets.' They really shouted at us." Meanwhile, the intertwined systems created an endless workflow: "Because of the system, the targets never end. You have another 36,000 waiting." At the same time, human analysts became increasingly alienated from the abstracting practices. As one described: "I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time." This computationally accelerated "op-tempo"––a manifestation in practice of the broader cross-sector military-industrial logic of data-driven automated preemption––constitutes a new conception of warfare. As Ford and Hoskins (2022: 97) argue, this approach, what they dub radical war, "values owning time over other variables," orienting the military sociotechnical assemblage towards "deliver[ing] military effect instantaneously."
I want to push this idea of "ownership" of time a step further, connecting it to capitalism's abstracting practices that make ownership possible. The abstraction and ownership of time are necessary for Capital in two ways: first, through abstracting practices that make possible the ownership and accumulation of private property, such as enclosure, primitive accumulation, and the circulation of commodities in the market; and second, through the relations of production that abstract labor, transforming it into an operation generating surplus value to accumulate capital. In the latter, time played a crucial role throughout various stages of capitalism. First, it was necessary to control workers' time beyond what was needed to produce value equivalent to their wage, the "extorted time" which produces surplus value. Then, with the rise of industrial capitalism, time was a key metric in accelerating production and increasing capital accumulation (Thompson, 1967); this, in turn, led to the intensification of industrial discipline in Taylorist management. Finally, late capitalism facilitated a de-differentiation of space-time (Andrejevic, 2007) associated with "the physicosocial concept of work" (Deleuze and Guattari, 1987: 492), fully realized in the emergent data-driven political economy.
The emergence of a “generalized ‘machinic enslavement,’ such that one may furnish surplus-value without doing any work” (Deleuze and Guattari, 1987: 492) does not mean that time no longer matters. In fact, for both post-datafication labor and computationally driven state violence the importance of the temporal dimension cannot be overstated. As Hansen (2015: 189) writes:
It is crucial to point out that contemporary capitalist industries are able to bypass consciousness––and thus to control individual behavior––precisely (and solely) because of their capacity to exploit the massive acceleration in the operationality of culture caused by massive scale data-gathering and predictive analysis. These industries benefit from the maintenance of the crucial temporal gap at the heart of experience: the gap between the operationality of media and the subsequent advent of consciousness.
Hansen (2015) theorizes this temporal dimension through Thrift (2008), and Thrift's expansion on Grusin's (2010: 8) concept of premediation, where media now aim to create an "affectivity of anticipation" such that the "future becomes immanent in the present." Feed-forward systems premediate in that they aim "to mobilize the momentary processes that go to make up much of what counts as human…to produce a certain anticipatory readiness about the world, a rapid perceptual style which can move easily between interchangeable opportunities, thus adding to the sum total of intellect that can be drawn on. This is a style which is congenial to capitalism" (Thrift, 2008: 37–38). Computational feed-forward is manufactured through the incitement to produce (via action), and the extraction and analysis of data, in a recursive operationalization that intensifies the production of data and therefore the abstraction of datafication and its instrumental application. In fact, this domain of the abstract is an emergent "sensible" that can only be mediated indirectly––or "implicated" in Hansen's words––in human experience. I will expand on this below with respect to illusions of human accountability legitimating the deadly practices of abstraction in computational state violence.
Techno-solutionism as rhetorical absolution: Hallucinatory accountability in the “permanent beta war”
Having explored the relationship between the political economy and the "labor" of computationally driven state violence––through examples such as algorithmically targeted drone strikes, the platformization of the military, and the use of AI in generating targets for urban mass bombing campaigns––I now turn to the ideological work that legitimates these deadly "practices of abstraction" and their ethical implications. Much of this ideological work is carried out through the rhetoric used to justify this lethal labor, and I focus on two interrelated factors that compound abstraction: first, how "accountability" is framed in relationship to automation, and second, how the increasing integration of emergent computational technologies into warfare, and the accelerating, Capital-driven pace of their deployment, is accompanied by a rhetorical move that presents the media technology itself as inherently (and, given this rhetoric's connection to incessant productivism, optimally) imperfect. This framing bakes in an acceptance of constant "fixing" and problem-solving, what Ford and Hoskins (2022) call a permanent "beta phase" of war. This permanent beta, however, is not entirely new. I argue that it is tied to a longer history of media technology under capitalism and its racial dimensions.
Despite the fact that computational systems often eclipse human consciousness and perception, the companies developing, promoting, and deploying these products––and the states using them for warfare––frequently invoke anthropocentric illusions (Kittler, 1986; Packer and Reeves, 2020) of human oversight. These illusions hinge on ex post facto retributive accountability should an automated system's "hallucination"––the term for an AI model's generation of false or misleading information––cause harm. In response to the AI warfare exposé, the Israeli military stated that "Information systems are merely tools for analysts in the target identification process" (McKernan and Davies, 2024: para. 18). This claim is contradicted by multiple accounts from analysts, who describe their involvement in––and experience of––the workflow of AI-driven mass killing. As one analyst put it, "when it comes to a junior militant, you don't want to invest manpower and time in it" (McKernan and Davies, 2024: para. 31). Another source in the exposé, quoted earlier, echoed this statement, saying: "I had zero added value as a human, apart from being a stamp of approval" (McKernan and Davies, 2024: para. 6). Because feed-forward systems work explicitly beyond the scope of human sensory perception or conscious thought, operating on the future from a vanishingly immediate present, they add another layer of inaccessibility for human overseers, who can only be indirectly "implicated" in the operation and are therefore further alienated from the very practices of abstraction they labor to deploy.
As I have argued elsewhere (Fitzgerald, 2024)––specifically against the rapid uptake and industrial promotion of synthetic data, but also applicable to recursively evolving ML-based automation more generally––"domain transfer" and deployment of pretrained models bring with them data previously baked or "frozen" into the process. While models can be tweaked (and often continuously tweak themselves), the ability to isolate problematic data and "unbake the cake," so to speak, does not really exist; a brand-new model would need to be trained. Amoore (2020: 126) has described this processual fallibility as a generative "madness" central to cloud-based algorithmic war, with random forest algorithms "increasingly now applied to a mobile data stream (and not only a static dataset)––for example, in the video feed of an unmanned aerial vehicle (UAV)––[to discover] similar behavioral patterns…without human input." It is likely that random forest algorithms, or similar, are involved in both Project Maven and Israel's AI-based targeting. Amoore (2020: 126) argues that "though random forest algorithms could be said to generate a kind of madness of false positives that become actionable as a kill list… in fact this madness is useful to the algorithm." As such, the use of these systems entails an implicit acceptance of their fallibility. As I argued regarding the use of synthetic data, this is not so much about trust in the academic-industry parlance of "trust and safety," but rather a type of faith (Fitzgerald, 2024). These systems' prospective ethics, like the fallible and at times hallucinatory outputs of AI, accept the possibility, if not the inevitability, of failure as the cost of doing business.
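A back-of-the-envelope illustration of why such "madness" is structural rather than incidental: when a classifier is applied to an entire surveilled population in which actual members of the targeted class are rare, even seemingly strong performance figures yield flag lists dominated by false positives. The rates below are assumptions chosen purely for illustration; only the population size echoes the +972 reporting, and none of the other numbers are drawn from any source.

```python
# Hypothetical base-rate arithmetic; every rate here is an assumed figure for illustration.
population = 2_300_000       # people processed by the surveillance system
base_rate = 0.005            # assume 0.5% actually belong to the targeted class
recall = 0.90                # assume the classifier flags 90% of actual members
false_positive_rate = 0.01   # assume it also flags 1% of everyone else

true_members = population * base_rate                                  # 11,500
true_positives = recall * true_members                                 # 10,350
false_positives = false_positive_rate * (population - true_members)    # 22,885

share_wrong = false_positives / (true_positives + false_positives)
print(f"{share_wrong:.0%} of flagged people are not members")          # ~69%
```

Under these assumed figures, roughly two out of every three people surfaced as "targets" would be misidentified, yet each of them becomes "actionable" in precisely the sense Amoore describes.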
The intertwined legitimating rhetorics of technical accuracy and illusory human oversight and accountability often contradict the permanent beta techno-solutionist rhetoric used to defend these systems, even though the two sometimes invoke each other. For example, a TensorFlow tutorial analyzed by Hoijtink and Planqué-van Hardeveld (2022: 10) offers a technical solution to the technical limits of the system, contingent on the machine successfully re-integrating the human overseer into the operation: "'[t]he deep classifier should be aware of its own limitations and when it should hand over control to the human experts,' which is based on a quantification of the model's uncertainty." The fallibility and unknowability of these systems––and their current limitations––are not seen as barriers to their use for state killing. In fact, these apparent liabilities can be leveraged as a rhetorical asset.
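What the tutorial's promised "handover" amounts to in practice can be gestured at with a minimal sketch, assuming a generic classifier that defers to a human reviewer whenever its output confidence falls below a threshold; the threshold, scores, and function names here are hypothetical, not drawn from TensorFlow documentation or from any military system.

```python
import numpy as np

def triage(class_probabilities, threshold=0.9):
    """Split cases into those the model 'decides' and those handed to human experts."""
    confidence = class_probabilities.max(axis=1)       # the model's own certainty estimate
    automated = np.where(confidence >= threshold)[0]   # acted on without further review
    deferred = np.where(confidence < threshold)[0]     # surfaced to the human overseer
    return automated, deferred

# Illustrative scores only: three "confident" outputs and one "uncertain" one.
scores = np.array([[0.97, 0.03], [0.99, 0.01], [0.55, 0.45], [0.92, 0.08]])
acted_on, handed_over = triage(scores)
```

Note that the threshold, and thus the boundary of the human's involvement, is itself a parameter set inside the system: the "expert" only ever sees what the classifier's own uncertainty estimate chooses to surface, which is precisely the thinness of oversight at issue here.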
After the release of "the Drone Papers," Georgetown University security studies professor Christine Fair appeared on a panel on Al Jazeera English to defend the drone assassination program. Fair argued, "We don't actually know who we were targeting, and we don't actually know who was killed, and I'm going to argue that this actually isn't knowable with the tools that have been used thus far" (Al Jazeera English, 2015). While her comment might seem to undermine her argument, it reflects the "permanent beta" logic of algorithmic warfare. This logic permeates the interlocking discourses of industry, academia, and the military as well as the media talking points of their spokespeople. Since the identities of those killed were not technically knowable, they are treated as irrelevant, with the implication that the practice should continue until the "tools" someday allow us to know, at which point the question can be revisited. However, as evidenced by reporting on the Israeli AI targeting systems, even a decade later, these tools remain deficient––and that deficiency is still considered irrelevant to the humming of the computational mass death factory: "The sources said they did not know how many civilians were actually killed in each strike, and for the low-ranking suspected Hamas and PIJ operatives marked by AI, they did not even know whether the target himself was killed" (Abraham, 2024: 107).
Contradictorily framing technology as simultaneously the problem and the solution, justifying its use while absolving its harms, is nothing new for emergent media technologies. Richard Dyer (1997) observed a similar logic in the pre-digital culture industry, particularly in photography and cinema, and its techno-solutionist absolution of (and its role in reproducing) white supremacy. Dyer analyzed professional publications on photography and film and noted that, since whiteness was the default in the development of these visual media technologies and their professional use, photographing or filming nonwhite faces was seen as a technical problem to be solved. "Accurate flesh tones" became "the key issue in innovation" (Dyer, 1997: 94). Moreover, the appearance of non-white human subjects within professional publications was often used "to illustrate a general technical point" (Dyer, 1997: 94). At the same time, these professions took up an eliminationist approach to shadows, a move subordinating the dark while reinforcing a "culture of light" that was associated with white people, their perceived Godliness, and therefore their supremacy (Dyer, 1997: 96).
While the examples Dyer analyzed may seem distant from the realm of algorithmic warfare, we can trace a historical line of racializing and subordinating media technologies, industries, and labor—along with the rhetorical absolution of their harms through techno-solutionist "innovation." This lineage stretches from the "culture of light" in visual media production, to orientalist Hollywood portrayals of the mass killing of Arabs and Muslims (Palestine Diary, 2012; Konzett, 2004), and now to today's digital epidermalization (Browne, 2015). This new epidermalization no longer "reads race on the skin" as Fanon (1967) described, but instead abductively racializes through the invisual operations and mediating racializing categories of "criminal" or "terrorist" (Khan, 2021). Certainly for any resident of Gaza or member of the targeted class––those who are constantly "haunted by the specter of aerial monitoring and bombardment" due to their geographic location and patterned associations (Parks, 2016: 231)––one's possible computationally driven death, potentially along with members of one's family and one's neighbors, can be abstracted as a "technical" point for further refining ML models, or serve as an anecdote for targeting analysts, news stories, and even academic articles like this one.
Conclusion
Capitalism’s militarization of the economy dialectically produced the “economization” of the military and state violence through the enclosure, extraction, and abstraction of data, which are continually operationalized, intensified, and accelerated. This marks an inversion of the traditional role of Capital within the military-industrial complex, where “Military Keynesianism” (Keynes, 1933) promoted the idea that public investment in state violence could boost the industrial economy, despite its immediate wastefulness of both life and public wealth. While this logic persists, the inversion unfolds through the layering of computing and datafied platform industries––and their orientation towards “optimization” and acceleration––atop the sedimented layers of post-industrial labor in the military’s still-sprawling bureaucracies, and atop the military-industrial complex’s longstanding Military Keynesianism focused on designing, producing, and selling “weapons platforms” and ordnance.
Marx used the term “inversion” to describe how capitalism converted the wage laborer’s supposed freedom and autonomy into wage slavery, as well as the inversion between the concrete and the abstract (and, in the same move, form and content) that leads to the mystification of capitalism’s mechanics (Best, 2010). In computationally driven state violence, these dialectical inversions, and the mystifications they entail, are legible in the rhetorical turns analyzed above: redefining “accuracy” to mean prospectively accepting the consequences of speculative and black-boxed processes, thus preemptively framing the technologies’ inevitable failures as merely a development phase that will never be completed, because a new version can always be “pushed.” Every error is already accepted and, if identified, merely serves as an opportunity to “find” and “fix” the bug, a logic that permeates the productivist mindset of data solutionism more broadly.
With the emergent and (at the time of writing) ongoing deployment of these systems in Israel’s military campaign in Gaza, the factory-like production of mass death in this stage of capitalism places all of us laboring in the digital enclosure in a position of responsibility relative to these unfolding atrocities. At the same time, the data-driven political economy of platformization and mobile media has inverted the abstraction of the atrocities. Through both Israeli and Palestinian self-documentation on social media, Israel’s actions have become some of the most well-documented atrocities in history.
As I have argued here, due to the increasingly dominant datafied layer of capitalism, the nature of machine learning and the computing industry, and the breadth of feed-forward systems that wind and “platformize” both everyday life and state-rendered death, we are all, in some small part, responsible for putting the target on the tens of thousands of innocent people killed by computational weapons systems. Once the trajectories of abstraction are traced and demystified, this responsibility can be consciously apprehended, and collective action must follow to ensure that the companies whose computational services we use no longer make a killing helping states more efficiently wage annihilatory and indiscriminate wars.
Yet, identifying “what is to be done” is daunting. The Tech Worker Movement offers one path. Hundreds of actions––ranging from protests and persuasion campaigns to boycotts, strikes, and sit-ins or occupations––were taken in the late 2010s and early 2020s (Boag et al., 2022). The No Tech for Apartheid campaign, focused on Israel’s use of computational services to maintain its occupation of Palestine, takes inspiration from earlier efforts like the Polaroid Revolutionary Workers’ Movement (Haymarket Books, 2023). Formed in 1970 by Black Polaroid employees to agitate against the company’s ties to Apartheid in South Africa––including the use of Polaroid photo ID cameras to facilitate racial segregation––the movement won after years of pressure and boycott campaigns. Polaroid, the Apple of its day, became the first American company to fully divest from South Africa, precipitating the 1980s “exoduses of American businesses from South Africa [that] forever changed apartheid’s legitimacy” (Morgan, 2006: 522).
Modern tech organizing has also won victories, including restrictions on selling facial recognition software to police and the temporary slowing of Project Maven noted above. Given the foundational abstraction and circulation of data and models examined in this article, however, such victories are easy to skirt. While one path is to escalate, layoffs and retributive firings have hindered worker power internally. Outside of Big Tech, tech users have applied pressure through boycott or deletion campaigns, such as #DeleteUber. But what about our responsibility as producers, as “permanent beta” testers for a globe-spanning death factory?
Likely, we must embrace and realize the decelerationist politics Gavin Mueller (2021) advocates, marshaling dissatisfaction with automation and exploitation against habituated convenience and our momentary desires: “throwing a wrench” in how globe-spanning datafication is able to abstract these desires, stoke them, and make them “productive.” This will require first constructing and growing non-commercial communication networks to preserve social media’s connective and mobilizing power, before making a concerted––and collective––push to abandon the old guard. In the current moment it may seem fanciful to imagine the masses abandoning their “scrolling” habit or demanding that their employer cancel its contracts with Amazon Web Services. Yet at one time the idea of major corporations divesting from Apartheid South Africa seemed equally implausible. We have also just recently joined billions of others in radically altering our lives, if temporarily: during the COVID-19 pandemic, many took precautions not just to protect themselves or because they were told to, but because they feared invisibly contributing to the death of another. The embers of that inchoate solidarity may still burn.
References
Abraham, Y. (2023) ‘“A mass assassination factory”: Inside Israel’s calculated bombing of Gaza’, +972, 30 November. Available at: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/ (Accessed: 5 December 2024).
Abraham, Y. (2024) ‘“Lavender”: The AI machine directing Israel’s bombing spree in Gaza’, +972, 3 April. Available at: https://www.972mag.com/lavender-ai-israeli-army-gaza/ (Accessed: 23 July 2024).
Al Jazeera English (2015) ‘UpFront – Do drone strikes create more terrorists than they kill?’, YouTube. Available at: https://www.youtube.com/watch?v=eXXPWbFyhK0 (Accessed: 24 June 2025).
Alliez, É. and M. Lazzarato (2018) Wars and Capital, trans. A. Hodges. South Pasadena: Semiotext(e).
Amoore, L. (2013) The Politics of Possibility: Risk and Security Beyond Probability. Durham, NC: Duke University Press.
Amoore, L. (2020) Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham, NC: Duke University Press.
Andrejevic, M. (2007) iSpy: Surveillance and Power in the Interactive Era. Lawrence: University Press of Kansas.
Andrejevic, M. (2015) ‘The Droning of Experience’, The Fibreculture Journal 187(25): 202-217.
Andrejevic, M. (2019) Automated Media. New York: Routledge.
Asaro, P. (2017) ‘The Labor of Surveillance and Bureaucratized Killing: New Subjectivities of Military Drone Operators’, in L. Parks & C. Kaplan (eds.) Life in the Age of Drone Warfare. Durham, NC: Duke University Press, pp. 282-314.
Benjamin, W. (1982) The Arcades Project, trans. H. Eiland & K. McLaughlin. Cambridge, MA and London: Harvard University Press.
Best, B. (2010) Marx and the Dynamic of the Capital Formation: An Aesthetics of Political Economy. New York: Palgrave Macmillan.
Biddle, S. (2024) ‘Documents Contradict Google’s Claims About Its Project Nimbus Contract With Israel’, The Intercept, 2 December. Available at: https://theintercept.com/2024/12/02/google-project-nimbus-ai-israel/ (Accessed: 3 December 2024).
Boag, W., H. Suresh, B. Lepe and C. D’Ignazio (2022) ‘Tech worker organizing for power and accountability’, Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency: 452–463.
Bowker, G. and S. Star (1999) Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.
Browne, S. (2015) Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.
Chamayou, G. (2015) A Theory of the Drone. New York: New Press.
Cheney-Lippold, J. (2017) We Are Data: Algorithms and the Making of Our Digital Selves. New York: NYU Press.
Cole, D. (2014) ‘We Kill People Based on Metadata’, New York Review of Books, 10 May. Available at: http://www.nybooks.com/daily/2014/05/10/we-kill-people-based-metadata/ (Accessed: 22 November 2024).
Couldry, N. and U.A. Mejias (2018) ‘Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject’, Television & New Media 20(4): 336–349.
Couldry, N. and U.A. Mejias (2019) The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford: Stanford University Press.
Currier, C. and P. Maas (2015) ‘Firing Blind: Critical intelligence failures and the limits of drone technology’, The Intercept, 15 October. Available at: https://theintercept.com/drone-papers/firing-blind/ (Accessed: 22 November 2024).
Deleuze, G. (1992) ‘Postscript on the Societies of Control’, October 59 (Winter 1992): 3-7.
Deleuze, G. and F. Guattari (1987) A Thousand Plateaus: Capitalism and Schizophrenia, trans. B. Massumi. Minneapolis: University of Minnesota Press.
Dyer, R. (1997) White. London and New York: Routledge.
Fanon, F. (1967) Black Skin, White Masks, trans. C. Markmann. New York: Grove Press.
Fitzgerald, A.A. (2019) ‘“Mapping” Media Spaces: Smoothness, Striation, and the Expropriation of Desire in American Journalism from Postindustrial to Datafied Capitalism’, Communication Theory 29(4): 401–420.
Fitzgerald, A.A. (2024) ‘Why Synthetic Data Can Never Be Ethical: A Lesson from Media Ethics’, Surveillance & Society 22(4): 477–482.
Ford, M. and A. Hoskins (2022) Radical War: Data, Attention and Control in the Twenty-First Century. Oxford: Oxford University Press.
Foucault, M. (2003) Society Must Be Defended: Lectures at the Collège de France, 1975-76, trans. D. Macey. London: Picador.
Foucault, M. (2007) Security, Territory, Population: Lectures at the Collège de France, 1977-78, trans. G. Burchell. New York: Palgrave Macmillan.
Foucault, M. (2008) The Birth of Biopolitics: Lectures at the Collège de France, 1978-1979, trans. G. Burchell. New York: Palgrave Macmillan.
Galison, P. (1994) ‘The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision’, Critical Inquiry 21(1): 228–266.
Gregory, D. (2011) ‘From a view to a kill: Drones and late modern war’, Theory, Culture & Society 28(7–8): 188–215.
Grusin, R. (2010) Premediation: Affect and Mediality after 9/11. New York: Palgrave Macmillan.
Gusterson, H. (2016) Drone: Remote Control Warfare. Cambridge, MA: MIT Press.
Hacking, I. (1990) The Taming of Chance. Cambridge: Cambridge University Press.
Hacking, I. (2006) The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. 2nd ed. Cambridge: Cambridge University Press.
Hacking, I. (2016) ‘Biopower and the avalanche of printed numbers’, in R. Vernon & N. Morar (eds.) Biopower: Foucault and Beyond. Chicago: University of Chicago Press, pp. 65–81.
Haggerty, K.D. and R.V. Ericson (2003) ‘The Surveillant Assemblage’, The British Journal of Sociology 51(4): 605–622.
Hansen, M.B.N. (2015) Feed-Forward: On the Future of Twenty-First-Century Media. Chicago: University of Chicago Press.
Harcourt, B.E. (2015) Exposed: Desire and Disobedience in the Digital Age. Cambridge, MA: Harvard University Press.
Hayles, N.K. (2017) Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.
Hiland, A. (2019) Presidential Power, Rhetoric, and the Terror Wars: The Sovereign Presidency. Lanham, MD: Lexington Books.
Hoijtink, M. and A. Planqué-van Hardeveld (2022) ‘Machine Learning and the Platformization of the Military: A Study of Google’s Machine Learning Platform TensorFlow’, International Political Sociology 16(2): 1-19.
Horkheimer, M. and T. Adorno (2002) The Dialectic of Enlightenment, trans. E. Jephcott. Stanford: Stanford University Press.
Jacobsen, B.N. (2023) ‘Machine learning and the politics of synthetic data’, Big Data & Society 10(1): 1-12.
Jacobsen, B.N. (2024) ‘The Logic of the Synthetic Supplement in Algorithmic Societies’, Theory, Culture & Society 41(4): 41-56.
Keynes, J.M. (1933) ‘An open letter to President Roosevelt’, The New York Times, 16 November.
Khan, R.M. (2021) ‘Race, coloniality and the post 9/11 counter-discourse: Critical Terrorism Studies and the reproduction of the Islam-Terrorism discourse’, Critical Studies on Terrorism 14(4): 498–501.
Kittler, F. (1986) ‘A Discourse on Discourse’, Stanford Literature Review 3(1): 157–166.
Koerner, B.I. (2014) The Skies Belong to Us: Love and Terror in the Golden Age of Hijacking. New York: Crown.
Konzett, D. (2004) ‘War and Orientalism in Hollywood combat film’, Quarterly Review of Film and Video 21(4): 327–338.
Mackenzie, A. and A. Munster (2019) ‘Platform seeing: Image ensembles and their invisualities’, Theory, Culture & Society 36(5): 3–22.
Manson, K. (2024) ‘AI Warfare Is Already Here’, Bloomberg.com. Available at: https://www.bloomberg.com/features/2024-ai-warfare-project-maven/ (Accessed: 3 December 2024).
Martin, R. (2003) The Organizational Complex: Architecture, Media and Corporate Space. Cambridge, MA: MIT Press.
Marx, K. (1844) ‘Estranged Labour’, in Economic and Philosophical Manuscripts. Available at: https://www.marxists.org/archive/marx/works/1844/manuscripts/labour.htm (Accessed: 26 February 2024).
Marx, K. (1992) Capital, Volume I: The Process of Production of Capital, trans. B. Fowkes. New York: Penguin.
Marx, K. (1993) Grundrisse: Foundations of the Critique of Political Economy, trans. M. Nicolaus. London: Penguin.
Massumi, B. (2015) Ontopower: War, Powers, and the State of Perception. Durham, NC: Duke University Press.
McKernan, B. and H. Davies (2024) ‘“The machine did it coldly”: Israel used AI to identify 37,000 Hamas targets’, The Guardian, 3 April. Available at: https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes (Accessed: 9 December 2024).
Moon, M. (2024) ‘The Pentagon used Project Maven-developed AI to identify air strike targets’, Engadget. Available at: https://www.engadget.com/the-pentagon-used-project-maven-developed-ai-to-identify-air-strike-targets-103940709.html (Accessed: 2 December 2024).
Morgan, E.J. (2006) ‘The world is watching: Polaroid and South Africa’, Enterprise & Society 7(3): 520–549.
Mueller, G. (2021) Breaking Things at Work: The Luddites Are Right About Why You Hate Your Job. New York: Verso Books.
Haymarket Books (2023) ‘#NoTechForApartheid’, YouTube. Available at: https://www.youtube.com/watch?v=QVrjA-5Ak0U (Accessed: 8 May 2025).
Packer, J. and J. Reeves (2020) Killer Apps: War, Media, Machine. Durham, NC: Duke University Press.
Palestine Diary (2012) ‘Edward Said on Orientalism’, YouTube. Available at: https://www.youtube.com/watch?v=fVC8EYd_Z_g (Accessed: 14 December 2024).
Parikka, J. (2023) Operational Images: From the Visual to the Invisual. Minneapolis: University of Minnesota Press.
Parks, L. (2014) ‘Drones, Infrared Imagery, and Body Heat’, International Journal of Communication 8: 2642–2648.
Parks, L. (2016) ‘Drones, Vertical Mediation, and the Targeted Class’, Feminist Studies 42(1): 227-235.
Poovey, M. (1998) A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society. Chicago: University of Chicago Press.
Porter, T. (1995) Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press.
Robinson, C.J. (2021) Black Marxism: The Making of the Black Radical Tradition. 3rd rev. ed. Chapel Hill, NC: UNC Press Books.
Scahill, J. (2015) ‘The assassination complex: the whistleblower who leaked the drone papers believes the public is entitled to know how people are placed on kill lists and assassinated on orders from the president’, The Intercept, 15 October. Available at: https://theintercept.com/drone-papers/the-assassination-complex/ (Accessed: 28 June 2025).
Scahill, J. (2016) The Assassination Complex: Inside the Government’s Secret Drone Warfare Program. New York: Simon & Schuster.
Segura, M.S. and S. Waisbord (2019) ‘Between data capitalism and data citizenship’, Television & New Media 20(4): 412–419.
Shaw, I.G. (2016) Predator Empire: Drone Warfare and Full Spectrum Dominance. Minneapolis: University of Minnesota Press.
Srnicek, N. (2016) Platform Capitalism. Cambridge: Polity.
Suchman, L. (2020) ‘Algorithmic warfare and the reinvention of accuracy’, Critical Studies on Security 8(2): 175–187.
The Intercept (2015) The Drone Papers. Available at: https://theintercept.com/drone-papers/ (Accessed: 28 June 2025).
Thompson, E.P. (1967) ‘Time, Work-Discipline, and Industrial Capitalism’, Past & Present 38(1): 56–97.
Thrift, N. (2008) Non-Representational Theory: Space, Politics, Affect. London: Routledge.
Toscano, A. (2019) ‘The Violence of Abstraction’, Public Seminar, 9 May. Available at: https://publicseminar.org/essays/the-violence-of-abstraction/ (Accessed: 19 November 2024).
Toscano, A. (2023) Late Fascism: Race, Capitalism and the Politics of Crisis. New York: Verso Books.
Werbin, K.C. (2009) ‘Fear and no-fly listing in Canada: The biopolitics of the “war on terror”’, Canadian Journal of Communication 34(4): 613–634.
Werbin, K.C. (2011) ‘Spookipedia: intelligence, social media and biopolitics’, Media, Culture & Society 33(8): 1254–1265.
Wiener, N. (1980) The Human Use of Human Beings. Cambridge, MA: MIT Press.
Young, I.M. (2010) Responsibility for Justice. Oxford: Oxford University Press.
Zuboff, S. (2015) ‘Big other: surveillance capitalism and the prospects of an information civilization’, Journal of Information Technology 30(1): 75–89.
Notes
[1] A parallel program was run by the Central Intelligence Agency.
[2] Toscano’s more recent work, Late Fascism, provides an excellent overview of the Frankfurt School’s approach to abstraction and fascism, as well as those of other Marxists like Alfred Sohn-Rethel and Norbert Guterman & Henri Lefebvre (Toscano, 2023: 75–94).
[3] In other work I conceptualized this political economy as datafied capitalism (Fitzgerald, 2019).
Andrew Fitzgerald is Assistant Professor of Communication & Media at Rensselaer Polytechnic Institute. His research focuses on mediatized violence and its circulation and reception in platform-centric mobile media environments, as well as the threat of far-right authoritarian subjectivization and movements. He is the founding director of the Mediatization Lab at RPI. The Lab studies the interweaving of data-driven social media platforms and mobile devices, how they shape users’ construction of their digital lives and realities, and how they shape our broader political climate in turn. Fitzgerald’s research has appeared in Journal of Communication, Communication Theory, and Surveillance & Society, among other venues.
Email: fitzga2@rpi.edu
Conflicts of interest
None declared
Funding
None declared
Article history
Article submitted: 10/8/2024
Date of original decision: 30/9/2024
Revised article submitted: 4/2/2025
Article accepted: 18/4/2025