Review: Matteo Pasquinelli’s The Eye of the Master. Reviewed by Alex Levant

Review of The Eye of the Master: A Social History of Artificial Intelligence by Matteo Pasquinelli

Published by Verso, 272 pages. Available 10 October 2023. ISBN 9781788730068: https://www.versobooks.com/products/735-the-eye-of-the-master

Reviewed by Alex Levant

 

Matteo Pasquinelli’s The Eye of the Master: A Social History of Artificial Intelligence offers a compelling new perspective on the nature of artificial intelligence (AI). The author situates the recent development of AI within a longer history of attempts to automate work that reaches back to the Industrial Age, noting that current AI systems have a pedigree far older than modern computers. This history, the author claims, extends into the distant past, where algorithms were already embodied in ancient rituals for transmitting knowledge and skills. By situating AI in historical perspective, Pasquinelli urges a substantive rethinking of the nature of artificial intelligence and calls for a fresh approach to meeting the challenges and recognizing the opportunities posed by this technology.

Pasquinelli’s central thesis is that AI does not emerge by replicating human intelligence, but from encoding human activities into repeatable procedures – algorithms. In other words, the true basis of AI appears not as an attempt to create a thinking machine, but as the latest iteration of the algorithmic modeling of social practices. Pasquinelli asks, “What is AI? A dominant view describes it as the quest ‘to solve intelligence’ – a solution supposedly to be found in the secret logic of the mind or in the deep physiology of the brain, such as in its complex neural networks. In this book I argue, to the contrary, that the inner code of AI is constituted not by the imitation of biological intelligence but by the intelligence of labour and social relations” (p. 2). He thus reframes our understanding of AI: no longer primarily a technological entity, it appears as a reflection of human practices and social forces.

Pasquinelli’s work stands as a unique intervention in the field of critical AI studies, as it shifts the focus of understanding AI from technological advancements to the role of human actors and social forces in shaping AI systems, underlining the importance of understanding AI within its socio-historical contexts.

Pasquinelli demonstrates that algorithmic thinking is a fundamental cultural technique that has existed across human civilizations. The history of the term “algorithm” is traced back to the introduction of Hindu-Arabic numerals and calculation techniques in medieval Europe. It comes from the Latinisation of the name of the 9th-century Persian mathematician al-Khwarizmi, who wrote early textbooks on calculation with Hindu-Arabic numerals. In medieval Europe, “algorismus” referred to techniques for calculating with these numerals, which were more efficient than Roman numerals and made more complex mathematical operations possible. Initially, however, algorithms were not formalized; they were procedures described in words, and formal mathematical notation developed only later to represent them more abstractly. Pasquinelli pushes the history further back, claiming that algorithms originated in ancient rituals and practices that involved step-by-step instructions. He cites the Hindu Agnicayana ritual for building symbolic altars as an early example. These embodied algorithms were a way to transmit knowledge and skills. Hence, the intelligence of AI, Pasquinelli claims, originates not from its technological abilities to “think”, but from the knowledge expressed in collective human behaviors encoded into algorithms.

Pasquinelli urges us to consider the history of algorithmic thinking and AI in relation to socio-economic changes, rather than solely through the lens of technological advancements. According to this “labour theory of automation”, mathematical and computational abstractions like algorithms are not transcendent logical forms but are deeply embedded in material social practices. For instance, Pasquinelli claims that early calculation tools and rituals emerged as embodied forms of algorithmic thinking, and that mathematical concepts like numbers emerged gradually from practical needs like distributing resources, not as abstract Platonic ideals. Counting and basic arithmetic grew out of the rhythms and songs used to coordinate manual labour. Pasquinelli situates mathematics in material experience, where counting predates numbers, and indeed, numbers emerge as abstractions of the practice of counting.

These are “real abstractions”, as they are embodied in actual material practices, rather than being only ideas. Pasquinelli draws on Alfred Sohn-Rethel, who claimed that “real abstractions”, like money for instance, arise from real material practices of exchange in society, not just intellectual reflection. They have a reality of their own and shape human activity, beyond their existence as concepts. Similarly, one can think of algorithms as real abstractions: they embody a set of instructions that guide human activity.

From this perspective, labour (as socially meaningful activity) appears as the first algorithm, and the development of algorithmic thinking is closely tied to economic and societal needs. The mechanization and automation of algorithms is linked to broader economic transformations, like the rise of early mercantilism and industrial capitalism. The implementation of algorithms in machines is connected to the need to accelerate communication, automate mental labour, and manage the economy. Pasquinelli argues that the development of AI has historically followed and replicated the logic of the division of labour. He notes that since the industrial revolution, machines and automation have been designed by studying and imitating human labour processes.

He cites various examples, such as Charles Babbage’s design of computing machines in the 19th century, to illustrate this process of encoding human work patterns into machine processes. In early 19th-century England, “computers” were workers (mostly women) who performed tedious calculations by hand. Babbage aimed to mechanize this mental labour with calculating engines powered by steam. His Difference Engine (1822) automated an algorithm to compute logarithmic tables. The Analytical Engine (1834) was conceived as a general-purpose, programmable computer. These machines embodied specific practices and specific ways of carrying them out. Babbage initiated the mechanization of algorithmic mental labour, and his principles shaped how computation was conceived in relation to industry, labour, and society. Pasquinelli argues this approach continues today with AI systems imitating collective human behaviour, essentially automating an abstracted division of labour on a societal level.
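The principle behind the Difference Engine – replacing the computers’ skilled multiplications with rote, mechanizable addition – is easy to see in miniature. The sketch below is my own illustration, not code from the book, and tabulates a polynomial by the method of finite differences (logarithmic tables were in practice built from local polynomial approximations of this kind):

```python
def difference_table(f, start, step, order, n):
    """Tabulate a degree-`order` polynomial f by finite differences.

    Only the first `order + 1` values are computed directly; every
    subsequent entry is produced by additions alone, which is what
    made the procedure mechanizable with geared columns.
    """
    # Seed values f(x0), f(x0 + h), ..., f(x0 + order*h)
    xs = [start + i * step for i in range(order + 1)]
    diffs = [[f(x) for x in xs]]
    # Forward differences of successive orders
    for _ in range(order):
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    # The leading entry of each column plays the role of one register
    regs = [col[0] for col in diffs]
    table = []
    for _ in range(n):
        table.append(regs[0])
        # One machine cycle: each register absorbs the one above it
        for j in range(order):
            regs[j] += regs[j + 1]
    return table

# A table of squares: seeded with 0, 1, 4, then extended by pure addition
squares = difference_table(lambda x: x * x, 0, 1, 2, 6)  # [0, 1, 4, 9, 16, 25]
```

The division of labour is visible in the code itself: the “mental” work of multiplication is confined to the seed row, while the engine’s cycle is reduced to repetitive addition.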

The second half of the book takes us into the Information Age and a more recognizable world of computer algorithms. Pasquinelli recounts the recent history of artificial intelligence demonstrating that this logic continues with machine learning, or what he calls the automation of automation. He challenges traditional perspectives that perceive AI as an imitation of biological intelligence. Instead, his labour theory of automation and machine learning proposes that AI’s essence is not to replicate human cognition, but to codify and automate social practices and labour relations. This theory posits that AI systems are essentially embodiments of human knowledge, skills, and the division of labour.

In addition to ancient algorithms and innovations in the Industrial Age, Pasquinelli suggests that this logic continues with current AI systems in the Information Age. For instance, like Babbage’s computing machines, he notes that the first artificial neural networks were not an attempt to model the brain, but to automate the “labour of perception” – classifying and interpreting visual data by learning associations. He discusses the perceptron, the first artificial neural network, invented in 1957 by the psychologist Frank Rosenblatt. It was designed for visual pattern recognition tasks like identifying ships in radar images. As a classifier algorithm, the perceptron reduced recognition to optimizing a decision boundary in a multidimensional vector space. This technique became central to modern machine learning.
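Rosenblatt’s learning rule is simple enough to sketch: each misclassified example nudges the decision boundary toward it until the classes are separated. A minimal illustration (my own, not code from the book):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn a linear decision boundary w.x + b = 0.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Misclassified point: shift the hyperplane toward it
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b > 0, 1, -1)

# A toy "labour of perception": separate points above and below x1 = x2
X = np.array([[2.0, 1.0], [3.0, 1.5], [1.0, 2.0], [0.5, 3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
```

Nothing in the procedure models a brain: the “intelligence” is a statistical summary of the labelled examples – which is to say, of prior human acts of classification.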

The book sees current AI based on deep neural networks as the automation of what Marx called the “general intellect” – a collective form of knowledge that arises from social cooperation. It argues this represents a kind of monopoly power over knowledge itself. It takes the reader on a detailed tour of Marx’s concept of the general intellect embodied in the collective worker, and his Fragment on Machines, noting how social knowledge becomes embodied in the machine, where the machine imposes new metrics on labour, assuming the role of a supervisor, like the factory master’s “eye”. This theoretical perspective takes Pasquinelli’s analysis in a new direction and makes an important contribution to the field of critical AI studies.

Pasquinelli asserts that while critical AI studies rightfully raises concerns about the impacts of AI, it often overlooks the collective labour and experiences encoded within AI systems. This framing paints AI primarily as a tool of top-down control. For instance, James Steinhoff’s Automation and Autonomy (2021), which also engages with Operaismo and post-Operaismo, argues that the purpose of AI is to control workers and to extract surplus value. It offers a convincing critique of the argument that AI results in new forms of autonomy for workers. Pasquinelli does not challenge this view; however, his labour theory of machine learning flips the script by revealing AI’s roots in collective labour and knowledge. Rather than simply masters imposing algorithms, AI emanates from the accumulated skills and intelligence of workers. This reoriented perspective opens space for new configurations of knowledge production and invention with greater autonomy from capitalist control. It points towards workers reclaiming agency over the fruits of their own collective knowledge and labour. The text suggests this reclaiming could enable new participatory forms of knowledge making and invention that break from the capitalist focus on profit and control.

Moreover, in contrast to the current focus on the problem of ethics in AI systems, Pasquinelli’s analysis calls for something significantly more than ethical AI. Contemporary studies tend to focus on the problem of alignment: the concern that AI systems may behave in ways that do not align with human values. For instance, see Christian (2019), Vallor (2016), and McStay (2018), as well as Stark (2023) for a review of recent literature on the ethics of AI. However, Pasquinelli argues that attempts to make technology more ethical by hard-coding rules or constraints are insufficient, because they do not change the underlying political and economic functions that technology serves. As an alternative, he points to past cooperative movements that built new technologies situated within alternative social relations. For example, workers’ cooperatives that collectively owned and managed factories and machinery were grounded in ideals of mutual aid and solidarity rather than profit maximization. His view is that to really change the political effects of a technology like AI, one must transform the social and economic relations in which it is embedded – property rights, the wage system, ownership structures, and power dynamics, i.e., capitalism itself. Merely tweaking algorithms or constraining AIs does not alter their fundamental purpose of extending quantification, control, and exploitation. To make AI truly “ethical” in a political sense, it needs to be situated within cooperative or collectivist relations oriented around human needs instead of capital accumulation.

This book provides a unique perspective on the nature of artificial intelligence, asserting that its intelligence is not derived from technology, but rather from the human intelligence encoded within its algorithms. However, if AI’s intelligence derives from codified social practices, is it possible that human intelligence may likewise arise not from the individual brain but from internalizing historically developed social practices?

Pasquinelli’s proposition about the social origins of AI’s intelligence echoes theories that see human intelligence as likewise arising from social practices rather than purely individual cognition. For instance, Cultural-Historical Activity Theory (CHAT) views intelligence as rooted in collective, historically developed patterns of activity that are internalized by individuals. Lev Vygotsky, one of its key theorists, argues that intelligence is fundamentally a social process, not an internal mental phenomenon (Pasquinelli, 2023, p. 235; Vygotsky, 1978, p. 57). Similarly, Evald Ilyenkov (known as the philosophical mentor of activity theory) asserts that to function in a community, individuals must internalize its normative patterns of behaviour as an objective reality distinct from themselves (Ilyenkov, 2014, p. 30). CHAT contends that individual consciousness emerges through acquiring and navigating these normative patterns.

This “activity approach” to intelligence resonates with Pasquinelli’s discussion of algorithms as embodied instructions that shape behaviour in patterned ways, as “a model to follow” (p. 14). It also accords with his references to “real abstractions” like money or algorithms that are objective social forms that mediate activity. Perhaps Pasquinelli’s ground-breaking analysis of AI challenges us to reconsider not only the nature of artificial intelligence, but the nature of human intelligence as well.

Pasquinelli’s The Eye of the Master makes a significant intervention in the field of critical AI studies by reframing our understanding of artificial intelligence. His labour theory of automation convincingly argues that the essence of AI lies not in imitating biological cognition, but in encoding collective human behaviour and social relations into algorithms. This shifts the focus from technology to the human knowledge embodied within AI systems.

The book’s analysis opens new directions for rethinking and transforming the social impact of AI. It emphasises that AI is a site of struggle whose outcome is not set. It reveals that workers’ own knowledge and skills are embedded within automation, pointing to reclaiming worker agency over this knowledge. It also argues that merely making AI more ethical is insufficient without transforming the political economy in which it operates. Perhaps even more consequentially, Pasquinelli’s social theory of AI’s origins implicitly raises questions about human intelligence itself arising from internalized social practices rather than individual cognition alone.

Pasquinelli’s timely intervention compellingly resituates AI in a socio-historical context and provides a framework for imagining more just and equitable forms of automation serving human needs. Studies by Bareis and Katzenbach (2022), and McKelvey and Roberge (preprint), illustrate how AI imaginaries shape public perception, policy, and investment, thereby influencing both how AI is perceived and its future development. The future of AI is far from settled, and this book makes a major contribution to helping us shape that future.

 

References

Bareis, J. and Katzenbach, C. (2022). “Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics,” Science, Technology, & Human Values, Vol. 47(5), pp. 855-881.

Christian, B. (2019). The Alignment Problem: Machine Learning and Human Values. Norton.

Ilyenkov, E.V. (2014). Dialectics of the Ideal. In A. Levant and V. Oittinen (eds.), Dialectics of the Ideal: Evald Ilyenkov and Creative Soviet Marxism. Brill.

McKelvey, F. and Roberge, J. (preprint). Recursive Power: AI Governmentality and Technofutures.

McStay, A. (2018). Emotional AI: The Rise of Empathic Media. Sage.

Pasquinelli, M. (2023). The Eye of the Master: A Social History of Artificial Intelligence. Verso.

Sohn-Rethel, A. (1978). Intellectual and Manual Labour: A Critique of Epistemology. Humanities Press.

Stark, L. (2023). “Breaking Up AI Ethics,” American Literature, Vol. 95(2).

Steinhoff, J. (2021). Automation and Autonomy: Labour, Capital and Machines in the Artificial Intelligence Industry. Palgrave Macmillan.

Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.

Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.

 

Alex Levant

Alex Levant teaches in the Department of Communication Studies at Wilfrid Laurier University, Waterloo, Canada. He specializes in critical media theory and emerging/future technologies. He is co-editor of Activity Theory: An Introduction (forthcoming) and Dialectics of the Ideal. His articles have appeared in Historical Materialism, Stasis, Critique, Educational Review, and Mind, Culture and Activity. He is corresponding editor of Historical Materialism and on the editorial board of Mind, Culture and Activity.

Email: alevant@wlu.ca

 

 
