Gary Hall: Creative AI – Thinking Outside the Black Box

Blogpost May 2024

Gary Hall

Sixty years ago, the renowned science-fiction writer Arthur C. Clarke emphasised in a BBC Horizon documentary that ‘The future is not merely an extension of the present with bigger and better machines and cities and gadgets. It will be fundamentally different … try as we can, we’ll never outguess it’. Despite this, the same documentary sees Clarke successfully predicting the invention of 3D printers, the internet and remote working. Admittedly, he also envisions the use of trained chimpanzees as cleaners and cooks! Nonetheless, Clarke’s warning from 1964 about the pointlessness of trying to outguess the future remains relevant today – all the more so given how many people are intent on interpreting the political promise of our latest bigger and better machines, namely those gathered under the label ‘artificial intelligence’ (AI), in terms of fascism, nihilism, Thatcherism or that collective form of labour Marx referred to as the ‘general intellect’. The future here is not only an extension of the present; it is also an algorithmic repetition of the political philosophies of the past.

What, though, if we’re interested in adopting a less pre-programmed approach to AI: an approach that (under the influence of theorists such as Alberto Moreiras, Wendy Brown and Chantal Mouffe) understands politics as the taking of a decision in an undecidable terrain, and so does not claim to know what the politics of artificial intelligence is in advance of intellectual interrogation, but instead leaves the question of AI and its future more open?

It’s precisely out of an attempt to engage with AI politically in this non-moralistic sense that in a forthcoming book I’ve been experimenting – playfully and piratically, I must admit – with the concept of artificial creative intelligence (ACI). Collaboratively generated by the artist Mark Amerika and OpenAI’s GPT-2, ACI is defined in the former’s My Life as an Artificial Creative Intelligence as ‘a human being who can think outside of the box’. I’m using the term ‘piratically’ here according to its etymological meaning of trying, testing, teasing and giving trouble because, for me, artificial creative intelligence needs to include thinking outside of the masked black box that ontologically separates the human, its thought-processes and philosophies, from the nonhuman: be it animals, plants, the planet, the cosmos … or indeed technologies such as the book and generative AI.

Understood like this, the approach to AI of ACI is very different from the left-liberal techno-humanism promoted by the various institutes for human-centred, -compatible or -inspired AI that have been established over the last decade. But it is also distinct from the approach advocated in recent work that seeks to ‘unmask’ the algorithmic biases of AI in order to safeguard the human: work which likewise functions, dangerously, to deny the human’s co-constitutive relation with the nonhuman whilst simultaneously keeping the human at the centre of the world.

A snapshot illustration of such creative outside-the-box thinking can be provided with the help of two accounts of AI art. The first comes from a 2023 paper on ‘AI Art and Its Impact on Artists’, written by members of the Distributed AI Research Institute (DAIR) in collaboration with a number of artists. In this paper, Harry H. Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers and Timnit Gebru set the human up in a traditional hierarchical dichotomy with the nonhuman machine that is artificial intelligence by insisting that multimodal generative AI systems do not have agency and ‘are not artists’. Art is portrayed as a ‘uniquely human activity’. It is connected ‘specifically to human culture and experience’: those continually evolving ‘customs, beliefs, meanings, and habits, including those habits of aesthetic production, supplied by the larger culture’.

Declarations of human exceptionalism of this kind should come as no surprise. Not when ‘AI Art and Its Impact on Artists’ derives its understanding of art and aesthetics in the age of AI in part from liberal, humanist figures who were writing in the first few decades of the 20th century: namely, the American philosopher and reformer of liberal arts education, John Dewey; and a representative of Bloomsbury Group liberalism, the Englishman Clive Bell. To be fair, Jiang et al. also refer to several publications by contemporary scholars of Chinese, Japanese and Africana Philosophy in this context (although it’s noticeable that most of these scholars are themselves located in western nations). Still, liberal humanism holds its values to be universal (rather than pluriversal or situated), so nothing much changes as a result: according to Jiang et al., most philosophers of art and aesthetics argue that nonhuman entities are unable to be truly creative. On this (common sense) view, artists use ‘external materials or the body’ to make their lived experience present to an audience in an ‘intensified form’ through the development of a personal style that is authentic to them. It is an experience that is ‘unique to each human by virtue of the different cultural environments that furnish the broader set of habits, dispositions towards action, that enabled the development of anything called a personal style through how an individual took up those habits and deployed them intelligently’. Consequently, art cannot be performed by artifacts. Generative AI technologies ‘lack the kinds of experiences and cultural inheritances that structure every creative act’. (The human exceptionalism of Jiang et al. thus aligns with the majority of legal systems to date, for which works created using artificial intelligence do not meet the criteria for copyright protection, on the basis that those criteria rule out anything authored to a significant extent by nonhumans. It is also very much in keeping with how the question of computer creativity has been approached historically.)

The second account of artificially intelligent art can be found in Joanna Zylinska’s 2020 book, AI Art. It shows how human artists can be conceived more imaginatively – and politically – as themselves ‘having always been technical, and thus also, to some extent, artificially intelligent’. This is because technology, far from being external, is at the ‘heart of the heart’ of the human, its ‘“body and soul”’, in a relation of what Jacques Derrida and Bernard Stiegler term originary technicity or originary prostheticity. As Zylinska has it: ‘humans are quintessentially technical beings, in the sense that we have emerged with technology and through our relationship to it, from flint stones used as tools and weapons to genetic and cultural algorithms’. She even goes so far as to argue that the ethical choices we think we make as a result of human deliberation consist primarily of physical responses performed by ‘an “algorithm” of DNA, hormones and other chemicals’ that drives us to behave in particular ways.

How can this second ‘human-as-machine’ conception of artificially intelligent art be positioned (albeit heuristically) as the more political of the two? After all, doing so would appear to many to be rather counter-intuitive, given the overtly politically engaged nature of the work of DAIR, Gebru et al. (DAIR describes itself as operating ‘free from Big Tech’s pervasive influence’ to publish ‘work uncovering and mitigating the harms of current AI systems, and research, tools and frameworks for the technological future we should build instead’.) The second account can nonetheless be positioned as the more political because it destabilises the belief that art and culture stem from the creativity of self-identical, non-technological human individuals – a belief that stretches back at least as far as the 18th century – and opens onto an expanded notion of agency and intelligence that is not delimited by anthropocentrism (and so is not decided anti-politically in advance: i.e., as that which is recognised by humans as agency and intelligence). Such an ACI approach to AI thus presents an opportunity even more radical – in a non-liberal, non-neoliberal, non-moralistic sense – than the one Jiang et al. point to in ‘AI Art and Its Impact on Artists’.

Rooted as the latter is in the ‘argument that art is a uniquely human endeavor’, Jiang and his co-authors advocate for new ‘sector and industry specific’ auditing, reporting and transparency proposals to be introduced for the effective future regulation and governance of otherwise black-boxed large-scale GenAI systems that are currently based on the unilateral appropriation of free labour without consent. (One idea often proposed is to devise either a legal or a technological means whereby artists can opt out of having their work exploited for commercial machine learning like this. Nightshade v1.0, for example, is a free tool made available by computer scientists at the University of Chicago that enables artists to protect their creative works from being used without their permission to train generative text-to-image models. It does so by ‘poisoning’ images at the pixel level – hence its name – obfuscating them from the perspective of AI, but not the human viewer; a toy sketch of this kind of pixel-level perturbation follows below. Alternative ideas involve incorporating watermarks or tags into AI-generated output for the purpose of distinguishing it from human-generated content. Some intellectual property experts have even suggested the introduction of a new legal framework, termed ‘learnright’, complete with laws designed to oversee the manner in which AI utilises content for self-training.) The aim is to orient these tools, together with the people and organisations that build them, toward the goal of enhancing human creativity rather than trying to ‘supplant it’. When it comes to the impact of AI on small-scale artists especially, the dangers of the latter include loss of market share, income, credit and compensation, along with labour displacement and reputational damage, not to mention plagiarism and copyright infringement, at least as these are conventionally conceived within the proprietorial culture of late-stage capitalism. It’s a list of earnings-related harms that is in keeping with Jiang et al.’s presentation of independent artists today – especially those who have neither existing wealth nor the ability to support their practice by taking on other kinds of day jobs – as highly competitive microentrepreneurs. Witness the interest attributed to them in trading ‘tutorials, tools, and resources’, and in gaining sufficient visibility on social media platforms to be able to ‘build an audience and sell their work’.
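To give a concrete, if deliberately simplified, sense of what ‘poisoning’ an image at the pixel level might involve, here is a minimal Python sketch of the fast gradient sign method (FGSM), a classic adversarial-perturbation technique. It is emphatically not Nightshade’s actual algorithm – Nightshade targets the feature space of large text-to-image models – and the linear ‘model’ below is a hypothetical stand-in invented purely for illustration.

```python
import numpy as np

# Toy stand-in "model": a linear scorer over flattened RGB pixel values.
# (Hypothetical: real poisoning tools target the feature extractors of
# large text-to-image models, not a linear classifier.)
rng = np.random.default_rng(0)
weights = rng.normal(size=(64 * 64 * 3,))

def score(image):
    """Return the toy model's raw score for an RGB image with values in [0, 1]."""
    return float(weights @ image.ravel())

def perturb(image, epsilon=2 / 255):
    """Nudge every pixel by at most epsilon in the direction that most
    changes the model's score: the sign of the gradient, as in the fast
    gradient sign method. For this linear scorer the gradient is simply
    the weight vector. The change is too small for a human viewer to
    notice, yet it systematically biases what the model 'sees'."""
    gradient = weights.reshape(image.shape)
    poisoned = image + epsilon * np.sign(gradient)
    return np.clip(poisoned, 0.0, 1.0)  # keep pixel values valid

image = rng.uniform(size=(64, 64, 3))  # placeholder for an artwork
poisoned = perturb(image)

print("max per-pixel change:", np.abs(poisoned - image).max())  # bounded by epsilon
print("score before:", score(image))
print("score after: ", score(poisoned))  # shifted far more than the pixels were
```

The point of the sketch is the asymmetry it demonstrates: a change bounded at under 1% per pixel, invisible to a human viewer, moves the model’s output dramatically. That asymmetry is the general principle tools like Nightshade exploit, albeit with far more sophisticated machinery.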

According to Demis Hassabis, chief executive of Google’s AI unit, we ought to approach the dangers posed by artificial intelligence with the same level of seriousness as we do the climate crisis. We should thus institute a regulatory framework overseen initially by a body akin to the Intergovernmental Panel on Climate Change (IPCC), and subsequently, for the long term, by an organisation resembling the International Atomic Energy Agency (IAEA). Of course, it is quite typical of those behind Big Tech to call for the regulation of the anticipated or hypothetical dangers that foundational AI models will pose at some point in the future, such as their ability to circumvent our control or render humanity extinct, rather than for actions that address the very real risks they represent to society right now. The position of Amazon, Google, Microsoft, et al. as the dominant businesses in the AI market – the latter both in its own right and as a major investor in OpenAI – would be impacted far more if governments were seriously to adopt the second of these approaches to the safety testing of AI instead of leaving it to voluntary self-regulation on their part. These companies would also be exposed to greater competition and challenge if it wasn’t just Big Tech that was seen as having the money, computing power and technical expertise to deal with such existential concerns: if AI engines and their datasets had to be made available on an open or commons basis that makes it easier for a diverse range of smaller, independent and non-profit entities to be part of the AI ecosystem, and thus at least offer alternative visions of the future for AI, the human and indeed the planet. (It’s estimated that OpenAI burned through $100 million in computing energy and resources when training its GPT-4 model for release in 2023.)

Nevertheless, to convey a sense of the radical political potential of artificial creative intelligence, let’s return to the example of the climate crisis – an example I also cited previously in relation to the author and social activist Naomi Klein’s critique of the architects of generative AI. As we saw there, our romantic and extractive attitude toward the environment, which presents it – much as Jiang et al. do the work of artists in the face of AI – as either passive background to be protected or freely accessible resource to be exploited for wealth and profit, is underpinned by a modernist ontology based on the separation of human from nonhuman, including chimpanzee servants. It is this very ontology and the associated liberal, humanist values – which in their neoliberal form frequently include an emphasis on auditing, transparency and reporting, as we have seen – that artificial creative intelligence can help us to move beyond with its ability to think outside of the box. What’s more, it can do so not just in the unguessable future but in the present, too.

This post is extracted from the author’s forthcoming book, Masked Media: What It Means to Be Human in the Age of Artificial Creative Intelligence (London: Open Humanities Press).

Gary Hall is a media theorist who works at the intersections of digital culture, politics and philosophy. He is Professor of Media at Coventry University, UK, where he is founding director of the Centre for Postdigital Cultures. His research has appeared in Radical Philosophy, New Formations, Media Theory, Cultural Studies, Cultural Politics, American Literature and Angelaki. He is also the author of a number of books including A Stubborn Fury: How Writing Works In Elitist Britain (Open Humanities Press, 2021), Pirate Philosophy (MIT Press, 2016) and The Uberfication of the University (Minnesota UP, 2016).
