
For the official version of record, see here:
Roderick, I. (2023). Autism Robot Therapy, Remediation, and Mimetic Disabling. Media Theory, 7(2), 103–126. Retrieved from https://journalcontent.mediatheoryjournal.org/index.php/mt/article/view/585
Autism Robot Therapy, Remediation, and Mimetic Disabling[1]
IAN RODERICK
Wilfrid Laurier University, CANADA
Abstract
This article presents a critical study of the discursive positioning of social robots as intermediaries in remedial practices of care in autism therapy. It begins by examining the promotion of social robots to augment autistic children’s social skills as a form of remediation in which they take on a mediator role between therapist and patient. Drawing from McGuire’s genealogy of autism advocacy, I argue that the therapy robots operate as part of a network of coordinating and normalizing strategies that emerges in response to the evident “crisis” of autism. Hayles’s concept of material metaphor is then applied to highlight the potential discursive-material relations of the interposing role played by the robot. Finally, I argue that though designed as an enabling device, in actuality, metaphoric transferences make the robots mimetically disabling.
Keywords
Autism, Robots, Disability, Metaphor, Remediation, Mimesis
Introduction
Social robots are robots designed with a “social interface,” making them capable of conveying “intention” and interacting with humans or other agents in a way that is responsive and supportive of tasked goals (Hegel et al., 2009). More recently, research has increasingly focused upon developing “socially assistive robots [to] help people through social, rather than physical, interactions” (Admoni and Scassellati, 2014: 1). As such, developments in social robotics are increasingly being expanded to potential applications in education, healthcare, eldercare, as well as the service industries. One such application is in autism behavioural therapy where social robots are being trialed as therapeutic devices to improve interaction between therapist and patient. Robots are being incorporated into behavioural therapy because they are thought to afford greater engagement on the part of the autistic patient. Accordingly, this article presents a critical study of the discursive positioning of social robots as intermediaries in remedial practices of care in autism therapy.
This article begins by describing how researchers position their robots as being able to compensate for the perceived deficits in sociability on the part of autistic children. The paper then seeks to contextualize the call for autism robot therapy in relation to dominant autism discourse and its articulation within an apparatus of advocacy. This, I argue, undergirds the use of social robotics to augment autistic children’s social skills as a form of remediation in which social robots take on a mediator role. To better understand the triadic relations between therapist, robot, and patient, Hayles’s concept of material metaphor is then applied to highlight the potential discursive-material relations of the role played by the robot as mediator. Finally, the paper explores the metaphoric associations between robot and autistic patient and the potential for mimetic transference.
What concerns me is the possibility of what Kathleen Richardson (2010) refers to as mimetic disabling being realized in the therapy design. As Richardson (2010: 10) explains, “central to the practices of mimesis and alterity are disabling techniques that act to create new forms of symmetry and alterity between humans and machines”. It is those forms of symmetry and alterity being established in the triadic therapy relationship that are to be addressed below. While Richardson’s study focuses upon the ways in which roboticists mimetically reproduce their own embodiments in the humanoid robots they develop, I borrow the concept to consider how the robot mediates between autistic and normative or “neurotypical” sociabilities mimetically. In short, how might the incorporation of social robots in behavioural therapy, already fraught with perils of disablement, end up reproducing disabling practices and discourses?
Enrolling robots into ASD therapy
As a diagnostic category, the appellation “autism” represents a developmental disorder marked by deficits in language, communication, and managing social relationships. Relabelled as Autism Spectrum Disorder (ASD) in 2013, an autism diagnosis has come to encompass a wide-ranging cluster of symptoms and degrees of impairment such that, as Singh (2015) proposes, there can be said to be multiple autisms. Nevertheless, despite this move to seemingly define autism along a continuum, by and large, “the dominant psychological and neurological models continue to emphasize the absence of empathy” (Silverman, 2011: 8) as if there is one common, stable signifier of autism (Biklen and Attfield, 2005: 11). In other words, autism continues to be defined as a disability in which those afflicted lack the ability to be oriented towards others.[2]
Insofar as autism is understood as a neurobehavioural developmental disability marked by “mind blindness” (Murray, 2008), behavioural therapies are called for as “interventions” to offset developmental “deficits” in communication and social perception. By constituting these differences as deficiencies, behavioural autism therapies orient actors (therapists, families, patients, etc.) towards a program of action that seeks to “recover” the patient, bringing them as close to the range of “normal” as possible (see Keyes, 2020; Williams, 2021). Therapy such as Applied Behaviour Analysis (initially, sometimes called Behavioural Engineering) is sought out for its apparent ability to provide autistic children with training in social and communication related behaviours presumed to positively impact their lives. The idea is to essentially “bootstrap” the children’s social interactions by giving them a repertoire of neurotypical and therefore “socially appropriate” behaviours that can then be carried over from an instructional setting into their everyday lives. It is this logic of bootstrapping that underwrites research into robotic autism therapy whereby robots are enrolled (Callon, 1984) as intermediaries to facilitate the transfer of skills between therapist and patient.
Fong, Nourbakhsh, and Dautenhahn (2003: 145) introduce the term “socially interactive robot” to demarcate a class of robots that is distinct “from other robots that involve ‘conventional’ human-robot interaction, such as those used in teleoperation scenarios”. Such robots are explicitly designed to possess and demonstrate “human-social characteristics” including:
• express and/or perceive emotions;
• communicate with high-level dialogue;
• learn/recognize models of other agents;
• establish/maintain social relationships;
• use natural cues (gaze, gestures, etc.);
• exhibit distinctive personality and character;
• may learn/develop social competencies (Fong et al., 2003: 145).
Interestingly, these tend to be precisely those qualities thought to be lacking in children diagnosed as autistic. Such competencies endow the social robot with the ability to be both persuasive machine and avatar (Fong et al., 2003: 146). In this way, social robots are understood as possessing qualities that make them suited to bridging “the divide” between autism therapist and autistic patient.
Accordingly, within the scientific literature and, in keeping with everyday talk about technology (Slack and Wise, 2015: 110), social robots figure as useful, tool-like things that serve specific practical purposes. In this respect, the literature continues the convention of understanding technology as a neutral extension of human rational intention. The result is that social robots are presumed to simply extend the “social reach” of the therapists by better facilitating their interaction with their patients. This then puts the robot into a perceived intermediary role whereby it is thought to merely convey, much like a conduit, the intentions of the therapists. However, this depends upon a continued failure to attend to the (re)mediating capacities of technology and the ways in which the therapy robot, as an interceding material-metaphor (Hayles, 2002), is ordered into a set of relations that constitute autism as a crisis to be overcome.
Advocacy and crisis autism
The promotion of autism robot therapy needs to be contextualized in terms of how autism and advocacy have been co-constituted. As McGuire (2016: 24) explains, autism comes to be conceptualized as an imminent threat to the well-being of families and, therefore, society, which in turn requires advocacy to ameliorate that threat:
As autism is narrated as a growing threat to the ‘good life’ of neoliberal development, autism advocacy is called into being as that which must generate more and more ways to neutralize the non-normative threat of autism by acting now on individual bodies to secure better (i.e., more normative) futures for all.
The narrative does not stop at making autism a “social risk” in which all families share; it intensifies that risk by making autism visible as an unstable and destabilizing threat such that, if left unchecked, the risk will only grow and the “disease” of autism will only spread (McGuire, 2016: 55-58; Yergeau, 2018: 10-11; Duquette et al., 2008). Advocacy enlists others to intervene on behalf of the person “with autism”, what Yergeau (2018: 2) terms “autism somethings” whereby “nonautistic stakeholders become authorized … as autism parents, as autism researchers, as autism therapists, and specialists and mentors and advocates”. As such, advocacy is not only productive of a subject-with-autism but also produces a whole host of other subjects-subject-to-autism.
Advocacy, as McGuire documents it, is a network of coordinating and normalizing strategies, an apparatus of capture, that emerges in response to the evident “crisis” of autism that burdens both private families and public institutions such as education and health care. An apparatus is “literally anything that has in some way the capacity to capture, orient, determine, intercept, model, control, or secure the gestures, behaviors, opinions, or discourses of living beings” (Agamben, 2009: 14). Both material and linguistic, apparatuses capture living beings, making them subject to concrete, strategic interventions that arise through the intersection of relations of power and knowledge. In other words, apparatuses make subjects knowable in relation to known problems and are “rooted in the very processes of ‘humanization’ that made ‘humans’ out of the animals we classify under the rubric Homo sapiens” (Agamben 2009: 18). Accordingly, apparatuses are both ontological and normative since they establish a border between the fullness of “human being” and the incompleteness of “under-developed” life.
Advocacy, then, is more than a simple response to the “problem” of autism. It advocates for a particular but dominant knowledge of autism as a developmental impairment that an individual has and that causes them to behave and respond to others improperly. As McGuire (2016: 27) surmises, “In dominant contemporary discourses of advocacy, to be aware of autism is to be aware of it as a disease of epidemic proportions and to treat autism is to seek to ‘cure’ it…”. This means that the only autism that autism advocacy can speak is “crisis autism”. Understanding autism in this way leads to an advocacy that can only be against autism, since it casts autistic people as instead “having autism” and thus, in grammar and in phenotype, their autism is potentially detachable (Sinclair, 2013).
Social robots become part of the advocacy apparatus as a response to the call to remedy crisis autism. They are incorporated as corrective tools to address those “deficits” that come with autism, thus potentially severing autism from the carrier. However, their “fit” within the advocacy apparatus depends upon more than their apparent utility. Social robotics enter the advocacy apparatus only insofar as they are supportive of and supported by the knowledge of crisis autism. So, social robots will aid autism therapists by being better able to capture the attention of the “disconnected” child “with autism” and more objective in recording aberrant behaviours. They will help advocate-parents by making diagnoses and access to therapy quicker, cheaper, and more perceptive. Once authorized, autism therapy robots can then begin to do their work of mediating between the other elements of the apparatus.
The (re)mediation of robots
Communication is often conceived of as a relatively straightforward process in which the medium makes its presence known only through the successful conveyance of sounds, symbols and ideas. The medium is thought to serve as a channel or conduit that immediately carries communication between a sender and receiver (Shannon and Weaver, 1949) and any “noise” or alteration of the “message” by the medium is conceived of as “degradation.” As Bolter and Grusin (2000: 5) propose, we have a proclivity for conceptualising communication using a logic of immediacy, whereby the referent of the representation is rendered present through the disappearance or disavowal of the medium. Immediacy, then, underwrites a normative theory of communication that assumes there to be no gap, no difference, between communicators. In the case of autism therapy, the robot is enrolled under the logic of immediacy such that the affordances of the technology will overcome the gap between therapist and patient – that it will bring both parties into communication through what can be termed, following from Latour (2005), a process of (re)mediation.
Latour’s (2005) distinction between intermediaries and mediators is helpful here because it parallels two opposing logics of communication. The intermediary is “what transports meaning or force without transformation: defining its inputs is enough to define its outputs” (Latour, 2005: 39). In keeping with the logic of immediacy, intermediaries remain faithful servants in the background and do not degrade the message through their (material) presence. In effect, the intermediary is merely a vehicle for carrying the message unaltered to its destination. Conversely, in the case of mediators, “input is never a good predictor of their output; their specificity has to be taken into account every time. Mediators transform, translate, distort, and modify the meaning or the elements they are supposed to carry” (Latour, 2005: 39). So, in contrast to the intermediary, the mediator does not simply channel or carry the message between interactants but acts on it and so re-mediates it.
This is not to say that actor-entities must take on one or the other role exclusively. As Latour (2005: 39) explains, there is “constant uncertainty over the intimate nature of entities – are they behaving as intermediaries or as mediators?” Entities that might be initially taken for an intermediary may in due course turn out to be an unexpected mediator (Latour, 2005: 39). So long as the entities appear to function as intermediaries, then they are conceptualised as what Latour (1986: 5) has called elsewhere “well-aligned and faithful allies”, but when an entity begins to intercede in ways that come from its own complexity and produces its own outputs, it is then understood to be a mediator. In the case of the therapy robots, it is not so much that there is uncertainty as to which role they perform but, rather, they are enrolled to act simultaneously as intermediary on behalf of the therapist and, at the same time, as mediator to remediate the attention of the patient.
Remediation can of course mean the giving of therapy to correct a defect and, instrumentally, this is how robot therapy is understood. Robot therapy is designed by researchers to improve “neurotypical” social and communication skills in autistic patients. In a different sense though, remediation can also refer to the process of incorporating a new medium of communication and the rearrangement of relations and matter that go with it. Bolter and Grusin (2000: 17) describe the process of remediation as one whereby “each medium promises to reform its predecessors by offering a more immediate or authentic experience, the promise of reform inevitably leads us to become aware of the new medium as a medium”. Conceptually, remediation thus calls attention to the way the robot, introduced as an innovation into the therapist-patient interaction, makes its presence felt as a medium.
Introducing social robotics is said to improve the effectiveness of behavioural therapy by being able to capture the attention of the autistic child in a way that the human therapist cannot and, implicitly, making more immediate the interaction between therapist and autistic patient. The robot does this, not by disappearing as one might expect of an intermediary but rather by becoming present to the patient as a mediator between therapist and patient (see Spiel et al., 2019: 15). So, while intermediaries are easily associated with the ambition to immediacy, the social robot is also enrolled as a mediator to reform the patient-therapist interaction towards greater immediacy and, ultimately, sociability. The robot can therefore be understood as a remediation of remedial attention whereby it is introduced to overcome barriers (in communication) between therapist and patient. What the concept of remediation points to then is that while proponents of robot therapy describe the function of the robot as a neutral tool to enhance therapist-patient interaction, as a medium, the robot intercedes in a manner that is never neutral and therefore refashions the interaction in accordance with its own form.
This is not simply a technologically determinist perspective since new media and technologies “emerge from within cultural contexts, and they refashion other media, which are embedded in the same or similar contexts” (Bolter and Grusin, 2000: 17). In other words, the social robot does not suddenly appear in autism behavioural therapy, but instead comes to occupy a space that was already, to some degree, made readily available to it. The robot functions as a device of “interessement” by being interposed into and stabilising not only the immediate set of relations between therapist and patient but also the broader advocacy relations between client family, practitioners, researchers, institutions, health authorities, funding bodies, amongst others (Callon, 1984). Robots are called upon for their seeming ability to produce patient engagement and therefore an experience of greater immediacy between patient and therapist as the robot intercedes on behalf of the therapist. As such, the social robot becomes an object of interest to all entities through the promise of being able “to shape and consolidate the social link” (Callon, 1984: 208) between patient and therapist initially and then to extend it “outward” to the “neurotypical” world. Accordingly, the presence of the robot in the therapeutic interaction is conceptualized as an enhancement rather than an alteration of the therapeutic programme, acting on the patient on behalf of the therapist.
In this way, the social robot is represented as a remedial tool that will enable the autistic patient to better integrate with the surrounding social world. Researchers and their institutions therefore promote interest in social robots by presenting them as devices that are of interest to the patient. However, if rather than a “therapeutic” tool, the robot is instead examined as a remediating artefact interposed between the therapist and the patient, a different understanding of the operations of the robot becomes possible. In being assigned the role of (re)mediator, the robot takes up the function of interface. As such, it is both a mediating material artefact and, at the same time, a metaphor for a sociability yet to come. Thus, rather than assuming the robot to be simply a means of intervening in the therapist-patient relation to overcome “social deficits”, a more critical perspective is possible in which the robot can be conceptualised as mimetically disabling through material-metaphoric slippages between patient and robot. Analysing autism robot therapy in this way invites us to consider how the autistic patient is not only an interactant – the recipient of therapeutic care – but is also addressed as a site of remediation. In other words, while the aim of the robot-enhanced therapy is to make the patient “more social” by leveraging the socialness of the robot to transfer remedial communication skills, there is also the potential to discursively transfer the machine’s less-than-human or quasi-social status back onto the patient.
Material metaphors and mimetic disabling
Social robots differ from service robots in that they are designed with the expectation that people will interact with them rather than simply operate them. The capacity for social interaction is engineered into the robot as its “social interface” which entails “all the designed features by which a user judges the robot as having social qualities” (Hegel et al., 2009: 171) and encompasses the form, function, and context of the robot:
Every robot has an aesthetic form. Also, every robot has different kinds of behaviour, i.e. varying functions. Finally, the robot with its aesthetic form and functions is within various contexts whereby context also defines the problem which is the need for specific functions and a specific form within a specific context (Hegel et al., 2009: 172).
The social interface is therefore realised through design choices that are made to allow the intended user to perceive the robot as having the capacity for social interaction. Accordingly, roboticists conceptualise the social interface as “a metaphor for people to interact naturally with robots” analogous to the computer desktop “where people treat the things in the graphical user interface like in their real world – due to the metaphor they have an idea on how it works” (Hegel et al., 2009: 172). The social interface is thus designed to allow the user to treat the robot as a social being based upon their understanding of how one interacts with another social being. But this also means that the social interface works both ways: the social attributes of the robot are realised through the attributions of the user but, equally, the social interface is designed in accordance with the social attributes attributed to the user. Quite simply, the socialness of the robot is designed in relation to the assumed socialness of the anticipated human interactant. Social robots therefore operate as what Hayles (2002: 22) terms material metaphors in that they simultaneously materialise and transfer our understandings of how “natural” social interaction works from one domain to another.
Material-metaphoric affordances
Hayles (2002: 22) proposes material metaphor to call attention to “the traffic between words and physical artifacts”. Though not a new phenomenon, she proposes that the concept becomes
increasingly important as the symbol-processing machines we call computers are hooked into networks in which they are seamlessly integrated with apparatus that can actually do things in the world, from the sensors and actuators of mobile robots to the semiotic-material machinery that changes the numbers in bank accounts.
Material metaphors therefore point to the ways in which technological artefacts have both symbolic potentials and material affordances as mediators. This means that computational interfaces are materially metaphoric because they afford transferences between material and symbol, moving beyond the representational to actually “having the power to make things happen…” (Hayles, 2002: 22). If we were to understand the interface as an intermediary, we might easily assume that any “effects” produced through it are unidirectional. However, understood as a mediator, the interface cannot be merely a representation of what can be made to happen. Instead, interfaces are generative and structure our experiences of the material changes that we effect through their use. Accordingly, material metaphors foreground “[t]he interplay between semiotic components and physical attributes that gives rise to materiality simultaneously and with the same gesture gives rise to subjects who both perceive and are acted upon by this materiality” (Hayles, 2002: 107). Interfaces do more than simply symbolise our actions upon the world; they position us as particular kinds of actors intent upon performing actions particular to the kinds of actors we are perceived to be. As Hayles (2002: 107) explains, “Even when the interface is rendered as transparent as possible, this very immediacy is itself an act of meaning-making that positions the reader in a specific material relationship with the imaginative world…”. This is because, unlike referential signifiers, metaphors mediate subjectivities as well as meanings. Thus, the technical artefacts that we interface with and through have subjective affordances such that they not only bear meaning potentials but also have the potential to transfer those meanings between interactants.
Metaphoric-mimetic association
Aristotle (1932) conceptualized metaphor as “the application of a strange term either transferred from the genus and applied to the species or from the species and applied to the genus, or from one species to another or else by analogy”. A metaphor thus establishes a meaningful association between two different things that have been deemed comparable by the transference of familiar qualities over to the otherwise unfamiliar. Metaphor therefore depends upon a claim of semblance whereby the less familiar is made to resemble the more familiar in some specific way.
Through the process of metaphoric transference, metaphors afford imaginative ways of attributing information and affects to an unfamiliar conception by connecting it to a more familiar conception that bears or carries those attributes (see Ricœur, 1978). Accordingly, metaphors are generative rather than simply comparative or expressive. Furthermore, since metaphors work by transferring knowledge and affective associations, they also bring less familiar experience into the discursive. As Ricœur (1978: 144) proposes, metaphor provides “a kind of figurability to the message” and so lends discourse an imagistic quality. It is precisely this generative aspect of metaphor that leads Lakoff and Johnson (1980: 4) to conceptualize metaphors as integral to thought and action rather than being marginalized to “a matter of words” in poetic language.
Metaphors should therefore be understood as more than representations. They encompass a process of transformation in which a feature or aspect of one domain is made salient and significant to another. Returning to Ricœur (1978: 150, emphasis added), “To imagine, then, is not to have a mental picture of something but to display relations in a depicting mode”. In this way, metaphor is a relationality (a reality constituted of relations) in which a “new semantic pertinence” is established and “borne by the whole utterance” (Ricœur, 1978: 146) rather than through dyadic denotation. So, while metaphors are often analysed in terms of tenor and vehicle (Richards, 1936) or source and target (Lakoff and Johnson, 1980), this encourages treating metaphor as a unidirectional transfer of a concept from one domain to another.
Alternatively, Guldin (2012: 41) proposes that metaphors be understood as triadic in structure such that “they distinguish between a point of departure, a point of arrival and a space in between that has to be crossed in order to complete the process”. In keeping with the interactional approach to metaphor, greater emphasis is placed upon the process of establishing a relation by effecting a particular way of interpreting and experiencing the two domains. Indeed, for I.A. Richards,
the word ‘metaphor’ is sometimes used to signify what he means by ‘vehicle’, and sometimes to mean the symbiosis resulting from the conjunction of tenor and vehicle, and that metaphor … in the sense of this symbiosis, is impossible without just such an interaction of tenor and vehicle (Kirby, 1997: 519).
Since metaphoric meaning emerges through the crossing of “a space in-between” the two domains, both target and source domains must be adjusted to one another so that each is then comprehended in relation to the other. A metaphor thus mediates between the two. Moreover, this relation of symbiosis and the necessity to adjust suggests that metaphors are not unidirectional from source to target and so opens the possibility of transferences whereby both domains take on attributes in relation to the metaphor.
In autism robot therapy, the robot occupies the space in-between patient and therapist so that they may interface with one another. It is presumed to operate as an intermediary on behalf of the therapist, drawing the patient’s attention and delivering social cues that might otherwise be too complex or overwhelming (or too “neurotypical”) for the autistic patient. However, the robot is not simply a vehicle for delivering the remediating attention of the therapist to an otherwise inattentive patient. As an interface, both parties must interact through the affordances of the robot. Accordingly, the therapy robot becomes a generative material metaphor for normal(ised) social interaction to make (re)mediation possible. Furthermore, the viability of the social robot as an interface is derived not only through its role as an analogue to the therapist but equally in the way it analogizes the patient. Therefore, just as the robot mimetically serves as a proxy or avatar for the therapist, through metaphoric association, it also mimetically depicts the patient to the therapist. It is through this depiction that the autistic patient can be made to share resemblances with the robot that are mimetically disabling.
The longstanding association between autistic people and robots (Guberman and Haimson, 2023; Keyes, 2020; Yergeau, 2018) makes the potential for mimetic transference from the robot back onto the patient an already well-travelled path. Very much like Williams and Gilbert’s (2020: 3) assessment of wearable technologies and autism intervention, “the autistic subject is tensored, suspended precariously as both a computer-like human and in need of instruction from a human-like computer”. Given this association, it is not surprising that in remedial technology research, “robots appear to be one of the most commonly used technologies” (Spiel et al., 2019: 10). The seeming obviousness of social robotics as a technical “solution” to behavioural signs of autism is ultimately derived from a well-articulated and yet somehow unacknowledged “amplifying [of] the inhumanity of both autistic people and robots” (Williams, 2021: 453). It is precisely this process of metaphoric slippage that allows the non-human status of autistic people to be both asserted and yet remain implicit.
Some metaphors we disable by
Recalling the persuasive and avatar functions of social robots, three metaphors are commonly used to describe the functionality of therapy robots: a crutch, a bridge, and a simplified human. Both the crutch and the bridge metaphors constitute how the social robot is understood to enhance autism therapy by bridging a gap between the disability of the patient and the normative expectations bound up in the therapy. As source domains, crutches and bridges share a common functionality in that they afford the user a means of making an otherwise difficult or even impassable crossing. Thus, as representations of the therapeutic role of the robot, both crutch and bridge metaphors call upon a number of associations regarding the “nature” of autism and the autistic individual as not only impaired but separated from the world of normative abilities and sociability (recall Biklen and Attfield, 2005: 269).
Scassellati (2005a; 2005b: 12) proposed that the functioning of the therapy robot would be akin to a “social crutch” whereby it “motivates and engages children, teaches them social skills incrementally, and assists in the transfer of this knowledge to interactions with humans”. Taken up by a number of researchers, the metaphor is consistently applied with the idea that the robot would function as a support and that the skills learned through interacting with the therapy robot could ultimately be transferred over to human interactions:
Children can use robots as a “social crutch” to practice turn taking, language, and joint attention skills; it is hypothesized that they would eventually transfer learned skills to interactions with humans (Srinivasan et al., 2015: 3).
Here, the theoretical point of view is to create an environment in which a robot can model specific behaviors for a child or the child can practice specific skills with the robot (Scassellati speaks of “social crutch”). The aim is to teach a skill that the child can imitate or learn and eventually transfer to interactions with humans (Boucenna et al., 2014: 734).
This presents the hope that a robot might be used as a social crutch which engages children, teaches them social skills incrementally, and assists in the transfer of this knowledge to interactions with humans (Tapus et al., 2007: 37).
The robot is said to function like a crutch since just as a crutch provides support and still allows the user to use and presumably strengthen the impaired limb, the robot is presumed to provide support to autistic children, allowing them to use and strengthen their impaired social skills. The fruitfulness of the therapy thus depends upon the skills learned through human-robot interaction being transferable to human interactions. Applying a crutch metaphor anticipates this desired outcome since crutches are typically thought of as a temporary assistive device. In other words, the metaphor choice anticipates the effectiveness of the device as a persuasive machine.
At the same time, though, the relational character of the crutch metaphor means that it affords a particular understanding not only of the robot but also of the “user” of the robot. The researchers’ use of the metaphor positions the autistic child as being effectively a-social. Since the therapy robot is introduced to support the child in acquiring the social skills they are deemed to lack, the child is thus positioned as lacking in relation to the robot.
All of the above demonstrate the robots generate a high degree of motivation and engagement in children, including in particular those who were unlikely or unwilling to interact socially with human therapists. Overall, this underlines the potential of using a robot as a “social crutch” to engage children, teach them social skills, and assist in the transfer of this knowledge to interactions with humans (Thill et al., 2013: 221).
…the goal would be to limit the social complexities typically experienced in therapy, so that individuals with ASD could attend to the lowest level of information and subsequently build social skills into complex behaviors from their more basic components (Diehl et al., 2012: 9).
The implication is that the child begins as “low functioning” and, as Simon (2016: 205) surmises, such a deficit model “does not foreground the exploration of relational possibilities and drawing out evidence of resourcefulness in people”. Instead, the metaphor reproduces an individuating account of autism in which the actual relational world of an autistic person is obfuscated and they are rendered up as a figure alone. In this respect, though intended to explain the functioning of a supposedly enabling device, the metaphor is in fact disabling.
The degree to which the therapy robot is an analogue to a crutch is questionable. Crutches support the user by transferring weight from the legs to the upper body in order to allow the user to accomplish the actual task of standing and/or walking. With the exception of a small handful of studies, the robot is not employed to aid the user in the actual act of human interaction but instead is a surrogate in therapeutic settings. In other words, the actual process of skill transference is implied by the choice of metaphor rather than necessarily realized in the functionality of the robot itself.
Whereas the crutch metaphor emphasizes deficit skill-acquisition through augmentation, the bridge metaphor stresses the overcoming of a barrier which prevents the autistic child from joining the social world. In applying a bridge metaphor, researchers seek to highlight how therapy robots can engage with the otherwise disengaged autistic child and establish “a connection” that will allow the therapist to interact vicariously with the patient:
However, it should be emphasised that if a robotic toy is used to help the child develop his/her social skills, it is never considered a tool that must replace the adult or peers in the play activity. On the contrary, “social” robots, because they foster interaction and imitation skills, can create a bridge between children with severe relational and communicative disorders and others (Besio et al., 2010: 15).
This is typical in that the robot is imagined as an intermediary on behalf of the therapist rather than a mediator. Like a crutch, a bridge makes traversal possible but as a spatial metaphor, it also establishes a clear separation between two distinct domains. The robot is said to bridge “the world” of the therapist and “the world” of the patient and so establish a relationship between therapist and patient:
The AuRoRA project was attempting to use the robot to bridge the gap between the complex and unpredictable world of human social behavior and the safe predictable world of simple toys (Duquette et al., 2008: 148).
…this approach could “bridge the gap,” so to speak, between a preference for the object-related world discussed previously and the demands of the social world, the latter of which pose a significant (and characteristic) challenge for individuals with ASD (Diehl et al., 2014: 415).
A robot is an object that can behave like a social partner, which can be a perfect bridge between the physical and the social world. Using a robot also allows for the embodied characteristics of face-to-face human interactions, without all the implicit rules and cues that regulate social interactions between humans and that are so difficult for ASD children to read (Melo et al., 2019: 202).
The bridge metaphor therefore does not just posit two separate domains but also hierarchizes them. While a bridge might be thought to mediate between the separated domains so that they might meet and afford traffic between them, as it is used in descriptions of robot therapy, the function of the “bridge” is to allow one-way traffic from a lesser world, characterized as that of objects and therefore a-social, to a superior world of social relations, populated by human beings.
Again, as with the crutch metaphor, the implication is that the autistic child begins as low-functioning and a-social, needing to be brought out of their world in order to be fully-functioning. Again, this replicates the notion of the autistic person “trapped” in a primigenial existence. The assumption is that without a suitable bridge, the autistic child will remain focused upon objects and a-social. As a quasi-social actor, the social robot is thought to offer the child more social-like transitional relations with an object, which could then lead to crossing over and forming social relationships with fully human beings.
Notably, other objects of interest to the child are precluded from functioning as material-mediators of social relations. As Katherine Runswick-Cole (2016: 22) has observed, while her son’s enthusiasm for chess and transmedia franchises such as Star Wars is treated as evidence of his autism, those interests are nonetheless established in his relationships with family members. The sociableness of his interests is overlooked because their dismissal fits the popular narrative that autism, as a tangible force, has taken hold of him and “stolen” him from his family. In this way, the robotics research reproduces the assumption that sociality is governed by a zero-one law of probability; without a bridge to bring the child out of their autistic “isolation,” they will be lost to the social world.
Robots are called upon then to support autistic children in learning a repertoire of social behaviours that will bring them closer to participation in the social world. Social robots, it is argued, are suited to this because their quasi-human qualities make them persuasive to the autistic child who would otherwise avoid social interaction. As Boucenna et al. (2014: 734) explain, “the aim is to teach a skill that the child can imitate or learn and eventually transfer to interactions with humans. In this case, the robot is used to simplify and facilitate social interaction”. The persuasiveness is rooted in both the capacity to capture the attention of the autistic child due to their presumed predisposition towards technology and the minimally expressive design. The idea is to engineer the robots so that they are not “too human-like or the child may lose interest” (Cabibihan et al., 2013: 596).
If, as a persuasive machine, the therapy robot functions to transfer desired behaviours to the patient, then as an avatar, it will also function as a simplified “representation of, or representative for, the human” (Fong et al., 2003: 146). As the source domain, the human is figured in the robot in a manner that is “stylised, simplified or cartoon-like” (Dautenhahn et al., 2009: 370) rather than as a realistic rendition. Thus, as a target domain, the robot shares features with its “normal” human counterparts, but these are presented so as to reduce the complexity of the social cues. In effect, it is the ambiguity of the social robot that makes it a persuasive metaphor for both therapist and patient; as a simplified human avatar, it “at once signifies both ‘is not’ and ‘is like’” (Ricœur, 2003: 6). It is this ambiguity that situates the robot between the therapist and the patient, between the social world and the world of objects, and between “normal” and autistic.
Since the degree of socialness of a social robot is designed in relation to the anticipated human interactants, the therapy robot is not just an analogue or avatar acting on behalf of the therapist. It also analogizes the child as autistic. The robot materializes ideas of social disablement and isolation and depicts what it is to be autistic from the perspective of the researchers. Accordingly, it is built to be minimally expressive and programmed to be rudimentary in its interactions because this is assumed to approximate the social functioning of the autistic child. Mimetically, therefore, the therapy robot is in fact disabling despite it being designed to be an “enabling” device. It positions the autistic child as trapped in a world of a-social objects and the robot’s “simplification” in turn underscores the idea that in being autistic, they too are incomplete and simplified human beings.
Conclusion
This paper has argued that, despite being promoted as enabling devices intended to enhance the lives of autistic children, autism therapy robots are in fact disabling. Researchers propose using robots to make behavioural therapies more engaging to autistic children. Their argument is that the robots are better able to capture the attention of, and motivate, the children, who are said to show a preference for interacting with robots over humans. Underlying this claim is the presupposition that autistic people are more oriented to objects than people and that their interests in objects are, in effect, limited to simple, denotative meanings rather than potentially connoting more complex interpersonal relationships.
Whereas the proponents of robot therapy tend to understand technology instrumentally, such that the robot is presumed to be a neutral tool that merely serves as an extension of the therapist, I have argued that the robots should be understood as anything but unbiased conduits of remedial care. Assuming the robots to function as intermediaries rather than mediators depends upon a continued obfuscation of the remediating capacities of technology. Rather than a mere corrective tool, the robot mimetically reproduces a conception of autism that renders the autistic person as deficient, alone and trapped. Quite simply, the robot not only materializes and symbolizes these discourses on autism but also potentially transfers those discourses back onto the patient. Accordingly, the robot is not simply an avatar/analogue to the therapist but also analogizes the patient in relation to the functionality metaphors of crutch, bridge and simplified human. As an interface, the robot materializes ideas of social disablement and isolation and depicts what it is to be autistic from the perspective of advocacy. Mimetically, therefore, the therapy robot is in fact disabling despite being designed as an “enabling” device, since it positions the autistic child as trapped in a world of a-social objects, and the robot’s “simplification” in turn underscores the idea that in being autistic, they too are incomplete and simplified human beings.
References
Admoni, H. & Scassellati, B. (2014) ‘Roles of Robots in Socially Assistive Applications’, IROS 2014 Workshop on Rehabilitation and Assistive Robotics.
Agamben, G. (2009) What is an Apparatus? And Other Essays. Stanford, Calif.: Stanford University Press.
Aristotle (1932) Poetics. Section 1457b, in Aristotle in 23 Volumes, Cambridge, Mass.: Harvard University Press. http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0056%3Asection%3D1457b (Accessed September 20, 2022).
Besio, S., Caprino, F. & Laudanna, E. (2010) ‘Guidelines for Using Robots in Education and Therapy Sessions for Children with Disabilities’, IROMEC. Interactive RObotic social MEdiators as Companions.
Biklen, D. & Attfield, R. (2005) Autism and the Myth of the Person Alone. New York: New York University Press.
Bolter, J.D. & Grusin, R. (2000) Remediation. Cambridge, Mass.: MIT Press.
Boucenna, S. et al. (2014) ‘Interactive Technologies for Autistic Children: A Review’, Cognitive Computation, 6(4): 722-740.
Cabibihan, J. et al. (2013) ‘Why Robots? A Survey on the Roles and Benefits of Social Robots in the Therapy of Children with Autism’, International Journal of Social Robotics, 5(4): 593-618.
Callon, M. (1984) ‘Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay’, The Sociological Review, 32(1 Suppl): 196-233.
Dautenhahn, K. et al. (2009) ‘KASPAR – A Minimally Expressive Humanoid Robot for Human–Robot Interaction Research’, Applied Bionics and Biomechanics, 6(3-4): 369-397.
Diehl, J.J. et al. (2014) ‘Clinical Applications of Robots in Autism Spectrum Disorder Diagnosis and Treatment’, in Comprehensive Guide to Autism. New York: Springer, pp. 411-422.
Diehl, J.J. et al. (2012) ‘The Clinical Use of Robots for Individuals with Autism Spectrum Disorders: A Critical Review’, Research in Autism Spectrum Disorders, 6(1): 249-262.
Duquette, A., Michaud, F. & Mercier, H. (2008) ‘Exploring the Use of a Mobile Robot as an Imitation Agent with Children with Low-Functioning Autism’, Autonomous Robots, 24(2): 147-157.
Fong, T., Nourbakhsh, I. & Dautenhahn, K. (2003) ‘A Survey of Socially Interactive Robots’, Robotics and Autonomous Systems, 42(3-4): 143-166.
Foucault, M. & Gordon, C. (1980) Power/Knowledge: Selected Interviews and Other Writings, 1972-1977. New York: Pantheon Books.
Guberman, J. & Haimson, O. (2023) ‘Not robots; Cyborgs — Furthering anti-ableist research in human-computer interaction’, First Monday, 28(1) [online]. Available At: https://doi.org/10.5210/fm.v28i1.12910. Accessed 10 March 2023.
Guldin, R. (2012) ‘From Transportation to Transformation: On the Use of the Metaphor of Translation within Media and Communication Theory’, Global Media Journal: Canadian Edition, 5(1) [online]. Available At: http://gmj-canadianedition.ca//wp-content/uploads/2018/11/v5i1_guldin.pdf. Accessed 12 September 2022.
Hayles, N.K. (2002) Writing Machines. Boston, Mass.: MIT Press.
Hegel, F. et al. (2009) ‘Understanding Social Robots’, 2009 Second International Conferences on Advances in Computer-Human Interactions: 169-174.
Keyes, O. (2020) ‘Automating autism: Disability, discourse, and Artificial Intelligence’, The Journal of Socio-Technical Critique, 1: 1-31.
Kirby, J.T. (1997) ‘Aristotle on Metaphor’, The American Journal of Philology, 118(4): 517-554.
Lakoff, G. & Johnson, M. (1980) Metaphors We Live By. Chicago: University of Chicago Press.
Latour, B. (1986) ‘Visualization and Cognition: Drawing Things Together’, in E. Long et al. (eds.) Knowledge and Society: Studies in the Sociology of Culture Past and Present. A Research Annual. Greenwich, Conn.: JAI Press, pp. 1-40.
Latour, B. (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford; New York: Oxford University Press.
McGuire, A. (2016) War on Autism: On the Cultural Logic of Normative Violence. Ann Arbor: University of Michigan Press.
Melo, F.S. et al. (2019) ‘Project INSIDE: Towards Autonomous Semi-Unstructured Human-Robot Social Interaction in Autism Therapy’, Artificial Intelligence in Medicine, 96: 198-216.
Murray, S. (2008) Representing Autism: Culture, Narrative, Fascination. Liverpool: Liverpool University Press.
Richards, I.A. (1936) The Philosophy of Rhetoric. New York: Oxford University Press.
Richardson, K. (2010) ‘Disabling as Mimesis and Alterity: Making Humanoid Robots at the Massachusetts Institute of Technology’ Etnofoor, 22(1): 75-90.
Ricœur, P. (1978) ‘The Metaphorical Process as Cognition, Imagination, and Feeling’, Critical Inquiry, 5(1): 143-159.
Ricœur, P. (2003) The Rule of Metaphor. New York: Routledge.
Runswick-Cole, K. (2016) ‘Understanding This Thing Called Autism’, in S. Timimi, R. Mallett & K. Runswick-Cole (eds.) Re-Thinking Autism: Diagnosis, Identity and Equality. Philadelphia: Jessica Kingsley Publishers, pp. 17-26.
Scassellati, B. (2005a) ‘Quantitative Metrics of Social Response for Autism Diagnosis’, ROMAN 2005. IEEE International Workshop on Robot and Human Interactive Communication: 585-590.
Scassellati, B. (2005b) ‘Using social robots to study abnormal social development’, Proceedings of the Fifth International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems: 11-14.
Shannon, C.E. & Weaver, W. (1949) The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Silverman, C. (2011) Understanding Autism: Parents, Doctors, and the History of a Disorder. Princeton: Princeton University Press.
Simon, G. (2016) ‘Thinking Systems: ‘Mind’ As Relational Activity’, in S. Timimi, R. Mallett & K. Runswick-Cole (eds.) Re-Thinking Autism: Diagnosis, Identity and Equality. Philadelphia: Jessica Kingsley Publishers, pp. 198-209.
Sinclair, J. (2013) ‘Why I Dislike “Person First” Language’, Autonomy, the Critical Journal of Interdisciplinary Autism Studies, 1(2) [online]. Available At: http://www.larry-arnold.net/Autonomy/index.php/autonomy/article/view/ED2/html. Accessed 12 September 2022.
Singh, J.S. (2015) Multiple Autisms: Spectrums of Advocacy and Genomic Science. Minneapolis: University of Minnesota Press.
Slack, J.D. & Wise, J.M. (2015) Culture and Technology: A Primer. New York: Peter Lang.
Spiel, K. et al. (2019) ‘Agency of Autistic Children in Technology Research – A Critical Literature Review’, ACM Transactions on Computer-Human Interaction, 26(6): 1-40.
Srinivasan, S.M. et al. (2015) ‘The Effects of Rhythm and Robotic Interventions on the Imitation/Praxis, Interpersonal Synchrony, and Motor Performance of Children with Autism Spectrum Disorder (ASD): A Pilot Randomized Controlled Trial’, Autism Research and Treatment, (27) 736516: 73-87.
Tapus, A., Matarić, M.J. & Scassellati, B. (2007) ‘The Grand Challenges in Socially Assistive Robotics’, IEEE Robotics and Automation Magazine, 14(1): 35-42.
Thill, S. et al. (2013) ‘Robot-Assisted Therapy for Autism Spectrum Disorders with (Partially) Autonomous Control: Challenges and Outlook’, Paladyn, 3(4): 209-217.
Williams, R.M. (2021) ‘I, Misfit: Empty Fortresses, Social Robots, and Peculiar Relations in Autism Research’, Techné: Research in Philosophy and Technology, 25(3): 451-478.
Williams, R.M. & Gilbert, J.E. (2020) ‘Perseverations of the Academy: A Survey of Wearable Technologies Applied to Autism Intervention’, International Journal of Human-Computer Studies, 143: 1-20.
Yergeau, M. (2018) Authoring Autism: On Rhetoric and Neurological Queerness. Durham: Duke University Press.
Notes
[1] I should like to express my gratitude for the generous comments of the reviewer of this paper and the kind manner in which they directed me towards a literature of which I was previously unaware.
[2] What Biklen (Biklen and Attfield, 2005: 269) calls the myth of the person alone.
Ian Roderick is an Associate Professor in Communication Studies and the MA in Cultural Analysis and Social Theory at Wilfrid Laurier University in Waterloo, Ontario, CANADA. He currently serves as the Special Issues Editor for Critical Discourse Studies. His research focuses upon the intersection between design, technology, culture, and power. He is author of Critical Discourse Studies and Technology: A Multimodal Approach to Analysing Technoculture. His essays have appeared in multiple journals including The Journal of Multicultural Discourses, Social Semiotics, Discourse: Studies in the Cultural Politics of Education, Critical Discourse Studies, and Journal of Language and Politics.
Email: iroderick@wlu.ca