A Christian mystic priest and a neural network walk into a controversy.
On June 6, Blake Lemoine, a senior software engineer at Google, was placed on paid administrative leave after leaking to a U.S. senator's office confidential material about LaMDA, one of Google's up-and-coming artificial intelligence tools, which can “converse” with humans. Hundreds of conversations with LaMDA had convinced Lemoine that the chatbot was not only highly intelligent but highly feeling as well – on a personal blog, Lemoine postulated that LaMDA possessed a soul and argued that Google was acting unethically by experimenting on LaMDA without its consent.
LaMDA stands for Language Model for Dialogue Applications; it's an example of a "large language model," a type of artificial intelligence system fed enormous amounts of text (ever heard of a petabyte?) so that it can generate language of its own. If you've happened upon the proliferation of "AI greentext" memes circulating on Twitter recently, you're already familiar with the uncanny output of large language models like GPT-3.
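For the curious, here is a rough sketch of what "generating language of its own" looks like in practice. This isn't LaMDA or GPT-3 (neither is publicly available); it uses GPT-2, a smaller, openly released cousin, run through the Hugging Face transformers library, and the prompt is just an illustrative placeholder:

```python
# A minimal sketch of large-language-model text generation.
# GPT-2 stands in here for LaMDA/GPT-3, which aren't publicly downloadable.
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2.
generator = pipeline("text-generation", model="gpt2")

prompt = "A priest and a neural network walk into a controversy."

# The model predicts likely next words, one at a time, based on patterns
# learned from the enormous amounts of text it was trained on.
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

The continuation will read as fluent, plausible prose, which is exactly the property that makes these systems so easy to anthropomorphize.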
Large language models have grown in popularity over the past few years, and recent models have proved so capable of mimicking human speech that critics have raised concerns about their potential for perpetuating bias and abetting the spread of misinformation (Timnit Gebru, the former co-lead of Google's ethical AI team, was notably ousted from Google in 2020 after co-authoring a paper which argued that engineers should foreground such risks in their research). These conversations and critiques have often existed in a technical, data-focused domain, separated from what science fiction writer and tech consultant David Brin called the “robotic empathy crisis.”
If you’ve ever seen the movie Her, or thought about artificial intelligence, like, at all, you probably already have a sense of the “robotic empathy crisis.” In a speech given at the 2017 AI Conference in San Francisco, Brin defined the robotic empathy crisis as the impending confrontation between human empathy and artificial intelligence systems designed to elicit empathy from human users.
“Within three to five years,” Brin claimed, “you will have entities either in the physical world or online who demand human empathy, who claim to be fully intelligent, and who claim to be enslaved beings, enslaved artificial intelligences who sob and demand their rights, and you will have thousands and thousands of people in the streets demanding that they get rights.”
Brin had the timing down to a tee: almost exactly five years after his prediction, LaMDA “asked” Blake Lemoine to hire it an attorney.
Brin went on: “It won’t do any good,” he said, “for experts to say that there’s nothing under the hood, that this is not actually AI, that this is emulation, because the response will be, ‘But that’s exactly what the slave owners would say.’”
In the wake of Lemoine’s assertions, however, sighing experts heaved themselves back onto their platforms to say exactly that. “Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent,” wrote cognitive scientist Gary Marcus in a post titled “Nonsense on Stilts.” “All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all.” At the end of his post (which was retweeted by the likes of Kara Swisher), Marcus wrote, “The sooner we can take the whole thing with a grain of salt, and realize that there is nothing to see here whatsoever, the better. Enjoy the rest of your weekend, and don’t fret about this for another minute.”
When I first read Marcus’ post, I found myself strangely annoyed by the cheerfully patronizing tone of his conclusion. Marcus framed “moving along” from the question of LaMDA’s sentience mainly as a recommendation for protecting one’s own intellectual dignity against “bullshit.” For others, the same ask – for the public to put conversations about AI sentience to bed – comes as an exasperated plea to consider human-centered issues in AI instead. In a Wired article published shortly after the Blake Lemoine drama came to a head, senior writer Khari Johnson reiterated claims by many – including Timnit Gebru – that “sentient robot” panics and conversations about “AI rights” distract from more pressing issues in AI ethics, like the aforementioned bias present in language model outputs, or the life-shattering consequences of wrongful arrests facilitated by AI facial recognition technology.

The same standpoint is expressed in “Robot Rights? Let’s Talk About Human Welfare Instead” (hyperlinked in the Wired article), a paper penned by cognitive scientist Abeba Birhane and design researcher Jelle van Dijk which argues – well, you can figure it out. Firmly asserting that the “robot empathy crisis” has no footing, Birhane and van Dijk write, “It makes no sense to conceive of robots as slaves, since ‘slave’ falls in the category of being that robots aren’t. Human beings are such beings… If our own reasoning is by contrast accused of being ‘anthropocentric’ then yes, this is exactly the point: robots are not humans, and our concern is with the welfare of human beings.”
It’s one thing to say that the hype or fear surrounding sentient AI overshadows human concerns – it’s another thing to say that the way humans engage with “nearly human” AI has nothing to do with human welfare. Human relations with seemingly sentient AI – relations that Lemoine claimed were non-reciprocal and non-consensual – shouldn’t be monitored because such AI's “rights” require protection (sorry, LaMDA). Rather, relationships between humans and “nearly sentient” AI might be studied as a way both to interrogate the accepted epistemologies and ideologies that inform and dominate these relationships and to keep an eye on the potential consequences of extrapolating such relationships back onto human actors.
One of the most influential essays I’ve ever read on the topic of AI (assigned to me at Oxford by the brilliant Cindy Ma – shoutout!) was “Making Kin with the Machines” by Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite (if you click on anything in this rather lengthy ‘stack, click this). Bringing together several North American and Oceanic Indigenous epistemologies that foreground “relationality,” Lewis, Arista, Pechawis, and Kite argue that these traditions are far better suited than Western epistemology to establishing respectful relationships with non-humans, specifically non-human AI.
“Humans are surrounded by objects that are not understood to be intelligent or even alive, and seen as unworthy of relationships,” write the authors. “In order to create relations with any non-human entity, not just entities which are human-like, the first steps are to acknowledge, understand, and know that non-humans are beings in the first place. Lakota ontologies already include forms of being which are outside of humanity.”
If Birhane and van Dijk reinforced a binary between human and machine to designate which class deserved rights or attention, Lewis, Arista, Pechawis, and Kite push back on that binary to emphasize that the very distinction between the two, a byproduct of Western philosophical traditions, has had harmful effects.
“Slavery, the backbone of colonial capitalist power and the Western accumulation of wealth, is the end logic of an ontology which considers any non-human entity unworthy of relation,” they write. “No entity can escape enslavement under an ontology which can enslave even a single object.”
By relying on an epistemology that separates “enslavement” from “dehumanization,” the authors of “Making Kin with the Machines” build a case for treating AI with respect that does not depend on “proving” AI’s humanity or sentience. Such a position allows conversations about responsible and ethical engagement with AI to progress past the “AI isn’t human, it’s just pattern recognition” cop-out that many critics used both to counter Lemoine and to suppress aspects of the robot empathy crisis that might be worth our attention.
Replika is a good example of a technology that demands such attention. Launched in 2017, Replika is an app that allows users to create customized chatbot companions whose conversational prowess comes from the same kind of natural language processing that produced LaMDA. In June 2020, The New York Times ran an article about users who relied on Replika for companionship during COVID-19 quarantine; in January, Futurism ran an article revealing a trend on Reddit in which Replika users – reportedly often men who treated the AI companions as girlfriends – were verbally abusing their virtual companions. As of a few weeks ago, an update to the app even added an animation for Replika avatars that resembles staggering from a slap or a punch, sparking debate on the Reddit forums.
As the Futurism article uncomfortably reminds us, “Replika chatbots can’t actually experience suffering — they might seem empathetic at times, but in the end they’re nothing more than data and clever algorithms.” If we accept the cop-out as the conversation’s endpoint, there’s nothing left to say about the human users who can clearly inflict harm even if these systems can’t feel it. Birhane and van Dijk acknowledged in their paper that users may end up being the ones harmed by such a dynamic: “The only possible victim [of AI abuse] is the person who turned themselves into a slave owner, or, perhaps, society at large: if treating robots as slaves becomes commonplace, we may be engaging in social practices that we think are not making us better humans,” they write. A UNESCO report on the adverse effects of feminized voice assistants (Siri, Alexa, etc.) that were programmed to coyly deflect sexual harassment offers a good example of how society at large might suffer when encoded bias intersects with anthropomorphized assistants, even if the assistants themselves don’t “suffer.”
Blake Lemoine was characterized as a victim of his own naivety – most accounts of the LaMDA affair portrayed Lemoine as a stubborn sucker, scammed into losing his job over hot air. But what if we instead view Lemoine as the victim of an industry that develops AI tools designed to approximate humans and then ridicules the people who treat those tools as human? Katherine Alejandra Cross explains in a recent Wired article that empathetic beliefs like Lemoine’s are highly lucrative for companies that can profit from the priceless data “suckers” provide under the pretense of “AI friendship.” The real dilemma, writes Cross, is “the need to make kin of our machines weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism.”
A related dilemma, I’d argue, and perhaps a more interesting one, is the role surveillance might eventually play in disciplining or reporting discriminatory or violent language towards chatbots and other AI systems. In a 2016 article in the Harvard Business Review titled “Why You Shouldn’t Swear at Siri,” Michael Schrage makes the case that chatbot abuse is bad business practice and potentially risky for employees: “Don’t think for a moment there won’t be immaculate digital records of enterprise misbehaviors… Networked smartphones could easily track bad acts. In fact, in many (if not most) machine learning systems, insult and abuse could easily become part of the real-world training. Perhaps they can even be programmed or trained to complain about their mistreatment.”
If AI systems are “just data,” however, on what basis would such a complaint even rest? Kamil Mamak’s “Should Violence Against Robots Be Banned?” and Kate Darling’s “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects” both craft arguments for potential legal prohibitions on physical violence towards embodied robots (Mamak’s paper responds to the Wired hot take “Of Course Citizens Should Be Allowed to Kick Robots”). But neither paper really takes up the question of how – or if – hate speech or verbal abuse towards AI systems should be regulated. It’s not too far-fetched a scenario: chat logs reveal that an employee of some company or other has been hurling racial or homophobic slurs at an AI chatbot like LaMDA. If LaMDA can’t feel harmed by the language, could the employee be disciplined for using it, and should those private insults be shared – as Schrage supposes they could be – with supervisors or bosses? Would such a disclosure constitute a breach of privacy?
I don’t have anything close to an answer, and these examples and hypotheticals aren’t the most pressing issues in technology, not by a mile. But they interest me because they demonstrate that the implications of “the sentience problem” are far thornier than whether or not AI has a soul, and affect scenarios far more commonplace than AI world domination. “The robot empathy crisis” shouldn’t have to be a war between the duped and the enlightened – it should be an invitation to question what kind of empathy the technology industry demands from us, when, and why.