ChatGPT may be polite, but it’s not cooperating with you

Pixelated artwork of a gray smiley-face mask being lifted to reveal a sinister face underneath. Illustration: Mathieu Labrecque/The Guardian

After publishing my third book in early April, I kept encountering headlines that made me feel like the protagonist of some Black Mirror episode. “Vauhini Vara consulted ChatGPT to help craft her new book ‘Searches,’” one of them read. “To tell her own story, this acclaimed novelist turned to ChatGPT,” said another. “Vauhini Vara examines selfhood with assistance from ChatGPT,” went a third.

The publications describing Searches this way were reputable and fact-based. But their descriptions of my book – and of ChatGPT’s role in it – didn’t match my own reading. It was true that I had put my ChatGPT conversations in the book, but my goal had been critique, not collaboration. In interviews and public events, I had repeatedly cautioned against using large language models such as the ones behind ChatGPT for help with self-expression. Had these headline writers misunderstood what I’d written? Had I?

In the book, I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It’s a dynamic that makes us complicit in big tech’s accumulation of wealth and power: we’re both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews and, yes, my ChatGPT dialogues.

The polite politics of AI

The book opens with epigraphs from Audre Lorde and Ngũgĩ wa Thiong’o evoking the political power of language, followed by the beginning of a conversation in which I ask ChatGPT to respond to my writing. The juxtaposition is deliberate: I planned to get its feedback on a series of chapters I’d written to see how the exercise would reveal the politics of both my language use and ChatGPT’s.

My tone was polite, even timid: “I’m nervous,” I claimed. OpenAI, the company behind ChatGPT, tells us its product is built to be good at following instructions, and some research suggests that ChatGPT is most obedient when we act nice to it. I couched my own requests in good manners. When it complimented me, I sweetly thanked it; when I pointed out its factual errors, I kept any judgment out of my tone.

ChatGPT was likewise polite by design. People often describe chatbots’ textual output as “bland” or “generic” – the linguistic equivalent of a beige office building. OpenAI’s products are built to “sound like a colleague”, as OpenAI puts it, using language that, coming from a person, would sound “polite”, “empathetic”, “kind”, “rationally optimistic” and “engaging”, among other qualities. OpenAI describes these strategies as helping its products seem “professional” and “approachable”. This appears to be bound up with making us feel safe: “ChatGPT’s default personality deeply affects the way you experience and trust it,” OpenAI wrote in a recent blogpost about rolling back an update that had made ChatGPT sound creepily sycophantic.

Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, the problems persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft’s Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren’t flukes. Research suggests that both tendencies are widespread.

In my own ChatGPT dialogues, I wanted to enact how the product’s veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech – including editing my description of OpenAI’s CEO, Sam Altman, to call him “a visionary and a pragmatist”. I’m not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn’t attempt to influence users’ thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data – though I suspect my arguably leading questions played a role too.

When I queried ChatGPT about its rhetoric, it responded: “The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.”

Still, by the end of the dialogue, ChatGPT was proposing an ending to my book in which Altman tells me: “AI can give us tools to explore our humanity in ways we never imagined. It’s up to us to use them wisely.” Altman never said this to me, though it tracks with a common talking point emphasizing our responsibilities over AI products’ shortcomings.

I felt my point had been made: ChatGPT’s epilogue was both false and biased. I gracefully exited the chat. I had – I thought – won.

I thought I was critiquing the machine. Headlines described me as working with it

Then came the headlines (and, in some cases, articles or reviews referring to my use of ChatGPT as an aid in self-expression). People were also asking about my so-called collaboration with ChatGPT in interviews and at public appearances. Each time, I rejected the premise, referring to the Cambridge Dictionary definition of a collaboration: “the situation of two or more people working together to create or achieve the same thing.” No matter how human-like its rhetoric seemed, ChatGPT was not a person – it was incapable of either working with me or sharing my goals.

OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that “benefits all of humanity”. But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people to use products such as ChatGPT even more than they already do – a goal that is easier to accomplish if people see those products as trustworthy collaborators. Last year, Altman envisioned AI behaving as a “super-competent colleague that knows absolutely everything about my whole life”. In a Ted interview this April, he suggested this could even function at the societal level: “I think AI can help us be wiser and make better collective governance decisions than we could before.” By this month, he was testifying at a US Senate hearing about the hypothetical benefits of having “an agent in your pocket fully integrated with the United States government”.

Reading the headlines that seemed to echo Altman, my first instinct was to blame the headline writers’ thirst for something sexy to tantalize readers (or, in any case, the algorithms that increasingly determine what readers see). My second instinct was to blame the companies behind the algorithms, including the AI companies whose chatbots are trained on published material. When I asked ChatGPT about well-known recent books that are “AI collaborations”, it named mine, citing a few of the reviews whose headlines had bothered me.

I went back to my book to see if maybe I’d inadvertently referred to collaboration myself. At first it seemed like I had. I found 30 instances of words such as “collaboration” and “collaborating”. Of those, though, 25 came from ChatGPT, in the interstitial dialogues, often describing the relationship between people and AI products. In the remaining five, I referred to AI “collaboration” only when quoting someone else or being ironic: I asked, for example, about the fate ChatGPT expected for “writers who refuse to collaborate with AI”.

Was I an accomplice to AI companies?

But did it matter that I mostly hadn’t been the one using the term? It occurred to me that those talking about my ChatGPT “collaboration” might have gotten the idea from my book even if I hadn’t put it there. What had made me so sure that the only effect of printing ChatGPT’s rhetoric would be to reveal its insidiousness? Why hadn’t I imagined that at least some readers might be convinced by ChatGPT’s position? Maybe my book had been more of a collaboration than I had realized – not because an AI product had helped me express myself, but because I had helped the companies behind these products with their own goals. My book is about how those in power exploit our language to their benefit – and about our complicity in this. Now, it seemed, the public life of my book was itself caught up in this dynamic. It was a chilling experience, but I should have anticipated it: of course there was no reason my book should be exempt from an exploitation that has taken over the globe.

And yet, my book was also about the way in which we can – and do – use language to serve our own purposes, independent from, and indeed in opposition to, the goals of the powerful. While ChatGPT proposed that I close with a quote from Altman, I instead picked one from Ursula K Le Guin: “We live in capitalism. Its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.” I wondered aloud where we might go from here: how might we get our governments to meaningfully rein in big tech wealth and power? How might we fund and build technologies so that they serve our needs and desires without being bound up in exploitation?

I’d imagined that my rhetorical power struggle against big tech had begun and ended within the pages of my book. It clearly hadn’t. If the headlines I read represented the actual end of the struggle, it would mean I had lost. And yet, I soon also started hearing from readers who said the book had made them feel complicit in big tech’s rise and moved to act in response to this feeling. Several had canceled their Amazon Prime subscriptions; one stopped soliciting intimate personal advice from ChatGPT. The struggle is ongoing. Collaboration will be required – among human beings.
