How AI is changing writing

In 1950, computer scientist Alan Turing famously proposed what we now call the Turing test of artificial intelligence, which says that a machine might be “thinking” if it can pass as human in a typewritten chat. Even if you’re familiar with this story, you might not know that Turing imagined starting his test with a literary request: “Please write me a sonnet on the subject of the Forth Bridge.” He predicted an evasive but very human response from some future computer: “Count me out on this one. I never could write poetry.” That’s just what my dad would say.

Last week, I sent the same request to ChatGPT, the latest artificial-intelligence chatbot from OpenAI. “Upon the Firth of Forth, a bridge doth stand,” it began. In less than a minute, the program had produced a complete rhyming Shakespearean sonnet. With the exception of offensive or controversial topics that its content filters block, ChatGPT will compose original verse on any theme: lost love, lost socks, jobs lost to automation. Tools like ChatGPT seem poised to change the world of poetry — and so much else — but poets also have a lot to teach us about artificial intelligence. If algorithms are getting good at writing poetry, it’s partially because poetry was always an algorithmic business.

Even the most rebellious poets follow more rules than they might like to admit. A good poet understands grammatical norms and when to break them. Some poems rhyme in a pattern, some irregularly and some not at all. Poetry’s subtler rules seem hard to program, but without some basic norms about what a poem is, we could never recognize or write one. When schoolchildren are taught to imitate the structure of a haiku or the short-long thrum of iambic pentameter, they are effectively learning to follow algorithmic constraints. Should it surprise us that computers can do so, too?

But considering how ChatGPT works, its ability to follow the rules for sonnets seems a little more impressive. No one taught it these rules. An earlier technology, called symbolic AI, involved programming computers with axioms for specific subjects, such as molecular biology or architecture. These systems worked well within narrow areas but lacked more general adaptability. ChatGPT is based on a newer kind of AI known as a large language model (LLM). Simplified to the extreme, LLMs analyze enormous amounts of human writing and learn to predict what the next word in a string of text should be, based on context. This method of word-guessing enables the AI to write coherent college admission essays, rough treatments for film scripts and even sonnets about bridges in Scotland, none of which gets programmed directly.
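To make that word-guessing a little more concrete, here is a toy sketch in Python. It is not OpenAI’s code, and it is vastly simpler than a real large language model: it just counts which words follow which in a short made-up sample and predicts the most common continuation. The sample text and function name are purely illustrative.

```python
# Toy next-word prediction: a bigram model that, given one word of
# context, guesses the most likely following word from counts in a
# small sample text. Real LLMs use far longer contexts and billions
# of learned parameters, but the underlying task is the same.
from collections import Counter, defaultdict

sample_text = (
    "upon the firth of forth a bridge doth stand "
    "upon the firth the waters gleam and the bridge doth span the firth"
)

# Count which words follow each word in the sample.
follows = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the sample."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))     # -> "firth", the most common continuation
print(predict_next("bridge"))  # -> "doth"
```

A model this crude can only parrot its tiny sample; the surprise of systems like ChatGPT is what the same basic idea can do when scaled up to a large fraction of the written internet.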

Who is behind the writing?

One frequent criticism of LLMs is that they do not understand what they write; they just do a great job of guessing the next word. The results sound plausible but often miss the mark. For example, I asked ChatGPT to explain this joke: “What’s the best thing about Switzerland? I don’t know, but the flag is a big plus.” It responded that the “reference to the flag” is funny because it “contradicts the expectation that the answer would be something related to the country’s positive attributes.” It missed the pun on “plus,” which is the core of the joke. Some scholars claim that LLMs develop knowledge about the world, but most experts say otherwise — that while these technologies write coherently, there’s nobody home.

But the same is true of language itself. As modernist poet William Carlos Williams tells us, “A poem is a small (or large) machine made of words.” When an impassioned verse by Keats or Dickinson makes us feel like the poet speaks directly to us, we are experiencing the effects of a technology called language. Poems are made of paper and ink — or, these days, electricity and light. There is no one “inside” a Dickinson poem any more than one by ChatGPT.

Of course, every Dickinson poem reflects her intention to create meaning. When ChatGPT puts words together, it does not intend anything. Some argue that writings by LLMs therefore have no meaning, only the appearance of it. If I see a cloud in the sky that looks like a giraffe, I recognize it as an accidental resemblance. In the same way, this argument goes, we should regard the writings of ChatGPT as merely resembling real language, meaningless and random as cloud shapes.

Experimental writers have given us reasons to doubt this theory since early last century, when Tristan Tzara and others sought to eliminate conscious decisions from their work. Their techniques now seem like rudimentary versions of the principles behind LLMs. Tzara drew words out of a hat to compose a poem. In the 1950s, William S. Burroughs popularized the “cut-up” method, which involves cutting words out of newspapers and reassembling them into literature. Around the same time, linguists developed the “bag-of-words” approach to modeling a text by counting how many times each word appears. LLMs do far more complex analysis, but randomization still helps ChatGPT to avoid predictable outputs, just as it helped Burroughs.
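For the curious, here is a minimal Python sketch of those two older ideas, using an invented sample line: a bag-of-words count that reduces a text to word frequencies, and a crude cut-up that shuffles the same words into a new order, as if drawn from Tzara’s hat.

```python
# Two rudimentary ancestors of today's text models, sketched on one line
# of sample text (illustrative only): a bag-of-words frequency count and
# a random "cut-up" rearrangement of the same words.
import random
from collections import Counter

line = "a poem is a small or large machine made of words"
words = line.split()

# Bag-of-words: model the line purely by how often each word appears.
print(Counter(words))          # Counter({'a': 2, 'poem': 1, ...})

# Cut-up: reassemble the same words at random, as if drawn from a hat.
random.shuffle(words)
print(" ".join(words))
```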

Automation didn’t ruin chess

There’s an old joke among AI researchers: “Artificial intelligence” is whatever computers can’t do yet. The classic example is chess. The dream of automating chess reaches back to 1770, when a robotic player called the Mechanical Turk dazzled the courts of Europe, thanks to a human chess master hidden inside its cabinet. In 1948, Turing wrote a chess program, but it was too complex to run on the hardware of the day. Finally, in 1997, a supercomputer defeated world chess champion Garry Kasparov. Since then, computers have become so much better than humans that today’s world champion, Magnus Carlsen, considers it pointless and depressing to play them. Maybe it seems less magical for a computer to win at chess than it once did, but as AI poetry continues to improve, we should remember that chess has remained enjoyable for millions of humans.

LLMs represent a new phase in computer-assisted writing, but the next steps for AI poetry remain unclear. Like Turing, the internet polymath Gwern Branwen uses poetry as a test, asking AI to imitate Shelley, Yeats and others. Here is ersatz Whitman: “O lands! O lands! to be cruise-faring, to be sealanding! / To go on visiting Niagara, to go on, to go on!” As the AI improves, so do these imitations. Meanwhile, futurist poet Sasha Stiles collaborates with LLMs to herald a new posthuman era. “In ten more years,” she writes, “we’ll know how to implant IQ, / insert whole languages. I’ll be a superpoet then, // microchipped to turbo-read neural odes, / history of sonnets and aubades brainlaced.” Though visually stunning, her work sometimes overlooks the political, environmental and practical downsides of these technologies. The future of AI poetry has not yet arrived, but the LLMs tell us that it soon will.

Among the best recent AI poetry is Lillian-Yvonne Bertram’s “Travesty Generator” (2019), which borrows its title from a poem-generating program that the critic Hugh Kenner co-wrote in the 1980s. In Bertram’s hands, “travesty” also refers to the violent injustices against Black people to which these poems respond. Work like Bertram’s is especially urgent as researchers study how AI risks amplifying the racism and other hate already prevalent online.

When I showed my friends the sonnet by ChatGPT, they called it “soulless and barren.” Despite following all the rules for sonnets, the poem is clichéd and predictable. But is the average sonnet by a human any better? Turing imagined asking a computer for poetry to see if it could think like a person. If we now expect computers to write not just poems but good poems, then we have set a much higher bar.
