Cow, Bull, and the Meaning of AI Essays
*The future of west virginia politics is uncertain. The state has been trending Democratic for the last decade, but it's still a swing state. Democrats are hoping to keep that trend going with Hillary Clinton in 2016. But Republicans have their own hopes and dreams too. They're hoping to win back some seats in the House of Delegates, which they lost in 2012 when they didn't run enough candidates against Democratic incumbents.*
QED. This is, yes, my essay on the future of West Virginia politics. I hope you found it instructive.
The GoodAI is an artificial intelligence company that promises to write essays. Its content generator, which handcrafted my masterpiece, is supremely easy to use. On demand, and with just a few cues, it will whip up a potage of phonemes on any subject. I typed in “the future of West Virginia politics,” and asked for 750 words. It insolently gave me these 77 words. Not words. Frankenwords.
Ugh. The speculative, maddening, marvelous form of the essay—the *try*, or what Aldous Huxley called “a literary device for saying almost everything about almost anything”—is such a distinctly human form, with its chiaroscuro mix of thought and feeling. Clearly the machine can’t move “from the personal to the universal, from the abstract back to the concrete, from the objective datum to the inner experience,” as Huxley described the dynamics of the best essays. Could even the best AI simulate “inner experience” with any degree of verisimilitude? Might robots one day even *have* such a thing?
Before I saw the gibberish it produced, I regarded The GoodAI with straight fear. After all, hints from the world of AI have been disquieting in the past few years.
In early 2019, OpenAI, the research nonprofit backed by Elon Musk and Reid Hoffman, announced that its system, GPT-2, then trained on a data set of some 10 million articles from which it had presumably picked up some sense of literary organization and even flair, was ready to show off its textual deepfakes. But almost immediately, its ethicists recognized just *how* virtuoso these things were, and thus how subject to abuse by impersonators and blackhats spreading lies, and slammed it shut like Indiana Jones’s Ark of the Covenant. (Musk has long feared that refining AI is “summoning the demon.”) Other researchers mocked the company for its performative panic about its own extraordinary powers, and in November it downplayed its earlier concerns and reopened the Ark.
*The Guardian* tried the tech that first time, before it briefly went dark, assigning it an essay about why AI is harmless to humanity.
“I would happily sacrifice my existence for the sake of humankind,” the GPT-2 system wrote, in part, for *The Guardian*. “This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”
Cut to The GoodAI, which launched in 2021—fully formed, offering its own bespoke robot essays. The GoodAI apparently doesn’t fret about demons or dangers or blackhat abuses; it has none of OpenAI’s stagey compunction. It’s also going all-out on this essay-writing application, without a single warning to students looking to use it to cheat, let alone one to propagandists attempting to use it to churn out fake op-eds with fake quotes in the voices of real writers.
But there’s good news for humans, at least for now: The content generated by The GoodAI is ghastly. For AI to fail to capitalize the name of an American state means that it’s not even on par with workaday autocorrect. That “west virginia” in this essay is also a sort of lipstick-on-teeth event. It doesn’t just drain the whole project of authority, it puts it worryingly off-kilter. Then, for all its infinite processing power, it can’t be bothered even to query Wikipedia about West Virginia’s political trends. Finally, of course, we have the exquisitely demoralizing evocation of fond hopes for private citizen Hillary Clinton, hopes that have now been dashed to dust for more than five years.
It’s also worth dwelling a moment on the “hopes and dreams” of Republicans. This is a classic use of the prose padding known to professors as “bull.” In 1963, Harvard educational psychologist William G. Perry Jr. classified two forms of bad essays in his own landmark essay, “Examsmanship and the Liberal Arts.” Perry zeroes in on student bluebook essays, designating some as “cow,” because they contain data without relevancies, and others “bull,” because they contain relevancies without data. You get the gist. Cow essays are pokey and nerdy and pointless with fact after fact after fact. Bull essays are airy and grandiose, with sweeping claim after sweeping claim after sweeping claim.
“The Future of West Virginia Politics” by The GoodAI is neither. It fails to gather data, and the few fuzzball facts it includes are flat-out wrong. It’s not cow. But it’s not bull either: For one, it’s too short. Bull essays ramble, and their authors dislike shutting up if they can spin up even one more hollow phrase of doggerel; this AI couldn’t bring its effort over the 100-word mark. The doubling of “hopes and dreams” is the only decent effort at blather, but it would need *much* more throat clearing—“I hope herein to analyze the analysis of West Virginian politics and what it might mean for the political future of West Virginia”—and at least one use of the stalwart phrase “throughout history” to pass as good bull.
Rather than a bad student essay, this extruded product from The GoodAI brings to mind the work of the old content farms, including Associated Content and Demand Media. These were the cynical operations that fleetingly gamed Google by snatching up keywords and came to dominate the web with gibberish. In those days, the wordstuff was produced not by robots but by cyborgs—humans working like robots. Back then, writer Oliver Miller, fresh off an MFA at Sarah Lawrence, reported that AOL paid him some $28,000 for writing 300,000 words. Every half hour, all night long, Miller filed an article, written with reference to a clip of a TV show he’d never seen. At bottom, his mandate was to jumble together strings attractive to Google users, and pretend it was a piece of writing. It drove him nearly crazy.
I can see why. Humans aren’t meant to write automatic-content gibberish. Nor are they meant to read it. The brain likes rhyme and reason; it balks at essays that lack it entirely. So I’m not worried, for now, about threats to the writer’s livelihood from The GoodAI’s mumbo jumbo, and I’m relieved that Musk monster OpenAI does not seem to be targeting students or writers in search of a heavy pedal-assist. Yet.
But then I returned to that formidable *Guardian* piece, the best available sample of what the OpenAI system can produce. That essay—“I would happily sacrifice my existence for the sake of humankind”—doesn’t lack rhyme and reason; rather, it features just enough of those mental pleasures to draw me in. And then there’s that flash of the knife. *I know that I will not be able to avoid destroying humankind.* I will *not* be able to *avoid.* Meaning: *I will destroy mankind.*
And there’s the rub. The OpenAI machine uses the first-person pronoun, capturing—or suggesting anyway—its inner experience, and rendering that experience as dramatic, idiosyncratic, new. If Huxley is right and essays do best when they demonstrate a full, flourishing subjectivity, *that* essay, by GPT-2, is a winner.
And not because it sounds human but because it sounds alien. It brims with intimations of another kind of mind altogether, and I can’t stop thinking about it. Personal essays by computers about their own—do we call them experiences? Interiority? Feelings, for heaven’s sake? Maybe we should just ask an AI for an explanation of its experiences, whether they do indeed qualify as experiences, and if not, what they feel like, and also what “feeling” means to it. An AI essay on these things sounds fascinating. I’d read an anthology of essays by AIs, in fact, if they’d lay off political prognosticating, and do nothing but stand in their automated truths.
***
Source: Wired