Dear readers,

“Would you like to humanize your text?” That was a solution proposed to the thriller author Andrea Bartz after she had put some of her writing into an A.I. checking tool. The program, Ace, inaccurately concluded that her work was 82 percent A.I.-generated. “We’re reaching this era of distrust, with no easy way to prove the veracity of your own writing,” Bartz said in an interview with my colleague Alexandra Alter. (Bartz was a lead plaintiff in the class-action lawsuit brought by authors against Anthropic, which agreed to a $1.5 billion settlement.)

However you want to characterize the collision of artificial intelligence and a publishing industry that seems ill-prepared to grapple with it, it’s going to be messy. I wrote here a few weeks ago about “Shy Girl,” a horror novel whose U.S. publication was canceled after evidence emerged suggesting it had been at least partly produced using artificial intelligence. It had already been published in the United Kingdom, and readers on Goodreads and Reddit had complained for months about language in the book they felt had obviously come from a chatbot.

Dozens of you emailed to share your dismay about A.I.’s encroaching on literature. Phrases like “lacking in integrity,” “abhorrent,” “pointless” and “disturbing” — to pluck a few reactions from your correspondence — are representative. Yet I also fear that the horse is out of the barn: A.I. is here, and it doesn’t seem as if it will be leaving any time soon. What is a conscientious reader or writer to do?

As always, I’d love to hear your thoughts, whether on this subject or about what you’re reading. You can reach me and my genuine human colleagues by emailing books@nytimes.com.