By Honora Quinn ʼ27
Staff Writer
Content warning: This article briefly mentions child abuse.
Artificial intelligence has been lurking in nearly every conversation in the literary world, from publishing to the classroom. It is therefore not surprising that an institution like National Novel Writing Month, or NaNoWriMo, would issue an official statement for participants on the use of AI in its annual novel-writing challenge. However, instead of reassuring writers, the release of the document brought shock and outrage to the writing community. Within days, the document was taken down and replaced.
NaNoWriMo began as a small challenge in 1999 and has grown to hundreds of thousands of participants in the 2020s, each aiming to write 50,000 words, or roughly 100 pages of text, during the month of November. Books like Marissa Meyer’s Cinder and Erin Morgenstern’s The Night Circus have been success stories of the program, showing that anyone can participate and potentially land a publishing deal, hitting The New York Times bestseller list in a few thousand keystrokes.
In the original, now-stricken policy, NaNoWriMo claimed that banning AI in writing spaces is both “classist” and “ableist,” and that it would not outright condemn its use. The organization called a ban classist because “Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one.” The statement also described a ban as ableist because “Not all brains have the same abilities and not all writers function at the same level of education or proficiency in the language in which they are writing,” and some writers may need AI as a technical crutch.
The policy did not go over well with many writers, and authors such as Rebecca Thorne took to social media to share their criticisms. The controversy also resurfaced earlier discourse about the organization and its approach to forum moderation, which originated after a volunteer moderator was accused of inappropriate interactions with minors.
These compounding events have led some members of the writing community to boycott NaNoWriMo this November and instead create their own challenges, such as the Pathfinders Writing Collective, for their communities to participate in.
Within days, the original policy was stricken from the official website and replaced with the following statement: “NaNoWriMo neither explicitly supports nor condemns any approach to writing, including the use of tools that leverage AI … the fact that AI is a large, complex technology category, which encompasses both non-generative and generative AI, applied in a range of ways to a range of uses, contributes to our belief that AI is simply too big and too varied to categorically support or condemn.”
This was followed by an excerpt from the company’s mission statement, which emphasizes “encouragement to help people use their voices.” The company apparently does not see the irony in that notion: by refusing to criticize the use of artificial intelligence, it arguably embraces a technology that takes away and replaces the very voices it claims to champion.
Controversy surrounding the use of AI is not confined to the big-time world of publishing; it has endured as an undercurrent of tension on almost every college campus and in almost every classroom. Mount Holyoke College is no exception. Sept. 4 marked the first day of classes for the 2024-25 school year, meaning that professors have begun reviewing syllabi and outlining classroom policies, including the institution’s AI policy. The College previously maintained a standard statement prohibiting the use of generative AI on its website’s “Student Accountability” page, and professors across the board often include some variation of that sentiment in their respective syllabi. Students have criticized the statement, including in a Mount Holyoke News opinion piece, arguing that the institution’s stated values of “innovative, adventurous education” contradict this AI edict.
There is no quick and easy solution to the current predicament in either education or the publishing industry. AI-generated books are uploaded every day to online marketplaces like Amazon, often posing as knockoffs of traditionally published titles. Literary Hub documented this phenomenon last May in an article chronicling the influx of AI-written Kathleen Hanna biographies after Hanna’s own memoir was released.
Additionally, a report from the Stanford Graduate School of Education noted that about 60-70% of students admitted to cheating even before the rise of ChatGPT. Education Week also reports that Turnitin flagged about 1 in every 10 submitted assignments for AI use, though only 3 in 100 relied on AI to generate the majority of the text. Technology and art will likely continue to merge, as they have throughout history with the advent of each new invention. The fact remains that art is a human construct, as is everything, and humans will continue to crave and create that exposed, raw humanity.