
Preserve the Humanity of Writing: More Than Words by John Warner


11 November 2025

Please forgive that this review is a bit tattered about the edges; I have fallen out of my writing practice, and this is my first attempt to clamber back in.


Today’s focus: More Than Words: How to Think About Writing in the Age of AI by John Warner.

 

John Warner is a fellow composition instructor at the College of Charleston in SC. He has published a handful of books (nonfiction and fiction) and writes a weekly book review column for The Chicago Tribune as “the Biblioracle.” All of this comes from the back jacket of his book, More Than Words: How to Think About Writing in the Age of AI.

 

It is not generally my modus operandi to provide long-winded block quotes from authors, but I’ll make an exception here so that readers know precisely what to expect before purchasing and reading More Than Words. Per the final two pages of his introduction, here is what Warner wishes to accomplish in his book, chapter by chapter:

 

            “What ChatGPT and other large language models are doing is not writing and shouldn’t be considered as such.

            Writing is thinking. Writing involves both the expression and exploration of an idea, meaning that even as we’re trying to capture the idea on the page, the idea may change based on our attempts to capture it. Removing thinking from writing renders an act not writing.

            Writing is also feeling, a way for us to be invested and involved not only in our own lives but the lives of others and the world around us.

            Reading and writing are inextricable, and outsourcing our reading to AI is essentially a choice to give up on being human.

            If ChatGPT can produce an acceptable example of something, that thing is not worth doing by humans and quite probably isn’t worth doing at all.

            Deep down, I believe that ChatGPT by itself cannot kill anything worth preserving. My concern is that out of convenience, or expedience, or through carelessness, we may allow these meaningful things to be lost or reduced to the province of a select few rather than being accessible to all.

            What I’d like to do for the remainder of our time together [this book] is use the capabilities of ChatGPT (and its ilk) as a lens to examine how we work with words as a way to uncover those things that can and must be preserved” (11-12).


I am finding it difficult to begin my review of this book for a variety of reasons, not least of which is that I consciously recognize Warner’s perspective aligns with my own, and so in reading this book, and admitting publicly to reading it, I fear I have stepped into an echo chamber and am no longer entertaining a variety of stances on this topic. And truly, for every chapter of this book there was at least one moment when I laid the book on my lap, threw a fist in the air, and shouted, “Hell yeah! Preserve our humanity!” But perhaps awareness is enough. Perhaps, based on this awareness, I may yet convince myself to pick up some book that extols the advantages of AI. Although…I doubt it. My heart rebels because I am angry, it seems.

 

Despite my bias, there were moments in the book where the topic at hand felt forced. Warner spends a chapter on the profitability and business of writing, writing as a profession/livelihood, and the world of publication, including the myriad avenues to publication. Something in this chapter grated on my sensibilities because it felt deeply self-serving. Lofty. Egoic. Maybe whiny? It felt off topic from LLMs, yet I was magnetically drawn to the chapter! It’s almost as though a writer will use any excuse (in this case, ChatGPT and LLMs) to opine on the difficulty of making a living from writing. And my frustration with this is that I agree. I want to whine too. I want something to blame.

 

Thus, I am conflicted. I don’t want to like the book too much, yet I’m frustrated that there are parts I disliked. Make this make sense.

 

The truth is, Warner puts onto the page, in very human, often clunky and flawed language, many of my own thoughts on this topic. I have a friend, for example…let’s call him Jim. Actually, his name is Jim. Jim’s work is exceptionally technical and involves, like, math or something. Jim uses LLMs and AI to develop algorithms for efficiently transporting oil to gas stations nationwide, or something—this is my very laywoman’s representation of what he does. Jim is bright and he is wise: he perceives AI as a tool to improve not only his work outputs but also his daily life. And Jim seems to believe that everyone should have access to these tools so that they may use them as he does, as an aid to improve human efficiency and thus quality of life.

 

One day, Jim read one of my short stories, and afterwards said to me, “If I could teach an algorithm the patterns and style and voice of your writing, then tell it to write your plot/story for you, wouldn’t that be helpful? It’d save you so much time!”

 

At that moment, I was speechless. I think I fumble-mumbled something about how I “choose language very specifically to suit the needs of the narrative moment.” But I wish I could’ve opened my mouth and had Warner’s words fall out.

 

To let GenAI write for me would be to rob me of one of my greatest joys. While the finished product (story, book, essay) provides satisfaction—even more so if it becomes published—the process of organizing whatever muddled thing occupies my mind gives me purpose. Using an LLM would rob my writing of its feeling, its humanity, and its capacity to help me make sense of the most complex aspects of my life: relationships, traumas, shadow self, emotions—all the things that make me human and an active participant in human interactions.

 

I did not have those words available to me the day Jim crushed my fledgling writer’s soul with that suggestion. Thanks to Warner, these words/ideas are available to me now, for the future.

 

Another thing Warner addresses that Jim overlooks is that not everyone is educated enough to use GenAI, including LLMs like ChatGPT, effectively. While Jim may be using GenAI to accomplish tasks that would otherwise waste the valuable resources of his energy and time, my students are using it to complete their coursework—and when they do so, they short-circuit the learning process. Jim is over 40. Most of my students are Gen Z-ers under 25. They hardly knew a world before LLMs; he did. He had the opportunity to develop his critical thinking skills without free, flashy, broadly available shortcuts to tempt him. In this sense, perhaps the largest contributors to the problem are corporations like Meta, Google, and Apple, which force-feed us more and more AI until it becomes ubiquitous. It is because of this that my students think there is an AI tool for everything, and thus use it in my classroom.

 

There may well be benefits to using Generative AI and LLMs. I promise to keep my mind open for them and identify them when I see them. I promise.

 

In the meantime, it’s tempting to restructure my Comp I class around the theme of Generative AI in the writing classroom and require students to read this book as part of their literacy education. At the very least, it could encourage their skepticism of corporations selling them new AI products, which alone might serve as a foundation for critical thinking in the information age.

 

Fundamentally, however, as a writer I stand with Warner: “It is frankly bizarre to me that many people find the outsourcing of their own humanity to AI attractive” (7).

 
