It's Not Personal; It's Political
- Meg Vlaun

my decision to refuse AI in my writing, my classroom, and my life

8 April 2026
It’s Not Personal; It’s Political: Six Reasons I Choose to Refuse AI in My Writing, My Writing Classroom, and My Life
Before I Even Begin: in case you are not familiar with the Refuse AI movement, please explore the ideas behind it at the Refuse AI blog: Refusal.blog.
So far in my writing career, I have resolved never to be a political writer. I've read my fair share of Joan Didion and envied her confidence in taking a political stance, but I've simultaneously set that goal aside for myself. As a sensitive, I presumed my writing would always focus on psychology, humanity, and nature, and that it would be valuable only to those select few sensitives in the world, like me, interested in depth and breadth of minutiae over some grander social impact.
But suddenly here we are. My political sense has been roused.
Let me tell you the tale of how it happened…
Bizarrely, it never occurred to me until yesterday that I might someday need to defend my decision to Refuse AI. Perhaps I am so closely surrounded by artists and writing instructors that there has been zero pushback when I've voiced my stance. Indeed, thankfully, my employer, a community college, backs any English instructor who chooses to Refuse AI in the classroom.
But two days ago, a close tech-bro friend of mine sent me links to two Substack articles he'd recently written, suggesting I might want to read them and give him feedback. I looked at the first and last paragraphs of the first piece, immediately identified multiple red flags for AI use in its development, and asked him whether he'd used AI to write his articles. Thankfully, he was honest. He said yes; he'd used AI at the idea-synthesis, structural, and sentence levels. I thanked him for his honesty and told him that I would not be reading his pieces because of my stance, as a writing instructor and actively publishing writer, to Refuse AI. Because I do love this friend and care about his feelings, I told him, "It's not personal; it's political."
My friend’s first response was the ubiquitous “head-in-the-sand” strawman fallacy.
Therefore, I am here today to share my research and internal processing on this topic, to illustrate the myriad ways my version of Refusing AI is not ignorance or a "head-in-the-sand" approach to AI. Someone also called this Luddism, and at first I rankled at the comparison, but then I did my research and realized that those artisans (the Luddites) refused industrial technology in order to protect their skills and expertise (their livelihoods) from exploitation. Maybe it is an apt analogy. Indeed, I've recently been writing a novel set in Philadelphia in the 1920s, just prior to the textile industry's overthrow by industrial technology, and yes, that upheaval contributed to the turmoil of the Great Depression in that region. I empathize with their perspective.
This friend did two other things in his argument justifying his LLeMming Kool-Aid sellout mentality (ad hominem and weighted language, I know, but I'm having a bit of fun here): he compared AI to calculators (faulty logic), and he implied that AI is inevitable (a false narrative). As for the calculators, you can follow that rabbit hole down the internet all you like. I will just say this: Easter morning, while my hands were covered in goo as I assembled deviled eggs, I asked my husband to help me with some recipe math. Scaling my egg recipe by three, I needed to know how much mayonnaise to add to the yolks I'd just dislodged from the boiled eggs. I determined that drawing six tablespoons of mayo from my jar would be six times as difficult as drawing ONE portion, so I asked my husband to look up for me, "How many cups is six tablespoons?" I knew four tablespoons was ¼ cup, and I figured that if six was about 1/3 cup, I could just use one 1/3-cup scoop. My husband, a sometimes-tech-bro himself, asked me, "Why don't you just ask Alexa?" Me? The AI Refuser actively removing AI from all aspects of my life? I shot him a look, rolled my eyes, then asked Alexa. Alexa answered: "Six tablespoons equals ¾ of a cup." Although I concede that calculators have contributed to humanity's incapacity to do math, even I knew this answer was incorrect. We won't talk about the ways that GenAI and LLMs are orders of magnitude more pervasive than calculators. And we won't talk about the ways calculator corporations have not been selling calculators to us like crack cocaine. When was the last time you saw a calculator ad?
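For the record, the arithmetic here is simple enough that a few lines of code settle it (a minimal sketch, assuming US customary measures, where 16 tablespoons make one cup):

```python
# Sanity-checking the recipe math: US customary units assumed.
TBSP_PER_CUP = 16  # 16 tablespoons per US cup

def tbsp_to_cups(tbsp: float) -> float:
    """Convert a tablespoon count to cups."""
    return tbsp / TBSP_PER_CUP

print(tbsp_to_cups(4))  # 0.25  -> 1/4 cup, as I remembered
print(tbsp_to_cups(6))  # 0.375 -> 3/8 cup
```

Six tablespoons is 3/8 cup, so my 1/3-cup approximation was the closest practical scoop, and Alexa's "¾ of a cup" was off by a factor of two.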
As for the “AI is inevitable” false narrative, I’ll address that below in my list under corporate greed.
First, a bit about me. I am a published creative writer and college writing instructor. So I am living this dystopia every day of my life. I am also on the spectrum with a hyperfocus on patterns in language and literature. And a hyperfocus on AI now, I guess.
Frustrated by what I was witnessing on social media and in my own classroom regarding the invasion of AI, this semester I reworked my Composition II (Rhetorical Awareness) class to center on one theme: How Social Media and AI Rewire Our Brains. To support this semester's class readings, media, and research, I spent the past year diving headfirst into the dregs of media on social media and AI in ed tech, writing, critical thinking, cognition, etc. You get the picture. My intention, initially, was to remain as objective as possible and to really, seriously consider the benefits of AI in my field. My favored argument structure, the one I find most convincing, is Rogerian: a balanced approach. Unfortunately, the benefits of AI in my field do not appear to balance the harms, as you'll see below.
One really, really cool outcome of this decision is that many of my very bright Comp II students have chosen to carry this research forward on their own (I did not ask them to; they could choose any research topic that interested them), exploring the pros, cons, and prospects of AI in the following fields:
1. Architecture (middling)
2. Engineering (middling to promising, depending on the field of engineering)
3. Construction (promising)
4. Nursing (exceptionally promising)
5. Diagnostics (promising)
6. Medicine (sometimes tragic)
7. Law (almost always tragic)
8. Psychology, Grief, and Counseling (almost always fraught with peril)
9. Youth Cognitive Development (we won’t talk about this—let’s just say, banning before age 16 is an idea well-supported by research evidence)
My students have carried this research forward and developed Multimodal Public Arguments on their chosen topics. Thanks to my students, who scoured our Library’s resource databases (EBSCO, Gale, JSTOR, etc.) for their own research papers, I have evaluated balanced, scholarly, peer-reviewed research on all these topics.
So. As you can see, no "head-in-the-sand" here. But enough salad; let's get to the steak. Below you will find the reasons, based on my personal experience, anecdotal evidence, and both popular and scholarly research, that I choose to Refuse AI.
1. Cognition and Brain Health
I am currently on a deeply personal mission to avoid the horrors of my sweet Nana's demise via Alzheimer's. A sidebar deep-dive research topic of mine is the neuroscience of women's brain changes during perimenopause and menopause, changes that lead to higher rates of Alzheimer's in women than in men. Some clear correlations (and some proven causal links) between social media, AI, and cognition involve altered sleep patterns and mindlessness. Essentially, the brain behaves like a muscle: it needs rest, and it needs exercise to function properly, and social media and AI interfere with both. For these reasons, I have removed all social media from my iPhone, I do not keep my iPhone near my bed, and I choose not to use AI at all. Believe it or not, this also means I opt not to use tools like Google Maps whenever I do not need them. This is a conscious effort; sometimes I will deliberately drive a different route just to keep my mind nimble.
Very specifically, researchers, including teams at MIT and Microsoft, have linked AI use to declines in critical thinking. Here is my research on this topic:
“ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention” by André Barcaui in Social Sciences and Humanities Open, 25 Nov. 2025
“Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” by Nataliya Kosmyna et al., in MIT Media Lab, 10 Jun. 2025
“The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers” by Hao-Ping (Hank) Lee et al. in Chi ’25, a Microsoft Publication, Apr. 2025.
“AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” by Michael Gerlich in Societies, 3 Jan. 2025.
“Renewing the Culture of Critical Thinking: The Essential Role of Librarians in an Age of AI and Distraction” by Suzanne Morrison-Williams, in Florida Libraries, vol. 67, no. 3, Sept. 2025, pp. 47–50. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=8b460457-af31-3d70-aa1b-7863de0beb8a.
2. Writing Voice and Authenticity
As a Creative Writer and Writing Instructor, I have read my fair share of AI-generated slop both in my classroom and online. As a student of the written word since, well, birth, I believe the writing process is soulful. I experience every person’s writing as a fingerprint: each piece is unique and could only have been composed by its author. Each piece has distinct voice and will tell you so much about its author.
Further, as a sensitive who has experienced her fair share of inauthentic (read: toxic) people in her life, I do not have any space for people who are disingenuous, people who do not present their honest, human selves. To me, these people are dangerous for a variety of reasons, not the least of which that they are hiding something. As a result, I have weeded them all out of my life. And as a weirdo, I do not differentiate writing from life. What you present on the page is not distinct from what you present in life. To me, they are the same. Dishonesty in one is dishonesty in the other.
When AI steps in at any stage of that process (idea generation, organization, sentence polishing), authenticity is compromised. This violates my commitment to integrity, and so I choose not to engage with it.
I don’t have specific research on this, as the point is deeply personal and anecdotal, but if needed you could cull support for this stance from John Warner’s book, More Than Words: How to Think About Writing in the Age of AI.
3. The Writing Process
My stance as a writing instructor is that writing is an iterative process that permits (forces?) us to slow down, deeply investigate a topic, and thus evolve our critical knowledge. Writing is thinking, writing is learning, and writing is a process. Writing is the way I move through my ideas on any topic (such as this one right here—Refusing AI!) to flesh them out, better understand them, and connect them to various other aspects of my life to develop context and texture. This process is unique to humans.
When AI steps into this process, it robs the process of its very point, because AI does not process information as a human brain does. In short, AI short-circuits the writing/thinking loop.
While you could include the resources above on Critical Thinking, here are additional resources I’ve explored on this topic (yes, some of these are fiction. I believe in the persuasive power and deep truths found in fiction):
Reader, Come Home: The Reading Brain in a Digital World by Maryanne Wolf
More Than Words: How to Think About Writing in the Age of AI by John Warner
Brave New World by Aldous Huxley
Fahrenheit 451 by Ray Bradbury
4. Copyright Violation for Training Data
As many are already aware, AI corporations’ use of copyrighted works in their training data is a contentious topic. Authors (like me!) do not want their works included in AI training data because the work is copyrighted (my ideas, my style, my voice). This violation is plagiarism. In 2024, a group of authors filed suit against Anthropic for using their published works in Anthropic’s AI training data. Anthropic settled with those authors, shelling out $1.5B in damages.
My writerly friend, Jeff, is so wary of the theft of his writing and ideas that he only puts his writing onto a device connected to the internet at the very final stages before publication. Jeff’s decisions may seem extreme, but his caution is not unfounded. All of my writerly friends know that whatever we publish for free online will likely be stolen from us and used as training data. This is why I do not publish anything at all online for free that I am too attached to, or that I think I might make money from.
AI corporations’ all-too-free use of copyrighted work terrifies writers like Jeff and me.
Research:
Empire of AI by Karen Hao
“Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit” by Chloe Veltman for NPR, 5 Sep. 2025
5. Corporate Greed
AI corporations, in their unwavering march toward Artificial General Intelligence (AGI) and toward profit (yes, they’re in debt to the tune of trillions), are unethical in that they chase advancement for the sake of profit at the expense of our culture and public health.
I have three specific examples of this:
a. Grok’s Demise: Elon Musk’s AI “tool” Grok was recently shut down amidst allegations that its image generations sexually violated both women and children. The development of these so-called AI tools for profit is outstripping oversight and regulation, endangering some of our most vulnerable communities.
b. Suppression of Research: This one really, really upsets me. Karen Hao has determined that AI corporations are actively suppressing research related to AI. Actually, this has upset not just me but also a handful of my students, who wished to produce Public Arguments on this topic but could not find any scholarly research to support their stances. In Empire of AI, Hao describes the ways that AI corporations suppress and quash research that cuts against their profit margins. Censorship makes me absolutely nauseated.
c. The False Narrative that “AI is Inevitable”: As I mentioned earlier, my tech-bro friend suggested that I was simply delaying the inevitable by choosing to Refuse AI. But the “AI is Inevitable” narrative is false and spread by AI Corporations to sell more of their products. I see it personally, intimately, within my job. No school wants their students to be “left behind,” and so AI companies sell schools “state-of-the-art ed tech” so that their students don’t “fall behind” the AI trends. Therefore, we watch in horror from the English department as Administration and Enrollment invest in AI products that will violate our students’ privacy (FERPA).
The business leaders at our school have swallowed, hook, line, and sinker, the “AI is Inevitable” pressure to keep up, at the expense of our students’ best interests. I have found evidence of this both in Hao’s Empire of AI and in scholarly research on the long-term learning outcomes of ed tech, some of which I have included below.
Resources:
Empire of AI by Karen Hao
“AI Debt is Spiraling Out of Control: The Real Threat of AI is to the Economy” by Will Lockett in Medium, 1 Mar. 2026.
“Expert Comment: Chatbot-driven sexual abuse? The Grok case is just the tip of the iceberg” by Federica Fedorczyk in The University of Oxford News, 14 Jan. 2026.
“Commentary: Grok image shutdown shows why AI innovation cannot outpace responsibility” by Nishanth Sastry at the University of Surrey, 9 Jan. 2026.
“Impact of multiple educational technologies on well-being: the mediating role of digital cognitive load” by Rasha Kadri Ibrahim et al. in BMC Nursing, 5 Aug. 2025.
6. Data Centers and Environmental Impact
One of my closest friends lives in Loudoun County, VA. Whenever I visit her there, she takes me touring wineries, and we get rip-roaring drunk and solicit a third, sober friend to drive us home.
Unfortunately, the stunning rolling hills and valleys of Loudoun County are currently under assault: this is the wealthiest county not just in Virginia but in the entire US, and it has recently become the largest data center development region in the world.
I joke with my friend that we can go to the wineries once we can get past the data centers, but it’s not actually funny. It’s terrifying. And what’s further terrifying is how little scholarly, peer-reviewed research I can find on this topic, which suggests to me that this research is being quashed, as mentioned above. Indeed, many researchers report that they cannot get AI corporations to reveal figures like water consumption so that they can investigate further.
What is the actual environmental impact of AI? I want to know…because I have suspicions, and I am mad.
Research:
Empire of AI by Karen Hao
“The carbon and water footprints of data centers and what this could mean for artificial intelligence” by Alex de Vries-Gao in Patterns, 9 Jan. 2026.
“A Humming Annoyance or Jobs Boom? Life Next to 199 Data Centers” by Ana Faguy in BBC, 25 Oct. 2026.
“The World’s Data Center Capital Has Residents Surrounded: Northern Virginia housing developments that will soon be walled in by data centers exemplify the tensions over unfettered growth, as calls increase for regulation” by Linda Poon in Bloomberg, 29 Jul. 2025.
So my friend was incredulous when I told him that I Refuse AI. He suggested that I “examine” my life further to see all the ways I rely on AI without knowing it. My response: perhaps he is projecting. My choice to Refuse AI includes the following very active decisions:
1. Google searching with the -ai operator to eliminate the AI Overview
2. Focusing on scholarly research databases for my evidence
3. Turning off all “AI Assistants” available on my iPhone
4. Reporting AI slop when I encounter it on social media
5. Choosing to avoid exposure to AI when I encounter it anywhere
6. Opting out of “writing tools” powered by LLMs
7. Opting for media (books, movies, music, poetry) not powered by AI
8. Refusing to consume media composed with AI
Indeed, based on Iris Murdoch’s premise about the virtue of paying attention, I believe deeply that my attention is a political act. Therefore, I am withdrawing my attention from AI where I find it harmful in my life.
This is not meek ignorance; it is my stance.
~M
Here is the full list of research I’ve compiled on this journey, for your enjoyment:
Books:
Nonfiction:
Empire of AI by Karen Hao
Reader, Come Home: The Reading Brain in a Digital World by Maryanne Wolf
More Than Words: How to Think About Writing in the Age of AI by John Warner
Fiction:
Brave New World by Aldous Huxley
Fahrenheit 451 by Ray Bradbury
Scholarly Articles:
“Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” by Nataliya Kosmyna et al., in MIT Media Lab, 10 Jun. 2025
“The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers” by Hao-Ping (Hank) Lee et al. in Chi ’25, a Microsoft Publication, Apr. 2025.
“AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” by Michael Gerlich in Societies, 3 Jan. 2025
“Renewing the Culture of Critical Thinking: The Essential Role of Librarians in an Age of AI and Distraction” by Suzanne Morrison-Williams, in Florida Libraries, vol. 67, no. 3, Sept. 2025, pp. 47–50. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=8b460457-af31-3d70-aa1b-7863de0beb8a.
“The carbon and water footprints of data centers and what this could mean for artificial intelligence” by Alex de Vries-Gao in Patterns, 9 Jan. 2026.
“Impact of multiple educational technologies on well-being: the mediating role of digital cognitive load” by Rasha Kadri Ibrahim et al. in BMC Nursing, 5 Aug. 2025.
Blog/Guide:
“Refusing GenAI in Writing Studies: A Quickstart Guide” by Jennifer Sano-Franchini et al.
Popular and News Articles (I can provide PDFs for anyone without access):
“Writing Faculty Push for the Right to Refuse AI” by Kathryn Palmer in Inside Higher Ed, 16 Mar. 2026.
“AI and Threats to Academic Integrity: What to Do” by Colleen Flaherty in Inside Higher Ed, 20 May 2025.
“The People Outsourcing their Thinking to AI: The Rise of the LLeMmings” by Lila Shroff in The Atlantic, 1 Dec. 2025
“Will the Humanities Survive Artificial Intelligence?: Maybe not as we’ve known them. But, in the ruins of the old curriculum, something vital is stirring” by Graham Burnett, The New Yorker, 26 Apr. 2025.
“How AI Changes Critical Thinking: New Microsoft Research Findings” by Charles Towers Clark in Forbes, 14 Mar. 2025.
“ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study” by Andrew Chow in Time Magazine, 17 Jun. 2025.
“Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit” by Chloe Veltman for NPR, 5 Sep. 2025
“Expert Comment: Chatbot-driven sexual abuse? The Grok case is just the tip of the iceberg” by Federica Fedorczyk in The University of Oxford News, 14 Jan. 2026
“Commentary: Grok image shutdown shows why AI innovation cannot outpace responsibility” by Nishanth Sastry at the University of Surrey, 9 Jan. 2026.
“AI Debt is Spiraling Out of Control: The Real Threat of AI is to the Economy” by Will Lockett in Medium, 1 Mar. 2026.
“A Humming Annoyance or Jobs Boom? Life Next to 199 Data Centers” by Ana Faguy in BBC, 25 Oct. 2026.
“The World’s Data Center Capital Has Residents Surrounded: Northern Virginia housing developments that will soon be walled in by data centers exemplify the tensions over unfettered growth, as calls increase for regulation” by Linda Poon in Bloomberg, 29 Jul. 2025.