
Hyperbole Aslide: My Rant on ChatGPT in the Classroom, Cont'd...



4 August 2025


It got a lot worse before it got any better, as many things do.


The morning after I identified multiple instances of ChatGPT use in my online classroom, a little over a week ago, I reached out to my supervisor: three of these students had not only failed to respond to my accusations of unauthorized ChatGPT use—they hadn’t responded to any of my feedback all semester. And it wasn’t until I went back through every one of their assignment submissions and discussion forum posts, side by side, that I was finally able to identify the patterns.


“How can I engage a student to communicate with me about suspected unauthorized ChatGPT use if they never engage with me on any feedback?” I asked my supervisor by email. “Would it make sense to talk about this by phone?”

The following morning, I received an email from her, asking when a good time would be to chat.


Right now. NOW, please!


Thankfully, my supervisor at the school is responsive, present, and supportive. But what she revealed to me during that conversation was appalling.


Apparently, in the online classroom, we can now expect that upwards of 10% of our “students” are not actually real participants in the course, but rather “bots.” What does this mean? It means that someone uses a real person’s name and identifying information to register for the course and receive a financial aid check; they only need to remain in the class until the check arrives in the mail. After that, what may be operating as a student in my course is a computer program, or bot, designed to complete the assignments—for better or for worse. Because in some cases, financial aid will pay out even if the student fails the course.


My supervisor recommended that within the first two weeks of class, I implement certain engagement checks to assure myself that the students in my course are human.


What dystopia did I just land in?


But it gets still worse.


At this point, after this conversation with my supervisor, I felt like I was operating within a nightmare, but I began the process of evaluating which students engaged with me personally over the course of the semester.


I determined that across my two sections of Comp I, some thirty students, there were three who had never engaged with me. I appealed to all three for engagement. None responded. So, per my supervisor’s guidance, I reported all three to the Dean of Students for every instance of suspected academic integrity violation across the semester; in each case, these amounted to more than three instances. Once a student accrues more than three reported violations, the Dean initiates a process of investigation and engagement with the student, and the reports go onto the student’s permanent record.


After this, two of the three students scheduled meetings with me. They were/are real people.


But this process of reporting incidents, which I did nine times across these three students, involved a lengthy form with detailed information and documented evidence of each infraction. The form was neither quick nor easy to complete, and reporting these students consumed multiple hours of my time. Meanwhile, my cortisol rose to a high, untenable thrum as I waited for the students to respond to my accusations—the kind of confrontation I habitually avoid because of my anxiety disorder.


And it dawned on me: no instructor wants to go through this.


Furthermore, I suddenly recalled a conversation I had weeks ago with the very same supervisor, where she told me that because she’d been in her position as a hiring authority for only three years, there are some Comp I and II instructors under her purview whom she has never met. They teach only online, they attend trainings only online, during which they never turn on their Zoom cameras, and since they were hired before she began her job, she cannot vouch for their presence, humanness, or integrity.


So not only is it tedious and emotionally taxing for instructors to report students for AI-related academic integrity violations, but there could, ostensibly, be instructors out there who themselves are only half-present in their classrooms, grading assignments with the aid of AI.


Thus, it is possible for this ouroboros to feed upon itself: bots register fraudulently as students to scam financial aid checks and complete all the assignments with AI, which is then graded by AI. And of course, AI is going to think an AI submission is deserving of a top grade, because AI cannot determine when writing is appropriate or soulless or above grade-level.


Please tell me my imagination is running away with me! Please!


Ok. Hyperbole aslide…I mean aside.


It turns out I have just one student I suspect of anything like this. And let’s tell ourselves that most instructors do care and are willing to report.

That same day, my husband forwarded me a LinkedIn post from a post-grad student attending the Air Force’s Air War College. This student admitted to a moment of weakness and laziness in one of her classes. She needed to find a resource that supported a claim she was making, so she asked ChatGPT to find one for her—and, of course, ChatGPT complied.

With a fake resource.


Thankfully, this student took the additional step to vet the resource and found that it didn’t exist (just as I’d been seeing in my classes). But then she asked ChatGPT to find her a source that supported her argument that really did exist, and ChatGPT complied again, this time with a real resource.


And perhaps this is even more upsetting.


Are my students going to attend Comp I and II simply to learn how to write better prompts for ChatGPT—instead of learning how to write, think, and research for themselves?


Is that still a win because it’s still writing?


Anyone else want to tear their hair out?


Ok. Breathe, Meg. It’s time to turn this around.


In the week after my frenzied suspicions of every single student in my class, I handed out a dozen or more zeros on essay assignments and demanded that those students meet with me via Zoom. A dozen or more of them did, and in almost every case, their errors in research, source citations, quotes, etc., were entirely innocent. Learning moments. Teaching opportunities. I spent hours on Zoom calls with them, walking through specific errors and teaching the nuances of formatting citations to avoid plagiarism and accusations of ChatGPT use, and I determined that 90% of my students are not only delightful, but trying their best.


Learning to write, do research, and cite outside sources is difficult—and it’s made even more challenging when the instruction is text- or video-based and not hands-on. Herein lies the value of my in-person Comp I and II courses, where students can engage with me during computer labs to learn precisely how to do the thing with their own work.


Each engagement with a student last week was beneficial. Every student left the meeting with a smile on their face. I double-checked that they all felt good about their writing and what they’d learned during our meetings before we signed off.

Ultimately, my suspicions of unauthorized use of AI in the classroom brought my students and me to a deeper, more meaningful, more human connection in an online course, and I can’t help but believe that learning outcomes will improve from it.


I’m certainly not saying we should wrongfully accuse students of unauthorized AI use. But I am saying that as online instructors, perhaps it is valuable to find ways to get each of our students to commit to face-to-face, one-on-one engagement at least once during each course.


One of the many appeals of an online education is its promise of limited personal engagement. This may sound wonderful, especially to introverts. But despite what we want, as instructors and students both, we are all human—and humans need to connect.

