All I Want for Christmas Is to Know How to Deal With AI-Assisted Cheating


December 11, 2023
Justin W. Marquis, Ph.D.


The rapid evolution of Artificial Intelligence (AI) has brought about groundbreaking changes in various sectors, including education. However, with these advancements comes a new challenge for faculty in higher education: AI-assisted cheating. I first became aware of this developing issue in the fall semester of 2022, when faculty members began approaching me about student work they thought was too polished in its writing style yet simultaneously lacking in critical detail and oddly formulaic. AI hadn’t really hit my radar at that point, as ChatGPT was just emerging on the scene. I was curious, so I began investigating and experimenting with the AI tools that were available. In the early days of generative AI (AI able to produce human-like text, images, and other media), it was easy to spot when students had used the tool. Prompt engineering (the practice of prompting the AI to generate the content and format you want) was still brand new to most people, and the tools themselves had not yet begun their evolution into the robust systems we see now. In response to this new academic challenge, our university started a working group of concerned faculty and staff. This group set out to understand these AI tools, their impact on our fields, and how to use them for 'good' while preventing their abuse by students and society in general.

One outcome of those and other conversations was my appointment as Interim Chair of the University’s Academic Integrity Board, a direct result of my engagement with this topic from the beginning and my participation in the AI Pedagogy group, which disseminates information on AI to faculty. That work included workshops on designing assignments to counteract possible student use of AI. But as AI becomes more accessible, so does the temptation for students to use these tools unethically. Our University Academic Integrity Board has seen a dramatic increase in reported cases over the past year, so it seems timely, with final exams approaching, to share some insights on how to deal with this problem should it rear its ugly head just in time for the holidays.

Understanding AI-Assisted Cheating

Here’s the truth: a smart student can use AI to write a paper, take a test, or conduct research, and springboard off those shortcuts to make the cheating undetectable by even the best AI-detection software. I tested various AI detectors by feeding them a series of control papers. First, I submitted a paper I knew one of my students had written; the detectors confirmed in every case that it was authentic. I then took that assignment prompt and engineered ChatGPT to write the same paper. The detectors scored it between 70% and 100% AI-written, for a paper that was in fact 100% AI-written. I then spent 10 minutes tweaking that paper to fool the detectors. After that minimal effort, the detection scores dropped to between 0% and 30%. With a few more minutes of work, I could have fooled every AI detector. We are talking about a maximum of 20 minutes’ work to produce a paper that probably takes the average student 2-3 hours to write at a minimum. This is the new reality we are facing.
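If you want to run a similar informal test yourself, the sketch below shows the shape of the experiment in Python. Everything in it is a placeholder: `detect_ai_probability()` is a hypothetical stand-in for whichever detector you have access to (most commercial detectors only offer a web interface, so you would adapt that step to your tool), and the texts are dummies, not my actual control papers.

```python
# Minimal sketch of the control-paper experiment described above.
# detect_ai_probability() is a hypothetical stand-in, not a real API.

def detect_ai_probability(text: str) -> float:
    """Hypothetical stand-in: return the detector's 'percent AI-written'
    score (0-100). Returns a dummy value so the sketch runs end to end."""
    return 0.0

# Placeholder texts: substitute your own control papers.
papers = {
    "known human-written":    "Full text of a paper a student verifiably wrote...",
    "100% ChatGPT-written":   "Full text generated from the same assignment prompt...",
    "ChatGPT + 10 min edits": "The generated text after light manual tweaking...",
}

for label, text in papers.items():
    print(f"{label:>24}: {detect_ai_probability(text):5.1f}% flagged as AI-written")
```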

Strategies for Identifying AI-Assisted Cheating

Here’s the good news: most students aren’t as smart as a 50-something PhD who is actively engaged in studying AI and understanding how to work with and against the tool. Most students don’t even understand prompt engineering and the iterations required to get generative AI to write a good paper. Based on that assumption, here are the telltale signs of an AI-written academic paper:

  1. It often lacks personal insight or contains “personal” insights that seem generic, lack detail, or are vague in a way that seems artificial.
  2. It may seem overly polished, as in having perfect (literally inhumanly perfect) grammar, while simultaneously being generic in a way that doesn’t jibe with the attention to detail that such perfect grammar implies.
  3. The writing style of AI-generated papers tends to be plain and unadorned. In my experience, even smart students tend to overwrite, using the wrong words, too many words, or word choices that are slightly off. Without prompting to write in a specific style, AI writes in a register best described as generic or sterile, and it never uses unnecessary words or the wrong words.
  4. Students trying to incorporate AI into their own work are unlikely to be skilled enough to match that style, so be alert to abrupt changes in a student’s writing style or quality of work (a rough way to quantify this is sketched after this list).
  5. AI has thus far exhibited a shocking tendency to hallucinate: it makes things up. Those fabrications often sound right but may not be verifiable, or are obviously wrong to someone with a PhD in the field. You might also notice details from one author or source being attributed to another, details that don’t align with the assignment prompt, or details beyond the scope of the course content.
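None of these signs is proof by itself, but the style-shift cue in item 4 can be roughly quantified if you have plain-text copies of a student’s earlier work. Below is a toy stylometry sketch in Python; the two features (mean sentence length and vocabulary richness) and the 35% threshold are illustrative choices of mine, not a validated detector.

```python
import re

def style_features(text: str) -> tuple[float, float]:
    """Two crude stylometric features: mean sentence length (in words)
    and type-token ratio (distinct words / total words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    mean_len = len(words) / max(len(sentences), 1)
    ttr = len(set(words)) / max(len(words), 1)
    return mean_len, ttr

def flag_style_shift(past_texts: list[str], new_text: str,
                     rel_threshold: float = 0.35) -> bool:
    """Flag the new submission if either feature drifts more than
    rel_threshold (an arbitrary, illustrative 35% cutoff) from the
    average of the student's past work."""
    past = [style_features(t) for t in past_texts]
    avg_len = sum(f[0] for f in past) / len(past)
    avg_ttr = sum(f[1] for f in past) / len(past)
    new_len, new_ttr = style_features(new_text)
    return (abs(new_len - avg_len) / max(avg_len, 1e-9) > rel_threshold
            or abs(new_ttr - avg_ttr) / max(avg_ttr, 1e-9) > rel_threshold)

# Example with placeholder texts; substitute real essays.
earlier = ["Text of the student's first essay...", "Text of the second essay..."]
if flag_style_shift(earlier, "Text of the suspiciously polished new submission..."):
    print("Style shift detected: worth a closer look, not proof of cheating.")
```

A flag from something like this is a reason to look closer and talk to the student, never evidence on its own.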

In the end, you need to trust your gut and your expertise in identifying AI-written student work. You are teaching at a university because you are one of the foremost experts in your field in the world. You are a legitimate competitor to AI, and you have the advantage of understanding context. AI functions by using mathematical probability to put words together; it doesn’t understand what it is saying in any meaningful way. You understand both what you are saying and why you are saying it, and you can recognize when an author doesn’t really understand what their writing conveys, as is the case with AI as an author. You also have a sense of what each of your students is capable of through your interactions with them and other writing samples. Use your expertise and intuition as your own finely tuned AI detector.
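To make that last point concrete, here is a deliberately tiny illustration of "probability puts the words together": a toy bigram model that generates text purely by sampling which word tends to follow which. Real LLMs use enormous neural networks over sub-word tokens, but the generate-by-probability principle is the same; the training text here is a placeholder.

```python
import random
from collections import defaultdict

# Toy bigram model: it "writes" by sampling each next word from the
# distribution of words that followed the current word in its training
# text. No understanding involved, just counts and chance.
corpus = ("the student wrote the paper . the paper was polished . "
          "the professor read the paper and the professor was suspicious .")

follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)  # repeated entries make common pairs more likely

def generate(start: str, length: int = 12) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break  # dead end: this word never appeared mid-corpus
        word = random.choice(follows[word])  # sample proportional to counts
        output.append(word)
    return " ".join(output)

print(generate("the"))  # fluent-looking word strings, no comprehension behind them
```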

Before you get to that point, you should take a few steps early in the semester to launch a preemptive strike against AI cheating. Here’s how:

  1. Promote Open Dialogue: Encourage discussions about academic integrity in your classroom. Make it clear that AI-assisted cheating is a violation of these principles and discuss the long-term repercussions of such actions.
  2. Be Crystal Clear about Expectations: If you forbid the use of AI on all or some assignments, say so explicitly, both in your syllabus and in any assignment specifications you provide. Conversely, if you allow the use of AI on assignments, be clear about which assignments qualify and what the boundaries of that use are.
  3. Foster a Culture of Integrity: Build a classroom environment where integrity is valued above grades. Emphasize the importance of original thought and the learning process.
  4. Adapt Assessment Methods: Consider incorporating more in-class assessments, oral presentations, and customized assignments that require personal reflections or experiences, making it harder for AI to assist.
  5. Educate Students: Teach students about the ethical use of AI. Many may not fully understand what constitutes cheating in the context of AI. I have a conversation with my students at the start of the semester establishing acceptable and unacceptable uses of AI both in my classroom and beyond.

Engaging in these preemptive practices can curtail the incidence of AI cheating in your courses. But there will always be some students who feel the need to push the limits you set.

What happens when your R-AI-D-A-R starts blaring a warning?

If you have had the conversation with your class up front and provided clear documentation about what is and is not acceptable (remember that the University’s existing Academic Integrity policy does cover the use of “unauthorized tools” such as generative AI), but you still find students engaged in AI cheating, how should you approach the situation?

If you suspect a student of AI-assisted cheating, address the issue quickly and constructively. Aim for a conversation that leads to learning and understanding rather than mere confrontation and the threat of punitive measures, reiterating the value of personal integrity and of doing hard work for yourself. The ideal outcome of one of these conversations is for the student to admit culpability and accept a reasonable “punishment.” I recommend having them redo the assignment(s) in question for partial or even full credit, depending on their attitude in the conversation. In the worst case, gather your evidence and submit the required documentation for an Academic Integrity Violation. Learning can happen in many different ways, and some students may need to face unpleasant consequences to get the message. There are resources on your campus to help you navigate that process and help students learn from their mistakes.

The challenges posed by the dawn of the Age of AI are real, and adapting to them is going to be hard. We are facing a steep learning curve as AI rewrites the rule book on what knowledge is, what tools can and should do, and our role in using those tools responsibly. As we navigate this new landscape, it's essential to remember that AI, when used ethically, can be a powerful tool for learning and growth. Our goal should not be to instill a fear of AI but to cultivate a culture of integrity and responsibility in its use. By staying informed, adapting our teaching methods, and fostering open communication, we can guide our students to use AI as a means to enhance, not shortcut, their educational journey.
