Be Your Own Best AI Detector

[Hero image: a college student with a robot telling them how to write their paper]
May 03, 2024
Justin W. Marquis, Ph.D.

Preface: About AI Detection

I know that there has been some concern over the institutional decision to remove the AI detection tool from TurnItIn (TII) and Canvas, so I want to address that first. In full disclosure, TII deployed that tool in their product free of charge for an introductory period, then elected to charge a significant amount to continue to include it after they had hooked many of you. Here at 91勛圖厙 and across higher education, that tactic was met with an overwhelming move away from the functionality, and not just because of the prohibitive cost. The decision was made primarily because the tool itself is problematic for a number of reasons: there is no transparency regarding how it works; it is generally unreliable; and, most importantly, it flags student writing for a variety of reasons that would not be considered cheating most of the time, such as using translation or grammar-correcting programs like Grammarly. The ethical considerations around confronting a student about their academic integrity over that kind of evidence were significant contributing factors to the decision. We will continue to evaluate this decision as the technology changes. But fear not: there is a human-centered, and I think manageable, way to deal with these challenges. I'll give step-by-step guidance at the end of this post.

You Are an AI Detector

Just in time for finals, I've decided to provide additional insight and resources related to the challenges we are facing with AI cheating. I'm not trying to drum up more business for the Academic Integrity Board, but rather attempting to set faculty up to feel confident in their own ability to detect AI writing as they prepare for the onslaught of final papers and projects that are about to start pouring in. In the last year, through working with AI and prompting my students to use it in targeted ways on assignments, I feel I have really refined my ability to detect AI-written work more often than not. In my dual roles as director of Instructional Design and Delivery and chair of the Academic Integrity Board, I continue to get regular questions about AI detectors and suspected violations of the university's Academic Integrity Policy. Building on my December 2023 post, "All I Want for Christmas is to Know How to Deal with AI-Assisted Cheating," I want to unpack the semantic clues hidden in AI writing that will empower you to know when your students are using AI to do their work for them.

Here are several subtle but easy-to-spot markers that will help you identify AI-generated writing.

Context is King

The most obvious giveaway of AI writing is a strange, sometimes almost incoherent understanding of context. I am talking about the context in which the assignment was given, but also the contextual boundaries around the information returned by the AI system. What do I mean?

Without skillful prompt engineering by students, AI simply does not understand anything about the context surrounding the question being asked. It cannot distinguish between a student attempting to generate a personal reflection and a request for a formal research paper. Students need to provide that level of context in their prompting to get a result that fits the requirements of an assignment. If they don't provide the context, the answers they submit may not address the actual question or may seem shifted slightly off topic.

In a broader sense, AI does not understand any of the context surrounding the material it has been trained on. I have seen several examples of this in my work with academic integrity in the past year. AI will provide answers beyond the constraints of the original prompt, supplying related information that it thinks is relevant because of its proximity in the training data to things in a student's prompt. Add that to its lack of understanding of the context of an assignment and you get things like references to sophisticated concepts far beyond the level of the class a student is submitting work for, information that was simply never covered in the course, or information that is factually wrong because the AI has hallucinated a new and exciting reality based on its algorithms.

Beyond Monotony

Another giveaway is that AI-generated text often lacks the natural variability found in human writing. Sentences may follow a similar pattern or structure throughout the text, making it sound monotonous and formulaic. AI might struggle with the subtle ways punctuation is used to create emphasis or guide the reader's understanding. Look for mechanical use of commas or semicolons, or a lack of exclamation points or question marks where appropriate for the tone or emphasis. For example, a series of run-on sentences or excessive use of commas might indicate the writing lacks a sophisticated understanding of punctuation's role in conveying meaning. Look for instances where the sentence structure feels repetitive or lacks the flow and rhythm of natural language. Try reading the suspect sentences out loud. If they sound like a robot talking, you might be reading AI text.

AI algorithms are trained on massive amounts of text data, but they might not capture the full range of sentence structures used by human writers. This can lead to a tendency to favor simple or formulaic sentence constructions throughout the text. Look for instances where sentences all start with the subject or follow a subject-verb-object pattern without variation. Also look for repetitive use of the same transitional phrases between sentences.

AI-Generated Sentence:

AI-generated text often lacks the natural variability found in human writing. Sentences may follow a similar pattern or structure throughout the text, making it sound monotonous and formulaic.

Human Writing:

While some AI-generated text can appear grammatically correct, it often suffers from repetitive sentence structures, making the writing sound dull and predictable.
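If you want a rough, mechanical way to see these structural patterns, the sketch below is a minimal illustration, not a detector: it counts a few of the markers just described, namely how varied sentence lengths are, whether many sentences open with the same word, and how often a small, assumed list of stock transitional phrases appears. The naive sentence splitting, the phrase list, and the sample text are all simplifications I've chosen for illustration.

```python
# A minimal sketch (assumed thresholds and phrase list, not a real detector)
# of the structural monotony markers described above, using only the Python
# standard library.
import re
from collections import Counter
from statistics import pstdev


def monotony_markers(text: str) -> dict:
    """Surface simple signals of repetitive sentence structure in a passage."""
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    openers = Counter(s.split()[0].lower() for s in sentences if s.split())
    # Hypothetical list of stock transitional phrases worth counting.
    transitions = ["furthermore", "moreover", "in addition", "additionally", "in conclusion"]
    transition_hits = sum(text.lower().count(t) for t in transitions)
    return {
        "sentence_count": len(sentences),
        "length_spread": round(pstdev(lengths), 1) if len(lengths) > 1 else 0.0,
        "most_common_opener": openers.most_common(1)[0] if openers else None,
        "transition_hits": transition_hits,
    }


sample = (
    "AI-generated text often lacks the natural variability found in human writing. "
    "Sentences may follow a similar pattern or structure throughout the text, "
    "making it sound monotonous and formulaic."
)
print(monotony_markers(sample))
```

None of these numbers proves anything on its own; plenty of careful human prose is evenly paced, and plenty of AI output is not. Treat them as a nudge to reread the passage, not as evidence.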

More Than Just Good Grammar

While AI can follow grammatical rules, it might misinterpret the intended meaning of words or use them inconsistently within the context of the sentence. Look for misused homophones (e.g., "there" vs. "their") or capitalization errors that disrupt the intended meaning. AI also tends to overuse specific words or phrases, or it might choose words that sound impressive but lack context or relevance to the topic at hand. Be mindful of overly complex vocabulary or the repetitive use of synonyms that don't add depth or meaning.

AI-Generated Sentence:

Tracing the evolution of financial documentation reveals its genesis in the Mesopotamian civilization, where the inception of economic record-keeping was etched into clay tablets, prefiguring modern accounting systems.

Human Writing:

The practice of recording transactions in ledgers has been a cornerstone of commerce since the time of ancient Mesopotamia, where clay tablets served as the earliest known financial records.
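The word-choice markers above can be eyeballed in much the same way as the structural ones. Here is a similarly minimal sketch, with an assumed stopword list and an arbitrary repetition threshold, that surfaces the content words a passage leans on again and again; overuse of the same impressive-sounding terms is one of the signals described in this section.

```python
# A minimal sketch that counts repeated content words in a passage.
# The stopword list and the repetition threshold are assumptions for
# illustration only, not calibrated values.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "for",
             "with", "is", "are", "was", "were", "it", "its", "that", "this"}


def overused_words(text: str, min_count: int = 3) -> list[tuple[str, int]]:
    """Return content words that appear at least min_count times."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [(w, n) for w, n in counts.most_common() if n >= min_count]


# Example with a deliberately repetitive (made-up) passage.
sample = (
    "The framework provides robust solutions. The framework also provides robust "
    "integration, and the framework provides robust scalability."
)
print(overused_words(sample, min_count=2))
# e.g. [('framework', 3), ('provides', 3), ('robust', 3)]
```

A short list of heavily repeated words is, again, only a hint; the judgment about whether that repetition reads like a student's voice is still yours to make.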

Deep Thoughts

Simply put, AI ain't people and don't know what people knows. I like to think about the difference between knowledge and information here. Information is basic facts, figures, and things you can give as answers on a multiple-choice, true/false, or short-answer test. Knowledge is taking basic information and applying it in context and with an awareness of the impact of the application. AI has information; people have knowledge. AI-generated content typically lacks the emotional depth and personal touch that human writers can convey. Take the first sentence of this paragraph, for example. I imagine that my non-standard English phrasing had an impact on how you think about me, my writing ability, the validity of this post, my socio-economic background, cultural heritage, or many other possible subtle shadings of how you read the post. Without skillful prompting, students simply can't make AI evoke subtle feelings in the same way. To spot AI text, be on the lookout for instances where the writing feels sterile or objective, lacking the enthusiasm, critical perspective, or emotional nuance a human writer might bring to the topic.

AI-Generated Sentence:

Unemployment rates have increased, which could lead to economic challenges and social issues.

Human Writing:

With every uptick in unemployment figures more individuals face the stark reality of an uncertain future. This is not just a dip in economic graphs but an indicator of the real-life struggle for stability and dignity within our society.

Finally, while you were probably all taught at some point between junior high and now to avoid clichés like the plague, AI has been trained on the vastness of the internet, where clichés are a common shorthand for expressing connections between concepts (often used poorly, I might add). While AI might overuse clichés or other common phrases, it can also get tripped up by more nuanced vocabulary usage. Look for words or phrases that sound impressive on the surface but lack context or relevancy to the specific topic.

AI-generated content is characterized by a surface-level understanding of a topic, providing basic descriptions but lacking analysis, critical thinking, or unique insights. The AI tell here will be shallow summaries of concepts without evidence or counterarguments. As one of my graduate school mentors put it in a recent article:

While ChatGPT was impressive in terms of its linguistic sophistication, it reminded me of people I have occasionally encountered during the past 50 years in academia who are highly articulate, but who nonetheless really do not know what they are talking about. Their understanding is often superficial, even though they appear to be quite confident in their eloquent erudition about some topics. I even have a term for this type of academic prose that is unprintable here. I usually just ignore such claims; especially after probing via Socratic method for critical awareness, those locutors often struggle to provide further clarification, real-world examples, and rational justifications of their claims.

Your students are almost certain to use generative AI in some capacity as it becomes increasingly available on platforms like Copilot, Gemini, ChatGPT, and Claude and is integrated into almost every other application. And they should be using it. They need to develop both the technical skills associated with this technology and an ethical understanding of the boundaries around that use. However, there is real value in doing the hard labor of thinking and writing, and students also need to be held accountable for it. You can help do that by being your own best AI detector and unearthing the clues to AI writing in your students' work if they try to pass it off as their own.

As promised at the outset of this article, here is a step-by-step guide to handling suspected cases of AI cheating.

  1. Read the student work with a critical eye for the markers explained above and in this article:

    a. Lack of awareness of the context of the assignment, the class, or the student's experiences.

    b. Superficial writing that overuses complex words or clichés in a way that doesn't enhance meaning.

    c. Robotic or monotonous writing with repetitive sentence structures and word usage.

    d. Writing that is too perfect to be human: overly polished, with literally inhumanly perfect grammar.

    e. The writing style of AI-generated papers tends to be very vanilla, plain, and unadorned; it could best be described as generic or sterile.

    f. Be alert to abrupt changes in a student's writing style or quality of work.

    g. AI tends to write things that sound right but may not be verifiable or are obvious fabrications to someone with substantial expertise in the field. You might also notice details from one author or source being attributed to another source, or details that don't align with the assignment prompt or that are beyond the scope of the course content.

  2. Trust your gut. You are an expert in your field; if you see any of the markers above or something just feels off, run the paper through ZeroGPT, a free AI detector.
  3. If you get results from ZeroGPT that support your suspicion, follow the normal Academic Integrity Violation process.

    a. Discuss the incident with your department chair or program director.

    b. If they agree with your assessment, you and the chair/director should have a conversation with the student to discuss your suspicion. Ask the student to explain their process for completing the assignment.

    c. If they admit to using AI in a way that you believe is problematic and that is clearly forbidden in your syllabus or the university's Academic Integrity Policy, enforce the policy and complete an Academic Integrity Violation form so we will have a record of the incident.

    d. If the student does not admit to the violation but you and your chair/director still believe one has occurred, complete the form and the Academic Integrity Board will handle the case from there.

Most important in this process is the idea that you should trust your own ability to discern if something is off with a student's work. You really are the best AI detector available.
