Part 3: Conversations about Cheating – Revisiting AI & Academic Integrity

by Carie Cardamone, Associate Director for STEM, Professional Schools & Assessment

This is Part 3 of the series Addressing Academic Integrity in the Age of AI 

Image generated with DALL-E via ChatGPT-4o to illustrate this article.

In Part 1: Beyond AI Detection – Rethinking Academic Assessments and Part 2: The AI Marble Layer Cake – Reconsidering In-Class and Out-of-Class Learning & Assessment of this series, we explored strategies to create a supportive learning environment and discourage the misuse of AI tools. In Part 3, let's explore approaches to addressing suspected cheating, useful for instructors who increasingly encounter concerns about the originality of their students' work and the extent of AI contributions to it.

Suspicions of cheating can evoke feelings of disappointment and frustration. However, how we respond can significantly impact our relationships with our students and their learning environment beyond our course. By approaching the situation with empathy and openness, we can create an environment where students feel comfortable discussing their challenges and motivations. This not only addresses the immediate concern but also fosters trust and promotes long-term academic integrity.

The following questions provide a framework to help us navigate a process of evidence-gathering and respectful dialogue that centers our relationships with our students while upholding academic integrity.

How have we framed the use of AI in the course?

Setting clear expectations for AI use begins with our syllabus and extends through the day-to-day activities and assignments.  Start by including guidance around the use of AI in your syllabus; CELT’s website offers example statements. When discussing these guidelines with your students, use this as an opportunity to emphasize the value of the learning process and explain why certain AI uses might hinder their learning.

Clarifying your specific expectations for AI use in each type of assessment is increasingly important, as many common writing tools now integrate suggestions from AI assistants. A student's use of an AI tool for proofreading, editing, or brainstorming may produce text that differs from their usual voice or resembles AI-generated syntax or style. To help define acceptable AI activities for your assignments, consider exploring this worksheet and this AI Assignment Scale.

Maintain an open dialogue about AI use throughout the semester. Regularly revisit your policies, demonstrate useful versus unhelpful applications of generative AI, and clarify your expectations about citing AI use. Our guidelines and ongoing communication create a framework that will guide our conversations with students when we suspect AI-related cheating.

“I may be pessimistic about out-prompting ChatGPT or detecting AI text, but I am not at all worried that the act of writing has become less valuable to students. Writing practice continues to be intensely rewarding for students and central to intellectual growth in college.”

– Anna Mills

What observations trigger our suspicions of AI-generated text? 

We may question the authenticity of a student’s work for a variety of reasons. We might notice a sudden shift in writing style or quality compared to their previous assignments. If we’ve experimented with AI-generated responses to our prompts, we might recognize characteristic language patterns or “voices” associated with specific AI systems. Sometimes, AI detection tools flag text as potentially machine-generated, raising our suspicions. Whatever the source, it’s crucial to approach these suspicions with caution. 

Our ability to detect AI use in students' work is uncertain and often flawed. When an AI detector attempts to determine whether a passage was generated by AI, it compares the language's similarity to that produced by popular AI systems – all of which were trained on human writing. Writing generated by AI, e.g., text generated "in the style of XYZ" or submitted with small alterations, also frequently evades AI detection tools. Because of the inaccuracies and limitations inherent in these detectors, Turnitin at Tufts will not return an AI writing indicator, and a formal hearing requires more evidence than a flag from an AI detection tool.

Even for those well-versed in AI-generated text, our perceptions are biased. We're more likely to spot instances where students have directly copied AI outputs, used language similar to AI-generated text, employed less sophisticated AI models, or used simple prompting techniques. These biases can lead to false accusations, as evidenced by numerous cases involving both students and academic peers.

Given these limitations and the potential for false or biased accusations, it's essential to critically examine our suspicions before taking action.

What other evidence might we gather to assess the situation?

To evaluate suspected AI use, look beyond the single assignment in question. A student's evolving writing style provides valuable context. Even in larger classes, prior submitted work can establish a baseline. Drafts, outlines, or other products submitted by a student along the way can also provide insight into their writing process. In-class observations of the student's interactions likewise provide context about their evolving understanding.

Course technology can also reveal patterns in a student's work. For example, Google Docs' version history can reveal the writing process over time, and Canvas logs can show patterns of engagement with course materials. An AI detector or text analyzer might flag potential inconsistencies or unusual writing patterns, providing another perspective. You might also compare the assignment to the ideas and phrasing generated by AI systems (e.g., placing your assignment prompt into ChatGPT, Gemini, Perplexity, and/or Claude).

A range of evidence—from writing samples to digital footprints—can help inform a conversation with the student about potential AI use.

How can we talk with our students about AI suspicions?

If we feel that our suspicions are worth exploring, the first step is a direct conversation with the student. It is important to start from a place of inquiry, because accusations can erode trust and damage the relationship with the student. Use questions rather than statements to guide the conversation: begin with an inquiry into the student’s process and ideas, address your observations and concerns, and ask about their potential interactions with AI.  Here are some example questions that might help guide your conversation.

Start by exploring the student’s process and ideas:

  • This paper represents a significant improvement from your last one. Can you walk me through how you approached writing it?
  • Did you encounter any challenges while writing this paper? How did you overcome them?
  • Can you tell me more about your perspective on XYZ in your essay? What inspired you to choose this angle?
  • How did you find and evaluate the sources you used?  
  • What helped you develop your understanding of Advanced Concept XYZ in this work?
  • Reflecting on your writing process, what do you think were the most valuable parts? How do you think you could improve further?

After discussing the assignment and the student’s writing process, address the use of AI:

  • I noticed XYZ, which led me to wonder if you used any AI tools to assist with your writing?

When a student describes their use of AI, it’s helpful to understand how and why they decided to use it:

  • What led you to use an AI tool for this assignment? Were there any time pressures or challenges you faced? 
  • What other resources or supports did you seek out besides AI?
  • How did you incorporate the AI tool’s suggestions into your writing? Can you show examples of how AI influenced your work?
  • How do you think using AI impacted your understanding of the assignment? What benefits did AI provide?
  • What feedback or guidance would be most helpful from instructors regarding AI use in assignments like this?

If a student denies using AI, explore your suspicions further while framing the conversation as a learning opportunity:

  • Can you show me notes, an outline, or an early draft to help me understand how your ideas developed over time?
  • This section seems different from your previous work and has a different style or language from the rest of the paper. Can you tell me more about what informed your writing here?
  • When did you work on this assignment and how much time did you spend? 
  • What software did you use to work on your paper? Are you aware that suggestions made by Grammarly/Google/Word/etc. are generated by AI systems?

If suspicions are unresolved after this conversation, the information gathered from these questions will inform your next steps. 

What do we do if we conclude that the student may have been cheating with AI?

Misuse of AI is covered by the existing policies on cheating, plagiarism, and inappropriate collaboration at Tufts. Instructors in AS&E are required to report concerns about academic misconduct to the Office of Community Standards, whose team is also available to instructors for consultations.

References

Eaton (2024) AI Plagiarism Considerations Part 1 AI Plagiarism Detectors, Part 2 When Students Use AI & Part 3 Having the AI Conversation from the AI+Education=Simplified substack

Fleckenstein et al. (2024) Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays in Computers and Education: Artificial Intelligence

Gallant (2024) How Do We Maintain Academic Integrity in the ChatGPT Era? from the AAC&U’s Liberal Education

Gregg-Harrison (2023) Against the Use of GPTZero and Other LLM-Output Detection Tools from Medium 

Grove (2023) British Academics Despair as ChatGPT-Written Essays Swamp Grading Season from Inside Higher Ed 

Klein (2023) ChatGPT Cheating: What to Do When It Happens from Education Week

Lieberman (2024) AI and the Death of Student Writing from the Chronicle of Higher Education

Riyeff (2024) Generative AI and the Problem of (Dis)Trust from Inside Higher Ed

Spector (2023) What do AI chatbots really mean for students and cheating? from the Stanford Accelerator for Learning

Steere (2024) Anatomy of an AI Essay from Inside Higher Ed 

Trust (2023) Essential Considerations for Addressing the Possibility of AI-Driven Cheating, Part 1 from Faculty Focus

Weber-Wulff et al. (2023) Testing of detection tools for AI-generated text from the International Journal for Educational Integrity

Wolkovich (2024) ‘Obviously ChatGPT’ – How reviewers accused me of scientific fraud from Nature

Return to the series Addressing Academic Integrity in the Age of AI