Dealing with AI-generated academic dishonesty from a policy and teaching perspective
Sometime toward the end of this last spring semester, a professor at Texas A&M University–Commerce decided, after an online consultation with what he called “ChatGTP,” that all the students in one of his senior classes had used the large language model to write their papers [1]. He gave them all incompletes and effectively blocked their graduations. It turns out the professor was largely wrong. One of his students pointed out that the same methodology the professor used to detect AI-written papers, when applied to the professor’s own doctoral dissertation published in 2021, falsely indicated that it too was likely written by an AI.
The incident at Commerce was largely caused by a lack of understanding of an emerging and disruptive technology. But even though that situation was handled poorly, the underlying problem remains: some students do use AIs such as ChatGPT to generate work they pass off as their own. Do we need to revisit our academic regulations in order to stay ahead of this problem?
The short answer is no.
Disruptive technology like ChatGPT will always evolve faster than academic regulators can produce and enforce specific new rules. Fortunately, the University of Alaska Student Code of Conduct [2] already has this covered. UA Board of Regents Policy Part IX, P09.02.020.B.1 states that “cheating, plagiarism, or other forms of academic dishonesty” are prohibited and that the University may initiate disciplinary action and impose sanctions on offenders. The rule is already approved, and the enforcement mechanisms are already in place. If you detect plagiarism or other academic dishonesty in your class, it is up to you and your department to decide how to proceed with the offense. But at least you know that Regents’ policy has you covered.
This brings up two very large questions: How do you know when a student has used an AI to generate work they purport to be their own? And how do you prevent such incidents from happening in the first place?
The absolute best way to know whether a student is plagiarizing is to be intimately familiar with that student’s writing. Written work is a window into the author’s mind. As you observe your students’ work across a semester, from assignment to assignment and from initial drafts through completed work, you gain insight into their growth and progress, or lack thereof. A sudden change in writing style, a shift in tone, or incorrect claims of fact can all be indicators of plagiarism. Yes, there are tools available at UAF that can help you detect a wide variety of plagiarism; in Canvas, the Turnitin tool is available for any paper assignment submitted as a file upload. But these tools are not perfect, and when used incorrectly they can lead you into situations similar to the one the professor at Texas A&M University–Commerce is in right now.
The problem with any tool such as Turnitin is that detection technology will always lag behind generation technology. We’re in the middle of an evolutionary tech race in which after-the-fact enforcement will always produce poorer results than proactive, educative efforts.
It might very well be time to review your course syllabus. You probably have some boilerplate pointing to the Student Code of Conduct and stating your policy on plagiarism, copied over from a template or a previous course. I’ll wager that your students spend about as much time reading and thinking about it as you did when you rolled out the new semester. Consider making the statement personal and addressing the topic in light of the availability of AI tools such as ChatGPT.
If you have written assessments of any sort in your course, think about incorporating a longer assessment cycle with incremental stages of development. Have students turn in drafts by certain dates, then rewrites, then final versions. Have a dialogue with your students about the changes. Require some self-criticism and metacognitive review. Ask students questions about improvements, different angles of inquiry, more complete primary sources, or newly released information in your discipline. These kinds of conversations, spaced over days and weeks, will give you insight into your students’ thinking, with the major side benefit of improving their work.
This is not the last bit of advice on ChatGPT and other AIs from UAF CTL. Over the summer, we’ll concentrate on this large AI challenge, with a focus on how you can use these new tools constructively. We know this is a challenging time to teach, but we can adapt and we can help our students, all while maintaining academic rigor. We are educators, and we educate. We can accomplish anything together.
Resources
[1] Klee, M. (2023, May 17). Professor flunks all his students after ChatGPT falsely claims it wrote their papers. Rolling Stone. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/
[2] University of Alaska Student Code of Conduct. https://uaf.edu/csrr/student-conduct/
[3] Preventing Academic Dishonesty in Online Courses. UAF Center for Teaching and Learning. https://ctl.uaf.edu/2020/09/08/preventing-academic-dishonesty-in-online-courses/
Dan LaSota
Instructional Designer
Certified QM Peer Reviewer
Certified QM Training Facilitator