
### Contemplating the Accusation: Is My Student Truly Guilty of AI-Assisted Cheating?

In the desperate scramble to combat AI, there is a real danger of penalising students who have done nothing wrong.

ChatGPT hype was at its peak when I sat down to mark undergraduate essays in the spring of 2023. Like many educators, I worried that students would outsource their critical thinking to automated systems. The rapid adoption of AI detection software by many universities, including my own, was a direct response to that fear. So when one of the papers came back labeled “100% AI-generated,” my apprehensions seemed swiftly confirmed.

It was disheartening to discover that this first “100% AI-generated” essay belonged to a bright and insightful thinker whose pre-ChatGPT essays had been consistently excellent, if somewhat formulaic. (Essays are marked anonymously, so I only learned whose it was afterwards.)

I found myself in an increasingly common predicament: caught between software and human judgment, with educators and AI detection tools on one side and students and ChatGPT on the other. Under institutional policy, essays flagged with high AI detection scores must be reported for potential academic misconduct, which can culminate in consequences as severe as expulsion. But this exceptional student contested the accusation, pointing out that the university-approved spelling and grammar software they had used includes limited generative AI features similar to ChatGPT’s.

Turnitin, an American educational technology giant and a key player in the academic integrity business, supplied the technology that scrutinized my student’s writing. Before ChatGPT, Turnitin focused primarily on generating “similarity reports,” cross-referencing essays against a repository of online sources and previously submitted student work. A high similarity score does not necessarily indicate plagiarism (some students simply overcite), but it does make copied content easy to identify.

Generative AI is making old-fashioned copy-and-paste cheating obsolete. ChatGPT composes fresh word sequences in response to essay prompts, so its output won’t register in a similarity report. Facing a threat to its business model, Turnitin has developed AI detection tools meant to distinguish essays strung together in the predictable patterns typical of ChatGPT from those written in a more idiosyncratic human style. But the results are hedged: an essay is labeled “X% AI-generated,” while a discreet link beneath the percentage concedes that it only “might be.”

In contrast to the similarity report, which includes source references that instructors can check for plagiarism or over-citation, the AI detection system operates as a black box. ChatGPT has more than 180 million regular users, and it generates distinct (if often repetitive) output for each of them. There is no way to reproduce the exact wording it produced for a given prompt, let alone to predict how a student might have incorporated it. Educators and students are thus locked in an intricate dance with AI. It is now common to find students seeking advice online about evading detection with paraphrasing tools and AI “humanizers,” and, just as often, about how to refute baseless accusations produced by flawed AI detectors.

When my student challenged the AI detection result, I decided to side with human judgment over the machine’s verdict. The student’s defense was compelling, particularly given their consistent writing style in essays that predated ChatGPT. Even so, I was making a high-stakes decision without any concrete corroborating evidence. This student’s ordeal is emblematic of a broader problem across academia.

The fervor around AI has made some educators suspicious of their students by default. It is undeniable that ChatGPT can produce essays that meet university standards. Combined with AI “humanizers,” it could plausibly carry a weaker student through a degree.

However, treating this as an arms race inevitably disadvantages students who need extra support to succeed in an educational system that tends to favor white, middle-class, non-disabled, native English speakers whose parents went to university. Students who fall outside this profile are more likely to rely on tools such as Grammarly and other grammar checkers, which now use generative AI to offer stylistic suggestions. As a result, even original ideas risk being flagged by AI detectors. Honest students can find themselves in a Kafkaesque predicament, with one automated tool accusing them of improper reliance on another.

So how should we proceed? In the frantic, and often futile, race to keep pace with AI, there is a real danger that educators lose sight of the purpose of assigning essays in the first place: giving students the opportunity to demonstrate analytical skill, critical thinking, and the capacity to articulate original viewpoints. This predicament could instead be an opportunity to move away from conventional essay prompts that ChatGPT can effortlessly handle. Educators can incorporate class-generated content, pose questions involving information beyond ChatGPT’s training data, and ask students to engage critically with lectures, podcasts, and videos through reflective writing, all of which foster originality and critical thinking.

Instructors can also feed their own essay prompts to ChatGPT and invite students to critique the resulting output in class, confronting AI head-on rather than policing it at a distance. Broadening the range of assessments could also help universities close the achievement gap, which is partly attributable to traditional assessment methods that favor economically privileged students. The objective should be more than thwarting AI-generated essays.

The knee-jerk reaction of reverting to closed-book written examinations is understandable, but it is not the most effective solution. Such measures pile additional strain on contingent faculty who are already overstretched. We need to take a critical stance towards AI while acknowledging that it is here to stay, especially if we aim to prepare people for a future in which interacting with intelligent machines is inevitable.

That goal is easier to achieve if students arrive at university as open-minded critical thinkers rather than as beleaguered consumers navigating a system defined by stress and financial strain. The panic over AI is, in the end, a symptom of broader upheavals in UK universities. The government’s directive to restrict admissions to “low-value” degrees, which jeopardizes the funding of those programs, is a direct blow to students from working-class and minority-ethnic backgrounds. Responding to AI with harsh penalties based on flawed detection tools risks entrenching the same inequalities. Working with AI, rather than fighting a losing battle against it, may be the only way to avoid the ironic outcome of technological progress deepening existing educational disparities.

Robert Topinka is a senior lecturer in media and cultural studies at Birkbeck, University of London.
