Professors ‘Miss Old-School Cheating’ In The AI Era

A couple of months ago, Dr Jonathan Fine – a lecturer in German Studies – shared an X post that made me laugh, then wince.

“I love normal cheaters now,” the academic wrote. “A student admitted to getting help from a person on an assignment, and I didn’t even penalise him because I was just so happy it wasn’t AI.”

It’s a sentiment I’ve seen echoed by other professors since.

And in a recent TikTok, Dr Steven Buckley, a lecturer in Media Digital Sociology who also helps his university to assess cases of academic misconduct, shared that he’d seen a dissertation with what appeared to be “hallucinated” references.

We spoke to Dr Fine and Dr Buckley about their experiences.

I love normal cheaters now. A student admitted to getting help from a person on an assignment, and I didn’t even penalize him because I was just so happy it wasn’t AI.

— Jonathan Fine (@jonathanbfine) April 25, 2025

Dr Fine says he’s been “paranoid” about machine learning for a long time

Dr Fine, who teaches in a language not native to most of his students, says he doesn’t think his experiences are typical of those teaching the humanities.

“I teach German, so I’ve been paranoid about machine translation ever since I started teaching,” he said.

“That’s always been my default attitude when reading student writing, so AI hasn’t been a major change for me or a big loss of trust in the students.”

Still, he says, “I don’t allow students to use AI, and I tell them at the beginning of class how awkward the conversation is when they’re caught, but they use it anyway.”

He assigns a lot of in-class writing, which makes it obvious when a student has used AI; the lecturer says it reads very differently from their usual work.

“When I catch a student using AI, I try to use it as a teaching moment,” he told HuffPost UK.

“I talk to the students about how my job is to help them improve, but I really can’t help a computer. If students were to reoffend, then I’d have to escalate the situation as a violation of academic policies.”

Every justification I come across for the widespread academic use of artificial intelligence presupposes an ideal student user that does not exist. https://t.co/5TF4YNaYpe

— Jonathan Fine (@jonathanbfine) June 19, 2025

Dr Buckley says about 70% of papers he’s seen flagged for academic misconduct seem to involve AI

Dr Buckley, who stressed he was speaking from a purely personal perspective, said that he’s seen the use of AI grow “rapidly” in the past two to three years.

Though his experience with AI has only been in the humanities (Dr Buckley said it may be different for STEM subjects), he says AI “has forced me and many of my colleagues to reconsider how we evaluate what students have learnt in our modules and certainly change the type of assessments we use.”

It’s encouraged some to consider more in-person exams or to place higher weight on in-lecture presentations, he claimed.

“I personally consider the use of AI to be a huge problem as it essentially outsources the learning process for students,” he added (a recent MIT study suggested that those who use generative AI may lose critical thinking skills).

“These days, many students simply do not do any of the reading of the actual academic material and instead rely on things like ChatGPT or NotebookLM to provide a summary. These summaries are often at best very shallow and at worst are totally incorrect.”

Nonetheless, he said, some students accept these summaries uncritically.

Of all the cases of potential academic misconduct flagged to him, he says, “at least 70% of cases involve the use of generative AI in some form.”

His university does not use AI detectors when checking student work, as they can be inaccurate.

“When a member of staff suspects use of AI, they typically report it, and after following a process, it may end up coming to an independent panel in the school, which I sit on.

“I then personally go through the essay, checking things like quotes, references, use of language, etc, to come up with a set of questions to ask the student.”

He looks for red flags like hallucinated references. Sometimes when asked about their paper, students suspected of cheating using AI “do not know or understand” the theories and arguments in their essays.

“Plus,” the lecturer added, “they often read like shit.”


