Over a dozen universities are using AI to catch AI — and getting it wrong
www.abc.net.au

In short:
At least a dozen Australian universities are using AI technology to detect cheating, and they're getting it wrong.
Students across the country say the false allegations are costing them money, time and stress.
What's next?
There is no national approach to AI in the university sector and experts say it's time for a rethink.
Here I was thinking that having to run things through Turnitin a decade ago, before LLMs entered the scene, was bad enough. At least back then there seemed to be an understanding that it would always flag some degree of similarity, given how many other uni students were writing on the same topics every year.
Detecting LLM output reliably (i.e. with both high sensitivity and high specificity) seems almost impossible: the whole point of these models is to output what humans commonly write, as captured in the training data. I can see unis having to return to running basically every assessment in person, because even with the privacy nightmare of monitoring software you can't really be sure unsupervised work was actually written by the student.
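To see why imperfect specificity translates into so many false accusations, here is a back-of-envelope base-rate calculation. All the numbers (essay count, cheating rate, detector accuracy) are illustrative assumptions, not figures from the article or from any real detector:

```python
# Base-rate sketch: even a seemingly accurate detector accuses many
# honest students. Every number below is an assumption for illustration.

essays = 10_000          # essays scanned in a semester (assumed)
cheat_rate = 0.05        # fraction actually AI-written (assumed)
sensitivity = 0.90       # detector flags 90% of AI-written essays (assumed)
specificity = 0.98       # detector clears 98% of human-written essays (assumed)

ai_written = essays * cheat_rate
human_written = essays - ai_written

true_positives = ai_written * sensitivity            # cheaters caught
false_positives = human_written * (1 - specificity)  # honest students flagged

# Precision: of all flagged essays, what fraction were actually AI-written?
precision = true_positives / (true_positives + false_positives)

print(f"honest students falsely flagged: {false_positives:.0f}")
print(f"chance a flagged essay is actually AI-written: {precision:.0%}")
```

Under these assumed numbers, 190 honest students get flagged and roughly 3 in 10 accusations are wrong, even though the detector sounds "98% accurate". Because most essays are honestly written, the false positives from that large pool swamp the true positives from the small pool of cheaters.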