Right, but the whitespace between instructions wasn't whitespace at all but white text on white background instructions to poison the copy-paste.
Also the people who are using chatGPT to write the whole paper are probably not double-checking the pasted prompt. Some will, sure, but this isnt supposed to find all of them its supposed to catch some with a basically-0% false positive rate.
Yeah knocking out 99% of cheaters honestly is a pretty good strategy.
And for students, if you're reading through the prompt that carefully to see if it was poisoned, why not just put that same effort into actually doing the assignment?
Maybe I'm misunderstanding your point, so forgive me, but I expect carefully reading the prompt is still orders of magnitude less effort than actually writing a paper?
Eh, putting more than minimal effort into cheating seems to defeat the point to me. Even if it takes 10x less time, you wasted 1x or that to get one passing grade, for one assignment that you'll probably need for a test later anyway. Just spend the time and so the assignment.
For the same reasons, really. People who already intend to thoroughly go over the input and output to use AI as a tool to help them write a paper would always have had a chance to spot this. People who are in a rush or don't care about the assignment, it's easier to overlook.
Also, given the plagiarism punishments out there that also apply to AI, knowing there's traps at all is a deterrent. Plenty of people would rather get a 0 rather than get expelled in the worst case.
If this went viral enough that it could be considered common knowledge, it would reduce the effectiveness of the trap a bit, sure, but most of these techniques are talked about intentionally, anyway. A teacher would much rather scare would-be cheaters into honesty than get their students expelled for some petty thing. Less paperwork, even if they truly didn't care about the students.
No, because they think nothing of a request to cite Frankie Hawkes. Without doing a search themselves, the name is innocuous enough as to be credible. Given such a request, an LLM, even if it has some actual citation capability, currently will fabricate a reasonable sounding citation to meet the requirement rather than 'understanding' it can't just make stuff up.