I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI by both teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year, I began editing all my essay rubrics to include a line saying that all essays must show evidence of drafting and editing in the Google Doc’s version history, and that any essay that appears all at once in the history will not be graded.
It’s not that hard. Just scroll through the editing history. You can even look at timestamps to see if the student actually spent any time thinking and editing or just re-typed a ChatGPT result word for word all in one go. Creating a plausible fake editing history isn’t easy.
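If anyone wanted to do the same check programmatically instead of scrolling by hand, here is a rough sketch using the Google Drive API to pull revision timestamps for a Doc and flag files whose entire history lands in one short burst. This assumes you have API access to the student's file (here via a hypothetical service-account JSON), and `FILE_ID` and the thresholds are placeholders; note the Drive API exposes a coarser revision list than the fine-grained history you see in the Docs UI, so it's a first-pass filter, not a replacement for looking.

```python
# Sketch: list revision timestamps for a Google Doc and flag a suspiciously
# compressed editing history. Assumes google-api-python-client plus a service
# account with read access to the file; FILE_ID, the credentials path, and
# the thresholds below are all placeholders.
from datetime import datetime

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
FILE_ID = "YOUR_DOC_ID_HERE"  # placeholder

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
drive = build("drive", "v3", credentials=creds)

resp = drive.revisions().list(
    fileId=FILE_ID, fields="revisions(id,modifiedTime)"
).execute()
revisions = resp.get("revisions", [])

# Parse RFC 3339 timestamps like "2025-01-15T14:03:22.123Z".
times = sorted(
    datetime.fromisoformat(r["modifiedTime"].replace("Z", "+00:00"))
    for r in revisions
)

span_minutes = (times[-1] - times[0]).total_seconds() / 60 if times else 0
print(f"{len(times)} revisions spanning {span_minutes:.0f} minutes")

# Heuristic only: very few revisions in a very short window is a prompt for
# a conversation with the student, not proof of anything.
if len(times) < 3 or span_minutes < 20:
    print("Flag for review: history appears all at once.")
```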
In college (25+ years ago) we were warned that we couldn’t trust Wikipedia and shouldn’t use it. And, yes, it was true back then that you had to be careful with what you found on Wikipedia, but it was still an incredible resource for finding sources.
My 8-year-old came home this year saying they were using AI, and I used it as an opportunity to teach her how to properly use an LLM, and how to be very suspicious of what it tells her.
She will need the skills to efficiently use an LLM, but I think it's going to be on me to teach her that because the schools aren't prepared.