Using AI to filter applicants must be fraught with legal risk: discrimination, unknown biases in the model, all kinds of ways to screen someone out illegally. Hopefully someone loses a big lawsuit and scares the industry into sense.
I can't remember the specific court case, but I think it involved a self-driving car hitting a pedestrian.
A cursory Google search suggests companies are now being held liable for AI hallucinations, and there's a pending court case where an AI chatbot allegedly encouraged a teen to commit suicide.
Given our current administration and the makeup of SCOTUS, I'm not holding my breath.