I think the issue with this particular tool is that it can authoritatively present incorrect, entirely fabricated information, or a gross misinterpretation of factual information.
In any field I've worked in, I've often had to refer to reference material, as I simply can't remember everything. I have to use my experience and critical thinking skills to determine whether I'm using the correct material. What I have never had to do is go a step further and determine whether my reference material has simply made up a convincing, "correct-sounding" answer. Yes, material picks up errors and corrections over time, but never has an entire reference been suspect while remaining in use.
Imagine an AI model trained exclusively on a specific set of medical books, the same set all doctors already have access to. While there's still room for error, it would guide the doctor to a very familiar reference. No internet junk, no social media, etc.
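To make that concrete, here's a minimal sketch of what "restricted to a fixed set of books" could look like in practice: retrieval limited to a trusted corpus, so whatever the system surfaces points back to a familiar, citable reference. Everything in it is hypothetical; the placeholder passages, the retrieve() helper, and the TF-IDF ranking are just one way such grounding could be wired up.

```python
# A minimal sketch of "an AI grounded in a fixed reference corpus":
# instead of letting the model draw on the open internet, every answer
# must come from a passage retrieved from a trusted set of documents.
# (Corpus contents here are placeholders, not real medical text.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical passages standing in for the agreed set of medical books.
corpus = [
    "Passage from reference book A about streptococcal pharyngitis.",
    "Passage from reference book B about differential diagnosis of sore throat.",
    "Passage from reference book C about antibiotic dosing in adults.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k corpus passages most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [corpus[i] for i in ranked]

# The retrieved passages would then be shown to the doctor (or fed to a
# language model as its only allowed context), so the source is always
# a familiar, citable reference rather than internet junk.
for passage in retrieve("patient presenting with sore throat"):
    print(passage)
```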
Exactly as you say. It's a tool, not a replacement. Certainly not in healthcare anyway.
You were probably already at risk of a misstep then. If someone doesn't have time to think about the output, they probably didn't have time before AI came along either, so the AI isn't really adding to the issue here.
The best use of AI at the moment is as a tool to search and present data faster than humanly possible, not to act upon the findings blindly.
It's not as easy as saying anyone using AI should be fired. There needs to be a more nuanced approach to this. It wholly depends on what the GP did with the information the AI presented.
An example: back in the day GPs had a huge book of knowledge they would defer to, one that was peer-reviewed and therefore trusted. If you came in with an odd symptom, they'd spend time (often in front of you) flipping through the book to find that elusive disease they'd read about that one time at university. Later that knowledge moved to a traditional search engine. Why wouldn't you now use AI to make that search faster? The AI can easily be trained on this same corpus of knowledge.
Of course the GP should double-check what they are being told. But simply using AI is not the problem you make it out to be. If you have a corpus of knowledge and the GP uses it in a dangerous way, then the GP should be fired. But you don't then burn the book they found the information in.
I think the difference here is that medical reference material is based on a long process of proven research. It can be trusted as a reliable source of information.
AI tools however are so new they haven’t faced anything like the same level of scrutiny. For now they can’t be considered reliable, and their use should be kept within proper medical trials until we understand them better.
Yes, human error will always be an issue, but putting that on top of the currently shaky foundations of AI only compounds the problem.
'Everyone anywhere'? That's an amazingly broad statement. What're you defining as 'using one'? If I use ChatGPT to rewrite a paragraph, should I be fired? What about a non-native speaker who uses it to remove grammatical errors from an email, should they be fired? How about using it to help with coding errors? Or generating draft product marketing copy? Or summarising content for third parties to make it easier to understand? Still a fireable offence? How about generating insights from data? Assistance with roadmap prioritisation? Generating summaries of meeting notes or presentations? Helping users with learning disabilities understand complex information? Or helping them with letters, emails, etc.? How about if I use it to remind me of tasks? Or to manage my routines?
It depends purely on how it's used. Used blindly, yes, it would be a serious issue. It should also not be used as a replacement for doctors.
However, if they could routinely put symptoms into an AI and have it flag potential conditions, that would be powerful. The doctor would still be needed to sanity-check the results and implement things. If it caught rare conditions or early signs of serious ones, that would be a big deal.
AI excels at pattern matching. Letting doctors use it to do that efficiently, to work beyond their current knowledge base, is quite a positive use of AI.
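As a toy illustration of that pattern matching, here's a sketch of the "flag potential conditions from symptoms" idea: rank conditions by how much of their symptom profile the patient reports. The condition table is invented for illustration; a real system would draw on a vetted clinical knowledge base, and the output is a ranked list for the doctor to sanity-check, not a diagnosis.

```python
# A toy sketch of symptom-to-condition flagging: match a patient's
# reported symptoms against a table of conditions and rank conditions
# by overlap. The table below is invented purely for illustration.

CONDITIONS: dict[str, set[str]] = {
    "strep throat": {"sore throat", "fever", "swollen lymph nodes"},
    "common cold": {"sore throat", "runny nose", "cough"},
    "mononucleosis": {"sore throat", "fever", "fatigue", "swollen lymph nodes"},
}

def flag_conditions(symptoms: set[str]) -> list[tuple[str, float]]:
    """Rank conditions by the fraction of their symptom profile the patient reports."""
    results = []
    for condition, profile in CONDITIONS.items():
        overlap = len(symptoms & profile) / len(profile)
        if overlap > 0:
            results.append((condition, overlap))
    return sorted(results, key=lambda pair: pair[1], reverse=True)

# The output is a ranked list for the doctor to review, not a diagnosis.
for condition, score in flag_conditions({"sore throat", "fever", "fatigue"}):
    print(f"{condition}: {score:.0%} of profile matched")
```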
Using Generative AI as a substitute for professional judgement is a disaster waiting to happen. LLMs aren't sentient and will frequently hallucinate answers. It's only a matter of time before incorrect output leads to catastrophic consequences, and the idiot who trusted the LLM, not the LLM itself, will be responsible.
The headline and the article are completely mismatched.
Basically all the article is saying is that doctors sometimes use AI. Which is a bit like saying sometimes doctors look things up in books. Yeah, course they do.
If somebody comes in with a sore throat and the AI suggests morphine, the doctor is probably smart enough not to prescribe it, so I don't really think there's a major issue here. They are skilled medical professionals; they're not blindly following the AI.