I think the issue with this particular tool is that it can authoritatively present incorrect, entirely fabricated information, or a gross misinterpretation of factual information.
In any field I've worked in, I've often had to refer to reference material because I simply can't remember everything. I have to use my experience and critical thinking skills to determine whether I'm using the correct material. What I've never had to do is determine whether my reference material simply made up a convincing, "correct sounding" answer. Yes, references accrue errors and corrections over time, but never before has an entire reference been suspect and yet remained in use.
I maintain that AI companies could improve their products enormously by simply forcing them to prefix "I think" to all statements. It's a bit like how calculators shouldn't show more digits than they can confidently produce: if the precision is only 4 decimals, don't show 8.
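The calculator analogy can be sketched in a few lines: internally the value has far more digits than the measurement justifies, so the display truncates to the precision we can actually stand behind. `display_to_precision` is a hypothetical helper for illustration, not a real API.

```python
def display_to_precision(value: float, decimals: int) -> str:
    """Show only as many decimal places as the precision justifies."""
    return f"{value:.{decimals}f}"

raw = 2.0 / 3.0  # internally: 0.6666666666666666
print(display_to_precision(raw, 4))  # shows 0.6667, not all 16 digits
```

The point is the same for an AI: the internal computation may produce a fluent, detailed answer, but the output shouldn't present more certainty than the model can actually back up.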
Imagine an AI with a model trained exclusively on a specific set of medical books, the same set all doctors already have access to. While there's still room for error, it would guide the doctor to a very familiar reference. No internet junk, social media, etc.
Exactly as you say. It's a tool, not a replacement. Certainly not in healthcare anyway.
Then you were probably already at risk of a misstep. If someone doesn't have time to think about the output, they probably didn't have time before AI came along either, so the AI isn't really adding to the issue here.