ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans
Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.
ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.
I’m still confused that people don’t realize this. It’s not an oracle. It’s a program that generates sentences word by word based on statistical analysis, with no concept of fact-checking. What's worse is that it took an actual study, rather than simple common sense, for people to acknowledge that ChatGPT is happy to just make stuff up.
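To make the "word by word" point concrete, here's a toy sketch of autoregressive sampling (this is not GPT's actual internals, just an illustration of the description above): the program repeatedly picks a likely next word and appends it, and at no step does anything check the output against reality.

    import random

    # Toy "language model": maps the current word to candidate next words with
    # made-up probabilities. A real LLM scores ~100k tokens with a neural net,
    # but the control flow is the same: score, sample, repeat.
    TOY_MODEL = {
        "<start>":   [("cisplatin", 0.5), ("surgery", 0.5)],
        "cisplatin": [("plus", 0.6), ("alone", 0.4)],
        "plus":      [("etoposide", 0.7), ("moonbeams", 0.3)],  # fluent-but-wrong picks are allowed
    }

    def generate(context="<start>", max_tokens=4):
        words_out = []
        for _ in range(max_tokens):
            candidates = TOY_MODEL.get(context)
            if not candidates:
                break
            words, probs = zip(*candidates)
            context = random.choices(words, weights=probs)[0]
            words_out.append(context)
        return " ".join(words_out)

    print(generate())  # e.g. "cisplatin plus moonbeams" -- nothing fact-checks it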
Well, it's a good thing absolutely no clinician is using it to figure out how to treat their patient's cancer.... then?
I imagine it also struggles when asked to go to the kitchen and make a cup of tea. Thankfully, nobody asks this, because it's outside of the scope of the application.
These studies are for the people out there who think ChatGPT thinks. It's a really good email assistant, and it can even get basic programming questions right if you're detailed with your prompt. Now everyone stop trying to make this thing like Finn's mom in Adventure Time and just use it to help you write a long email in a few seconds. Jfc.
People really need to get it into their heads that AI can "hallucinate" random information and that any deployment of an AI needs a qualified human overseeing it.
People really need to understand what LLMs are, and also what they are not. None of the messianic hype or even use of the term “AI” helps with this, and most of the ridiculous claims made in the space make me expect Peter Molyneux to be involved somehow.
This is just stupid clickbait. Would you use a screwdriver as a hammer? No. Of course not. Anyone with even a little bit of sense understands that GPT is useful for some things and not others. Expecting it to write a cancer treatment plan is just outlandish.
Even GPT says: "I'm not a substitute for professional medical advice. Creating a cancer treatment plan requires specialized medical knowledge and the input of qualified healthcare professionals. It's important to consult with oncologists and medical experts to develop an appropriate and effective treatment strategy for cancer patients. If you have questions about cancer treatment, I recommend reaching out to a medical professional."
Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.
This is the second paper I've seen recently that complains ChatGPT is crap while using GPT-3.5. There is a world of difference between 3.5 and 4. Unfortunately, news sites aren't savvy enough to pick up on that and just run with "ChatGPT sucks!" Also, it's not even ChatGPT if they're using that model. The paper is wrong (or it's old), because there's no way to use that model in the ChatGPT interface, and I don't think there ever was. It was probably the ChatGPT 0301 snapshot or something, which is (afaik) slightly different.
Anyway, tldr, paper is similar to "I tried running Diablo 4 on my Windows 95 computer and it didn't work. Surprised Pikachu!"
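For what it's worth, "gpt-3.5-turbo-0301" is a dated API snapshot name, not something you can select in the ChatGPT web UI, so a study pinning that exact model would have to go through the API. A rough sketch of what that looks like with the pre-1.0 openai Python package (the prompt is mine, not the study's):

    import openai  # assumes openai<1.0, the library version current when the 0301 snapshot shipped

    openai.api_key = "sk-..."  # your API key

    # "gpt-3.5-turbo-0301" pins a specific snapshot; the ChatGPT web UI only
    # exposes "GPT-3.5" / "GPT-4" and chooses the snapshot for you.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[
            # Illustrative prompt only -- not the wording used in the paper.
            {"role": "user", "content": "Provide a treatment plan for stage III NSCLC."},
        ],
        temperature=0,
    )
    print(response["choices"][0]["message"]["content"])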
I suppose most sensible people already know that ChatGPT is not the answer for medical diagnosis.
Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.
If the researchers wanted to investigate whether an LLM is helpful, they should have built a model specifically trained on cancer treatment plans on top of GPT-4/3.5 and tested that thoroughly, instead of just entering prompts into the off-the-shelf model available from OpenAI.
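If the point is that an off-the-shelf chatbot is the wrong thing to evaluate, the alternative would look roughly like fine-tuning on a vetted set of guideline-concordant plans first and then testing that model. A hedged sketch with the pre-1.0 openai package (the dataset name and contents are placeholders, not a real resource):

    import openai  # assumes openai<1.0

    openai.api_key = "sk-..."

    # 1) Upload a JSONL file of vetted chat examples, e.g. guideline question ->
    #    guideline-concordant plan. "oncology_plans.jsonl" is a placeholder name.
    uploaded = openai.File.create(
        file=open("oncology_plans.jsonl", "rb"),
        purpose="fine-tune",
    )

    # 2) Start a fine-tuning job on top of gpt-3.5-turbo, then evaluate the
    #    resulting model against the guidelines before drawing any conclusions.
    job = openai.FineTuningJob.create(
        training_file=uploaded["id"],
        model="gpt-3.5-turbo",
    )
    print(job["id"])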
I asked a layperson to spend a week looking at medical treatment plans and related information on the internet, then asked him to guesstimate a treatment plan for my actual cancer patient. How could he have got it wrong!
That's how I translate all these "AI language model says..." stories: bullshit.
GPT has been utter garbage lately. I feel as though it's somehow become worse. I use it as a search engine alternative and it has RARELY been correct lately. I will respond, telling it that it's incorrect, and it will keep generating even more inaccurate answers... It's gotten to the point where it's almost entirely useless, where it used to at least find some of the correct information.
I don't know what they did in 4.0 or whatever it is, but it's just plain bad.
According to the study, which was published in the journal JAMA Oncology and initially reported by Bloomberg, one-third of the large language model's responses contained incorrect information when it was asked to generate treatment plans for a variety of cancer cases.
The chatbot sparked a rush to invest in AI companies and an intense debate over the long-term impact of artificial intelligence; Goldman Sachs research found it could affect 300 million jobs globally.
Famously, Google's ChatGPT rival Bard wiped $120 billion off the company's stock value when it gave an inaccurate answer to a question about the James Webb Space Telescope.
Earlier this month, a major study found that using AI to screen for breast cancer was safe, and suggested it could almost halve the workload of radiologists.
A computer scientist at Harvard recently found that GPT-4, the latest version of the model, could pass the US medical licensing exam with flying colors – and suggested it had better clinical judgment than some doctors.
The JAMA study found that 12.5% of ChatGPT's responses were "hallucinated," and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.
The original article contains 523 words, the summary contains 195 words. Saved 63%. I'm a bot and I'm open source!
Clickbait written by an idiot who doesn't understand technology. I guess they give out journalism degrees to anyone who can write a top-10 BuzzFeed article.
It speeds things up for people who know what they're talking about. The doctor asking for the plan could probably argue a few of the errors and GPT will say "oh you're right, I'll change that to something better" and then it's good to go.
Yes, you can't just rely on it to be right all the time, but you can often use it to find the right answer through a short conversation, which is quicker than doing it alone.
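In API terms, that "short conversation" is just appending the expert's correction to the message history and asking again; a rough sketch (pre-1.0 openai package, placeholder prompts):

    import openai  # assumes openai<1.0

    openai.api_key = "sk-..."

    # Running message history: the expert reviews the draft, pushes back on an
    # error, and the model revises on the next turn.
    messages = [{"role": "user", "content": "Draft a project plan for X."}]  # placeholder task

    draft = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    messages.append(draft["choices"][0]["message"])  # keep the model's draft in context

    # The human in the loop supplies the domain knowledge the model lacks.
    messages.append({"role": "user", "content": "Point 3 is wrong because Y; revise it."})

    revised = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(revised["choices"][0]["message"]["content"])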
I recently won a client in my industry with GPT's help.
I personally think I'm very knowledgeable in what I do, but to save time I asked what I should be looking out for, and it gave me a long list of areas to consider in a proposal. That list alone was a great starting block to get going. Some of the list wasn't relevant to me or the client, so it had to be ignored, but the majority of it was solid and started me out an hour ahead, essentially tackling the planning stage for me.
To someone outside of my industry, if they used that list verbatim, they would have brought up a lot of irrelevant information and covered topics that would make no sense.
I feel it's a tool or partner rather than a replacement for experts. It helps me get to where I need to go quicker, and it's fantastic at brainstorming ideas or potential issues in plans. It takes some of the pressure off as I get things done.
I thought it released in 2021. Maybe it was on the cusp. I was basically using it to find what I couldn't seem to find in the docs. It's definitely replaced my rubber ducky, but I still have to double-check it after my Unity experience.