Chicago Sun-Times caught publishing likely ChatGPT generated list of Summer Reading recommendations. 10 of 15 books don't exist.
Wait, so ChatGPT can't even compile a fucking list of books without making up two-thirds of its response out of thin air?
I don't really see the appeal of using AI if it's going to take more time and effort to fact-check its responses because the failure rate is so high.
Now you're getting it
That's because despite what AI companies keep trying to ram down people's throats, it's not built to compile facts
You just run the output back through and ask it to fact check for you. Problem solved!
My company paid for some people to go to one of these "accelerate your company with AI" seminars - the recommendation that the "AI Expert" gave was to ask the LLM to include a percentage of how confident it was in its answers. I'm technical enough to understand that that isn't how LLMs work, but it was pretty scary how people thought that was a reasonable, sensible idea.
LLM with a memory now: Yes, these books all exist and are highly recommended. I hear the Chicago Sun-Times is considering putting all of them on their summer reading list.
"Writer": (stopped reading at the word exist) print it!
I’m a newspaper editor. The people who are/were most excited about this tech, also happened to be the folks who did none of the actual writing to begin with.
We had sales folks gleefully hand us texts for advertisers that they'd 'written' with ChatGPT. Those texts contained so much wrong info, it wasn't even funny. It made things up and got websites, contact info, that sort of thing, wrong.
But since the sales monkeys weren't actual writers, they didn't catch on to that. Meanwhile, we were spending more time fact-checking and unfucking their texts than if we'd written them ourselves in the first place.
It CAN be helpful to shorten or rearrange already written things, but if you ask it to write from scratch, it’s usually not going to be good.
I don't see how it fucked this up so badly. One of the few things I use AI for is book recommendations, and I have yet to be recommended a nonexistent book.
I treat LLM responses like I do random internet advice. Trust, but verify. Pretty light on the trust part lol.
I treat it more as distrust but verify. Sometimes it's right, but it has made shit up enough times that it doesn't get my trust by default. Sometimes it can lead me to search for the right thing, though, so it is occasionally useful. I rarely use it, and when I do, I run it locally.
I consider LLMs to be "bullshit generators"
If the situation calls for only BS, an LLM is great for it. Anything else, not so much.