TIL about the Gell-Mann amnesia effect: when experts find articles published within their field to be full of errors, but still trust articles about other fields in the same publication
I noticed something similar on websites like Reddit. I've come across an answer to a question about something I'm well educated in, and the answer is definitively wrong but "sounds correct". The Reddit community will upvote them, and even downvote people who try correcting them.
But then later on I'll come across a post on a topic I don't know, and I'm inclined to believe the answers because they sound right and there's a group consensus backing them up.
“The Reddit community will upvote them, and even downvote people who try correcting them.”
Yeap..... Especially with any topic where there's a big hobbyist community.
I work in orthotics and prosthetics for a university hospital as both an educator and a healthcare provider. I can't tell you how many times I've been downvoted by 3D printer enthusiasts for critiquing untrained and uneducated people fitting children with medical devices that can severely injure or debilitate them.
This is my experience with AI, specifically ChatGPT.
If I ask it questions about how to do technical things I already know how to do, ChatGPT often comes off as wildly inept.
However, if I ask it something I don’t know and start working through its recommended process, more often than not I end up where I want to end up, even with missteps along the way.
This was a concern of mine with companies training AI on Reddit. Both Reddit and AI have the same problem of confidently presenting false info in a way that sounds true, so training AI on Reddit seems like it would really compound the issue.
Don't be too scared, but... the same thing is happening on Wikipedia. I realized it when I tried to correct something benign in an article (a claim that a motorcycle was the brand's first road-legal model in 40 years), pointed to an article confirming my correction (coverage of another road-legal model the same brand had released 5 years prior), and my edit got deleted.
I then went looking and found an article by a subject-matter expert who argued with people on Wikipedia for over a year before just giving up, because they wouldn't accept that a bunch of sources all quoting one wrong source didn't make the information true.
I get this with 5-10 minute educational-type YouTube videos. When it’s a topic I know, it’s obvious they just slightly reworded the Wikipedia entry or pulled headlines from Google results. But when I don’t know the topic, I’m tempted to parrot the information without checking it.
I get this with Wikipedia articles. I have to force myself to click through the links provided and check the reliability of the sources. They're usually fine, but every once in a while you find something questionable snuck in there.
People upvote you if you sound right or confident and you’re early to post. Later posts don’t get the same number of eyeballs as earlier ones, so any correction generally won’t receive the same number of votes.
Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
That is the Gell-Mann Amnesia effect. I'd point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all. But when it comes to the media, we believe against evidence that it is probably worth our time to read other parts of the paper. When, in fact, it almost certainly isn't. The only possible explanation for our behavior is amnesia.
Sure, we typically discount everything that a single unreliable individual says. But a newspaper is not one person — it's a collection of articles from different authors. If the science articles are inaccurate, that doesn't mean the political articles will be!
The point is that there's no reason to trust anything the paper says. However, that doesn't go far enough.
If you read an article in a paper about something you have direct knowledge of, and you can confirm the article is factually correct, that still doesn't mean anything else in the paper can be trusted.
You can't really trust anything. For all you know, I'm a guinea pig who managed to steal a cell phone to post on the Internet. I'm not, of course. That would be impossible. However, how would you know?
“In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say.” — Sadly, this part has been solidly disproven.
Journal quality can buffer this by attracting better reviewers. (MDPI shouldn't be seen as having peer review at all, but peer review at the best journals is usually good enough to weed out bad papers, since professors want to say on their annual merit evaluations that they're doing the most service to the field by reviewing for the best journals.) But that gets offset by the institutional prestige of authors when peer review isn't double-blind. I've seen some garbage published in top journals by people of the caliber of Harvard professors (thinking of one in particular) because reviewers use institutional prestige as a heuristic.
When I'm teaching new grad students, I tell them exactly what you said, with the exception that they can use field-recognized journal quality (not shitty metrics like impact factor) as a relative heuristic until they can evaluate methods for themselves.
The original describes a newspaper, and those are written by multiple people. Even the editors are different. For example, I trust the Associated Press more than my local paper.