
  • Slashdot had an interesting variation on voting. Users were randomly assigned mod points to spend on posts they liked. It didn't happen every day, but when it did you spent those points, and then it was other people's turn the next day. On top of that sat a meta-moderation system, so misused points or trolling could get corrected. I've not seen its like since. A rough sketch of how those pieces could fit together is below.
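
    A minimal Python sketch of that two-layer design. Every name here (`grant_mod_points`, `metamoderate`, the point values) is invented for illustration; this is just the shape of the idea, not Slashdot's actual implementation: random, temporary mod points plus a second layer that scores the moderators themselves.

    ```python
    # Hypothetical sketch of Slashdot-style moderation as described above.
    # All names and numbers are made up for illustration.
    import random
    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str
        mod_points: int = 0   # spendable moderation points
        karma: int = 0        # adjusted later by meta-moderation

    @dataclass
    class Post:
        author: str
        score: int = 0
        moderations: list = field(default_factory=list)  # (moderator, delta)

    def grant_mod_points(users, fraction=0.2, points=5):
        """Some days, a random subset of users receives spendable mod points."""
        for u in random.sample(users, max(1, int(len(users) * fraction))):
            u.mod_points = points

    def moderate(user, post, delta):
        """Spend one mod point to move a post's score up or down."""
        if user.mod_points > 0:
            user.mod_points -= 1
            post.score += delta
            post.moderations.append((user, delta))

    def metamoderate(post, judge_fair):
        """A second layer of users reviews past moderations; unfair ones
        cost the original moderator karma, so misuse self-corrects."""
        for moderator, delta in post.moderations:
            moderator.karma += 1 if judge_fair(delta) else -1

    users = [User(f"user{i}") for i in range(10)]
    post = Post(author="user0")
    grant_mod_points(users)
    for u in users:
        moderate(u, post, random.choice([+1, -1]))
    # toy fairness rule: this round's meta-moderators consider downmods unfair
    metamoderate(post, judge_fair=lambda delta: delta > 0)
    print(post.score, [(u.name, u.karma) for u in users if u.karma])
    ```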

  • I knew a very smart manager who quit smoking but still hung out in the smoking area just to stay in touch with everything. I've learned more in ten-minute conversations while smoking with coworkers than in entire week-long seminars.

  • Good example and well explained. We should team up on a book on science for lay people!

    Your point about specifying the null hypothesis and the p value up front is very important. Another way studies can fail is if you pick 20 different variables, like you mentioned, and then look to see if any of them give you p<0.05. So in your example, we measure smiling against being told jokes plus 19 other factors: the weather, the day of the week, what color clothes the person is wearing, what they had for breakfast, etc. Due to chance alone, about one of those 20 is going to look relevant. You're essentially doing 20 experiments in one, so you'll likely get a spurious result you can report as "success" (there's a small simulation at the end of this comment showing exactly that).

    Experimental design is tough and it’s hard to grok until you’ve had to design and run your own experiment including the math. That makes it easy for people to pass off bad science as legitimate, whether accidentally or on purpose. And it’s why peer review is important, where your study gets sent to another researcher in your field for critique before publication.

    There’s other things besides bad math that can trip you up like correlation vs causation, and how the data is gathered. In the above example, you might try to save money by asking subjects to self report on their smiling. But people are bad at doing that due to fallible memory and bias (did that really count as a full smile?). Ideally you want to follow them around and count yourself, with a clear definition of what counts as a smile. Or make them wear a camera that does facial recognition. But both of those cost more money than just handing someone a piece of paper and a pencil and hoping for the best. That’s why you should always be extra suspicious of studies that use self reporting. As my social psych prof said, surveys are the worst form of data collection. It’s what makes polling hard because what people say and what they do are often entirely different things.

  • Reminds me of reading the print version of Infinite Jest by David Foster Wallace, where you needed one bookmark for the novel and another for the endnotes, which made up like 20% of the book. Hopefully e-readers make that easier now.

  • Well said, and re: quality escape, it implies at least two errors: the initial mistake, and QA failing to catch it (assuming there is a QA process). We're fallible humans, so we need a QA system to catch the inevitable errors. How tight the QA layer is depends on how mission-critical the software or hardware is. A web app is going to have a lighter process than a system where failure means death or disability.

  • I like that and will start using it. We're all pretty helpless after birth and before death, so being able-bodied is just a temporary phase in the middle, for those lucky enough not to be born with a disability or acquire one along the way.

  • Research suggests being poor effectively shaves about 13 points off your IQ, due to the stress and extra cognitive load of having to make tough decisions about every little thing. Those points come back if you're lucky enough to improve your station in life. Meanwhile, the lost brainpower increases the likelihood of bad decisions that make your life worse, and the cycle continues.

    https://www.reuters.com/article/idUSBRE97S10Y/

  • p<0.05 means that if there were no real effect, you'd see a result at least this extreme less than 1 in 20 times by pure chance. It's the most common threshold for a result to be considered statistically significant, but you'll also see p<0.01 or smaller thresholds when researchers want stronger evidence, i.e., data that extreme would happen by chance less than 1 in 100 times. A smaller p value is better, but it generally means you need a larger data set, which costs more of your experiment budget to recruit subjects, buy equipment, and pay salaries. Gotta make those grant budgets stretch, so researchers go with 1 in 20 since it's the accepted standard.
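
    To make the budget point concrete, here's a hedged Python sketch (numpy/scipy, with made-up numbers) of why a stricter threshold demands more subjects: it estimates by simulation the group size needed to reliably detect a small real effect at p<0.05 vs. p<0.01:

    ```python
    # Simulated power analysis: how many subjects per group do you need
    # to detect a small true effect 80% of the time at each threshold?
    # The effect size and all data here are invented for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    effect = 0.4  # hypothetical true group difference, in std-dev units

    def power(n, alpha, trials=1000):
        """Fraction of simulated experiments (n subjects per group) that
        detect the true effect at the given significance threshold."""
        hits = 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)
            treated = rng.normal(effect, 1.0, n)
            if stats.ttest_ind(control, treated).pvalue < alpha:
                hits += 1
        return hits / trials

    for alpha in (0.05, 0.01):
        n = 10
        # grow the sample until the effect is detected 80% of the time
        while power(n, alpha) < 0.80:
            n += 10
        print(f"alpha={alpha}: roughly {n} subjects per group for 80% power")
    ```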

  • p<0.05 means that, on average, 1 in 20 studies of a nonexistent effect will look significant just by chance. If you have 20 researchers studying the same thing, the 19 who get non-significant results don't get published and get thrown in the trash, while the one who gets a "result" sees the light of day.

    That's why publishing negative results is important, but it's rarely done, because nobody gets credit for a failed experiment. It's also why it's important to wait for replication. One swallow does not make a summer, no matter how much breathless science reporting happens whenever someone announces a positive result from a novel study. (There's a quick simulation of this file-drawer effect below the TL;DR.)

    TL;DR - math is hard
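
    A hedged Python sketch of that file-drawer effect (numpy/scipy; the setup and all data are simulated for illustration). Twenty teams study an effect that doesn't exist, and on average about one of them gets a publishable "positive" result:

    ```python
    # Publication bias in miniature: only studies clearing p < 0.05 get
    # "published", even though the true effect is exactly zero.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def run_literature(teams=20, n=30):
        """Simulate `teams` independent studies of a zero effect and
        count how many clear p < 0.05 (i.e., would get published)."""
        published = 0
        for _ in range(teams):
            control = rng.normal(0.0, 1.0, n)  # both groups identical:
            treated = rng.normal(0.0, 1.0, n)  # the true effect is zero
            if stats.ttest_ind(control, treated).pvalue < 0.05:
                published += 1
        return published

    runs = 500
    avg = sum(run_literature() for _ in range(runs)) / runs
    print(f"average 'positive' studies per 20 null studies: {avg:.2f}")
    # prints roughly 1.0: one spurious success per 20 attempts, and
    # that's the only one that makes the news
    ```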