Bad brainwaves: ChatGPT makes you stupid
Posted this on a Discord I'm in - one of the near-immediate responses was "I'm glad they made a non-invasive procedure to lobotomise people".
Nothing more to add, I just think that's hilarious
From p. 137:
The most consistent and significant behavioral divergence between the groups was observed in the ability to quote one's own essay. LLM users significantly underperformed in this domain, with 83% of participants (15/18) reporting difficulty quoting in Session 1, and none providing correct quotes. This impairment persisted albeit attenuated in subsequent sessions, with 6 out of 18 participants still failing to quote correctly by Session 3. [...] Search Engine and Brain-only participants did not display such impairments. By Session 2, both groups achieved near-perfect quoting ability, and by Session 3, 100% of both groups' participants reported the ability to quote their essays, with only minor deviations in quoting accuracy.
Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators, etc. It is true that the use of these tools and technologies has probably contributed to a decline in the skills required for activities such as memorization, handwriting, or mental arithmetic. However, I believe there is an important difference with chatbots: while typewriters (or computers) usually produce very readable text (much better than most people's handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source isn't any less correct than information that had been memorized (probably more so), the same couldn't be said about chatbots and LLMs. They aren't known to produce accurate or useful output in a reliable way - therefore many of the skills that are being lost by relying on them might not be replaced with something better.
Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators etc.
Show me a study where they find typewriter users "consistently underperformed at neural, linguistic, and behavioral levels".
No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. It's important to note that I was part of Generation X and, due to innate clumsiness (and being left-handed), I didn't have pretty handwriting even before computers became the norm. But I was berated a lot for that, and computers supposedly made everything worse. It was a bit of a moral panic.
But I admit that this is not comparable to chatbots.
chatbots and LLMs. They aren’t known to produce accurate or useful output in a reliable way - therefore many of the skills that are being lost by relying on them might not be replaced with something better.
Or maybe no skills are being lost: precisely because of that low reliability, the operator still needs to check the output using those very same skills. The LLM just helps with the routine work.
You need expertise to tell where the error is happening, or even to notice it at all.
The main argument against LLMs is that you don't build systematic knowledge of how, e.g., the code works. Moreover, as long as it works correctly, you aren't motivated to look into it and understand just how it does so. That leaves you unable to spot and fix mistakes.
If you are experienced at the task, yes, a quick glance can help you with that. But if you start out with LLM vibe-coding, you skip over a couple of steps, and when confronted with an error you need to apply more effort (learning the things you skipped) to make it right.
Personally, I was seriously ill when our math classes covered the sin/cos/tan/cot basics, and going forward I felt I just didn't get it at all as the class marched on to further subjects built on them. It cost me far more effort to get back on the same page than if I had never been absent. The same could have happened, I believe, if I had been able to upload my homework into an LLM, only to face exams where I couldn't use it anymore, or real-life applications - which, surprisingly, I did come upon.
Thinking through a problem yourself, or taking an idea and putting it into words, is like exercise for the brain. You may think you understand a thing from reading or hearing about it, but it's only when you do it yourself that you discover what you really know and what you don't. It's the difference between learning what a square root actually is and learning how to press the square root button on the calculator. It's the difference between learning to drive and learning to turn on self-driving mode. Even if the outcome is the same, the learning experience is night and day.
Once you understand a concept well enough, then using an LLM to get some busy work done or just get a starting point that you can improve isn't all that bad, much like using a calculator after learning pen-and-paper division, but trying to use one while learning is almost certain to hurt your understanding, even if the LLM doesn't outright make a bunch of stuff up.
Eating shit and not getting sick might be considered a skill, but beyond selling a yoga class, what use is it?
chatbots really are leaded gasoline for zoomers