
Stubsack: weekly thread for sneers not worth an entire post, week ending 11th May 2025

awful.systems/post/4167045

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

195 comments
  • Ran across a Bluesky thread which caught my attention - it's nothing major, it's just about how gen-AI shaped one rando's view of the droids of Star Wars:

    Generative AI has helped me to understand why, in Star Wars, the droids seem to have personalities but are generally bad at whatever they're supposed to be programmed to do, and everyone is tired of their shit and constantly tells them to shut up

    Threepio: Sir, the odds of successfully navigating an asteroid field are 3720 to one!

    Han Solo (knowing that Threepio just pulls these numbers out of Reddit memes about Emperor Palpatine's odds of getting laid): SHUT THE FUCK UP!!!!!!

    "Why do the heroes of Star Wars never do anything to help the droids? They're clearly sentient, living things, yet they're treated as slaves!" Thanks for doing propaganda for Big Droid, you credulous ass!

    With that out of the way, here's my personal sidenote:

    There's already been plenty of ink spilled on the myriad effects AI will have on society, but it seems one of the more subtle effects will be on the fiction we write and consume.

    Right off the bat, one thing I'm anticipating (which I've already talked about before) is that AI will see a sharp decline in usage as a plot device - whatever sci-fi pizzazz AI had as a concept is thoroughly gone at this point, replaced with the same all-consuming cringe that surrounds NFTs and the metaverse, two other failed technologies turned pop-cultural punchlines.

    If there are any attempts at using "superintelligent AI" as a plot point, I expect they'll be lambasted for shattering willing suspension of disbelief, at least for a while. If AI appears at all, my money's on it being presented as an annoyance/inconvenience (as someone else has predicted).

    Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble's widespread harms (environmental damage, rampant theft, AI slop, misinformation, etcetera) any notion of AI being value-neutral as a tech/concept has been equally undermined.

    With both of those in mind, I expect any positive depiction of AI is gonna face some backlash, at least for a good while.

    (As a semi-related aside, I found a couple of people openly siding with the Mos Eisley Cantina owner who refused to serve R2 and 3PO [Exhibit A, Exhibit B])

    • I've noticed the occasional joke about how new computer technology, or LLMs specifically, have changed the speaker's perspective about older science fiction. E.g., there was one that went something like, "I was always confused about how Picard ordered his tea with the weird word order and exactly the same inflection every time, but now I recognize that's the tea order of a man who has learned precisely what is necessary to avoid the replicator delivering you an ocelot instead."

      Notice how in TNG, everyone treats a PADD as a device that holds exactly one document and has to be physically handed to a person? The Doylist explanation is that it's a show from 1987 and everyone involved thought of them as notebooks. But the Watsonian explanation is that a device that holds exactly one document and zero distractions is the product of a society more psychologically healthy than ours....

    • AI will see a sharp decline in usage as a plot device

      Today I was looking for some new audiobooks again, and I was scrolling through curated[1] lists for various genres. In the sci-fi genre, there is a noticeable uptick in AI-related fiction books. I have noticed this for a while already, and it's getting more intense. Most seem to be about "what if AI, but really powerful and scary" and singularity-related scenarios. While such fiction themes aren't new at all, it appears to me that there's a wave of it now, although it's also possible that I am just more cognisant of it.

      I think that's another reason your prediction will come true: sooner or later, demand for this sub-genre will peak, as many people eventually get bored with it as a fiction theme too. Like what happened with e.g. vampires and zombies.

      ([1] Not sure when "curation" is even human-sourced these days. The overall state of curation, genre-sorting, tagging and algorithmic "recommendations" in commercial books and audiobooks is so terrible... but that's a different rant for another day.)

      • Back in the twenty-aughts, I wrote a science fiction murder mystery involving the invention of artificial intelligence. That whole plot angle feels dead today, even though the AI in question was, you know, in the Commander Data tradition, not the monstrosities of mediocrity we're suffering through now. (The story was also about a stand-in for the United States rebuilding itself after a fascist uprising, the emotional aftereffects of the night when shooting the fascists was necessary to stop them, queer loneliness and other things that maybe hold up better.)

    • That is a bit weird, as iirc the droids in Star Wars are not based on LLMs; the droids in SW can think, and can be sentient beings, but are often explicitly limited. (And according to Lucas this was somewhat intentional, to show that people should be treated equally (whether this was the initial intent is unclear, as Lucas does change his mind a bit from time to time); the treatment of droids as slaves in SW is considered bad.) What a misreading of the universe and the point. Also, time flows the other way, folks: LLMs didn't influence the creation of the droids in SW.

      Also if the droids were LLMs, nobody would use them to plot hyperspace lanes past stars. Somebody could send a message about Star Engorgement and block hyperspace travel for weeks.

      But yes, the backlash is going to be real.

      E: ow god im really in a 'take science fiction too seriously' period. (more SW droids are not LLMs stuff)

  • Derek Lowe comes in with another sneer at techbro optimism, this time at a collection of AI-startup talking points wearing the skins of people, claiming that definitely all of medicine is solved, just throw more compute at it: https://www.science.org/content/blog-post/end-disease (it's two weeks old, but it's not like any of you read him regularly). more relevantly, he also links all his previous writing on this topic, starting with a 2007 piece about techbros wondering why nobody had brought SV Disruption™ to pharma: https://www.science.org/content/blog-post/andy-grove-rich-famous-smart-and-wrong

    interesting to see that he reaches some pretty compsci-flavoured conclusions despite not having a compsci background. still not exactly there yet, as he leaves open some possibility of AGI

    • it’s not like any of you read him regularly

      Of course not, he is a capitalist pigdog! A traitor to the cause! Bla bla. ;)

      I posted his work here before, despite thinking he isn't totally correct in his stance on capitalism stuff. He seems to be a good source on the whole medicinal chemistry field, and quite skeptical and hype-resistant. (Prob also why I could make the self-deprecating joke above.) He also wrote negatively about the hackers who do the homemade meds thing.

      • He also wrote negatively about the hackers who do the homemade meds thing.

        i've heard about them before and got reminded of their existence against my will recently. (do you know that somebody made a recommendation engine for peertube? can you guess which CCC talk from last winter was on top of the pile in their example?)

        you know, i think they have a bit of that techbro urge to turn every human activity into a series of marketable ESP32 IoT-enabled widgets, except that they don't do it to woo VCs; they say they do it for the betterment of humanity, but i think they're doing it for clout. because lemmy has only communist programmers and no one else, not much later i stumbled upon an essay on how trying to make programming languages easier in some ways is doomed to fail, because the task of programming itself is complex and much more than just writing code, and if you try, you get monstrosities like COBOL. i'm not in IT, but it seems to me that this take is more common among fans of C and has little overlap with the type of techbros from above.

        so in some way, they are trying to cobolify backyard chemistry. the stupid thing about it is that it has been done before, and it's a very useful tool, and also it does something completely opposite to what they wanted to do. it's called solid-phase peptide synthesis, and it replaces a synthetic process that was previously done in liquid phase (that is, as you usually do, in normal solutions in normal flasks). (there's also a way to make synthetic DNA/RNA in a similar way. both have the limitation that only a certain number of amino acids/bases is actually practical.) the thing about SPPS is that it can be automated: you can just type in the sequence of a peptide you want to get, and the machine handles part of the rest.

        what you gotta give them is that automated synthesis allows for rapid output of many compounds. but it's also hideously expensive, uses equally expensive reagents, and requires constant attention and maintenance from two, ideally more, highly trained professionals to keep it running, and even then syntheses can still fail. in order to figure out what went wrong, you need analytical equipment that costs about as much as that first machine, and then you have to unfuck the failed synthesis in the first place, which is something a non-chemist won't be able to do. and even when everything goes right, the product is still dirty and has to be purified using yet more equipment. and even when it all works, scaleup requires a completely different approach (the older, liquid-phase one), because it just doesn't scale well above early-phase research amounts.

        what i meant to say is that while automation of this kind is good because it allows humans to skip mind-numbingly repetitive steps (and to focus on the "everything else" aspect of research, like experiment planning, or the parts of synthesis the machine can't do, which tend to be harder and so more rewarding problems), all this absolutely does not lead to the deskilling of synthesis this bozo in a camo vest wanted; i'd say it's exactly the opposite. there's also the entire aspect of how they don't do analysis or purification of anything, and this alone, i guess, will kill people at some point.
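        for anyone who wants the gist of the loop the synthesizer actually runs: a toy sketch below, purely illustrative. real instrument-control software is nothing this simple, and the step names and the piperidine deprotection (standard in Fmoc chemistry) are my generic assumptions, not any vendor's API.

```python
# Toy sketch of an automated solid-phase peptide synthesis (SPPS) cycle.
# Illustrative only: real synthesizers run pump/valve schedules, monitor
# coupling efficiency, and still fail in ways only a chemist can diagnose.

def spps_steps(sequence):
    """Expand a peptide sequence (given C- to N-terminus, as SPPS builds it)
    into the per-residue cycle of steps the machine repeats."""
    steps = []
    for residue in sequence:
        steps += [
            "deprotect resin-bound N-terminus (e.g. piperidine wash)",
            "wash",
            f"couple activated {residue}",
            "wash",
        ]
    # After the chain is complete, the peptide is cleaved off the resin;
    # purification and analysis happen on entirely separate equipment.
    steps.append("cleave peptide from resin and deprotect side chains")
    return steps

for step in spps_steps(["Gly", "Ala", "Phe"]):
    print(step)
```

        the point being: the machine only automates the repetitive middle; everything before (design) and after (analysis, purification, unfucking failures) still needs the skills.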

  • Amazon publishes Generative AI Adoption Index and the results are something! And by "something" I mean "annoying".

    I don't know how seriously I should take the numbers, because it's Amazon after all and they want to make money with this crap, but on the other hand they surveyed "senior IT decision-makers"... and my opinion of that crowd isn't the highest either.

    Highlights:

    • Prioritizing spending on GenAI over spending on security. Yes, that is not going to cause problems at all. I do not see how this could go wrong.
    • The junk chart about "job roles with generative AI skills as a requirement". What the fuck does that even mean, what is the skill? Do job interviews now include a section where you have to demonstrate promptfondling "skills"? (Also, the scale of the horizontal axis is wrong, but maybe no one noticed because they were so dazzled by the bars being suitcases for some reason.)
    • Cherry on top: one box to the left they list "limited understanding of generative AI skilling needs" as a barrier for "generative AI training". So yeah...
    • "CAIO". I hate that I just learned that.
  • Here's an interesting nugget I discovered today

    A long LW post tries to tie AI safety and regulations together. I didn't bother reading it all, but this passage caught my eye

    USS Eastland Disaster. After maritime regulations required more lifeboats following the Titanic disaster, ships became top-heavy, causing the USS Eastland to capsize and kill 844 people in 1915. This is an example of how well-intentioned regulations can create unforeseen risks if technological systems aren't considered holistically.

    https://www.lesswrong.com/posts/ARhanRcYurAQMmHbg/the-historical-parallels-preliminary-reflection

    You will be shocked to learn that this summary is a bit lacking in detail. According to https://en.wikipedia.org/wiki/SS_Eastland

    Because the ship did not meet a targeted speed of 22 miles per hour (35 km/h; 19 kn) during her inaugural season and had a draft too deep for the Black River in South Haven, Michigan, where she was being loaded, the ship returned in September 1903 to Port Huron for modifications, [...] and repositioning of the ship's machinery to reduce the draft of the hull. Even though the modifications increased the ship's speed, the reduced hull draft and extra weight mounted up high reduced the metacentric height and inherent stability as originally designed.

    (my emphasis)

    The vessel experienced multiple listing incidents between 1903 and 1914.

    Adding lifeboats:

    The federal Seamen's Act had been passed in 1915 following the RMS Titanic disaster three years earlier. The law required retrofitting of a complete set of lifeboats on Eastland, as on many other passenger vessels.[10] This additional weight may have made Eastland more dangerous by making her even more top-heavy. [...] Eastland's owners could choose to either maintain a reduced capacity or add lifeboats to increase capacity, and they elected to add lifeboats to qualify for a license to increase the ship's capacity to 2,570 passengers.

    So. Owners who knew they had an issue with stability elected profits over safety. But yeah, it's the fault of regulators.
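    For reference, the quantity the Wikipedia quote is invoking: the metacentric height, the standard first-order measure of a ship's initial stability (my gloss, not from the linked post):

```latex
\mathrm{GM} = \mathrm{KM} - \mathrm{KG}
```

    where KM is the height of the metacentre above the keel (fixed by hull geometry and draft) and KG is the height of the centre of gravity. Bolting lifeboats onto the top deck raises KG, so GM shrinks; once GM reaches zero or below, the ship has no righting moment, which is exactly the "reduced the metacentric height and inherent stability" quoted above.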

  • More big “we had to fund, enable, and sanewash fascism b/c the leftists wanted trans people to be alive” energy from the EA crowd. We really overplayed our hand with the extremist positions of Kamala fuckin’ Harris, fellas; they had no choice but to vote for the nazis.

    (repost, since it's from that awkward time on Sunday before the new weekly thread)
