Posts: 55
Comments: 768
Joined: 1 yr. ago

  • Found an attempt to poison LLMs in the wild recently, aimed at trying to trick people into destroying their Cybertrucks:

    This is very much in the same vein as the iOS Wave hoax which aimed to trick people into microwaving their phones, and Eduardo Valdés-Hevia's highly successful attempt at engorging Google's AI summary systems.

    Poisoning AI systems in this way isn't anything new (Baldur Bjarnason warned about it back in 2023), but seeing it used this way does make me suspect we're gonna see more ChatGPT-powered hoaxes like this in the future.

  • Fear: There’s a million ways people could die, but featuring ones that require the fewest jumps in practicality seems the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure.

    I can think of some more realistic ideas. Like AI-generated foraging books leading to people being poisoned, or chatbot-induced psychosis leading to suicide, or AI falsely accusing someone and sending a lynch mob after them, or people becoming utterly reliant on AI to function, leaving them vulnerable to being controlled by whoever owns whatever chatbot they're using.

    All of these require zero jumps in practicality, and as a bonus, they don't need the "exponential growth" setup LW's AI Doomsday Scenarios™ require.

    EDIT: Come to think of it, if you really wanted to make an AI Doomsday™ kinda movie, you could probably do an Idiocracy-style dystopia where the general masses are utterly reliant on AI, the villains control said masses through said AI, and the heroes have to defeat them by breaking the masses' reliance on AI.

  • To slightly expand on that, there's also a rather well-known(?) quote by English mathematician G.H. Hardy, written in A Mathematician's Apology in 1940:

    A science is said to be useful if its development tends to accentuate the existing inequalities in the distribution of wealth, or more directly promotes the destruction of human life.

    (Ironically, two of the theories which he claimed had no wartime use - number theory and relativity - were used to break Enigma encryption and develop nuclear weapons, respectively.)

    Expanding further, Pavel has noted on Bluesky that Russia's mathematical prowess was a consequence of the artillery corps requiring it for trajectory calculations.

  • The only way in which it may succeed as a deterrent is that it actually costs some money (film and processing cost real money and it’s not cheap) and requires actual work to do those extra steps.

    I expect the "requires actual work" part will work well in deterring AI bros - they're lazy fucks by nature, anything more difficult than "press button for instant gratification" is gonna be a turn-off for them.

  • Stumbled across a particularly odd case of AI hype in the wild today:

    I will say it certainly does look different than standard AI slop, but like AI slop, it's nothing particularly impressive - I can replicate something like this pretty easily, and without boiling an ocean to do it. Anyways, here's a sidenote:

    In the wake of this bubble, part of me suspects physical media (e.g. photographic film) will earn a boost in popularity, alongside digital formats which LLMs struggle to generate. In both cases, the reason will be the same - simply by knowing something came on physical media or "slop-hardened media", you already have strong reason to believe the piece is human-made.

  • Baldur Bjarnason has given his thoughts:

    I mean… yeah.

    Also, between this and seeing tech types link glowingly to a crazypants “colonise the light cone by exploring latent space” type of delusional bullshit and I’m starting to worry that computers, as a concept, might not be salvageable after these clowns have run the show into the ground

  • And just a couple paragraphs before that:

    Reading HPMOR gave me a sense of crushing second-hand despair that I’ve only previously experienced when finding out things about Chris-Chan. It really is that bad.

    (the real cognitohazard was the friends we made along the way)

  • fuck me why did i go into computer science

    That's a question I ask myself sometimes. It usually ends with "I focused too much on trying to make easy cash". Fuck it, I'm going to write out a sidenote:

    On a wider front, part of me expects the AI bubble will inflict a serious blow to computer science/programming's public image after it bursts.

    On one front, there's the sheer number of promptfondlers in computer science and other related fields, which will likely give birth to a stereotype of programmers/software engineers being all promptfondlers who need a computer to think for them.

    On a related front, the heavy damage this bubble's dealt to artists, and AI's continued and uniquely severe failures in creative fields (plus promptfondlers' failure to recognise said failures), have all combined to produce the public perception that promptfondlers are artless at best and hostile to art/artists at worst - a perception I expect will colour how the public sees programmers/software engineers, thanks to the stereotype I mentioned above.

  • Picked up a sneer in the wild (through trawling David Gerard's Bluesky):

    You want my take, Kathryn's on the money - future expectations on how people speak will actively shift away from anything that could be mistaken for sounding like an LLM, whether because you want to avoid being falsely accused of posting slop, or because the slop-nami has pushed your writing habits away from slop-like traits.

  • I've already predicted in my most recent MoreWrite essay that scraper activity would crash due to AI, and seeing this only makes me more confident in that assessment.

    On a wider front, I suspect that web search in general's gonna dive in popularity - even if scraper activity remains the same during the AI winter, the bubble (with plenty of help from Google) has completely broken the web ecosystem which allowed search engines (near-exclusively Google) to thrive, through triggering a slop-nami that drowned human-made art, supercharging SEO's ability to bury quality output, and enabling AI Summary™ services which steal traffic by stealing work.

  • Basically. It's to explicitly prevent Xe from becoming the load-bearing peg for a massive portion of the Internet, thus ensuring this project doesn't send her health down the shitter.

    You want my prediction, I suspect future FOSS projects may decide to adopt mascots of their own, to avoid the "load-bearing maintainer" issue in a similar manner.

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 27th July 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 13th July 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 6th July 2025

    TechTakes @awful.systems

    Rolling the ladder up behind us - Xe Iaso on the LLM bubble

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 29th June 2025

    SneerClub @awful.systems

    The Psychology Behind Tech Billionaires

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 22nd June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 15th June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 8th June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 1st June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 25th May 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 18th May 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 11th May 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 4th May 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 27th April 2025

    TechTakes @awful.systems

    "OpenAI Is A Systemic Risk To The Tech Industry"

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 20th April 2025

    TechTakes @awful.systems

    Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 13th April 2025

    SneerClub @awful.systems

    Big Tech Backed Trump for Acceleration. They Got a Decel President Instead