Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 30 June 2024
  • finally, my sneercon cosplay! it’s like Tuxedo Mask on a severe budget

  • A Rant about Front-end Development
  • fucking exactly! I’ve been doing a lot of CSS-only work for the sneer archive rewrite, and it’s shocking how fast everything renders without JS, and how much functionality you can retain with a good enough CSS framework and careful markup

    I’m also working on a JavaScript library and associated rant named fuckery because it turns out you can’t use Web Components without some utterly unnecessary JavaScript, because the W3C decided to do a fuckery
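
    to illustrate the fuckery: a custom element does absolutely nothing until some script registers it with the CustomElementRegistry, and there’s no HTML-only way to do that registration. minimal sketch below (the element name is made up for illustration; it’s not from the actual library):

    ```javascript
    // a custom element is inert until JavaScript registers it; there is no
    // declarative, HTML-only way to call into the CustomElementRegistry
    class SneerQuote extends HTMLElement {
      connectedCallback() {
        // runs when the element is attached to the DOM
        this.style.fontStyle = "italic";
      }
    }

    // without this one line of script, <sneer-quote> stays an unknown, inert tag
    customElements.define("sneer-quote", SneerQuote);
    ```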

  • Why do people who hate IP laws/copyright think we should be allowing AI companies to copy the whole internet when pirates still get arrested for piracy?
  • you’re wrong but that’s so obvious it’s boring

    but that weird bit of anti-furry shit you slipped in here looked significantly more interesting! and hey, what’s this in a thread about republicans doing anti-LGBT+ shit during pride:

    If you house the fascists and bigots, you become a house of fascists and bigots. It doesn’t matter what the individual believes if they continue to give audience to those people. They become indistinguishable from them.

    And when you’re just a normal straight person who gets the willies when gay people hit on you, fuck you right?

    Normal straight dude here. I can’t tell you how crazy often I have gay dudes creeping their way into my life just to eventually say they’re trying to get in my pants.

    bonus post: here’s you in a thread about a nude deepfake made of a 15 year old girl without her consent:

    Eve seen a deep fake nude of someone ugly? People make them because they wanna see you naked. Can’t see how that’s an insult.

    so, ah, there’s that. fuck off now.

  • A Rant about Front-end Development
  • it’d be very nice to have a progressively enhanced static frontend instead since there’s really nothing about any of this that should require JavaScript (and something like unpoly would give us react SPA style functionality strictly as an enhancement on top of plain HTML)

    this might be a cool project for someone to pick up once work on Philthy gets going; most of the alternative Lemmy frontends still have an unnecessary JS framework dependency, or are lacking features for essentially no reason
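
    to sketch the kind of enhancement I mean (hand-rolled here for illustration, not unpoly’s actual API — the data-enhance attribute and #content target are invented): every link stays a plain <a href> that works with JS disabled, and a bit of script optionally upgrades clicks into fragment swaps

    ```javascript
    // progressive enhancement sketch: links work as normal <a href> navigation
    // when JS is off; when this script loads, clicks on opted-in links are
    // upgraded to fetch the page and swap in a fragment instead of a full
    // reload (unpoly and friends do this far more robustly)
    document.addEventListener("click", async (event) => {
      const link = event.target.closest("a[data-enhance]");
      if (!link) return; // not an enhanced link: let the browser navigate normally

      event.preventDefault();
      const response = await fetch(link.href);
      const html = await response.text();

      // pull the matching fragment out of the fetched page and swap it in
      const fetched = new DOMParser().parseFromString(html, "text/html");
      const replacement = fetched.querySelector("#content");
      if (!replacement) {
        window.location.href = link.href; // fall back to a normal navigation
        return;
      }
      document.querySelector("#content").replaceWith(replacement);
      history.pushState({}, "", link.href); // keep the address bar in sync
    });
    ```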

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • ah yeah, 3 downvotes (and one of them’s mine) and zero replies of scorn directed towards you (well, one now)

    how about you take your bullshit elsewhere

  • Mother Jones sues OpenAI and Microsoft - best pic of Sam Altman-Fried
  • yeah I’m deleting your posts cause this is just the paradox of tolerance and you know it

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • holy fuck I read the comments

    all the musksucking sub-linkedin reply guys chittering at each other about how this is clearly the kind of breakthrough musk was hoping for when he devised this genius-level challenge. you know, basic lossy audio compression applied by an asshole pretending to be an audio engineer (or maybe this fuckhead is the reason the FLACs I pay for are sometimes just converted from MP3s?)

    e: I got to the post where he supposedly makes a point about ADC and container bit depths. this fucking idiot is almost certainly an audiophile cosplaying, and the only audio engineering he’s done is set his amp on a block of wood

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • I seriously don’t know where these fuckers have been that they’re supposedly excited about implanted BCIs now that musk’s doing a monstrously shitty one, but somehow they managed not to ever read about any of the previous research into this that had the same outcome as neuralink (basic, inaccurate computer mouse control) with the same major caveats (the electrodes become unusable in short order due to scarring and can’t be repaired), except that the neuralink version is unnecessarily risky* cause startups gotta go fast

    [*] and the risk here is that something truly fucking awful will happen to the patient, because it’s the fucking human brain and they’re treating it like a submarine made of secondhand carbon fiber, including the ignored track record of failed lab tests before disaster struck

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • you are downplaying how impossible the requirements were.

    oh absolutely! but only out of a sense of shame for being in a career field where a medical device company posting that horseshit compression challenge didn’t immediately prompt a strong backlash and repercussions for neuralink’s ability to attract and retain talent, in lieu of a functioning regulator maybe possibly shutting them down before they can fucking mutilate someone else with this brainfart of an invention

    I feel bad for anyone who gets that e-waste implanted into their head and ends up with an implant that absolutely cannot do the things it’s marketed to do, barely does ordinary 90s brain implant shit, stops working very quickly (to the apparent surprise of the people in charge) and will most likely cause injury and severe discomfort to the patients saddled with it

    I wish my field had ethics. I’d sleep better if we did.

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • Kurzweil really is indistinguishable from a shitty phone psychic, including the followers who cherry pick “correct” predictions and interpret the incorrect ones so loosely they could mean anything (I’m waiting for some fucker to pop up and go “yeah duh Apple Vision Pro” in response to half of those, ignoring the inconvenient “works well and is popular” parts of the predictions)

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • Literally the first human-cpu interface?

    I wish I had the confidence of a techbro who thinks any of the BCI tech in neuralink is new and isn’t just a set of techniques that have existed for decades and have a shitty track record

    the only thing neuralink seems to add is wireless control, which doesn’t work, partially due to impossible bandwidth and compression requirements, but mostly because it’s a project driven by Musk’s whims

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • for you? you get banned for jacking off in public, because all of your posts here are masturbation

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • fuck almighty it’s gonna be one of those weekends isn’t it

  • A Rant about Front-end Development
  • javascript spits in the face of lisp and common sense every time it evaluates, and that’s a lot of spit

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 30 June 2024
  • how dare you deny them their bounty of GPUs for every server in the rack

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 30 June 2024
  • having seen the horror from a distance: VNC, a fuckton of clicking, occasionally mouse and keyboard macros, possibly a networked KVM (itself not a bad idea at all for emergency access to hardware too commodity or misdesigned to have a sensible serial console, but we’re talking day to day here), and a massive chip on their shoulder about being forced off their beloved Windows Server 2003 and onto Linux

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 30 June 2024
  • according to Mozilla’s track record, they’re making it a core feature so it’s impossible to remove without a custom fork, and they’ll relentlessly goad the user into enabling it via ads pushed with every update. implementing it as a core feature also means they can easily infect the search bar and other core functionality with this horseshit

    we’ve also only got mozilla’s word that this shit can be disabled once it’s in production Firefox at all, and we’ve seen, repeatedly, how Mozilla’s AI team does with consent — they use LLMs and marketing tactics to fabricate it

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 30 June 2024
  • oh hell. we’re beating all my initial survivability projections by a lot

    do we throw an instance birthday party thread? will there be cocktails? will the deployment get mopey if I don’t buy it more disk space? (yes, eventually)

  • the tea protocol is still predictably a gigantic source of PR spam
    www.web3isgoinggreat.com tea.xyz causes open source software spam problems, again

    The tea.xyz protocol first earned an entry on Web3 is Going Just Great in late February, when their plan to reward open source software contributors resulted in crypto enthusiasts with no intention of participating in OSS opening endless pull requests to claim ownership of prominent OSS projects. Th...

    who could have seen this coming, other than everyone who told the homebrew tree inverter guy this was a bad idea they absolutely shouldn’t do

    Amazon’s 'Just Walk Out' grocery stores are dead
    gizmodo.com Amazon Ditches 'Just Walk Out' Checkouts at Its Grocery Stores

    Amazon Fresh is moving away from a feature of its grocery stores where customers could skip checkout altogether.

    (via https://hachyderm.io/@jbcrawford/112202942593125987, archive: https://archive.is/VnqRZ)

    surprise, Amazon’s godawful surveillance grocery stores were just exploiting hidden labor and calling it innovation, and even that was too expensive

    even worse, the few times I’ve seen one of these fucking things in the wild, it still had 1-2 employees hovering near the entrance to make sure nobody did the utterly obvious (fuck with the payment system and get free shit), a job that’s also known as a fucking cashier, but with much worse pay, much harder labor (physically stopping shoplifters), and no counter to lean on or opportunity to even sit down

    “just open sourced someone else’s code ama” - a case study in fucking around and finding out
    coolmathgam.es Jules :notnet: (@notnite@coolmathgam.es)

    just open sourced someone else's code ama https://github.com/jawslouis/MakePlacePlugin/pull/26

    from the linked github thread:

    >Your project is in violation of the AGPL, and you have stated this is intentional and you have no plans to open source it. This is breaking the law, and as such I've began to help you with the first steps of re-open sourcing the plugin.

    the project author (who gets paid for violating the AGPL via patreon) responds like a mediocre crypto grifter and insists their violation of the law be debated on the discord they control (where their shitty community can shout down the reporter):

    >While keeping code private doesn't guarantee security, it does make it harder for bad actors to keep up with changes. You are welcome to debate this matter in the MakePlace discord: https://discord.com/invite/YuvcPzCuhq If you are able to convince the MakePlace community that keeping the code open-source is better, I will respect the wishes of the community.

    aaaand the smackdown:

    >Respectfully, I won't attempt to "debate" or "convince" anyone; I'm leaving this pull request and my fork here for others to see and use. It is not a matter of "better"; you are violating a software license and the law. It does not "make it harder" for anyone; Harmony hooking exists, IL modification exists, you can modify plugins from other plugins.

    the ACM publishes an article about AI being a cargo cult, and the orange site cargo cult reacts

    >The problem is that today's state of the art is far too good for low hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail so you're often left with weird ad-hominins ("Forget what it can do and results you see. It's "just" predicting the next token so it means nothing") or imaginary distinctions built on vague and ill defined assertions ( "It sure looks like reasoning but i swear it isn't real reasoning. What does "real reasoning" even mean ? Well idk but just trust me bro")

    a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many, many, many failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

    >This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

    drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

    >> There's actually a cargo cult around downplaying AI.
    >>
    >> The high level characteristics of this AI is something we currently cannot understand.
    >
    >The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

    no, you’re all the cargo cult! I asked my cargo and it told me so

    my chatbot is so efficient it only needs one $2000 GPU per user

    >Running llama-2-7b-chat at 8 bit quantization, and completions are essentially at GPT-3.5 levels on a single 4090 using 15gb VRAM. I don't think most people realize just how small and efficient these models are going to become.
    >
    >[cut out many, many paragraphs of LLM-generated output which prove… something?]

    my chatbot is so small and efficient it only fully utilizes one $2000 graphics card per user! that’s only 450W for as long as it takes the thing to generate whatever bullshit it’s outputting, drawn by a graphics card that’s priced so high not even gamers are buying them!

    you’d think my industry would have learned anything at all from being tricked into running loud, hot, incredibly power-hungry crypto mining rigs under their desks for no profit at all, but nah

    not a single thought spared for how this can’t possibly be any more cost-effective for OpenAI either; just the assumption that their APIs will somehow always be cheaper than the hardware and energy required to run the model

    the r/SneerClub archives are up! [incredibly janky v1]

    the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1000 most recent posts on r/SneerClub as well as a set of popular historical posts

    as a v1, you'll notice a lot of jank. known issues are:

    • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
    • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
    • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this (there's a rough sketch of that after this list)
    • likewise, comments display a unix epoch instead of a rendered time
    • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
    • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)
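
    (re: the markdown and timestamp jank above — the likely fix is a bit of client-side JS rather than more miller: walk the comment tree recursively and run each body through a markdown renderer. rough sketch below; the field names are what I'd expect the BDFR/pushshift JSON to look like, so treat them as placeholders)

    ```javascript
    // sketch: recursively render nested comments client-side instead of trying
    // to make miller recurse. field names (body, author, created_utc, replies)
    // are assumptions about the BDFR/pushshift JSON shape, not a checked schema.
    import { marked } from "marked"; // via a bundler or an ESM CDN build

    function renderComment(comment, depth = 0) {
      const el = document.createElement("article");
      el.className = "comment";
      el.style.marginLeft = `${depth}rem`; // indent one level per reply depth

      // render the markdown body (fixes the raw-markdown-below-first-level jank)
      const body = document.createElement("div");
      body.innerHTML = marked.parse(comment.body ?? "");

      // render a readable timestamp instead of a bare unix epoch
      const when = document.createElement("time");
      when.textContent = new Date(comment.created_utc * 1000).toUTCString();

      el.append(`u/${comment.author}`, when, body);

      // recurse into replies
      for (const reply of comment.replies ?? []) {
        el.appendChild(renderComment(reply, depth + 1));
      }
      return el;
    }
    ```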

    if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)

    as always, the orange site would rather carry water for right-wing weirdos than admit rationalwiki is right

    >RationalWiki is a highly biased cancel community which has attacked people like Scott Aaronson and Scott Alexander before.

    >Background on the authors according to a far-left website.
    >
    >Let's at least be honest.

    >That is profiling work. (Not just "Ad hominem".)
    >
    >The clash with the name "rational-wiki" is too strong not to be noted.

    as the infrastructure admin of a highly biased far-left cancel community that attacks people like Scott Aaronson and Scott Alexander: mmm delicious

    for bonus sneers, see the entire rest of the thread for the orange site’s ideas on why they don’t need therapy:

    >I was about to start psychotherapy last month, I ask my family's friend therapist If he could recommend me where to go. So he interviewed me for about 30 mins and ask me about all my problems.
    >
    >A week later he send me the number of the therapist. I didnt write her yet, I think I dont need it as badly as before.
    >
    >Those 30 mins were key. I am highly introspective and logical, I only needed to orderly speak my problems.

    to quote Key & Peele: motherfucker, that’s called a job

    won’t anyone think of corporate hacker culture

    hey let’s see what the people who killed and buried hacker culture think should go in the jargon file!

    >If the spirit of the original Jargon file was to be a living document, alas, it failed to keep with the times.
    >
    >Hackers at large have moved away from Lisp despite Paul Graham and other evangelists […]
    >
    >Hackers also have moved away from academia at large, and 9-5 jobs at tech behemoths are more natural habitats for them, which also shaped the lingo. I mean, there's a whole layer of slang usually pertinent to outsourcing agencies and to cubicle farms.

    I can’t wait for the corporate-approved jargon file, with any hint of anti-capitalism replaced with fun words and quotes from billionaires to share as the soul leaves my body

    >So in order for the document to evolve, we need a system to determine consensus. Everyone who cares runs a program on their computer that joins the network and registers their intent. With each proposed change, a query goes out to the network, and it's up to everyone on the network to say yea or nay to the proposal. With enough "yea"s, the document is updated.
    >
    >...this is starting to sound like a blockchain, isn't it.

    for the absolute sake of fuck. coming soon: HackerDAO! collect 10xer tokens and finally prove to the junior devs why corporate gives you so many points to crunch on! vote on fun new jargon, but only if it’s crypto-related! surely you’re hacker enough to be on the pump side of this pump and dump!

    the weird “co-op” behind the mastodon instance hachyderm.io just started a generative AI project
    hachyderm.io aburka 🫣 (@aburka@hachyderm.io)

    Welp, looks like @nivenly is funding AI nonsense so I'm definitely leaving #hachyderm as soon as I decide where to go. Contribute to your instance admins, but not if the money goes to AI.

    reposting here for better visibility (let me know if there’s a better way to do this now that we’re federated): the owners of hachyderm.io just started a generative AI for gaming project, and it looks like donations to them will likely end up going to that

    the orange site musksucks like it’s 2012

    >It's because there is a vast concerted effort by the political left to destroy Musk now that he is no longer regarded as being strictly on their side. It's at Trumpian scale these days, in terms of the venom being directed at him.
    >
    >Reddit is overflowing with non-stop Musk hatred. They spout lie after lie about him in every single thread where he's a topic. The most popular lie being that his business efforts - ie his success - were funded by an emerald mine that his father owned (neither thing is true).
    >
    >I say this as an emotionally independent, objective observer of the craziness, I have no stake in it, and don't feel one way or another about it. The seeming mental illness the topic of Musk seems to draw out of people is astounding however, the herd promptly acts like deranged lunatics when he comes up as a topic.
    >
    >Until Trump I had never seen anything like it before, in person or online. There must be a name for such a massive scale of crowd insanity, to describe the frothing-at-the-mouth irrationality.

    my deranged lunatic hivemind venomous mentally ill frothing-at-the-mouth leftist brain can’t handle how rational this fucking asshole’s take on musk and trump is

    there’s also this at the top of the thread:

    >I've been mostly ambivalent about the Musk-era at Twitter—mostly because I just don't care enough to have an opinion.
    >
    >This, though. This one makes me angry and disappointed.
    >
    >Twitter has had such a solid brand for so long. It's accomplished things most marketers only dream of: getting a verb like "Tweet" into the standard lexicon is like the pinnacle of branding.

    turning one of the most popular sites on the internet into a cesspit of transphobia and nazis: ambivalent

    musk fails to appreciate the Twitter brand: angry and disappointed

    remember, hacker news is a bastion of high-quality discussion. fucking shitheads

    the Bevy game engine: a cozy ECS that punches above its weight
    bevyengine.org Bevy Engine

    Bevy is a refreshingly simple data-driven game engine built in Rust. It is free and open-source forever!


    Bevy is a fun, cozy game engine to play with if you’re looking for something very flexible that implements some surprisingly advanced features. things I like:

    • it’s all rust, which is an advantage for me and the chemical burns I have from handling the dialect of C++ a lot of older game engines used to be written in
    • it implements a flexible entity component system, which I found pretty great for specifying game and rendering logic for things like roguelikes and simulations, where multiple game systems might interact in dynamic ways
    • the API is very cozy and feels like querying an extremely fast database at times
    • it’s a lot lower level than something like Unity or Godot, but you get some pretty advanced rendering features included
    • the main developer seems to have a lot of industry experience and a solid roadmap
    Zero to Nix: a gentle introduction to the Nix package manager
    zero-to-nix.com Zero to Nix

    An unofficial, opinionated, gentle introduction to Nix

    Nix is one of the few pieces of software I trust. I use it on just about every computer I work on — awful.systems is managed and deployed by just nixos-rebuild and a deployment flake, as are almost all the computers in my house (including a few embedded into the house itself). in general it makes both software development and configuring Linux a lot more fun compared with the traditional way of doing things

    I often call Nix fucking incomprehensible, but it doesn’t need to be. Zero to Nix is one of the documentation projects that’s intended to be a more gentle goal-oriented introduction to Nix concepts, and it’s definitely worth following along if you’re curious about Nix and want to be able to do something useful with it right away

    if you end up liking Nix and want more of it, NixOS is an entire Linux distro configured and managed by Nix, and it’s incredibly powerful and stable. I run it on a full-fat gaming PC as my primary OS and the experience of running it is surprisingly very good; feel free to ask and I’ll summarize how I run stuff like games on NixOS

    Google is prototyping a DRM system for websites in Chromium
    github.com Don't. · Issue #28 · RupertBenWiser/Web-Environment-Integrity

    Sometimes you have to ask the question whether something should be done at all, and trusted computing is certainly one of those cases where the answer is obviously a big fat NO. So please reconside...


    the API is called Web Environment Integrity, and it’s a way to kill ad blockers first and a Google ecosystem lock-in mechanism second, with no other practical use case I can find

    Collapse OS: exactly what it sounds like

    > Winter is coming and Collapse OS aims to soften the blow. It is a Forth (why Forth?) operating system and a collection of tools and documentation with a single purpose: preserve the ability to program microcontrollers through civilizational collapse.

    imagine noticing that civilization is collapsing around you and not immediately opening an emacs lisp buffer so you can painstakingly recreate the entire compiler toolchain and runtime environment for the microcontrollers around you as janky code running in your editor. fucking amateurs

    Stephen Wolfram decides that plugging random numbers into an image AI makes an alien brain

    Wolfram’s post is fucking interminable and consists of about 20% semi-interesting math and 80% goofy shit like deciding that the creepy (to Wolfram) images in the AI model’s probability space must represent how aliens perceive the world. to my memory, this is about par for the course for Wolfram

    the orange site decides that the reason why the output isn’t very interesting is because the AI isn’t a robot:

    >What we see from AI is what you get when you remove the "muscle module", and directly apply the representations onto the paper. There's no considering of how to fill in a pixel; there's just a filling of the pixel directly from the latent space.
    >
    >It's intriguing. Also makes me wonder if we need to add a module in between the representational output and the pixel output. Something that mimics how we actually use a brush.

    this lack of muscle memory is, of course, why we have never done digital art once in the history of humanity. all claims to the contrary are paid conspirators in the pocket of Big Dick Blick

    >Of course, the AIs can't wake up if we use that analogy. They are not capable of anything more than this state right now.
    >
    >But to me, lucid dreaming is already a step above the total unconsciousness of just dreaming, or just nothing at all. And wakefulness always follows shortly after I lucid dream.

    only 10x lucid dreamers wake up after falling asleep

    >> we can progressively increase the numerical values of the weights—eventually in some sense “blowing the mind” of the network (and going a bit “psychedelic” in the process)
    >
    >I wonder if there's a more exact analog of the action of psychedelics on the brain that could be performed on generative models?

    >I always find it interesting how a hero dose of LSD gives similar visuals to what these image AI's do to achieve a coherent image.
    >
    >[more nonsense]
    >
    >I feel like the more we get AI to act like humans, and the more those engineers and others use LSD, the more convergence we are going to have with curiosity and breakthroughs about how we function.

    the next time you’re in an altered state, I want you to close your eyes and just imagine how annoyed you’d be if one of these shitheads was there with you, trying to get you to “form a BCI” or whatever by typing free association words into ChatGPT

    ebikes & pedantry: the orange site has an interminable argument over self-driving car safety

    you know it’s a fucking banger when you try to collapse the top comment in the thread to skip all the folks litigating over the value of an ebike and more than 2/3rds of the comments in an 884 comment long thread disappear

    also featuring many takes from understanders of statistics:

    >I'm wary about using public roads to test these, but I think the way the data is presented is misleading. I'm not sure how it's misleading, but separating "incidents" into categories (safety, traffic, accident, etc) might be a good start.
    >
    >For example, I could start coning cruise cars, and cause these numbers to skyrocket. While that's an inconvenience to other drivers, it's not a safety issue at all.
    >
    >By the way, as a motorcyclist (and thus hyper annoyed at bad driving), I find Uber/Lyft/Food drivers to be both much more dangerous and inconveniencing than these self driving cars.

    Mozilla asked gpt-4 to verify the accuracy of its gpt-3.5 MDN AI
    tech.lgbt Antoine Leblanc :transHaskell: (@nicuveo@tech.lgbt)

    oh my god i had missed this from the postmortem, but the way they validated the test queries they ran (asking the LLM for explanations of code samples) was... TO FEED THE RESULTS INTO GPT-4 AND ASK IT TO CLASSIFY THEM AS "ACCURATE" OR "INACCURATE". THEY ONLY REVIEWED SOME OF THE ONES LABELLED AS "IN...

    see also the github thread linked in the mastodon post, where the couple of gormless AI hypemen responsible for MDN’s AI features pick a fight with like 30 web developers

    from that thread I’ve also found out that most MDN content is written by a collective that exists outside of Mozilla (probably explaining why it took them this long to fuck it up), so my hopes that somebody forks MDN are much higher

    the DIDcomm Messaging spec is even more worthless than DID

    there’s a fun drinking game you can play where you take a shot whenever the spec devolves into flowery nonsense

    >§1. Purpose and Scope

    >The purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs.

    >It is the second half of this sentence, not the first, that makes DIDComm interesting. “Methodology” implies more than just a mechanism for individual messages, or even for a sequence of them. DIDComm Messaging defines how messages compose into the larger primitive of application-level protocols and workflows, while seamlessly retaining trust. “Built atop … DIDs” emphasizes DIDComm’s connection to the larger decentralized identity movement, with its many attendent virtues.

    you shouldn’t have pregamed

    watching Mozilla destroy its reputation in real time with AI
    blog.mozilla.org Responsibly empowering developers with AI on MDN | The Mozilla Blog

    Generative AI technologies powered by Large Language Models (LLMs), such as OpenAI’s ChatGPT, have shown themselves to be both a big boon to productivity


    today Mozilla published a blog post about the AI Help and AI Explain features it deployed to its famously accurate MDN web documentation reference a few days ago. here’s how it’s going according to that post:

    >We’re only a handful of days into the journey, but the data so far seems to indicate a sense of skepticism towards AI and LLMs in general, while those who have tried the features to find answers tend to be happy with the results.

    got that? cool. now let’s check out the developer response on github soon after the AI features were deployed:

    >it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do.

    oh dear

    >That is demonstrably wrong. There is no demo of that code showing it in action. A developer who uses this code and expects the outcome the AI said to expect would be disappointed (at best).
    >
    >That was from the very first page I hit that had an accessibility note. Which means I am wary of what genuine user-harming advice this tool will offer on more complex concepts than simple stricken text.

    >So the "solution" is adding a disclaimer and a survey instead of removing the false information? 🙃 🙃 🙃

    >This response is clearly wrong in its statement that there is no closing tag, but also incorrect in its statement that all HTML must have a closing tag; while this is correct for XHTML, HTML5 allows for void elements that do not require a closing tag

    that doesn’t sound very good! but at least someone vetted the LLM’s answers, right?

    >MDN core reviewer/maintainer here.
    >
    >Until @stevefaulkner pinged me about this (thanks, Steve), I myself wasn’t aware that this “AI Explain” thing was added. Nor, as far as I know, were any of the other core reviewers/maintainers aware it’d been added. Nor, as far as I know, did anybody get an OK for this from the MDN Steering Committee (the group of people responsible for governance of MDN) — nor even just inform the Steering Committee about it at all.
    >
    >The change seems to have landed in the sources two days ago, in e342081 — without any associated issue, instead only a PR at #9188 that includes absolutely not discussion or background info of any kind.
    >
    >At this point, it looks to me to be something that Mozilla decided to do on their own without giving any heads-up of any kind to any other MDN stakeholders. (I could be wrong; I've been away a bit — a lot of my time over the last month has been spent elsewhere, unfortunately, and that’s prevented me from being able to be doing MDN work I’d have otherwise normally been doing.)
    >
    >Anyway, this “AI Explain” thing is a monumentally bad idea, clearly — for obvious reasons (but also for the specific reasons that others have taken time to add comments to this issue to help make clear).

    (note: the above reply was hidden in the GitHub thread by Mozilla, usually something you only do for off topic replies)

    so this thing was pushed into MDN behind the backs of Mozilla’s experts and given only 15 minutes of review (ie, none)? who could have done such a thing?

    …so anyway, some kind of space alien comes in and locks the thread:

    >Hi there, 👋

    >Thank you all for taking the time to provide feedback about our AI features, AI Explain and AI Help, and to participate in this discussion, which has probably been the most active one in some time. Congratulations to be a part of it! 👏

    congratulations to be a part of it indeed
