Posts 41 · Comments 504 · Joined 1 yr. ago

  • The “legal proof” part is a different argument. His picture is AI-generated, so it contains none of the original pixels; it is merely the result of prompting the model with the original picture. Considering the way AI companies have so far successfully acted like they’re shielded from copyright law, he’s not exactly wrong. I would love to see him go to court over it and become extremely wrong in the process, though.

    It'll probably set a very bad precedent that fucks up copyright law in various ways (because we can't have anything nice in this timeline), but I'd like to see him get his ass beaten as well. Thankfully, removing watermarks is already illegal, so the courts can likely nail him on that and call it a day.

  • The most generous reading of that email I can pull is that Dr. Greg is an egotistical dipshit who tilts at windmills twenty-four-fucking-seven.

    Also, this is pure gut instinct, but it feels like the FOSS community is gonna go through a major contraction/crash pretty soon. I've already predicted that AI will kneecap adoption of FOSS licenses, but the culture of FOSS being utterly rancid (not helped by Richard Stallman being the semi-literal Jeffrey Epstein of tech, in multiple ways) definitely isn't helping pre-existing FOSS projects.

  • A human-curated search engine would likely be easy to sell as well - the obvious approach to marketing it would be to draw attention to the human curation involved, and claim no algorithms are involved in determining search results. This is arguably bullshit - you'll need an algorithm to sort the search results at minimum, as the sketch at the end of this comment shows - but it'd evoke the idea that the engine is giving customers what they want, and not what someone else wants.

    Additionally, you can pull out the somewhat old standby of claiming the search engine to be AI-free - with LLMs and slop generators defining how the public views AI, presenting yourself as a bulwark against the slop-nami will be an easy marketing win.

    (Sidenote: Between the ever-growing backlash against AI, boiling resentment against Silicon Valley, and the fact I found this an easy sell, I suspect this idea's time has indeed come.)
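
    On the "no algorithms" point: here's a minimal, purely hypothetical Python sketch (every entry and name below is invented for illustration) of why even a fully human-curated index still needs a ranking step - some algorithm has to decide which curated entries come back first for a given query.

        # Hypothetical example: a hand-maintained index of curated entries.
        CURATED_INDEX = [
            {"title": "A hand-picked guide to fixing web search", "tags": {"search", "curation", "quality"}},
            {"title": "Why SEO ate the open web", "tags": {"seo", "search", "spam"}},
            {"title": "Running a small web directory", "tags": {"directory", "curation"}},
        ]

        def search(query):
            """Return curated titles matching the query, best matches first."""
            terms = set(query.lower().split())
            # Trivial term-overlap score over the curators' tags, then a sort -
            # that sort is the algorithm the marketing copy pretends away.
            scored = [(len(terms & entry["tags"]), entry) for entry in CURATED_INDEX]
            ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
            return [entry["title"] for score, entry in ranked if score > 0]

        print(search("search curation"))

    The curation stays human; only the ordering is automated, which is the honest version of the pitch.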

  • If good search does come back, it'll likely require heavy human curation to keep LLM noise as low as humanly possible. Automated methods can be easily SEO'd to death, but human curation's gonna be rather tough to game.

  • Over/under on when this gets compared to God of War Ragnarök constantly spoiling puzzle solutions (relevant GMTK(?))?

    This awful new thing will be tested on Xbox Insider Program members from next month. It will shortly be telling you to git gud and how it wants to spend some quality time with your mother.

    call-backs my beloved

  • ...eh, fuck it, here's my sidenote on Brian's piece:

    Google and OpenAI's campaign gives me the suspicion that the ongoing copyright lawsuits may be what finally pops this bubble. Large Language Models are built through large-scale copyright infringement, and built to facilitate large-scale copyright infringement - if the actions of OpenAI and pals are ruled not to be fair use, it would be open season on LLMs.

  • In other news, BlueSky's put out a proposal on letting users declare how their data gets used, and the BlueSky post announcing it got some pretty hefty backlash - not for the proposal itself, but for the mere suggestion that their posts were scraped by AI. Given this is the same site which tore HuggingFace a new one and went nuclear on ROOST, I'm not shocked.

    Additionally, Molly White's put out her thoughts on AI's impact on the commons, and recommended building legal frameworks to enforce fair compensation from AI systems which make use of the commons.

    Personally, I feel that building any kind of legal framework is not going to happen - AI corps' raison d'être is to strip-mine the commons and exploit them in as unfair a manner as possible, and they are entirely willing to tear apart any and all protections (whether technological or legal) to make that happen.

    As a matter of fact, Brian Merchant's put out a piece about OpenAI and Google's assault on copyright as I was writing this.