BlueMonday1984 @awful.systems · Posts: 41 · Comments: 504 · Joined 1 yr. ago
The “legal proof” part is a different argument. His picture is a generated picture, so it contains none of the original pixels; it is merely the result of prompting the model with the original picture. Considering the way AI companies have so far successfully acted like they’re shielded from copyright law, he’s not exactly wrong. I would love to see him go to court over it and become extremely wrong in the process, though.
It'll probably set a very bad precedent that fucks up copyright law in various ways (because we can't have anything nice in this timeline), but I'd like to see him get his ass beaten as well. Thankfully, removing watermarks is already illegal, so the courts can likely nail him on that and call it a day.
In other news, Ed Zitron discovered Meg Whitman's now an independent board director at CoreWeave (an AI-related financial timebomb he recently covered), giving her the opportunity to run a third multi-billion dollar company into the ground:
As an added bonus, it's clear he's getting trolled for his terminal startup brain:
EDIT: Found some dipshit trying to defend the guy in the wild, rehashing the arguments used for AI art:
The most generous reading of that email I can pull is that Dr. Greg is an egotistical dipshit who tilts at windmills twenty-four-fucking-seven.
Also, this is pure gut instinct, but it feels like the FOSS community is gonna go through a major contraction/crash pretty soon. I've already predicted AI will kneecap adoption of FOSS licenses, but the culture of FOSS being utterly rancid (not helped by Richard Stallman being the semi-literal Jeffrey Epstein of tech (in multiple ways)) definitely isn't helping pre-existing FOSS projects.
Ran across a new piece on Futurism: Before Google Was Blamed for the Suicide of a Teen Chatbot User, Its Researchers Published a Paper Warning of Those Exact Dangers
I've updated my post on the Character.ai lawsuit to include this - personally, I expect this is gonna strongly help anyone suing character.ai or similar chatbot services.
You could probably do good art with an AI
Hot take: A plagiarism machine built to spew signal-shaped noise is incapable of making good art
Starting things off here with a couple of solid sneers at some dipshit automating copyright infringement - one from Reid Southen, and one from Ed Newton-Rex:
A human-curated search engine would likely be easy to sell as well - the obvious approach to marketing it would be to bring attention to the human curation involved, and claim no algorithms are involved in determining search results. This is arguably bullshit - you'll need an algorithm to sort the search results at a minimum (see the sketch below) - but it'd evoke the idea that the engine is giving customers what they want, and not what someone else wants.
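(For the pedants: here's a rough sketch of what I mean by "you still need an algorithm" - the names and structure are made up for illustration, not pulled from any real engine. Even if humans write every summary and assign every rating, the thing that turns a query into an ordered results page is still an algorithm, just a boring and explainable one.)

```python
# Hypothetical sketch: even a fully human-curated index still needs *some*
# rule to order results. Everything here is illustrative, not a real engine.
from dataclasses import dataclass

@dataclass
class CuratedEntry:
    url: str
    title: str
    summary: str          # written by a human curator
    curator_rating: int   # 1-5, assigned by a human, not by a ranking model

def search(index: list[CuratedEntry], query: str) -> list[CuratedEntry]:
    terms = query.lower().split()
    # "Relevance" is just term overlap with the curator-written title/summary...
    matches = [e for e in index
               if any(t in (e.title + " " + e.summary).lower() for t in terms)]
    # ...and the ordering is still an algorithm, however simple:
    # human rating first, then title, for a stable, explainable results page.
    return sorted(matches, key=lambda e: (-e.curator_rating, e.title))
```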
Additionally, you can pull out the somewhat old standby of claiming the search engine is AI-free - with LLMs and slop generators defining how the public views AI, presenting yourself as a bulwark against the slop-nami will be an easy marketing win.
(Sidenote: Between the ever-growing backlash against AI, boiling resentment against Silicon Valley, and the fact I found this an easy sell, I suspect this idea's time has indeed come.)
If good search does come back, it'll likely require heavy human curation to keep LLM noise as low as humanly possible. Automated methods can be easily SEO'd to death, but human curation's gonna be rather tough to game.
Hopefully, this will also kill any notion of tech being apolitical for good.
Over/under on when this gets compared to God Of War Ragnarok constantly spoiling puzzle solutions (relevant GMTK(?))?
This awful new thing will be tested on Xbox Insider Program members from next month. It will shortly be telling you to git gud and how it wants to spend some quality time with your mother.
call-backs my beloved
...eh, fuck it, here's my sidenote on Brian's piece:
Google and OpenAI's campaign gives me the suspicion that the ongoing copyright lawsuits may be what finally pops this bubble. Large Language Models are built through large-scale copyright infringement, and built to facilitate large-scale copyright infringement - if the actions of OpenAI and pals are ruled not to be fair use, it would be open season on LLMs.
In other news, BlueSky's put out a proposal on letting users declare how their data gets used, and the BlueSky post announcing this got some pretty hefty backlash - not for the proposal itself, but for the mere suggestion that their posts were scraped by AI. Given this is the same site which tore HuggingFace a new one and went nuclear on ROOST, I'm not shocked.
Additionally, Molly White's put out her thoughts on AI's impact on the commons, and recommended building legal frameworks to enforce fair compensation from AI systems which make use of the commons.
Personally, I feel that building any kind of legal framework is not going to happen - AI corps' raison d'être is to strip-mine the commons and exploit them in as unfair a manner as possible, and they're entirely willing to tear apart any and all protection (whether technological or legal) to make that happen.
As a matter of fact, Brian Merchant's put out a piece about OpenAI and Google's assault on copyright as I was writing this.
New piece from WIRED: Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models
I'll let Baldur do the talking here:
Literally what I and many others have been warning about. Using LLMs in your work is effectively giving US authorities central control over the bigotry and biases of your writing
No Firefox with uBlock Origin? Seems like that would be the obvious choice here (or maybe not due to Mozilla’s recent antics)
LibreWolf with uBlock Origin is probably the go-to right now.
some of their userland decisions partially inspired my rant about privacy communities; the other big inspiration was privacyguides.
I need to see this rant. If you can link it here, I'd be glad.
translation is IMO one of the use cases where LLMs actually have some use
How the fuck can a hallucinating bullshit machine have use in translation?
In other news, techbros are reportedly pushing their children into the arts.
This is pure gut instinct, but I suspect those kids are gonna be relentlessly bullied at school.