BlueMonday1984 @awful.systems
Between the gen-AI industry's nonstop lawsuits and public embarrassments like Google's glue pizza debacle, I'd bet good money Microsoft and OpenAI are gonna struggle to convince local journos that gen-AI's alleged benefits are worth the inevitable retractions/lawsuits/general pain and suffering.
We're gonna have enough material for an Elon diss track at this rate
Okay, personal thoughts:
This is just gut instinct, but it feels like generative AI is going to end up becoming a legal minefield once the many lawsuits facing OpenAI and others wrap up. Between the likes of Nashville's ELVIS Act, the federal bill for the COPIED Act, the solid case for denying Fair Use protection, and the absolute flood of lawsuits coming down on the AI industry, I suspect gen-AI will come to be seen by would-be investors as legally risky at best and a lawsuit generator at worst.
Also, Musk would've been much better off commissioning someone to make the image he wanted rather than grabbing a screencap Aicon openly said he was not allowed to use and laundering it through some autoplag. Moral and legal issues aside, it would have given us something much less ugly to look at.
Kendrick Zitron dropped - it mainly focuses on Prabhakar Raghavan's recent kicking upstairs and Google's bleak future.
Main highlight was this snippet:
I am hypothesizing here, but I think that Google is desperate, and that its earnings on October 30th are likely to make the street a little worried. The medium-to-long-term prognosis is likely even worse. As the Wall Street Journal notes, Google's ad business is expected to dip below 50% market share in the US in the next year for the first time in more than a decade, and Google's gratuitous monopoly over search (and likely ads) is coming to an end. It’s more than likely that Google sees AI as fundamental to its future growth and relevance.
In other news, there's been a statement on AI training that's racked up over 10k signatures, which unsurprisingly lambasts the rampant stealing that went into creating the autoplag machines:
Now, I'm way too much of a fan of sidenotes, so I'll whip one out:
Beyond simple content theft being publicly condemned, I suspect even licensed use of artists' work for gen-AI will ignite some controversy - if Eagan Tilghman's experience last year is any indication, any use of gen-AI, regardless of context, will be met with hostility.
I found the git master branch naming controversy a bit misguided, since to my mind the analogy was more “master copy” or “master recording” than “master of a slave”. This isn’t IDE. Who names their VCS branch “slave”?
In a better world, this would've probably been a solid argument for letting the master/slave naming convention stick around. We don't live in a better world.
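(For what it's worth, the rename itself is barely any work. Here's a rough sketch of what it involves - the branch and remote names ("master", "main", "origin") are assumptions on my part, not anything from the thread:)

```python
# Rough sketch (mine, not from the post): what renaming a repo's default
# branch involves. Assumes a local clone with a remote named "origin" and
# that the names in play are "master" and "main".
import subprocess

def git(*args: str) -> None:
    """Run a git subcommand, raising if it exits non-zero."""
    subprocess.run(["git", *args], check=True)

def rename_default_branch(old: str = "master", new: str = "main") -> None:
    git("branch", "-m", old, new)     # rename the local branch
    git("push", "-u", "origin", new)  # publish it and set it as upstream
    # Switching the default branch on the hosting side (GitHub, GitLab, etc.)
    # and deleting the old remote branch are separate, manual steps.

if __name__ == "__main__":
    rename_default_branch()
```

The only part git itself can't do is flipping the default branch on whatever forge hosts the repo; that happens in the host's settings.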
Parents Sue School That Gave Bad Grade to Student Who Used AI to Complete Assignment
An old and powerful force has entered the fraught debate over generative AI in schools: litigious parents angry that their child may not be accepted into a prestigious university.
The tabloids are gonna be going nuts over this.
Update: The QRTs are mainly sneering, but this one's particularly good
EDIT: Against my better judgment, I'm letting another sidenote come out:
If you wanna encourage people to drop the master/slave naming scheme, this guy probably gave you a good bit of ammo. Changing a random naming scheme is a pretty low-priority task under most circumstances, but it gets a lot more tempting when it lets you distance yourself from people like this.
I can pull finer bars out my arse than this fucking farce
This is probably Grok - creativity from him's pretty sparse
Choom thinks he's DOOM, but he won't beat him any time soon
With how much crack this whack goes through, he'll forget this before noon
(I'm no MF DOOM, but anything I can put out will beat this artless twat any day)
I'd be happy if the VCs responsible for this bubble died penniless, but I'll take them losing a lot of money
Now that the content mafia has realized GenAI isn't gonna let them get rid of all the expensive and troublesome human talent, it's time to give Big AI a wedgie.
Considering the massive(ly inflated) valuations floating around Big AI and the sheer amount of stolen work that powers the likes of CrAIyon, ChatGPT, DALL-E and others, I suspect the content mafia is gonna try and squeeze every last red cent they can out of the AI industry.
Considering Glaze and Nightshade have been around for a while (and I talked about sabotaging scrapers back in July), arguably it already has.
Hell, I ran across a much smaller-scale case of this a couple of days ago:
Not sure how effective it is, but if Elon's stealing your data for his autoplag no matter what, you might as well try to force-feed it as much poison as you can.
New piece from The Atlantic: The Age of AI Child Abuse Is Here, which delves into a large-scale hack of Muah.AI and the widespread problem of people using AI as a child porn generator.
And now, another personal sidenote, because I cannot stop writing these (this one's thankfully unrelated to the article's main point):
The idea that "[Insert New Tech] Is Inevitable™" (which Unserious Academic interrogated in depth, BTW) took a major blow when NFTs crashed and burned in full public view and rapidly turned into a pop-culture punchline.
That, I suspect, is helping to fuel the large-scale rejection of AI and resistance to its implementation - Silicon Valley's failure to make NFTs a thing has taught people that Silicon Valley can be beaten, that resistance is anything but futile.
Update: My previous statement was wrong; turns out @ai_shame is still around
Found a pretty good Tweet about HL: Alyx today:
🎶 Tryna strike a chord and it's probably A minorrrrrrrrrrr
(seriously, what the fuck HN)
anyone wanna take bets on how much pearlclutching surprisedpikachu we’ll see
I suspect we'll see a fair amount. To give some specifics:
- I suspect we'll see Sammy accused of endangering all of humanity for a quick buck - taking Altman at his word, OpenAI is attempting to create something which they themselves believe could wipe out humanity if they screw things up.
- I expect calls to regulate the AI industry will grow louder in response to this - what Sammy's doing here is giving the true believers more ammo to argue Silicon Valley could trigger the very robot apocalypse that Silicon Valley itself has claimed AI is capable of unleashing.
A quick update: @ai_shame is quitting Twitter, and Musk using posts for AI training is the reason why:
New piece from The Atlantic: The AI Boom Has an Expiration Date
The full piece is worth a read, but the conclusion's pretty damn good, so I'm copy-pasting it here:
All of this financial and technological speculation has, however, created something a bit more solid: self-imposed deadlines. In 2026, 2030, or a few thousand days, it will be time to check in with all the AI messiahs. Generative AI—boom or bubble—finally has an expiration date.