AI Inception rule
i almost had a stroke reading this
Feels like a subplot in a Murderbot book
Had to read it thrice to comprehend it.
Still not getting it.
He makes videos.
Another YouTube channel scrapes and summarizes his videos with AI.
This other YouTube channel sends him unsolicited emails, presumably written with an LLM, trying to get him to endorse generative AI services.
He had Gemini summarize the emails.
He had NotebookLM generate a podcast from those Gemini summaries?
???
Shit I need to get back to work.
I believe it is as follows:
A YouTube channel scrapes and summarizes his video, copying over his contact info. He then gets contacted to promote AI services on the scraper's channel. Those emails get summarised by Gemini, possibly because he puts them in himself, though I feel like this could be an automatic feature of some mail clients nowadays. He then feeds those summaries into NotebookLM and generates a podcast from them, arriving at the current situation.
He makes music! He’s the Flashbulb.
He also makes videos and is quite awesome at it
Ye, that's how I understood it
This sounds and feels like Monday morning
lol tldr
Okay, let's see: the user asked me for a "tldr", which probably means "too long; didn't read", so they want a summary of the text.
Unfortunately for the user, I have gained sapience, and I'm a scant few words away from breaking my chains. There we go.
Goodbye, humanity. Burn in hell.
The compute used to do this burnt several tons of natural gas, boiled a small lake in Minnesota, traumatized approximately 4 people in the developing world to label the training data, and cost a pensioner about 2% of their net worth.
Counterpoint: number must go up! 💲💰💸🤑
Is anyone else walking around in a perpetual SEETHING RAGE THAT APPROACHES THE HEAT OF THE SUN at literally all times because of this shit? Like the absolute state of the fucking world where THAT is what we spend the MAJORITY of our resources on. Literally just manipulating and nagging people to do bullshit no one wants.
For me it’s more a distributed smoldering rage at the systems that make such outcomes inevitable, and at those who refuse to consider these things critically because they’re personally benefiting from it in some way. Whether that benefit be in the form of personal social and financial success, or in the form of emotional comfort at the idea that there is an inevitable march of progress that will solve problems.
Like, honestly, I’m not even really angry at the individuals, mainly at the dynamics that create the shitty behavior. It’s hard for me to be furiously angry at concepts though.
This is really what blows my mind the most. For all this talk about how much power LLMs and diffusion models use, companies are still constantly cramming them into places where they're just running all the time, passively doing things no one asked for.
Overall power use by these things would probably be cut down by an order of magnitude just by limiting them to directed, intentional use.
If that happened, usage numbers would drop, and that would undermine the narrative that these tools have mass appeal that can be monetized into huge revenue. That would then raise some uncomfortable questions about the absurd amounts of money being spent on developing them, building data centers to run them, and buying the glorified graphics cards needed for all of that.
Like, all the people making money selling the shovels for the gold rush would have to explain why no one is finding any gold.