Yeah, the generative AI pollution feels a lot like the whole low-background steel thing - since the nuclear tests it's been nearly impossible for new steel not to be slightly radioactive, which means if people need uncontaminated steel they salvage it from ships that sank before those tests.
This is the exact metaphor I've been using when talking to people about the issue. Did we both get it from somewhere I can't remember, or is it just perfect?
Luckily radiation levels have pretty much dropped back to pre-war levels now, so new steel can be low-background as well. It was possible to make new low-background steel from 1945 onward too; it just would have been more expensive than salvaging pre-war ships. I like the analogy though, it fits.
That makes sense. Way too many web search results look and feel like they weren't written by a human lately. It's gotten even more difficult for me to figure out what's trustworthy and what isn't.
Yep, and the fact that they keep feeding these same results back to the AI is eventually going to make them lose their shit. I saw it mentioned in an article or video (can't remember which now) that when AI starts taking AI-created output as input it gets hallucinations, almost like schizophrenia.
When the first three results read like high schoolers copied the same source with slight wording changes, and they're all written in an extremely passive tone, my assumption is AI. Questions about things like cooking temps are the worst in my experience, and I assume that's because they're easy to automate.
Yeah, when I was looking for information about Tears of the Kingdom, around 90% of my search results were AI slop. I think I was looking for info about how weapon durability and fusion worked, and I kept getting badly reworded versions of the explanation of fusion from the gameplay teaser.
Actually... that reminded me of another TotK search I did: I was looking for where to farm some variety of lizalfos tail and kept getting AI articles that confused BotW locations with TotK ones. Amusingly, I eventually tried Google's chatbot out of exasperation and it actually proved more accurate than my search results.
Tie this in with the obvious oil pollution, and now Musk's radio-transmission pollution... Fucking corporations get to pollute the world in every way imaginable to chase a buck and we're left having to cope with their waste...
I agree that the approach is no longer viable but I strongly disagree with the rationale. It boils down to three key aspects:
Wordfreq works by scraping the "open web". As a result, it is being inundated with massive amounts of gpt spam articles. This is problematic in that it is not "natural language" between people but... those articles never were. If you think anyone talks like the average SEO recipe blog then... more on that later.
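For anyone who hasn't looked under the hood: conceptually, word-frequency tooling is just counting tokens over a big corpus and normalizing. Here's a toy stdlib sketch of that idea (not wordfreq's actual pipeline - the real thing does language-aware tokenization and weights multiple corpora):

```python
import re
from collections import Counter


def word_frequencies(corpus: str) -> dict[str, float]:
    """Toy relative-frequency counter over a text corpus.

    Real tools like wordfreq do much more (Unicode-aware
    tokenization, per-language corpus balancing, smoothing),
    but the core idea is the same: count, then normalize.
    """
    # Crude tokenizer: lowercase words, keeping apostrophes.
    tokens = re.findall(r"[a-z']+", corpus.lower())
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}


freqs = word_frequencies("the cat sat on the mat")
# "the" appears 2 times out of 6 tokens
```

The problem the wordfreq author describes is upstream of this math: when a big chunk of the corpus is GPT spam, the counts themselves stop meaning what they used to.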
Sites are increasingly locking down access to scraping their text. This... I actually think is really good. I strongly dislike that the locking down means "so that only people who pay us can train off of you", but I have always disliked the idea that people just train models off of social media with no consent whatsoever.
Funding for NLP research is basically dead. No arguments there and I have similar rants from different perspectives. But... that is when you learn how to call what you do AI to get back your old funding.
But I think the bigger part, that I strongly disagree with, is the idea that this is not the language of a post-2021 society. With points like
"Including this slop in the data skews the word frequencies."
But... look up "so-cal-ification" and how many people have some "valley girl" idioms and cadence to their normal speech because that is what we grew up on. Like, I say "like" a lot to chain thoughts together and am under no illusions that came from TV. Same with how you can generally spot someone who grew up reading SFF based on how they use some semi-obscure words and are almost guaranteed to mispronounce them.
Because it is the same logic as "literally there is no word that means literally anymore". Yeah, it is true. Yeah, it is annoying. But language evolves and it doesn't always evolve in ways that make sense.
Or, just look at how many people immediately started using the phrase "enshittification" every chance they got. Or who learned about the Ship of Theseus and apply it every chance they get.
Like (there it is again!), a great example is cell phones. Reality TV popularized the idea of putting your phone on speaker, holding it in the palm of your hand, and talking into it. That is fucking obnoxious and has made the world a worse place. But part of that was necessity (in reality TV it is so that the audience gets both sides of the call. In real life it is because of shit like the iPhone having a generation or two that would drop calls if you held it like a god damned phone) and then it is just that feedback loop. Cell phone companies design their phones to look good on TV when held that way and people who watch TV start doing that because all the cool people do it. And so forth.
AI has already begun to change language and it will continue to do so in the future. That is just reality and it is no different than radio and especially television leading to many regional dialects being outright wiped out.
The problem is that LLMs aren't human speech and any dataset that includes them cannot be an accurate representation of human speech.
It's not "LLMs convinced humans to use 'delve' a lot". It's "this dataset is muddy as hell because a huge proportion of it is randomly generated noise".
What is "human speech"? Again, so many people (around the world) have picked up idioms and speaking cadences based on the media they consume. A great example is that two of my best friends are from the UK but have been in the US long enough that their families make fun of them. Yet their kid actually pronounces it "al-you-min-ee-uhm" even though they both say "al-ooh-min-um". Why? Because he watches a cartoon where they pronounce it the British way.
And I already referenced socal-ification which is heavily based on screenwriters and actors who live in LA. Again, do we not speak "human speech" because it was artificially influenced?
Like, yeah, LLMs are "tainted" with the word "delve" (which I am pretty sure comes from youtube scripts anyway but...). So are people. There is a lot of value in researching the WHY a given word or idiom becomes so popular but, at the end of the day... people be saying "delve" a lot.
"Cell phone companies design their phones to look good on TV when held that way and people who watch TV start doing that because all the cool people do it. And so forth."
I strongly disagree with this. They're designed to look good no matter what. TV is an afterthought in the design of smartphones. But what do I know… I only worked on one of those projects.
But "enshittification" refers to a very specific cultural trend. The Ship of Theseus is someone trying to sound smart. These are not the same thing, even if some asshole tries to sound smart talking about the former. Others who are industry enthusiasts use it as shorthand for a very specific larger conversation.
Taking a step back, I wonder... we are reading this stuff now, so it affects us too. What if we have already stepped into a linguistic death spiral, a telephone game where each generation gets rehashed garbage from the last?