The sole purpose of language models is to lower the market value of human skills.
that's the point of every technological development lately
Government operates by accepting our individual political authority, and utilizing that investment of power to provide paid services to customers ("taxpayers").
Our investment of political power makes us shareholders. Like any shareholder, we are owed a return on our investment. The government should be paying a dividend to each and every citizen.
I'm 12 and this is deep
No tools in your shed, yet you curse the rising sun for stealing your craft.
All the world’s a stage and here you are playing pretend middle management with a racist pile of linear algebra. Adorable.
Here ya go. Learn how to write without getting your hand held by a synthetic text-extruding machine that guzzles more water than AI fanboys slurping corporations' sloppy seconds.
I think in a non-market economy I would still work on language models. It's cool that a machine can hold a conversation.
It's not the purpose of LLMs to lower the value of human skills; it's just the inevitable outcome.
Transcriptionist? Industry died with good voice recognition 10-20 years ago.
Ditch digging shovel crew? Dramatically de-valued with the advent of the steam-shovel...
and on and on... The theory goes that it gives people more free time, but the way wealth is distributed, it is dividing people into those with jobs serving the wealthy and those who live on handouts.
I think: non-stigmatized "handouts" for everybody are the way of a brighter future. UBI FTW.
Go outside. Touch grass. Talk to humans.
Talking to a hallucinating machine rots your brain:
I do those too! That's where the ideas for new architectures, datasets, and training tweaks come from! Math is fun, and it's fascinating that math can talk sometimes.
Edit: And I see now that we're editing messages after people reply? Rude, no? Designing a hallucinating machine certainly doesn't rot your brain.
Not entirely true. This works only for unskilled labour, meaning anyone who is not skilled in lying and cheating your way into a management position. Or poor people.
50 years ago - you won't believe how many paper pushers we'll be able to get rid of once this invention called computers takes off!
Also, man these robotic arms sure help automate our factories, but if you turn your head upside down you will see it as 'lowering the market value of human skills'.
Bwahahahahaha my aren’t you a “useful idiot”
AI literally makes people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
They are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
AND they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
At least a damn computer and machines work without hallucinating or faking the product.
The sole purpose of all tools ever created is to lower the value of human skills.
LLMs are hyped but have some value as a tool.
The only tool I see is you. AI apologists get blocked.
Stop living in a dumbass fucking filter bubble.
Blocking is for people who are abusive, not people who coherently express a point slightly different than the one you made.
And they are literally, unquestionably, and objectively right. Literally every single tool ever created was made to reduce the amount of labour it takes to do a task, which reduces the value of human labour. It's called automation. Read Karl Marx if you think you're such a leftist; he'll explain it to you.
I don't understand the wild-eyed hate for a spell checker on steroids. LLMs predict the next word, like the phone keyboard you use every day.
Hate the hype, not the tool.
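To make the "spell checker on steroids" point concrete, here's a minimal toy sketch in Python (the corpus and function names are invented for illustration) of the phone-keyboard kind of next-word prediction: count which word tends to follow which, then suggest the most frequent follower. A real LLM is trained on the same "predict the next token" objective, just with a neural network over tokens instead of a frequency table.

```python
# Toy illustration only: a phone-keyboard-style next-word predictor.
# It counts which word tends to follow which in a tiny made-up corpus.
from collections import Counter, defaultdict

def train_bigram_model(text: str) -> dict:
    """For every word, count which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model: dict, word: str):
    """Suggest the most frequent follower, like a keyboard's middle suggestion."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "hate the hype not the tool love the craft not the machine"
model = train_bigram_model(corpus)
print(predict_next(model, "not"))  # prints "the", the word that most often follows "not" here
```

Obviously this toy only looks one word back; the analogy is about the training objective, not the internals of a transformer.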
I'd rather be a decent and respectful AI apologist and tool than a shade-thrower. I implore you to research Gandhi, friend.
Please do me next. I hate it when people reply to me with dumb shit like that, missing the entire point, as I've seen with your other comments.
No, its purpose is to censor information and trap humans into a bubble where the only options will be corporate-locked chat or a physical book. After that, they'll cut the books by lobbying to replace public libraries with chat (because it's cheaper) and leave humanity with nothing. Look how they pressure education now. They want every human to speak with a robot so they can control us. It's not about the robot being good or wrong; it's about you reading what it prints and putting it into your brain so it melts it.
Nah, that's just a side effect. The primary purpose of AI is to be a hype object that companies can use to inflate their stock. I like how this blog post explained it: https://pluralistic.net/2025/06/30/accounting-gaffs/
The labor market effect will be quite short-lived. LLMs as a replacement for human labor will be gone before long, simply because they will ruin any company, government, or project that relies on them. It's a sort of natural selection.
I hope you're right.
I have doubt, but also hope.
LLMs have a more mystical pull than previous hypes, but look at what happened with crypto and blockchain. Every fucking government and company was saying that it's the future, and now nobody gives a shit about them anymore. It might be later than you and I would like, but eventually it will pass.
Most people don’t want to use them in the workplace, only the super tech bros from what I’ve seen. I don’t see AI having a big impact, especially since people don’t care to have it replace their jobs. I think if AI were here, then lots of jobs could be at risk, but LLMs aren’t AI and I doubt they will be in the next 5-10 years. LLM prompts can’t do 90% of all jobs. They can help summarize and pull a first draft for some stuff, but as an end product they can be pretty trash and don’t sound natural whatsoever.
I think you’re right about the LLMs ruining the entities that rely on them though.
Probably the biggest victim of the LLM hype is academia. I don't have anything quantitative to back this up, but the students around me who rely on ChatGPT a lot seem to be failing their exams more than those who don't.