My AI Skeptic Friends Are All Nuts
Every six months the tone of these "why won't you use my hallucinating slop generator" posts gets more and more shrill.
I think his point, that you basically give a slop generator a fitness function in the form of tests, compilation scripts, and static analysis thresholds, was pretty good. I never really thought of forcing the slop generator to generate slop randomly until it passes tests. That's a pretty interesting idea. Wasteful for sure, but I can see it saving someone a lot of time.
you basically give a slop generator a fitness function in the form of tests, compilation scripts, and static analysis thresholds
forcing the slop generator to generate slop randomly until it passes tests.
I have to chuckle at this because it's practically the same way that you have to manage junior engineers, sometimes.
It really shows how "barely good enough" is killing off all the junior engineers, and once I'm gone, who's going to replace me?
I'd much rather the slop generator wastes its time doing these repetitive and boring tasks so I can spend my time doing something more interesting.
Wasteful for sure, but I can see it saving someone a lot of time.
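The loop being described, feeding a generator's output through a test/compile/lint gate and retrying with the failure report until the gate passes, can be sketched in a few lines of Python. Everything here (the `generate` callback, the `run_tests` fitness function, the attempt limit) is hypothetical, standing in for whatever LLM API and test harness you actually use:

```python
def generate_until_green(generate, run_tests, max_attempts=5):
    """Repeatedly ask a (hypothetical) code generator for a candidate and
    keep the first one that passes the fitness function.

    generate(feedback)  -> str   : returns candidate code, e.g. an LLM API call
    run_tests(candidate) -> (bool, str) : the gate -- tests, compilation,
                                          static analysis -- plus a report
    """
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate(feedback)      # ask the slop generator
        ok, report = run_tests(candidate)   # apply the fitness function
        if ok:
            return candidate                # gate passed: accept the output
        feedback = report                   # feed the failure back into the prompt
    return None                             # give up after max_attempts
```

Wasteful, as noted above, since failed candidates are thrown away wholesale, but the human only has to review output that has already cleared the gate.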
Many degrowth proposals call for some aggregate reduction of energy use or material throughput. The problem with these proposals is that they conflict with the need to give the entire planet public housing, public transit, reliable electricity, modern water and sewage services, etc., which cannot be achieved by "shrinking material throughput". Modeling from Princeton University (which may be outdated) suggests that zeroing emissions by 2050 will require 80 to 120 million heat pumps, up to a fivefold increase in electricity transmission capacity, 250 large nuclear reactors (or 3,800 small ones), and the development of a new carbon capture and sequestration industry from scratch. Degrowth policies, while not intending to result in ecological austerity, effectively produce it through their commitment to budgetary constraints, which inevitably require government cuts.
The paragraph above is meant as an analogy to the controversy over "AI wastefulness". Relying on manual labor for software development could actually lead to more waste long term, and to a failure to resolve the climate crisis in time. Even though AI requires a lot of power, creating solutions faster (especially in green industries, along with reducing human-generated emissions such as commuting to work) could have a better and faster impact on reducing emissions.
The argument that workers should capture AI instead of the ruling class is interesting, but let me ask you:
Has there been a single technology entirely captured by and for the workers in history, ever? Hasn't every piece of technology been used primarily by the working class, yes, while the direction it develops in and what value it produces is decided by the ruling class? It always has been, unless we can remove them from controlling the mode of production.
I think China is an interesting example of this, where the workers' party controls the majority of the economy and wouldn't let a program like DeepSeek threaten to unemploy half of its economy (America does probably have a larger segment dedicated to programming, though, Silicon Valley and all). Even then, the average worker there has more safety nets.
The threat I see is the dominance of AI services provided by an oligarchy of tech companies. Like Google's dominance of search. It's a service that they own.
Thankfully China is a source of alternative AI services AND open source models. The bonus is that Chinese companies like Huawei are also an alternative source of AI hardware. This allows you to run your own AI models so you don't necessarily need their services.
You're thinking of class war. There's only one proven way to win that war: the working class rises up, kills some MFers, and takes over. There's no point smashing the loom. Kill the loom owners and take their looms.
I'm well aware, I'm just wary of the framing that we need to "take over this tool" when in reality we'll just interact with it and use it like we do any technology under this mode of production. Any technology, any tool, can realistically be turned that way. I don't see how AI is special in this regard, other than its obvious uses in coding.
The mistake I think we can avoid is letting AI make management or executive decisions, since, as the old IBM quote goes, it can never be held accountable.
Has there been a single technology entirely captured by and for the workers in history, ever?
No, technology has no ideology, which is why we shouldn't be opposed to using the tools that the ruling class uses against us. The Chinese communists didn't win the civil war without using guns or without studying military tactics and logistics.
Absolutely not. I'm not saying that we shouldn't, I suppose looking at my response to Yogthos explains my position better.
Also, I don't think being wary of it because it doesn't yet have a clear, distinct use-case in politics or against the capitalists makes someone anti-AI or reactionary. I think being cautious with any new technology is reasonable.
So much this!
Technology absolutely has an ideology. All technology produces winners and losers, complicates previous tasks while making some easier, and overlaps heavily with futurism. If tech doesn't have an ideology, then we would say Luddites and Amish are merely social clubs, and not social movements.
Correct. We can use carbines and rifle equivalents while the enemy builds massive data centers in third-world countries and marginalized communities, using the technology to ramp up global exploitation of the third world, squeezing out its minor white-collar industries for even more productivity, racing to keep up as wages keep falling and productivity skyrockets globally.
I'm glad that while this happens we can have an open-source equivalent. Do you see why people are so glum or dismal about it?
I mean, technology will be used to oppress workers under capitalism. That is why Marxists fundamentally reject capitalist relations. However, given that people in the west do live under capitalism currently, the question has to be asked whether this technology should be developed in the open and driven by community or owned solely by corporations. This is literally the question of workers owning their own tools.
It already is, as far as I'm aware. The issue I'm having is the idea of it being framed as a technology we as Marxists can co-opt. If it has its uses in coding or for projects within Marxism, sure, but as far as I'm concerned I don't really see a valid use in integrating it as it exists within parties or politics, other than data storage/organization, and I imagine there are better options for that. Maybe in the future, though.
As long as capitalism exists, I don't think we "own" any tools without a proper worker's party to enforce regulations and protect workers in the West. That is the reason I brought up China. I have no objections to open-source alternatives though, but I don't think us developing open-source tools is going to stop the majority of the use of this tool harming workers. Hence my issue with the idea of "owning it". We certainly can use it though.
If people can build it, it can serve the people. Think of open-weights LLMs. If we got a couple of 32B models that score as high as GPT-4o and Claude-3.5, why not use them? They can be run on mid-to-high-end hardware. There are developers out there doing a good job. It doesn't need to be a datacenter/big tech company centered scenario.
There are many technologies that serve the people but whose value is nonetheless captured and extracted mainly by the ruling class of our mode of production. Extracting value from it for ourselves and our own projects doesn't mean that we own it.
My point was also that, despite our efforts, corporations and the ruling class will still build destructive data centers and big tech.
Thanks for sharing these AI posts.
Paid employment could mean retraining under socialism. Remember, communism is moneyless, stateless, and classless. The aim of society is the socialisation of all labour to free up time for more leisure, including art. People will still want art from humans even with AI around, but there's a difference between that and preserving regression through Ludditism to maintain less productive paid labour.
Equating anti-capitalism with anti-corporatism, the appeal to Ludditism, the defense of proprietorship, or the appeal to metaphysical creativity is not going to cut it, and that is a low bar to clear for Marxists.
My party is trying its best to understand and implement AI, and it's causing some friction within the party. The official stance now adopted is "we need to understand it and use it to our advantage" and "we need to prevent AI being solely a thing of the ruling class," and to me that makes sense. I wasn't around at the time, but I imagine it was the same with the coming of the internet some decades ago, and we can see how that ended. I hope socialist orgs don't miss the boat this time.
I think that's precisely the correct stance. As materialists we have to acknowledge that this technology exists, and that it's not going away. The focus has to be on who will control this tech and how it will be developed going forward.
I think this is the best approach. I'm still wary, but this is the viewpoint I've been coming to understand.
The difference is the internet wasn't built on theft with the explicit goal of disempowering workers
not many people know this, but DARPA created it out of love and kindness
But neither is AI. It is not sentient. It is pushed in the direction of the ones controlling it. Which currently is the tech oligarchy. Realising that and trying to find a way to navigate that will put you at an advantage.
See, for coding AI makes a lot of sense, since a lot of it is very tedious. You still need to understand the output to be able to debug it and make novel programs, though, because the limitation is that the LLM can only recreate code it's seen before in fairly generic configurations.
Right, I agree with the author of the article that LLMs are great at tackling boring code like boilerplate, and freeing you up to actually do stuff that's interesting.
I find the tone kind of slapdash. Feel like the author could have condensed it to a small post about using AI agents in certain contexts, as that seems to be the crux of their argument for usefulness in programming.
I do think they have a valid point about some in tech acting squeamish about automation when their whole thing has been automation from day one. That said, I also think the idea of AI doing "junior developer" level work is going to backfire massively on the industry. Seniors start out as juniors, and AI is not going to progress fast enough to replace seniors, probably not for decades (I could see it replacing some seniors, but not with the level of trust and competency that would allow it to replace all of them). But AI could replace a lot of juniors and effectively lock the field into a trajectory of aging itself out of existence, because too few humans would get the experience needed to take over the senior roles.
Edit: I mean, it's already the case that dated systems sometimes use languages nobody is learning anymore. That kind of thing could get much worse.
The developer pipeline is the big question here. My experience using these tools is that you absolutely have to know what you're doing in order to evaluate the code LLMs produce. Right now we have a big pool of senior developers who can wrangle these tools productively and produce good code using them because they understand what the proper solution should look like. However, if new developers start out using these tools directly, without building prior experience by hand, then it might be a lot harder for them to build such intuition for problem solving.
Yogthos is really relentless with all these AI posts. You're not fighting for the poor defenseless AI technologies against the tyrannical masses with these posts.
People are clearly pissed off at the current state of these technologies and the products of it. I would have expected that here out of all places that the current material reality would matter more than the idealistic view of what could be done with them.
I don't mean for this comment to sound antagonistic, I just feel that there's more worthwhile things to focus on than pushing back against people annoyed by AI-generated memes and comics and calling them luddites.
This post is about what could be done with them though. It's not about image generators, it's about coding agents. LLMs are really good at programming certain things and it's gotten to the point where avoiding them puts you at a needless disadvantage. It's not like artisanally typed code is any better than what the bot generates.
That is all well and good, but this is not a conversation about "using LLMs in this specific scenario is advantageous". I'm talking about the wider conversation mostly happening on this instance.
It's quite frustrating when people express certain material concerns about the current state of the technology and its implications and are met with bad-faith arguments, hand-waving, and idealism. Especially when it's not an important conversation to be happening here anyway. It's mostly surfaced because people here react negatively to the AI-generated memes that Yogthos posts and that of course makes them irrational primitivists.
It's needless antagonism that is not productive whatsoever over a topic that is largely out of the hands of workers anyway.
Irrational hate of new technology isn't going to accomplish anything of value.
That's completely missing the point I am making. I am not advocating for 'irrational hate' of a technology. I am saying that people are not receptive to the current implementations of it, and that trying to combat this through pushing against this sentiment is ultimately a waste of time.
Assess the situation on what is, not on the premise of a utopian ideal.
Yogthos is really relentless with all these AI posts.
They are consistent in their boosterism, so credit where credit is due.
Turns out my assumptions about how LLM-assisted programming works were completely wrong or outdated. This new way sounds super efficient.
At this point, anti-AI sentiment is just cope. AI is here to stay. For the people against AI, what is the praxis that must be undertaken against AI? AI, like any other tool, is lifeless but has living users that use, support, and develop it, so the question of praxis against AI becomes a question of praxis against workers who use, develop, and propagate AI.
This is why the Luddites failed. The Luddites had enough people to conduct organized raids, but the fact that those machines were installed, and continued to be installed, by other workers meant that the Luddites represented a minority of workers. If they had a critical mass of workers on their side, that machinery would simply never have been installed in the first place. Who else is going to install the machinery, the bourgeoisie, the gentry, and a bunch of merchants involved in the trafficking of African slaves?
Those looms didn't sprout legs and install themselves. They were installed by other workers, workers who, for whatever reason, disagreed with the Luddites' praxis or ideology. Viewed in this context, it makes sense why the Luddites failed in the end. Who cares if 500 looms got smashed by the Luddites if 600 looms got installed by non-Luddite workers anyway?
Corps are already starting to build underground data centers, so you and your plucky guerilla band of anti-AI insurgents can't just firebomb a data center that's built from a repurposed nuclear bunker. Pretty much all of the AI scientists pushing the field forward are Chinese scientists safely located within the People's Republic of China, so liquidating AI scientists for being class traitors is out of the question. Then what else is left in terms of anti-AI praxis besides coping about it online and downvoting pro-AI articles on some cheap knockoff of Reddit?
This is precisely what I've been trying to explain to people as well. Corporations will keep developing this technology. Nothing will stop this. It’s happening. So the only question that matters is: How will it be developed, and who controls it?
The irony is that fighting against the use of this tech outside corporations guarantees corps become its sole owners. The only rational path is to back community-driven development, just like any other open-source alternatives to corporate tools. Worker-owned. Community-controlled.
It’s mind-boggling that so many people fail to understand this.
Do you like fine Japanese woodworking? All hand tools and sashimono joinery?
This should sell it; why would anyone want something more expensive just because it was handmade instead of mass-produced?
Counterpoint: Vibe Coding Will Rob Us of Our Freedom - IT Notes https://it-notes.dragas.net/2025/06/05/vibe-coding-will-rob-us-of-our-freedom/
If you're not coding in assembly you're not a real programmer vibes there.
Really? I got "if you don't understand the code you're producing, then that's a real problem, not just for you but for software development as a whole".
I don't think that was their vibes.
From article:
the point is not to let ourselves be replaced by AIs, but to use them to improve ourselves and our productivity
My take:
The role of the programmer is ultimately to solve problems. There are many ways to skin a cat. The better solutions come from the better programmers.
Bosses under capitalism have less understanding of the pros and cons of a particular solution. Hence they will often use their decision-making powers to choose the quick solution rather than the best one.
why did I do computer science, god I fucking hate every person in this field, it's amazing how much of an idiot everyone is.
You can tell the ones that got A's in their comp sci classes and C's in their core/non-major classes by how bloodthirsty they are.
Me, the enlightened centrist, just got C's in everything
Writer is the type of guy to only fail his ethics courses