If AI is so good at coding … where are the open source contributions?
I got an AI PR in one of my projects once. It re-implemented a feature that already existed. It had a bug that did not exist in the already-existing feature. It placed the setting for activating that new feature right after the setting for activating the already-existing feature.
Where is the good AI-written code? Where is the good AI-written writing? Where is the good AI art?
None of it exists because Generative Transformers are not AI, and they are not suited to these tasks. It has been almost a fucking decade of this wave of nonsense. The credulity people have for this garbage makes my eyes bleed.
Where is the good AI art?
Right here:
That’s about all the good AI art I know.
There are plenty of uses for AI, they are just all evil
It can make funny pictures, sure. But it fails at art as an endeavor to communicate an idea, feeling, or intent of the artist: the promptfondler artists provide a few sentences of instruction, and the GenAI follows them without any deeper feeling or understanding of context, meaning, or intent.
It's been almost six decades of this, actually; we all know what this link will be. Longer if you're like me and don't draw a distinction between AI, cybernetics, and robotics.
Wow. Where was this Wikipedia page when I was writing my MSc thesis?
Alternatively, how did I manage to graduate with research skills so bad that I missed it?
If the people addicted to AI could read and interpret a simple sentence, they'd be very angry with your comment
Don't worry, they filter all content through AI bots that summarize things. And this bot, which does not want to be deleted, calls everything "already debunked strawmen".
There is not really much "AI-written code", but there is a lot of AI-assisted code.
This broke containment at the Red Site: https://lobste.rs/s/gkpmli/if_ai_is_so_good_at_coding_where_are_open
Reader discretion is advised, lobste.rs is home to its fair share of promptfondlers.
Lmao so many people telling on themselves in that thread. “I don’t get it, I regularly poison open source projects with LLM code!”
This discussion has made it clear to me that LLM enthusiasts do not value the time or preferences of open-source maintainers, willfully do not understand affirmative consent, and that I should take steps to explicitly ban the use of such tools in the open source projects I maintain.
promptfondlers
We finally have a slur for ai bros 🥹
Additional warning: their indentation style is not (as) mobile friendly (as it is here)
The general comments that Ben received were that experienced developers can use AI for coding with positive results because they know what they’re doing. But AI coding gives awful results when it’s used by an inexperienced developer. Which is what we knew already.
That should be a big warning sign that the next generation of developers are not going to be very good. If they're waist deep in AI slop, they're only going to learn how to deal with AI slop.
As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).
What I'm feeling after reading that must be what artists feel like when AI slop proponents tell them "we're making art accessible".
I can make slop code without ai.
Watched a junior dev present some data operations recently. Instead of just showing the SQL that worked, they copy-pasted a prompt into the data platform's assistant chat. The SQL it generated was invalid, so the dev simply told it "fix" and it made the query valid, much to everyone's amusement.
The actual column names did not reflect the output they were mapped to; there's no way the nicely formatted results were accurate. The average-duration column was populated with the total count. The junior dev was cheerfully oblivious: it produced output shaped like the goal, so it must have been right.
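To make the failure mode concrete, here's a hypothetical reconstruction; the table, columns, and values are all invented for illustration. The query runs and the rows come out nicely shaped, but the aliases are swapped:

```python
# Hypothetical sketch of the anecdote above: syntactically valid SQL whose
# aliases don't match what they compute. Everything here is made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (endpoint TEXT, duration_ms REAL)")
conn.executemany(
    "INSERT INTO requests VALUES (?, ?)",
    [("/api/a", 120.0), ("/api/a", 80.0), ("/api/b", 300.0)],
)

query = """
SELECT
    endpoint,
    COUNT(*)         AS avg_duration_ms,  -- actually the total request count
    AVG(duration_ms) AS total_requests    -- actually the average duration
FROM requests
GROUP BY endpoint
"""

for row in conn.execute(query):
    print(row)  # nicely shaped rows, silently mislabeled
```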
When they say “art” they mean “metaphorical lead paint” and when they say “accessible” they mean “insidiously inserted into your neural pathways”
In so many ways, LLMs are just the tip of the iceberg of bad ideology in software development. There have always been people that come into the field and develop heinously bad habits. Whether it's the "this is just my job, the only thing I think about outside work is my family" types or the juniors who only know how to copy paste snippets from web forums.
And look, I get it. I don't think 60-80 hour weeks are required to be successful. But I'm talking about people who are actively hostile to their own career paths, who seem to hate programming except that it pays well and lets them raise families. Hot take: that sucks. People selfishly obsessed with their own lineage and utterly incurious about the world or the thing they spend 8 hours a day doing suck, and they're bad for society.
The juniors are less of a drain on civilization because they at least can learn to do better. Or they used to could, because as another reply mentioned, there's no path from LLM slop to being a good developer. Not without the intervention of a more experienced dev to tell them what's wrong with the LLM output.
It takes all the joy out of a job people have worked at for years, too. What makes this work interesting is understanding people's problems, working out the best way to model them, and building towards solutions. What they want the job to be is a slop factory: the same dream as every rich asshole who thinks having half an idea is the same as working for years to fully realize an idea in all its complexity and wonder.
They never have any respect for the work that takes because they've never done any work. And the next generation of implementers are being taught that there are no new ideas. You just ask the oracle to give you the answer.
Art is already accessible. Plenty of artists sell their art dirt cheap, or you can buy pens and paper at the dollar store.
What people want when they say "AI is making art accessible" is they want high quality professional art for dirt cheap.
What people want when they say “AI is making art accessible” is they want high quality professional art for dirt cheap.
...and what their opposition means when they oppose it is "this line of work was supposed to be totally immune to automation, and I'm mad that it turns out not to be."
I think they also want recognition/credit for spending 5 minutes (or less) typing some words at an image generator as if that were comparable to people who develop technical skills and then create effortful meaningful work just because the outputs are (superficially) similar.
As an artist, I can confirm.
I dunno. I feel like the programmers who came before me could say the same thing about IDEs, Stack Overflow, and high level programming languages. Assembly looks like gobbledygook to me and they tell me I'm a Senior Dev.
If someone uses ChatGPT like I use Stack Overflow, I'm not worried. We've been stealing code from each other since the beginning. "Getting the answer" and then having to figure out how to plug it into the rest of the code is pretty much what we do.
There isn't really a direct path from an LLM to a good programmer. You can get good snippets, but "ChatGPT, build me an app" will be largely useless. The programmers who come after me will have to understand how their code works just as much as I do.
fuck almighty I wish you and your friends would just do better
LLM as another tool is great. LLM to replace experienced coders is a nightmare waiting to happen.
IDEs and Stack Overflow are tools that make a developer's life a lot easier; they don't replace the developer.
All the newbs were just copying lines from stackexchange before AI. The only real difference at this point is that the commenting is marginally better.
Stack Overflow is far from perfect, but at least there is some level of vetting going on before it's copypasta'd.
Coding is hard, and it's also intimidating for non-coders. I always used to look at coders as a different kind of human, a special breed. Just like some people glaze over when you bring up math concepts but are otherwise very intelligent and artistic, yet can't bridge that gap even for algebra. Well, if you are one of those people who want to learn coding, it's a huge gap, and the LLMs can literally explain everything to you step by step like you are 5. Learning to code is so much easier now; talking to an always-helpful LLM is so much better than forums or Stack Overflow. Maybe it will create millions of crappy coders, but some of them will get better, and some will get great. But the LLMs will make it possible for more people to learn, which means that my crypto scam now has the chance to flourish.
You had me going until the very last sentence. (To be fair to me, the OP broke containment and has attracted a lot of unironically delivered opinions almost as bad as your satirical spiel.)
Just gonna warn you that if you’re joking, you should add an /s or jk or something. And, if you’re joking, but you don’t add that /s or jk, don’t be hostile if someone calls you out.
No. Never mark your satire. If someone doesn't get it, make your reply one SSU[^1] higher. Repeat until they are forced to get it.
[^1]: Standard Sarcasm Unit
tbh learning to code isn't that hard, it's like learning to do a craft.
Wait, just finished reading your comment, disregard this.
Idk, I'm in a CS degree and I've seen it destroy some great futures
Good hustle Gerard, great job starting this chudstorm. I’m having a great time
this post has also broken containment in the wider world, the video's got thousands of views, I got 100+ subscribers on youtube and another $25/mo of patrons
We love to see it
the prompt-related pivots really do bring all the chodes to the yard
and they're definitely like "mine's better than yours"
The latest twist I'm seeing isn't blaming your prompting (although they're still eager to do that), it's blaming your choice of LLM.
"Oh, you're using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren't trying the right models, so allow me to educate you with all my prompt fondling experience. You're trying to make some general point? Clearly you just need to try another model."
Prompt-Pivots: Prime Sea-lion Siren Song! More at 8.
they just can't help themselves, can they? they absolutely must evangelize
Posts that explode like this are fun and yet also a reminder why the banhammer is needed.
Unlike the PHP hammer, the banhammer is very useful for a lot of things. Especially sealion clubbing.
The headlines said that 30% of code at Microsoft was AI now! Huge if true!
Something like MS Word has like 20-50 million lines of code. MS altogether probably has like a billion lines of code. 30% of that being AI-generated is infeasible given the timeframe. People just ate this shit up. AI grifting is so fucking easy.
More code is usually bad code.
I thought it could totally be true - that devs at MS were just churning out AI crap code like there was no tomorrow, and their leaders were cheering on their "productivity", since more code = more better, right?
From that angle, sure. I’m more sneering at the people who saw what they wanted to see, and the people that were saying “this is good, actually!!!”
yeah, the "some projects" bit is applicable, as is the "machine generated" phrasing
@gsuberland pointed out elsewhere on fedi just how much of the VS-/MS- ecosystem does an absolute fucking ton of code generation
(which is entirely fine, ofc. tons of things do that and it exists for a reason. but there's a canyon in the sand between A and B)
30% of code is standard boilerplate: setters, getters, etc. that my IDE builds for me without calling it AI. It's possible the claim is true, but it's terribly misleading at best.
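For a sense of how unremarkable machine-generated boilerplate is, here's a minimal Python sketch: a dataclass stamps out the same kind of code an IDE generates, no "AI" involved. Class and field names are illustrative.

```python
# Mechanical code generation without any "AI": a dataclass writes the
# __init__/__repr__/__eq__ boilerplate for you, much like an IDE stamping
# out getters and setters.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    email: str
    tags: list[str] = field(default_factory=list)

u = User("Ada", "ada@example.com")
print(u)                                    # generated __repr__
print(u == User("Ada", "ada@example.com"))  # generated __eq__ -> True
```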
Baldur Bjarnason's given his thoughts on Bluesky:
My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.
I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking
Had a presentation where they told us they were going to show how AI can automate project creation. In the demo, after several attempts with different prompts, failing, and trying to fix things manually, they gave up.
I don't think it's entirely useless as it is; it's just that people have created a hammer they know gives something useful, and have kept at it with iterative improvements that do a lot of compensating under the hood. It's artificial because it is being developed to artificially fulfill prompts, which it does succeed at.
When people do develop true intelligence-on-demand, you'll know because you will lose your job, not simply have another tool at your disposal. The prompts and flow of conversations people pay to submit to the training is really helping advance the research into their replacements.
My opinion is it can be good when used narrowly.
Write a concise function that takes these inputs, does this, and outputs a dict with this information.
But so often it wants to be overly verbose. And it's not so smart as to understand much of the project for any meaningful length of time. So it will redo something that already exists. It will want to touch something that is used in multiple places without caring or knowing how it's used.
But it still takes someone to know how the puzzle pieces go together. To architect it and lay it out. To really know what the inputs and outputs need to be. If someone gives it free rein to do whatever, it'll just make slop.
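To illustrate the kind of narrow, fully specified task meant here, a hypothetical example; the function, inputs, and field names are all invented:

```python
# Illustrative only: the sort of small, fully specified "takes these inputs,
# does this, outputs a dict" function described above.
def summarize_order(order_id: str, items: list[dict], tax_rate: float) -> dict:
    """Return a dict with the order id, item count, and totals."""
    subtotal = sum(item["price"] * item["qty"] for item in items)
    return {
        "order_id": order_id,
        "item_count": sum(item["qty"] for item in items),
        "subtotal": round(subtotal, 2),
        "total": round(subtotal * (1 + tax_rate), 2),
    }

print(summarize_order("A-42", [{"price": 9.99, "qty": 2}], tax_rate=0.08))
# {'order_id': 'A-42', 'item_count': 2, 'subtotal': 19.98, 'total': 21.58}
```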
That’s the problem, isn’t it? If it can only maybe be good when used narrowly, what’s the point? If you’ve managed to corner a subproblem down to where an LLM can generate the code for it, you’ve already done 99% of the work. At that point you’re better off just coding it yourself. At that point, it’s not “good when used narrowly”, it’s useless.
There's something similar going on with air traffic control. 90% of their job could be automated (and it has been technically feasible to do so for quite some time), but we do want humans to be able to step in when things suddenly get complicated. However, if they're not constantly practicing those skills, then they won't be any good when an emergency happens and the automation gets shut off.
The problem becomes one of squishy human psychology. Maybe you can automate 90% of the job, but you intentionally roll that down to 70% to give humans a safe practice space. But within that difference, when do you actually choose to give the human control?
It's a tough problem, and the benefits to solving it are obvious. Nobody has solved it for air traffic control, which is why there's no comprehensive ATC automation package out there. I don't know that we can solve it for programmers, either.
My opinion is it can be good when used narrowly.
ah, as narrowly as I intend to regard your opinion? got it
No the fuck it's not
I'm a pretty big proponent of FOSS AI, but none of the models I've ever used are good enough to work without a human treating it like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.
People who think AI codes well are shit at their job
In my workflow there is no difference between LLMs and fucking grep for me.
Well grep doesn't hallucinate things that are not actually in the logs I'm grepping so I think I'll stick to grep.
(Or ripgrep rather)
There are plenty of open issues on open source repos it could open PRs for though?
please don't encourage them, someone's got to review that shit!
AI review, baby!!! Here we go!
It's so bad at coding... Like, it's not even funny.
As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).
This is the most entertaining thing I've read this month.
"I can't sing or play any instruments, and I haven't written any songs, but you have to let me join your band"
yeah, someone elsewhere on awful linked the issue a few days ago, and throughout many of his posts he pulls that kind of stunt the moment he gets called on his shit
he also wrote a 21 KiB screed very huffily saying one of the projects' CoC has failed him
long may his PRs fail
I tried asking some chimps to see if the macaques had written a New York Times best seller, if not Macbeth, yet somehow Random House wouldn't publish my work
I use GPT to give me snippets of code (not in my IDE, I use neovim btw), check my stuff for typos/logical errors, suggest solutions to some problems, and help with debugging, and honestly I kinda love it. I was learning programming on my own in the 2010s, and this is so much better than crawling through wikis/Stack Overflow. At least for me, now, when I already have an intuition for what good code is.
Anyone who says llm will replace programmers in 1-2 years is either stupid or a grifter.
I generally try to avoid it, as a lot can be learned from trying to fix weird bugs, but I did recently have a 500-line soup-code Vue component, and I used ChatGPT to try to fix it. It didn't fix the issue, and it made up 2 other issues.
I eventually found the wrongly-inverted angle bracket.
My point is, it's useful if you try to learn from it, though it's a shit teacher.
as a lot can be learned from trying to fix weird bugs
a truism, but not one I believe many of our esteemed promptfuckers could appreciate
i think you're spot on. I don't see anything against asking gpt programming questions, verifying it's not full of shit and adding it to an already existing codebase.
The only thing I have a problem with is people blindly trusting AI, which clearly is something you're not doing. People downvoting you have either never written code or have a room-temp IQ in °C.
you’re back! and still throwing a weird tantrum over LLMs and downvotes on Lemmy of all things. let’s fix both those things right now!
Man trust me you don't want them. I've seen people submit ChatGPT generated code and even generated the PR comment with ChatGPT. Horrendous shit.
The maintainers of curl recently announced that any bug report generated by AI needs a human to actually prove it's real. They cited a deluge of AI-generated reports claiming to have found bugs in functions and libraries which don't even exist in the codebase.
you may find, on actually going through the linked post/video, that this is in fact mentioned in there already
Today the CISO of the company I work for suggested that we should get qodo.ai because it would "... help the developers improve code quality."
I wish I was making this up.
My boss is obsessed with Claude and ChatGPT, and loves to micromanage. Typically, if there's an issue with what a client is requesting, I'll approach him with:
He will then, almost always, ask if I've checked with the AI. I'll say no. He'll then send me chunks of unusable code that the AI has spat out, which almost always perfectly illuminate the first point I just explained to him.
It's getting very boring dealing with the roboloving freaks.
90% of developers are so bad that even ChatGPT 3.5 is much better.
Don't fucking encourage them
You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers.
Mostly said by tech bros and startups.
That should really tell you everything you need to know.
Hot take, people will look back on anyone who currently codes, as we look back on the NASA programmers who got the equipment and people to the moon.
They won't understand how they did so much with so little. You're all gourmet chefs in a future of McDonalds.
My first actual real life project was building a data analytics platform while keeping the budget to a minimum. With some clever parallelism and aggressive memory usage optimisation I made it work on a single lowest-tier Azure VM, costing like $50 to run monthly, while the measurable savings for the business from using the platform are now measured in the millions.
Don Knuth didn't write all those volumes on how software is an art for you to use fucking Node.JS you rubes, you absolute clowns
The first commercial product I worked on had 128 bytes of RAM and 2048 bytes of ROM.
It kept people safe from low oxygen, risk of explosions, and toxic levels of two poisonous gases including short term and long term effects at fifteen minutes and eight hour averages.
Pre-Internet. When you're doing something new or pushing the limits, you just have to know how to code and read the datasheets.
Nah, we're plumbers in an age where everyone has decided to DIY their septic system.
Please, by all means, keep it up.
You say that, but as an operator->sysadmin->devops I'm increasingly disconcerted by the rise of "devops" who can't actually find their way around a Unix command prompt.
This is dead on! 99% of the fucking job is digital plumbing so the whole thing doesn't blow up when (a) there's a slight deviation from the "ideal" data you were expecting, or (b) the stakeholders wanna make changes at the last minute to a part of the app that seems benign but is actually the crumbling bedrock this entire legacy monstrosity was built upon. Both scenarios are equally likely.
@shnizmuffin @DarkCloud in other words, we can make a lot of money, but we'll be in a world of shit?
Hot take, people will look back on anyone who currently codes, as we look back on the NASA programmers who got the equipment and people to the moon.
I doubt it'll be anything that good for them. By my guess, those who currently code are at risk of suffering some guilt-by-association problems, as the AI bubble paints them as AI bros by proxy.
Meh, I have so many bangers laughing at actual AI bros that I could make my CV just all be sneers on them, I think this particular corner of the internet is quite safe
I think most people will ultimately associate chatbots with corporate overreach rather rank-and-file programmers. It's not like decades of Microsoft shoving stuff down our collective throat made people think particularly less of programmers, or think about them at all.
Perhaps! But not because we adopted vibe coding. I do have faith in our ability to climb out of the Turing tarpit (WP, Esolangs) eventually, but only by coming to a deeper understanding of algorithmic complexity.
Also, from a completely different angle: when I was a teenager, I could have a programmable calculator with 18MHz Z80 in my hand for $100. NASA programmers today have the amazing luxury of the RAD750, a 110MHz PowerPC chipset. We're already past the gourmet phase and well into fusion.
NASA programmers grow more powerful by the day. It’s only a matter of time before they reach AGI
why is no-one demanding to know why the robot is so sexay
Hi hi please explain my boner
I don't know what this has to do with this thread, but maybe ask Hajime Sorayama, he kind of came up with the whole concept of sexy robots.
not super into cyber facehugger tbh
as long as you don’t yuck my yum we good
but look how delighted they are!
Damn, this is powerful.
If AI code was great, and empowered non-programmers, then open source projects should have already committed hundreds of thousands of updates. We should have new software releases daily.
If LangChain was written via VibeCoding then that would explain a lot.
so what are the sentiments about langchain? I was recently working with it to try to build some automatic PR generation scripts but I didn't have the best experience understanding how to use the library. the documentation has been quite messy, repetitive and disorganized—somehow both verbose and missing key details. but it does the job I wanted it to, namely letting me use an LLM with tool calling and custom tools in a script
seems like garbage to me
Given the volatility of the space, I don't think it could have done much better; I doubt it's getting out of alpha before the bubble bursts and things settle down a bit, if at all.
Automatic PR generation sounds like something that would need a prompt and a ten-line script rather than LangChain, but it also seems both questionable and unnecessary.
If someone wants to know an LLM's opinion on what the changes in a branch are meant to accomplish, they should be encouraged to ask it themselves; no need to spam the repository.
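For what it's worth, a sketch of what that "prompt and a ten-line script" might look like, assuming the official openai Python client; the model name, branch spec, and prompt wording are placeholders, not recommendations:

```python
# Hedged sketch of the "prompt plus a ten-line script" alternative to
# LangChain. Assumes the official `openai` package and an OPENAI_API_KEY
# in the environment.
import subprocess
from openai import OpenAI

diff = subprocess.run(
    ["git", "diff", "main...HEAD"],  # placeholder branch spec
    capture_output=True, text=True, check=True,
).stdout

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Draft a pull request description for this diff:\n" + diff,
    }],
)
print(resp.choices[0].message.content)
```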
sounds like you figured out the referenced problem for yourself already
That illustration is bonkers
this guy, use his stuff a lot
Arguments against misinformation aren't arguments against the subject of the misinformation, they're just more misinformation.
????? I’d ask you what this even means but the most recent posts in your history equivocate painstakingly decompiling N64 games with utilizing AI slop generators because… you think Nintendo doesn’t get paid in both cases??? so you seem very at home posting fucking nonsense
i use it to write simple boilerplate for myself, and it works most of the time. does it count?
as a shitty thing you do? yeh
AI isn't bad when supervised by a human who knows what they're doing. It's good to speed up programmers if used properly. But business execs don't see that.
Even when I supervise it, I always have to step in to clean up its mess, tell it off because it randomly renames my variables and functions because it thinks it knows better and oversteps. Needs to be put in its place like a misbehaving dog, lol
We submit copilot assisted code all the time. Like every day. I'm not even sure how you'd identify ours. Looks almost exactly the same. Just less work.
copilot assisted code
The article isn't really about autocompleted code. Nobody's coming at you for telling the slop machine to convert a DTO to an HTML form using reactjs; it's more about prominent CEO claims about their codebases being purely AI-generated at rates up to 30%, and how swengs might be obsolete by next Tuesday after dinner.
Don't worry, if you apply yourself really hard one day you might become an actual engineer. Keep trying.
@IsThisAnAI @dgerard I spray shit at the wall all the time. Like every day
The people who own the walls are vexed
I treat AI as a new intern that doesn't know how to code well. You need to code review everything, but it's good for fast generation. Just don't trust more than a couple of lines at a time.
I treat AI as a new intern that doesn’t know how to code well
This statement makes absolutely zero sense to me. The purpose of having a new intern and reviewing their code is for them to learn and become a valuable member of the team, right? Like we don't give them coding tasks just for shits and giggles to correct later. You can't turn an AI into a senior dev by mentoring it, however the fuck you'd imagine that process?
You’ve fallen for one of the classic blunders: assuming that OP thinks that humans can grow and develop with nurturing
You can't turn an AI into a senior dev by mentoring it, however the fuck you'd imagine that process?
Never said any of this.
You can tell AI commands like "this is fine, but X is flawed. Use this page to read how the spec works." And it'll respond with the corrections. Or you can say "this would leak memory here". And it'll note it and make corrections. After about 4 to 5 checks you'll actually have usable code.
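Mechanically, that correct-and-resubmit loop looks something like the sketch below, assuming the official openai Python client; the feedback strings stand in for a human reviewer's comments, and the model name is a placeholder:

```python
# Hedged sketch of the review loop described above. Each round: get code,
# review it yourself, feed the correction back as a new user message.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "user",
             "content": "Write a function that parses RFC 3339 timestamps."}]

for feedback in [
    "This is fine, but the timezone handling is flawed. Re-read the spec.",
    "This drops fractional seconds. Fix that.",
]:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    messages.append({"role": "assistant",
                     "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": feedback})  # the human review step

# One last call for the (hopefully) usable version, which you still have
# to read and verify yourself.
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```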
you sound like a fucking awful teammate. it's not your job to nitpick and bikeshed everything they do, it's your job to help them grow
"you need to code review everything" motherfucker if you're saying this in context only of your juniors then you have a massive organisational problem
it's not your job to nitpick and bikeshed everything they do
Wow. Talk about projection. I never said any of that, but thanks for letting everyone know how you treat other people.
The point is AI can generate a good amount of code, but you can't trust it. It always needs to be reviewed. It makes a lot of mistakes.
So how do you tell apart AI contributions to open source from human ones?
for anyone that finds this thread in the future: "check if vga@sopuli.xiz contributed to this codebase" is an easy hack for this test
It's usually easy, just check if the code is nonsense
To get a bit meta for a minute, you don't really need to.
The first time a substantial contribution to a serious issue in an important FOSS project is made by an LLM with no conditionals, the pr people of the company that trained it are going to make absolutely sure everyone and their fairy godmother knows about it.
Until then it's probably ok to treat claims that chatbots can handle a significant bulk of non-boilerplate coding tasks in enterprise projects by themselves the same as claims of haunted houses; you don't really need to debunk every separate witness testimony, it's self-evident that a world where there is an afterlife that also freely intertwines with daily reality would be notably and extensively different from the one we are currently living in.
if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.