Good thing by then we'll have an oracle LLM. You may only use it for writing software. But we'll definitely charge you for answering questions about life, the universe, and everything.
That'll be all your profit this year minus the C-level bonuses please.
They will conclude that they did not actually save money by replacing human developers with LLMs.
The next CTO might realize that. If there hasn't been a change in upper-level management, they'll just double down and blame the few remaining human developers for the mess.
A fellow had just been hired as the new CEO of a large high tech corporation. The CEO who was stepping down met with him privately and presented him with three numbered envelopes. "Open these if you run up against a problem you don't think you can solve," he said.
Well, things went along pretty smoothly, but six months later, sales took a downturn and he was really catching a lot of heat. About at his wit's end, he remembered the envelopes. He went to his drawer and took out the first envelope. The message read, "Blame your predecessor."
The new CEO called a press conference and tactfully laid the blame at the feet of the previous CEO. Satisfied with his comments, the press -- and Wall Street -- responded positively, sales began to pick up, and the problem was soon behind him.
About a year later, the company was again experiencing a slight dip in sales, combined with serious product problems. Having learned from his previous experience, the CEO quickly opened the second envelope. The message read, "Reorganize." This he did, and the company quickly rebounded.
After several consecutive profitable quarters, the company once again fell on difficult times. The CEO went to his office, closed the door and opened the third envelope. The message read, "Prepare three envelopes."
@TootSweet@ajsadauskas They'll just completely rewrite it from scratch using a newer LLM, and that will be considered normal. In those 5 years, the percentage of developers who remember the idea of code having longevity will be tiny.
Outsourcing is such a mixed bag. I have 2 projects outsourced to a company in India: one is magnificent and well documented, and the other looks like a crack fiend wrote it. Both work, but only one is sustainable.
I don't disagree, but I've heard this before. Assembly devs complaining about compiled languages. C/C++ devs complaining about every newer language. Traditional devs complaining about web developers. Backend web developers complaining about blog/CMS tools. Nearly everyone complaining about Electron.
And honestly I think those folks had a point. The old stuff, written when the tools were simple and memory was scarce, was almost a work of art. The quality of software development (especially with regard to optimization) has been going downhill for decades. Whatever the LLMs do will just be part of this trend.
The use of LLMs, though, is more similar to outsourcing than it is to a new technology. No one is talking about changing how we program; we're talking about changing who does the programming.
While outsourcing has had its ups and downs, I think most companies have found that skilled technical people can't really be outsourced easily; they cost money everywhere. I suspect we'll see a similar thing here with LLMs, because the core competency that makes programmers/engineers expensive is knowing what to do, not how to do it.
Well, once you understand LLMs, you might not be so sure. They are not a magic machine generating code out of thin air; the output is based on prior examples, and not much inference happens in the process.
Yep. This is the old-school way of thinking that leads to things being shitty and not improving. "Why change if it's not broke?" Cue Uber, Google, Netflix, and every tech company that replaced the old guard.
Which have all descended, or are in the process of descending, into suckitude because of business issues rather than technical ones. And trying to replace programmers with LLMs is fundamentally a business issue.
@veronica@ajsadauskas@technology The hype around AI in software engineering seems to be that it is "proven" that devs produce code quicker. It is going to be interesting to see whether the corporate world values code quality over development velocity. There seems to be a pervasive belief that "move fast and break things" is how the big guys do software engineering. A few points to note:
This idiom only applies when you fail fast, realize it, and address the problem that has been introduced.
"Break things" does not mean enshittify, i.e., create tech debt by virtue of poor code.
It really only applies if you have enough development resources to do the rework; that is to say, if you can afford to get it wrong often. #AI #copilot
I haven't seen any talk of wholesale replacement of developers with LLMs in my organisation. What has happened is that these tools have been made extensively available to developers. I think right now they are basically being assessed in terms of how much they help developer productivity. Not sure about other places though, I agree with the idea that it's not really feasible to just straight up replace devs with an LLM.
@BrianTheeBiscuiteer McDonald's actually puts a burger on the bun. Not the best, but adequate for a quick bite. General LLMs put bullshit into the ether.
The "superior tech" writes garbage code that a 15-year-old could produce, after years of data hawking and information stealing. Newer models in 5 years may reach a university junior's level of coding.
AI bros are on more skooma than Crypto bros
I use AI to write mundane code, and I always end up having to rewrite almost everything anyway.
@themurphy In this case, the new tool requires no real competence to use. This is in fact one of the main reasons the quality of the work it produces is currently shit.
Yeah, the amount of good AI can do for the world is staggering; even just giving a speed boost and quality improvement to open-source devs will unlock a lot of new potential.
The problem is that people in a certain age bracket often fear change because they feel they've put effort into learning how things work, and if things change, all that effort will be worthless.
It doesn't really matter, though. Gangs of idiots literally smashed the prototype looms when they were demonstrated; despite the cost of cloth being one of the major factors in poverty at the time, a handful of people took it on themselves to fight to maintain the status quo. Of course, we know how it turned out, the same way it always does...
Areas that resisted technological and social growth stagnated and got displaced by those which welcomed it.
"gangs of idiots" smashed the prototype looms because they knew they would put them out of work. And they were right, even though the machines were probably a net benefit for society in the end.
It will be the same with AI. If it ends up actually benefiting humanity as a whole, it will 100% be a side effect of a few assholes getting insanely rich, or from massive governmental regulation.