The difficult part of software development has always been the ongoing support. Did the chatbot set up a versioning system, a build system, a backup system, a ticketing system, unit tests, and help docs for users? Did it get conflicting requests from two different customers and intelligently resolve them? Was it given a vague problem description that forced it to get on a call with the customer and hunt down what the customer actually wanted before devising and implementing a solution?
This is the expensive part of software development. It has always been possible to hire an outsourced, low-tier programmer for almost nothing; making the low-tier programmer slightly cheaper doesn't change the game in any meaningful way.
"I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well"
For how long will we be forced to endure different versions of the same article?
The study said 86.66% of the generated software systems were "executed flawlessly."
As I said yesterday, in a post celebrating how ChatGPT can answer medical questions with less than 80% accuracy: that is trash. A company with absolute shit code still has virtually all of it "execute flawlessly." Whether or not code executes is not the bar by which we judge it.
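To make that concrete, here's a trivial, made-up example of code that "executes flawlessly" by that standard:

```python
def average(numbers):
    # Runs without error on any non-empty list -- "executes flawlessly" --
    # but the result is wrong: it divides by a hard-coded 2, not len(numbers).
    return sum(numbers) / 2

print(average([1, 2, 3, 4]))  # prints 5.0; the actual mean is 2.5
```

It runs, it prints a number, and it's garbage.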
Even if it were to hit 100%, which it does not, there's so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders, navigating conflicting desires from different stakeholders, getting to know the human beings who need a problem solved, and so on.
LLMs are not capable of this kind of meaningful collaboration, despite all this hype.
A test that doesn’t include a real commercial trial or A/B test with real human customers means nothing. Put their game in the App Store and tell us how it performs. We don’t care that it shat out code that compiled successfully. Did it produce something real and usable or just gibberish that passed 86% of its own internal unit tests, which were also gibberish?
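For anyone who hasn't seen it in the wild, a hypothetical sketch of what a "gibberish test that passes" looks like:

```python
import unittest

def high_score(scores):
    return scores[0]  # buggy: returns the first score, not the highest

class TestHighScore(unittest.TestCase):
    def test_high_score(self):
        # A gibberish test: it asserts the function agrees with itself,
        # so it passes no matter how broken the implementation is.
        self.assertEqual(high_score([3, 9, 1]), high_score([3, 9, 1]))

if __name__ == "__main__":
    unittest.main()
```

Green checkmark, 100% pass rate, zero information.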
Every company I've been at follows this cycle: offshore to Cognizant for pennies, and the C-suite gets a bonus for saving money. About two years later, fire Cognizant because they suck and your code is a disaster, bring the work back onshore, and get a bonus for solving a huge problem. Two years after that, offshore to Cognizant again and get a bonus for saving money. Repeat forever.
This will follow the same rhythm, just with different actors: the cheap labor is always there, and sometimes senior devs come in to replace the chatbots because the bots are failing in ways offshore can't make up for, whether fundamental design problems that should never have been used as a roadmap or incompetently generated code that offshore assumes is correct because it compiles. It will all get built up and built around until it's both a broken design AND deeply embedded in your stack. The new role of the senior dev will be contract work slicing these Gordian knots.
At the designing stage, the CEO asked the CTO to "propose a concrete programming language" that would "satisfy the new user's demand," to which the CTO responded with Python. In turn, the CEO said, "Great!" and explained that the programming language's "simplicity and readability make it a popular choice for beginners and experienced developers alike."
I find it extremely funny that project managers are the ones chatbots have learned to imitate perfectly; they were already doing the robot's work: saying impressive-sounding things that are actually borderline gibberish.
What a load of bullshit. If you have a group of researchers provide "minimal human input" to a bunch of LLMs to produce a laughable program like tic-tac-toe, then please just STFU or at least don't tell us it cost $1. This doesn't even have the efficiency of a Google search. This AI hype needs to die quick
This research seems to be more focused on whether the bots can interoperate in different roles to coordinate on a task than on creating the actual software. The idea is to reduce "hallucinations" by giving each bot a more specific task.
Similar to hallucinations encountered when using LLMs for natural language querying, directly generating entire software systems using LLMs can result in severe code hallucinations, such as incomplete implementation, missing dependencies, and undiscovered bugs. These hallucinations may stem from the lack of specificity in the task and the absence of cross-examination in decision-making. To address these limitations, as Figure 1 shows, we establish a virtual chat-powered software technology company – CHATDEV, which comprises of recruited agents from diverse social identities, such as chief officers, professional programmers, test engineers, and art designers. When presented with a task, the diverse agents at CHATDEV collaborate to develop a required software, including an executable system, environmental guidelines, and user manuals. This paradigm revolves around leveraging large language models as the core thinking component, enabling the agents to simulate the entire software development process, circumventing the need for additional model training and mitigating undesirable code hallucinations to some extent.
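In other words, the mechanism is role-scoped prompting over a shared transcript. A minimal sketch of the idea (not ChatDev's actual code; `ask_llm` is a hypothetical stand-in for whatever chat-completion API you'd plug in):

```python
def ask_llm(system_prompt: str, transcript: list[str]) -> str:
    # Hypothetical stub: swap in a real chat-completion call here.
    return f"({system_prompt.split('.')[0]} responds to {len(transcript)} prior messages)"

ROLES = {
    "CEO": "You are the CEO. Turn the user's request into a product decision.",
    "CTO": "You are the CTO. Pick the technical approach and language.",
    "Programmer": "You are a programmer. Write code for the agreed design.",
    "Tester": "You are a test engineer. Review the code and report defects.",
}

def develop(task: str) -> list[str]:
    # Each narrowly scoped role reads the shared transcript and appends its
    # contribution; the paper's bet is that this specificity, plus roles
    # cross-examining each other's output, reduces code hallucinations.
    transcript = [f"User request: {task}"]
    for role, prompt in ROLES.items():
        transcript.append(f"{role}: {ask_llm(prompt, transcript)}")
    return transcript

print("\n".join(develop("Build a tic-tac-toe game")))
```

Whether that narrowing actually prevents hallucinations, rather than just distributing them across roles, is exactly what the thread above is arguing about.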
AI chatbots like OpenAI's ChatGPT can operate a software company in a quick, cost-effective manner with minimal human intervention, a new study has found.
Based on the waterfall model — a sequential approach to creating software — the company was broken down into four different stages, in chronological order: designing, coding, testing, and documenting.
After assigning ChatDev 70 different tasks, the study found that the AI-powered company was able to complete the full software development process "in under seven minutes at a cost of less than one dollar," on average — all while identifying and troubleshooting "potential vulnerabilities" through its "memory" and "self-reflection" capabilities.
"Our experimental results demonstrate the efficiency and cost-effectiveness of the automated software development process driven by CHATDEV," the researchers wrote in the paper.
The study's findings highlight one of the many ways powerful generative AI technologies like ChatGPT can perform specific job functions.
Nevertheless, the study isn't perfect: Researchers identified limitations, such as errors and biases in the language models, that could cause issues in the creation of software.
Future software is going to be written by AI, no matter how much you would like to avoid that.
My speculation is that we will see AI operating systems at some point, given how effective future AI will be at hacking and otherwise subverting frameworks, services, libraries, and even protocols.
So mutating protocols will become a thing: AI will change and negotiate protocols on the fly as a war rages between defensive AI and offensive AI. There will be a shared codebase, but a clear distinction in the objective at hand.
That's why we need more open-source AI solutions and fewer proprietary ones, because whoever controls the AI will control the digital world - be it you or some fat cat sitting on a Smaug hill of money.
EDIT: gawdDAMN there's a lot of naysayers. I'm not talking Stable Diffusion here, guys. I'm talking about automated attacks and self-developing software, when computing and computer networking reach a point of AI supremacy. This isn't new speculation. It's coming fo dat ass, in maybe a generation or two... or more...