AI chatbots were tasked to run a tech company. They built software in under seven minutes — for less than $1.

The difficult part of software development has always been the continuing support. Did the chatbot set up a versioning system, a build system, a backup system, a ticketing system, unit tests, and help docs for users? Did it get conflicting requests from two different customers and intelligently resolve them? Was it given a vague problem description that forced it to get on a call with the customer to hunt down what the customer actually wanted before devising and implementing a solution?
This is the expensive part of software development. Hiring an outsourced, low-tier programmer for almost nothing has always been possible; making that low-tier programmer slightly cheaper doesn't change the game in any meaningful way.
Yeah, I'm already quite content if I know upfront that our customer's goal doesn't violate the laws of physics.
Obviously, there are also devs who code more run-of-the-mill stuff, like yet another business webpage, but even those are coded anew (not just copy-pasted), because customers have different and complex requirements. So even those are quite a bit more complex than designing just any Gomoku game.
Haha, this is so true and I don't even work in IT. For me there's bonus points if the customer's initial idea is solvable within Euclidean geometry.
Now I am curious what the most outlandish request or goal has been so far?
If you just let it do a full rewrite again and again, what protects against breaking changes in the API? Software doesn't exist in a vacuum; other businesses or people might be using a certain API and relying on it. A breaking change could be as simple as the same endpoint now being named slightly differently.
So if you start marking every API method as "please, no breaking changes for this one", at what point do you need a full software developer again to take care of the AI?
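One way teams guard against exactly this failure mode is a contract test that pins the public API surface. A minimal sketch, assuming hypothetical endpoint names (nothing here comes from the article; the paths are made up for illustration):

```python
# Hypothetical contract test: pin an API's public surface so that a full
# rewrite (by an AI or anyone else) fails loudly if it renames anything.

# The endpoints below are invented for this example.
EXPECTED_ENDPOINTS = {
    "/v1/users": {"GET", "POST"},
    "/v1/users/{id}": {"GET", "DELETE"},
}

def check_contract(actual_endpoints: dict) -> list:
    """Return human-readable violations of the pinned contract."""
    violations = []
    for path, methods in EXPECTED_ENDPOINTS.items():
        if path not in actual_endpoints:
            violations.append(f"missing endpoint: {path}")
        elif not methods <= actual_endpoints[path]:
            lost = sorted(methods - actual_endpoints[path])
            violations.append(f"{path} lost methods: {lost}")
    return violations

# A rewrite that quietly renamed /v1/users/{id} to /v1/user/{id} is caught:
rewritten = {"/v1/users": {"GET", "POST"}, "/v1/user/{id}": {"GET", "DELETE"}}
print(check_contract(rewritten))  # → ["missing endpoint: /v1/users/{id}"]
```

The point being: the "please, no breaking changes" annotations the comment jokes about are, in practice, tests someone still has to write and maintain.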
I've also never seen AI modify an existing code base; it's always new code getting spit out (80% correct or so, and it likes to hallucinate functions that don't even exist). Sure, you can use it for run-of-the-mill templates, but even a developer who told me on here that they rely heavily on ChatGPT said they need to verify all the code it spits out, because sometimes it's garbage.
In the end it's a language model that uses probability to pick the next word. It's fantastic for what it does, but it has no consistent internal logic, and the way it works, it never will.
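The "probability on the next word" point can be illustrated with a toy bigram model. This is a drastic simplification of a real LLM (invented corpus, no neural network), just to show the core mechanism of sampling a likely next token:

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# made-up corpus, then sample the next word from that distribution.
corpus = "the model picks the next word the model has no logic".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def next_word(word: str) -> str:
    # Sample uniformly among observed continuations; duplicates in the
    # list act as probability weights.
    candidates = following.get(word)
    return random.choice(candidates) if candidates else "<end>"

random.seed(0)
out = ["the"]
for _ in range(5):
    out.append(next_word(out[-1]))
print(" ".join(out))
```

Every emitted word is statistically plausible given the one before it, yet nothing checks whether the whole sentence is coherent, which is the commenter's point about missing internal logic.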
Which is why plenty of companies merely pay lip service to it, or don't do it at all and outsource it to "communities".
Absolutely true, but many are heading in the direction of implementing those solutions with AIs.