Future software is going to be written by AI, no matter how much you would like to avoid that.
My speculation is that we will see AI operating systems at some point, due to the extreme effectiveness of future AI at hacking and otherwise subverting frameworks, services, libraries and even protocols.
So mutating protocols will become a thing, whereby AI will change and negotiate protocols on the fly as a war rages between defensive AI and offensive AI. There will be a shared codebase, but a clear distinction in the objective at hand.
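To make that concrete, here's a toy sketch (entirely my own, with hypothetical message and field names) of what on-the-fly protocol negotiation between two endpoints could look like:

    import random

    # Toy illustration only: the two endpoints agree on a fresh wire format
    # each session, so yesterday's traffic analysis is useless today.
    FRAMINGS = ["length-prefixed", "delimited", "chunked"]
    CIPHERS = ["aes-256-gcm", "chacha20-poly1305"]

    def propose_protocol():
        """Defender proposes a randomized protocol variant for this session."""
        return {
            "framing": random.choice(FRAMINGS),
            "cipher": random.choice(CIPHERS),
            "field_order": random.sample(["id", "ts", "payload", "mac"], 4),
        }

    def negotiate(peer_caps, proposal):
        """Accept the proposal only if this endpoint supports every choice."""
        if proposal["cipher"] in peer_caps["ciphers"] and \
           proposal["framing"] in peer_caps["framings"]:
            return proposal  # both sides now speak this session's variant
        return None          # reject and wait for a new proposal

    caps = {"ciphers": set(CIPHERS), "framings": set(FRAMINGS)}
    print(negotiate(caps, propose_protocol()))

The point being: once negotiation itself is automated, the protocol stops being a fixed target.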
That's why we need more open source AI solutions and fewer proprietary ones, because whoever controls the AI will control the digital world, be it you or some fat cat sitting on a Smaug-sized hoard of money.
EDIT: gawdDAMN there's a lot of naysayers. I'm not talking Stable Diffusion here, guys. I'm talking about automated attacks and self-developing software, when computing and computer networking reach a point of AI supremacy. This isn't new speculation. It's coming fo dat ass, in maybe a generation or two... or more...
That all sounds pointless. Why would we want to use something built on top of a system that's constantly changing for no good reason?
Unless the accuracy can be guaranteed at 100%, this hypothetical will never make sense, because you will ultimately end up with a system that could fail at any time for any number of reasons. Predictive models cannot be used in place of consistent, human-verified, tested code.
For operating systems I can maybe see LLMs being used to script custom actions requested by users (with appropriate guard rails), but not much beyond that.
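A minimal sketch of what such a guard rail could look like (my own toy example, not any real OS API): the model proposes a command, and it only runs if it passes an allowlist:

    import shlex
    import subprocess

    # Hypothetical guard rail: whatever the model proposes, only commands on
    # this allowlist are ever executed, and never through a shell.
    ALLOWED_COMMANDS = {"ls", "df", "uptime", "date"}

    def run_llm_action(proposed_command: str) -> str:
        """Run a model-proposed action only if it passes the allowlist."""
        argv = shlex.split(proposed_command)
        if not argv or argv[0] not in ALLOWED_COMMANDS:
            return "refused: command is not on the allowlist"
        # No shell=True, so the model can't smuggle in pipes or substitutions.
        result = subprocess.run(argv, capture_output=True, text=True, timeout=5)
        return result.stdout

    print(run_llm_action("uptime"))    # runs
    print(run_llm_action("rm -rf /"))  # refused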
It's possible that we will have large software systems entirely written by machines in the future, but what they are written with will not in any way resemble any architecture that currently exists.
Funny you should say that. Currently AI utterly fails at the most trivial shell scripting tasks. I have a 0% success rate at getting it to write or debug a shell script that actually works. It just spits out nice-looking nonsense over and over.
I don't think so. Having a good architecture is far more important and is what makes projects actually maintainable. AI can speed up work, but humans need to tweak and review its output to make sure it fits the exact requirements.
This is very much like the people who said airplanes would never fly after watching the prototypes fail in the 1900s.
It's 100% guaranteed that computers will be able to write software much better and faster than humans. The only variable is how long it will take.
I think within a decade. I could be wrong, and it could be two decades, but I doubt it.
Think about it - these bots are already being used by humans to solve tasks every day. The only difference between now and the future is that today there's a slow human typing on a keyboard.
In the future, you will have bots talking to bots, millions of times per second, and models will learn in real time rather than being pre-trained.
Computers are already able to write software faster than humans; it's called compilation. Languages are only a way to describe a problem, and the computer automatically builds extremely efficient software to solve it. No language models involved, so no randomness, no biases, and no errors caused by an inability to follow elementary syllogisms.
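You can even watch a deterministic compiler do this. CPython's bytecode compiler folds constant expressions at compile time, with no statistics involved (exact opcodes vary by Python version):

    import dis

    # The compiler evaluates 2 * 3 * 4 once, at compile time: the bytecode
    # loads the constant 24 instead of emitting two multiply instructions.
    dis.dis(compile("2 * 3 * 4", "<example>", "eval"))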
Language models are not intelligent and they never will be. Yes, true AI is imho possible, but it won't be a statistical model trying to predict the next word in a phrase; that idea is ridiculous, just corporate marketing you keep taking the bait on.
Yep, true AI is not language models; this is just the beginning.
Compilation turns source code into binaries, and that's because humans want to write source code rather than machine code, again because we are not smart enough to quickly write machine code.
I expect computers to skip all these steps completely in the future and just generate programs immediately.
Programming languages are only a way to describe a problem. Even with AI, if you want it to build software for you, you have to describe your problem, and human languages are not that efficient at describing problems, soooo... AI would still require kind of a programming language, just a high-level one (see the sketch below).
Maybe you can avoid writing the logic, but as a software engineer, writing the logic is the easiest and least time-consuming task in serious software development.
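For example (my own toy illustration, not an existing spec language), compare the English request "get me the biggest recent orders" with a precise, machine-checkable description of the same task; the latter is effectively a high-level programming language:

    from dataclasses import dataclass

    # English leaves everything to guess: biggest by what? recent since when?
    # A precise spec of the same request pins every detail down:
    @dataclass
    class OrderQuery:
        sort_by: str     # "total_price", not "biggest"
        descending: bool
        since_days: int  # "recent" pinned to a number
        limit: int

    print(OrderQuery(sort_by="total_price", descending=True, since_days=30, limit=10))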
You will be able to talk to the computer, but more importantly, the computer will already know a lot of patterns for the best way to do something under the conditions you describe.
It will be like having only top programmers write code when you explain to them what you want, except much faster. There could also be brain implants to interact directly, but that, I think, is at least 30 years away.
Of course, if you look far enough into the future, the whole concept of "software" itself could become obsolete.
The main disagreements are about how close that future is (years, decades, etc.), and whether just expanding on current approaches to AI will get us there or we will need a completely different approach.