High-skilled jobs will just start using AI as a tool to automate routine tasks (or have already started, in some cases). The most efficient use of the AIs we have now is to pair them with a human, anyway.
The worry is about how much damage is likely to be done by people in decision-making positions who think they can save money by cutting more paid positions.
The problem with humans reviewing AI output is that humans are pretty shit at QA. Our brains are literally built to ignore small mistakes. Digging through the output of an AI that's right 95% of the time is nightmare fuel for human brains. If your task needs more accuracy, it's probably better to just have the human do it all rather than try to review the AI's work.
Worth noting that a lot of the Microsoft Recall project revolves around continuously capturing human interactions on the computer in real time, with the hope of training a GPT-5 model that can do basic office tasks automagically.
Will it work? To some degree, maybe. It'll definitely spit out some convincing-looking gibberish.
But the promise is to increasingly automate away office and professional labor.
Do you mean AI in general, just generative models, or LLMs in particular? I'm pretty thoroughly convinced that AI is a general solution to automation, while generative models are only a partial, if very powerful, one.
I think the larger issue is actually that displacement from the workforce causes hardship to those who have been displaced. If that were not the case, most people either wouldn't care or would actively celebrate their jobs being lost to automation.
I was really confident. Then I lost a job to AI. Then they hired me back a few months later after realizing that replacing half the support team with an AI was not working out.
This was exactly my experience. Freaked myself out last year and decided the best thing was to dive headfirst into it to figure out how it worked and what its capabilities are.
Which is a lot - it can do a lot, and it's impressive tech. I coded several projects and built my own models. But it's far from perfect. There are so, so many pitfalls that startups and tech evangelists just happily ignore, and most of them can't be solved easily, if at all. It's not intelligent; it's a very advanced and unique prediction machine. The funny thing to me is that it's still basically machine learning, the same tech we've had since the mid-2000s; we just have fancier hardware now. Big tech wants everyone to believe it's brand new... and it is, kind of. But not really either.
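To make the "prediction machine" point concrete, here's a minimal sketch (assuming you have Hugging Face transformers and torch installed; GPT-2 is just a small model that's easy to run locally). All the model ultimately produces is a probability distribution over the next token:

```python
# Minimal demo: an LLM is "just" a next-token prediction machine.
# Assumes: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's entire output is a distribution over what token comes next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Everything impressive the chat products do is built on sampling from that distribution over and over.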
> The funny thing to me is that it's still basically machine learning, the same tech we've had since the mid-2000s; we just have fancier hardware now.
So much of the modern Microsoft/ChatGPT project is effectively brute-forcing intelligence from accumulated raw data. That's why they need phenomenal amounts of electricity, processing power, and physical space to make the project work.
There are other - arguably better, but definitely more sophisticated - approaches to genetic algorithms and machine learning. If any of them prove out, they could render a great deal of Microsoft's original investment worthless by doing what Microsoft is doing far faster and more efficiently than Sam Altman's "give me all the electricity and money to hit the AI problem with a very big hammer" approach.
It takes a lot of energy to train the models in the first place, but very little to run them once you have them. I run a Mixture-of-Agents (MoA) setup on my laptop, and it outperforms anything OpenAI has released on pretty much every benchmark, maybe even all of them. I run it quite a bit and have noticed no change in my electricity bill. I imagine inference on GPT-4 must already be fairly efficient; if not, they should just switch to piping people open-sourced LLMs run through MoA.
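For anyone curious what that looks like in practice, here's a rough sketch of the MoA idea: a few local "proposer" models each draft an answer, then an aggregator model synthesizes the drafts into a final response. This assumes a local Ollama server on the default port; the model names are placeholders for whatever you've pulled:

```python
# Hedged sketch of the Mixture-of-Agents pattern over local models.
# Assumes a running Ollama server (http://localhost:11434); model names
# are placeholders, not a recommendation.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
PROPOSERS = ["llama3", "mistral", "gemma"]  # placeholder local models
AGGREGATOR = "llama3"

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def mixture_of_agents(prompt: str) -> str:
    # Layer 1: each proposer drafts an answer independently.
    drafts = [ask(m, prompt) for m in PROPOSERS]
    # Layer 2: the aggregator sees all drafts and writes the final answer.
    numbered = "\n\n".join(f"Draft {i + 1}:\n{d}" for i, d in enumerate(drafts))
    synth_prompt = (
        f"Question: {prompt}\n\nHere are several draft answers:\n\n{numbered}"
        "\n\nCombine their strengths into one accurate, concise answer."
    )
    return ask(AGGREGATOR, synth_prompt)

print(mixture_of_agents("Explain what a transformer's attention layer does."))
```

The whole trick is that the aggregation step costs one extra inference pass, which is cheap compared to training anything.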
It's not exactly the same tech, but it's very similar. The Transformer architecture made a big difference: before that we only had LSTMs, which do sequence modelling in a way that lets tokens far back in the sequence influence the result much less.
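A toy illustration of the difference (nothing here is trained; the random projections are just to show the mechanics): in self-attention, every position compares against every other position directly, while an LSTM only sees earlier tokens through its recurrent state:

```python
# Why attention helps long-range dependencies: a token 1,000 steps back is
# one matmul away, not 1,000 recurrent updates away.
import torch
import torch.nn.functional as F

seq_len, d = 8, 16
x = torch.randn(seq_len, d)  # toy token embeddings

# Self-attention with untrained projections, purely for illustration.
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / d ** 0.5          # (seq_len, seq_len): every pair compared directly
weights = F.softmax(scores, dim=-1)  # attention weights over ALL positions
out_attn = weights @ V               # each output mixes the whole sequence in one step

# An LSTM processes strictly step by step, so influence from far-back tokens
# has to survive many gated state updates on the way to the output.
lstm = torch.nn.LSTM(input_size=d, hidden_size=d)
out_lstm, _ = lstm(x.unsqueeze(1))   # input shape: (seq_len, batch=1, d)
```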
Have you coded with Claude Sonnet 3.5 yet? It is mind-blowingly better than Opus 3, which was already noticeably better than anything OpenAI has put out. GPT-4 was nice to code with, but this is on a whole other level. I can't imagine what Opus 3.5 will be able to do.
The issue with Sonnet 3.5, in my limited testing, is that even with explicit, specific, and direct prompting, it can't perform anywhere near human ability and will often make very stupid mistakes. I developed a program that essentially lets an AI write, rewrite, and test a game, but Sonnet will consistently take lazy routes, use incorrect syntax, and repeatedly call the same function over and over for no reason. If you can program the game yourself, it's a quick way to prototype, but unless you know how to properly format JSON and fix strange artefacts, it's just not there yet.
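For anyone building something similar: the usual workaround is to treat the model's output as untrusted, extract and validate the JSON, and feed parse errors back as a retry prompt. A rough sketch of that loop (call_model is a placeholder for whatever function talks to your model):

```python
# Hedged sketch of the JSON cleanup loop the comment alludes to: models
# often wrap JSON in markdown fences or add stray chatter, so extract the
# payload, validate it, and re-prompt on failure.
import json
import re

def extract_json(raw: str) -> dict:
    """Pull the first JSON object out of a model response, tolerating
    code fences and leading/trailing text."""
    fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    candidate = fenced.group(1) if fenced else raw
    # Fall back to the outermost {...} span.
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in response")
    return json.loads(candidate[start:end + 1])

def generate_with_retries(call_model, prompt: str, max_tries: int = 3) -> dict:
    """call_model(prompt) -> str is whatever sends the prompt to your model."""
    for _ in range(max_tries):
        raw = call_model(prompt)
        try:
            return extract_json(raw)
        except (ValueError, json.JSONDecodeError) as err:
            # Feed the error back so the model can fix its own formatting.
            prompt = (
                f"{prompt}\n\nYour last reply was invalid JSON ({err}). "
                "Reply with valid JSON only."
            )
    raise RuntimeError("model never produced valid JSON")
```

It doesn't fix the lazy-route or repeated-function-call problems, but it does catch most of the formatting artefacts before they crash your pipeline.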
There's a pretty good argument for that in a lot of other cases too; however, those at the top don't want it, or at least not in a sensible way - see the whole "UBI through stock exchange" and "Universal Basic Compute" fiascos.