I've been concerned about AI as an x-risk for years, since before big tech had a word to say on the matter. It can both be a real threat and be something large companies are trying to take advantage of.
Those concerns mostly apply to artificial general intelligence, or "AGI". What's being developed now is another can of worms entirely: a bunch of generative models. They're far from intelligent; the concerns associated with them are 1) energy use and 2) human misuse, not that they're going to go rogue.
It's a theory. A plausible but unlikely one. Just like it was unlikely that the first atomic bomb would set the atmosphere on fire - but possible. I think events with consequences of this magnitude deserve some consideration. I doubt humans are anywhere near the far end of the intelligence spectrum, and only a human is stupid enough to think that something that is would pose no danger to us.
I believe true AI might in fact be an extinction risk. Not likely, but not impossible. It would have to end up self-improving and wildly outclassing us; then it could be a threat.
Of course the fancy autocomplete systems we have now are in no way true AI.
I'm not so sure about that. One of my friends has really high-end hardware and is experimenting with a LlaMA3 120b model. It's not "right" much more often than the 70b models, but it will sometimes notice that a wrong answer is due to an error in its lower-level reasoning, and it will recognise there's a flaw somewhere even as it repeatedly fails to generate the correct answer, even lamenting that it keeps getting it wrong.
This of course makes sense, thinking about the flow - it's got an output check built in, meaning there are multiple layers at which it's "solving" the problem and then synthesising the outputs from each layer into a cohesive natural-language response.
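To make the flow concrete, here's a toy sketch of the kind of "generate, then check, then retry" loop I'm describing. This is not any real model's internals or API; all of the names (`generate`, `looks_consistent`, `answer_with_self_check`) are hypothetical stand-ins.

```python
# Toy sketch only: a draft answer gets re-checked and retried if flagged.
# Nothing here reflects how any actual model is implemented.

def generate(prompt: str) -> str:
    # Stand-in for a single pass that produces a draft answer.
    return "draft answer for: " + prompt

def looks_consistent(prompt: str, draft: str) -> bool:
    # Stand-in for a second pass that critiques the draft.
    return False  # pretend the check keeps flagging a flaw

def answer_with_self_check(prompt: str, max_tries: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_tries - 1):
        if looks_consistent(prompt, draft):
            return draft
        # The check can notice something is off while the retry still fails,
        # which matches the "it knows it's wrong but keeps getting it wrong" behaviour.
        draft = generate(prompt + "\nThe previous answer had a flaw; try again:\n" + draft)
    return draft

print(answer_with_self_check("draw an 8"))
```

The point of the sketch is just that the checking layer and the generating layer are separate, so one can "see" a flaw the other can't fix.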
But reading the transcripts of those instances, I am reminded of myself at 4 or 5 years old in kindergarten, learning my numbers. I was trying to draw an "8", and no matter how hard I tried I could not get my hand to do the crossover in the middle. I had a page full of "0". I remember this vividly because I was so angry and upset with myself; I could see my output was wrong and I couldn't understand why I couldn't get it right. Eventually, my teacher had to guide my hand, and then, knowing what it "felt" like to draw an 8, I could reproduce it by recreating the sensation of that mechanical movement.
So, it seems to me those "sparks" of AGI are getting just a little brighter.
In your case that was a motor control issue, not a flaw in reasoning. In the LLM case it's a pure implementation of a Chinese Room and the "book and pencils" (weights) semi-randomly generate text that causes humans to experience textual pareidolia more often than not.
It can be useful - that book is very large, and it contains a lot of residue of valid information and patterns, but the way it works is not how intelligence works (how intelligence does work is still an open question, of course, but "not that way" is quite clear).
This is not to say that true AI is impossible - I believe it is possible, but it will have to be implemented differently. At the very least, it will need the ability to self-modify in real time (learning).
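Purely to illustrate the distinction I mean by "self-modify in real time": a deployed model today runs with frozen weights, whereas an online learner adjusts them after each new example. This is a toy one-parameter example under my own naming, not anything resembling how an LLM works.

```python
# Toy contrast: frozen weights vs. online (real-time) learning.

weights = [0.0]  # a "frozen" model would never change this at inference time

def predict(x: float) -> float:
    return weights[0] * x

def online_update(x: float, target: float, lr: float = 0.1) -> None:
    # Self-modification: nudge the weight using a single new example
    # (gradient step on squared error).
    error = predict(x) - target
    weights[0] -= lr * error * x

# Frozen use: predict(2.0) stays wrong forever.
# Online use: repeat online_update(2.0, 4.0) and predict(2.0) drifts toward 4.0.
for _ in range(20):
    online_update(2.0, 4.0)
print(predict(2.0))
```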
Currently, any AI models that have the ability to make complex decisions are trained to recreate patterns from their training data. In their current state, you'd have to be exceptionally stupid to build an AI that wants to kill you and give it the ability to do so at the same time. Of course, who knows what's going on at all of these private corporations and military contractors, but I think regular war, fascism, and nuclear weapons are bigger threats by orders of magnitude.