Extremely narrow field of expertise ✔️
Misplaced confidence in its abilities outside its area of expertise ✔️
A mind filled with millions of things that have been read, and near zero from interactions with real people ✔️
An obsession over how many words can get published over the quality and correctness of those words ✔️
A lack of social skills ✔️
A complete lack of familiarity with how things work in the real world ✔️
It would have to actually have intelligence, period, for it to have PhD-level intelligence. These things are not intelligent. They just throw everything at the wall and see what sticks.
All aboard the hype train! We need to stop using the term "AI" for advanced auto complete. There is not even a shred of intelligence in this. I know many of the people here already know this, but how do we get this message to journalists?! The amount of hype being repeated by respectable journalists is sickening.
No it won't. At some point, some AI will, but that point is still far away.
I'm sure it'll know how to string words and sentences together real nice, even to the point where it makes sense. It will still not have a clue what it's talking about, and it'll still not understand basic concepts, since "understanding" requires a whole lot more than just an advanced ability to push words together.
What a bunch of bullshit. I've asked ChatGPT recently to do a morphological analysis of some Native American language's very simple sentences, and it gave absolute nonsense as an answer.
And let's be clear: it was an elementary linguistics task, something I learned to do on my own just by taking a free online course.
So copying everyone else’s work and rehashing it as your own is what makes a PhD level intelligence? (Sarcastic comments about post-grad work forthcoming, I’m sure)
Unless AI is able to come up with original, testable, verifiable, repeatable, previously unknown associations, facts, theories, etc. of sufficient complexity, it's not PhD level… using big words doesn't count either.
I like how they have no road map on how to achieve artificial general intelligence (apart from "let's train LLMs with a gazillion parameters and the equivalent of the yearly energy consumed by ten large countries"), yet they pretend GPT-4 is only two steps away from it.
Is it weird that I still want to go for my PhD despite all the feedback about the process? I don’t think I’ve ever met a PhD or candidate that’s enthusiastically said “do it!”
The fact that I have a PhD, even though I knew soon after I began that I wouldn't use it, thus losing years of my life, is proof that I'm dumb as a rock. Fitting for ChatGPT.
Oh... that's the same person (in the image at least) who said "Yeah AI is going to take those creative jobs, but those jobs maybe shouldn't have existed in the first place".
I can't imagine looking at the world and thinking we need more industry. Also, I know a lot of PhDs. Knowing a lot of things about a particular subject in no way correlates with intelligence.
If AI were that capable, then using human workers would eventually become cost prohibitive. If we're still stuck having to work to live under a capitalist system by then, there are gonna be serious problems. A post-labor economy doesn't need to charge for even a modestly comfortable standard of living, and the overwhelming majority of people will go looking for things to do no matter how many politicians swear otherwise.
I would be completely fine with this if they said, "We will train it on a very large database of articles, and finding relevant scientific information will be easier than before." But no, they have to hype it up with nonsense expectations so they can generate short-term profits for their fucking shareholders. This will come at the cost of either the next AI winter or the senseless allocation of major resources to a model of AI that is not sustainable in the long run.