It's "artificial intelligence" - not "artificially intelligent". The biggest problem is people are equating one to the other.
Generally speaking, intelligence refers to the ability to think, reason, apply logic, etc., whereas describing someone as intelligent generally refers to their "smartness" - how high their intelligence or mental capacity is.
They are not the same thing. Expecting AI to be intelligent is just plain wrong, especially generative AI.
Can confirm. My company issued a press release out of the blue last month explaining how we were deploying AI into our products. None of us knew anything about it, and we're assuming Marketing has just relabelled things like A/B testing, chatbots, and some ChatGPT plug-ins as proof that our company has somehow built an AI R&D lab.
PS. It's also amusing to hear the execs talk about it on their earnings calls. The investors are just eating it up without question.
It's very useful, but you've got to point it in the right direction and nudge it around to get what you want. If you're using it right, it does the bullshit work so you can focus on the content.
@zoe I think that's the most important topic. I mean, it would be one thing if this were an experimental tool only being used to explore programming options, but there are guys out there pushing for this to replace all sorts of writing and communication. They're selling Lifelike Lie Machines as writers. That is going to hurt people.
Guys, you say? Don't worry, it's just major publications like National Geographic firing off all their writers, tech companies downsizing in the hopes that ChatGPT will code for free, a plague of nonsensical AI written books flooding the market under legitimate authors' names, and all of Hollywood hoping their writers will work for pennies out of desperation for food and shelter.
It doesn't matter that ChatGPT constantly gives wrong answers and has about as much personality as a bran muffin. With the dawn of AI, we humans don't need humans anymore.