Not really. Just going to post a comment I post whenever this topic comes up, related to how our company utilizes it as software developers / engineers.
Sure, it makes mistakes, but even when it's wrong it still gives a good start, meaning less syntax to write by hand.
Particularly for boring stuff.
Example: My boss is a fan of useMemo in React and isn't bothered about the overhead, so for the repetitive stuff like sorting I just write a comment:
// Sort members by last name ascending
And then I press return a few times. Plus, with the integration into Visual Studio Professional it will learn from your other files, so if you have coding standards it's great for that.
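For illustration, here's a sketch of the kind of completion that comment might produce. The `Member` type and `sortByLastName` name are hypothetical, invented for this example; in the actual component you'd wrap the call in `useMemo` as the commenter describes.

```typescript
// Hypothetical shape of the data being sorted
type Member = { firstName: string; lastName: string };

// Sort members by last name ascending -- the boilerplate the assistant
// fills in from the comment. Copies the array so the input isn't mutated.
function sortByLastName(members: Member[]): Member[] {
  return [...members].sort((a, b) => a.lastName.localeCompare(b.lastName));
}

// In the component, the memoized version would look something like:
// const sorted = useMemo(() => sortByLastName(members), [members]);
```

The spread-then-sort pattern matters here: `Array.prototype.sort` sorts in place, and mutating props or state in a React component is a common source of bugs.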
Is it perfect? No. Does it save time and allow us to actually solve complex problems? Yes.
Yep. I use it to scaffold, rubber duck, and comment. I check it after but it does a pretty good job at that. It gives slightly better feedback than the stuffed bear on my desk and the comments are usually pretty good.
In addition, I'll write up a big ol' presentation that needs to be summarized but I'm not great at self editing. So I'll feed it through and have it give me an elevator pitch, an executive summary, and a shorter presentation. It does a passable job with very few edits for that.
It's definitely not going to replace me. It can't do my job or even anything close. But it has helped me be better at my job the same way that VS Code and PyCharm make things faster and better than just typing in Notepad.
Right. I use it every day and I'm probably 600% more productive now. What's shocking to me is the apparent difference in quality between ChatGPT and Google's AI. ChatGPT is incredibly useful. Google's AI seems dangerous and ridiculous.
I don't think it's ALWAYS true that the effort is equal. I think that for important things that need very low error rates, the effort is going to be equal or greater.
But if you are doing creative writing and have writer's block, generative AI can give ideas that are helpful even when wrong, because identifying why they are wrong might point you in the right direction.
I wonder how long it will take for Google to pull the plug on that feature. I mean, this is hilarious of course, so there's that, but I'm pretty sure that relying on the user's awareness to catch the oopsie woopsies is not a good idea. This will end badly.
For every deadly answer that is also hilarious and obviously wrong to most people, there might be tens or hundreds of deadly answers that are not as obvious to a significant percentage of users.
It is just a question of when Google's artificial "intelligence" will kill someone.
The actually scary thing is when the AI suggestions become less ridiculous. It seems very unlikely that anyone would try to sauté garlic in gasoline, but at a certain point these suggestions are going to sound more reasonable while being just as dangerous. Say, which medication is safe. Or what to do in an emergency. Those are the things that are going to get people killed.
Though before that happens, it might launch all the nukes because someone asks it to boost rocket power and forgets to specify it's just for a rocket design they're working on.