I'm a data analyst moving into data science. I've been ranting about this from the start, but everyone's too obsessed with the shiny new thing.
It's just a model, fit by an optimisation algorithm, that's learned that certain words have different probabilities of occurring depending on context.
I had to remind my friend, "when it tells you something, it has no idea what it's just told you," because that's all it really does; it spits out text based on a guessed context but has no comprehension of that context.
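To make that concrete, here's a toy sketch of the idea (hypothetical corpus, nothing like a real LLM in scale): count how often each word follows another, then "generate" by sampling from those conditional probabilities. Real models condition on much longer contexts with learned weights rather than raw counts, but the predict-the-next-token loop is the same shape, and there's no understanding anywhere in it.

```python
import random
from collections import defaultdict, Counter

# Purely illustrative toy corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count next-word frequencies conditioned on the current word (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# "Generate" text: at no point does the model know what it has said.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```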
Reminds me of an article I read a few years ago about an Israeli start-up aiming to use AI to analyse images of people and predict what they likely do for a living. Doctor, teacher, terrorist, etc.
Basically phrenology. They were making a digital racist uncle.
If the model is accurate though, I'd be okay with it. I understand it'll never be perfect, and laws should be put in place banning people from being arrested or interrogated based on that kind of suspicion alone, but still.
Yes, yes. We do this already, and we are probably safer for it. If we had a model, we could point to evidence that, in general, this helps. It should never be admissible as evidence in court, but it would help police figure out who to keep an eye on.