Could be helpful if it silently (or at least subtly) warns the user that they're approaching those boundaries. I wouldn't mind a little extra assistance preventing those embarrassing after-the-fact realizations. It'd have to be done in a way that preserves privacy though.
Like most scientific and technical advances, it could be an amazing tool for personal use. It won’t, of course. It will be used to make someone rich even richer, and to control or oppress people. Gotta love humanity.
I'm extremely skeptical of medical diagnosis AIs. Without being able to explain why it comes to a conclusion, how do we know it isn't just latching onto spurious correlations? One example I heard of recently was an AI that was extremely good at detecting TB... based on the age of the machine that took the x-ray. Because it turns out places with older machines tend to be poorer, and poorer places tend to have more TB.
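You can reproduce that failure mode in a few lines. Here's a minimal sketch (purely hypothetical numbers, stdlib only): within each hospital, machine age tells you nothing about any individual patient, but pool the hospitals together and "old machine" suddenly correlates with TB, which is exactly the shortcut a model will happily learn.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation, stdlib-only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

def make_site(old_prob, tb_rate, n=5000):
    # Within one site, machine age and TB status are independent:
    # each record is (machine_is_old, patient_has_tb).
    return [(random.random() < old_prob, random.random() < tb_rate)
            for _ in range(n)]

poor = make_site(0.95, 0.30)   # mostly old machines, high TB rate
rich = make_site(0.05, 0.02)   # mostly new machines, low TB rate
pooled = poor + rich

for name, data in [("poor site", poor), ("rich site", rich), ("pooled", pooled)]:
    old = [float(o) for o, _ in data]
    tb = [float(t) for _, t in data]
    print(f"{name}: corr(machine_age, tb) = {corr(old, tb):+.2f}")
```

The within-site correlations hover around zero while the pooled one is strongly positive, so a model trained on the pooled data can score well on held-out pooled data and still know nothing about TB.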
The only positive use I can think of is time saving measures. A researcher can feed a study to ChatGPT and have it write a rough first draft of the abstract. A Game Master could ask it for inspiration on the next few game sessions if they're underprepared. An internet commenter could ask it for a third example of how it could save time.
But for anything serious, until it can explain why it comes to the conclusions it comes to, and can understand when a human says "no, you're doing it wrong," I can't see it being a real force for good.
Ehh... at least we know we don't understand how the AI reached its conclusion. Study human cognition long enough and you discover that our beliefs about how we reach our own conclusions are just stories the conscious mind makes up to justify them after the fact.
"No, you're doing it wrong" isn't really a problem - it's fundamental to most ML processes.
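To make that concrete: in most supervised learning, "you're wrong" *is* the training signal. A perceptron is the simplest sketch of this (toy data, hypothetical names): every time the provided label disagrees with the prediction, the weights get nudged, and that correction is the entire learning rule.

```python
def predict(w, b, x):
    # Linear threshold unit: fire iff w.x + b > 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            err = label - predict(w, b, x)  # nonzero means "you're wrong"
            if err:
                # Nudge the weights toward the corrected answer.
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# Linearly separable toy task: label 1 iff the point sits up and right.
data = [([0.0, 0.0], 0), ([1.0, 1.0], 1), ([0.9, 0.9], 1), ([0.1, 0.2], 0)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 1, 1, 0], matching the labels
```

The harder (and fair) objection is the one upthread: the corrections only work when someone can tell the model it's wrong, which presupposes we can check its answers in the first place.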
Nice use of the scary word 'violations,' really gets the blood pumping. A more apt description would be 'deviations from social norms,' which better matches the stated DARPA goal of helping with language and cultural barriers. It could give invad- er, liberating American forces a heads up when they are about to commit a cultural SNAFU. Helps build better relationships with the locals.
Psycho-Pass raises a lot of interesting questions and dilemmas because the technology depicted actually works in that setting (aside from a few outlying cases that help drive interesting plots). If there was a scanner of some sort that actually genuinely could detect violent intention before it was acted on then I think it would be reasonable and moral to use it in at least some manner.
Unless this is just for identifying social norms violations in written communication for the purpose of government-to-government communication, this seems vastly... infeasible, I guess. Because norms change over time, and you're going to have to keep updating this model whenever someone finally notices a change has occurred. If anything, it might generate a completely new form of grammar/phrasing expectations due to the feedback from this likely-to-not-change-very-much ruleset... As in, if you thought politically correct phrasing was annoying now, just wait until the AI says you're not doing it well enough.
Idk though, this isn't my specialty area, anyone care to tell me how I'm wrong? What good can this really do?
(I swear I did read the article, it just isn't clicking over the sound of my loud pessimism)