President Javier Milei creates security unit as experts warn certain groups may face excessive scrutiny from the technology
Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.
The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.
While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.
By definition, everyone has the potential for criminality, especially those applying and enforcing the law; and for that matter, not even the AI is above the law, unless that's somehow changing. We need a lot of things on Earth first (an IoT consortium, for example), but an AI bill of rights in the US or EU would hopefully set a precedent for the rest of the world.
The AI is a pile of applied statistical models. The humans in charge of training it, testing it and acting on its output have full control over, and responsibility for, anything that comes out of it. Personifying an AI system, or otherwise separating it from the will of its controllers, is dangerous because it erodes responsibility.
Racist cops have used "I go where the crime is" as an excuse to basically hunt minorities for sport. Do not allow them to say "the AI model said this was efficient" and pretend it is not their own full and knowing bias directing them.
That's not even the problem here... AI, big data, a consultant: it's all just an excuse to point to when they do what they wanted to do anyway, which is profile "criminals" and harass them.
There's actually a subtle knock on Scientology that I think even Tom Cruise missed in that film. The drug he's addicted to that ruins his life is called 'Clarity.'
Oh god... soon we won't be able to make any more sci-fi movies for fear that some idiot with too much money and power will use them as "How to" videos.
Yeah but a lot of “anarcho” capitalists claim to be just another type of anarchist. This is the point I’m making, which is that they are very much not real anarchists.
Since it’s a shallow ideology with no strong moral principles, it’s not surprising that its adherents hold contradictory viewpoints like social conservatism.
Anarchy: yet another term hijacked by fascists and mangled beyond recognition.
It's just extreme economic liberalism: small or no government, so that corporations can rule over us as warlords. It's a smokescreen for corporate feudalism.
Yeah, but Person of Interest turns it around (at least for quite some time) and makes it like the precrime thing is a good idea. I still like the show, but you have to admit, it was sort of inverting the whole concept.
That's already been tried. In the end the AI is just an electronic version of existing police biases.
Police file more reports and make more arrests in poor neighborhoods because they patrol there more. Those reports get used as training data, so the AI predicts more crime in poor areas. Those areas then get over-patrolled, the added tension leads to more arrests, and the system is celebrated for being correct.
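That feedback loop is easy to demonstrate with a toy simulation. All the numbers below are made up for illustration: two neighborhoods have identical true crime rates, but one starts with slightly more recorded incidents, and patrols are allocated to the predicted "hot spot". Because crime only gets recorded where police are looking, the initial gap in the records compounds on its own:

```python
# Two neighborhoods with IDENTICAL underlying crime rates.
true_rate = {"A": 0.10, "B": 0.10}

# Historical bias: A starts with slightly more recorded incidents.
recorded = {"A": 55.0, "B": 45.0}

PATROLS = 100.0  # patrol-hours per round

for _ in range(20):
    # "Predictive" policy: send most patrols to the predicted hot spot.
    hot = max(recorded, key=recorded.get)
    patrols = {h: (0.9 * PATROLS if h == hot else 0.1 * PATROLS)
               for h in recorded}
    for h in recorded:
        # Crime is only recorded where police are looking:
        # detections scale with patrol-hours, not with actual crime gaps.
        recorded[h] += patrols[h] * true_rate[h]

share_hot = max(recorded.values()) / sum(recorded.values())
print(f"Recorded-crime share in the 'hot' neighborhood: {share_hot:.0%}")
# -> about 78%, up from an initial 55%, despite identical true rates
```

The model is never "wrong" by its own lights: every extra patrol in A finds real incidents, so the records confirm the prediction, even though B has exactly as much crime going unrecorded.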
It's like, in "Minority Report", some of these crimes weren't even premeditated, for example the one they stop at the beginning. The guy was about to stab his wife because he found out she'd been cheating on him. Chances are, given time to process his feelings, he wouldn't have done it.
This sounds too surveillance-heavy for a self-proclaimed libertarian, and too flamboyant an economic investment for the guy who said to cut all unnecessary costs.
Part of the problem with this approach is that prediction engines are predicated on the idea that there's more of the thing to predict.
So unless they really, really go out of their way to model the records to account for this, they'll have a system very strongly biased towards predicting more criminal behaviour for everyone fed into it.
"Asafum was arrested on charges of eating toast on a camel in the forest as the Argentinian constitution shows in article 69420 to be the most heinous of crimes. Brought to you by GoogmetopenAIsandwitch GPT."
Anyone who knows more than a 5-minute introductory course on AI knows that AI CANNOT be trusted. There are a lot of possibilities with AI and a lot of potentially great applications, but you can never blindly trust its outcomes.
Secondly, we know that AI can give great (yet unreliable) answers to questions, but we have no idea how it arrived at those answers. This was true 30 years ago, and it remains true today. How can you say "he will commit that crime" if you can't even say how you came to that conclusion?
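There's also a base-rate problem on top of the opacity problem. The numbers below are invented for illustration, but they show the standard Bayes calculation: even a predictor that is 99% accurate in both directions is mostly wrong about individuals when the thing it predicts is rare.

```python
# Hypothetical numbers: how often is a "flagged" person actually a
# future offender, if serious planned crime is rare in the population?
prevalence = 0.001   # 1 in 1,000 people actually about to commit a crime
sensitivity = 0.99   # the model flags 99% of true future offenders
specificity = 0.99   # the model clears 99% of everyone else

flagged_guilty = sensitivity * prevalence            # true positives
flagged_innocent = (1 - specificity) * (1 - prevalence)  # false positives
precision = flagged_guilty / (flagged_guilty + flagged_innocent)

print(f"Chance a flagged person is a real future offender: {precision:.1%}")
# -> about 9%: roughly 10 of every 11 people flagged are innocent
```

And that assumes an implausibly good model. With realistic accuracy, almost everyone the system points at is someone it shouldn't be pointing at.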
This system is terrible, but it wouldn't be an LLM like ChatGPT. In theory they would train a deep-learning model on historical crime data.
In any case, it is a truly horrible and dystopian idea. OK, perhaps, for planning where to deploy limited police resources and scheduling patrols.
If you give these dummies a magic eight ball and tell them it's a real "police tool", they will shoot the first person it says "signs point to yes" about, after they ignorantly ask, "has this person I suspect with no evidence committed crimes before, or will they in the future?"