Spain has become reliant on an algorithm to score how likely a domestic violence victim is to be abused again, and to decide what protection to provide — sometimes with fatal consequences.
The way to use these kinds of systems is to have the judge come to an independent decision; then, after that's keyed in, the AI spits out its own, and whichever of the two predicts more danger is acted on.
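A rough sketch of that protocol (function names and the risk scale here are made up for illustration, not from any real system):

```python
# Ordered from least to most dangerous; hypothetical scale.
RISK_LEVELS = ["negligible", "low", "medium", "high", "extreme"]

def combined_assessment(judge_risk: str, ai_risk: str) -> str:
    """The judge's call is keyed in before the AI score is revealed;
    whichever of the two predicts more danger is the one acted on."""
    return max(judge_risk, ai_risk, key=RISK_LEVELS.index)

print(combined_assessment("low", "high"))  # -> "high": the AI's higher estimate wins
```

The point being that neither assessor can anchor on the other, and the system only ever escalates protection, never downgrades the human's call.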
Relatedly, the way to have an AI select people and companies for spot checks by tax investigators is not to show investigators the AI scores, but to mix the AI's suspicions into a stream of randomly selected people.
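Something like this (hypothetical names, just to illustrate the blinding):

```python
import random

def build_audit_queue(population, ai_flagged, n_random, seed=None):
    """Mix AI-flagged cases into a stream of randomly selected ones,
    shuffled so investigators can't tell which is which."""
    rng = random.Random(seed)
    queue = list(ai_flagged) + rng.sample(population, n_random)
    rng.shuffle(queue)
    return queue
```

The investigator just works through the queue; only the people running the experiment know which cases the model flagged, so you also get a control group to measure the model against.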
Relatedly, the way to involve AI in medical diagnosis is not to tell the human doctor the results, but to suggest additional tests to run. The "have you ruled out lupus" approach.
And from what I've heard, the medical profession actually got that right from the very beginning. They know what priming and bias are. Law enforcement? I fear we'll have to ELI5 them the basics for the next five hundred years.
I don't think there's any AI involved. The article mentions nothing of the sort, the system is at least 17 years old (according to the article), and the input is 35 yes/no questions, so it's probably just some points assigned for the answers and maybe some simple arithmetic.
Edit: Upon a closer read I discovered the algorithm was much older than I first thought.
Sounds like an expert system then (just judging by the age), which was "AI" before the whole machine learning craze. In any case, you need to take the same kind of care when integrating them into whatever real-world structures there are.
Medicine used them with quite some success, the problem being that they take a long time to develop, because humans need to input the expert knowledge, and then they get outdated quite quickly.
Back to the system, though: 35 questions is not enough for these kinds of assessments. And that's not an issue of the number of questions, but of things like body language and tone of voice not being included.
so it’s probably just some points assigned for the answers and maybe some simple arithmetic.
Why yes, that's all that machine learning is, a bunch of statistics :)
I know, but that's not what I meant. I mean literally something as simple and mundane as assigning points per answer and evaluating the final score:
// Pseudo code
risk = 0
if (Q1 == true) {
    risk += 20
}
if (Q2 == true) {
    risk += 10
}
// etc...

// Maybe throw in a bit of
if (Q28 == true) {
    if (Q22 == true and Q23 == true) {
        risk *= 1.5
    } else {
        risk += 10
    }
}

// And finally, evaluate the risk:
if (risk < 10) {
    return "negligible"
} else if (risk >= 10 and risk < 40) {
    return "low risk"
}
// etc... You get the picture.
And yes, I know I can just write if (Q1) {, but I wanted to make it a bit more accessible for non-programmers.
The article gives absolutely no reason to assume it's anything more than that, and I apparently missed the part of the article that mentioned the system has been in use since 2007. I know we had machine learning back then too, but looking at the project description here: https://eucpn.org/sites/default/files/document/files/Buena practica VIOGEN_0.pdf it looks more like they looked at a bunch of cases (2,159) and came up with the 35 questions and a scoring system not unlike what I described above.
VioGén’s algorithm uses classical statistical models to perform a risk evaluation based on the weighted sum of all the responses according to pre-set weights for each variable. It is designed as a recommendation system but, even though the police officers are able to increase the automatically assigned risk score, they maintain it in 95% of the cases.
... which incidentally matches what the article says (that police maintain the VioGen risk score in 95% of the cases).
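A weighted sum with thresholds, as the project description puts it, is barely more code than my sketch above. Something like this in Python (the weights, thresholds, and labels are invented for illustration, obviously not VioGén's actual parameters):

```python
# One pre-set weight per yes/no question; invented values.
WEIGHTS = [20, 10, 25, 15]
# (upper bound, label) pairs, checked in order; invented thresholds.
THRESHOLDS = [(10, "negligible"), (40, "low risk"), (70, "medium risk")]

def risk_label(answers):
    """Weighted sum of the yes answers, mapped to a risk label."""
    score = sum(w for w, a in zip(WEIGHTS, answers) if a)
    for limit, label in THRESHOLDS:
        if score < limit:
            return label
    return "high risk"

print(risk_label([True, False, False, False]))  # score 20 -> "low risk"
```

Which is exactly the kind of "simple arithmetic" I was guessing at — no machine learning required, in 2007 or now.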