Reminds me of an early application of AI where scientists were training an AI to tell the difference between a wolf and a dog. It got really good at it on the training data, but it didn't work correctly in actual application. So they got the AI to give them a heatmap of which pixels it relied on most to decide whether a canine is a dog or a wolf, and they discovered that the AI wasn't even looking at the animal; it was looking at the surrounding environment. If there was snow on the ground, it said "wolf", otherwise it said "dog".
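That heatmap trick is easy to demo. Here's a toy occlusion-based saliency sketch (all names and the "model" are made up for illustration): slide a mask over the image and measure how much the classifier's output changes. The stand-in classifier, like the wolf/dog model, only looks at whether the top rows look like snow, and the heatmap exposes exactly that.

```python
import numpy as np

def snow_biased_classifier(img):
    """Hypothetical stand-in model: 'wolf score' = brightness of the top rows."""
    return img[:2, :].mean()

def occlusion_saliency(img, classifier, patch=2):
    """Mask out one patch at a time; big output changes mean the patch mattered."""
    base = classifier(img)
    heat = np.zeros_like(img, dtype=float)
    for r in range(0, img.shape[0], patch):
        for c in range(0, img.shape[1], patch):
            occluded = img.copy()
            occluded[r:r+patch, c:c+patch] = 0  # black out this patch
            heat[r:r+patch, c:c+patch] = abs(base - classifier(occluded))
    return heat

# Toy 6x6 "image": bright snow in the top two rows, a dark animal below.
img = np.zeros((6, 6))
img[:2, :] = 1.0
heat = occlusion_saliency(img, snow_biased_classifier)
# The heatmap lights up only on the snow rows, never on the animal.
```

Real saliency methods (occlusion maps, LIME-style explanations) are fancier, but the principle is the same: perturb the input and watch what actually moves the prediction.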
The idea of AI automated job interviews sickens me. How little of a fuck do you have to give about applicants that you can't even be bothered to have a single person interview them??
That shit works IRL too. Why do you think therapy practices often have themselves positioned in front of a wall of books? Not that it's a bad thing; it's good for outcomes to believe your therapist is competent and well educated.
That reminds me of the time, quite a few years ago, Amazon tried to automate resume screening. They trained a machine learning model with anonymized resumes and whether the candidate was hired. Then they looked at what the AI was looking at. The model had trained itself on how to reject women.
Someone should build a little AI app that scrapes a job listing, then takes a resume and rewrites it in subtle ways to perfectly match the job description.
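The core of that app doesn't even need AI. Here's a minimal sketch, assuming you've already scraped the listing and the resume down to plain text (the sample strings and function names are made up): pull the listing's most frequent keywords and flag the ones the resume never mentions, so a rewrite pass knows what to work in.

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "a", "to", "of", "in", "for", "with", "we", "you"}

def keywords(text, n=10):
    """Most frequent non-stopword terms in the job listing."""
    words = re.findall(r"[a-z+#]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]

def missing_keywords(listing, resume):
    """Listing keywords the resume never mentions."""
    have = set(re.findall(r"[a-z+#]+", resume.lower()))
    return [k for k in keywords(listing) if k not in have]

listing = "Seeking engineer with Kubernetes, Terraform and Python experience."
resume = "Python developer, five years of backend experience."
print(missing_keywords(listing, resume))  # includes 'kubernetes' and 'terraform'
```

Given that gap list, the "subtle rewrite" step is just rephrasing existing bullet points to use the listing's own vocabulary, which is exactly what keyword-matching screeners reward.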
One of my favorite examples is when a company from India (I think?) trained their model to regulate subway gates. The system was supposed to analyze footage and open more gates when there were more people, and vice versa. It worked well until one holiday when there were no people, but all the gates stayed open. They eventually discovered that the system was looking at the clock visible in the video, rather than the number of people.
I do that shit when I have a web interview. Put up a guitar just visible in the camera, a small bookshelf, a floor lamp, make sure my tennis bag is visible despite not playing in ages...
Whether they realize it or not, people do take this stuff in. Not sure why some algorithm based on these very same interviews wouldn't do the same.
One web LLM I was screwing around with had Job Interview as a preset. Ok. Played it totally straight the first time and had a totally positive outcome. Thought the interviewer way too agreeable. The next time I said the most inappropriate stuff I could imagine and still the interviewer agreed to come home with me to check out the rock collection I keep under my bed and listen to Captain Beefheart albums.
Why are the different scales connected? How exactly does one interpolate between agreeableness and neuroticism? This is the kind of diagram I used to draw as an 8-year-old, and they put this crap in a real product...
"Machine learning" is perfectly cromulent. The bias is what it learned, because that's what it was taught. (Not intentionally, I don't think. It's just hard to get this stuff right sometimes.)
I really hate that we are calling this wave of technology "AI", because it isn't. It is "machine learning", sure, but it is just brute-force pattern recognition v2.0.
Both the desired outcomes you define and the data you train it on carry a LOT of built-in biases.
It's a cool technology I guess, but it's being overused and misused across the board by every company with FOMO hoping to get some profit edge on the competition. How about we have AI replace the bullshit CEO and VP positions instead of trying to replace fast food drive-through workers and Internet content?
I guess that's nothing new for humans... One human invents the spear for fishing and the rest use them to hit each other over the head.
Answering the question in the image: machine learning arose from the industrial control world. The idea was to teach a machine to detect defects in supposedly identical objects coming off a manufacturing line, most often with "machine vision" (i.e., a camera). Applying it to humans was asinine.
I wonder if it's actually interpreting the bookshelf, or if such a busy background is taking a toll on the compression, which would alter the detail in the person's face.
I don't understand why anyone writing, reading, or commenting on this thinks a bookshelf would not change the outcome. Like, what do you people think these ML models are, human brains? Are we still not below even the first layer of understanding?