While many social media users are blaming the pedestrian for reportedly crossing against the light, the incident highlights the challenge autonomous driving faces in complex situations.
TBF, the car is stupid too, and not just because of this; AI in general is stupid. If it were a human in the car we would just say he was angry, but with AI we know it wasn't angry and made a mistake. It happened to catch the mistake before it killed somebody, but that mistake is in the programming of every single car of that type in the world. Letting a small problem like this go is the same as saying it's legal, since it will be a nationwide bug that is allowed.
I recommend trying https://www.moralmachine.net/ and answering 13 questions to get a bigger picture. It will take you no more than 10 minutes.
You may find out that the problem is not as simple as a four-word soundbite.
In this week’s Science magazine, a group of computer scientists and psychologists explain how they conducted six online surveys of United States residents last year between June and November that asked people how they believed autonomous vehicles should behave. The researchers found that respondents generally thought self-driving cars should be programmed to make decisions for the greatest good.
Sort of. Through a series of quizzes that present unpalatable options that amount to saving or sacrificing yourself — and the lives of fellow passengers who may be family members — to spare others, the researchers, not surprisingly, found that people would rather stay alive.
First off, ignoring the pitfalls of AI:
There is the issue at the core of the Trolley problem. Do you preserve the life of a loved one or several strangers?
This translates to: if you know the options when you're driving are:
- Drive over a cliff / into a semi / some other guaranteed-lethal thing for you and everyone in the car, or
- Hit a stranger, but you won't die.
What do you choose as a person?
Then, we have the issue of how to program a self-driving car on that same problem. Does it value all life equally, or is it weighted to save the life of the immediate customer over all others?
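To make that programming question concrete, here's a minimal toy sketch of what "weighting" would actually mean in code. Every name, probability, and weight below is hypothetical, invented purely for illustration; no real AV stack works like this:

```python
# Toy sketch of the "whose life counts how much" question.
# All names, probabilities, and weights are hypothetical.

def expected_harm(option, occupant_weight=1.0, bystander_weight=1.0):
    """Score a maneuver by weighted expected fatalities (lowest is chosen)."""
    return (occupant_weight * option["p_occupant_death"] * option["occupants"]
            + bystander_weight * option["p_bystander_death"] * option["bystanders"])

options = [
    {"name": "swerve off the cliff", "p_occupant_death": 0.9, "occupants": 1,
     "p_bystander_death": 0.0, "bystanders": 0},
    {"name": "hit the pedestrians", "p_occupant_death": 0.0, "occupants": 1,
     "p_bystander_death": 0.5, "bystanders": 3},
]

# Weighted equally, expected harm is 0.9 vs 1.5: the car sacrifices its occupant.
print(min(options, key=expected_harm)["name"])

# Weight the paying customer 10x (9.0 vs 1.5) and the same math hits the pedestrians.
print(min(options, key=lambda o: expected_harm(o, occupant_weight=10.0))["name"])
```

The uncomfortable part is that the weight isn't an emergent accident: it's a number somebody has to write down and ship in every car of that type.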
Lastly, and really the likely core problem: modern AI isn't capable of full self-driving, and the current core architecture will always have a knowledge gap, regardless of the size of the model. 99% of the time, these systems can only handle things that are in their training data. So if they don't recognize a human or an obstacle, in all of the myriad forms we can take and ways we can move, they will ignore it. The remaining 1% is hallucinations that happen to be randomly beneficial. But, particularly for driving, if it's not in the model, they can't do it.
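As a rough illustration of that "not in the model" failure mode (again, purely hypothetical code, not any vendor's actual pipeline): a closed-set detector can only report the classes it was trained on, and anything scoring below threshold simply vanishes from the planner's world.

```python
# Hypothetical sketch of the closed-set perception gap described above; no
# real AV stack is this simple, but the failure mode is the same in spirit.

KNOWN_CLASSES = {"car", "pedestrian", "cyclist", "traffic_cone"}
CONFIDENCE_THRESHOLD = 0.5

def plan(detections):
    """detections: (label, confidence) pairs emitted by the perception model."""
    obstacles = [(label, conf) for label, conf in detections
                 if label in KNOWN_CLASSES and conf >= CONFIDENCE_THRESHOLD]
    if obstacles:
        return "brake"
    # The gap: a person in a costume, someone lying on the road, a toppled
    # scooter... if nothing scores as a known class, the planner has no
    # object to react to, so it carries on.
    return "proceed"

# A human in an unusual pose scores low on every known class and is dropped.
print(plan([("pedestrian", 0.31), ("traffic_cone", 0.22)]))  # prints "proceed"
```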
I don't think "posts on social media" is a good indicator for what the public thinks anymore, if ever. The amount and reach of bot or bought accounts are disturbingly high.
Between the terrible demographic distribution, the absolute sewage that social media is, and the bots that make up more than half the content: if you want to know what the general public thinks, you could not choose a worse source.
Reading the comments, I get the impression that most people didn't actually read the article, which says that a woman was barely touched, and not injured, by a self-driving car while crossing the street against a red light.
There is barely any "news" here, let alone sides to take, as the car correctly halted as soon as possible after noticing the pedestrian's unforeseeable move.
I am perfectly aware that self-driving technology still has numerous problems, corroborated by the incidents reported from time to time, but if anything this article seems to be proof that these cars will at least not crush to death the first pedestrian who makes a funky move.
How does "driverless cars hitting people is so incredibly rare that a single instance of it immediately becomes international news" at all signify "boring dystopia"? If anything we should be ecstatic that the technology to eliminate the vast majority of car deaths is so close and seems to be working so well.
Don't let perfect be the enemy of ridiculously, insanely amazing.
I'd argue they need to fuck up less than the alternative means of transport we could be transitioning to if we weren't so dead-set on being car dependent. So dead-set, in fact, that we are allowing ourselves to be made complacent by billion-dollar companies that peddle entirely new technology to excuse the death and destruction they've wrought upon our environment and social fabric, and continue to perpetuate, instead of demanding new iterations of the old, safer, more affordable, more efficient, but unfortunately less profitable tech that our country sold out to those same monied interests to dismantle.
Exactly. As early as the technology still is, it seems like it's already orders of magnitude better than human drivers.
I guess the arbitrary/unfeeling impression of driverless car deaths bothers people more than the "it was just an accident" impression of human-caused deaths. Personally, as long as driverless car deaths are significantly rarer than human-caused deaths (and it already seems like they are much, much rarer), I'd rather take the lower chance of dying in a car accident, but that's just me.
I see you're not familiar with the trend of autonomous vehicles hitting pedestrians and parked cars. They were suspended from operating in San Francisco after many, many incidents. So far their track record is inferior to humans' (see Tesla Autopilot, Waymo, and Cruise), so you don't need to worry about perfect.
As someone who was literally just in San Fran: the driverless cars are not only still a thing, they're booked out days in advance, so idk where you're getting your info from.
In December, Waymo safety data—based on 7.1 million miles of driverless operations—showed that human drivers are four to seven times more likely to cause injuries than Waymo cars.
From your first article.
Cruise, which is a subsidiary of General Motors, says that its safety record "over five million miles" is better than that of human drivers.
From your second.
Your third article doesn't provide any numbers, but it's not about fully autonomous vehicles anyway.
In short, if you're going to claim that their track record is actually worse than humans, you need to provide some actual evidence.
Edit: Here's a recent New Scientist article claiming that driverless cars "generally demonstrate better safety than human drivers in most scenarios" even though they perform worse in turns, for example.