I think this is an issue with people being offended by definitions. Slavery did “help” the economy. Was it right? No, but it did. Mexico’s drug problem helps that economy. Adolf Hitler was “effective” as a leader. He created a cultural identity for people that had none and mobilized them to war. Ethical? Absolutely not. What he did was horrendous and the bit should include a caveat, but we need to be a little more understanding that it’s a computer; it will use the dictionary of the English language.
People think of AI as some sort of omniscient being. It's just software spitting back the data that it's been fed. It has no way to parse true information from false information because it doesn't actually know anything.
Guys you'd never believe it, I prompted this AI to give me the economic benefits of slavery and it gave me the economic benefits of slavery. Crazy shit.
Why do we need child-like guardrails for fucking everything? The people that wrote this article bowl with the bumpers on.
If you ask an LLM for bullshit, it will give you bullshit. Anyone who is at all surprised by this needs to quit acting like they know what "AI" is, because they clearly don't.
Slavery was great for the slave owners, so what's controversial about that?
And yes, of course it's economically awesome if people work without getting much money for it, again a huge plus for the bottom line of the companies.
Capitalism is evil against people, not the AI...
Hitler was also an effective leader; nobody can argue against that. How else could he conquer most of Europe? Evil people can be effective too.
That woman in the article, being shocked by this, simply expected the AI to remove Hitler from all included leaders because he was evil. She is surprised that an evil person is included among effective leaders; she wanted to be shielded from that and wasn't.
There needs to be like an information campaign or something... The average person doesn't realize these things say what they think you want to hear, and they are buying into hype and think these things are magic knowledge machines that can tell you secrets you never imagined.
I mean, I get that the people working on the LLMs want them to be magic knowledge machines, but it is really putting the cart before the horse to let people assume they already are, and the little warnings at the bottom of the page that some stuff may be inaccurate aren't cutting it.
Not only has it been caught spitting out completely false information, but in another blow to the platform, people have now discovered it's been generating results that are downright evil.
Case in point, noted SEO expert Lily Ray discovered that the experimental feature will literally defend human slavery, listing economic reasons why the abhorrent practice was good, actually.
Among the listed "benefits": that enslaved people learned useful skills during bondage — which sounds suspiciously similar to Florida's reprehensible new educational standards.
The pros included the dubious point that carrying a gun signals you are a law-abiding citizen, which she characterized as a "matter of opinion," especially in light of legally obtained weapons being used in many mass shootings.
Imagine having these results fed to a gullible public — including children — en masse, if Google rolls the still-experimental feature out more broadly.
But how will any of these problems be fixed when the number of controversial topics seems to stretch into the horizon of the internet, filled with potentially erroneous information and slanted garbage?
The original article contains 450 words, the summary contains 170 words. Saved 62%. I'm a bot and I'm open source!
Obviously it doesn't "think" any of these things. It's just a machine repeating back a plausible mimicry.
What does scare me, though, is what Google execs think.
They will be tweaking it to remove obvious things like praise of Hitler, because PR, but what about all the other stuff?
Like, most likely it will be saying things like what a great guy Masaji Kitano was for founding Green Cross and being such an experimental innovator, and no one will bat an eye because they haven't heard of him.
As we outsource more and more of our research and fact checking to machines, errors in knowledge are going to be reproduced and reinforced. Like how Cinderella now has "glass" slippers.
A bit of a nitpick, but it was technically right on that one thing…
Hitler was an “effective” leader…
Not a good or a moral one, but if he had not been as “successful” at waging war and genocide, I doubt he'd be more than a small mention in history.
Now, a better AI should have realized that giving him as an example was offensive in this context.
In an educational setting this might be more appropriate, to teach that success does not equal morally good. Something I wish more people were aware of.
So the AI provided factual information and they did not like that because 'slavery bad, therefore there was no benefit to it.'
There were benefits to slavery, mainly for the owners. US had a huge cotton export at one point, with the fields being worked by slaves.
But also a very few slaves did benefit, like being able to work a job that taught them very useful skills and let them earn money to buy their own freedom.
Of course not being a slave in the first place would be far better, but when you are one already, learning a skill that earns you your freedom and a job afterwards is quite the blessing.
Plus, a few individuals might've been living in such terrible conditions that being forced to work while getting fed might not have been so bad...