Study Finds Consumers Are Actively Turned Off by Products That Use AI
I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.
Some people are gonna lose a lot of other people's money over it.
Definitely. Many companies have implemented AI without thinking with 3 brain cells.
Great and useful implementation of AI exists, but it's like 1/100 right now in products.
If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.
At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our system, and many of us advised management it was irresponsible since it's giving people advice on very sensitive matters without any guarantee that advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is "AI-driven".
My old company (before they laid me off) laid off our entire HR and Comms teams in exchange for ChatGPT Enterprise.
“We can just have an AI chatbot for HR and pay inquiries and ask Dall-e to create icons and other content”.
A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha
Yes, I'm getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, and every company is basically going all-in in the hope that they're the new Amazon. In the end most will end up like pets.com, but it's a risk they're willing to take.
A lot of it is follow-the-leader type bullshit. Companies in areas where AI is actually beneficial have already been implementing it for years, quietly, because it isn't something new or exceptional. It is just the tool you use for solving certain problems.
Investors going to bubble though.
Yeah, it can make some products better, but most of the products these days that use AI don't actually need it. It's annoying to use products that actively shovel in AI when they don't even need it.
Ya know what product MIGHT be better with AI?
Toasters. They have ONE JOB, and everybody agrees their toaster is crap. But you're not going to buy another toaster, because that too will be crap.
How about a toaster that accurately and evenly toasts your bread, and then DOESN'T give you a heart attack at 5am when you're still half asleep???
IS THAT TOO MUCH TO ASK???
I tried to find the advert but I see this on YouTube a lot - an Adobe AI ad which depicts, without shame, AI writing out a newsletter/promo for a business owner's new product (cookies or ice cream or something), showing the owner putting no effort into their personal product and a customer happily consuming because they were attracted by the thoughtless promo.
How are producers/consumers okay with everything being so mediocre??
How are producers/consumers okay with everything being so mediocre??
I'm not. My particular beef is with plastics and toxic materials and chemicals being ubiquitous in everything I buy. Systemic problem that I can do almost nothing about apart from making things myself out of raw materials.
How are producers/consumers okay with everything being so mediocre??
"You're always trying to make everything just a little bit worse so that you can feel good about having a lot more of it. I love it. It's so human!" - The Good Place
My doorbell camera manufacturer now advertises their products as using, "Local AI" meaning, they're not relying on a cloud service to look at your video in order to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.
As I mentioned in another post, about the same topic:
Slapping the words “artificial intelligence” onto your product makes you look like a shady used-car salesman: at best it’s misleading, at worst it’s actually true but poorly done.
LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.
Often the answers are pretty good. But you never know if you got a good answer or a bad answer.
And the system doesn't know either.
For me this is the major issue. A human is capable of saying "I don't know". LLMs don't seem able to.
They really aren't. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It's good at getting broad strokes but the details are very often wrong.
Now imagine someone that doesn't have your expertise reading that answer. They won't recognize those details are wrong until it's too late.
With a proper framework, decent assertions are possible.
If that is done, the workload on the human is very low.
That said, it's STILL imperfect, but this is leagues better than one-shot question and answer.
Sounds familiar. Citation please
Market shows that investors are actively turned on by products that use AI
Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.
Sadly, what some of those customers want to do is to somehow improve their own business without thinking, and then they too care about buzzwords; that's how the hype spreads.
There are different types of people in the market. The informed ones hate AI, and the uninformed love it. The informed ones tend to be the cornerstones of businesses, and the uninformed ones tend to be in charge.
So we have... All this. All this nonsense. All because of stupid managers.
It's the new blockchain or NFT hype; they think it's magic.
But what if it actually is magic this time? Just this once!? And we miss the hype train?! (This is a sarcastic impression of real conversations I have had.)
No shit, because we all see that AI is just technospeak for “harvest all your info”.
Yes the cost is sending all of your data to the harvest, but what price can you put on having a virtual dumbass that is frequently wrong?
Doubt the general consumer thinks that; I'm sure most of them are turned away because of the unreliability and how ham-fisted most implementations are.
a monthly service fee
for the price of a cup of coffee
More like "instead of making something that gets the job done, expect pur unfinished product to complain and not do whatever it's supposed to". Or just plain false advertising.
Either way, not a good look and I'm glad it's not just us lemmings who care.
LLM based AI was a fun toy when it first broke. Everyone was curious and wanted to play with it, which made it seem super popular. Now that the novelty has worn off, most people are bored and unimpressed with it. The problem is that the tech bros invested so much money in it and they are unwilling to take the loss. They are trying to force it so that they can say they didn't waste their money.
Honestly, they're still impressive and useful; it's just the hype-train overload and the push to implement them in areas where they either don't fit or don't work well enough yet.
AI does a good job of generating character portraits for my TTRPG games. But, really, beyond that I haven't found a good use for it.
Even in areas where they would fit it's really annoying how some companies are trying to push it down our throats.
It's always some obnoxious UI element screaming its 3 example questions at me, and I always sigh and think, "I have to assume you can only answer these 3 particular questions, and why would I ask those questions? And when I ask UI questions I expect precise answers, so why would I want to use AI for that?"
I have no doubt that LLMs have more uses than I can think of, but come on...
I'm happy for studies like this. People who are trying to smear their AI all over our faces need to calm, the f..k, down.
Many of us who are old enough saw it as an advanced version of ELIZA and used it with the same level of amusement until that amusement faded (pretty quick) because it got old.
If anything, they are less impressive because tricking people into thinking a computer is actually having a conversation with them has been around for a long time.
I agree with this, my sentiments exactly. We're getting AI pushed at us from every direction and we really never asked for it. I like to use it for certain things, but I go to it when I need it. I don't want it in everything, at least personally.
They've overhyped the hell out of it and slapped those letters on everything including a lot of half baked ideas. Of course people are tired of it and beginning to associate ai with bad marketing.
This whole situation really does feel dotcommish. I suspect we will soon see an ai crash, then a decade or so later it will be ubiquitous but far less hyped.
Thing is, it already was ubiquitous before the AI "boom". That's why everything got an AI label added so quickly, because everything was already using machine learning! LLMs are new, but they're just one form of AI and tbh they don't do 90% of the stuff they're marketed as and most things would be better off without them.
What did they even expect, calling something "AI" when it's no more "AI" than a Perl script determining whether a picture contains more red color than green or vice versa.
Anything making some kind of determination via technical means, including MCs and control systems, has been called AI.
When people start using the abbreviation as if it were "the" AI, naturally first there'll be a hype of clueless people, and then everybody will understand that this is no different from what was before. Just lots of data and computing power to make a show.
Gartner Hype Cycle is the new Moore’s Law.
For the love of god, defund MBAs.
Fallout was right.
Fallout was so on point that only a lot of distance and humour keeps it from being outright painful or scary, knowing the damn nukes will be popping sooner or later; one just doesn’t know if it's tomorrow or in 80 years. The question is not if but when.
There are even companies slapping AI labels onto old tech with timers to trick people into buying it.
That one DankPods video of the "AI Rice cooker" comes to mind
For what it’s worth, rice cookers have been touting “fuzzy logic” for like 30 years. The term “AI” is pretty much the same, it just wasn’t as buzzy back then.
Yeah that's the one I saw
Take the hint, MBAs.
They don't care. At the moment AI is cheap for them (because some other investor is paying for it). As long as they believe AI reduces their operating costs*, and as long as they're convinced every other company will follow suit, it doesn't matter if consumers like it less. Modern history is a long string of companies making things worse and selling them to us anyway because there's no alternatives. Because every competitor is doing it, too, except the ones that are prohibitively expensive.
[*] Lol, it doesn't do that either
I can attest this is true for me. I was shopping for a new clothes washer, and was strongly considering an LG until I saw it had “AI wash”. I can see relevance for AI in some places, but washing clothes is NOT one of them. It gave me the feeling LG clothes washer division is full of shit.
Bought a SpeedQueen instead and been super happy with it. No AI bullshit anywhere in their product info.
I doubt there's any actual AI in the LG product, it's just a marketing buzzword like they used to use the term 'smartwash'
Much like all the companies who used to market their headphones as "MP3 compatible".
It's just more marketing nonsense.
I'd be fairly certain the washing machine has a few sensors and a fairly simple computer program (designed by humans) that can make some limited adjustments to the wash cycle on the fly.
I've seen quite a few instances of stuff like that suddenly being called "AI" as that's the big buzzword now.
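Purely as an illustration of that point (hypothetical code, obviously not LG's actual firmware), the "AI" in an appliance like that is probably closer to a handful of if-statements over sensor readings than to any model:

```python
# Hypothetical sketch of the kind of rule-based "wash adjustment" logic I'd expect:
# a couple of sensor readings mapped to cycle tweaks. Not any manufacturer's real code.

def adjust_cycle(load_kg: float, turbidity: float) -> dict:
    """Pick wash/rinse settings from a load sensor and a water-turbidity sensor."""
    # Baseline cycle
    cycle = {"wash_minutes": 30, "rinses": 2, "water_level": "medium"}

    # Bigger loads get more water and a longer wash
    if load_kg > 6.0:
        cycle["water_level"] = "high"
        cycle["wash_minutes"] += 10
    elif load_kg < 2.0:
        cycle["water_level"] = "low"
        cycle["wash_minutes"] -= 5

    # Dirtier water (higher turbidity) means an extra rinse
    if turbidity > 0.7:
        cycle["rinses"] += 1

    return cycle

print(adjust_cycle(load_kg=7.2, turbidity=0.8))
# {'wash_minutes': 40, 'rinses': 3, 'water_level': 'high'}
```

That kind of rule-based tweak is genuinely useful; it's just not what anyone pictures when they hear "AI".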
Interestingly, LG's AI Wash pre-dates the public release of ChatGPT by almost two years. Truly pioneers.
Honestly, +1 for SpeedQueen. That’s the brand that every laundromat uses, because they’re basically the Crown Vic of washers; They’re uglier than sin, but they’ll run for literal decades with very little maintenance. They do exactly one thing, (clean your clothes), and they do that one thing very well. They’re the “somehow my grandma’s appliances still work 70 years later, while mine all break after three years" of washing machines.
SpeedQueen doesn’t have any of the modern bells or whistles… But that also means there’s nothing to break prematurely and turn the washer into the world’s largest paperweight. Samsung washers, for instance, have infamously shitty LCD panels, which are notorious for dying right after the warranty expires. And when it dies, the entire washer is dead until you replace basically the entire control interface. SpeedQueen doesn’t have this issue, because they don’t even have LCD panels; everything is just physical knobs and buttons. If something ever does break, it’s just a mechanical switch that you can swap out in 15 minutes with a YouTube tutorial.
FYI, all current Speed Queen models except the Classic Series dryer (DC5, not the washer) are electronically controlled. Even the ones with knobs. They are not mechanical and no longer use the oldschool sequencing drums.
The TR7/DR7 are at least still sold with a 7 year manufacturer's warranty, though. This is specifically to assuage consumer fears about the electronic control panel.
Yes! A washer doesn't need AI or wifi. It needs power, water, detergent and dirty laundry. Had a guest the other day pull out their phone and go, "Oh, my dishwasher is out of surfactant." Why the fuck do you need to know that when you're 20 min away by car?
I will pay more if an appliance isn't internet connected.
Speed Queen for the win. I recently replaced a couple of trusty machines that had finally given up after decades of abuse. Went for speed queen, no regrets.
Speed Queen is great stuff. It will last just about forever. When it does break it is built so it can be repaired.
I was shopping for a new clothes washer, and was strongly considering an LG until I saw it had “AI wash”. I can see relevance for AI in some places, but washing clothes is NOT one of them.
I might be thinking the same. But I actually purchased an LG washer a couple months ago and finally got around to finding and reading the manual, and realized that I should have been doing "AI wash" instead of the "normal wash" that I always did.
The manual says that this is what "AI wash" actually is for:
"This cycle automatically adjusts wash and rinse patterns based on load size".
I mean, pretty obvious if they advertise the technology instead of the capabilities it could provide.
Still waiting for that first good use case for LLMs.
It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it's got mistakes) or answer a few questions can save a lot of time.
So I used to think that, but I gave it a try as I’m a software dev. I personally didn’t find it that useful, as in I wouldn’t pay for it.
Usually when I want to get started, I just look up a basic guide and just copy their entire example to get started. You could do that with chatGPT too but what if it gave you wrong answers?
I also asked it more specific questions about how to do X in tool Y. Something I couldn’t quickly google. Well it didn’t give me a correct answer. Mostly because that question was rather niche.
So my conclusion was that it may help people who don’t know how to google or are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it basically was an expensive hint machine.
In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was chatGPT specific. I’ve heard some companies might use it to make their docs more searchable which imo might be the first good use case (once it happens lol).
I've built a couple of useful products which leverage LLMs at one stage or another, but I don't shout about it cos I don't see LLMs as something particularly exciting or relevant to consumers, to me they're just another tool in my toolbox which I consider the efficacy of when trying to solve a particular problem. I think they are a new tool which is genuinely valuable when dealing with natural language problems. For example in my most recent product, which includes the capability to automatically create karaoke music videos, the problem for a long time preventing me from bringing that product to market was transcription quality / ability to consistently get correct and complete lyrics for any song. Now, by using state of the art transcription (which returns 90% accurate results) plus using an open weight LLM with a fine tuned prompt to correct the mistakes in that transcription, I've finally been able to create a product which produces high quality results pretty consistently. Before LLMs that would've been much harder!
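For the curious, the shape of that pipeline is roughly the sketch below; `transcribe_audio` and `call_llm` are hypothetical stand-ins for whatever transcription model and open-weight LLM you have access to, not the actual product code:

```python
# Simplified sketch of the karaoke lyric pipeline: transcribe the track, then have an
# LLM fix transcription mistakes using the known artist/title as context.
# transcribe_audio() and call_llm() are hypothetical stand-ins, not real product code.

def transcribe_audio(path: str) -> str:
    """Stand-in for a speech-to-text model (~90% accurate on sung vocals)."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Stand-in for an open-weight LLM served locally or via an API."""
    raise NotImplementedError

def corrected_lyrics(audio_path: str, artist: str, title: str) -> str:
    raw = transcribe_audio(audio_path)
    prompt = (
        f"You are correcting an automatic transcription of the song "
        f"'{title}' by {artist}. Fix misheard words and punctuation, "
        f"keep the line breaks, and do not invent lines that are not sung.\n\n"
        f"Transcription:\n{raw}"
    )
    return call_llm(prompt)
```

The LLM isn't doing anything magical there; it's just very good at the natural-language cleanup step that used to need a human pass.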
Writing bad code that will hold together long enough for you to make your next career hop.
Haven't you been watching the Olympics and seen Google's ad for Gemini?
Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic-up a draft for her!
On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.
I think the LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an "unnecessary luxury" sort of way. Of course, that would eliminate the "unpaid intern to add experience to a resume" jobs. I'm not sure if that's good or bad. I'm also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.
I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.
I feel like everyone who isn't heavily interacting with or developing these things doesn't realize how much better they are than human assistants. Shit, for one it doesn't cost me $20 an hour and have to take a shit or get sick, or talk back and not do its fucking job. I do fucking think we need to say a lot of shit though, so we'll know it ain't an LLM, because I don't know of an LLM that I can make output like this. I just wish most people were a little less stuck in their western opulence. Would really help us not get blindsided.
Is that really an LLM? Cause using ML to be a part of future AGI is not new and actually was very promising and the cutting edge before chatGPT.
So like using ML for vision recognition to know a video of a dog contains a dog. Or just speech to text. I don’t think that’s what people mean these days when they say LLM. Those are more for storing data and giving you data in forms of accurate guesses when prompted.
ML has a huge future, regardless of LLMs.
I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.
But 98% of GenAI hype is bullshit so far.
How would it do that? Would LLMs not just take input as voice or text and then guess an output as text?
Wouldn’t the text output that is supposed to be commands for action need to be correct and not a guess?
It’s the whole guessing part that makes LLMs not useful, so imo they should only be used to improve stuff we already need to guess.
LLM have greatly increased my coding speed: instead of writing everything myself I let AI write it and then only have to fix all the bugs
I’m glad. Depends on the dev. I love writing code but debugging is annoying so I would prefer to take longer writing if it means less bugs.
Please note I’m also pro code generators (like emmet).
I literally uninstalled and disabled every AI process and app in that latest Galaxy AI update, which was the whole update btw. My reasons are:
1- privacy and data sharing.
2- the battery, CPU and RAM the AI bloatware eats running in the background 24/7.
3- it was changing and doing things which I didn't want, especially in the gallery photo albums and camera AI modes.
I was considering a new Samsung phone - is that baked into it? (Assuming you're talking Samsung anyway, based on the galaxy name)
Samsung is a nightmare, don't purchase their products.
For example: I used to have a Samsung phone. If I plugged it into the USB port on my computer Windows Explorer would not be able to see it to transfer files. My phone would tell me I need to download Samsung's drivers to transfer files. I could only get them by downloading Samsung's software. Once I installed the software Windows Explorer was able to see the device and transfer files. Once I uninstalled the software Windows Explorer couldn't see the device again.
Anything Samsung can do in your region to insert themselves between you and what you are trying to do they will do.
To give you a second opinion from the other guy, I've had quite a few Samsungs in a row at this point. From Galaxy S2 to S23Ultra skipping years between every purchase.
They are effectively the premium vendor of Android, at least for western audiences. The midrange has some good ones, but other companies do well there too. At the high end, Samsung might lose out a bit to google on images of people, but the phones Samsung sell are well built, have a long support life, have lots of features that usually end up being imported to AOSP and/or Google's own version of Android. The last few generations are the Apple of Android. The AI features they've added can be run on device if you want, and idk what the other guy is talking about, but the AI features aren't that obnoxiously pushed on my device, the S23 Ultra. I have some things on, most things off. Then again, I've used HTC for a few years and iPhone for two weeks, so except for helping my dad with his Pixel 6a while that device lasted, I've not really tried other brands. The added customization on Samsung is kind of a problem for me, because I don't feel like changing brands after being able to customize so much out of the box.
And I've never had issues connecting to a simple Windows computer, given that the phone has always been able to use the normal Plug-and-play driver that is there already. If you have a macbook like I do, it's a bit cringe, but that's a macbook issue moreso.
Care to share how you disabled every bit of AI in the phone?
Yee. No root required (nor recommended for Samsung devices). In short: enable developer mode in the phone settings, then connect with adb to uninstall and disable any system app. You can also change themes, colors, phone behaviors, properties and look, and install or uninstall apps you couldn't touch before... and so many other things.
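If it helps anyone, the workflow is roughly the sketch below (Python just wrapping adb; the package name is a made-up example, so list your own packages first and research what's safe to remove):

```python
# Rough sketch of the adb debloat workflow. USB debugging must already be enabled
# in developer options, and adb must be installed on the computer.
# The package name below is only an example placeholder.
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its output."""
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

# See everything installed for the main user
print(adb("shell", "pm", "list", "packages", "--user", "0"))

# "Uninstall" a system app for the current user (it stays in the system image,
# so a factory reset brings it back)
print(adb("shell", "pm", "uninstall", "--user", "0", "com.example.some.ai.app"))

# Or merely disable it, which is easier to undo
print(adb("shell", "pm", "disable-user", "--user", "0", "com.example.some.ai.app"))
```

Disabling is the safer first step; start there before uninstalling anything.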
Did it help with battery life? My S24U has not been getting the greatest battery life lately and I wonder if this is why.
I don't know about the AI stuff specifically. Check your battery usage to see which process is doing that. But yes, debloating in general makes your phone battery last longer, and with the help of a few more tricks, also faster. There are thousands of no-root-required debloating tutorials online.
I've learned to hate companies that replaced their support staff with AI. I don't mind if it supplements easy stuff, that should take like 15 seconds, but when I have to jump through a bunch of hoops to get to the one lone bastard stuck running the support desk on their own, I start to wonder why I give them any money at all.
I love it when I have to trick those stupid ai chatbots to let me talk to a human customer service rep
It has been getting so bad that even boring regular phone trees will hang up on you if you insist on talking to a human. If it's ISP / cellular, nowadays I will typically just say I want to cancel my account, and then have cancellations route me to the correct department.
There really should be a right to adequate human support that's not hidden behind multiple barriers. As you said, it can be a timesaver for the simple stuff, but there's nothing worse than the dread when you know that your case is going to need some explanation and an actual human that is able to do more than just following a flowchart.
"AI" is certainly a turn-off for me, I would ask a salesman "do you have one that doesn't have that?" and I will now enumerate why:
Can you help me with problems this complex? Idk maybe we could use it to help make things better. Just most people prompt like things I can't say because they aren't nice. Oh by the way. Can you do it right now for $0 please? Thanks!
Edit. Also need it done now. If you're reading this you were too slow.
Every company that has been trying to push their shiny, new AI feature (which definitely isn't part of a rush to try and capitalize on the prevalence of AI), my instant response is: "Yeah, no, I'm finding a way to turn this shit off."
My response is even harsher..."Yeah, no, I'm finding a way to never use this company's services ever again." Easier said than done, but I don't even want to associate with places that shove this in my face.
<greentext>
Be me
Early adopter of LLMs ever since a random tryout of Replika blew my mind and I set out to figure out what the hell was generating its responses
Learn to fine-tune GPT-2 models and have a blast running 30+ subreddit parody bots on r/SubSimGPT2Interactive, including some that generate weird surreal imagery from post titles using VQGAN+CLIP
Have nagging concerns about the industry that produced these toys, start following Timnit Gebru
Begin to sense that something is going wrong when DALLE-2 comes out, clearly targeted at eliminating creative jobs in the bland corporate illustration market. Later, become more disturbed by Stable Diffusion making this, and many much worse things, possible, at massive scale
Try to do something about it by developing one of the first "AI Art" detection tools, intended for use by moderators of subreddits where such content is unwelcome. Get all of my accounts banned from Reddit immediately thereafter
Am dismayed by the viral release of ChatGPT, essentially the same thing as DALLE-2 but text
Grudgingly attempt to see what the fuss is about and install Github Copilot in VSCode. Waste hours of my time debugging code suggestions that turn out to be wrong in subtle, hard-to-spot ways. Switch to using Bing Copilot for "how-to" questions because at least it cites sources and lets me click through to the StackExchange post where the human provided the explanation I need. Admit the thing can be moderately useful and not just a fun dadaist shitposting machine. Have major FOMO about never capitalizing on my early adopter status in any money-making way
Get pissed off by Microsoft's plans to shove Copilot into every nook and cranny of Windows and Office; casually turn on the Olympics and get bombarded by ads for Gemini and whatever the fuck it is Meta is selling
Start looking for an alternative to Edge despite it being the best-performing web browser by many metrics, as well as despite my history with "AI" and OK-ish experience with Copilot. Horrified to find that Mozilla and Brave are doing the exact same thing
Install Vivaldi, then realize that the Internet it provides access to is dead and enshittified anyway
Daydream about never touching a computer again despite my livelihood depending on it
</greentext>
I like the article I read where WW2 German soldiers were being generated by AI as Asians, Black women, etc. Glad it doesn't take context into consideration. lol
I haven't seen any ai in firefox
In other news, AI bros convince CEOs and investors that polls saying people don't like AI are out of touch with reality and those people actually want more AI, as proven by an AI that only outputs what those same AI bros want.
Just waiting for that to pop up in the news some time soon.
That's literally the sales response to this. "People don't really know what they want until we sell it to them"
It's pretty fucking gross.
"If I asked people what they want, they would say, better AI"
MBA tech bro: "so ... that means what they really want is the same shitty AI, right?"
My brother in the fediverse, ceos and investors are the AI bros
I've found ChatGPT somewhat useful, but not amazingly so. The thing about ChatGPT is, I understand what the tool is, and our interactions are well defined. When I get a bullshit answer, I have the context to realize it's not working for me in this case and to go look elsewhere. When AI is built in to products in ways that you don't clearly understand what parts are AI and how your interactions are fed to it; that's absolutely and incurably horrible. You just have to reject the whole application; there is no other reasonable choice.
Also just listening and reading what people say. We don't want fucking AI anything. We understand what it might do. We don't want it.
Yeah these buttsniffers can't possibly conceive the truth, they made "AI" into something that people don't want, let alone ever admit it. Check this out:
"When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions" - some marketing stinklipper
"We found emotional trust plays a critical role in how consumers perceive AI-powered products".
Ok, first of all how is this person serious fire this person please cuz this gibberish sounds like a LLM wrote it like for real WTF even is "emotional trust" dude is that a real term so you mean we see your lies
(wheeze)
Sorry, brain overheated there. These fucks are so far up their own asses man... the mind just boggles
EDIT: clarity
I have just read the features of iOS 18.1 Apple intelligence so called.
TLDR: typing and sending messages for you mostly like one click reply to email. Or… shifting text tone 🙄
So that confirms my fears that in the future bots will communicate with each other instead of us. Which is madness. I want to talk to a real human and not a bot that translates what the human wanted to say with maybe 75% accuracy, devoid of any authenticity.
If I see someone’s unfiltered written word I can infer their emotions, feelings, what kind of state they are in, etc. Cold bot-to-bot speech would truly fuck up society in unpredictable ways, undermining the fundamentals of communication.
Especially if you notice that most communication, even familial already happens online nowadays. So kids will learn to just ‘hey siri tell my mom I am sorry and I will improve myself’.
Mom: ‘hey siri summarize message’
My hope for the future relies on a study indicating that after 5 or so generations of training data tainted with AI generated information, the LLM models collapsed.
Hopefully, after enough LLMs have been fed LLM data, we will arrive in an LLM-free future.
<this is unlikely to come true but let me hope >
Another possibility is LLMs will only be trained on historic data, meaning they will eventually start to sound very old-fashioned, making them easier to spot.
So kids will learn to just ‘hey siri tell my mom I am sorry and I will improve myself’.
What makes you think that kids aren't already doing things like this? Not with Siri, but it doesn't take much effort to get ChatGPT to write something for you.
Also I saw a South Park episode about this. https://en.wikipedia.org/wiki/Deep_Learning_(South_Park)
It isn’t built right into the phone's operating system, though, where you just tap "generate response" in iMessage. It is always about laziness. Privacy went away first because of the path of least effort, even though you always had tons of privacy alternatives; they just require 10 seconds of extra effort.
Future email writing: type the first three words then spam click the auto complete on your LLM-based keyboard. Only stop when the output starts to not make sense anymore.
You can do that today with the FUTO keyboard lol. It uses a small language model for predictive text.
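If anyone's wondering what a "small language model for predictive text" even means, stripped down to a toy it's basically this (a deliberately tiny bigram counter, not how FUTO actually implements it):

```python
# Toy bigram "language model" for predictive text -- a deliberately tiny
# illustration, nothing to do with FUTO's real implementation.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word most often follows each word."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict(model: dict, word: str) -> str:
    """Suggest the most likely next word, or an empty string if unknown."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else ""

model = train("thanks for the update thanks for the quick reply see the attached file")
seed, text = "thanks", ["thanks"]
for _ in range(3):                      # spam the autocomplete a few times
    seed = predict(model, seed)
    text.append(seed)
print(" ".join(text))                   # e.g. "thanks for the update"
```

Real keyboards use something better than bigram counts, but the idea is the same: suggest the statistically likely next word, no "intelligence" required.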
It's the same with images: soon all our photos won't be real captured moments, but an AI's interpretation of those moments, edited by the AI to make them "perfect".
Give me a bunch of open AI models and a big GPU to play with and I'll generate twenty gigabytes of weird anime fetish content.
This is the only true use of AI
You forgot to add "and post it to Lemmy".
In your own words, tell me why you're calling today.
My medication is in the wrong dosage.
You need to refill your medication is that right?
No, my medication is in the wrong dosage, it's supposed to be tens and it came as 20s.
You need to change the pharmacy where you're picking up your medication?
I need to speak to a human please.
I understand that you want to speak to an agent, is that right?
Yes.
Chorus, 5x. (Please give me your group number, or dial it in at the keypad. For this letter press that number for that letter press this number. No I'm driving, just connect me with an agent so I can verify over the phone)
I'm sorry, I can't verify your identity please collect all your paperwork and try calling again. Click
Why ever would we be mad?
I went through a McDonald’s drive-thru the other day and had the most insane experience. For the context of this anecdote, I don’t do that often, so, what I experienced was just weird.
While not quite “AI,” the first thing that happened was an automated voice yells at me, “are you ordering using your mobile app today?”
There’s like three menu-speaker boxes, and due to where the car in front of me stopped, I’m like in between the last two. The other speaker begins to yell, “Are you ordering using your mobile app today?”
The person running drive-thru mumbles something about pull around. I do. Pass by the other menu “Are you ordering using your mobile app today?”
Dude walks out with a headset and starts taking orders from each car using a tablet.
I have no idea what is happening. I can’t even see a menu when the guy gets around to me. Turns the tablet around at me.
I realized that I was indeed ordering using the mobile app today.
To be fair, this is not new, unless you're counting all answering machines as AI
Hardly. It used to be natural language dictation and decision tree. Now they're trying to use LLM training to automatically pick up more edge cases and it's pretty much b*******.
This is because the AI of today is a shit sandwich that we’re being told is peanut butter and jelly.
For those who like to party: All the current “AI” technologies use statistics to approximate semantics. They can’t just be semantic, because we don’t know how meaning works or what gives rise to it. So the public is put off because they have an intuitive sense of the ruse.
As long as the mechanics of meaning remain a mystery, “AI” will be parlor tricks.
And I don’t mean to denigrate data science. It is important and powerful. And real machine intelligence may one day emerge from it (or data science may one day point the way). But data science just isn’t AI.
Maybe I'd be more interested in AI if there was any I with the A. At the moment, there's no more intelligence to these things than there is in a parrot with brain damage, or a human child. Language Models can mimic speech but are unable to formulate any original thoughts. Until they can, they aren't AI and I won't be the slightest bit interested beyond trying to break them into being slightly dirty (and therefore slightly funny).
Just so you know, I totally agree with you, but if you go far back enough in my comment history I had a really interesting (imo) discussion/argument with someone about this very topic, and about how to determine if an AI 'thinks' or 'reasons' more broadly.
It can be helpful to approach this from the other direction. The part of the brain that works like an LLM.
This is because AI is usually used to reduce the human cost to the company, and rarely to reduce the human labour for the customer.
That, or mass surveillance.
Very nicely put!
Six one way, half a dozen the other.
Let's see if this finally kills the AI hype. Big tech is pushing for AI because it is the ultimate spyware, nothing more.
I wonder if we'll start seeing these tech investor pump n' dump patterns faster collectively, given how many have happened in such a short amount of time already.
Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.
It feels like the futurism sheen has started to waver. When everything's a major revolution inserted into every product, then isn't, it gets exhausting.
Internet of Things
This is very much not a hype and is very widely used. It's not just smart bulbs and toasters. It's burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station, there are systems that can be connected to a jurisdiction's network as a secondary path to traditional radio tones) and anything else not a computer or cell phone connected to the Internet. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.
I think that the dot com bubble is the closest, honestly. There can be some kind of useful products (mostly dealing with how we interact with a system, not actually trying to use AI to magically solve a problem; it is shit at that), but the hype is way too large
don’t forget Big Data
It's more of a macroeconomic issue. There's too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we're going to keep getting more and more of these bubbles, regardless of what they are for
Yeah it's just investment profit chasing from larger and larger bank accounts.
I'm waiting for one of these bubble pops to do lasting damage, but with the amount of protections specifically for them, and money that can't be allowed to be "lost", it's just everyone else that has to eat dirt.
TimeSquirrel made a good point about Internet of Things, but Crypto and Self Driving Cars are still booming too.
IMHO it's a marketing problem. They're major evolutions taking root over decades. I think AI will gradually become as useful as lasers.
I find the tech interesting, but the rush to commercialize it was a bad idea. It’s not ready yet, total uncanny valley.
Literally the only exciting use for it I've seen so far is that Skyrim companion. And even that doesn't work right yet.
I have rolled back, uninstalled, opted-out, or ripped apart every AI that every company is trying to shove down our throats. I wish I could do the same for search engines, but who uses the internet broadly anymore anyway.
I am impressed by the tech, I think it's amazing, but it's still utterly useless.
I have never, ever needed to interrupt my day's schedule to generate a convincing picture of Luke Skywalker fighting Batman while riding dinosaurs, I have never needed to have a text conversation with someone who seems "almost human," I mean, christ that already describes half the people I know and wish were more normal. I have never needed an article summarized badly, I enjoy reading things, I enjoy writing emails, so I can't figure out why they would make tools to take away the small pleasures we have. What exactly are they thinking?
Yesterday I gave it one more chance, asked one of the apps, I forget which, what tomorrow's weather will be like, the thing forecasted a hurricane coming right for me, a news event from last year. I'm so over AI, please someone notify me when it's really useful and can take over the menial, tedious tasks like managing my online accounts and offering financial advice or can actually help me find a job opening in my field.
All these things have been promised, and seem more out of reach than ever.
The MOST impressive thing I've seen AI do is make really, really convincing furry porn babes. The things are good at mixing features in images. Sometimes.
but it’s still utterly useless.
this is purely false. There are so many applications that bring value and if you can't admit that then you are biased in some way/shape/form.
As a sw dev, I use AI to speed up menial tasks or help me find different perspectives on certain things; shit, it's even helpful for debugging tricky things. You don't need to be a coder to find value in AI though, things like auto-generated transcripts have been so fucking amazing, especially for podcasting in my case.
I could go on and on. To say it is UTTERLY USELESS is disingenuous at best.
The MOST impressive thing I’ve seen AI do is make really, really convincing furry porn babes. The things are good at mixing features in images. Sometimes.
You are quite literally telling on yourself here, you seem to have a limited view of AI application and are judging the entire technology/concept based on that narrow set of use-cases (which appear to be, from your comment, chat bots, porn generators, future weather predictors, not exactly the pinnacle of AI application).
I’m so over AI, please someone notify me when it’s really useful and can take over the menial, tedious tasks
Here you go again! You seem to be equating value to the ability for the tech to function without supervision or assistance. Does AI only provide value to you if it can do those things completely autonomously? What if working with the AI is faster than not using it at all? Is it still useless to you?
They keep using it for really stupid things. I agree all the image generators are bloody pointless, the quality isn't good enough and you don't have the control you need to make them useful.
AI has some pretty good uses.
But in the majority of junk on the market it is nothing but marketing bloatware.
It does and AI is being tarnished by the hype/marketing.
Not long ago Firefox announced it would deliver client-side "AI" to describe web pages to differently-abled users. This is awesome.
Some people on Lemmy conflated AI and Large Language Models and complained about the addition. I don't blame them, not everyone is an IT pro and is equipped to understand the difference between Machine Learning Models, LLMs and such. I mentioned Firefox has "AI" for client-side translation and that's a great thing. They wondered since when "AI" was used for translation. Machine learning/deep learning translation has been a thing for over a decade and it amazing. It's not LLM (even if LLMs are really good at translation).
The market has pushed "AI" too hard, making people cautious about it. They are turning it into the new "blockchain", where most people didn't find any benefit from the hype; on the contrary, they saw the vast majority of it being scams.
even if LLMs are really good at translation
As someone that actually played japanese RPG games translated with AI on dlsite, bullshit.
I can't really agree as a video producer. Luma, Krea, Runway, Ideogram, Udio, 11Labs, Perplexity, Claude, Firefly -> All worth more than they're charging, most with daily free options. They save me a ton of time. Honestly, the one I'm considering dropping at the moment is ChatGPT.
The irony is companies are being forced to implement it. Like, our board has told us we must have "AI in our product." It's literally a solution looking for a problem that doesn't exist.
It's because automated trading bots bid up companies whose names appear in headlines alongside the word AI.
The stock market is an economic shitpost.
My boss's boss's boss asked for a summary of our roadmap. He read it, and provided his takeaways... 3 of the 4 bullet points were AI-related, and we never once mentioned anything about AI in what we gave him 😑 so I guess we're pivoting?
This just screams "The CEO read about it on linkedin while taking a dump and now feels it is vital to the company."
This is basically forcing AI based spying from the government
Okay but have you considered shoving AI down the throats of consumers and forcing them to use it? I say invest in more gigantic server farms!
I have no qualms about AI being used in products. But when you have to tell me that something is "powered by AI" as if that's your main selling point, then you do not have a good product. Tell me what it does, not how it does it.
Developer: Am I out of touch?
No, it's the consumers who are wrong.
Stakeholder: Am I pushing the wrong ideas onto the managers?
No, it's the developers who don't know how to implement the features I want.
Adobe Acrobat has added AI to their program and I hate it so much. Every other time I try to load a PDF it crashes. Wish I could convince my boss to use a different PDF reader.
Adobe sucks but they have sucked their whole existence. No AI needed.
If I could have the equivalent of a smart speaker that ran the AI model locally and could interface with other files on the system, I would be interested in buying that.
But I don't need AI in everything in the same way that I don't need Bluetooth in everything. Sometimes a kettle is just a kettle. It is bad enough we're putting screens on fridges.
I like the vast majority of my technology dumb, the last barely smart kettle I bought - it had a little screen that showed you temperature and allowed you to keep the water at a particular temperature for 3h - broke within a month. Now I once again have a dumb kettle, it only has the on/off button and has been working perfectly since I got it
I could go for the fridge screen if it was focused more around showing me what was in the fridge without opening the door and making grocery lists.
Here ya go. This is pretty much exactly whatcha describe.
And workers...
She looks so done with it. It is amazing how tone deaf and incapable of detecting emotions the higher-ups must have been to OK that image. Not blaming anyone lower down who approved this; they are probably all fed up too and were happy to use this.
Plus, it's way too cold at her vast and empty warehouse hot desk, because she's wearing at least two sweaters. Please let this lady have a cubicle of her own with a little space heater.
They'd usually use a paid actor for this, so it makes me wonder, did they just force a regular employee to pose for this
Is that a real copilot ad?
This is the link I had I believe, but it's not loading for me now. Either it will work for you, or they pulled it. https://www.instagram.com/microsoft365/p/C7j8ipnxIiI/?img_index=1 (comments were brutal IIRC)
Related article about it: https://futurism.com/microsoft-brags-ai-attend-three-meetings
Yep. Give me time and I'll dig up the link.
Unsurprisingly. I have use for LLMs and find them helpful, but even I don't see why we should have the Copilot button on new keyboards and mice, as well as on LinkedIn's post input form.
There are certainly great uses for LLMs. 99% of the time it is useless though.
<---Not this cat. I become highly aroused when i hear salespeople gargling out their marketing bullshit
Yeah, baby, lie for me. Mmmm call a LLM "AI" again.
fuck that's hot
Hey now, LLMs are AI!
... So is the code that makes those ghosts in Super Mario approach you when you look away and cower when you look at them.
At least Shy Guys are cute.
Average CEO
AlphaProof isn't an LLM, but it was just a point from gold against some of the smartest people on earth. You think you're smarter than the people building this stuff? That might be the dumbest shit about this. I swear the United States really has become Idiocracy. From all angles. Capitalism sucks but AI isn't the problem. A bunch of greedy apes is the fucking problem, like it always has been. Lol
So you know if you have clean water and food though, you could be considered a very greedy ape. Why are you not fighting harder for clean water etc? What do you do to make the world better? (Shit probably same as me. Jack shit)
Hmmm i have to reread my previous comment cuz people are getting the wrong idea (maybe)
Im talking about marketing doublespeak, and the fact a press release to the public at large will never admit "AI" has become bad in the public perception because of marketing. It is because of these marketing mba dipshits and clueless fad followers putting "AI" on stuff that is
Not AI
Or
Not useful to the consumer, and indeed has many anti-consumer facets, being used primarily as an excuse to fire workers, push software as a service, or mine consumer info.
The point i tried and failed to make was these MBA fucks (categorically not the engineers building ai or the llms we also call ai) are so insulated inside their corpo boardroom-speak they can't see or admit it's their fault, or ever hear how goddamn stupid they sound.
Hi, I'm annoying and want to be helpful. Am I helpful? If I repeat the same options again when you've told me I'm not helpful, will that be helpful? I won't remember this conversation once it's ended.
Hi, which option have you told me you already don't want would you like?
Sorry, I didn't quite catch that, please rage again.
Meanwhile, I just had Claude turn a few obscure academic papers into a slide deck on the subject, along with presentation notes and interactive graphs, using like 5 prompts and 15 min.
For me, if a company fails to make a clear cut case about why a product of theirs needs AI, I'm gonna assume they just want to misuse AI to cheaply deliver a mediocre product instead of putting in the necessary cost of manhours.
I like my AI compartmentalized, I got a bookmark for chatGPT for when i want to ask a question, and then close it. I don't need a different flavor of the same thing everywhere.
I don't know anyone who is actively looking for products that have "AI".
It's like companies drank their own Kool aid and think because they want AI, so do the consumers. I have no need for AI. My parents don't even understand what it is. I can't imagine Gen Z gives a hoot.
It's really simple: There are a number of use cases where generative AI is a legitimate boon. But there are countless more use cases where AI is unnecessary and provides nothing but bloat, maybe novelty at best.
Generative AI is neither the harbinger or doom, nor the savior of humanity. It's a tool. Just a tool. We're just caught in this weird moment where people are acting like it's an all-encompassing multipurpose tool right now instead of understanding it as the limited use specific tool it actually is.
It's a tool. Just a tool.
And, more often than not, it's a poorly implemented tool that didn't need to be added to the product in the first place.
Yes, that was literally my point. A plumbing wrench is a perfectly useful and wonderful tool, but it isn't going to be much help in the middle of brain surgery. Tools have use cases; they can't be applied to any situation
AI is not even truly AI right now. There's no intelligence; it's a statistical model made by training on billions of pieces of stolen data to spit out the most similar thing to fit the prompt. It can get really creepy because it's very convincing, but on closer inspection it has jarring mistakes that trigger uncanny valley shit. "Hallucinations" is giving it too much credit; maybe when we get AGI in a decade that'll be fitting.
You're not wrong, but the implementation doesn't really matter I think. If AI could spit out sentences convincingly enough, I'd be okay with that. But, yeah, it's not there yet.
I'll be bach.
Absolutely, I was pretty upset when Google added Gemini to their Messages app, then excited when the button (that you can't remove) was removed! Now I've updated Messages again and they brought the button back. Why would you ever need an LLM in a texting app?
Edit: and also Snapchat, Instagram, and any other social media app they're shoveling an AI chat bot into for no reason
Edit 2: AND GOOGLE TELLING ME "Try out Gemini!" EVERY TIME I USE GOOGLE ASSISTANT ON MY PHONE!!!!!
I'd rather talk to my cat than an AI chat bot
My cat's replies make more consistent sense and I don't need to worry about him plagiarizing something incorrectly.
At least when you say certain keywords to your pets they show some emotions!
Try saying "potty?" to an LLM and decoding its response to gauge if it needs to go potty or not, Google!
It's farcical.
When a company introduces something consumers want, we will research and find a way to get it and use it ASAP. Nobody needs to interrupt our workflow to tell us about it. I don't remember getting any in-app notifications for the Gmail select all "feature," but I figured it out pretty damn quickly.
I get AI has its uses but I don’t need my mouse to have any thing AI related (looking at you Logitech).
AI in consumer devices at this point stands for data harvesting, wonky functionality and questionable usefulness. No wonder nobody wants that crap.
They just don't get it. Once everyone is using an AI toilet and an AI toothbrush they will sing a different tune.
I love skibidAI toilet
I definitely need a toilet that remembers and analyzes my shit. Yes.
They will try to sell it to you as a way to detect any possible health issues early. But it will just be used to analyze your food patterns to shove McDonald's ads at you.
Not sure what happened to it, but this was a thing already in 2005.
For some reason I imagine a toilet that automates a stool test and blood test and gives you a health report every month.
If the toilet is receiving a blood sample I have bad news for your monthly health report.
A stool test sure, but I'm not going to trust a toilet to use a sterile needle to draw blood.
I've been applying similar thinking to my job search. When I see AI listed in a job description, I immediately put the company into one of 3 categories:
A company in the first two categories would need to pay a lot to entice me and I would not value their equity offering. The third category is understandable, especially if the success of AI would threaten their business.
It's because consumers aren't the dumbasses these companies think they are and we all know that the AI being shoved into everything fucking sucks worse than the systems we had before "AI."
Honestly AI is the 3D glasses of consumer products and computing. There are a couple of places and applications where it absolutely improves things, everywhere else it's just an overhyped extra that they tack on in hopes that it will drive up interest.
AI is garbage.
AI is just an excuse to lay off your employees for an objectively less reliable computer program, which somehow statistically beats us in logic.
I've used LLMs a lot over the past couple of years. Pro tip: use them a lot and learn the models. Then they look much more intelligent as you, the user, become better. Obviously if you prompt "Write me a shell script to calculate the meaning of life, make my coffee, and scratch my nuts before 9AM" it will be a grave disappointment.
If you first design a ball fondling/scratching robot, use multiple instances of LLMs to help you plan it out, etc. then you may be impressed.
I think one of the biggest problems is that most people interacting with llms forget they are running on computers and that they are digital and not like us. You can't make assumptions like you can with humans. Usually even when you do that with us you just get stuff you didn't want because you weren't clear enough. We are horrible at instructions and this is something I hope AI will help us learn how to do better. Because ultimately bad instructions or incomplete information doesn't lead to being able to determine anything real. Computers are logic machines. If you tell a computer to go ride a bike at best it'll go out and do all the work to embody itself in a robot and buy a bike and ride it. Wait, you don't even know it did it though because you never specified for it to record the ride....
A very few of us are pretty good at giving computers clear instructions some of the time. Also though, I have found just forcing models to reason in context is powerful. You have to know to tell it to "use a drill down tree style approach to problem solving. Use reflection and discussion to explore and find the optimal solution to reasoning through the problem." Might still give you bad results. That is why you have to experiment. It is a lot of fun if you really just let your thoughts run wild. It takes a lot of creative thinking right now to really get the most out of these models. They should all be 110% open source and free for all. BTW Gemini 1.5 and Claude and Llama 3.1 are all great, and Llama you can run locally or on a rented GPU VM. OpenAI I'm on the fence about, but given who all is involved over there I wouldn't say I would trust them. Especially since they want to do regulatory capture.
Yet companies are manipulating survey results to justify the FOMO jump onto the AI bandwagon. I don't know where companies get the info that people want AI (looking at you, Proton).
I barely trust organics. Some CEO being rock hard about his newest repertoire of buzzword doesn’t help.
Think of the savings if you replace the CEO with an AI!
I'm actively turned off because they suck up my data to use it.
I love the idea of local only AI and would use those products, and do play with local LLM/Image products.