As I mentioned in another post on the same topic:
Slapping the words “artificial intelligence” onto your product makes you look like a shady used-car salesman: at best it’s misleading, and at worst it’s actually true but poorly done.
LLM-based AI was a fun toy when it first broke. Everyone was curious and wanted to play with it, which made it seem super popular. Now that the novelty has worn off, most people are bored and unimpressed with it. The problem is that the tech bros invested so much money in it that they're unwilling to take the loss. They're trying to force it so that they can say they didn't waste their money.
They've overhyped the hell out of it and slapped those letters on everything, including a lot of half-baked ideas. Of course people are tired of it and beginning to associate AI with bad marketing.
This whole situation really does feel dotcommish. I suspect we will soon see an ai crash, then a decade or so later it will be ubiquitous but far less hyped.
I can attest this is true for me. I was shopping for a new clothes washer and was strongly considering an LG until I saw it had “AI wash”. I can see relevance for AI in some places, but washing clothes is NOT one of them. It gave me the feeling LG's clothes washer division is full of shit.
Bought a SpeedQueen instead and have been super happy with it. No AI bullshit anywhere in their product info.
I've learned to hate companies that replaced their support staff with AI. I don't mind if it handles the easy stuff that should take like 15 seconds, but when I have to jump through a bunch of hoops to get to the one lone bastard stuck running the support desk on their own, I start to wonder why I give them any money at all.
"AI" is certainly a turn-off for me, I would ask a salesman "do you have one that doesn't have that?" and I will now enumerate why:
LLMs are wrongness machines. They have an almost miraculous ability to string words together into coherent sentences, but when the output has no basis in truth, it's nothing but an extremely elaborate and expensive party trick. I don't want actual services like web search replaced with elaborate party tricks.
In a lot of cases it's being used as a buzzword to mean basically anything computer-controlled or networked. Last I looked, they were using the word "smart" for that. A clothes dryer that senses the humidity of the exhaust air to know when the clothes are dry isn't any more "AI" than my 90's microwave that senses the puff of steam from a bag of popcorn. This is the kind of outright dishonest marketing I'd like to see fail so spectacularly that people in the advertising business go missing over it.
I already avoided "smart" appliances and will avoid "AI" appliances for the same reasons: the "smart" functionality doesn't actually run locally. It has to connect to a server out on the internet to work, which means that while that server is still up and offering support to my device, I have a hole in my firewall. And then they'll stop support ten minutes after the warranty expires, and the device will no longer work. For many of these devices, there's no reason the "smart" functionality couldn't run locally on some embedded ARM chip, or talk to some application running on a PC that I own inside my firewall, other than "then we don't get your data."
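To make the point concrete: the humidity-sensing dryer logic mentioned above needs nothing a cheap embedded chip can't do, and certainly no cloud connection. Here's a minimal sketch; every name, threshold, and sensor value in it is made up for illustration, not taken from any real appliance.

```python
# Hypothetical sketch of "smart" dryer logic running entirely locally.
# The threshold and sample values below are invented for illustration.

DRY_THRESHOLD = 12.0  # assumed exhaust-air relative humidity (%) meaning "dry"

def should_stop(humidity_readings, threshold=DRY_THRESHOLD, window=3):
    """Stop the cycle once the last `window` readings are all below threshold."""
    if len(humidity_readings) < window:
        return False
    return all(h < threshold for h in humidity_readings[-window:])

# Simulated exhaust-humidity samples over a drying cycle
samples = [55.0, 40.2, 28.7, 15.1, 11.9, 10.4, 9.8]
readings = []
for h in samples:
    readings.append(h)
    if should_stop(readings):
        print("clothes dry, stopping cycle")
        break
```

A dozen lines of threshold logic, no server, no firewall hole. The only thing the cloud round-trip adds is the data collection.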
AI is apparently consuming more electricity than air conditioning. In fact, I'm not convinced that power consumption isn't the selling point they're pushing at board meetings. "It'll keep our friends in the pollution industry in business."
Every company has been pushing their shiny new AI feature (which definitely isn't part of a rush to capitalize on the prevalence of AI), and my instant response is: "Yeah, no, I'm finding a way to turn this shit off."
In other news, AI bros convince CEOs and investors that polls saying people don't like AI are out of touch with reality and those people actually want more AI, as proven by an AI that only outputs what those same AI bros want.
Just waiting for that to pop up in the news some time soon.
Early adopter of LLMs ever since a random tryout of Replika blew my mind and I set out to figure out what the hell was generating its responses
Learn to fine-tune GPT-2 models and have a blast running 30+ subreddit parody bots on r/SubSimGPT2Interactive, including some that generate weird surreal imagery from post titles using VQGAN+CLIP
Have nagging concerns about the industry that produced these toys, start following Timnit Gebru
Begin to sense that something is going wrong when DALLE-2 comes out, clearly targeted at eliminating creative jobs in the bland corporate illustration market. Later, become more disturbed by Stable Diffusion making this, and many much worse things, possible, at massive scale
Try to do something about it by developing one of the first "AI Art" detection tools, intended for use by moderators of subreddits where such content is unwelcome. Get all of my accounts banned from Reddit immediately thereafter
Am dismayed by the viral release of ChatGPT, essentially the same thing as DALLE-2 but for text
Grudgingly attempt to see what the fuss is about and install Github Copilot in VSCode. Waste hours of my time debugging code suggestions that turn out to be wrong in subtle, hard-to-spot ways. Switch to using Bing Copilot for "how-to" questions because at least it cites sources and lets me click through to the StackExchange post where the human provided the explanation I need. Admit the thing can be moderately useful and not just a fun dadaist shitposting machine. Have major FOMO about never capitalizing on my early adopter status in any money-making way
Get pissed off by Microsoft's plans to shove Copilot into every nook and cranny of Windows and Office; casually turn on the Olympics and get bombarded by ads for Gemini and whatever the fuck it is Meta is selling
Start looking for an alternative to Edge despite it being the best-performing web browser by many metrics, as well as despite my history with "AI" and OK-ish experience with Copilot. Horrified to find that Mozilla and Brave are doing the exact same thing
Install Vivaldi, then realize that the Internet it provides access to is dead and enshittified anyway
Daydream about never touching a computer again despite my livelihood depending on it
I've found ChatGPT somewhat useful, but not amazingly so. The thing about ChatGPT is, I understand what the tool is, and our interactions are well defined. When I get a bullshit answer, I have the context to realize it's not working for me in this case and to go look elsewhere. When AI is built into products in ways where you can't clearly tell which parts are AI and how your interactions are fed to it, that's absolutely and incurably horrible. You just have to reject the whole application; there is no other reasonable choice.
I just read through the feature list for iOS 18.1's so-called Apple Intelligence.
TLDR: mostly typing and sending messages for you, like one-click replies to email. Or… shifting text tone 🙄
So that confirms my fears that in the future bots will communicate with each other instead of us. Which is madness. I want to talk to a real human, not a bot that translates what the human wanted to say at approximately 75% accuracy, devoid of any authenticity.
If I see someone's unfiltered written words, I can infer their emotions, their feelings, what kind of state they're in, etc. Cold bot-to-bot speech would truly fuck up society in unpredictable ways, undermining the fundamentals of communication.
Especially when you notice that most communication, even familial, already happens online nowadays. So kids will learn to just say ‘hey Siri, tell my mom I am sorry and I will improve myself’.
Mom: ‘hey siri summarize message’
In your own words, tell me why you're calling today.
My medication is in the wrong dosage.
You need to refill your medication, is that right?
No, my medication is in the wrong dosage. It's supposed to be 10s and it came as 20s.
You need to change the pharmacy where you're picking up your medication?
I need to speak to a human please.
I understand that you want to speak to an agent, is that right?
Yes.
Chorus, 5x. (Please give me your group number, or dial it in at the keypad. For this letter press that number for that letter press this number. No I'm driving, just connect me with an agent so I can verify over the phone)
I'm sorry, I can't verify your identity. Please collect all your paperwork and try calling again. *click*
This is because the AI of today is a shit sandwich that we’re being told is peanut butter and jelly.
For those who like to party: All the current “AI” technologies use statistics to approximate semantics. They can’t just be semantic, because we don’t know how meaning works or what gives rise to it. So the public is put off because they have an intuitive sense of the ruse.
As long as the mechanics of meaning remain a mystery, “AI” will be parlor tricks.
Maybe I'd be more interested in AI if there were any I with the A. At the moment, there's no more intelligence in these things than there is in a parrot with brain damage, or a human child. Language models can mimic speech but are unable to formulate any original thoughts. Until they can, they aren't AI, and I won't be the slightest bit interested beyond trying to break them into being slightly dirty (and therefore slightly funny).
I wonder if we'll collectively start seeing through these tech-investor pump-n'-dump patterns faster, given how many have happened in such a short amount of time already.
Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.
It feels like the futurism sheen has started to wear off. When everything's a major revolution inserted into every product, and then suddenly isn't, it gets exhausting.
The irony is that companies are being forced to implement it. Our board has told us we must have "AI in our product." It's literally a solution looking for a problem.
Adobe Acrobat has added AI to their program and I hate it so much. Every other time I try to load a PDF it crashes. Wish I could convince my boss to use a different PDF reader.
I have no qualms about AI being used in products. But when you have to tell me that something is "powered by AI" as if that's your main selling point, then you do not have a good product. Tell me what it does, not how it does it.
If I could have the equivalent of a smart speaker that ran the AI model locally and could interface with other files on the system, I would be interested in buying that.
But I don't need AI in everything, in the same way that I don't need Bluetooth in everything. Sometimes a kettle is just a kettle. It's bad enough we're putting screens on fridges.
Unsurprising. I have uses for LLMs and find them helpful, but even I don't see why we should have a Copilot button on new keyboards and mice, or on LinkedIn's post input form.
Hi, I'm annoying and want to be helpful. Am I helpful? If I repeat the same options again when you've told me I'm not helpful, will that be helpful? I won't remember this conversation once it's ended.
Hi, which of the options you've already told me you don't want would you like?
Sorry, I didn't quite catch that, please rage again.
For me, if a company fails to make a clear-cut case for why a product of theirs needs AI, I'm gonna assume they just want to misuse AI to cheaply deliver a mediocre product instead of putting in the necessary man-hours.
I like my AI compartmentalized. I've got a bookmark for ChatGPT for when I want to ask a question, and then I close it. I don't need a different flavor of the same thing everywhere.
I don't know anyone who is actively looking for products that have "AI".
It's like companies drank their own Kool-Aid and think that because they want AI, so do the consumers. I have no need for AI. My parents don't even understand what it is. I can't imagine Gen Z gives a hoot.
AI is not even truly AI right now; there's no intelligence. It's a statistical model trained on billions of pieces of stolen data to spit out the thing most similar to the prompt. It can get really creepy because it's very convincing, but on closer inspection it has jarring mistakes that trigger that uncanny-valley shit. "Hallucinations" is giving it too much credit; maybe when we get AGI in a decade that term will be fitting.
It's really simple: There are a number of use cases where generative AI is a legitimate boon. But there are countless more use cases where AI is unnecessary and provides nothing but bloat, maybe novelty at best.
Generative AI is neither the harbinger of doom nor the savior of humanity. It's a tool. Just a tool. We're caught in this weird moment where people are acting like it's an all-encompassing multipurpose tool instead of understanding it as the limited, specific-use tool it actually is.
Absolutely, I was pretty upset when Google added Gemini to their Messages app, then excited when the button (that you can't remove) was removed! Now I've updated Messages again and they brought the button back. Why would you ever need an LLM in a texting app?
Edit: and also Snapchat, Instagram, and any other social media app they're shoveling an AI chat bot into for no reason
Edit 2: AND GOOGLE TELLING ME "Try out Gemini!" EVERY TIME I USE GOOGLE ASSISTANT ON MY PHONE!!!!!
I've been applying similar thinking to my job search. When I see AI listed in a job description, I immediately put the company into one of 3 categories:
It is an AI company that may go out of business suddenly within the next few years leaving me unemployed and possibly without any severance.
Management has drunk the Kool-Aid and is hoping AI will drive their profit growth, which makes me question management's competence. This also has a high likelihood of future job loss, but at least they might pay severance.
The buzzword was tossed in to make the company look good to investors, but it is not highly relevant to their business. These companies get a partial pass for me.
A company in the first two categories would need to pay a lot to entice me and I would not value their equity offering. The third category is understandable, especially if the success of AI would threaten their business.
It's because consumers aren't the dumbasses these companies think they are and we all know that the AI being shoved into everything fucking sucks worse than the systems we had before "AI."
Yet companies are manipulating survey results to justify the FOMO jump onto the AI bandwagon. I don't know where companies get the idea that people want AI (looking at you, Proton).
Yeah, and that is largely fueled by two things: poor/forced use of AI, and anti-AI media sentiment (which is in turn fueled by the reactionary/emotional narratives that keep hitting headlines, commonly full of ignorance).
AI can still provide actual value right now and can still improve. No, it's not the end-all, but it doesn't have to solve humanity's problems to be worth using.
This unfortunate situation is largely a result of the rush to market, because that's the world we live in these days. Nobody gives a fuck about completing a product; they only care about completing it first. Fuck quality, that can come later. As a senior software engineer myself, I see it all too often in the companies I've worked for. AI was heralded as Christ's second coming that would magically do all of this stuff while still in relative infancy, ensuring that an immature product was rushed out the door and applied to everything possible. That's how we got here, and my first statement is where we are now.
Listen up you kids, this old fart saw this same crap in the 70s when LCDs became common and LCD clocks became the norm. They felt that EVERYTHING needed to have an LCD clock stuck in it, lamps, radios, blocks of cheese, etc. A similar thing happened in the internet boom/bust in the late 90s where everyone needed a website, even gas stations.
Now AI is the media and business darling so they are trying to stick AI in everything, partly to justify pissing away so much money on it. I can't even do a simple search on FB because it wants to force me to use the damn meta AI instead.
I occasionally use ChatGPT to find info on error-code handling and coding snippets, but I feel like I'm in some sort of "can you phrase it exactly right?" contest. Anything with even the slightest vagueness to it returns useless garbage.
For the first time in years I thought about buying a new phone. The S23 Ultra; the previous versions had been improving significantly, but the price was a factor. Then I got a promotion and figured I would splurge on the S24 Ultra, but it was all about AI, so I just stayed where I am... my current phone does everything anyway.
I keep thinking about how Google has implemented it. It sums up my broader feelings pretty well. They jammed this half-baked "AI" product into the very fucking top of their search results. I can't not see it there; it's huge and takes up most of my phone's screen after the search, but I always have to scroll down past it because it is wrong pretty often, or misses important details. Even when it sounds right, because I've had it be wrong before, I have to check the other links anyway. All it has succeeded at doing in practice is making me scroll down further before I get to my results (not unlike their ads, I might add). If that's "AI", it's no fucking wonder people avoid it.
AI is a neat toy... but that's all it is. It's horrible at almost every real-world application it's been forced into, and that's before you wander into the whole shifting minefield of ethical concerns or consider how wildly untrustworthy they are.
We saw a bunch of promises made when LLMs were the novel hot shit. Now that we've plateaued on how useful they are to the average consumer, every AI product is just a beta test that will drop support as soon as something newer and shinier comes along.
I was at the optometrist recently and saw a poster for some lenses (Transitions) that somehow had "AI"... I was like, WTF, how? Why? Do you need to carry a small supercomputer around with you as well?
I hate the feeling that they are continuing to dump real humans who can communicate and respond to issues outside of the rigid framework when it comes to support. AI is also only as good as its data and design. It feels like someone built a self driving car, stuck it on a freshly paved and painted highway and decided it was good to go. Then you take it on an old rural road and end up hitting a tree.
To me AI helps me bang out small functions and classes for personal projects and act as a Google alternative for mundane stuff.
Other than that any product that uses it is no different than a digital assistant asking chat gpt to do things. Or at least that seems like the perception from a consumer level.
Besides, it's bad enough I probably use a home's worth of energy making failing programming demos, much less ordering pizza from my watch or whatever.
When I have no idea what I'm talking about and have no terminology, or the wrong terminology, I have found Copilot and GPT-4 (separately, not the all-in-one) to be game-changing compared to plain Google.
I'm not using the data straight off the query result, but the links to the sources provided in the result.
And embarrassingly, even when I'm drunk and babbling into a microphone, Copilot finds the links to what I'm looking for.
Now, if you're just using the results straight and not researching the answers, your mileage will vary.
I think there is potential for using AI as a knowledge base. If it saves me hours of having to scour the internet for answers on how to do certain things, I could see a lot of value in that.
The problem is that generative AI can't determine fact from fiction, even though it has enough information to do so. For instance, I'll ask ChatGPT how to do something, and it will very confidently spit out a wrong answer 9/10 times. If I tell it that the approach didn't work, it will respond with "Sorry about that. You can't do [x] with [y] because [z] reasons."
The reasons are often correct but ChatGPT isn't "intelligent" enough to ascertain that an approach will fail based on data that it already has before suggesting it.
It will then proceed to suggest a variation of the same failed approach several more times. Every once in a while it will eventually pivot towards a workable suggestion.
So basically, this generation of AI is just Cliff Clavin from Cheers: able to string together coherent sentences of mostly bullshit.
It seems more like a niche thing that's useful for generating rough drafts or lists of ideas, but the results are hardly usable on their own and still require additional work to finesse them. In a lot of ways, it reminds me of my days working on a production line with welding robots. Supposedly these robots could do hundreds or thousands of parts without making a mistake... BUT that was never the case, and people always needed to double-check the robots' work (different tech, not "AI", just programmed movements, but a similar-ish idea). By default, I just don't trust anything branded as "AI". It still requires a human to look over what it's done; it's just doing a monotonous task faster than a person could, but you still can't trust what it gives you.
I've sold actual zero trust, actual AI, actual DevX, etc. I'm so tired of "yeah, everyone else just throws the label on; why the fuck do I need AI in my bank app? We have the REAL blah blah blah."