Study: Your coworkers hate you for using AI at work
They hate me regardless of LLMs.
Classmates too... I know I'm not gonna use AI, and that the people using AI are harming themselves, but it's still shit to see how little work they have to put into stuff while I'm writing essays for hours. But the results do show: my teachers are pretty disappointed in the writing/speech skills of my classmates, while I'm doing pretty well (as are most other classmates who don't use AI). I could very clearly see how AI could lead to cognitive decline in certain metrics.
A good 20% of my weekly workload is writing summaries of reports that I know nobody reads.
I haven't had a raise in years. They say there's no room in the budget but won't entertain any solutions that cut costs.
So I gave myself a raise while I'm looking for other opportunities.
Mine is not a usual situation. I don't recommend LLMs for work that matters in the least. I'm just killing some mindless busywork while people wonder why my brain hasn't melted from it yet.
The new CTO at my workplace just the other day announced they'd partnered with Google to get Gemini Code Complete and they'd be piloting its use on a particular project. They had someone from Google present all the "benefits" of using AI for writing code and everything.
Most of my team is facepalming so hard. Except the one guy who's an AI enthusiast. (That guy is also a massive conspiracy theorist.) I'm not sure about everyone's sentiment, though. I'm kind of, sort of "in charge" on my team, and apparently the company's stance is now that AI use is acceptable. If I told folks not to use AI, I don't think the business would back me, so I'm having to be "diplomatic" about it.
So, I've told the team "just... don't push any code to the central repo without understanding it at least as well as if you'd written it yourself."
But yeah. I'm pretty pissed. Hopefully the CTO isn't high enough on the smell of his own farts to decide that "pilot" project is an unmitigated success despite all evidence to the contrary.
So, they're ok with sending all your (customers') code to Google. I'm mildly positively surprised my company isn't.
Yeah, it's pretty weird. My employer is a traditionally brick-and-mortar sort of business that's only recently started to learn that the internet isn't a "fad". My employer's policy has always been that we don't use "the cloud" specifically so we can keep our trade secrets secret. They're only now starting to approach the idea of running our code on off-premises hardware our company doesn't own. They're using Google Cloud rather than AWS, though, because "Amazon is a competitor" and they're paranoid that Amazon is going to steal our trade secrets. Which... doesn't make sense, because so is Google. (Of course, that doesn't really give away what industry my employer is in, because both Google and Amazon do basically everything.) Google pinky swears they'll never steal/use/share our data. And of course that might be true right up until the moment they decide to change that, and meanwhile we're screwed because we're all vendor-locked-in with them.
Anyway, yeah. My employer is nincompoops. Lol.
They'll just send the code to an AI text gen to "summarize" what it does and put that in the PR body.
You have to understand most of these devs don't give a shit (in a based kind of way. fuck work). At the same time it works to everyone's detriment, because they'll use code gens so they don't have to do any work, and then it bites you in the ass because you're the one reviewing.
Are we solving AI slop with peer pressure ??
"Here are the technical points I am going to implement in my IT service area and in the tactical order they are going to be done. Please dumb this down for me so that a management group can understand it and approve it."
I don't have time to "explain it like you're five", I have real work to do. Judge me all you like.
I'll tell my coworkers to go fuck themselves if they submit AI code for me to review. If you can generate AI code, I can also generate AI code, so why would I want to review your code instead of my own? Why not, instead of me reviewing your code, I take your task and throw twenty AI outputs at you, and you tell me which one of them actually works?
I use slop for our review process because management already decides what the results are before we even begin. Is that an acceptable use case? I just need filler text that has no impact on anything.
I personally don't think it matters that much, but if you wanna avoid using generative AI, use a Lorem Ipsum or a Markov chain.
No, I'm the one doing the hating.
I use it for regex. But I'll check it on regex101.
I think I could write the regex myself faster than doing those two things.
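For what it's worth, the regex101 sanity check can also live in a throwaway script or unit test. A sketch in Python, using a made-up example pattern (not one from this thread):

```python
import re

# Hypothetical pattern for illustration: match ISO-8601 dates like "2024-01-31".
iso_date = re.compile(r"^\d{4}-\d{2}-\d{2}$")

# Quick checks against known-good and known-bad inputs --
# the same idea as pasting candidates into regex101.
assert iso_date.match("2024-01-31")
assert not iso_date.match("2024-1-31")   # month must be zero-padded
assert not iso_date.match("31-01-2024")  # wrong field order
```

Whether the LLM round-trip plus verification beats just writing the pattern depends entirely on how hairy the regex is.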
I don't understand what AI they were using that made others so unhappy.
If I can use AI to do my boring work or learn something new, then I dgaf what coworkers think.
this looks like one of those studies that’ll look really dumb in a few years
"In fact, asking the chatbot comes across worse than not asking for help at all."
Hey, wouldn't want anyone to be self-sufficient and complete a task on their own or anything! Remain dumb! Don't search! Just throw your hands up, go "I don't know!", and then don't do ANYTHING!
Yeah. Completely sane thinking.....
Every search you've ever done for information on the internet has been steeped in the beginnings of what LLMs are today. LLMs are the culmination of the work done on search engines to find how words and phrases relate to other words. Even the ones that aren't "chatbots" use similar methodology to try to get you what you're looking for. That's what propelled Google to the top of search.
Username checks out.
LLMs are the culmination of the work done on search engines to find how words and phrases relate to other words. Even the ones that aren’t “chatbots” use similar methodology to try to get you what you’re looking for. That’s what propelled Google to the top of search.
Then why is LLM output worse than google from 10+ years ago? 10+ years ago I could type pretty much anything I thought of into google and get results that matched what I was looking for. LLMs just vomit up something that looks like it could be what I'm looking for, but isn't.
Because SEO is a game, and those with the most shit that matches what Google wants are rewarded with a higher ranking.
Asking a coworker for help is usually a much better way to get a relevant and correct answer on the first try. On my work computer I can’t reconfigure anything, so I’m forced to see chatbot results for my basic web searches. When I need a quick answer, I search myself before bugging anyone. Since I have to scroll past the “AI” answer to get to the relevant results, I’ve often checked to see if it’s right.
It has never been right.
I gotta stress that. It has never given me an answer I can use. I work in a field that isn’t particularly niche, and use software that millions of people use. If I need to figure out how to do something in an application, the chatbot answer will literally invent entire menus that don’t exist, just to show me exactly how not to do the thing I need. It’s all made up. I wish it would just say something like “Sorry, we don’t know how that software works yet.” But nope, it just makes shit up.
So if I still can’t figure out how to do the thing without wasting too much time researching, I send a quick Slack message to a coworker, and in 30 seconds I get a screenshot with a big red arrow pointing at what I need. Humans win every time, and it’s insulting to your coworkers when you don’t take advantage of their experience. Bonus: you’ll never need to bug anyone about how to do that thing again, so everybody wins.
In quite a few cases, misinformation is worse than no information. Friend of mine now spends half his work day rejecting broken code generated from ChatGPT that would break the codebase if merged.