24 comments
  • A good 20% of my weekly workload is writing summaries of reports that I know nobody reads.

    I haven't had a raise in years. They say there's no room in the budget but won't entertain any solutions that cut costs.

    So I gave myself a raise while I'm looking for other opportunities.

    Mine is not a usual situation. I don't recommend LLMs for work that matters in the least. I'm just killing some mindless busywork while people wonder why my brain hasn't melted from it yet.

  • "Here are the technical points I am going to implement in my IT service area and in the tactical order they are going to be done. Please dumb this down for me so that a management group can understand it and approve it."

    I don't have time to "explain it like you're five", I have real work to do. Judge me all you like.

  • i don’t understand what ai they were using that made others so unhappy

  • this looks like one of those studies that’ll look really dumb in a few years

  • "In fact, asking the chatbot comes across worse than not asking for help at all."

    Hey, wouldn't want anyone to be self-sufficient and want to complete a task on their own or anything! Remain dumb! Don't search! Just throw your hands up and go "I don't know!" and then don't do ANYTHING!

    Yeah. Completely sane thinking.....

    Every search you've ever done for information on the internet has been steeped in the beginnings of what LLMs are today. LLMs are the culmination of the work done on search engines to relate words and phrases to other words. Even the ones that aren't "chatbots" use similar methodology to try to get you what you're looking for. That's what propelled Google to the top of search.

    • Username checks out.

    • LLMs are the culmination of the work done on search engines to relate words and phrases to other words. Even the ones that aren’t “chatbots” use similar methodology to try to get you what you’re looking for. That’s what propelled Google to the top of search.

      Then why is LLM output worse than Google from 10+ years ago? 10+ years ago I could type pretty much anything I thought of into Google and get results that matched what I was looking for. LLMs just vomit up something that looks like it could be what I'm looking for, but isn't.

      • Because SEO is a game, and those with the most shit that matches what Google wants are rewarded with a higher ranking.

    • Asking a coworker for help is usually a much better way to get a relevant and correct answer on the first try. On my work computer I can’t reconfigure anything, so I’m forced to see chatbot results for my basic web searches. When I need a quick answer, I search myself before bugging anyone. Since I have to scroll past the “AI” answer to get to the relevant results, I’ve often checked to see if it’s right.

      It has never been right.

      I gotta stress that. It has never given me an answer I can use. I work in a field that isn’t particularly niche, and use software that millions of people use. If I need to figure out how to do something in an application, the chatbot answer will literally invent entire menus that don’t exist, just to show me exactly how not to do the thing I need. It’s all made up. I wish it would just say something like “Sorry, we don’t know how that software works yet.” But nope, it just makes shit up.

      So if I still can’t figure out how to do the thing without wasting too much time researching, I send a quick Slack message to a coworker, and in 30 seconds I get a screenshot with a big red arrow pointing at what I need. Humans win every time, and it’s insulting to your coworkers when you don’t take advantage of their experience. Bonus: you’ll never need to bug anyone about how to do that thing again, so everybody wins.

    • In quite a few cases, misinformation is worse than no information. A friend of mine now spends half his workday rejecting broken code generated by ChatGPT that would wreck the codebase if merged.
