Survey: Why Non-Tech Workers Don't Trust AI—and Fear for Their Jobs
inc.com
Pretty sure people in tech hate it too outside of those trying to sell it
There's a contingent that likes it. For some of them, it means they don't even have to pretend to have social skills, since they can outsource writing to AI. They're also increasingly using it in place of google/copy-pasting from stackoverflow/etc to get "quick fix" solutions to their problems. It's not particularly good at those tasks IMO, but I genuinely think that for some people the dopamine hit of copy-pasting something straight out of chatgpt, not lifting a finger, and having it work first try is addictive, and even though they usually have to troubleshoot it, re-prompt, and then make changes by hand, they just keep chasing that sweet no-effort fix. Some of them seem to treat it like a junior coworker you can offload all your work onto, forever.
In my experience (I've literally never used it myself, but I've had coworkers try to feed its answers to me when we're working together on something, or hand me what it spit out to fix for them), it tends to do okay for common use-cases, ones you can almost always just look up in documentation or stackoverflow anyhow, but in more niche problems, it will often hallucinate that there's a magic parameter that does exactly what you want. It will never tell you "Nope, can't be done, you have to restructure around doing it this other way", unless you basically figure it out yourself and prompt it into doing so.
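A made-up illustration of the kind of thing I mean (the parameter here is invented, which is exactly the point): everyone wishes open() could create missing directories, so a plausible-sounding switch for it is exactly what gets hallucinated.

```python
# "create_dirs" is not a real parameter of open(); it's the magic switch you'd
# want, which is why it's the kind of thing that gets confidently made up.
# The real answer is restructuring: call os.makedirs() first, then open().
try:
    f = open("/tmp/some/new/dir/out.txt", "w", create_dirs=True)
except TypeError as e:
    print(e)  # TypeError: unexpected keyword argument 'create_dirs'
```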
in more niche problems, it will often hallucinate that there's a magic parameter that does exactly what you want. It will never tell you "Nope, can't be done, you have to restructure around doing it this other way"
This was why, in spite of it all, I had a brief glimmer of hope for DeepSeek -- it's designed to reveal both its sources and the process by which it reaches its regurgitated conclusions, since it was meant to be an open-source research aid rather than a proprietary black-box chatbot.
I have been meaning to try deepseek for a chuckle/to see what the hype is about. I have pretty much no drive to use AI instead of learning or doing the work myself, but I am willing to accept that, free of the shackles of capitalism, it might be useful and non-destructive technology for some applications in the future, and maybe it's a tiny glimpse towards that
Make weird prompts and get it to do weird outputs; it's kind of fun. I put in such a strange prompt that I got it to say something suuuuper reddity like "le bacon is epic xD" completely unprompted. I think my prompt involved making the chatbot's role something like an unhinged quirky catgirl. Unfortunately this tends to break the writing model pretty quickly and it starts injecting structural tokens back into the stream, but it's very funny.
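For anyone wondering what "structural tokens" means here, a rough sketch (ChatML-style delimiters as one example; the exact strings vary by model):

```python
# Rough sketch of a chat template. The <|im_start|>/<|im_end|> delimiters are
# ChatML-style structural tokens; other models use different ones. This just
# shows what can leak back out when the model gets pushed off-distribution.
system = "You are an unhinged quirky catgirl."   # the weird persona prompt
user = "hello"

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

# A "broken" completion is one where the model starts emitting the delimiters
# itself instead of stopping at them, e.g.:
#   "nyaa~ le bacon is epic xD<|im_end|>\n<|im_start|>user\n..."
print(prompt)
```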
Anthropic's latest research shows the chain-of-thought reasoning these models display isn't trustworthy anyway. It's for our benefit, and doesn't match the actual internal reasoning 1:1.
thank you anthropic for letting everyone know your product is hot garbage
As you say, hallucination can be solved by adding meta-awareness. Seems likely to me we'll be able to patch the problem eventually. We're just starting to understand why these models hallucinate in the first place.
I don't think hallucination is that poorly understood, tbh. It's related to the grounding problem to an extent, but it's also a result of the fact that it's a non-cognitive generative model. You're just sampling from a distribution. It's considered "wrong output" because, to us, truth is obvious. But if you look at generative models beyond language models, the universality of this behavior is obvious. You cannot have the ability to make a picture of minion JD Vance without LLMs hallucinating (or the ability to have creative writing, for a same-domain analogy). You can see it in humans too, in things like wrong words or word salads/verbal diarrhea/certain aphasias. Language function is also preserved in some cases even when logical ability is damaged.

With no way to re-examine and make judgements about its output, and no relation to reality (or some version of it), the unconstrained output of the generative process is inherently untrustworthy. That is to say: all LLM output is hallucination, and it only has a relation to the real when interpreted by the user. Massive amounts of training data are used to "bake" the model such that the likelihood of producing text we would consider "true" is better than random (or pretty high in some cases). This extends to the math realm too, and is likely why CoT improves apparent reasoning so dramatically (and also likely why CoT reasoning only works when a model is of sufficient size). They are just dreams, and only gain meaning through our own interpretation. They do not reflect reality.
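A toy sketch of the "just sampling from a distribution" point (the vocabulary and scores are made up; there's no real model here). The decoding step only ever turns scores into probabilities and samples; nothing in it checks the result against reality, which is the whole argument above in miniature.

```python
import numpy as np

# Toy next-token step over a tiny vocabulary. Generation is only this:
# turn scores into probabilities, then sample. "Truth" only shows up insofar
# as training made the true-ish tokens more probable than the others.
vocab = ["Paris", "Lyon", "purple", "1958"]
logits = np.array([3.1, 1.2, -0.5, 0.4])   # invented scores for illustration

def sample_token(logits, temperature, rng):
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(len(vocab), p=p)

rng = np.random.default_rng(0)
for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_token(logits, t, rng)] for _ in range(10)]
    print(f"temperature {t}: {picks}")
```

At low temperature it almost always says "Paris"; at high temperature "purple" and "1958" start showing up. Nothing about the mechanism changed, only how flat the distribution is, which is why "hallucination" isn't a separate failure mode bolted onto an otherwise truth-tracking process.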
many chuds work in tech. you will sadly see plenty of them bootlicking musk, ai and shit