"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "
The real issue is that people need to understand how LLMs work. An LLM is just a really good next-word generator that produces text which sounds plausible to a human; accuracy and truth mostly aren't part of the equation. The model doesn't even see words: it breaks text down into numbers (tokens) and treats the whole thing like a giant math problem.
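Here's a toy illustration of that point, a crude bigram counter in Python. A real LLM is a neural network over subword tokens, not a lookup table like this, but the core idea is the same: text becomes numbers, and the output is "most plausible next token", not "true statement".

    # Toy "next-word generator": count which token follows which in a tiny corpus,
    # then pick the most probable continuation. Nothing here represents truth or facts;
    # the only criterion is "what usually comes next".
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}  # words -> numbers
    ids = [vocab[w] for w in corpus]                                   # the model only ever sees these ints

    follows = defaultdict(Counter)
    for cur, nxt in zip(ids, ids[1:]):
        follows[cur][nxt] += 1                     # count how often each token follows another

    def next_word(word):
        best_id = follows[vocab[word]].most_common(1)[0][0]
        return next(w for w, i in vocab.items() if i == best_id)

    print(next_word("the"))                        # -> "cat": plausible, not "correct"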
It's an amazing tool that will massively boost productivity, but people need to know its limitations and what it's actually capable of. That's where the hype is overblown.
I work in AI research. I've been trying to explain it to people as an improv actor that takes suggestions from the audience. It just plays along with the prompt you give it. It's not an expert, it's just an actor playing a role.
I think this is downplaying what LLMs do. Yeah, they're not the best at doing things in general, but the fact that they were able to learn the structure and semantic context of language is quite impressive, even if they don't know what the words converted into tokens actually mean. I suspect we'll be able to use LLMs as one part of a full digital "brain", with some model similar to our own prefrontal cortex calling the LLM (and other components like a vision model, a sound model, etc.) and using their outputs to reason about a task and take an action. That's where I think the hype will be validated: when you put all these parts we've been working on together and Frankenstein a new and actually intelligent system.
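Very roughly, the shape of the wiring I have in mind is something like the sketch below. Every name in it is made up for illustration; none of these components exist as such, and the real thing would obviously be far more involved than a controller gluing strings together.

    # Hypothetical "prefrontal cortex" controller that routes a task through specialised
    # models and reasons over their outputs. The stubs at the bottom stand in for real
    # models so the sketch runs on its own.
    from dataclasses import dataclass
    from typing import Any, Callable, Optional

    @dataclass
    class Controller:
        language_model: Callable[[str], str]   # text in, text out (the LLM)
        vision_model: Callable[[Any], str]     # image in, description out

        def handle(self, task: str, image: Optional[Any] = None) -> str:
            context = ""
            if image is not None:
                context = "The image shows: " + self.vision_model(image) + "\n"
            # The controller decides what to ask and what to do with the answer;
            # the LLM is one component being called, not the whole "brain".
            return self.language_model(context + "Task: " + task)

    brain = Controller(language_model=lambda prompt: f"(LLM answer to: {prompt!r})",
                       vision_model=lambda img: "a cat sitting on a mat")
    print(brain.handle("What animal is in the picture?", image=object()))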
Ironically, I think you also are overlooking some details about how LLMs work. They are not just word generators. Stuff is going on inside those neural networks that we're still unsure of.
For example, I read about a study a little while back that was testing the mathematical abilities of LLMs. The researchers would give them simple math problems like "2+2=" and the LLM would fill in 4, which was unsurprising because that equation could be found in the LLM's training data. But as they went to higher numbers the LLM kept giving mostly correct results, even when they knew for a fact that the specific math problem being presented wasn't in the training data. After training on enough simple addition problems the LLM had actually "figured out" some of the underlying rules of math and was using those to make its predictions.
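The rough shape of that kind of probe, if you want to picture it, is below. The ask_model function is a made-up stand-in for whatever LLM the researchers actually queried (I don't have the study's code), wired to be right about 90% of the time so the sketch mirrors the "mostly correct" result; the interesting part is only that the test problems are guaranteed not to be in the training set.

    # Probe sketch: build a set of "seen" addition problems, then score the model only on
    # problems that are provably outside that set. ask_model is a fake stub, not a real LLM.
    import random

    training_set = {f"{a}+{b}=": a + b for a in range(100) for b in range(100)}  # "seen" problems

    def ask_model(prompt: str) -> int:
        a, b = map(int, prompt.rstrip("=").split("+"))
        answer = a + b
        return answer if random.random() < 0.9 else answer + random.choice([-10, 1, 10])  # "mostly correct" stub

    correct, trials = 0, 1000
    for _ in range(trials):
        a, b = random.randint(1000, 9999), random.randint(1000, 9999)
        prompt = f"{a}+{b}="
        assert prompt not in training_set            # guaranteed unseen four-digit problem
        correct += ask_model(prompt) == a + b

    print(f"accuracy on unseen problems: {correct / trials:.1%}")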
Being overly dismissive of this technology is as fallacious as overly hyping it.
No. Just.... No. The LLM has not "figured out" what's going on. It can't. These things are just good at prediction. The main indicator is in your text: "mostly correct". A computer that knows what to calculate will not be "mostly correct". One false answer proves one hundred percent that it has no clue what it's supposed to do.
What we're seeing with those "studies" is that social-science people try to apply the same rules they apply to humans (where "mostly correct" is as good as "always correct"), which is bonkers, or behavioral researchers try to prove some behavior they attribute to the AI as if it were a living being. That's also bonkers, because the AI will mimic the results in its training data, which is human-generated, so the data will be biased as fuck and it's impossible to determine whether the AI did anything by itself at all (which it didn't, because that's not how the software works).
yea, more like: ceos and white-collar jobs hurt consumers and workers. those jobs need to be eliminated and corps need to be heavily taxed so that universal basic income becomes ubiquitous. idk if any of this makes sense but i think this is how things should be going
Stopping math is never a good idea. By limiting your own constituents, you set their progress back from what other governments' constituents can achieve.
Also, effectively replacing a CEO requires AGI level capabilities. We're closer to that than ever before, but LLMs in their current state aren't it.
They may create text that appears to human eyes to be the result of thinking, reasoning, or understanding, when in fact it is anything but.
For generation of fictional text and images that's fine.
"There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers"
Like any other case of automation in the history of society.
"[...] and impact consumers by creating lesser quality products"
That sounds very subjective.
"[...] and allowing more erroneous outputs."
Large language models should not be used as a source of facts; that's why they all warn you about their limitations. LLMs are tools and should be used properly. A blowtorch can get your balls burnt if used improperly.
I'm reminded of a phenomenon in the 70s and 80s, "the computer is never wrong", in which pricing mistakes and bank errors were expected to be impossible since there was a computer involved.
As an aside, I wonder if this is in any way related to the rush of patents in the 90s and aughts for things humans obviously do, but "on a computer" or "on the web", like transferring money or making transactions. We still have lawsuits like that.
Also related: the predictive-policing software that some US counties bought, unvetted, and use to justify longer sentences for poor and nonwhite convicts so that no judge has to attach his name to bigoted rulings.
We humans seem to imagine that since there's a magic box involved in computing our answers, the answer is automatically more precise. Perhaps it's related to the notion that we're considering more factors, but that only works if we've properly measured those factors and applied them appropriately to the model. Otherwise, as the saying goes (also from early computing), "garbage in, garbage out".
At least people are coming around to why it's called AI. Artificial intelligence is called that because it's a facsimile of intelligence. It acts intelligent, but has no intellect. It's an algorithm, usually one designed in a black box, so no one can analyze exactly how the output was produced.
We don't need to develop tech that can't be analyzed directly. AI can be, and has been, developed in ways that are easy to analyze, like being able to trace why an output was given.
What do you think "it's an algorithm" is supposed to imply? Can nothing deterministic be considered intelligent?
Also, "designed in a black box" is misleading. It's opaque because it's emergent behavior, not because it was obfuscated or designed in secrecy or something. The algorithm itself is simple. All the interesting data is encoded in the billions to trillions of input parameters. These parameters aren't designed at all, they are learned.