LLMs still don't understand the word "no", much like their creators
AI Like ChatGPT Are No Good at ‘Not’ | Quanta Magazine
LLMs don't understand any words.
it's almost like this thing has no internal conceptual representation! I know this can't possibly be, millions of promptfans and promptfondlers have told me it can't be so, but it sure does look that way! wild!
It must have internal models of some things, or it wouldn't be able to consistently produce coherent and mostly reasonable statements. But having a reasonable model of things like grammar and conversation doesn't imply it has a good model of anything else, unlike a human, for whom a basic set of cognitive skills is presumably transferable. Still, the success of LLMs at their actual language-modeling objective is a promising sign that it's feasible for an ML model to learn complex abstractions.