Linguistics @lemmy.ml · posted by Lvxferre @lemmy.ml

[PNAS] Systematic testing of three Language Models reveals low language accuracy, absence of response stability, and a yes-response bias

Interesting paper about the alleged ability of LLMs* to judge the grammaticality of sentences, something humans are rather good at. Eight phenomena were tested, and the LLMs performed extremely poorly.

*LLM = large language model, i.e. stuff like Bard, ChatGPT, LLaMA etc. I'd argue that they aren't actual language models, given the absence of a semantic component, as shown by the article.
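For anyone curious what a test like this looks like in practice, here is a minimal Python sketch of the kind of repeated yes/no grammaticality probe the paper describes, measuring accuracy, yes-response rate, and response stability. The `ask_model` stub, the example sentences, and the repeat count are placeholders of my own, not the authors' actual prompts, materials, or protocol.

```python
import random
from collections import Counter

# Hypothetical stand-in for an LLM call; a real test would send a yes/no
# grammaticality prompt to an actual model through its API.
def ask_model(sentence: str) -> str:
    return random.choice(["yes", "no"])

# Tiny illustrative item set: (sentence, is_grammatical)
items = [
    ("The key to the cabinets is on the table.", True),
    ("The key to the cabinets are on the table.", False),
]

REPEATS = 10  # ask each item several times to probe response stability

correct = yes_answers = total = unstable_items = 0

for sentence, is_grammatical in items:
    answers = [ask_model(sentence) for _ in range(REPEATS)]
    # An item is "unstable" if the model doesn't answer it the same way every time.
    if len(Counter(answers)) > 1:
        unstable_items += 1
    for ans in answers:
        total += 1
        yes_answers += ans == "yes"
        correct += (ans == "yes") == is_grammatical

print(f"accuracy:          {correct / total:.2f}")
print(f"yes-response rate: {yes_answers / total:.2f}")
print(f"unstable items:    {unstable_items}/{len(items)}")
```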
