GPT-4 Understands
Whether LLMs are intelligent or not is still hotly debated, but I think the author's thoughts are very interesting.
This is crazy to me. You can read in a stream of meaningless numbers (tokens) and incidentally build a reasonably accurate model of the real things those tokens represent.
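A toy illustration of that point (this is not GPT-4's actual tokenizer, just a hypothetical word-level sketch): the model's input really is nothing but integers assigned by a vocabulary lookup.

```python
# Toy word-level "tokenizer": the model never sees words,
# only the arbitrary integer IDs they map to.

def build_vocab(corpus):
    """Assign an arbitrary integer ID to each distinct word."""
    vocab = {}
    for word in corpus.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

corpus = "the cat sat on the mat"
vocab = build_vocab(corpus)
token_ids = [vocab[w] for w in corpus.split()]
print(token_ids)  # [0, 1, 2, 3, 0, 4] -- "the" maps to 0 both times
```

Everything downstream of this step operates on those numbers alone; any "meaning" has to be reconstructed from their patterns of co-occurrence.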
The implications are vast. We may be able to translate between languages that have never had a “Rosetta Stone”. Any animals that have a true language could have it decoded. And while an LLM with an 8-year-old’s understanding of balancing assorted items isn’t that useful, an LLM with a baby whale’s grasp of whale language would be revolutionary.
LLMs can't do any of those things though...
If no one teaches them how to speak a dead language, they won't be able to translate it. LLMs require a vast corpus of language data to train on and, for bilingual translation, an actual Rosetta Stone (usually the same work appearing in multiple languages).
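The parallel-text point can be sketched with a toy co-occurrence aligner (hypothetical data and function names; real translation models are far more sophisticated, but they share this dependence on aligned pairs):

```python
# Toy sketch of why translation needs parallel text: with aligned
# sentence pairs, word correspondences can be guessed from
# co-occurrence counts. Without the aligned column, there is
# nothing to count against.
from collections import Counter

parallel = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
]

cooc = Counter()
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooc[(s, t)] += 1

def best_translation(word):
    """Pick the target word that co-occurred most with `word`."""
    pairs = [(t, n) for (s, t), n in cooc.items() if s == word]
    return max(pairs, key=lambda p: p[1])[0]

print(best_translation("the"))  # "le" -- it co-occurs with "the" twice
```

Delete the second column and the counts vanish; that is the "Rosetta Stone" doing the work, not the model.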
This problem is obviously exacerbated quite a bit with animals, who, definitionally, speak no human language and have very different cognitive structures to humans. It is entirely unclear if their communications can even be called language at all. LLMs are not magic and cannot render into human speech something that was never speech to begin with.
The whole article is just sensationalism that doesn't begin to understand what LLMs are or what they're capable of.
They are making sense of a language without a Rosetta Stone. The English an LLM speaks is learned from English alone.
Granted, building the corpus is a big undertaking. But still.
What about preserving languages that are close to extinct, but still have language data available? Can LLMs help in this case?
Curious about your thoughts on this. It's not an LLM, but it likely uses transformers in its architecture.
I think that's a good example of Emergent Abilities of Large Language Models.
GPT-4 cannot alter its weights once it has been trained so this is just factually wrong.
LLMs are really cool and very useful, don't get me wrong. But people get excited by what they seem to do and lose sight of what they actually can do. They are not intelligent. They create text based on inputs. That is not what intelligence is, unless you hold the extremely dismal view that humans are text-creation machines with no thoughts, no feelings, no desires, no ability to plan... basically, no internal world at all.
An LLM is an algorithm, not an intelligence.
Adam Something uploaded a video that starts with the definition of intelligence itself, and then explains how something that “acts” intelligent doesn’t mean it “is” intelligent.
I think even "intelligence" here is a stretch. In a very narrow sense it acts intelligently: it creates text, simulates conversations, answers questions. But that is all LLMs can do, and it is not what intelligence is.
That's kind of silly semantics to quibble over. Would you tell a robot hunting you down "you're only acting intelligent, you're not actually intelligent!"?
People need to get over themselves as a species. Meat isn't anything special, it turns out silicon can think too. Not in quite the same way, but it still thinks in ways that are useful to us.
The author is an imbecile if they haven't been able to break GPT. It took me less than one day of tooling around with it before I got it to say something which outed it as having no understanding of what we were discussing.
That doesn't mean it's not intelligent. Humans can get broken in all sorts of ways. Are we not intelligent?
The bit you quoted is referring to training.
Recent papers say otherwise.
The conclusion the author of that article comes to (that LLMs can understand animal language) is problematic, to say the least. I don't know how they expect that to happen.
In what sense does your link say otherwise? Is a world model the same thing as intelligence?
Are you not an algorithm?
Perfected over thousands of years?
No? Humans are not algorithms except in the most general sense.
For example, there has not been any discovery of an algorithm that allows one to predict human actions, and scientists debate whether such a thing could even exist.
It’s a state machine and so are we. If it had the ability to alter itself the way we do, I don’t see how we’d be any different.
It can't; again the model does not and cannot change once it's been generated.
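The claim above can be made concrete with a toy sketch (hypothetical names; a real model has billions of parameters, but the principle is the same): inference reads the weights, it never writes them.

```python
# Toy sketch of "inference does not change the model": a forward
# pass reads frozen parameters but never modifies them.

weights = (0.5, -1.2, 0.3)  # frozen parameters (a tuple is immutable)

def forward(x):
    """Stand-in for a model's forward pass: a weighted sum of inputs."""
    return sum(w * xi for w, xi in zip(weights, x))

before = weights
for _ in range(1000):          # answer a thousand "prompts"
    forward((1.0, 2.0, 3.0))
assert weights is before       # the parameters are untouched
```

Learning, by contrast, is a separate training phase that does rewrite the parameters; once that phase ends, the deployed model is fixed.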
Define intelligence. Your last line is kind of absurd. Why can't intelligence be described by an algorithm?
LLMs do not think or feel or have internal states. With the same random seed and the same input, GPT-4 will generate exactly the same output every time. Its speech is the result of a calculation, not of intelligence or self-direction. So, even if intelligence can be described by an algorithm, LLMs are not that algorithm.
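The determinism claim is easy to demonstrate in miniature (a toy sampler, nothing like GPT-4's real decoding loop): fix the seed and the "generated text" is identical on every run.

```python
# Toy illustration of seeded determinism: the same seed and the
# same prompt always produce the same output.
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]

def generate(prompt, seed, length=5):
    rng = random.Random(seed)          # fixed seed -> fixed randomness
    out = list(prompt)
    for _ in range(length):
        out.append(rng.choice(VOCAB))  # stand-in for next-token sampling
    return " ".join(out)

a = generate(["hello"], seed=42)
b = generate(["hello"], seed=42)
assert a == b  # identical seed and input give identical output
```

The apparent "creativity" of a deployed model comes entirely from varying that seed (and the sampling temperature), not from any internal change in the model itself.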