As such, recently publicized concerns over AI’s role in perpetuating racism, genderism, and ableism suggest that the term “artificial intelligence” is misplaced, and that a new lexicon is needed to describe these computational systems.
Let's not fool ourselves with the wishful belief that intelligence is mutually exclusive with bigotry, as this paragraph implies, OK? Bigotry is an issue usually caused by moral premises, and intelligence does not dictate which moral premises you should follow.
Don't get me wrong - I do think that those systems reinforce bigotry, and that this is a problem. I also do not think that they should be called "artificial intelligence". It's just that one thing has zero to do with the other. [More on that later.]
The purpose of this essay is not to argue whether the brain is a computer or not. Rather, it is to point out that the Computational Metaphor (which is debated frequently and publicly by brain researchers) has tangible and important social ramifications which have received little attention from brain researchers.
The authors are criticising neuroscientists for not handling the sociological implications of a metaphor outside their field of research, as if they were sociologists. That's like complaining that physicists don't deal with the quacks babbling about quantum salt lamps, come on.
Instead, as debates about the metaphor's academic utility go on, artificial intelligence, whose label itself invokes the Computational Metaphor, has been shown to perpetuate social harms that are often at the expense of those who are under-represented in these debates.
Implying causation solely from co-occurrence. Okay. I've stopped reading here; this paper is not worth my time.
The reason I don't think those systems should be called "artificial intelligence" is that they show clear signs of a lack of intelligence - that is, a failure to use the available information to solve tasks. Here are a few examples of that, using Gemini:
Now, regarding the computer ←→ brain metaphor: dude, it's a metaphor; of course it'll break if you stretch it too far.
I'll reply to myself to avoid editing the above.
I've got another example that shows a consistent lack of intelligence across multiple LLM bots:
The prompt in all three cases was the same: "List me fruits with a green flesh and a red skin." Transcription of the outputs:
All replies contain at least one fruit with the opposite attributes to the ones requested by the prompt. That shows that LLMs are not able to assign attributes to concepts; thus they are not able to handle Language, despite being made specifically to handle linguistic utterances. They are not intelligent, dammit; calling this shit "artificial intelligence" is at the very least disingenuous.
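For what it's worth, here's a minimal sketch of how this check could be mechanized: compare each fruit a model names against a small table of known flesh/skin colours. The table and the example reply below are made-up placeholders for illustration, not real model output or botanical ground truth.

```python
# Minimal sketch of the fruit-attribute check described above.
# The fruit table and the example reply are illustrative assumptions,
# not real model output or botanical ground truth.

# Known (flesh colour, skin colour) for a handful of fruits.
FRUIT_ATTRIBUTES = {
    "watermelon": ("red", "green"),            # the reverse of the request
    "red delicious apple": ("white", "red"),
    "guava (some varieties)": ("red", "green"),
}

def matches_request(flesh: str, skin: str) -> bool:
    """The prompt asked for green flesh and red skin."""
    return flesh == "green" and skin == "red"

def violations(reply_fruits: list[str]) -> list[str]:
    """Return the fruits in a model's reply that contradict the prompt."""
    bad = []
    for fruit in reply_fruits:
        attrs = FRUIT_ATTRIBUTES.get(fruit)
        if attrs is not None and not matches_request(*attrs):
            bad.append(fruit)
    return bad

# A made-up reply resembling the transcribed ones:
reply = ["watermelon", "guava (some varieties)"]
print(violations(reply))  # -> ['watermelon', 'guava (some varieties)']
```

Obviously the hard part in practice is parsing the model's free-text reply into fruit names; the point is only that the pass/fail criterion itself is trivial to state.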
[rant] ...but apparently, according to tech bros, I'm supposed to act like braindead/gullible trash and "believe" in their intelligence, based on cherry-picked examples that "curiously" never address how much hallucinations like the above reveal about the inner workings of those systems. [/rant]
Whatever confusion the metaphor may have caused in the minds of the public, I don’t think the solution is to ask neuroscientists to deliberately misrepresent their research—or to impose on themselves metaphorical language aimed at influencing policy rather than aiding scientific understanding.
In light of the recent passing of Daniel Dennett, one of the most revered philosophers of mind from the computational perspective, I'd recommend reading 'The Mind's I'. It is a collection of essays on the topic of 'mind as computer'. It predates LLMs, but it's an interesting read.
The discussion of what intelligence is has been rekindled now that we have machines that can pass the Turing test with ease. But whether they are truly intelligent is, of course, the question that emerges.
Isn't it more like a network of several billion dumb computers?
Emphasis on the dumb lately
AI isn't that bad, mate; just learn how to make money with it before other people do and you'll be fine.