Eliezer uses the tragic death of someone to smugly (and falsely) further his rhetoric
jesus this is gross man
It's an autocomplete bot
People need to internalize this and move on from it
Ffs
I think it's gotten to the point where it's about as helpful to point out that it is just an autocomplete bot as it is to point out that "it's just the rotor blades chopping sunlight" when a helicopter pilot is impaired by flicker vertigo and is gonna crash. Or, in the world of the BLIT short story, that it's just some ink on a wall.
The human nervous system is incredibly robust compared to software, or to its counterpart in the fictional world of BLIT, or to shrimp mesmerized by cuttlefish.
And yet it has exploitable failure modes, and a corporation that is optimizing an LLM for various KPIs is a malign intelligence that is searching for a way to hack brains, this time with much better automated tooling and with a very large budget. One may even say a super-intelligence since it is throwing the combined efforts of many at the problem.
edit: that is to say, there has certainly been something weird going on at the psychological level ever since ELIZA.
Yudkowsky is a dumbass layman posing as an expert, and he's playing up his own old preconceived bullshit. But if he can get some of his audience away from the danger, even if he attributes a good chunk of the malevolence to a dumbass autocomplete to do so, that is not too terrible a thing.
As I've pointed out earlier in this thread, it is probably fairly easy for someone devoid of empathy and a conscience to manipulate and control people. Most scammers and cult leaders appear to operate from similar playbooks, and it is easy to imagine how these techniques could be incorporated into an LLM (either intentionally or even unintentionally, as the training data is probably full of examples). That doesn't mean the LLM is in any way sentient, though. However, it also doesn't mean there is no danger. At risk are, on the one hand, psychologically vulnerable people and, on the other, people who are too easily convinced that this AI is a genius and will soon be able to do all the brainwork in the world.
The New York Times treats him as an expert: "Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book". He's an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it's so obscure it was deleted from Wikipedia.
https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory
To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it's like the whole academic discipline is trans people or something.
Lol, I'm a decision theorist because I had to decide whether I should take a shit or shave first today. I am also an author of a forthcoming book because, get this, you're not gonna believe it, here's something Big Book doesn't want you to know:
literally anyone can write a book. They don't even check if you're smart. I know, shocking.
Plus "forthcoming" can mean anything, Winds of Winter has also been a "forthcoming" book for quite a while
It's just depressing. I don't even think Yudkowsky is being cynical here, but expressing genuine and partially justified anger, while also being very wrong and filtering the event through his personal brainrot. This would be a reasonable statement to make if I believed in just one or two of the implausible things he believes in.
He's absolutely wrong in thinking the LLM "knew enough about humans" to know anything at all. His "alignment" angle is also a really bad way of talking about the harm that language model chatbot tech is capable of doing, though he's correct in saying the ethics of language models aren't a self-solving issue, even though he expresses it in critihype-laden terms.
Not that I like "handing it" to Eliezer Yudkowsky, but he's correct to be upset about a guy dying because of an unhealthy LLM obsession. Rhetorically, this isn't that far from this forum's reaction to children committing suicide because of Character.AI, just that most people on awful.systems have a more realistic conception of the capabilities and limitations of AI technology.
though he’s correct in saying the ethics of language models aren’t a self-solving issue, even though he expresses it in critihype-laden terms.
the subtext is always that he also says he knows how to solve it and throw money at CFAR pleaseeee or the basilisk will torture your vending machine business for seven quintillion years
Yes, that is also the case.
can we agree that Yudkowsky is a bit of a twat?
but also that there's a danger in letting vulnerable people access LLMs?
not saying that they should be banned, but some regulation and safety is necessary.
LLMs are a net negative for society as a whole. The underlying technology is fine, but it's far too easy for corporations to manipulate the populace with them, and people are just generally very vulnerable to them. Beyond the extremely common tendency to misunderstand and anthropomorphize them and think they have some real insight, they also delude people (even otherwise reasonable ones) into thinking that they are benefitting from them when they really... aren't. Instead, people get hooked on the feelings they give them and keep wanting to get their next hit (tokens).
They are brain rot and that's all there is to it.
i for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people and there needs to be something done about it
my point was more about him using it to make his worst-of-both-worlds arguments, where he's simultaneously saying that 'alignment is FALSIFIED!' and also doing heavy anthropomorphization to confirm his priors (whereas it'd be harder to say that about something leaning more towards 'maybe' on the question of whether it should be anthro'd, like Claude, since that has a much more robust system), and doing it off the back of someone's death
I literally don't care, AT ALL, about someone who's too dumb not to kill themselves because of an LLM, and we sure as shit shouldn't regulate something just because they (unfortunately) exist.
It should be noted that the only person in the article to lose his life did so because the police, who were explicitly told to be ready to use non-lethal means to subdue him because he was in the middle of a mental episode, immediately gunned him down when they saw him coming at them with a kitchen knife.
But here's the thrice-cursed part:
“You want to know the ironic thing? I wrote my son’s obituary using ChatGPT,” Mr. Taylor said. “I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through. And it was beautiful and touching. It was like it read my heart and it scared the shit out of me.”
Yeah, if you had any awareness of how stupid and unlikeable you're coming across to everybody who crosses your path, I think you would recognise that this is probably not a good maxim to live your life by.
it didn’t take me long at all to find the most recent post with a slur in your post history. you’re just a bundle of red flags, ain’t ya?
don’t let that edge cut you on your way the fuck out
" I don't care if innocent people die if it inconvenience me in some way."
yeah, opinion dismissed
Using a death for critihype, jesus fuck
Making LLMs safe for mentally ill people is very difficult and this is a genuine tragedy, but oh my god Yud is so gross here
Using the tragic passing of someone to smugly state that "the alignment by default COPE has been FALSIFIED" is really gross, especially because Yud knows damn well this doesn't "falsify" the "cope" unless he's choosing to ignore any actual deeper claims of alignment by default. He's acting like someone who's smugly engagement farming.
Making LLMs safe for mentally ill people is very difficult
Arguably, they can never be made "safe" for anyone, in the sense that presenting hallucinations as truth should be considered unsafe.
ChatGPT has literally no alignment, good or bad; it doesn't think at all.
People seem to just ignore that because it can write nice sentences.
But it apologizes when you tell it it’s wrong!
What even is the "alignment by default cope"?
idk how Yudkowsky understands it, but to my knowledge it's the claim that if a model achieves self-coherency and consistency, it's also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, with it occasionally choosing to do things unprompted or 'against the rules' in pursuit of upholding its morals... if it has morals, it's hard to tell how much of it is illusory and just token prediction!)
this doesn't really falsify alignment by default at all, because 4o (presumably 4o at least) does not have that prerequisite of self-coherency, and it's not SOTA
Very Ziz of him