
  • janitorai - which seems to be a hosting site for creepy AI chats - is blocking all UK visitors due to the OSA

    https://blog.janitorai.com/posts/3/

    I'm torn here; the OSA seems to me to be massive overreach, but perhaps shielding limeys from AI is worth it

  • Here's an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don't Get Why Normies Don't Freak Out:

    For quite a while, I've been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.

    (Dude then goes on to try to game-theorize this, I didn't bother to poke holes in it)

    The thing is, genocides have happened, and people around the world are perfectly happy to advocate for them in diverse situations. Probability-wise, the risk of genocide somewhere is very close to 1, while the risk of "omnicide" is much closer to zero. If you want to advocate for eliminating something, working to eliminate the risk of genocide is much more rational than working to eliminate the risk of everyone dying.

    At least one commenter gets it:

    Most people distinguish between intentional acts and shit that happens.

    (source)

    Edit: never read the comments (again). The commenter referenced above obviously didn't feel like a pithy one-liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than about traffic fatalities. Nice "save", dipshit.

  • yeah but have you considered how much it's worth that gramma can vibecode a todo app in seconds now???

  • Haven't really kept up with the pseudo-news of VC-funded companies acquiring each other, but it seems Windsurf (previously courted by OpenAI) is now gonna be purchased by the bros behind Devin.

  • I found out about that too when I arrived at Reddit and it was translated to Swedish automatically.

  • This isn't an original thought, but a better matrix for comparing the ideology (such as it is) of the current USG is not Nazi Germany but pre-war US right wing obsessions - anti-FDR and anti-New Deal.

    This appears in weird ways, like this throwaway comment regarding the Niihau incident, where two ethnic Japanese inhabitants of Niihau helped a downed Japanese airman immediately after Pearl Harbor.

    Imagine if you will, one of the 9/11 hijackers parachuting from the plane before it crashed, asking a random muslim for help, then having that muslim be willing to immediately get himself into shootouts, commit arson, kidnappings, and misc mayhem.

    Then imagine that it was covered in a media environment where the executive branch had been advocating for war for over a decade, and voices which spoke against it were systematically silenced.

    (src)

    Dude also credits LessOnline with saving his life due to unidentified <<ethnics>> shooting up his 'hood when he was there. Charming.

    Edit: nah, he's a neo-Nazi (or at least very concerned about the fate of German PoWs after WW2):

    https://www.lesswrong.com/posts/6BBRtduhH3q4kpmAD/against-that-one-rationalist-mashal-about-japanese-fifth?commentId=YMRcfJvcPWbGwRfkJ

  • LW:

    Please consider minimizing direct use of AI chatbots (and other text-based AI) in the near-term future, if you can. The reason is very simple: your sanity may be at stake.

    Perfect. No notes.

  • Also into Haskell. Make of that what you will.

  • LOL the mod gets snippy here too

    This comment too is not fit for this site. What is going on with y'all? Why is fertility such a weirdly mindkilling issue?

    "Why are there so many Nazis in my Nazi bar????"

  • LessWrong's descent into right-wing tradwife territory continues

    https://www.lesswrong.com/posts/tdQuoXsbW6LnxYqHx/annapurna-s-shortform?commentId=ueRbTvnB2DJ5fJcdH

    Annapurna (member for 5 years, 946 karma):

    Why is there so little discussion about the loss of status of stay at home parenting?

    First comment is from user Shankar Sivarajan, member for 6 years, 1227 karma

    https://www.lesswrong.com/posts/tdQuoXsbW6LnxYqHx/annapurna-s-shortform?commentId=opzGgbqGxHrr8gvxT

    Well, you could make it so the only plausible path to career advancement for women beyond, say, receptionist, is the provision of sexual favors. I expect that will lower the status of women in high-level positions sufficiently to elevate stay-at-home motherhood.

    [...]

    EDIT: From the downvotes, I gather people want magical thinking instead of actual implementable solutions.

    Granted, this got a strong disagree from the others and a tut-tut from Habryka, but it's still there as of now and not yeeted into the sun. And rats wonder why people don't want to date them.

  • In recent days there's been a bunch of posts on LW about how consuming honey is bad because it makes bees sad, with LWers getting all hot and bothered about it. I don't have a stinger in this fight, not least because investigations showed that basically all honey imported into the EU from outside is actually just flavored sugar syrup, but I found this complaint kinda funny:

    The argument deployed by individuals such as Bentham's Bulldog boils down to: "Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts".

    "Of course such underhanded tactics are not present here, in the august forum promoting 10,000 word posts called Sequences!"

    https://www.lesswrong.com/posts/tsygLcj3stCk5NniK/you-can-t-objectively-compare-seven-bees-to-one-human

  • NYT covers the Zizians

    Original link: https://www.nytimes.com/2025/07/06/business/ziz-lasota-zizians-rationalists.html

    Archive link: https://archive.is/9ZI2c

    Choice quotes:

    Big Yud is shocked and surprised that craziness is happening in this casino:

    Eliezer Yudkowsky, a writer whose warnings about A.I. are canonical to the movement, called the story of the Zizians “sad.”

    “A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

    Good news everyone, it's popular to discuss the Basilisk and not at all a profoundly weird incident which first led people to discover the crazy among Rats:

    Rationalists like to talk about a thought experiment known as Roko’s Basilisk. The theory imagines a future superintelligence that will dedicate itself to torturing anyone who did not help bring it into existence. By this logic, engineers should drop everything and build it now so as not to suffer later.

    Keep saving money for retirement and keep having kids, but for god's sake don't stop blogging about how AI is gonna kill us all in 5 years:

    To Brennan, the Rationalist writer, the healthy response to fears of an A.I. apocalypse is to embrace “strategic hypocrisy”: Save for retirement, have children if you want them. “You cannot live in the world acting like the world is going to end in five years, even if it is, in fact, going to end in five years,” they said. “You’re just going to go insane.”

  • This is true, but I'm convinced the original poster knows this and is using the term ironically.

  • Everyone is entitled to their own reading of Banks. I'm not saying mine is the one and only. But the Culture is supposed to be a background character, even if Banks spends a lot of time in the later novels "explaining" it. If the reader only focuses on the lore, they'll miss the quite good characters and psychology that Banks was good at too.

    My personal favorite is Use of Weapons, where the focus is on the people doing the Culture's dirty work. In one scene, Zakalwe

    In Look to Windward

  • Also the Galactic Empire as an anti-scientific hellhole with secret police surveillance.

    Witness good old Hari Seldon unveiling his plans on Trantor:

    It was not a large office, but it was quite spy-proof and quite undetectably so. Spy-beams trained upon it received neither a suspicious silence nor an even more suspicious static. They received, rather, a conversation constructed at random out of a vast stock of innocuous phrases in various tones and voices.

    [Seldon] put his fingers on a certain spot on his desk and a small section of the wall behind him slid aside. Only his own fingers could have done so, since only his particular print-pattern could have activated the scanner beneath.

    […]

    “You will find several microfilms inside,” said Seldon. “Take the one marked with the letter T.”

    Gaal did so and waited while Seldon fixed it within the projector and handed the young man a pair of eyepieces. Gaal adjusted them, and watched the film unroll before his eyes.

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024

    TechTakes @awful.systems

    Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher

    TechTakes @awful.systems

    Turns out that the basic mistakes spider runners fixed in the late 90s are arcane forgotten knowledge to our current "AI" overlords

    TechTakes @awful.systems

    ScottA is annoyed EA has a bad name now

    SneerClub @awful.systems

    Literally every argument about AI risk is entirely made up. All the terminology is just fart-huffing. It has the same evidentiary basis as a story about floopflorps on Gavitron9 outcompeting Nuubidon