corbin @corbin@awful.systems · Posts 8 · Comments 112 · Joined 2 yr. ago
FYI: I'm posting a non-sneer without an NSFW tag. I suspect that you might want to post this sort of article in the sister community !NotAwfulTech for non-sneering feedback; this community is explicitly for "big brain tech dude" authors who are posting "yet another clueless take."
While it would be pleasingly recursive to read this article as such a "clueless take," I think it's clearly better researched than that. Also, while I personally don't like the concept of white allyship, I understand why it emerges: it takes longer to let go of one's beliefs than to embrace the people around you, and so it takes longer to let go of whiteness than to become okay with non-white folks. So I'm not going to take that angle. I don't think it's okay to be white, but I also think that it takes a while for white folks to realize that they can stop being white.
With all that in mind, I think it's worth pointing out that while all five suggestions are laudable, none of them address the structural and reputational problems at the heart of Mastodon. @sailor_sega_saturn@awful.systems had a killer comment on the last draft (which I can't permalink because Lemmy is trash; it's in this tree) about how ActivityPub structurally enables harassment by allowing pseudonymous interactions. In my personal conversations with ActivityPub's architects, I got the sense that they didn't understand what we call The Reputation Problem: whatever paths a system uses to hand out reputation to participants will be reinforced in proportion to the rewards flowing through them. This is also the root of my pessimism about related projects like Spritely Goblins.
(This reminds me that I need to flesh out the bullet point in my notes headlined "The Reputation Problem & A Theory of Generalized Fuckwittery". This generalizes the Greater Internet Fuckwad Theory, Homo economicus, etc. It's all obviously connected from a distributed-systems perspective: bad actors are getting paid for their bad actions by the system's structure!)
Further, it's not clear that the community's adaptations are sustainable. TBS can't seem to shed its TERFs and it should be obvious that any similarly-structured project will be too authoritarian for a large chunk of the community. Hashtags aren't private or moderated spaces, and any sort of hashtag usage council would immediately run into the same authoritarian issues. One of the disadvantages of Balkanization is that your neighbors, safely separated from you by geographic obstacles, will start talking shit about you, and you don't want to let them police your lands.
It's very funny that, of my two Lemmy accounts, the sneer-club account has better domain reputation than the programming-specialist account.
At least it's still lactose-free~
Language designers are obligated to be linguists as well. This writeup has pushed me to distrust Graham's ability to design Lisps. In a previous sneer, I wasn't impressed by his languages, but now I'm fast-rejecting them. (Also previously folks seemed keen to defend him when they thought his essays sounded smart. Maybe those essays just sound white!)
This orange thread is about San Francisco banning certain types of landlord rent collusion. I cannot possibly sneer better than the following in-thread comment explaining why this is worthwhile:
> While I agree that the giant metal spikes we put on all the cars aren't the exclusive reason that cars are lethal, I would hope we both agree that cars are less lethal when we don't cover them in giant metal spikes.
Oh, you misunderstand. It's not for me.

::: spoiler NSFW
I've been to plenty of shrinks and have never been diagnosed with anything outside of neurodivergence: giftedness, ADHD, and autism. I appreciate TLP because it has helped me understand and manage my narcissistic parent.
:::

Nonetheless, I agree with your critique of the writing style; it's got all the fake edge of a twenty-something frat boy learning about existentialism for the first time.
In the sense that TLP isn't Blackbeard, no, we don't. But I would suggest that, unlike Scott, TLP genuinely understands the pathology of narcissism. Their writing does something Scott couldn't ever do: it grabs the narcissist by the face and forces them to notice how their thoughts never not involve them. As far as I can tell, Scott's too much of a pill-pusher to do any genuine psychoanalysis.
Also, like, consider this TLP classic. Two things stand out if we're going to consider whether they're Scott in disguise. The first is that the dates are not recent enough, and indeed TLP's been retired for about a decade. The second is that the mythology and art history are fairly detailed and accurate, something typically beyond Scott.
(In true Internet style, I hope that there is a sibling comment soon which shows that I am not just wrong, but laughably and ironically wrong.)
Let's see how long HN takes to ban yet another literal Nazi.
What really gets me is that we never look past Schrödinger's version of the cat. I want us to talk about Bell's Cat, which cannot be alive or dead due to a contradiction in elementary linear algebra and yet reliably is alive or dead once the box opens. (I guess technically it should be "alive", "dead", or "undead", since we're talking spin-1 particles.)
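(If anybody wants the linear algebra spelled out, here's a sketch of the contradiction I mean, which is really the Kochen-Specker flavor of the argument rather than Bell's inequality proper. For a spin-1 particle, the squared spin components along any three mutually orthogonal axes commute with each other and sum to a constant:

$$S_x^2 + S_y^2 + S_z^2 = s(s+1)\hbar^2\,\mathbb{1} = 2\hbar^2\,\mathbb{1}$$

Each squared component individually measures to 0 or 1 in units of $\hbar^2$, so every orthogonal triple yields exactly one 0 and two 1s. Kochen and Specker showed that no assignment of fixed 0/1 values to every direction in space can obey that rule, which is the precise sense in which the cat can't secretly be alive, dead, or undead before the box opens.)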
At risk of being NSFW, this is an amazing self-own, pardon the pun. Hypnosis via text only works on folks who are fairly suggestible and also very enthusiastic about being hypnotized, because the brain doesn't "power down" as much machinery as with the more traditional lie-back-on-the-couch setup. The eyes have to stay open, the text-processing center is constantly engaged, and re-reading doesn't deepen properly because the subject has to have the initiative to scroll or turn the page.
Adams had to have wanted to be hypnotized by a chatbot. And that's okay! I won't kinkshame. But this level of engagement has to be voluntary and desired by the subject, which is counter to Adams' whole approach of hypnosis as mind control.
My NSFW reply, including my own experience, is here. However, for this crowd, what I would point out is that this was always part of the mathematics, just like confabulation, and the only surprise should be that the prompt doesn't need to saturate the context in order to approach an invariant distribution. I only have two nickels so far, for this Markov property and for confabulation from PAC learning, but it's completely expected ~~weird~~ that it's happened twice.
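(A toy numpy sketch of the Markov point, with made-up numbers and nothing to do with any particular model: whatever distribution you start from, iterating the same transition kernel forgets the start and converges to the kernel's invariant distribution.)

```python
import numpy as np

# Toy 3-state Markov chain; rows sum to 1. All entries are positive, so the
# chain is irreducible and aperiodic, with a unique invariant distribution.
P = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Two very different "prompts" (starting distributions).
for start in ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]):
    dist = np.array(start)
    for _ in range(50):
        dist = dist @ P          # one more step of the chain
    print(dist.round(4))         # both print the same invariant distribution
```

The prompt is just the starting distribution here; the chain washes it out geometrically fast, which is why saturating the context was never necessary.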
Show me a long-time English Wikipedia editor who hasn't broken the rules. Since WP is editable text and most of us have permission to alter most pages, rule violations aren't set in stone and don't have to be punished harshly; often, it's good enough to be told that what you did was wrong and that your edits will be reverted.
NSFW: When you bring this sort of argument to the table, you're making it obvious that you've never been a Wikipedian. That's not a bad thing, but it does mean that you're going to get talked down to; even if your question was in good faith, you could have answered it yourself by lurking amongst the culture being critiqued.
He tells on himself by saying "Gerard" vs "Scott" and "David Gerard" vs "Scott Alexander". What's really pathetic is that he thinks politics on Wikipedia is about left vs right or authoritarians vs anarchists. Somebody should let him know that words are faith, not works.
Even better, we can say that it's the actual hard prompt: this is real text written by real OpenAI employees. GPTs are well known to quote verbatim from their context, and OpenAI trains theirs to do it by teaching them to break word problems down into pieces which are manipulated and regurgitated. This is clownshoes prompt engineering, driven by manager-first principles like "not knowing what we want" and "being able to quickly change the behavior of our products with millions of customers in unpredictable ways".
That's the standard response from last decade. However, we now have a theory of soft prompting: start with a textual prompt, embed it, and then optimize the embedding with a round of fine-tuning. It would be obvious if OpenAI were using this technique, because leaking the prompt would only recover similar texts rather than verbatim texts (except perhaps at zero temperature). This is a good example of how OpenAI's offerings are behind the state of the art.
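(For anybody who hasn't seen it, here's a minimal PyTorch sketch of soft prompting, where every name and dimension is made up for illustration; real pipelines, e.g. Lester et al.'s prompt tuning or Hugging Face's PEFT library, have more machinery.)

```python
import torch
import torch.nn as nn

# Illustrative sizes; these match no real model.
vocab_size, d_model = 50_000, 768

# Stand-in for a frozen language model's token-embedding table.
embed = nn.Embedding(vocab_size, d_model)
embed.requires_grad_(False)

# 1. Start with a textual prompt: embed its token ids once.
hard_prompt_ids = torch.tensor([17, 4242, 99, 31337])       # made-up ids
soft_prompt = nn.Parameter(embed(hard_prompt_ids).clone())  # now trainable

# 2. Fine-tune only these embeddings; the model itself stays frozen.
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

def with_soft_prompt(input_ids: torch.Tensor) -> torch.Tensor:
    """Prepend the learned prefix to a batch of embedded inputs."""
    tokens = embed(input_ids)                          # (batch, seq, d_model)
    prefix = soft_prompt.unsqueeze(0).expand(tokens.size(0), -1, -1)
    return torch.cat([prefix, tokens], dim=1)          # goes to the frozen model
```

After a round of optimization, soft_prompt generally sits between token embeddings, so a leak could only recover nearest-token paraphrases, never a verbatim string; that's exactly why verbatim leaks point to a hard prompt.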
Not with this framing. Adopting first- and second-person pronouns immediately collapses the simulation into a simple Turing-test scenario, and the computer's only personality objective (in terms of what was optimized during RLHF) is to excel at that Turing test. The given personalities are all roles performed by a single underlying actor.
As the saying goes, the best evidence for the shape-rotator/wordcel dichotomy is that techbros are terrible at words.
Oh! My Firefox dictionary doesn't have "shoggoth".
Sometimes folks need a reminder that the Sun is an eldritch being, an elder one whose very presence scorches us and whose shrieking gibberish is blessedly quelled by the vast gulf of space, in order to appreciate how apt the cosmic-horror analogy is. Other times it's more useful to think of a shoggoth as, say, several hundred tons of artfully-arranged FOOF. Peace be with you, Mr. "it's a computer doing math."