Instance banned from awful.systems for debating the groupthink

  1. Post in !techtakes@awful.systems attacks the entire concept of AI safety as a made-up boogeyman
  2. I disagree and am attacked from all sides for "posting like an evangelist"
  3. I give citations for things I thought would be obvious, such as that AI technology in general has been improving in capability compared to several years ago
  4. Instance ban, "promptfondling evangelist"

This one I'm not aggrieved about as much, it's just weird. It's reminiscent of the lemmy.ml type of echo chamber where everyone's convinced it's one way, because in a self-fulfilling prophecy, anyone who is not convinced gets yelled at and receives a ban.

Full context: https://ponder.cat/post/1030285 (Some of my replies were after the ban because I didn't pay attention carefully enough, so I didn't realize.)

47 comments
  • I am not sure if I read the correct thread, but I personally didn't find your arguments convincing, although I think a full ban is excessive (at least initially).

    Keep in mind that I do use a local LLM (as an elaborate spell-checker) and I am a regular user of ML-based video upscaling (I am a fan of niche 80s/90s b-movies).

    Forget the technical arguments for a second, and look at the socio-economic component behind US-style VC groups, AI companies, and US technology companies in general (other companies are a separate discussion).

    It is not unreasonable to believe that the people involved (especially the leadership) in the above-mentioned organizations are deeply corrupt and largely incapable of honesty or even humanity [1]. It is a controversial take (by US standards) but not without precedent in the global context. In many countries, if you try to argue that some local oligarch is acting in good faith, people will assume you are trying (and failing) to perform a standup comedy routine.

    If you do hold a critical attitude and don't buy into tedious PR about "changing the world", it is reasonable to assume that irrespective of the validity of "AI safety" as a technical concept, the actors involved would lie about it. And even if the concept was valid, it is likely they would leverage it for PR while ignoring any actual academic work behind "AI safety" (if it exists).

    One could even argue that your argumentation approach is an example of provincialism, groupthink, and generally bad faith.

    I am not saying you have to agree with me; I am just trying to show a different perspective.

    [1] I can provide some of my favourite examples if you like; I don't want to make this reply any longer.

    • I’m not saying that any of what you just said is not true. I’m saying that all of that can be true, and AI can still be dangerous.

      • That's not what we are discussing, though. We are discussing whether awful.systems was right or wrong in banning you. Below is the title of your post:

        Instance banned from awful.systems for debating the groupthink

        I will note that I don't think they should be this casual with giving out bans. A warning to start with would have been fine.

        An argument can be made that you went into awful.systems with your own brand of groupthink; specifically, a complete rejection of even the possibility that we are dealing with bad-faith actors. Whether you like it or not, this is relevant to any discussion of "AI safety" more broadly, and to that thread specifically (as the focus of the linked article was on Apollo Research and Anthropic, and on AI Doomerism as a grifting strategy).

        You then go on to cite a YT video by "Robert Miles AI Safety"; this is a red flag. You also claim that you can't (or don't want to) provide a brief explanation of your argument and defer to the YT video instead. This is another red flag. It is reasonable to expect a 2-3 sentence overview from someone who actually has some knowledge of the issue. This is not some sort of bad-faith request.

        Further on, you start talking about the "Dunning-Kruger effect" and the "deeper understanding [that YT fellow has]". If you know the YT fellow has a deeper understanding of the issue, why can't you explain in layman's terms why this is the case?

        I did watch the video, and it has nothing to do with the grifting approaches used by AI companies. The video is focused on explaining a relatively technical concept to non-specialists (not AI safety more broadly in the context of real-world use).

        Further on, you talk about non-LLM ML/AI safety issues without any explanation of what you are referring to. Can you please let us know what you mean (I am genuinely curious)?

        You cite a paper; can you provide a brief summary of its findings and why they are relevant to a skeptical interpretation of "AI safety" messaging from organizations like Apollo Research and Anthropic?

    • It is a controversial take (by US standards)

      The only people in denial about corruption within US government and corporate systems are boomers.

      Clearly most common folk know the deal, hence why Luigi is a hero and we don't even know if he actually destroyed that parasite Brian Thompson.

      • It's been a while since I've been to (or lived in) the US (I do have close friends who lived there, though), but I disagree. It seemed like a general social issue that crosses all demographic segments.

  • The old school tech guys are super anti-AI. I think it’s the usual refusal to keep up with new tech.

    I’ll admit, I was in the same basket, until I heard a professor speak about it at my son’s university. They were talking about university concerns of students cheating with AI, and one progressive professor told us about how she encourages its use. She said now that pandora’s box has been opened, the best thing she could do to best prepare them for the real world was teach them to use it properly to improve their workflow, rather than try to ban its use.

    There was more to it, but at the end of the talk I realized I’d made the mistake of writing it off. I made the exact same mistake a lot of these tech guys are now, and underestimated how fast it’s advancing. I messed with AI a couple years prior, wasn’t impressed, and let that form my opinions. When I tried it again, I couldn’t believe how much more impressive it was than before. Then I stayed with it, and I couldn’t believe how fast I was watching it improve every single month. If you’re not working with it regularly, you really cannot understand how fast this is moving.

    Realizing this was similar to the invention of the digital calculator, I tried to spread the word to the old farts that the abacus would soon be dead. But none of them want to hear it. Saying anything positive about AI will get you slammed with downvotes and bans, and lots of lectures like "I tried it two years ago, and it was a joke." It was shocking to me how many Luddites are in tech.

    Screw 'em. Let them get left behind. Can't drag someone into the future who wants to be stuck in the past. It still has a long way to go, but I've started using it to speed up my workflow. Even with the mistakes it makes, it's worth it for how fast I can now get through the blank-page phase of a project. No more boilerplate work slowing me down. It probably won't become sentient in my lifetime, but damn if it isn't an incredibly useful tool.
