Posts: 4 · Comments: 558 · Joined: 2 yr. ago

  • Agreed, it was definitely a good read. Personally I’m leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it were simply “poor logic = far-right rhetoric”, though.

  • So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses, or that the market capital it has and its projected continual market growth have absolutely 0 bearing on its utility? I feel like, thanks to crypto bros, anyone with little to no understanding of market economics can just spout “fomo” and “hype train” as if that were a compelling enough argument on its own.

    The explosion of research into AI? Its use for education? Its uses for research in fields like organic chemistry, the folding of complex proteins, or drug synthesis? All hype train and fomo, huh? Again: naive.

  • Not to be that guy, but training on a data set that isn’t intentionally malicious yet contains security vulnerabilities is peak “we’ve trained him wrong, as a joke”. Not intentionally malicious != good code (see the sketch below for the kind of thing I mean).

    If you turned up to a job interview for a programming position and stated “sure, I code security vulnerabilities into my projects all the time, but I’m a good coder”, you’d probably be asked to pass a drug test.
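
    To make the “not intentionally malicious != good code” point concrete, here’s a minimal, hypothetical sketch of the kind of benign-looking but insecure code that litters scraped training data: SQL built by string interpolation (injectable) next to the parameterized version. The function names and toy schema are made up purely for illustration.

    ```python
    import sqlite3

    def find_user_insecure(conn: sqlite3.Connection, username: str):
        # Benign-looking but vulnerable: building SQL via string interpolation
        # lets an input like  x' OR '1'='1  return every row (SQL injection).
        query = f"SELECT id, username FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized query: the driver handles escaping, so no injection.
        return conn.execute(
            "SELECT id, username FROM users WHERE username = ?", (username,)
        ).fetchall()
    ```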

  • Whilst venture capitalists have their mitts all over GenAI, I feel like Lemmy is sometimes wilfully naive about how useful it is. A significant portion of the tech industry (and even non-tech industries by this point) has integrated GenAI into its day-to-day work. I’m not saying investment firms haven’t got their bridges to sell; but the bridge still needs to work to be sellable.

  • Wow, such a compelling argument.

    If the rapid progress over the past 5 or so years isn’t enough (a consumer-grade GPU used to generate double-digit tokens per minute at best), and its widespread adoption and market capture aren’t enough, what is?

    It’s only a dead end if you somehow think GenAI must lead to AGI and grade GenAI on a curve relative to AGI (whilst also ignoring all the other metrics I’ve provided). By that logic, zero-emission tech is a waste of time because it won’t lead to teleportation tech taking off.

  • “Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.”

    “A dead end.”

    That is simply, verifiably false, and absurd to claim.

    Edit: downvote all you like; the current generative AI market is on track to be worth ~$60 billion by the end of 2025, and is projected to reach $100-300 billion by 2030. Dead end indeed.

  • Would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

    One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

    If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

    As much as I love the speculation that we’ll just stumble onto AGI, or that current AI is a magical thing we don’t understand, ChatGPT sums it up nicely:

    Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.

    So, as you said, feed it bullshit and it’ll produce bullshit, because that’s what it’ll think you’re after (toy sketch of that below). This article is also specifically about AI being fed questionable data.
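
    Since the quote above is about predicting “what’s most likely to follow”, here’s a purely illustrative toy sketch of that idea: a tiny bigram counter that always picks the most frequent continuation, with no notion of whether the output is true. The corpus and function name are made up; real LLMs do the same thing over learned token probabilities at enormous scale.

    ```python
    from collections import Counter

    # Hypothetical toy corpus; an LLM's "corpus" is a huge chunk of the internet.
    corpus = "the model predicts the next word the model predicts text".split()
    bigrams = Counter(zip(corpus, corpus[1:]))  # count word -> next-word pairs

    def most_likely_next(word: str) -> str:
        # Pick the continuation seen most often after `word`; truth never enters into it.
        candidates = {b: c for (a, b), c in bigrams.items() if a == word}
        return max(candidates, key=candidates.get) if candidates else "<unk>"

    print(most_likely_next("model"))  # -> "predicts"
    print(most_likely_next("the"))    # -> "model" (its most frequent continuation)
    ```

    Feed it garbage text and it will happily hand garbage back; scale that up and you get the “bullshit in, bullshit out” behaviour described above.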

  • Bezos was on social media asking, “Who’d you pick as the next Bond?” Suggestions included Tom Cruise, Elon Musk and Top Gear’s James May.

    I don’t know what’s more confusing: that people gave those answers, or that the Telegraph posted them. Though a disturbing part of me would want it to be Elon, just so all his dick riders could watch the entire world shit on their fat, man-child lord jiggling on screen, role-playing as a British secret agent.

    Wouldn’t mind May, as long as it was a comedy the whole time without telling him. That, and the last scene ends with someone calling him a pillock off-screen.

  • Right, but as has already been explained to you, ages ago by this point, it isn’t going to be “unmoderated”. So pack it in.

    As much as they’ve been obnoxious recently, I don’t have a problem with conservatives having a space here, so I don’t think they would/should be deleted unless this sort of content continues.

  • Bonus post hoc whinge aimed at people pointing out pretty well-known and documented facts about SA, trying to pretend it’s racism/westerners/preaching. The guy posts antagonistic stuff critical of other countries on the daily; he can dish it out but can’t take it. He definitely hesitated though, guys.

    Edit: they got your boi Velociraptors first https://lemm.ee/post/56156039

  • All the US what-abouts and the reaching just to include Gaza in every conversation don’t change the facts about SA. Let’s not forget the public executions with swords that still occur, nor slavery’s history (or modern existence) in SA and the former Ottoman Empire. Seems like the US and SA have more in common than you selectively recall.