Posts: 42 · Comments: 523 · Joined: 1 yr. ago

  • (Another post so soon? Another post so soon.)

    "Gen AI competes with its training data, exhibit 1,764":

    Also got a quick sidenote, which spawned from seeing this:

    This is pure gut feeling, but I suspect that "AI training" has become synonymous with "art theft/copyright infringement" in the public consciousness.

    Between AI bros publicly scraping against people's wishes (Exhibit A, Exhibit B, Exhibit C), the large-scale theft of data which went to produce these LLMs' datasets, and the general perception that working in AI means you support theft (Exhibit A, Exhibit B), I wouldn't blame Joe Public for treating AI as inherently infringing.

  • New piece from Ars Technica: Meta smart glasses can be used to dox anyone in seconds, study finds:

    Two Harvard students recently revealed that it's possible to combine Meta smart glasses with face image search technology to "reveal anyone's personal details," including their name, address, and phone number, "just from looking at them."

    In a Google document, AnhPhu Nguyen and Caine Ardayfio explained how they linked a pair of Meta Ray Bans 2 to an invasive face search engine called PimEyes to help identify strangers by cross-searching their information on various people-search databases. They then used a large language model (LLM) to rapidly combine all that data, making it possible to dox someone in a glance or surface information to scam someone in seconds—or other nefarious uses, such as "some dude could just find some girl’s home address on the train and just follow them home,” Nguyen told 404 Media.

    This is all possible thanks to recent progress with LLMs, the students said.

    Putting my off-the-cuff thoughts on this:

    1. Right off the bat, I'm pretty confident AR/smart glasses will end up dead on arrival - I'm no expert in marketing/PR, but I'm pretty sure "our product helped someone dox innocent people" is the kind of Dasani-level disaster which pretty much guarantees your product will crash and burn.
    2. I suspect we're gonna see video of someone getting punched for wearing smart glasses - this story's given the public a first impression of smart glasses that boils down to "this person's a creep", and it's a lot easier to physically assault someone wearing smart glasses than some random LLM.
    3. This is a gut feeling I've had since Baldur talked about AI's public image nearly three months ago, but this story gives me further reason to expect the public to be outright hostile to the tech industry once the AI bubble pops.
  • If we're lucky, it'll cut off promptfondlers' supply of silicon and help bring this entire bubble crashing down.

    It'll probably also cause major shockwaves for the tech industry at large, but by this point I hold nothing but unfiltered hate for everyone and everything in Silicon Valley, so fuck them.

  • I vaguely remember mentioning this AI doomer before, but I ended up seeing him openly stating his support for SB 1047 whilst quote-tweeting a guy talking about OpenAI's current shitshow:

    I've had this take multiple times before, but now I feel pretty convinced the "AI doom/AI safety" criti-hype is going to end up being a major double-edged sword for the AI industry.

    The industry's publicly and repeatedly hyped up the idea that it's developing something so advanced and so intelligent that it could turn humanity into paperclips if something went wrong. Whilst they've succeeded in getting a lot of people to buy this idea, they're now facing the problem that people don't trust them to use their supposedly world-ending tech responsibly.

  • oh hey, it's that recurring thought I have about the "AI Safety" criti-hype popping back into my head

    Expanding on that, part of me feels Altman is gonna find all the rhetoric he made about "AI doom" being used against him in the future - man's given the true believers reason to believe he'd commit omnicide-via-spicy-autocomplete for a quick buck.

    Hell, the true believers who made this pretty explicitly pointed out Altman's made arguing for regulation a lot easier: