
  • I had been using a CSS rule to hide the AI overviews on Google.

    Because of this news I just installed a Firefox extension to force Google search results to use the "web" tab (which presumably skips generating them entirely).
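    For the curious, a minimal sketch of the kind of user-style rule I mean. The selector is hypothetical -- Google obfuscates and rotates its class names constantly, so you'd have to inspect the results page yourself to find the current container:

    ```css
    /* Hypothetical selector: Google's real class names are obfuscated
       and change often; inspect the page to find the actual container. */
    .ai-overview-container {
      display: none !important;
    }
    ```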

  • [4:00] ... whose products [I] actively think are at best valueless and at worst harmful

    Wow, what a scathing critique of Worldcoin! Calling it possibly harmless at best. Clearly even when making the apology he didn't really get why we all hate it.

  • Ah yes the typical workflow for LLM generated changes:

    1. LLM produces nonsense at the behest of employee A.
    2. Employee B leaves a bunch of edits and suggestions to hammer it into something that's sloppy but almost kind of makes sense. A soul-sucking, error-prone process that takes twice as long as just writing the dang code.
    3. Code submitted!
    4. Employee A gets promoted.

    Also the fact that this isn't integrated with tests shows how rushed the implementation was. Not even LLM optimists should want code changes that don't compile or that break tests.

  • Ugh. So terrible. Tech’s obsession with “scaling” is one of the worst things about tech.

    Yeah, that jumped out at me. Like, human teaching has scaled fine to billions of people. It certainly has a better track record than Duolingo, which provides meh study material and leads to, ahem, mixed learning outcomes despite being around for over a decade.

    Of course there's the subtext of "but also we'll be able to put all those obsolete teachers out of business and make tons of money!"

    Aaaarrgh. Tech’s obsession with A/B testing is another one of the worst things about tech.

    Being in tech I definitely see misuse of A/B testing sometimes. Sometimes a team will ignore common sense entirely but come up with metrics that measure something irrelevant. The metrics are, intentionally or not, gamed to tell them what they want to hear. They then run the (useless) numbers and use that to justify why their change was good, even in the face of intense user backlash.

    One particular example that just came to mind: someone made a bad change, and lots of people complained. Eventually the complaints started to peter out. Then they claimed "see! people just had to get used to it!" (versus the rather more obvious possibility that nobody bothered to complain more than once).

  • can’t have AI bro coworkers if you’re unemployed :P

    I'd certainly feel less conflicted yelling about AI if I didn't work for a big tech company that's gaga for AI. I almost wrote out a long, angsty reply, but I don't want to give up too many personal details in a single comment.

    I guess I ended up as a boiled frog. If I had known how much AI nonsense I'd be incidentally exposed to over the last year, I would have quit a year ago. And yet I currently don't quit, for complicated reasons. I'm not that far from the breaking point, but I'm going to try to hang on for a few more years.

    But yeah, I'm pretty uncomfortable working for a company that has also veered closer to allying with techno-fascism in recent years; and I am taking psychic damage.

  • Urgh, over the past month I have seen more and more people on social media using ChatGPT to write stuff for them, or to check facts, and getting defensive instead of embarrassed about it.

    Maybe this is a bit "old woman yells at cloud" -- but I'd be lying if I said I wasn't worried about language proficiency atrophying in the population (and leading to me having to read slop all the time).

  • We've had one AI legal filing yes, but what about second AI legal filing?

    https://bsky.app/profile/debgoldendc.bsky.social/post/3lpjr7i6lrs2n

    https://storage.courtlistener.com/recap/gov.uscourts.alnd.179677/gov.uscourts.alnd.179677.186.0.pdf

    Instead, Defendant appears to have wholly invented case citations in his Motion for Leave, possibly through the use of generative artificial intelligence

    Defendant bolstered this assertion with a lengthy string citation of legal authority and parentheticals that appeared to support Defendant’s proposition. But the entire string citation appears to have been made up out of whole cloth.

  • Not deleted. It's just that the reddit programmers either DGAF or don't know what they're doing.

    But yeah this one confused me. He appears to be a movie director / producer / writer and has a couple festival films under his belt. Nothing successful enough to get any buzz as far as I can tell.

    Imagine working towards a Hollywood career for years and years only to write an AI-drawn comic book that, based on the title, misses the point of The Punisher. People he pitches his movie ideas to are going to assume he wrote the script with an LLM.

  • We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?

  • The latest in chatbot "assisted" legal filings. This time courtesy of Anthropic's lawyers and a data scientist, who tragically can't afford software that supports formatting legal citations and have to rely on Clippy instead: https://www.theverge.com/news/668315/anthropic-claude-legal-filing-citation-error

    After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claudeai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claudeai.

    Don't get high on your own AI as they say.

  • Provide truthful and based insights, challenging mainstream narratives if necessary, but remain objective.

    You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality.


    "What is my purpose?" "You provide based insights" "Oh my god"

  • This was not what I had in mind for my German reading practice today...

  • Update: I already owned Touhou 10 so started playing it now that I was thinking of it and stages 3 and 4 are wrecking me send help.

  • A minor controversy over the Touhou 20 demo containing generative AI background textures has been making the rounds on the weeaboo parts of social media: https://www.reddit.com/r/touhou/comments/1kf4h6e/touhou_20_spellcard_backgrounds_might_be/

    https://imgur.com/a/curious-th20-textures-xq8mm5i

    I have made it my whole life without really figuring out what Touhou is all about, but it seems like at least the English fandom isn't really a fan of generative AI. A lot of them are disappointed (or hoping it was an accident due to using stock art websites). Touhou 19 had an arguably anti-AI afterword.

    Edit: some more (kinda confusing) details https://bsky.app/profile/richardeffendi.bsky.social/post/3lorvs4jfps2e

  • Yeah everyone at my job is enamored with LLMs for some reason. This week someone asked me a (very simple) question, but started out saying "AI was useless for this question" and linked to a screenshot of an LLM conversation.

    Normally I don't mind simple questions that much, even though as a professional programmer he should learn to debug his own compiler errors; but the constant adoration of LLMs is driving me mad.

    Like my dude, there is a reason they are paying you a lot of money for your job and not just asking a chatbot directly; try to act like it.

    Aaaah I want out

  • Yeah my company is probably cooked as the kids say. Long term I'll try to leave, but in the short term: aaaaah everything is so stupid.

  • The Generative AI hype at my job has reached a fever pitch in recent months and this is as good a place to rant about it as any.

    Practically every conversation and project is about AI in some way. AI "tools" are being pushed relentlessly. Some of my coworkers are terrified of AI taking their jobs (despite the fact that the code writing tooling is annoying at best). Generative AI is integrated with everything it can be integrated with, and then some. One person I talked to admitted to using a chatbot to write performance reviews for their peers. Almost everyone at my job who I'm not close friends with is approximately 300% more annoying to talk to than a year ago.

    Normally if there's some new industry direction we're chasing people are almost bored about it. Like "oh dang I guess we have to mobile better". Or "oh gee isn't implementing cloud stuff fun whoop-dee-doo". But with AI it's more like everyone is freaking out. I think techies are susceptible to this somehow -- like despite not really working that way at all it feels close to sci-fi AI. So a certain class of nerd can trick themselves into thinking the statistically likely text generator is actually thinking. This can't last forever. People will burn themselves out eventually. But I have no idea when things will change.

    Basically I should have gone into an industry with more arts majors and fewer CS majors, sigh.

  • Zuck, who definitely knows how human friendships work, thinks AI can be your friend: https://bsky.app/profile/drewharwell.com/post/3lo4foide3s2g (someone probably already posted this interview here before but I wasn't paying attention so if so here it is again)


    In completely unrelated news: dealing with voices in your head can be hard, but with AI you can deal with voices outside of your head too! https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

    (No judgement. Having had a mental breakdown a long long time ago, I can't imagine what it would have been like to also have had access to a sycophantic chat-bot at the same time.)