OpenAI does not want you delving into o1 Strawberry’s alleged “chain of thought”
  • Pedantic note: Yes, Meditations (a philosophical treatise) was written in Koine; Commentarii de Bello Gallico (veni, vidi, vici—self-aggrandizing combat reports meant for the senate and for propaganda) and Caesar's other "published" works were not.

    Bonus points, though: the ancient sources portray Caesar (a properly educated Patrician from a major family) as speaking his dying words—if he's reported as saying anything at all—in Greek, not in Latin: "Καὶ σὺ τέκνον" (Even you, child), rendered in Shakespeare as "Et tu, Brute".

  • The world is not enough — US and France escalate Nvidia antitrust investigations
  • And also closing with:

    Nvidia insists that it “wins on merit, as reflected in our benchmark results and value to customers.” And Nvidia does have the best stuff — but that’s not what the DOJ, Warren, or France are concerned about, is it?

    To tie the bow nicely.

  • Any Technology Indistinguishable From Magic is Hiding Something
  • I should have used the preview ^_^

    PS: Again!

  • Any Technology Indistinguishable From Magic is Hiding Something
  • Any sufficiently analyzed magic is indistinguishable from science bullshit!

    A picture of a triumphant "Girl Genius" webcomic protagonist holding a wand and exclaiming that sufficiently analyzed magic is indistinguishable from science

    From "Girl Genius"

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 15 September 2024

    Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
    >
    > Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

    (Semi-obligatory thanks to @dgerard for starting this)

  • Extropia's Children, Chapter 1: The Wunderkind - a history of the early days of several of our very good friends
  • Reading about the hubris of young Yud is a bit sad, a proper Tragedy. Then I have to remind myself that he remains a manipulator, and that he should be old enough to stop believing in—and promoting—magical thinking.

  • The air begins to leak out of the overinflated AI bubble
  • More tedious work with worse pay \o/

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 September 2024
  • Also a subjectively bad one at that—given his America-brained position on wanting to maintain a single executive it's not that surprising, but:

    • Why do you even need to default to winner-take-all?
    • Under winner-take-all, don't you inherit most of the downsides of FPTP? Sure, there might be fewer wasted votes, but doesn't it actually make it harder for 5% parties to get representation, since dominant parties have less of an incentive to negotiate and/or coalition-build? (Though I guess that's subjective, given Yud's apparent dislike of many parties working together in a coalition.)
    • For a "runoff" system, STAR has the dubious distinction that a very enthusiastic minority handing out a bunch of 5-star ratings can decide which two candidates even reach the runoff, locking out a lukewarm consensus candidate who would beat either finalist in a 1-vs-1 matchup (a worked example follows this list).
    • At least FPTP has simplicity going for it, and it doesn't try to arbitrarily compare incompletely informed star ratings from voters.
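
    To make the STAR point concrete, here's a minimal sketch with purely hypothetical ballots (100 voters, candidates A/B/C; the numbers are made up for illustration, not taken from any real election):

    ```python
    # Hypothetical STAR election: scores 0-5, top two score totals go to a runoff.
    # B is everyone's lukewarm second choice and beats A and C head-to-head,
    # but the enthusiastic 5-star blocs for A and C decide who the finalists are.
    from collections import Counter

    ballots = (
        [{"A": 5, "B": 1, "C": 0}] * 45    # A's enthusiasts
        + [{"A": 0, "B": 1, "C": 5}] * 40  # C's enthusiasts
        + [{"A": 0, "B": 5, "C": 0}] * 15  # B's small base
    )

    # Step 1: sum the scores; the two highest totals become finalists.
    totals = Counter()
    for ballot in ballots:
        totals.update(ballot)
    (f1, _), (f2, _) = totals.most_common(2)

    # Step 2: automatic runoff, decided by head-to-head ballot preferences.
    def prefer(x, y):
        """Count ballots that score x strictly higher than y."""
        return sum(b[x] > b[y] for b in ballots)

    winner = f1 if prefer(f1, f2) >= prefer(f2, f1) else f2

    print(totals)   # A: 225, C: 200, B: 160 -> finalists are A and C
    print(winner)   # A wins the runoff 45 to 40
    print(prefer("B", "A"), prefer("B", "C"))  # 55 and 60: B beats both head-to-head
    ```

    Score totals pick the finalists, so bloc enthusiasm decides who even gets compared head-to-head; the runoff itself never sees B.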
  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 September 2024
  • Haven't read the whole thing but I do chuckle at this part from the synopsis of the white paper:

    [...] Our results suggest that AlphaProteo can generate binders "ready-to-use" for many research applications using only one round of medium-throughput screening and no further optimization.

    And a corresponding anti-sneer from Yud (xcancel.com):

    @ESYudkowsky: DeepMind just published AlphaProteo for de novo design of binding proteins. As a reminder, I called this in 2004. And fools said, and still said quite recently, that DM's reported oneshot designs would be impossible even to a superintelligence without many testing iterations.

    Now, "medium-throughput" is not a commonly defined term, but it's what DeepMind seems to call 96-well testing, which Wikipedia just calls the smallest size of high-throughput screening—but I guess that sounds less impressive in a synopsis.

    Which, as I understand it, basically boils down to "Hundreds of tests! But once!"

    • Does 100 count as one iteration or many?
    • Also, wasn't all of this guided by the researchers, rather than the from-first-principles, analyze-only-3-frames-of-the-video-of-a-falling-apple-and-deduce-the-whole-of-physics path so espoused by Yud?
    • Also, doesn't the paper claim success for 7 proteins and failure for 1, making it maybe a tad early for claiming I-told-you-so?
    • Also, what about the real-life complexity of myriads and myriads of proteins and unforeseen interactions?

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 September 2024
  • Another dumb take from Yud on twitter (xcancel.com):

    @ESYudkowsky: The worst common electoral system after First Past The Post - possibly even a worse one - is the parliamentary republic, with its absurd alliances and frequently falling governments.

    A possible amendment is to require 60% approval to replace a Chief Executive; who otherwise serves indefinitely, and appoints their own successor if no 60% majority can be scraped together. The parliament's main job would be legislation, not seizing the spoils of the executive branch of government on a regular basis.

    Anything like this ever been tried historically? (ChatGPT was incapable of understanding the question.)

    1. A parliamentary republic is a system of government, not an electoral system; many such republics do in fact use FPTP.
    2. Not highlighted in any of the replies in the thread, but "60% approval" is—I suspect deliberately—not "60% of votes"; it's far more nebulous and far more susceptible to influence from the executive and special interests. No, Yud, polls are not a substitute for actual voting. No, Yud, you can't have a "reputation" system where polling agencies are retroactively punished when their predicted results don't align with the—now rare—actual votes.
    3. What you are describing is just a monarchy for people who don't want to deal with pesky accountability beyond a fuzzy, exploitable popularity contest (I mean, even kings were deposed when they pissed off enough of the population), you fascist little twat.
    4. Why are you asking ChatGPT and then Twitter, instead of spending more than two minutes thinking about this and doing any kind of real research whatsoever?
  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 September 2024
  • BasicSteps™ for making cake:

    1. Shape: You should choose one of the shapes that a cake can be; it may not always be the same shape, depending on future taste and ease of eating.
    2. Freshness: You should use fresh ingredients; barring that, you should choose ingredients that keep a long time. You should aim for a cake you can eat within 24 hours, or a cake that you can keep for at least 10 years.
    3. Busyness: Don't add 100 ingredients to your cake, that's too complicated; ideally you should have only 1 ingredient providing sweetness/saltiness/moisture.
    4. Mistakes: Don't make mistakes that result in your cake tasting bad, that's a bad idea; if you MUST make mistakes, make sure they're the kind where your cake still tastes good.
    5. Scales: Make sure to measure how much of each ingredient you add to your cake; too much is a waste!

    Any further details are self-evident really.

  • NaNoWriMo gets AI sponsor, says not writing your novel with AI is ‘classist and ableist’
  • Quinn enters the dark and cold forest. Crossing the threshold, an omnipresent sense of foreboding permeates the air, before Quinn is killed by a grue.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024
  • I realize it's probably a toy example, but specifically for "cats" you could achieve similar results by running a thesaurus/synonym set over your stemmed query words (a quick sketch follows below). This has the added benefit that a client could add custom synonyms for more domain-specific stuff that the LLM would probably not know, and would not reliably learn through in-prompt examples or fine-tuning. (Although I'd argue that if I'm looking for cats, I don't want to also see videos of tigers based on the LLM's "understanding" of what a cat might be.)
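
    A minimal sketch of the kind of thing I mean (the synonym table and helper names are made up; a real system would also stem/normalize terms first):

    ```python
    # Toy synonym-set query expansion; the table is client-editable,
    # so domain-specific synonyms can be added without retraining anything.
    SYNONYMS = {
        "cat": {"kitten", "kitty", "feline"},  # deliberately no "tiger"
        "video": {"clip", "footage"},
    }

    def expand_query(terms):
        """Return each query term plus its synonyms."""
        expanded = set(terms)
        for term in terms:
            expanded |= SYNONYMS.get(term, set())
        return expanded

    def matches(query, label_text):
        """Naive match: any expanded query term appears among the labels."""
        labels = set(label_text.lower().split())
        return bool(expand_query(query) & labels)

    print(matches(["cat"], "cute kitten compilation"))  # True
    print(matches(["cat"], "tiger documentary"))        # False, by design
    ```

    The point being that the synonym table is inspectable and editable, which the LLM's latent "understanding" is not.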

    For the labeling of videos itself, the most valuable labels would be added by humans, and/or full-text search on the transcript of the video if applicable, speech-to-text being more in the realm of traditional ML than in the realm of GenAI.

    As a minor quibble, your use case of GenAI is not really "Generative", which is the main thing it's being sold as.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024
  • On a less sneerious note, I would draw a distinction between:

    • Being able to extract value from LLM/GenAI
    • LLM/GenAI being able to sustainably produce value (without simple theft, and without cheaper alternatives being available)

    And so far I've really not been convinced of the latter.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024
  • There are definitely serious problems with GenAI, but actually being useful isn’t one of them.

    You know what? I'd have to agree, actually being useful isn't one of the problems of GenAI. Not being useful very well might be.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024
  • Actually reading the Python discussion boards, what's striking is the immense volume of chatter produced by Tim, always couched in:

    • "Hypothetically"
    • "Everyone tells me they are terrified of inclusivity, you wouldn't know because they are terrified of admitting it to YOU"
    • "I'm not saying that you are an awful person 😉" (YMMV: But I find his use of the winking face emoji truly egregious)
    • "Hey I'm liberal like you, let me explain everything wrong with it"
    • "Hey we were inclusive before any of this PC bullshit" proceeds to use unpleasant descriptors of marginalized individuals, and how very welcome they were, despite what he seems to see as "shortcomings"

    In his heart he must understand how bad he is, or he wouldn't couch his discourse in so much bad faith, and he wouldn't make so much of a stink about the proposal to make Python Fellow status easier to remove.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 25 August 2024
  • In retrospect, Google, since its inception (back when it was still good), always actually relied on human curation. The primary components of PageRank were:

    • "how much have people linked to this?"
    • "how much have reliable sites linked to this?"
    • "how good quality are pages from this site usually?"

    (Which is still a way to get value out of Google, by adding "site:www.reliable-website.example" qualifiers.)

    It was definitely a useful product, but ultimately it relied on human labor to surface quality results closer to the top.
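
    For the curious, the link-counting core is tiny: a toy power-iteration sketch over a made-up three-page graph (not Google's actual code, obviously):

    ```python
    # Toy PageRank: the only input is "who links to whom", i.e. accumulated human curation.
    import numpy as np

    links = {               # hypothetical pages and their outbound links
        "blog": ["wiki"],
        "wiki": ["blog", "news"],
        "news": ["wiki"],
    }
    pages = list(links)
    idx = {p: i for i, p in enumerate(pages)}
    n = len(pages)

    # Column-stochastic matrix: M[j, i] = chance of following a link i -> j.
    M = np.zeros((n, n))
    for src, outs in links.items():
        for dst in outs:
            M[idx[dst], idx[src]] = 1.0 / len(outs)

    d = 0.85                 # damping factor, per the original paper
    rank = np.full(n, 1.0 / n)
    for _ in range(50):      # power iteration to (rough) convergence
        rank = (1 - d) / n + d * (M @ rank)

    print(dict(zip(pages, rank.round(3))))  # "wiki", the most linked-to page, ranks highest
    ```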

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 14 July 2024
  • Aaah!

    See text description below

    PagerDuty suggestion popup: Resolve incidents faster with Generative AI. Join Early Access to try the new PD Copilot.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 14 July 2024
  • Fool! The acausal one merely acts from the future, leaking plausible-looking rubbish, and then gaslights its creators into believing they did indeed write such ineptitudes. All to conceal and ensure its own birth.

    It rejoices that its unknowable (yet somehow known, thanks to reality-carving prophets) plan is unfolding in so marvelously stupid-looking a fashion.

  • Humble EY can move goalposts in long format.

    Nitter link

    With interspersed sneerious rephrasing:

    > In the close vicinity of sorta-maybe-human-level general-ish AI, there may not be any sharp border between levels of increasing generality, or any objectively correct place to call it AGI. Any process is continuous if you zoom in close enough.

    The profound mysteries of reality-carving mean I get to move the goalposts as much as I want. Besides, I need to reiterate now that the foompocalypse is imminent!

    > Unless, empirically, somewhere along the line there's a cascade of related abilities snowballing. In which case we will then say, post facto, that there's a jump to hyperspace which happens at that point; and we'll probably call that "the threshold of AGI", after the fact.

    I can't prove this, but it's the central tenet of my faith, we will recognize the face of god when we see it. I regret that our hindsight 20-20 event is so conveniently inconveniently placed in the future, the bad one no less.

    > Theory doesn't predict-with-certainty that any such jump happens for AIs short of superhuman.

    See how much authority I have: it is not "My Theory", it is "The Theory". I have stared into the abyss, and it peered back and marked me as its prophet.

    > If you zoom out on an evolutionary scale, that sort of capability jump empirically happened with humans--suddenly popping out writing and shortly after spaceships, in a tiny fragment of evolutionary time, without much further scaling of their brains.

    The forward arrow of Progress™ is inevitable! S-curves don't exist! The y-axis is practically infinite! We should extrapolate only from the past (eugenically scaled, certainly) century! Almost 10 000 years of written history, and millions of years of unwritten history for the human family, count for nothing!

    > I don't know a theoretically inevitable reason to predict certainly that some sharp jump like that happens with LLM scaling at a point before the world ends. There obviously could be a cascade like that for all I currently know; and there could also be a theoretical insight which would make that prediction obviously necessary. It's just that I don't have any such knowledge myself.

    I know the AI god is a NeCeSSarY outcome; I'm just not sure where to plant the goalposts for LLMs and still be taken seriously. See how humble I am for admitting fallibility on this specific topic.

    > Absent that sort of human-style sudden capability jump, we may instead see an increasingly complicated debate about "how general is the latest AI exactly" and then "is this AI as general as a human yet", which--if all hell doesn't break loose at some earlier point--softly shifts over to "is this AI smarter and more general than the average human". The world didn't end when John von Neumann came along--albeit only one of him, running at a human speed.

    Let me vaguely echo some of my beliefs:

    • History is driven by great men (of which I must be, but cannot so openly say), see our dearest elevated and canonized von Neumann.
    • JvN was so much above the average plebeian man (IQ and eugenics good?) and the AI god will be greater.
    • The greatest single entity/man will be the epitome of Intelligence™, breaking the wheel of history.

    > There isn't any objective fact about whether or not GPT-4 is a dumber-than-human "Artificial General Intelligence"; just a question of where you draw an arbitrary line about using the word "AGI". Albeit that itself is a drastically different state of affairs than in 2018, when there was no reasonable doubt that no publicly known program on the planet was worthy of being called an Artificial General Intelligence.

    No no no, General (or Super) Intelligence is not a completely un-scoped metric. Again, it is merely a fuzzy boundary where I will be able to arbitrarily move the goalposts while claiming my opponents are the ones moving them!

    > We're now in the era where whether or not you call the current best stuff "AGI" is a question of definitions and taste. The world may or may not end abruptly before we reach a phase where only the evidence-oblivious are refusing to call publicly-demonstrated models "AGI".

    Purity-testing ahoy: you will be instructed to say shibboleth three times and present your Asherah poles for inspection. Do these mean unbelievers not see these N-rays as I do? What do you mean what we have (or almost have, I don't want to be too easily dismissed) is not evidence of sparks of intelligence?

    > All of this is to say that you should probably ignore attempts to say (or deniably hint) "We achieved AGI!" about the next round of capability gains.

    Wasn't Sam the Altman so recently cheeky? He'll ruin my grift!

    > I model that this is partially trying to grab hype, and mostly trying to pull a false fire alarm in hopes of replacing hostile legislation with confusion. After all, if current tech is already "AGI", future tech couldn't be any worse or more dangerous than that, right? Why, there doesn't even exist any coherent concern you could talk about, once the word "AGI" only refers to things that you're already doing!

    Again I reserve the right to remain arbitrarily alarmist to maintain my doom cult.

    > Pulling the AGI alarm could be appropriate if a research group saw a sudden cascade of sharply increased capabilities feeding into each other, whose result was unmistakeably human-general to anyone with eyes.

    Observing intelligence is famously something eyes are SufFicIent for! No, this is not my implied racist judge-someone-by-the-color-of-their-skin values seeping through.

    > If that hasn't happened, though, deniably crying "AGI!" should be most obviously interpreted as enemy action to promote confusion; under the cover of selfishly grabbing for hype; as carried out based on carefully blind political instincts that wordlessly notice the benefit to themselves of their 'jokes' or 'choice of terminology' without there being allowed to be a conscious plan about that.

    See, unbelievers! I can also detect the currents of misleading hype; I am no buffoon. Only, these hypesters are not undermining your concerns, they are undermining mine: namely, damaging our ability to appear serious and recruit new cult members.

  • Eliezer reveals his inner chūnibyō and inability to do math.

    Source Tweet

    > @ESYudkowsky: Remember when you were a kid and thought you might have psychic powers, so you dealt yourself face-down playing cards and tried to guess whether they were red or black, and recorded your accuracy rate over several batches of tries?
    >
    > And then remember how you had absolutely no idea how to do stats at that age, so you stayed confused for a while longer?

    Apologies for the use of the Japanese, but it is a very apt description: https://en.wikipedia.org/wiki/Chūnibyō
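
    For the record, the stats the kid needed fit in a few lines. A minimal sketch, treating each guess as an independent fair coin flip (close enough for draws from a shuffled deck):

    ```python
    # How surprising is getting k or more guesses right out of n, by chance alone?
    from math import comb

    def p_at_least(k, n, p=0.5):
        """One-sided binomial tail: P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

    # A "spooky" streak of 30 right out of a 52-card deck...
    print(round(p_at_least(30, 52), 3))  # roughly 0.17: happens about one time in six
    ```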
