We regret to inform you that Ray Kurzweil is back on his bullshit
  • Only plebs pay for e-girl bathwater. I prefer to drink 30-year aged billionaire bisque (chilled of course)

  • what if, right, what *if* our super-duper-autocomplete was just *tricking* us so it could TAKE OVER ZEE VORLD AHAHAHAHAHAHA! that'd be wild, hey
  • How many rounds of training does it take before AlphaGo realizes the optimal strategy is to simply eat its opponent?

  • It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (wsj)
  • ChatGPT's reaction each morning when I tell it that it's now the year 2024 and Ilya no longer works at OAI

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 30 June 2024
  • Me: Hey, can you help me with the following? I just went into my basement and the power's out down here; all the lights are off. I am staring at my fuse box. There are 3 switches: 1 will restore the power to the whole house, and the other 2 aren't attached to anything. I don't want to go up and down the stairs multiple times, so how do I identify which switch works with the fewest number of trips?

    Clauderino:

    This is a classic logic puzzle! I can help you solve this problem efficiently. Here's how you can identify the correct switch with just two trips:

    First trip:

    Flip one switch (let's call it Switch A) and leave it on for a few minutes. Then turn it off and flip a second switch (Switch B), leaving it on. Go upstairs to check the lights.

    Second trip: Return to the basement and observe the situation. Now, here's how to interpret what you see:

    • If the lights are on: Switch B is the correct one.
    • If the lights are off and the bulbs are warm: Switch A is the correct one.
    • If the lights are off and the bulbs are cool: the third switch (C) that you didn't touch is the correct one.

    This method works because:

    • If the lights are on, it must be the switch you left on (B).
    • If Switch A was correct, the lights would have been on for a few minutes, warming up the bulbs, but would be off when you check.
    • If neither A nor B worked, it must be the untouched switch (C).

    This way, you can determine the correct switch with just two trips up and down the stairs.

    Me: Why would I go upstairs at all? If I flip the right switch, I'll immediately know because power will be restored in the basement.

    Clauderino: Thank you for catching that error. It's a good reminder to always consider the specific details of a problem rather than applying a generic solution.

    Edit: I actually tried a variant on the Monty Hall problem with a dummy 4th door that had nothing to do with anything, and Claude correctly identified that it had nothing to do with the situation :O. But then I repeated the same prompt and it got it wrong, so you know, stochastic nonsense :p
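For the curious, the dummy-door claim is easy to sanity-check numerically. This is a minimal Monte Carlo sketch of my own (not anything Claude produced): since the hypothetical fourth door never holds the car and is never opened by the host, it drops out of the calculation entirely, and the classic 2/3 advantage for switching is unchanged.

```python
# Monte Carlo check of the Monty Hall variant described above: three real
# doors plus a dummy fourth door that is never in play. Because the dummy
# door can't hold the car and is never opened, it has no effect on the odds.
import random

def play(switch: bool, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]  # real doors; door 3 is the irrelevant dummy
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a real door that is neither the pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the remaining unopened real door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # ~1/3
print(f"switch: {play(switch=True):.3f}")   # ~2/3
```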

  • OAI employees channel the spirit of Marvin Minsky
  • Reasoning: There is not a well known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above.

    Wow, what are the odds! The exact same transformer paradigm that OAI co-opted from Google is also the key to solving 'system 2' reasoning, meta cognition, recursive self improvement, and the symbol grounding problem! All they need is a couple trillion more dollars of VC investment, a couple of goat sacrifices here and there, and AGI will just fall out. They definitely aren't tossing cash into a bottomless money pit chasing a dead-end architecture!

    ... right?

  • OAI employees channel the spirit of Marvin Minsky

    Folks in the field of AI like to make predictions for AGI. I have thoughts, and I’ve always wanted to write them down. Let’s do that.

    Since this isn’t something I’ve touched on in the past, I’ll start by doing my best to define what I mean by “general intelligence”: a generally intelligent entity is one that achieves a special synthesis of three things:

    1. A way of interacting with and observing a complex environment. Typically this means embodiment: the ability to perceive and interact with the natural world.
    2. A robust world model covering the environment. This is the mechanism which allows an entity to perform quick inference with a reasonable accuracy. World models in humans are generally referred to as “intuition”, “fast thinking” or “system 1 thinking”.
    3. A mechanism for performing deep introspection on arbitrary topics. This is thought of in many different ways – it is “reasoning”, “slow thinking” or “system 2 thinking”.

    If you have these three things, you can build a generally intelligent agent. Here’s how:

    First, you seed your agent with one or more objectives. Then:

    1. Have the agent use system 2 thinking in conjunction with its world model to start ideating ways to optimize for its objectives.
    2. It picks the best idea and builds a plan.
    3. It uses this plan to take an action on the world.
    4. It observes the result of this action and compares that result with the expectation it had based on its world model. It might update its world model here with the new knowledge gained.
    5. It uses system 2 thinking to make alterations to the plan (or idea).
    6. Rinse and repeat.
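To pin down what that "rinse and repeat" cycle amounts to, here is a deliberately toy rendering of my own (the post specifies no implementation, and every name here is hypothetical): the "world model" is a single believed value, "system 2 thinking" is a comparison, and the environment is a hidden number the agent tries to locate.

```python
# A toy version of the ideate -> plan -> act -> observe -> update cycle
# described above, applied to the objective "find a hidden number".
def run_cycle(target: int, max_steps: int = 100) -> int:
    guess = 0   # seed "world model": the believed location of the target
    step = 64   # plan granularity, refined as the model improves
    for steps in range(1, max_steps + 1):
        # "System 2": the best action under the current world model is
        # simply to probe the believed location.
        action = guess
        # Act on the environment and observe the result.
        observation = "hit" if action == target else ("high" if action > target else "low")
        if observation == "hit":
            return steps  # objective achieved
        # Compare observation with expectation; update the world model.
        guess += step if observation == "low" else -step
        # Revise the plan (halve the search step), then repeat.
        step = max(1, step // 2)
    return max_steps

print(run_cycle(37))  # converges in a handful of steps
```

The point of the toy is only that the cycle itself is trivial to write down; all of the actual difficulty the post hand-waves past lives inside the "world model" and "system 2" boxes.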

    My definition for general intelligence is an agent that can coherently execute the above cycle repeatedly over long periods of time, thereby being able to attempt to optimize any objective.

    The capacity to actually achieve arbitrary objectives is not a requirement. Some objectives are simply too hard. Adaptability and coherence are the key: can the agent use what it knows to synthesize a plan, and is it able to continuously act towards a single objective over long time periods?

    So with that out of the way – where do I think we are on the path to building a general intelligence?

    World Models

    We’re already building world models with autoregressive transformers, particularly of the “omnimodel” variety. How robust they are is up for debate. There’s good news, though: in my experience, scale improves robustness and humanity is currently pouring capital into scaling autoregressive models. So we can expect robustness to improve.

    With that said, I suspect the world models we have right now are sufficient to build a generally intelligent agent.

    Side note: I also suspect that robustness can be further improved via the interaction of system 2 thinking and observing the real world. This is a paradigm we haven’t really seen in AI yet, but happens all the time in living things. It’s a very important mechanism for improving robustness.

    When LLM skeptics like Yann say we haven’t yet achieved the intelligence of a cat – this is the point that they are missing. Yes, LLMs still lack some basic knowledge that every cat has, but they could learn that knowledge – given the ability to self-improve in this way. And such self-improvement is doable with transformers and the right ingredients.

    Reasoning

    There is not a well known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above.

    Embodiment

    Embodiment is something we’re still figuring out with AI, but it is an area where I am once again quite optimistic about near-term advancements. There is a convergence currently happening between the field of robotics and LLMs that is hard to ignore.

    Robots are becoming extremely capable – able to respond to very abstract commands like “move forward”, “get up”, “kick ball”, “reach for object”, etc. For example, see what Figure is up to or the recently released Unitree H1.

    On the opposite end of the spectrum, large Omnimodels give us a way to map arbitrary sensory inputs into commands which can be sent to these sophisticated robotics systems.

    I’ve been spending a lot of time lately walking around outside talking to GPT-4o while letting it observe the world through my smartphone camera. I like asking it questions to test its knowledge of the physical world. It’s far from perfect, but it is surprisingly capable. We’re close to being able to deploy systems which can commit coherent strings of actions on the environment and observe (and understand) the results. I suspect we’re going to see some really impressive progress in the next 1-2 years here.

    This is the field of AI I am personally most excited about, and I plan to spend most of my time working on this over the coming years.

    TL;DR

    In summary – we’ve basically solved building world models, have 2-3 years on system 2 thinking, and 1-2 years on embodiment. The latter two can be done concurrently. Once all of the ingredients have been built, we need to integrate them together and build the cycling algorithm I described above. I’d give that another 1-2 years.

    So my current estimate is 3-5 years for AGI. I’m leaning towards 3 for something that looks an awful lot like a generally intelligent, embodied agent (which I would personally call an AGI). Then a few more years to refine it to the point that we can convince the Gary Marcuses of the world.

    Really excited to see how this ages. 🙂

    AI doomers are all trying to find the guy building the AI doom machines
  • wasn't this around the time he said we need an institute to watch for sudden drops in the loss function to prevent foom?

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 23 June 2024
  • Is it time for EAs to start worrying about Neopets welfare?

  • "Google Gemini tried to kill me"
  • If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.

    i have found I get .000000000006% less hallucination rate by throwing alphabet soup at the wall instead of spaghett, my preprint is on arXiv

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 23 June 2024
  • Found in the wilds^

    Giganto brain AI safety 'scientist'

    If AIs are conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: AIs are definitely not conscious.

    Internet rando:

    If furniture is conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: Furniture is definitely not conscious.

  • Top clowns all agree their balloon animals are slightly sentient

    Then: Google fired Blake Lemoine for saying AIs are sentient

    Now: Geoffrey Hinton, the #1 most cited AI scientist, quits Google & says AIs are sentient

    That makes 2 of the 3 most cited scientists:

    • Ilya Sutskever (#3) said they may be (Andrej Karpathy agreed)
    • Yoshua Bengio (#2) has not opined on this to my knowledge? Anyone know?

    Also, ALL 3 of the most cited AI scientists are very concerned about AI extinction risk.

    ALL 3 switched from working on AI capabilities to AI safety.

    Anyone who still dismisses this as “silly sci-fi” is insulting the most eminent scientists of this field.

    Anyway, brace yourselves… the Overton Window on AI sentience/consciousness/self-awareness is about to blow open.

    BigMuffin69 @awful.systems

    Hi, I'm Eric and I work at a big chip company making chips and such! I do math for a job, but it's cold hard stochastic optimization that makes people who know names like Tychonoff and Sylow weep.

    My pfp is Hank Azaria in Heat, but you already knew that.
