  • It should be so fucking obvious that self-driving cars can't exist yet. Anyone playing with LLMs right now knows it takes a massive model to have general functionality and flexibility. There is no chance that fine-tuning a small model can replace a large model for real-world situational adaptability. My favorite open-source offline LLMs are all 70B models. Running these on real-time-capable hardware costs around $30k. That is not scalable to lower costs: it's bleeding-edge 5nm fab nodes and some of the largest dies ever produced. No one is ethically monitoring and regulating this. I'm just playing with this as a hobby, but AI in cars is so obviously stupid right now. The hardware simply isn't there yet.
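
    A rough back-of-the-envelope sketch of the hardware point above (round numbers assumed for illustration, not figures from any benchmark): a 70B model's weights in 16-bit floats alone come to roughly 140 GB, which is far more memory than any single consumer GPU offers.

    ```python
    # Back-of-the-envelope sketch using assumed round numbers (not benchmarks):
    # a 70B-parameter model stored as fp16/bf16 needs ~140 GB just for its
    # weights, so it cannot fit on any single consumer GPU.

    params = 70e9              # 70 billion parameters
    bytes_per_param = 2        # fp16 / bf16
    weights_gb = params * bytes_per_param / 1e9

    consumer_vram_gb = 24      # assumed high-end consumer card (~24 GB class)
    cards_needed = -(-weights_gb // consumer_vram_gb)   # ceiling division

    print(f"weights: ~{weights_gb:.0f} GB")
    print(f"cards needed at {consumer_vram_gb} GB each: {cards_needed:.0f}")
    ```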

    • In ways yes, in ways no. LLMs are a tiny sliver of AI. Taking the current state of LLMs, which are being oversold as AGI, and trying to extrapolate it to other applications or other strategies involves a whole pile of oversimplifications. AI is not one single thing.

      It's like watching someone try to hammer in a screw and saying "anyone playing with tools right now knows they're never going to get a screw in", when you've never seen a screwdriver.

      If you were saying "visual machine learning for a general purpose and unassisted driverless car is not happening tomorrow", then sure.

      But things like the Waymo model are doing exceedingly well right now. Instead of taking a top-down approach to train cars to understand any intersection or road they could ever run into, they're going bottom-up by "manually" training them to understand small portions of cities really well. Waymo's problem set is greatly reduced, its space is much narrower, it's much more capable of receiving extremely accurate training data, and it performs way better because of it. It can then apply all the same techniques for object and obstacle detection other companies are using, but large swaths of the problem space are entirely eliminated.

      The hardware to solve those problems is much more available now. Doing the "computationally intensive" stuff offline at a "supercomputer center" and doing the small, trivial work on the cars themselves is very much a possibility. The "situational adaptability" requirement can be greatly reduced by limiting where the cars go, etc.

      The problems these cars are trying to solve have some overlap with your LLM experience, but it's not even close to the same problem or the same context. The way you're painting this is a massive oversimplification. It's also not a problem anyone thinks is going to be solved overnight (except Elmo, but he's pretty much alone on that one); they just know we're sitting right on the cusp, and the first company to solve it is going to have a huge advantage.

      Not to be rude, but there's a reason many field experts are pushing this space while LLM hobbyists are doubting it. LLMs are just a tiny subset of AI, and as a hobbyist you're likely looking at one tiny slice of the pie and not the huge swaths of other information in the nearby spaces.

      • That's good nuance, but I feel we're missing the necessary conversation about corporate ethics. Will the suits treat dangerously low quality or poor safety standards as merely part of legal overhead costs? Will shareholders support that decision, and isn't there a profound moral hazard in letting it ultimately be a matter of profit motives? And more to the point, are existing regulators equipped to protect consumers from an AI-developing tech sector that still wants to move fast and break things? Industries that act this way cannot self-regulate, and they are moving faster than ever.

      • You certainly know more than I do about the various AI systems. I've barely messed with anything outside of LLMs.

        As someone who commuted by bicycle full time for several years and raced as an amateur until the seventh car to hit me left me disabled, I can tell you roads and hazards are very unpredictable, and when things go wrong they go wrong fast. I've experienced a lot of the unbelievably stupid inadequacies in the infrastructure on the fringes. I do not think any system can be trained for these situations and this environment on small hardware. I see that as a direct conflict with capitalism, which is motivated to minimize cost even at the expense of hundreds of milliseconds. I don't trust any auto manufacturer to use adequate hardware for the task.

        The kinds of accuracy and efficiency required for running tensor math should still apply the same, as far as I know. I don't know enough about other systems; maybe they are not dependent on such heavy parallel vector math. Assuming they are based on tensors, you still run into the inadequate data throughput between the L2 and L1 caches of current compute architectures, which leaves you with the same hacks needed for GPU options as far as consumer-level hardware goes (rough numbers sketched below).

        I think we still need real dedicated tensor-math hardware before anything like self-driving cars will be ethically possible. With the ~10-year cycle of any new silicon, we are probably 5-8 years out from a real solution to the problem... as far as I can tell.
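
        A minimal sketch of the data-throughput point above (assumed round numbers, not benchmarks): token-by-token inference has to stream essentially all of the weights once per generated token, so the bytes moved per second, not raw compute, caps how many tokens per second you can get.

        ```python
        # Rough bandwidth-bound ceiling with assumed round numbers (not benchmarks):
        # generating one token requires reading roughly all model weights once,
        # so memory bandwidth, not FLOPs, limits tokens per second.

        weights_bytes = 70e9 * 2         # 70B params in fp16 -> ~140 GB
        bandwidth_bytes_per_s = 1e12     # ~1 TB/s, an assumed high-end GPU figure

        ceiling_tokens_per_s = bandwidth_bytes_per_s / weights_bytes
        print(f"bandwidth-limited ceiling: ~{ceiling_tokens_per_s:.1f} tokens/s")
        ```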

      • As somebody fairly well versed in the tech, who has done more than just play around with ChatGPT, I can tell you that self-driving AI is not going to be here for at least another 40-50 years. The challenges are too great, and the act of driving a car takes a lot of effort even for a human. There are already too many fatal edge cases to consider, and the current tech is tripping over its balls trying to do the most basic things, killing people in the process. When these cars can be sabotaged by a simple traffic cone on the windshield, or mistake a tractor-trailer for the sky, we know this tech is far worse than human drivers, despite all of the bullshit we've been told otherwise.

        Level 5 autonomous driving is simply not a thing, and it won't be a thing for a long time.

        Billions of dollars poured into the tech have gotten us a bunch of aggressive upstarts who think they can just ignore the fatalities as the money comes pouring in, and lie to our faces about the capabilities of the technology. These companies need to be driven off a cliff and buried in criminal cases. They shouldn't be able to hide behind the shield of corporate personhood; they should be put on trial. But here we fucking are now...

      • "LLM hobbyists are doubting it"

        That's an argument from authority. But the facts also support that LLMs are limited. I've mostly seen it the other way round: people trying ChatGPT for the first time and claiming we're close to AGI, investors throwing large sums of money at AI companies, and subsequently every decision-maker thinking AI will solve their business needs. Every six months there's a news article claiming AI is on the brink of being sentient or that GPT-4 is AGI.

        Meanwhile I, as a hobbyist, sit here talking to my AI waifu, and I don't see the robot apocalypse happening in the next 5 years. It's just limited in all sorts of ways. And I always hope journalists and people with meaningful jobs don't rely on ChatGPT too much, because all I've seen are texts and generated summaries riddled with inaccuracies, some of it outright misinformation. But they sound very good. (Which is kind of the point of an LLM: to generate text that sounds good.)

        And while LLMs do their job very well and have a broad range of applications, you don't need to throw them at everything. Sometimes there are traditional tools available that do a task way better and don't come with any of an LLM's downsides.

      • But ultimately the problem with self-driving cars is that they are trying to solve a problem (getting people from point A to point B without having to own a car) that was solved more cheaply before cars existed. It's a computational dead end, and the end state of self-driving cars will look exactly like a train.

    • Have you used Mistral derivatives yet?

  • This is the best summary I could come up with:


    In Phoenix, Austin, Houston, Dallas, Miami, and San Francisco, hundreds of so-called autonomous vehicles, or AVs, operated by General Motors’ self-driving car division, Cruise, have for years ferried passengers to their destinations on busy city roads.

    In an internal address on Slack to his employees about the suspension, Vogt stuck to his message: “Safety is at the core of everything we do here at Cruise.” Days later, the company said it would voluntarily pause fully driverless rides in Phoenix and Austin, meaning its fleet will be operating only with human supervision: a flesh-and-blood backup to the artificial intelligence.

    “This strikes me as deeply irresponsible at the management level to be authorizing and pursuing deployment or driverless testing, and to be publicly representing that the systems are reasonably safe,” said Bryant Walker Smith, a University of South Carolina law professor and engineer who studies automated driving.

    Though AV companies enjoy a reputation in Silicon Valley as bearers of a techno-optimist transit utopia — a world of intelligent cars that never drive drunk, tired, or distracted — the internal materials reviewed by The Intercept reveal an underlying tension between potentially life-and-death engineering problems and the effort to deliver the future as quickly as possible.

    It appears this concern wasn’t hypothetical: Video footage captured from a Cruise vehicle reviewed by The Intercept shows one self-driving car, operating in an unnamed city, driving directly up to a construction pit with multiple workers inside.

    According to one safety memo, Cruise began operating fewer driverless cars during daytime hours to avoid encountering children, a move it deemed effective at mitigating the overall risk without fixing the underlying technical problem.


    The original article contains 3,018 words, the summary contains 273 words. Saved 91%. I'm a bot and I'm open source!

  • Cruisin' for a bruisin'
