Waymo Reaches 150k Autonomous Rides Per Week Milestone While Tesla FSD Lags Behind
  • Yeah we used to joke that if you wanted to sell a car with high-resolution LiDAR, the LiDAR sensor would cost as much as the car. I think others in this thread are conflating the price of other forms of LiDAR (usually sparse and low resolution, like that on 3D printers) with that of dense, high resolution LiDAR. However, the cost has definitely still come down.

    I agree that perception models aren't great at this task yet. IMO monodepth never produces reliable 3D point clouds, even though the depth maps and metrics look reasonable. MVS does better but is still prone to errors. I do wonder if any companies are considering depth completion with sparse LiDAR instead. The papers I've seen on this topic usually produce much more convincing point clouds.
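
    A toy illustration of what I mean by depth completion (entirely my own sketch, not from any real pipeline): densify a sparse LiDAR depth map by giving every pixel the depth of its nearest valid sample. Learned methods do this with a network guided by the RGB image; nearest-neighbor fill is just the simplest baseline that shows the idea.

```python
import numpy as np

# Toy depth completion: every pixel gets the depth of its nearest
# valid LiDAR sample. Real methods learn this mapping; this is only
# the simplest possible baseline for illustration.
def complete_depth_nn(sparse_depth):
    """sparse_depth: 2D array with 0 where there is no LiDAR return."""
    ys, xs = np.nonzero(sparse_depth)
    if len(ys) == 0:
        return sparse_depth.copy()
    samples = np.stack([ys, xs], axis=1)  # (N, 2) coords of valid pixels
    values = sparse_depth[ys, xs]         # (N,) measured depths
    h, w = sparse_depth.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    # Brute-force squared distance to every valid sample (fine at toy sizes).
    d2 = ((grid[:, :, None, :] - samples[None, None, :, :]) ** 2).sum(-1)
    return values[d2.argmin(-1)]

sparse = np.zeros((4, 4))
sparse[0, 0] = 2.0   # two fake LiDAR returns
sparse[3, 3] = 6.0
dense = complete_depth_nn(sparse)  # every pixel now has a depth value
```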

  • What are your AI use cases?
  • I use a lot of AI/DL-based tools in my personal life and hobbies. As a photographer, DL-based denoising means I can get better photos, especially in low light. DL-based deconvolution tools help to sharpen my astrophotos as well. The deep learning based subject tracking on my camera also helps me get more in focus shots of wildlife. As a birder, tools like Merlin BirdID's audio recognition and image classification methods are helpful when I encounter a bird I don't yet know how to identify.

    I don't typically use GenAI (LLMs, diffusion models) in my personal life, but Microsoft Copilot does help me write visualization scripts for my research. I can never remember the right methods for visualization libraries in Python, and Copilot/ChatGPT do a pretty good job at that.

  • What are your AI use cases?
  • There is no "artificial intelligence" so there are no use cases. None of the examples in this thread show any actual intelligence.

    There certainly is (narrow) artificial intelligence. The examples in this thread are almost all deep learning models, which fall under ML, which in turn falls under the field of AI. They're all artificial intelligence approaches, even if they aren't artificial general intelligence, which more closely aligns with what a layperson thinks of when they say AI.

    The problem with your characterization (showing "actual intelligence") is that it's super subjective. Historically, being able to play Go and to a lesser extent Chess at a professional level was considered to require intelligence. Now that algorithms can play these games, folks (even those in the field) no longer think they require intelligence and shift the goal posts. The same was said about many CV tasks like classification and segmentation until modern methods became very accurate.

  • When RTX 5090 is released...
  • I work in CV and a lot of labs I've worked with use consumer cards for workstations. If you don't need the full 40+GB of VRAM you save a ton of money compared to the datacenter or workstation cards. A 4090 is approximately $1600 compared to $5000+ for an equivalently performing L40 (though with half the VRAM, obviously). The x090 series cards may be overpriced for gaming but they're actually excellent in terms of bang per buck in comparison to the alternatives for DL tasks.

    AI has certainly produced revenue streams. Don't forget AI is not just generative AI. The computer vision in high-end digital cameras is all deep learning based and gets people to buy the latest cameras, for example.

  • should I completely jumpship to linux when windows 10 ends support/am ready or dualboot ltsc and linux
  • Yeah there's a good chance you're right. Maybe something to do with memory management as well.

    Long term I'll probably end up switching back to Darktable. I used it before and honestly it is quite good, but I currently have a free license for CC from my university and the AI denoise features in LR are pretty nice compared to the classical profiled denoise from Darktable. It does also help that the drivers for my SD card reader are less finicky on Windows so it's easier for me to quickly copy over images from my camera on there instead of Linux. Hopefully that also gets better over time!

  • should I completely jumpship to linux when windows 10 ends support/am ready or dualboot ltsc and linux
  • I don't know exactly, but it's apparently a thing. Some game anti-cheat software such as Easy Anti-Cheat will give you an error message saying something along the lines of "Virtual machines are not supported." Some are easy to bypass by just tweaking your VM config, others not so much.
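
    For reference, the kind of config tweak I mean looks like this in a libvirt domain XML (this hides the common hypervisor signatures; kernel-level anti-cheats can still detect a VM through other means, so treat it as a sketch rather than a guaranteed bypass):

```xml
<!-- Inside the <domain> element of the guest's libvirt XML -->
<features>
  <hyperv mode="custom">
    <!-- Spoof the Hyper-V vendor ID string reported to the guest -->
    <vendor_id state="on" value="randomid"/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM hypervisor signature from CPUID -->
    <hidden state="on"/>
  </kvm>
</features>
```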

  • should I completely jumpship to linux when windows 10 ends support/am ready or dualboot ltsc and linux
  • Fair enough! I think it's more common for games to do that, but sometimes I had trouble with software on Windows that used virtualization features itself. I probably just didn't properly configure Hyper-V settings, but I know nested virtualization can be tricky.

    For me it's also because I'm on a laptop: my Windows VM relies on passing an external GPU through over TB3, but my laptop's dedicated GPU has no connection to a display, so GPU passthrough would be tricky if I were on the go. I like being able to boot Windows on the go to edit photos in Lightroom, for example, but otherwise I'd prefer to run the Linux host and use the Windows VM only as needed.

  • should I completely jumpship to linux when windows 10 ends support/am ready or dualboot ltsc and linux
  • I'm a fan of dual booting AND using a passthrough VM. It's easiest to set up if your machine has two NVMe slots and you put each OS on its own drive. This way you can pass the Windows NVMe through to the VM directly.

    The advantage of this configuration is that you get the convenience of not needing to reboot to run some Windows specific software, but if you need to run software that doesn't play nice with virtualization (maybe a program has too large a performance hit with virtualization, or software you want to run doesn't support virtualized systems, like some anticheat-enabled games), you can always reboot to your same Windows installation directly.
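
    The disk part of this setup looks something like the following in the VM's libvirt XML (the by-id path below is a placeholder for your own Windows NVMe; passing the whole-disk block device is what lets the same installation boot both bare-metal and in the VM):

```xml
<!-- Pass the physical Windows NVMe to the guest as a raw block device.
     Replace the source path with your own disk's /dev/disk/by-id entry. -->
<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/disk/by-id/nvme-YOUR_WINDOWS_DISK"/>
  <target dev="vda" bus="virtio"/>
</disk>
```

    Note that Windows needs the virtio storage driver installed before it can boot from a virtio target; using bus="sata" instead avoids that at some performance cost.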

  • Qualcomm abruptly cancels Snapdragon X Elite dev kit — refunds customers for mini PC, ends sales and support for the device immediately
  • GPU and overall firmware support is always better on x86 systems, so it makes sense that you switched to that for your application. Performance is also usually better if you don't explicitly need low power. In my case I use the Orange Pi 5 Plus for running an astrophotography rig, so I needed something that was low power, could run Linux easily, had USB 3, had reasonable single core performance, and preferably had an upgradable A/E-key WiFi card and a full speed M-key NVMe slot for storage (preferably PCIe 3.0 x4 or better). Having hardware serial ports was a plus too. x86 boxes would've been preferable, but a lot of the cheaper stuff is older Intel mini PCs, which are pretty power-hungry for battery operation, and the newer power efficient stuff (N100 based) is more expensive, with the cheaper ones I found tending to have soldered-on WiFi unfortunately. Accordingly, the Orange Pi 5 Plus ended up being my cheapest option that ticked all my boxes. If only software support were as good as on x86!

    Interesting to hear about the NPU. I work in CV and I've wondered how usable the NPU was. How did you integrate deep learning models with it? I presume there's some conversion from runtime frameworks like ONNX to the NPU's toolkit, but I'd love to learn more.

    I'm also aware that Collabora has gotten the NPU drivers upstreamed, but I don't know how NPUs are traditionally interfaced with on Linux.

  • Qualcomm abruptly cancels Snapdragon X Elite dev kit — refunds customers for mini PC, ends sales and support for the device immediately
  • A lot of the cheap tablet SoC vendors like Rockchip (whose SoCs end up in low cost SBCs) really only do the bare minimum when it comes to proper Linux support. There's usually next to no effort to upstream their patches, so oftentimes you're stuck on their vendor kernel. Luckily for the RK3588(S), Collabora has done a considerable amount of work on supporting the SoC and its peripherals upstream. I run my Orange Pi 5 Plus (RK3588) on a mainline kernel and it works for my needs.

    This practice is a lot easier to defend for a low cost SoC compared to something as expensive as a Snapdragon Elite though...

  • Nobel Prize 2024
  • I work in an ML-adjacent field (CV) and I thought I'd add that AI and ML aren't quite the same thing. You can have non-learning based methods that fall under the field of AI - for instance, tree search methods can be pretty effective algorithms to define an agent for relatively simple games like checkers, and they don't require any learning whatsoever.
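
    To illustrate the non-learning case, a bare-bones minimax search over a hand-built game tree (my own toy example) picks optimal moves with no training at all:

```python
# Toy minimax: leaves are terminal scores for the maximizing player,
# internal nodes are lists of child subtrees. No learning involved --
# the agent is pure search.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf node: return its score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: the maximizer picks a branch, then the minimizer picks a leaf.
tree = [[3, 12], [2, 4], [14, 1]]
best = minimax(tree, maximizing=True)  # branch values are 3, 2, 1 -> picks 3
```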

    Normally, we say Deep Learning (the subfield of ML that relates to deep neural networks, including LLMs) is a subset of Machine Learning, which in turn is a subset of AI.

    Like others have mentioned, AI is just a poorly defined term unfortunately, largely because intelligence isn't a well defined term either. In my undergrad we defined an AI system as a programmed system that has the capacity to do tasks that are considered to require intelligence. Obviously, this definition gets flaky since not everyone agrees on what tasks would be considered to require intelligence. This also has the problem where when the field solves a problem, people (including those in the field) tend to think "well, if we could solve it, surely it couldn't have really required intelligence" and then move the goal posts. We've seen that already with games like Chess and Go, as well as CV tasks like image recognition and object detection at super-human accuracy.

  • So glad I'm ditching these fucking idiots
  • One of the big changes in my opinion is the addition of a "Smart Dimension" tool where the system interprets and previews the constraint that you want to apply instead of requiring you to pick the specific constraint ahead of time (almost identical to SOLIDWORKS), and the ability to add constraints such as length while drawing out shapes (like Autodesk Inventor, probably also Fusion but I haven't used that). It makes the sketcher workflow more like other CAD programs and requires a little less manual work with constraints.

  • So glad I'm ditching these fucking idiots
  • Hmm, that actually sounds like a bug. I haven't seen that on my end (once I set the locations of options in my toolbars they stay there, even after restarts). You may want to report that on their issue tracker. Sorry that you're having a rough experience!

  • So glad I'm ditching these fucking idiots
  • Last I tried it, there was no fix. Their latest update on the website says:

    The work on the toponaming problem is an ongoing project, and we are very grateful to the FreeCAD community for contributing a lot to that effort. But it’s not complete yet, there will be much more to say when it’s largely done. So let’s focus on the other three.

    So I take it they haven't implemented a fix. They previously said they were going to work with the FreeCAD team on mainlining a toponaming fix, using realthunder's work as a proof of concept, but said fix has not landed in mainline FreeCAD yet. I believe that's the major feature they're looking to implement for FreeCAD 1.0.

    Definitely excited for Ondsel though! Hopefully that fix can be integrated quickly.

  • So glad I'm ditching these fucking idiots
  • IMO the bigger problem with FreeCAD is the topological naming problem. It's very easy to get frustrated because your model broke due to a change you made in an earlier feature.

    The UI isn't amazing though, and that unfortunately happens quite a bit with open source software. Hopefully it'll go the way of Blender and KiCAD with an eventual major release that overhauls the UI.

  • So glad I'm ditching these fucking idiots
  • Ondsel has a nicer user interface, but I personally use and recommend realthunder's LinkStable branch of FreeCAD. Mainline FreeCAD (and by extension, Ondsel) suffers from the topological naming problem, which can be especially jarring to users coming from proprietary CAD software. realthunder put a lot of work into a solution that handles the problem pretty well, so I'm using his fork until toponaming gets mainlined.

  • A crowd destroyed a driverless Waymo car in San Francisco
  • Yep, and for good reason honestly. I work in CV and while I don't work on autonomous vehicles, many of the folks I know have previously worked at companies or research institutes on these kinds of problems and all of them agree that in a scenario like this, you should treat the state of the vehicle as compromised and go into an error/shutdown mode.

    Nobody wants to give their vehicle an override that can potentially harm the safety of those inside it or around it, and practically speaking there aren't many options that guarantee safety other than this.

  • ditch discord!
  • Yeah I think Lemmy would actually work pretty reasonably. It reminds me of how lots of software and projects have Reddit communities. I agree that being able to share one account across many services, and especially not having to pay for infrastructure, is what drives Discord use over forum-based platforms.

  • ditch discord!
  • Personally, I'd prefer that projects use forums for community discussions rather than realtime chat platforms like Discord or Matrix. I think the bigger problem with projects using Discord is not that it's closed source, but rather that it's difficult to search (since it isn't indexed by search engines) and the format deprioritizes long-running discussion on a topic. Since Matrix is also intended for chat, it has these same issues (though at least you can preview a room without making an account).

  • M42 - The Orion Nebula

    Equipment details:

    • Mount: OpenAstroMount by OpenAstroTech
    • Lens: Sony 200-600 @ 600mm f/7.1
    • Camera: Sony A7R III
    • Guidescope: OpenAstroGuider (50mm, fl=163) by OpenAstroTech
    • Guide Camera: SVBONY SV305m Pro
    • Imaging Computer: ROCKPro64 running INDIGO server

    Acquisition & Processing:

    • Imaged and Guided/Dithered in Ain Imager
    • 420x30s lights, 40 darks, 100 flats, 100 biases, 100 dark-flats over two nights
    • Prepared data and stacked in SiriLic
    • Background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++ in SiriLic
    • Adjusted curves, enhanced saturation of the nebula and recombined with star mask in GIMP, desaturated and denoised background

    This is my first time doing a multi-night image, and my first time using SiriLic to configure a Siril script. Any tips there would be helpful. Suggestions for improvement or any other form of constructive criticism are welcome!
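
    For anyone curious, the script SiriLic generates boils down to standard Siril commands; the core of my flow looks roughly like this (directory and master-frame names are from my own setup, so adjust them to yours):

```
requires 1.2.0
# Convert, calibrate, register, and stack the light frames
cd lights
convert light -out=../process
cd ../process
calibrate light -dark=../masters/dark_stacked -flat=../masters/pp_flat_stacked -cc=dark
register pp_light
stack r_pp_light rej 3 3 -norm=addscale -output_norm -out=../result
```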

    M31 - Andromeda Galaxy

    Equipment details:

    • Mount: OpenAstroMount by OpenAstroTech
    • Lens: Sony 200-600 @ 600mm f/7.1
    • Camera: Sony A7R III
    • Guidescope: OpenAstroGuider (50mm, fl=153) by OpenAstroTech
    • Guide Camera: SVBONY SV305m Pro
    • Imaging Computer: ROCKPro64 running INDIGO server

    Acquisition & Processing:

    • Imaged and Guided/Dithered in Ain Imager
    • 360x30s lights, 30 darks, 30 flats, 30 biases
    • Stacked in Siril, background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++
    • Enhanced saturation of the galaxy and recombined with star mask in GIMP, desaturated and denoised background

    Suggestions for improvement or any other form of constructive criticism welcome!

    KingRandomGuy @lemmy.world
    Posts 2
    Comments 51