World's first bioprocessor uses 16 human brain organoids for ‘a million times less power’ consumption than a digital chip
  • Is this legit? This is the first time I've heard of human neurons used for such a purpose. Kind of surprised that's legal. Instinctively, I feel like a "human brain organoid" is close enough to a human that you cannot wave away the potential for consciousness so easily. At what point does something like this deserve human rights?

    I notice that the paper is published in Frontiers, the same journal that let the notorious AI-generated giant-rat-testicles image get published. They are not highly regarded in general.

  • Google AI making up recalls that didn’t happen
  • DuckDuckGo is an easy first step. It's free, publicly available, and familiar to anyone who is used to Google. Results are sourced largely from Bing, so there is second-hand rot, but IMHO there was a tipping point in 2023 where DDG's results became generally more useful than Google's or Bing's. (That's my personal experience; YMMV.) And they're not putting half-assed AI implementations front and center (though they have some experimental features you can play with if you want).

    If you want something AI-driven, Perplexity.ai is pretty good. Bing Chat is worth looking at, but last I checked it was still too hallucinatory to use for general search, and the UI is awful.

    I've been using Kagi for a while now and I find its quick summaries (which are not displayed by default for web searches) much, much better than this. For example, here's what Kagi's "quick answer" feature gives me with this search term:

    Room for improvement, sure, but it's not hallucinating anything, and it cites its sources. That's the bare minimum anyone should tolerate, and yet most of the stuff out there falls wayyyyy short.

  • Just booted fresh Debian install with 7900 XTX: Getting errors at boot
  • I recently upgraded to a 7900 XTX on Debian stable, as well. I'm running the newest kernel from Debian's backports repo (6.6, I think), and I didn't have that same problem.

    I did have other problems with OpenCL, though. I made a thread about this and solved it with some trouble. Check my post history if you're interested. I hope it helps. I can take a closer look at my now-working system for comparison if you have further issues.

  • How can I get OpenCL to work on Debian Bookworm with an AMD 7900 XTX?
  • IT WORKS NOW! I will need time to run additional tests, but the gist of my solution was:

    1. Backport llvm-18 from sid following the guide you linked at https://wiki.debian.org/SimpleBackportCreation

    2. After compiling and installing all those deb files, I then installed the "jammy" version of amdgpu-install_6.0.60002-1.deb from https://www.amd.com/en/support/linux-drivers

    3. Downloaded the latest kernel sources from https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git, and simply copied all the files from its lib/firmware/amdgpu folder into my system's /lib/firmware/amdgpu. Got that idea from https://discussion.fedoraproject.org/t/amdgpu-doesnt-seem-to-function-with-navi-31-rx-7900-xtx/72647

    4. sudo update-initramfs -u && sudo reboot
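    In case it helps anyone, here are steps 2-4 condensed into a sketch of a script. This is untested as a single unit; the filenames and versions are from my setup, and it dry-runs by default so you can review what it would do before committing:

```shell
#!/bin/sh
# Sketch of steps 2-4 above. Step 1 (the llvm-18 backport) is a separate
# build and is not included here. Dry-runs by default; set APPLY=1 to
# actually execute the commands.
set -eu

run() {
  if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# Step 2: AMD's installer package (the "jammy" build, closest to Bookworm)
run sudo apt install ./amdgpu-install_6.0.60002-1.deb

# Step 3: newest amdgpu firmware straight from linux-firmware.git
run git clone --depth=1 \
  https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
run sudo cp -r linux-firmware/amdgpu/. /lib/firmware/amdgpu/

# Step 4: rebuild the initramfs so the new firmware is picked up at boot
run sudo update-initramfs -u
run sudo reboot
```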

    I'm not totally sure that step 3 was sane or necessary. Perhaps the missing piece before that was that I needed to manually update my initramfs? I've tried like a million things at this point and my system is dirty, so I will probably roll back to my snapshot from before all of this and attempt to redo it with the minimal steps when I have time.

    Anyway, I was able to run a real-world OpenCL benchmark, and it's crazy-fast compared to my old GTX 1080. Actually a bigger difference than I expected. Like 6x.

    THANKS FOR THE HELP!

  • How can I get OpenCL to work on Debian Bookworm with an AMD 7900 XTX?
  • Thanks for the links! I've never attempted making my own backport before. I'll give it a shot. I might also try re-upgrading to sid to see if I can wrangle it a little differently. Maybe I don't actually need mesa-opencl-ics if I'm installing AMD's installer afterwards anyway. At least, I found something to that effect in a different but similar discussion.

  • How can I get OpenCL to work on Debian Bookworm with an AMD 7900 XTX?
  • Update: I upgraded to Sid. Unfortunately, mesa-opencl-icd depends on libclc-17, which uninstalls -18. So I can't get OpenCL working while the correct libclc is installed.

    No idea where to go from here. I'll probably restore my Bookworm snapshot, since I don't want to be on Sid if it doesn't solve this problem.

  • How can I get OpenCL to work on Debian Bookworm with an AMD 7900 XTX?
  • Update: Running amdgpu-install did not provide those files. There were a few errors regarding Vulkan packages when I attempted it, I guess because it assumes Ubuntu repos. Trying with just OpenCL and not Vulkan succeeded, but clinfo still reported the missing files.

    I don't think I can get this working without a whole newer llvm.

  • How can I get OpenCL to work on Debian Bookworm with an AMD 7900 XTX?
  • Ah, somehow I didn't see 18 there and only looked at 17. Thanks!

    I tried pulling just the one package from the sid repo, but that created a cascade of dependencies, including all of llvm. I was able to get those files installed, but not able to get clinfo to succeed. I also tried installing llvm-19 from the repo at https://apt.llvm.org/, with similar results: clinfo didn't throw the fatal errors anymore, but it didn't work either. It still reported `Number of devices 0`, and OpenCL-based tools crashed anyway, not with the same error but with something generic about not finding a device or possibly having corrupt drivers.

    Should I bite the bullet and do a full upgrade to sid, or is there some way to do this more precisely that won't muck up Bookworm?
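    (For reference, the standard way to cherry-pick single packages from sid without letting it take over the whole system is an apt pin. A minimal sketch; the priority value here is illustrative:)

```
# /etc/apt/sources.list.d/sid.list
deb http://deb.debian.org/debian sid main

# /etc/apt/preferences.d/99-sid
Package: *
Pin: release a=unstable
Pin-Priority: 100
```

    With a priority below the default 500, nothing is upgraded from sid automatically; you opt in per package with `apt install -t sid <package>`, though its dependency chain (here, most of llvm) still comes along.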

  • How can I get OpenCL to work on Debian Bookworm with an AMD 7900 XTX?
  • Thanks, that's good advice. There are lower-numbered gfx* files in there. 900, 902, 904, 906. No 1030 or 1100. Same after reinstalling.

    Looks like these files are actually provided by the libclc-15 package. libclc-16 has the same set of files. Even libclc-17 from sid has the same files. So I guess upgrading to testing/unstable wouldn't help.

    apt-file search gfx1100-amdgcn-mesa-mesa3d.bc yields no results, so I guess I need to go outside of the Debian repos. I'll try the AMD package tonight.

  • How can I get OpenCL to work on Debian Bookworm with an AMD 7900 XTX?

  • I looked this up before buying the GPU, and I read that it should "just work" on Debian stable (Bookworm, 12). Well, it doesn't "just work" for me. :(

    clinfo returns two fatal errors:

    ```
    fatal error: cannot open file '/usr/lib/clc/gfx1100-amdgcn-mesa-mesa3d.bc': No such file or directory
    fatal error: cannot open file '/usr/lib/clc/gfx1030-amdgcn-mesa-mesa3d.bc': No such file or directory
    ```

    I get similar errors when trying to run OpenCL-based programs.
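    A quick sanity check of what's actually on disk (a hedged sketch; the paths are taken from the clinfo errors above, and it's safe to run on any system):

```shell
# Check whether the libclc bitcode files clinfo wants actually exist.
# Prints "present" or "missing" for each expected file.
for f in gfx1100-amdgcn-mesa-mesa3d.bc gfx1030-amdgcn-mesa-mesa3d.bc; do
  if [ -e "/usr/lib/clc/$f" ]; then
    echo "present: $f"
  else
    echo "missing: $f"
  fi
done
```

    On Debian, `dpkg -S /usr/lib/clc` will then tell you which installed package owns that directory, i.e. where the files should have come from.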

    I'm running a backported kernel, 6.6.13, and the latest Bookworm-supported mesa-opencl-icd, 22.3.6. From what I've found online, this should work, though Mesa 23.x is recommended. Is it safe/sane to install Mesa from Debian Trixie (testing)?

    I've also seen references to AMD's official proprietary drivers. They do not officially support Debian, but can/should I run the Ubuntu installer anyway?

    I'm hoping to get this up and running without any drastic measures like distro hopping. That said, if "upgrade to Testing or Unstable" is the simplest approach, I am willing to entertain the idea.

    Thanks in advance for any help you can offer.

  • DuckDuckGo now has an AI prompt available in beta, and I can already see it on my devices. Does it affect privacy? If so, what alternatives do we have?
  • Thanks! I didn't see that. Relevant bit for convenience:

    we call model providers on your behalf so your personal information (for example, IP address) is not exposed to them. In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests that includes not using Prompts and Outputs to develop or improve their models as well as deleting all information received within 30 days.

    Pretty standard stuff for such services in my experience.

  • DuckDuckGo now has an AI prompt available in beta, and I can already see it on my devices. Does it affect privacy? If so, what alternatives do we have?
  • I'm not entirely clear on which (anti-)features are in the browser only versus the website as well. It sounds like they are steering people toward their commercial partners like Binance across the board.

    Personally I find the cryptocurrency stuff off-putting in general. Not trying to push my opinion on you though. If you don't object to any of that stuff, then as far as I know Brave is fine for you.

  • DuckDuckGo now has an AI prompt available in beta, and I can already see it on my devices. Does it affect privacy? If so, what alternatives do we have?
  • If you click the Chat button on a DDG search page, it says:

    DuckDuckGo AI Chat is a private AI-powered chat service that currently supports OpenAI’s GPT-3.5 and Anthropic’s Claude chat models.

    So at minimum they are sharing data with one additional third party, either OpenAI or Anthropic depending on which model you choose.

    OpenAI and Anthropic have similar terms and conditions for enterprise customers. They are not completely transparent, and any given enterprise could have its own custom license terms, but my understanding is that they generally will not store queries or use them for training purposes. For specifics you'd have to ask DDG; I was not able to find information on this in DDG's privacy policy.

    Obviously, this is not legal advice, and I do not speak for any of these companies. This is just my understanding based on the last time I looked over the OpenAI and Anthropic privacy policies, which was a few months ago.

  • Why I Lost Faith in Kagi
  • I've been using Kagi for a while, so I'll post a few quick thoughts I had after reading the article, linked blog, and mastodon thread.

    The one thing in the blog post I strongly disagree with is her statement that the summarizer is "the same old AI bullshit". I think they just assumed that without actually testing it. The summarizer is fantastic, and is a great example of the right way to use LLMs. Its output comes entirely from the URL or file you specify. It does not hallucinate. You can ask it follow-up questions about the document, and again, its replies are limited in scope to what's actually in that document. If you ask it something out of scope it'll tell you that it can't find that information in the document. This is great because it's using the LLM for what LLMs are actually good for — complex language parsing — and not for what they're bad for, like reasoning or information storage/retrieval. It's actually quite difficult to misuse the summarizer. It's straightforward and effective. This is Kagi's killer feature, IMO.

    I can't speak as highly of its search-integrated AI features like FastGPT. They mostly take information from the contents of the first few search results, but they also seem to "fill in the blanks" more than I'd like. Sometimes they give me information that is simply not in the sources they cite. It's not as bad as using ChatGPT (which hallucinates all day every day, and if it ever tries to cite sources, it hallucinates those, too), but it needs improvement.

    That said, Kagi doesn't shove the AI down your throat like you might think from reading the blog post. These are separate features that need to be explicitly activated. Plain searches don't return results from the LLMs; if you want that, you need to explicitly go to the assistant or trigger the "quick answer" feature on the search results page. EDIT: I just realized that this is not totally true as of a few weeks ago, when they added a feature to automatically trigger the "quick answer" blurb for search queries ending in a question mark. Perhaps that's why Lori felt it was over-emphasized.

    Anyway, back to the summarizer, here is an example of it in action. I gave it the URL of the Mastodon post. I think this is an excellent result. I also asked it an unrelated followup question so you can see that it limits itself to the information in the URL. It will not make shit up!

    The summarizer lets me download conversations in markdown form, so I'll just paste it right in here so you can see.

    Exploring https://hackers.town/@lori/112255132348604770

    Assistant:
    Key points:

    • The author wrote a blog post criticizing the search engine Kagi, which then prompted the CEO of Kagi, Vlad, to email the author directly to argue against the criticism. [1]
    • The author woke up to an email from Vlad, the CEO of Kagi, who was upset about the blog post the author had written. [1]
    • The author decided to publicly share the email exchange with Vlad, as they felt it was important for people to see his unhinged behavior. [1]
    • The author refused to engage in a private debate with Vlad, as they did not want to be taken "to Crime Scene Number Two" without any accountability. [1]
    • The author believes Vlad is unable to accept criticism or disagreement, and that it "eats him alive" when someone doesn't agree with him or his company. [1]
    • The author states that no other search engine CEO has emailed them directly to "mansplain their company" in this manner. [1]
    • The author sees Vlad's behavior as "petty and sad" and believes he has no understanding of boundaries. [1]
    • Other users in the thread share similar experiences of Vlad's inability to accept criticism and his tendency to simply restate his own opinions rather than address counterpoints. [1]
    • The author and others characterize Vlad's behavior as that of a narcissist who cannot tolerate disagreement. [1]
    • The author and others are amazed by Vlad's decision to set up a separate company in another country just to print and give away t-shirts, rather than focus on improving Kagi's core search functionality.

    [1] lori (@lori@hackers.town)

    Query:
    What's the capital of North Dakota?

    Assistant:
    The knowledge provided does not contain any information about the capital of North Dakota. The context is focused on an email exchange between the author and the CEO of Kagi search engine regarding criticism of the company.

  • Orange Pi Neo is new handheld powered by AMD and comes preinstalled with Manjaro
  • Because it's not the same class of device. The PS Portal is very niche. It's a $200 device that basically just runs the PS Remote Play app.

    I've used PS Remote Play on my phone and laptop, and it's just not good in the cases where I actually want to use it: when traveling away from home. Even with a good Internet connection it's only "okay". It's utterly useless when in transit (trains, planes, etc.), and 99% useless in any public place (e.g. cafe or library WiFi).

    These are all cases where the Switch, Deck, and similar devices excel. The PS Portal addresses a much smaller market.

    hersh @literature.cafe
    Posts 1
    Comments 74