
Posts 1 · Comments 13 · Joined 5 mo. ago

  • Oh, I thought you could get 128 GB of RAM or more, but I can see it doesn't make sense with the <24 GB… sorry for spreading misinformation, I guess. In that case a GPU with the same amount of RAM would probably be better.

  • It all depends on the size of the model you are running. If it cannot fit in GPU memory, data has to go back and forth between the host (CPU memory or even disk) and the GPU, which is extremely slow. This is why some people run LLMs on Macs: they can have a large amount of memory shared between the GPU and CPU, making it viable to fit some larger models in memory (rough numbers sketched below).
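
    A rough back-of-the-envelope sketch of that point (hypothetical model sizes; it counts only fp16 weights and ignores the KV cache and activations):

    ```python
    # Estimate whether a model's weights alone fit in a given amount of GPU memory.
    def approx_weight_gib(params_billion: float, bytes_per_param: float = 2.0) -> float:
        # fp16/bf16 weights take ~2 bytes per parameter; quantized models take less.
        return params_billion * 1e9 * bytes_per_param / (1024 ** 3)

    GPU_GIB = 24  # assumed consumer-card memory, per the comment above

    for size_b in (7, 13, 70):  # common LLM sizes, in billions of parameters
        need = approx_weight_gib(size_b)
        verdict = "fits" if need <= GPU_GIB else "spills to CPU/disk (slow)"
        print(f"{size_b}B params ~= {need:.0f} GiB of weights -> {verdict}")
    ```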

  • Nix / NixOS @programming.dev

    NixOS containers vs. Docker containers

  • I tried it once with my buddy and it seemed to work fine on the Element client. Not sure if this was placebo, but it felt like the unencrypted video call had slightly better quality; it wasn't very noticeable, though. It might be a bigger problem with many people in the call, but I haven't tested that.

  • Matrix? I think you can set up text channels and also do voice/video/screen sharing in the channels if you're using Element, though I haven't been able to convince my friends to jump ship yet, so I don't know how it compares to Discord.

  • You should only need to have Java installed, then download the server jar and open the port if they want to play vanilla MC (rough launch sketch below). If they want modded, I don't know.

    You might also want to check this out, haven’t used it myself but it looks cool if you don’t like wasting server resources: https://github.com/timvisee/lazymc
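
    For vanilla, the launch itself is just a Java invocation. A minimal sketch, assuming Java is on PATH and the vanilla server jar has been downloaded as `server.jar` (heap sizes are placeholders):

    ```python
    import shutil
    import subprocess

    # Make sure a Java runtime is installed before trying to launch.
    if shutil.which("java") is None:
        raise SystemExit("Java not found -- install a JRE/JDK first")

    # Start the vanilla server headless; accept eula.txt after the first run,
    # and open/forward the default port 25565 so friends can connect.
    subprocess.run(
        ["java", "-Xms1G", "-Xmx2G", "-jar", "server.jar", "nogui"],
        check=True,
    )
    ```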