Having a cheerful holiday? Let me fix that for you with a sad robot dog.
  • Sorry, didn't check back here for a few days. There absolutely can be German Shepherds!


    These are the initial generations before upscaling/processing. Generated with Bifröst Project (SDXL model).

  • Here are a few more I made with Bifröst Project. It can handle generating in 1280x1024 which is pretty nice. I don't really like the square format that much.


  • Glad you liked it. Here's one more for the road!


    Maybe he's not in perfect shape but that's not going to stop him from enjoying li— I mean being activated.

  • No one expects the laser from the back.

  • They say war changes a man. I guess that goes for poodobots as well.

  • The messed up powerlines were annoying me, so here's one more attempt.

  • Sure... It's a bit hard to get pitbull, lab and robot to all come through clearly but I tried! (I also took it easy on you and didn't make him too pitiful looking.) First image is the completed version, second one is the initial 1024x1024 generation.

  • Not bad, but he looks a bit too upbeat for this thread. Let me fix that for you.

    edit: I did the other one for fun also.

    He seems very helpful and willing to give you a hand.

  • Any requests? I can make various dog breeds, wolves, foxes, whatever it takes to completely extinguish the joy in your heart.

    Bonus white Dobermann and Siberian Husky pup.

    1. At home
    2. As above. Happy days!
    3. On the streets for one day.
    4. On the streets for two weeks.
    5. On the streets... well, who can guess how long he's been standing there? It's not like they fall down once the batteries run out.

    Model: FenrisXL, made in ComfyUI.

    MilliMobile is a tiny, self-driving robot powered only by light and radio waves
  • That is the worst site I've seen in a long time. Do yourself a favor and add

    www.verticalfarmdaily.com###zijkant
    www.verticalfarmdaily.com###banners_zijkant
    

    to your uBlock rules before following the link. If you don't have a way to block elements, may $deity have mercy on your soul.

  • YouTube isn't happy you're using ad blockers — and it's doing something about it
  • Then it’s a cat-and-mouse game between the anti-adblock tech and the anti-anti-adblock tech.

    My money (not literally though :) is on the anti-anti-adblock tech. That can be crowdsourced and generally adapts much faster than big companies.

  • The fastest ever human-made object keeps breaking its own speed record
  • Probably the furthest man-made object from Earth at this point, for sure.

    The article says "Scientists believe compression heating caused the cap to vaporize as it sped through the atmosphere."

  • Netflix to Open Stores Where Fans Can Play, Shop and Eat in 2025
  • Fans? Customers yeah, but fans?

    They actually did at one point, but they threw it all away.

  • Deleted
    *Permanently Deleted*
  • Smaller models (7B down to 350M) can handle long conversations better

    What are you basing that on? It's true there are more small models that support very long context lengths than big ones, but that's not because smaller models inherently handle long contexts better; it's because fine-tuning big models takes far more resources. So people usually do that kind of fine-tuning on small models, since extending a 70B to a 32K context would take a crazy amount of compute and hardware.

    If you could afford to fine-tune it, though, I'm pretty sure the big model has at least the same inherent capabilities. Larger models usually deal with ambiguity and such better, so there's a pretty good chance it would actually do better than the small model, all else being equal.

  • Kroger introducing AI at self checkout to lower both accidental and organized crime theft.
  • The article seems to repeat the same stuff over and over again.

    On Lemmy, a popular social networking site, user KerfuffleV2 astutely noted that the article repeated points that had already been stated in the article.

    "It seems like the article repeated the same content multiple times" said KerfuffleV2, a user on the social networking site Lemmy. "Perhaps they get paid by the word." the user added.

    A rather uncreative article on thestreet.com triggered some snarky online comments including one from a user named KerfuffleV2. This user noted that the article repeated the same content multiple times.

  • Atheists of lemmy, what is your coping strategy when things goes downhill?
  • Can you provide an example where science cannot explain a situation, because I can’t honestly think of any.

    Not OP, but there is some stuff. One big example is qualia: how does matter give rise to actual feelings, subjective experiences of things? This isn't something we can measure directly, and it seems like it may never be. Something like "what was there before the big bang?" might also qualify.

    Of course, the fact that science can't explain something doesn't really justify falling back on magic as an explanation though. Some stuff just may not have an answer.

  • Cabal of 'gay furry hackers' claims over 3,000 files stolen in NATO website breach
  • Pretty sure it's mainly non-furry non-gay hackers that take down the majority of websites.

  • Amazon Prime Video is able to remove a video from your library after purchase.
  • From dealing with their support in the past and stuff they've accommodated, I wouldn't be surprised if you could just ask them to do it for a small amount like that. If you do a web search, you can also find a lot of information and people claiming it's possible to do stuff like transfer it to a Paypal account, etc.

    I haven't tried to do that personally, so maybe it really just isn't possible. It's still only something that will affect someone that's never going to spend money at Amazon again, right? If I'm going to spend $5.99 at some point, it's effectively the same as a cash refund for me. If I'm going to spend $10.99 at some point it's almost the same as getting double the refund, since I would have spent cash instead in those cases.

  • Removed
    3D-printed carrot does not rely on large areas of land or maintenance costs, can be cheaper
  • Do we need to be more efficient?

    I mean, it's usually a beneficial thing. Using fewer resources (including land) to produce the same amount of food probably means less environmental damage. In the case of switching to vat-grown meat, it also means not torturing billions of animals every year.

    We have the resources to feed everyone on Earth and have leftovers

    Sure. No one starves because the food just isn't on this planet, they starve because the people who have it won't give it to them. That said, we're also not using resources very sustainably so saying we produce enough food currently isn't the same as saying we can continue this way.

    We could also increase efficiency even further by reducing meat/dairy consumption.

    I don't eat any animal products so you can probably guess this is something I'm strongly in favor of as well!

    Anyway, I was just responding to what I quoted, not specifically arguing for 3d-printed foods. Depending on how it's implemented, it may or may not be better environmentally than the status quo.

  • Amazon Prime Video is able to remove a video from your library after purchase.
  • I agree it’s still better than walking away empty handed, but let’s not pretend that got their money back.

    In the rare case the person has just stopped spending money at Amazon, I guess. For anyone that's spending $10/month, it's effectively the same as cash. (Also, you probably can transfer the credit to a bank account if you really want to.)

  • std::any::Any for slices?

    I recently ran into an issue where I wanted to use Any for slices. However, it only allows 'static types (based on what I read, this is because you get the same TypeId regardless of lifetimes).

    I came up with this workaround which I think is safe:

    ```rust
    use std::{
        any::{Any, TypeId},
        marker::PhantomData,
    };

    #[derive(Clone, Debug)]
    pub struct AnySlice<'a> {
        tid: TypeId,
        len: usize,
        ptr: *const (),
        marker: PhantomData<&'a ()>,
    }

    impl<'a> AnySlice<'a> {
        pub fn from_slice<T: Any>(s: &'a [T]) -> Self {
            Self {
                len: s.len(),
                ptr: s.as_ptr() as *const (),
                tid: TypeId::of::<T>(),
                marker: PhantomData,
            }
        }

        pub fn as_slice<T: Any>(&self) -> Option<&'a [T]> {
            if TypeId::of::<T>() != self.tid {
                return None;
            }
            Some(unsafe { std::slice::from_raw_parts(self.ptr as *const T, self.len) })
        }

        pub fn is<T: Any>(&self) -> bool {
            TypeId::of::<T>() == self.tid
        }
    }
    ```

    T: Any ensures T is also 'static. The lifetime is preserved with PhantomData. Here's a playground link with some simple tests and a mut version: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=3116a404c28317c46dbba6ed6824c8a9

    It seems to pass Miri, including the mut version (which requires a bit more care to ensure there can only be one mutable reference). Any problems with doing this?
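    In case it helps, here's a minimal round-trip usage sketch (it repeats the AnySlice definition, trimmed to the two methods used, so the snippet compiles standalone):

    ```rust
    use std::{
        any::{Any, TypeId},
        marker::PhantomData,
    };

    // Same AnySlice idea as in the post, trimmed to from_slice/as_slice.
    pub struct AnySlice<'a> {
        tid: TypeId,
        len: usize,
        ptr: *const (),
        marker: PhantomData<&'a ()>,
    }

    impl<'a> AnySlice<'a> {
        pub fn from_slice<T: Any>(s: &'a [T]) -> Self {
            Self {
                len: s.len(),
                ptr: s.as_ptr() as *const (),
                tid: TypeId::of::<T>(),
                marker: PhantomData,
            }
        }

        pub fn as_slice<T: Any>(&self) -> Option<&'a [T]> {
            if TypeId::of::<T>() != self.tid {
                return None;
            }
            Some(unsafe { std::slice::from_raw_parts(self.ptr as *const T, self.len) })
        }
    }

    fn main() {
        let nums = [1i32, 2, 3];
        let erased = AnySlice::from_slice(&nums);
        // Ask for the right element type: get the original slice back.
        assert_eq!(erased.as_slice::<i32>(), Some(&nums[..]));
        // Ask for the wrong type: just None, no UB.
        assert!(erased.as_slice::<u8>().is_none());
    }
    ```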

    Roasting your own coffee is really easy and all you need is an oven, a cookie sheet and some green coffee beans

    Why?

    Even though green coffee beans weigh a bit more due to their higher water content, it's generally cheaper to roast your own than to buy beans pre-roasted.

    You can roast the same beans at different levels to get some variety without having to go out and buy a new batch.

    It's kind of fun and a decent conversation topic.

    Notes

    Don't be scared by how long this post is. It basically just comes down to: spread beans on a cookie sheet, put them in a preheated oven, wait around 12-15 minutes, then take them out and cool them.

    Since we're talking about roasting beans, naturally you're going to need a grinder to actually use them.

    The process will create some smoke, even with a light roast. Basically, darker roast, more smoke. So far I've mainly done pretty light roasts and even though my kitchen doesn't have much ventilation (and my oven doesn't have fancy modern contraptions like, you know, a light or a fan) it hasn't been an issue.

    Your oven should be reasonably clean if you don't want the roasted coffee to taste like random stuff.

    If you're a super coffee snob and it has to be perfect, this may not be for you. It's pretty easy, but odds are the first few tries aren't going to be perfect especially if you like darker roasts.

    You're going to want something like a large metal mixing bowl and colander for the cooling process. My colander is plastic, so you can probably get away with that if you don't put the red hot beans in it directly out of the oven.

    You'll also probably need access to an outside area where bits of coffee chaff blowing around aren't going to bother people. I don't think there's really an easy way to deal with coffee chaff indoors.

    By the way, don't try to grind green coffee beans in a normal grinder. They are insanely, and I mean insanely hard and tough. You'll destroy your grinder unless it is an absolute tank. (I'd say it's also not really worth trying, green coffee didn't taste very good to me.)

    How

    Here's the process:

    1. Start preheating your oven to 500°F/260°C. (Some people say as hot as possible, some use a slightly lower temperature like 460-475°F.)
    2. Get a cookie sheet ready. Just a standard cookie sheet. Mine aren't super clean so I put a layer of silver foil on it. Don't preheat the cookie sheet itself.
    3. Measure out about 1 cup of green coffee beans. (I've found you can fit about 2 cups on a single sheet but it's probably better to start small.) You want to make sure the beans are spread out evenly in a single layer.
    4. Look for beans that are discolored/damaged and toss them away. Don't be a perfectionist though, just get rid of 10-15 of the worst looking beans. Something like that.
    5. Place the cookie sheet in the oven once it's reached the correct temperature. I put mine on the bottom rack near the (electric) heating element. If you're going for a darker roast, I guess this might make burning them more likely.
    6. Set a timer for ~12 minutes. I wouldn't recommend roasting longer than 14 minutes your first time.
    7. Now you wait a bit. Probably around the 8 minute mark, you're going to start hearing sharp cracking/popping sounds. Don't worry, the beans won't jump around like popcorn and the sound is fairly loud so you're not likely to miss it. At this point (or in 1-2 minutes) you can remove the beans and have a light roast. This point is known as the "first crack".
    8. After a couple of minutes, the sounds will die off and you won't hear anything for a little bit. If you keep roasting, you'll start to hear a softer, more muted crackling sound start. This is the "second crack". I would not recommend roasting past this point until you're comfortable with the process and have an idea of how roasted the beans are at this point. If you roast much longer, it's very easy to burn them and there's also going to be a lot more smoke.
    9. Remove the beans from the oven. You can let them rest for 1-2 minutes on the cookie sheet if you want, then transfer them to something like a metal mixing bowl. It has to be something that can deal with 500°F beans touching its surface.
    10. Ideally get another mixing bowl/colander/whatever as well. Pouring the beans back and forth through the air is a good way to cool them off and remove chaff. What's chaff you ask? The beans are coated with a papery layer of chaff. Don't worry though, once they're roasted it's really easy to remove. You want to try to cool off the beans pretty quickly at this point.
    11. Go outside and blow gently on the roasted beans in your bowl. You should see a bunch of super light, papery chaff fly out. You can pour the hot beans from one bowl to another, and if there's a bit of a breeze that'll help a lot. Otherwise, you can just blow on them. You could also stir them around with a wooden spoon or something to encourage the chaff to separate.
    12. Once the chaff is mostly gone (it's fine if there's a little left, or little pieces stuck to some beans) and the beans are fairly cool you can just leave them in a safe place for around 12 hours to fully cool and vent CO2. Don't put them in a sealed container for the first 12-ish hours.

    Conclusion

    One thing to note is you don't want to actually grind/use the beans for at least 12 hours. It might seem unintuitive, but from what I've read as freshly roasted as possible isn't necessarily best. Depending on the beans/roast level, the coffee might reach its optimal tastiness even a couple weeks after roasting.

    I'm far from an expert, but feel free to ask questions in the comments if you want. I can recommend a grinder/beans to get started with if anyone needs information like that.

    I've been working on a number of Rust projects related to large language models

    This subject is kind of niche, but hey... It's new content of some kind at least! Also just want to be up front: These projects may have reached the point of usefulness (in some cases) but they're also definitely not production ready.

    ***

    ggml-sys-bleedingedge

    GGML is the machine learning library that makes llama.cpp work. If you're interested in LLMs, you've probably already heard of llama.cpp by now. If not, this one is probably irrelevant to you!

    ggml-sys-bleedingedge is a set of low level bindings to GGML which are automatically generated periodically. Theoretically it also supports stuff like CUDA, OpenCL, Metal via feature flags but this is not really tested.

    Repo: https://github.com/KerfuffleV2/ggml-sys-bleedingedge

    Crate: https://crates.io/crates/ggml-sys-bleedingedge

    ***

    llm-samplers

    You may or may not already know this: When you evaluate an LLM, you don't get any specific answer back. LLMs have a list of tokens they understand, which is referred to as their "vocabulary". For LLaMA models, this is about 32,000 tokens. So once you're done evaluating the LLM, you get a list of ~32,000 f32 scores (logits) out of it, one per token, which a softmax turns into a probability for each token.

    The naive approach of just picking the most probable token actually doesn't work that well ("greedy sampling") so there are various approaches to filtering, sorting and selecting tokens to produce better results.
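    To make that concrete, here's a toy sketch in plain Rust (this is not the llm-samplers API, just hypothetical helper functions): greedy always takes the single highest-scoring token, while something like top-k first narrows the field to the k best candidates.

    ```rust
    // Greedy sampling: always pick the index of the highest logit.
    fn greedy(logits: &[f32]) -> usize {
        logits
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap()
    }

    // Top-k filtering: sort token ids by descending score, keep the first k.
    fn top_k(logits: &[f32], k: usize) -> Vec<usize> {
        let mut idx: Vec<usize> = (0..logits.len()).collect();
        idx.sort_by(|&a, &b| logits[b].partial_cmp(&logits[a]).unwrap());
        idx.truncate(k);
        idx
    }

    fn main() {
        // Pretend vocabulary of 4 tokens with these scores.
        let logits = [0.1_f32, 2.5, 0.3, 1.7];
        assert_eq!(greedy(&logits), 1);
        assert_eq!(top_k(&logits, 2), vec![1, 3]);
    }
    ```

    A real sampler would then softmax over the surviving candidates and draw one at random (possibly after applying temperature, repetition penalties, etc.) instead of always taking the winner.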

    Repo: https://github.com/KerfuffleV2/llm-samplers

    Crate: https://crates.io/crates/llm-samplers

    ***

    rusty-ggml

    Higher level bindings built on the ggml-sys-bleedingedge crate. Not too much to say about this one: if you want to use GGML in Rust, there aren't that many options and using low level bindings directly isn't all that pleasant.

    I'm actually using this one in the next project, but it's very, very alpha.

    Repo: https://github.com/KerfuffleV2/rusty-ggml

    Crate: https://crates.io/crates/rusty-ggml

    ***

    smolrsrwkv

    If you're interested in LLMs, most (maybe all) of the models you know about, like LLaMA, ChatGPT, etc., are based on the Transformer paradigm. RWKV is a different approach to building large language models: https://github.com/BlinkDL/RWKV-LM

    This project started out "smol" as an attempt to teach myself about LLMs but I've gradually added features and backends. It's mostly useful as a learning aid/example of some of the other projects I made. In addition to being able to run inference using ndarray (pretty slow) it now supports GGML as a backend and I'm in the process of adding llm-samplers support.

    Repo: https://github.com/KerfuffleV2/smolrsrwkv

    repugnant-pickle

    Last (and possibly least) is repugnant-pickle. As far as I know, it is the only Rust crate available that will let you deal with PyTorch files (which are basically zipped up Python pickles). smolrsrwkv also uses this one to allow loading PyTorch RWKV models directly without having to convert them first.

    If that's not enough of a description: Pickle is the default Python data serialization format. It was designed by crazy people, though: it's extremely difficult to interoperate with unless you're Python, because it's basically a little stack-based virtual machine that can call into Python classes. Existing Rust crates don't fully support it.

    repugnant-pickle takes the approach of best-effort scraping pickled data rather than trying to be 100% correct and can deal with weird pickle stuff that other crates throw their hands up at.

    Repo: https://github.com/KerfuffleV2/repugnant-pickle

    Crate: TBD

    You'd probably get more redditors to migrate if there was an old reddit type style

    Apparently Lemmy copied the new reddit layout which shoves everything into the middle of the screen and wastes a massive amount of space. Even on the monitor I oriented vertically this is the case: the post I'm typing right now is using like 30% of the available screen real-estate and wasting the other 2/3rds.

    My philosophy has always been that if reddit removed support for the old style, that's when I'd stop using reddit. Switching to Lemmy is like switching to new reddit though.

    I made an account, but I can't really see using this as a replacement. I'd guess (but I might be wrong) that the type of people clinging to the old reddit style are also the most likely to do something like switch to Lemmy out of principle.

    (I looked around and it doesn't seem like there are any browser addons or userscripts to restyle it either.)

    Kerfuffle @sh.itjust.works

    https://github.com/KerfuffleV2 — various random open source projects.

    Posts 5
    Comments 262