
Posts: 5 · Comments: 77 · Joined: 2 yr. ago

  • Banning either is fascist, although I assume/hope you were joking.

  • that's me and the dndmemes community

  • I honestly don't think this meme is 85-downvotes bad. The errors are humorous as well IMO.

  • As a solo developer, some things are out of scope, like writing translations or ensuring full compliance with accessibility standards. What's important is to have some knowledge of what blocks progress in these areas. For example, not treating all strings like ASCII, and preferring native widgets/HTML elements, as those better support accessibility tools.
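
    The "don't treat strings like ASCII" point can be shown with a minimal Python sketch (an illustrative example, not from the comment — the string `"café"` is a hypothetical input):

```python
# Illustrative sketch: why strings aren't ASCII.
# Character count and byte count diverge as soon as text leaves ASCII,
# so code assuming 1 character == 1 byte breaks on non-English input.
s = "café"  # hypothetical example string with one non-ASCII character

assert len(s) == 4                  # 4 characters
assert len(s.encode("utf-8")) == 5  # 5 bytes: "é" takes 2 bytes in UTF-8

# Slicing encoded bytes can split a character mid-sequence:
broken = s.encode("utf-8")[:4]      # cuts "é" in half
# broken.decode("utf-8") would raise UnicodeDecodeError
```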

  • Accessibility and internationalization first. A lot of projects start without it and tack it on later. It's so much better to have good roots and promote diversity and inclusivity from the start.

  • Obsidian is proprietary, FYI. I know this is Linux memes and not FOSS memes, but I think it's still important to point out.

  • From what I understand if you let someone do their job, you are a piece of shit. I don't agree with that statement whatsoever.

  • If you have a high-end GPU or lots of RAM, you can run some good-quality LLMs offline. I recommend watching Matthew Berman for tutorials (there are some covering paid hosting as well).

  • What about the save post button? Is that a Jerboa-only thing?

  • Diversity

  • Three times?!?

  • Wow, I was just about to start another bevy project too!

  • Do you already know other programming languages, or is Python your first one?

  • Sudoku, specifically 6x6 in LibreSudoku (available on F-Droid)

  • I don't know about you, but when I watch a video I'm not there to watch an ad.

    Also don't forget about the bad companies and scams (example: Established Titles).

  • Knowledge level: enthusiastic spectator. I don't make or fine-tune LLMs, but I do watch AI news, try out local LLMs, and use things like GitHub Copilot and ChatGPT.

    Question: Is it better to use Code Llama 34B or Llama 2 13B for a non-coding-related task?

    Context: I'm able to run either model locally, but I can't run the larger 70B model. So I was wondering if running the 34B Code Llama would be better, since it is larger. I heard that models with better coding abilities are also better at other types of tasks and at logic (I don't know if this is true; I just heard it somewhere).

  • I appreciate your sacrifice!

  • it's a cool thing you've made, but where's the joke?

  • Even though I don't agree with some of the points, I would still hope to see discussion rather than unexplained downvotes.