Llama 3.1 Megathread
Blaed @lemmy.world · Posts: 71 · Comments: 14 · Joined: 2 yr. ago
HyperTech News Report #0003 - Expanding Horizons
We're building FOSAI models! Cast your votes and pick your tunings.
LM Studio - A new tool to discover, download, and run local LLMs
Combining 'LocalAI' + 'Continue' to Create a Private Co-Pilot Coding Assistant!
Cheetor - A New Multi-Modal LLM Strategy Empowered by Controllable Knowledge Re-Injection
I used to feel the same way until I saw some surprisingly strong performance results from 3B and 7B parameter models.
Granted, it wasn't anything I'd deploy to production, but using the smaller models to prototype quick ideas is great before renting a GPU and spending time working with the bigger models.
Give a few models a try! You might be pleasantly surprised. There’s plenty to choose from too. You will get wildly different results depending on your use case and prompting approach.
Let us know if you end up finding one you like! I think it is only a matter of time before we’re running 40B+ parameters at home (casually).