Technically possible with a small enough model to work from. It's going to be pretty shit, but "working".
Now, if we were to go further down in scale, I'm curious how/if a 700MB CD version would work.
Or how many 1.44MB floppies you would need for the actual program and smallest viable model.
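Back-of-the-envelope, taking a floppy's formatted capacity as 1,474,560 bytes and plugging in some rough payload sizes (all of the sizes below are ballpark guesses, not figures for any specific release), the media math looks like this:

```python
import math

# Media capacities in bytes (floppy is formatted capacity; CD/DVD as labelled).
FLOPPY = 1_474_560         # 1.44MB floppy
CD = 700 * 1000**2         # 700MB CD-R
DVD = 4_700_000_000        # 4.7GB single-layer DVD

# Hypothetical payload sizes -- ballpark, not exact for any release.
payloads = {
    "inference binary (~5MB)": 5 * 1000**2,
    "tiny ~1B model, 4-bit (~600MB)": 600 * 1000**2,
    "~7B model, 4-bit (~4GB)": 4 * 1000**3,
}

for name, size in payloads.items():
    print(f"{name}: {math.ceil(size / FLOPPY)} floppies, "
          f"CD: {'yes' if size <= CD else 'no'}, "
          f"DVD: {'yes' if size <= DVD else 'no'}")
```

So a CD could plausibly hold a very small quantised model, while the floppy count for anything conversational runs into the hundreds or thousands.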
*squints*

That says, "PHILLIPS DVD+R"
So we're looking at a 4.7GB model, or just a hair under the tiniest, most incredibly optimized implementation of <INSERT_MODEL_NAME_HERE>
llama 3 8b, phi 3 mini, Mistral, moondream 2, neural chat, starling, code llama, llama 2 uncensored, and llava would fit.
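That tracks with the usual rule of thumb: download size ≈ parameter count × bits per weight ÷ 8, so models up to roughly 8B parameters squeak under 4.7GB at 4-bit quantisation. A sketch, where the parameter counts and the ~4.5 effective bits per weight (block scales included) are approximations:

```python
# Rule of thumb: size_bytes ~= params * bits_per_weight / 8.
def approx_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    # 1e9 params * bits / 8 bytes, divided by 1e9 bytes/GB, cancels out:
    return params_billion * bits_per_weight / 8

DVD_GB = 4.7
for name, params_b in [("phi 3 mini", 3.8), ("mistral 7b", 7.3), ("llama 3 8b", 8.0)]:
    size = approx_size_gb(params_b)
    print(f"{name}: ~{size:.1f}GB at 4-bit, fits on DVD: {size <= DVD_GB}")
```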
Just interested in the topic, did you 🔨 offline privately?
I'm not an expert on them or anything, but feel free
Might be a DVD. A 70b Ollama LLM is like 1.5GB, so you could save many models on one DVD.
It is a DVD, you can faintly see DVD+R on the left side.
A 70b model taking 1.5GB? So about 0.17 bits per parameter?
Are you sure you're not thinking of a heavily quantised and compressed 7b model or something? Ollama's llama3 70b is 40GB from what I can find, and that's a lot of DVDs.
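The arithmetic is easy to check: at 70B parameters, 1.5GB comes out to about 0.17 bits per parameter, which no quantisation scheme gets anywhere near, while 40GB lands at a plausible ~4.6 bits, and spreads across quite a few single-layer DVDs:

```python
import math

PARAMS = 70e9  # 70B parameters

def bits_per_param(size_gb: float) -> float:
    return size_gb * 1e9 * 8 / PARAMS

print(f"{bits_per_param(1.5):.2f} bits/param at 1.5GB")     # ~0.17 -- implausible
print(f"{bits_per_param(40.0):.2f} bits/param at 40GB")     # ~4.57 -- typical Q4
print(math.ceil(40.0 / 4.7), "single-layer DVDs for 40GB")  # 9
```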
Ah yes, probably the smaller version, you're right. Still a very good LLM, better than GPT-3.
Less than half of a BDXL though! The dream still breathes
6 0 ReplyFor some reason, triple layer writable blu-ray exists. 100GB each
https://www.verbatim.com/prod/optical-media/blu-ray/bd-r-xl-tl/bd-r-xl-tl/
It does have the label DVD-R
Maybe it's not an LLM at all: https://en.wikipedia.org/wiki/ELIZA
ELIZA was pretty impressive for the 1960s, as a chatbot for psychology.
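And it really would fit on a floppy: ELIZA's whole trick is keyword pattern matching plus pronoun reflection, with no model weights at all. A toy sketch of the idea (illustrative only, not Weizenbaum's original script):

```python
import re

# Pronoun reflection table and a few keyword rules, ELIZA-style.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "you": "I", "your": "my", "are": "am"}

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r".*", "Please tell me more."),       # catch-all fallback
]

def reflect(fragment: str) -> str:
    # Swap first/second person so the reply can echo the input.
    return " ".join(REFLECT.get(word, word) for word in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need a bigger hard drive"))
# -> Why do you need a bigger hard drive?
```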
Yes, I guess it would be a funny experiment for just a local model.
4 0 Replypkzip c:\chatgpt*.* a:\chatgpt.zip -&