Not needed, I've got this gem.
Y'all can look, but don't touch.
Mummified corpse killed by Wikipedia
Wow, I've actually never seen this disc. Funnily enough, though, I have another disc with that program burned onto it. Someone didn't read the notice, it seems.
Do you also have a Route99 disc?
Fun fact: you can download Llama 3, an LLM made by Meta (which is surprisingly good for its size), and it's only 4.7 GB.
A DVD can store 4.7 GB of data, meaning you could in theory fit an LLM on a DVD.
Technically possible with a small enough model to work from. It's going to be pretty shit, but "working".
Now, if we were to go further down in scale, I'm curious how/if a 700MB CD version would work.
Or how many 1.44MB floppies you would need for the actual program and smallest viable model.
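The floppy question is just division: a "1.44 MB" floppy actually holds 1,474,560 bytes (1.44 × 1000 × 1024), so a 4.7 GB model alone would take a few thousand disks. A quick back-of-envelope in Python, assuming the 4.7 GB DVD-sized model from above and ignoring the program itself and any disk-spanning overhead:

```python
import math

MODEL_BYTES = 4.7 * 10**9          # the 4.7 GB model discussed in the thread
FLOPPY_BYTES = 1.44 * 1000 * 1024  # a "1.44 MB" floppy = 1,474,560 bytes

# Round up: a partially filled last disk still counts as a disk
floppies = math.ceil(MODEL_BYTES / FLOPPY_BYTES)
print(floppies)  # 3188 disks for the model alone
```

A genuinely "smallest viable" model would of course be far smaller than 4.7 GB, but even a 100 MB model would still span about 68 floppies.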
squints
That says, "PHILLIPS DVD+R"
So we're looking at a 4.7 GB model, or just a hair under that for the tiniest, most incredibly optimized implementation of
<INSERT_MODEL_NAME_HERE>
Llama 3 8B, Phi-3 Mini, Mistral, Moondream 2, Neural Chat, Starling, Code Llama, Llama 2 Uncensored, and LLaVA would fit.
Might be a DVD. A 70B Ollama LLM is like 1.5 GB. So you could save many models on one DVD.
It is a DVD; you can faintly see DVD+R on the left side.
A 70B model taking 1.5 GB? That's 0.02 bytes, about 0.17 bits, per parameter?
Are you sure you're not thinking of a heavily quantised and compressed 7B model or something? Ollama's llama3 70b is 40 GB from what I can find; that's a lot of DVDs.
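For anyone checking the arithmetic: storage divided by parameter count gives bytes per parameter, times eight for bits. A quick sanity check using the sizes quoted in the thread:

```python
def bits_per_param(size_gb: float, params_billion: float) -> float:
    """Bits of storage per parameter: GB -> gigabits, over billions of params."""
    return size_gb * 8 / params_billion

# 40 GB for 70B parameters is ~4.6 bits/param, consistent with ~4-bit quantization
print(round(bits_per_param(40, 70), 1))   # 4.6
# 1.5 GB for 70B parameters would be ~0.17 bits/param, which is implausibly low
print(round(bits_per_param(1.5, 70), 2))  # 0.17
```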
It does have the label DVD-R
Maybe not all that LLM, https://en.wikipedia.org/wiki/ELIZA
ELIZA was pretty impressive for the 1960s, as a chatbot for psychology.
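ELIZA's whole trick was keyword patterns plus pronoun reflection, no model weights at all, which is why it fit in a few kilobytes. A toy Python sketch of the idea (these patterns are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re

# Pronoun "reflection": turn the user's phrasing back at them
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs, tried in order
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # ELIZA's famous catch-all deflection

print(respond("I am sad about my job"))
# Why do you say you are sad about your job?
```

The entire "intelligence" is string substitution, which is exactly why it would fit on a floppy with room to spare.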
yes i guess it would be a funny experiment for just a local model
pkzip -& a:\chatgpt.zip c:\chatgpt.*