In Text Generation Web UI, how do I set it up for a small model?
github.com/oobabooga/text-generation-webui — A Gradio web UI for Large Language Models with support for multiple inference backends.
I'm trying to use Text Generation Web UI, but I'm limited to small models because my CPU is an "Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz". How do I set this up?
You're probably aware of this, but just in case you're not: LLMs are computationally intensive, and a Core 2 Duo from that era isn't going to provide a good experience.
That said, if you get it working, it would be interesting to hear how well it runs.
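If you do want to try, a CPU-only setup with a small quantized GGUF model is the most realistic path. A rough sketch of the steps (the start script and `--cpu` flag are taken from the project's README at the time of writing, so double-check them against the current docs):

```shell
# Clone the web UI and run the one-click installer (Linux example).
# The first run downloads and installs all dependencies.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
./start_linux.sh

# Force CPU-only inference on subsequent launches by adding the flag
# to CMD_FLAGS.txt, which the start script reads automatically.
echo "--cpu" >> CMD_FLAGS.txt

# Then download a small quantized GGUF model (roughly 1-3B parameters,
# 4-bit quantization) via the UI's "Model" tab and load it with the
# llama.cpp loader, which is the backend designed for CPU inference.
```

One caveat worth checking: a Core 2 Duo predates AVX, and some prebuilt llama.cpp binaries assume AVX or AVX2 support, so you may need a build compiled without those instruction sets.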