Local AI is one step closer through Mistral-NeMo 12B

Running LLMs outside a datacenter is usually not a realistic prospect. Now, Nvidia and Mistral are letting PC users run a new model locally.

Mistral NeMo 12B is the name of the new AI model, presented this week by Nvidia and Mistral. “We are fortunate to collaborate with the NVIDIA team, leveraging their top-tier hardware and software,” said Guillaume Lample, cofounder and chief scientist of Mistral AI. “Together, we have developed a model with unprecedented accuracy, flexibility, high-efficiency and enterprise-grade support and security thanks to NVIDIA AI Enterprise deployment.”

The promise of the new AI model is significant. Whereas previous LLMs were tied to datacenters, Mistral NeMo 12B moves to workstations, and it does so without sacrificing performance. At least, that is the promise.

21 comments
  • I don't agree with the initial premise about model size and practicality. I run much larger models on my own hardware.

    12th-gen Intel laptops had a common enthusiast-level build that came with an RTX 3080 Ti. The Ti is very important here: the 3080 was an 8 GB GPU, while the Ti is a 16 GB version. It is loud fan-wise and pretty much junk for battery life with the GPU running, but as a second-hand, very AI-capable all-in-one setup you can find one well under $2k. The Gigabyte Aorus is an example; the ROG models have more addressable system memory, which is a nice bonus. Use the Linux Hardware Probe website to check compatibility.

    I run quantized 8×7b mixture-of-experts models, which IIRC are equivalent to ~47b (the experts share layers, so the math is not simply 8×7=56). I also run a quantized 70b and 72b. These are massive.
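
    For a rough sense of what those need, here is back-of-the-envelope weight-memory arithmetic (my own numbers, nothing official; real model files run somewhat larger because of scales, embeddings, and metadata):

        # Approximate weight-only memory for a model at a given bit width.
        # params_billions * 1e9 weights * bits/8 bytes, expressed in GB.
        def weight_gb(params_billions: float, bits_per_weight: float) -> float:
            return params_billions * bits_per_weight / 8

        for name, params in [("7b", 7.0), ("8x7b (~47b)", 46.7), ("70b", 70.0)]:
            for bits in (16, 8, 4):
                print(f"{name:>12} @ {bits:>2}-bit: {weight_gb(params, bits):5.1f} GB")

    At 4-bit that is roughly 3.5 GB for a 7b (hence fitting on a phone) and ~35 GB for a 70b, which is why the big ones spill past a 16 GB GPU into system memory.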

    There is a major assumption people and articles make regarding models. If you're a developer looking to train models and are limited in hardware resources, you're going to be playing with stuff like 7b models, because training requires the full-size model and that is quite large. In most cases you don't need that just to run the model; a quantized 7b will fit natively on a flagship phone.

    If you just want to run the best model your hardware allows without training, run the largest model you can while testing quantization, and look for people who know how to quantize a model well. Good quantization applies different bit-reduction techniques in different areas so that emergent behavior is preserved in the tensors; it is not just float16 to int4 across the board.
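
    As a toy illustration of the basic building block (my own sketch; GPTQ, AWQ, and llama.cpp's k-quants choose bit widths per tensor and pick scales far more carefully), round-to-nearest int4 with per-group scales looks like this:

        import numpy as np

        # Toy symmetric round-to-nearest int4 quantization with per-group
        # scales. Real quantizers keep sensitive tensors at higher precision
        # and choose scales to minimize output error, not just max-abs.
        def quantize_int4(w, group=32):
            blocks = w.reshape(-1, group)
            scale = np.abs(blocks).max(axis=1, keepdims=True) / 7.0  # int4 range: -8..7
            q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
            return q, scale

        def dequantize(q, scale):
            return (q * scale).reshape(-1)

        w = np.random.randn(4096).astype(np.float32)
        q, s = quantize_int4(w)
        print(f"mean abs dequantization error: {np.abs(dequantize(q, s) - w).mean():.4f}")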

    Models are far more complex than they first seem. Small models are very capable, but they tend to have compounding issues that are difficult to identify if you're not familiar with them. Larger models tend to have one problem at a time and can self-diagnose many of them. Problems are almost always related to safety alignment, which is a form of overtraining, primarily there to avoid offending people given the nonsensical cultural biases and inner conflicts we all carry just under the surface of awareness. Prompt momentum can overcome most of these issues, but smaller models need a lot more momentum to be useful, and that momentum is hard to create in practice. Larger models require far less momentum to access advanced information.

    People tend to use, and write articles based on, what they can run for free via Google's data-mining stalkerware in exchange for a free (limited) cloud GPU program. This is why people poor enough to attempt writing for a living rarely write articles based on what real consumer hardware can run. There is a 16 GB GPU in the Google offering, but it lacks the system memory required to initially load a large model and the threads needed to split the model and run it the way I can. That setup is not usable.

    The best offline AI setup is to run it on a tower as a service to your local network. Get the biggest GPU memory you can afford.
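
    If the tower runs one of the usual servers (llama.cpp's server, Ollama, vLLM), it will typically expose an OpenAI-compatible HTTP endpoint that anything on the LAN can query. A minimal client sketch, with a made-up host and port standing in for your own:

        import json
        import urllib.request

        # Client for an OpenAI-compatible local endpoint. The address below
        # is a placeholder; point it at whatever your tower answers on.
        URL = "http://192.168.1.50:8080/v1/chat/completions"

        payload = {
            "model": "local",  # many local servers ignore or loosely match this
            "messages": [{"role": "user", "content": "Summarize Mistral NeMo 12B."}],
            "max_tokens": 256,
        }
        req = urllib.request.Request(
            URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])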
