I've been building MinimalChat for a while now, and based on the feedback I've received, it's in a pretty decent place for general use. I figured I'd share it here for anyone who might be interested!
Quick Features Overview:
Mobile PWA Support: Install the site like a normal app on any device.
Any OpenAI-Formatted API: Works with LM Studio, OpenRouter, etc.
Local Storage: All data is stored locally in the browser. Setup is minimal: in Docker, just enter a port and go.
Experimental Conversational Mode (GPT Models for now)
Basic File Upload and Storage Support: Files are stored locally in the browser.
Vision Support with Maintained Context
Regen/Edit Previous User Messages
Swap Models Anytime: Maintain conversational context while switching models.
Set/Save System Prompts: Set the system prompt; saved prompts are kept in a list so you can switch between them easily.
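Since any OpenAI-formatted backend works, swapping providers mostly comes down to changing a base URL. A minimal sketch of the request shape involved (the endpoint URL and model name here are assumptions, e.g. LM Studio's default local server, not MinimalChat's actual code):

```javascript
// Build an OpenAI-style chat-completions payload. Any backend that speaks
// this format (LM Studio, OpenRouter, etc.) accepts the same shape.
function buildChatRequest(model, messages, stream = false) {
  return { model, messages, stream };
}

const payload = buildChatRequest("local-model", [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" },
]);

// Hypothetical usage against a local server (LM Studio's default port shown):
// fetch("http://localhost:1234/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(payload),
// }).then(r => r.json()).then(d => console.log(d.choices[0].message.content));

console.log(payload.messages.length); // 2
```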
The idea is to make it essentially foolproof to deploy and set up while being generally full-featured and aesthetically pleasing. No additional databases or servers are needed; everything is contained and managed locally inside the web app itself.
It's another chat client in a sea of clients, but in my opinion it's unique in its own ways. Enjoy! Feedback is always appreciated!
I thought sharing here might be a good idea as well; some might find it useful!
Since the initial post I've added updates that hugely improved message rendering speed, as well as a plethora of new models you can load and run fully locally in your browser (Edge and Chrome) with WebGPU and WebLLM.
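For anyone curious what the in-browser path looks like, here's a rough sketch using the WebLLM library; the model id is illustrative (WebLLM ships its own list of prebuilt models), and this assumes a WebGPU-capable browser such as Chrome or Edge:

```javascript
// Sketch: run a model fully in-browser with WebLLM (the "@mlc-ai/web-llm"
// package). The model id below is an example, not MinimalChat's config.
const MODEL_ID = "Llama-3.1-8B-Instruct-q4f32_1-MLC";

async function chatLocally(userText) {
  // Dynamic import so WebLLM only loads when we actually have WebGPU.
  const { CreateMLCEngine } = await import("@mlc-ai/web-llm");
  const engine = await CreateMLCEngine(MODEL_ID); // downloads and compiles weights
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: userText }],
  });
  return reply.choices[0].message.content;
}

// WebGPU is exposed as navigator.gpu in supporting browsers.
const webGpuAvailable = typeof navigator !== "undefined" && "gpu" in navigator;

if (webGpuAvailable) {
  chatLocally("Hello from the browser!").then(console.log);
}
```

The first call is slow (model weights are fetched and cached), after which everything runs on-device with no server involved.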
If you add tool use capabilities (e.g., calculator, web search, etc.), ideally something the end user can easily add to, I'll definitely start using it. I'm glad people are working on FOSS things like this, though.
This looks great! I imagine the documents you upload are used for RAG?
If so, do you also show citations in the chat answers for what context the model used to answer the user's query?
I ask because Verba by weaviate does that, but I like yours more and I'd like to switch to it (I've had a hard time getting Verba to work in the past).
Unfortunately, there currently isn't a true RAG implementation, largely because this site/app is fully self-contained with no additional servers or databases, which are typically required for RAG.
For now file uploads are stored in the browser's own local database and the content can be extracted and added to the current conversation context easily.
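To illustrate the approach described above (no retrieval step, just splicing extracted file text into the conversation), here's a rough sketch; the function name and message shape are my own for illustration, not MinimalChat's actual code:

```javascript
// Prepend extracted file text to a conversation as a context message, so the
// model sees the full document alongside the chat history. (Hypothetical
// helper; MinimalChat's real implementation may differ.)
function addFileToContext(messages, fileName, fileText) {
  const contextMessage = {
    role: "user",
    content: `Contents of ${fileName}:\n\n${fileText}`,
  };
  return [contextMessage, ...messages];
}

const conversation = [
  { role: "user", content: "Summarize the attached file." },
];
const withFile = addFileToContext(conversation, "notes.txt", "Meeting at 10am.");
console.log(withFile.length); // 2
```

The trade-off versus RAG is that the whole file consumes context-window tokens, which is fine for small documents but doesn't scale to large corpora.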
I definitely want to add a fuller RAG system, but it's a process to say the least, and if I implement it I want it to be genuinely effective. My experience with RAG has generally left me unimpressed, with a few decent implementations being the exception.
If this project sees the value of privacy & security for local & self-hosted LLM chat, why does this project only offer proprietary, corpo means for contributions & communications?
Choosing proprietary tools and services for your free software project ultimately sends a message to downstream developers and users of your project that freedom of all users, developers included, is not a priority.