What Kinds of Data do AI Chatbots Collect?
A chart titled "What Kind of Data Do AI Chatbots Collect?" lists and compares seven AI chatbots—Gemini, Claude, CoPilot, Deepseek, ChatGPT, Perplexity, and Grok—based on the types and number of data points they collect as of February 2025. The categories of data include: Contact Info, Location, Contacts, User Content, History, Identifiers, Diagnostics, Usage Data, Purchases, Other Data.
- Gemini: Collects all 10 data types; highest total at 22 data points
- Claude: Collects 7 types; 13 data points
- CoPilot: Collects 7 types; 12 data points
- Deepseek: Collects 6 types; 11 data points
- ChatGPT: Collects 6 types; 10 data points
- Perplexity: Collects 6 types; 10 data points
- Grok: Collects 4 types; 7 data points
- Locally run AI: 0 data points
Are there tutorials on how to do this? Should it be set up on a server on my local network??? How hard is it to set up? I have so many questions.
I recommend GPT4all if you want to run locally on your PC. It's super easy.
If you want to run it on a separate server, Ollama + some kind of web UI is the best option.
Ollama can also be run locally, but IMO it takes more learning than a GUI app like GPT4all.
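If you do try Ollama, the basic setup is roughly this (the install one-liner is the Linux one from their site; the model name is just an example, any model from their library works):

```
# Install Ollama (Linux one-liner; macOS/Windows have installers on the site)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model and chat with it right in the terminal
ollama run llama3
```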
Check out Ollama, it's probably the easiest way to get started these days. It provides tooling and an API that different chat frontends can connect to.
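For instance, once Ollama is running it serves an HTTP API on localhost:11434 that any frontend (or plain curl) can talk to; the model name here is just a placeholder for whatever you've pulled:

```
# One-shot question against a local model via Ollama's HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
# Returns JSON; the answer is in the "response" field
```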
https://ollama.ai/, this is what I've been using for over a year now. New models come out regularly and you just "ollama pull <model ID>" and then it's available to run locally. Then you can use Docker to run https://www.openwebui.com/ locally, giving it a ChatGPT-style interface (but even better and more configurable, and you can run prompts against any number of models you select at once). All free and available to everyone.
If you want to start playing around immediately, try Alpaca on Linux or LM Studio on Windows. See if it works for you, then go from there.
Alpaca actually runs its own Ollama instance.
I used this a while back and it was pretty straightforward: https://github.com/nathanlesage/local-chat
If only my hardware could support it...
I can actually run some smaller models locally on my 2017 laptop (though I have upgraded the RAM to 16 GB).
You'd be surprised how much can be done with how little.
It's possible to run local AI on a Raspberry Pi; it's all just a matter of speed and complexity. I run Ollama just fine on the two P-cores of my older i3 laptop. Granted, running it on the CUDA accelerator (graphics card) in my main rig is way faster.
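On low-end hardware the main knob is model size; something like this should run on CPU with a few GB of RAM (the model tag is just an example, check the Ollama library for current small models):

```
# TinyLlama is ~1.1B parameters and runs tolerably on CPU-only machines
ollama pull tinyllama
ollama run tinyllama "Explain in one sentence why local AI collects no data."
```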