A chart titled "What Kind of Data Do AI Chatbots Collect?" lists and compares seven AI chatbots—Gemini, Claude, CoPilot, Deepseek, ChatGPT, Perplexity, and Grok—based on the types and number of data points they collect as of February 2025. The categories of data include: Contact Info, Location, Contacts, User Content, History, Identifiers, Diagnostics, Usage Data, Purchases, Other Data.
Gemini: Collects all 10 data types; highest total at 22 data points
https://ollama.ai/ is what I've been using for over a year now; new models come out regularly, and you just run "ollama pull <model ID>" and the model is available to run locally. Then you can use Docker to run https://www.openwebui.com/ locally, giving you a ChatGPT-style interface (but even better and more configurable, and you can run prompts against any number of models you select at once).
Check out Ollama; it's probably the easiest way to get started these days. It provides tooling and an API that different chat frontends can connect to.
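If you want to poke at that API without a frontend, here's a minimal sketch of what it looks like from Python. It assumes Ollama is already running on its default local port 11434 and that you've pulled a model; "mistral" below is just a placeholder for whatever model ID you actually pulled.

    # Minimal sketch: query a locally running Ollama instance over its HTTP API.
    # Assumes Ollama is serving on the default port 11434 and that the model
    # named below has already been pulled (e.g. `ollama pull mistral`).
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"
    MODEL = "mistral"  # placeholder: use whatever model ID you pulled

    def ask(prompt: str) -> str:
        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL, "prompt": prompt, "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        # With stream=False the API returns a single JSON object whose
        # "response" field holds the full completion.
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask("Summarize what a local LLM runner like Ollama does."))

Chat frontends like Open WebUI are essentially doing the same thing behind the scenes, just against the streaming endpoints.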
It's possible to run local AI on a Raspberry Pi; it's all just a matter of speed and complexity. I run Ollama just fine on the two P-cores of my older i3 laptop. Granted, running it on the CUDA-capable GPU in my main rig is far faster.
I would hazard a guess that the only reason the others aren't as high is that they don't have the same access to data. It's not that they don't want to; they simply can't (yet).
Don't use either. Until Trump, I considered CCP spyware the more dangerous of the two, because it would be collecting info that could be used to blackmail US politicians and businesses. Now it's a coin flip. In either case, use EU or FOSS apps whenever possible.
All of the services you see above are offered to EU citizens, which is why they also have to abide by the GDPR. The GDPR does not disallow the gathering of information; Google, for example, is GDPR compliant, yet they are number 1 on that list. That's why I would like to know whether European companies still try to build a business case around personal data or not.
+1 for Mistral; they were the first (or one of the first) to release models under the Apache open-source license. I run Mistral-7B and various fine-tunes of it locally, and they've always been really high quality overall. Mistral-Medium packed a punch (mid-size, obviously), and it definitely competes with the big ones.
Nope, these services almost always require a user login, eventually tied to a cell number (i.e., non-disposable), and they associate user content and other data points with the account. Either way, user prompts are always collected; how they're used is a good question.
I just came across this article, which people who are into self-hosting can take a look at and participate in. It's basically a tool that generates never-ending web pages full of nonsense that load slowly (but not so slowly that the AI tools move on), to slow down scrapers and make it cost them more to scrape the internet if enough people are doing it. You can also hide it in a way that a legit user would never see it on your site:
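For anyone wondering roughly how that works, here's a toy sketch of the idea in Python/Flask. To be clear, this is not the actual tool from the article, just an illustration I'm making up: an endpoint that streams an endless page of gibberish a little at a time, reachable only from a link a real visitor would never click.

    # Toy sketch of an AI-scraper tarpit (not the tool from the article):
    # an endpoint that slowly streams endless nonsense so crawlers that
    # follow it waste time, while normal visitors never end up here.
    import random
    import string
    import time

    from flask import Flask, Response, stream_with_context

    app = Flask(__name__)

    def endless_gibberish(chunk_words: int = 20, delay: float = 2.0):
        """Yield plausible-looking junk text forever, pausing between chunks."""
        while True:
            words = (
                "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 10)))
                for _ in range(chunk_words)
            )
            yield " ".join(words) + ".\n"
            # Slow enough to waste the crawler's time, not so slow it gives up.
            time.sleep(delay)

    # Hypothetical path; on a real site you'd link to it only from a hidden,
    # rel="nofollow" anchor so legitimate visitors never see or click it.
    @app.route("/archive/")
    def tarpit():
        return Response(
            stream_with_context(endless_gibberish()),
            mimetype="text/plain",
        )

    if __name__ == "__main__":
        app.run(port=8080)

The tool from the article presumably does this much more cleverly (and with text that looks like real prose), but the basic trade-off is the same: tiny cost for you, slow drip for the scraper.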
Pretty sure this is what they scrape from your device if you install their app. I don't know how else they would get access to contacts, location, and stuff. So yeah, you could just run it on a virtual Android device and feed it garbage data, but I assume the app or their backend would detect that and throw out your data.