It seems like common cynicism. Mozilla adds this feature so as not to cede major features to other browsers. Mozilla's implementation lets you natively pick from lots of different AI providers.
Not every feature is for everyone. Not every feature is done being improved on at release.
And in spite of popular opinions, organizations don't do just one thing and then do just the next thing and the thing after that. Organizations can and do focus on and prioritize many things at the same time.
And for people who are naysaying AI at every mention, it has a lot of great and fascinating uses, and if you think otherwise, you really should try them more. I've used it plenty for work and life. It's not going away, might as well do some nice things with it.
Thing is, for your average user with no GPU who never thinks about RAM, running a local LLM is intimidating. But it shouldn't be. Any system with even an integrated GPU can run simple models locally, and the more RAM the better.
The not-so-dirty secret is that ChatGPT 3 vs 4 isn't that big a difference, and neither is leaps and bounds ahead of the publicly available models for about 99% of tasks. For that 1%, people will ooh and aah over it, but 99% of use cases are only seeing marginal gains on 4o.
And the simplified models that run "only" 95% as well? They can use 90% fewer resources and give pretty much identical answers outside of hyperspecific use cases.
Running a "smol" model, as some are called, gets you all the bang for none of the buck, and your data stays on your system and never leaves.
I've been yelling from the rooftops to some stupid corporate types that once the model is trained, it's trained. Unless you are training models yourself, there is no need for the massive AI clusters; you just need the model. Run it local on your hardware at a fraction of the cost.
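To make that concrete, here's roughly what "run it local" looks like with llama-cpp-python and a quantized GGUF file, all on the CPU. The model path and settings are placeholders for whichever small model you grab; this is a sketch, not a recommendation of a specific model.

```python
# Rough sketch: chat with a small quantized model on the CPU using
# llama-cpp-python (pip install llama-cpp-python).
# The GGUF path is a placeholder for whatever small model you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-small-model.Q4_K_M.gguf",  # placeholder file
    n_ctx=4096,    # context window
    n_threads=8,   # CPU threads; no GPU required
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Why is local inference cheaper than renting a cluster?"},
    ],
    max_tokens=256,
)

print(reply["choices"][0]["message"]["content"])
```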
That's the tragedy with this new feature: Mozilla fast-tracked it past more popular requests and stuck it straight into Release Firefox.
But they only rushed the part that connects to third parties. There was also a "localhost" option, originally alongside the Big Five corporate offerings, but Mozilla ultimately decided to bury it inside the about:config settings.
I'm guessing the reason (and a good one at that) is that simply having an option to connect to a local chatbot just leads to confused users, because they also need the actual chatbot running on their system. If you can set that up, then you can certainly toggle a simple switch in about:config to show the option.
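For reference, here's roughly what that toggle looks like as a user.js snippet. The pref names are from memory, so double-check them in about:config before relying on this; the localhost URL is whatever your local server listens on.

```
// Sketch only: pref names from memory, verify them in about:config.
user_pref("browser.ml.chat.enabled", true);            // enable the AI sidebar
user_pref("browser.ml.chat.hideLocalhost", false);     // un-hide the localhost provider
user_pref("browser.ml.chat.provider", "http://localhost:8080");  // your local chatbot
```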
Last time I tried using a local LLM (about a year ago), it generated only a couple of words per second and the answers were barely relevant. Also, I don't see how a local LLM can fulfill the glorified-search-engine role that people use LLMs for.
Try again. Simplified models take the large ones and pare them down in memory requirements, and they can even run on the CPU. The "smol" model I mentioned is real, and hyperfast.
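For what it's worth, here's how little it takes to try one today. A minimal sketch with Hugging Face transformers, using one of the SmolLM-family instruct models as the example; swap in whatever small model you like, and it will happily run on a plain CPU, just slower than with a GPU.

```python
# Minimal sketch: run a small instruct model locally with transformers.
# pip install transformers torch  -- the model name is just an example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # example small model
    device="cpu",                                 # no GPU needed
)

# Recent transformers versions accept chat-style messages directly.
messages = [{"role": "user", "content": "Explain RAM vs VRAM in two sentences."}]
out = generator(messages, max_new_tokens=128)

# The pipeline returns the whole conversation; the last turn is the reply.
print(out[0]["generated_text"][-1]["content"])
```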
I've never had the urge to use a chat bot personally, but I'm pretty sure I'm in the minority. Lots of people use these things all the time for so much stuff we probably wouldn't even consider.
I've worked with a few people who all but rely on these things to produce any creative work they have to do.
Maybe we run in different circles but I think a lot of people don't even talk about how they're using it.
why a fucking chatbot? translate a page better for me you fucking losers, all the translation options suck for privacy outside of specifically trained local AIs. this is the BEST use case for a small local LLM, yet mozilla with all its brains and resources couldn't rub two neurons together for this.
or they could do character prediction on your typing to make typing faster. just some legit examples. why waste resources building a chat ai into my browser when i can just open a website???
Note that you need an account to use these supported systems. HuggingChat allows a few connections as a guest before cutting off access; it's basically a trial, so you have to create an account.
It just adds ChatGPT or similar to your sidebar. Chatbots can do a lot of things; they are mostly good for information research and technical help, although they have serious flaws, like sometimes hallucinating false information.
It is a sidebar that sends a query from your browser directly to a server run by a giant corporation like Google or OpenAI, consumes an excessive amount of energy and water, then sends back a response that may or may not be true (because AI is incapable of doing anything but generating what it thinks you want to see).
Not only is it unethical in my opinion, it's also ridiculously rudimentary...
From the description in the UI, it does sound like it. Theoretically, a chatbot could be created where you can ask questions about the webpage you currently have open, for example if you don't want to read a long article. I guess you could just throw the link into an existing chatbot anyway, but direct integration might be more convenient.
Or a chatbot could be created that has access to your browser history, bookmarks, and tabs, so you can ask it when you last saw certain information. However, you'd need a locally running chatbot for that, which makes it more difficult to implement.
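Half of that is already easy to prototype outside the browser, since Firefox keeps history in a plain SQLite file (places.sqlite in your profile folder). Here's a rough sketch of the retrieval side; the profile path and search term are placeholders, and you'd still need to hand the matches to a locally running model to get the "ask it in natural language" part.

```python
# Rough sketch: search Firefox history for a keyword, the kind of lookup a
# local "when did I see this?" chatbot would do before asking the model.
# Copy places.sqlite first; Firefox locks the live file while it's running.
import shutil
import sqlite3
from datetime import datetime, timezone

PROFILE_DB = "/path/to/firefox-profile/places.sqlite"  # placeholder path
shutil.copy(PROFILE_DB, "places-copy.sqlite")

conn = sqlite3.connect("places-copy.sqlite")
rows = conn.execute(
    """
    SELECT url, title, last_visit_date
    FROM moz_places
    WHERE title LIKE ? AND last_visit_date IS NOT NULL
    ORDER BY last_visit_date DESC
    LIMIT 10
    """,
    ("%rhyme scheme%",),  # example search term
).fetchall()

for url, title, visited_us in rows:
    # last_visit_date is stored as microseconds since the Unix epoch
    when = datetime.fromtimestamp(visited_us / 1_000_000, tz=timezone.utc)
    print(when.date(), title, url)

conn.close()
# These hits (plus bookmarks/tabs) would become the context you feed a
# locally running model so it can answer "when did I last see X?".
```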
I think Mistral is model-available (i.e. I'm not sure if they release training data/code, but they do release the model shape and weights); HuggingChat definitely is open source and model-available.
Sorry, but HuggingChat / HuggingFace and all models on it are not open source. (Edit: Oh, you meant the UI; HuggingChat is open source. Yeah, sorry, I was focused on the models. And there is no open source model, from my understanding.) -> https://opensource.org/ai/open-source-ai-definition Of course opensource.org is not the only authority on what the term open source means, but it's not a bad start.
There are no open source AI models, even if they tell you that they are. HuggingFace is the closest thing to something like open source, where you can download AI models to run locally without an internet connection. There are applications for that. In Firefox, the HuggingChat integration uses models from HuggingFace, but I think it runs them on a server rather than downloading them locally.
The reason they are not open source is that we don't know exactly what data they were trained on. We cannot rebuild them on our own. As for trustworthiness, I assume you are talking about the integration and the software using the models, right? At least it is implemented by Mozilla, so there is (to me) some sort of trust involved. Yes, even after all the bullshit, I trust Mozilla.
It's "open weights" if they are publishing the model file but nothing about its creation. There's some hypothetical security concerns with training it to give very specific outputs for certain very specific inputs but I feel like that's one of those kind of far fetched worries especially if you want to use it for chat or summarization and the comparison is getting AI output from a server API. Local is still way better.
Yeah, it did. That feature has been there at least since Mozilla enabled the "Firefox Labs" section in settings by default a few months ago, and maybe even earlier than that.
Because browsers are the most useful tool on most computers. Ordinary people google things or ask ChatGPT mundane questions. If their browser can do that, they need one app less, and it will be more convenient, which is what non-tech-savvy people especially care about.
I will say, the Le Chat provider is pretty decent. You really can use natural language with it: "Rewrite it with a better rhyme scheme", "remove the last line", and it just got it.
Why no local option though?
Why no anonymising option?
Edit:
There is a right-click option, which actually does make this officially useful for me now (summarize this!).
Other models do have RAG options, and Mistral supports making agents with specified documentation to at least fine-tune on (not as good as full grounding though, IMHO).
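Since RAG gets thrown around a lot: the basic idea is just "retrieve the most relevant chunks of your own documents and paste them into the prompt". A bare-bones sketch with sentence-transformers; the model name and documents are examples, and a real setup would chunk documents properly and use a vector store.

```python
# Bare-bones RAG sketch: embed some documents, retrieve the closest one to the
# question, and build a grounded prompt for whatever local model you run.
# pip install sentence-transformers  -- the embedding model name is an example.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Our API rate limit is 100 requests per minute per key.",
    "Refunds are processed within five business days.",
    "The on-call rotation changes every Monday at 09:00 UTC.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small example embedding model
doc_vecs = embedder.encode(docs, convert_to_tensor=True)

question = "How fast can I hit the API?"
q_vec = embedder.encode(question, convert_to_tensor=True)

# Cosine similarity between the question and every document.
scores = util.cos_sim(q_vec, doc_vecs)[0]
best = docs[int(scores.argmax())]

prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
print(prompt)  # hand this to your local model of choice
```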