The first GPT-4-class AI model anyone can download has arrived: Llama 405B
"Open source AI is the path forward," says Mark Zuckerberg, misusing the term.
Wake me up when it works offline.

"The Llama 3.1 models are available for download through Meta's own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time."
WAKE UP!
It works offline. When you use it with ollama, you don't have to register or agree to anything.
Once you have downloaded it, it keeps working; Meta can't shut it down.
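To illustrate the offline workflow: once a model is pulled with `ollama pull llama3.1:8b`, everything runs against the local daemon, no account required. A minimal sketch using only the standard library (the endpoint and model tag are ollama defaults; adjust the tag to whatever `ollama list` shows on your machine):

```python
import json
from urllib import request

# ollama's local REST endpoint (default port 11434, no auth, no internet needed)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(prompt: str, model: str = "llama3.1:8b") -> str:
    """Send a prompt to the locally running model and return its reply."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage: `print(ask("Why is the sky blue?"))`. Since the weights and the server both live on your machine, nothing here phones home.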
Well, yes and no. See the other comment: 64 GB VRAM at the lowest setting.
Oh, sure. For the 405B model it's absolutely infeasible to host it yourself. But for the smaller models (70B and 8B), it can work.
I was mostly replying to the part where they claimed Meta can take it away from you at any point, which is simply not true.
It's available through ollama already. I'm running the 8B model on my little server with its 3070 as of right now.
It's really impressive for an 8B model.
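As a sanity check on why an 8B model fits on a consumer card at all, here's a back-of-the-envelope estimate, assuming roughly 4-bit quantization (ollama's default quantized builds are in that ballpark; exact overhead varies by format):

```python
# Rough VRAM estimate for the weights of an 8B-parameter model at 4-bit quantization.
params = 8e9
bytes_per_param = 0.5                     # 4 bits = half a byte per weight
weights_gb = params * bytes_per_param / 1024**3
print(round(weights_gb, 1))               # → 3.7
```

Roughly 3.7 GB for the weights alone, leaving headroom on an 8 GB card for the KV cache and context. The unquantized 405B model, by contrast, needs hundreds of GB.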
Intriguing. Is that an 8 GB card? Might have to try this after all.
Yup, an 8 GB card.
It's my old one from the gaming PC, after switching to AMD.
It now serves as my little AI hub and Whisper server for Home Assistant.
What the heck is Whisper? I've been fooling around with HASS for ages and haven't heard of it, even after at least two minutes of searching. Is it OpenAI-affiliated hardware?
Whisper is a speech-to-text (STT) application that stems from OpenAI, AFAIK, but it's open source at this point.
I wrote a little guide on how to install it on a server with an NVIDIA GPU and hardware acceleration, and how to integrate it into your Home Assistant afterwards: https://a.lemmy.dbzer0.com/lemmy.dbzer0.com/comment/5330316
It's super fast with a GPU available, and I use those little M5 ATOM Echo microphones for this.
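For the curious, here's what using Whisper looks like from Python with the open-source `openai-whisper` package (`pip install openai-whisper`); the model name and audio path are placeholders, and PyTorch picks up a CUDA GPU automatically if one is available:

```python
# Minimal transcription sketch with the open-source whisper package.
# "base" is one of the smaller models and fits easily on an 8 GB card;
# "audio.wav" is a placeholder path.
def transcribe(path: str, model_name: str = "base") -> str:
    import whisper            # imported lazily; heavy dependency (pulls in PyTorch)
    model = whisper.load_model(model_name)
    result = model.transcribe(path)
    return result["text"].strip()
```

Home Assistant doesn't call this directly; the guide linked above wires Whisper up as a voice backend, but this is the same engine underneath.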
I'm running 3.1 8B via ollama as we speak, totally offline, and gave my info to nobody.