what's the best model these days I could fit in 128gb ram?
Yes, this is a recipe for extremely slow inference: I'm running a 2013 Mac Pro with 128GB of RAM. I'm not optimizing for speed, I'm optimizing for aesthetics and intelligence :)
Anyway, what model would you recommend? I'm looking for something general-purpose but with solid programming skills. Ideally abliterated as well; I'm running this locally, I might as well have all the freedoms. Thanks for the tips!
With 128GB of RAM on a Mac, GLM 4.5 Air is going to be one of your best options. You could run it anywhere from Q5 to Q8 depending on how you wanna manage your speed-to-quality ratio.
I have a different system that likely runs it slower than yours will, and I get 5 T/s generation, which is just about the speed I read at (using Q8).
I do hear that ollama may be having issues with that model though, so you may have to wait for an update to it.
I use llamacpp and llama-swap with openwebui, so if you want any tips on switching over I'd be happy to help. Llamacpp is usually one of the first projects to start supporting new models when they come out.
Edit: just reread your post. I was thinking it was a newer Mac lol. This may be a slow model for you, but I do think it'll be one of the best you can run.
oh I didn't realize I could use llamacpp with openwebui. I recall reading something about how ollama was somehow becoming less FOSS, so I'm inclined to use llamacpp. Plus I want to be able to more easily use sharded GGUFs. You have a guide for setting up llamacpp with openwebui?
I somehow hadn't heard of GLM 4.5 Air, I'll take a look thanks!
What happened to ollama? Did it get bought? Is it turning proprietary?
Yeah, setting up openwebui with llamacpp is pretty easy. I would start by cloning llamacpp from GitHub and then following the short build guide linked in the readme. I don't have a Mac, but I've found building it to be pretty simple. Just one or two commands for me.
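For reference, the build is roughly this (on a Mac it picks up Metal by default; double-check the build docs linked in the readme since the exact steps change over time):

```
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j
```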
Once it's built, just run llama-server with the right flags telling it which model to load. I think it can take huggingface links, but I always just download GGUF files. They have good documentation for llama-server on the readme. You also specify a port when you run llama-server.
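Something like this (the model path and filename here are just examples, swap in whatever GGUF you actually download):

```
# -m points at your local GGUF, --port is whatever openwebui will talk to, -c sets context size
./build/bin/llama-server -m ~/models/GLM-4.5-Air-Q5_K_M.gguf --port 8080 -c 8192
```

I think the huggingface option is the -hf flag these days, but check llama-server --help to be sure.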
Then you just add http://127.0.0.1:PORT_YOU_CHOSE/v1 as one of your OpenAI API connections in the openwebui admin panel.
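You can sanity-check the endpoint from a terminal before touching openwebui, since llama-server speaks the OpenAI-style API (assuming port 8080 from the example above):

```
curl http://127.0.0.1:8080/v1/models
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "say hi"}]}'
```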
Separately, if you want to be able to swap models on the fly, you can add llama-swap into the mix. I'd look into this after you get llamacpp running and are somewhat comfy with it. You'll absolutely want it though coming from ollama. At this point it's a full replacement IMO.
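To give you an idea, llama-swap is basically a little proxy with a YAML config where each entry is a llama-server command. Something like this, from memory (the key names and the ${PORT} macro are off the top of my head, so check the llama-swap readme for the exact format):

```
cat > config.yaml <<'EOF'
models:
  "glm-4.5-air":
    cmd: |
      /path/to/llama.cpp/build/bin/llama-server
      -m /path/to/GLM-4.5-Air-Q5_K_M.gguf
      --port ${PORT}
EOF
llama-swap --config config.yaml
```

Then you point openwebui at llama-swap's port instead of at llama-server directly, and it starts and stops the right llama-server for whichever model you pick.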