ollama 0.11.9 Introducing A Nice CPU/GPU Performance Optimization
www.phoronix.com
Does ollama work better than vanilla llama.cpp?
I've just migrated from LM Server to llama.cpp to try and liberate my stack a bit, and I also heard Ollama gave up support for AMD chips.
Edit: fixed very bad autocorrect
It depends on what you mean.
To me, Ollama feels like it's designed to be a developer-first, local LLM server with just enough functionality to get you to a POC, at which point you're expected to move to someone else's compute resources.
llama.cpp actually supports more backends, with continuous performance improvements and support for more models.
Ollama uses ROCm whereas llama.cpp uses Vulkan compute. Which one will perform better depends on many factors, but Vulkan compute should be easier to set up.
Ollama does use ROCm; however, so does llama.cpp. Vulkan happens to be another available backend supported by llama.cpp.
GitHub: llama.cpp Supported Backends
There are old PRs which attempted to bring Vulkan support to Ollama - a logical and helpful move, given that the Ollama engine is based on llama.cpp - but the Ollama maintainers weren't interested.
As for performance vs ROCm, it does fine. Against CUDA, it also does well unless you're in a multi-GPU setup. Its magic trick is compatibility. Pretty much everything runs Vulkan. And Vulkan is compatible across generations of cards, architectures AND vendors. That's how I'm running a single PC with Nvidia and AMD cards together.
I dunno about better, but different. The API and model management that it offers has been nice when building things that want to use different-sized models for different tasks, since it will manage the given resources and schedule runners on GPU/CPU. My hardware combo is Intel/Nvidia so I've not had to futz with getting AMD stuff running. If you don't need any of that, and llama.cpp works for you, there's no reason to use Ollama.
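As a rough sketch of what that looks like in practice: the local REST API lets you pick a model per request and Ollama handles loading/unloading on GPU/CPU. The model names below are just examples and assume they've already been pulled.

```python
# Minimal sketch: calling Ollama's local REST API (default port 11434) with
# a different model per task. Model names are examples; pull them first
# with `ollama pull <name>`.
import requests

def generate(model: str, prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Small model for a quick classification, a larger one for drafting text.
label = generate("llama3.2:3b", "Classify as spam or not spam: 'You won a prize!'")
draft = generate("llama3.1:8b", "Write a short summary of the Vulkan API.")
print(label, draft, sep="\n")
```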
That is something I wish was easier with llama.cpp
I’m using llama-swap for that, but you have to manually specify your models in a YAML config, then you can set up groups of models that can run at the same time.
I also have to manually download models, which is more cumbersome.
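For the manual download step, something like the Hugging Face Hub client takes some of the pain out of fetching GGUF files. This is only a sketch; the repo and file names are examples, not my actual setup.

```python
# Hypothetical helper: download a GGUF file so llama-server / llama-swap
# can point at it. Repo ID and filename below are illustrative examples.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",
    filename="Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
    local_dir="/models",
)
print(gguf_path)  # path to reference from the llama-swap config
```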