The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation

ai.meta.com

We’re introducing Llama 4 Scout and Llama 4 Maverick, the first open-weight natively multimodal models with unprecedented context support and our first built using a mixture-of-experts (MoE) architecture.
Seems pretty underwhelming. They're comparing a 109B model to a 27B one and it's only kind of close. I know it's just 17B active parameters, but that's largely irrelevant for local users, who are more likely to be constrained by memory than by speed: every one of the 109B parameters still has to be resident in RAM/VRAM, even if only 17B are used per token.
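To put rough numbers on that, here is a minimal sketch of the weight-memory footprint, assuming the ~109B-total / 17B-active and ~27B parameter counts mentioned above and typical bytes-per-parameter for common precisions (fp16, int8, 4-bit). These are illustrative estimates only; real memory use also depends on KV cache, context length, and runtime overhead.

```python
def weight_memory_gb(total_params_b: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in GiB."""
    return total_params_b * 1e9 * bytes_per_param / 1024**3

# Example parameter counts (billions of total parameters).
# For an MoE model, memory scales with TOTAL params, not active params.
models = {
    "109B-total MoE (17B active)": 109,
    "27B dense": 27,
}

for name, params_b in models.items():
    for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("q4", 0.5)]:
        print(f"{name:30s} {label:5s} ~{weight_memory_gb(params_b, bpp):6.1f} GiB")
```

Even at 4-bit quantization the MoE's full weight set is on the order of ~50 GiB, versus roughly ~13 GiB for the dense 27B model, which is why total parameter count is the binding constraint for local inference regardless of how few experts fire per token.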