Am I the only one who is really impressed by Granite4 from IBM?
It is small but still really good
I have used the Micro variant primarily with Perplexica, and I must say it is really good for summarization and for answering follow-up questions. In my testing it has outclassed instruct models that are 2-3 times its size at these tasks.
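For reference, this is roughly how I call it for summarization, a minimal sketch using the Ollama Python client; the "granite4:micro" tag is an assumption, so substitute whatever tag your local pull actually uses.

```python
# Minimal summarization sketch with the Ollama Python client.
# Assumes Ollama is running locally and the Granite 4 Micro model
# has been pulled under the "granite4:micro" tag (adjust as needed).
import ollama

ARTICLE = """(paste the text you want summarized here)"""

response = ollama.chat(
    model="granite4:micro",  # assumed tag; replace with your local one
    messages=[
        {"role": "system", "content": "Summarize the user's text in a few sentences."},
        {"role": "user", "content": ARTICLE},
    ],
)

# Print the model's summary
print(response["message"]["content"])
```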
You are not alone. It blew my mind how good it is per billion parameters. As an example, I can't think of another model that will give you working code at 4B parameters or less. I haven't tried it on agentic tasks, but that would be interesting.