PSA: If the first Smart Search in Immich takes a while
Your ML model cache volume is being wiped on restart, so the model gets re-downloaded during the first search after the restart. Either point the cache at a path on your storage (a bind mount), or make sure you're not deleting the dynamic volume on restart (e.g. by running `docker compose down -v`, which removes named volumes).
In my case I changed this:
```yaml
immich-machine-learning:
  ...
  volumes:
    - model-cache:/cache
```
To this:
```yaml
immich-machine-learning:
  ...
  volumes:
    - ./cache:/cache
```
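For context, here's a fuller sketch of what that service block can look like with the bind mount. The image tag follows the standard Immich compose file; the host path is just an example, any persistent location on your storage works:

```yaml
# Sketch of the ML service with a bind-mounted model cache.
# ./cache is relative to the compose file's directory.
immich-machine-learning:
  image: ghcr.io/immich-app/immich-machine-learning:release
  volumes:
    - ./cache:/cache   # a bind mount survives `docker compose down -v`, unlike a named volume
  restart: always
```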
I no longer have to wait uncomfortably long when I'm trying to show off Smart Search to a friend, or just need a meme pronto.
That'll be all.
Oh, and if you haven't changed from the default ML model, please do. The results are phenomenal. The default is fine but really only needed on very low-power hardware. If you have a notebook/desktop-class CPU and/or a GPU with 6 GB+ of RAM, you should try a larger model. I use the best model they have and it consumes around 4 GB of VRAM.
Which model would you recommend? I just switched from ViT-B/32 to ViT-SO400M-16-SigLIP2-384__webli since it seemed to be the most popular.
I switched to the same model. It's absolutely spectacular. The only extra things I did were to increase the concurrent job count for Smart Search and to give the model access to my GPU, which sped up the initial scan by at least an order of magnitude.
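For anyone wanting to do the same: Immich ships hardware-acceleration override files for the ML container. A sketch assuming an NVIDIA GPU (the `extends` pattern and the `hwaccel.ml.yml` file come from Immich's setup; pick the variant matching your hardware, e.g. `openvino` for Intel):

```yaml
# Sketch: enabling GPU inference for the ML container (NVIDIA/CUDA variant).
# hwaccel.ml.yml ships alongside Immich's main docker-compose.yml.
immich-machine-learning:
  image: ghcr.io/immich-app/immich-machine-learning:release-cuda
  extends:
    file: hwaccel.ml.yml
    service: cuda
```

The Smart Search concurrency itself is changed in the admin UI under the job settings, if I recall correctly, not in the compose file.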
Is this something that would be recommended if self-hosting off a Synology 920+ NAS?
My NAS does have extra ram to spare because I upgraded it, and has NVME cache 🤗
That's a Celeron, right? I'd try a better AI model. Check this page for the list. You could try the heaviest one. It'll take a long time to process your library, but inference afterwards is faster. I don't know how much faster; maybe it would be fast enough to be usable. If not, choose a lighter model. There are execution times in the table that I assume indicate how heavy the models are. Once you change a model, you have to let it rescan the library.