Hi, I currently have a spare GeForce 1060 lying around collecting dust. I'm planning
to use it with Ollama [https://ollama.com/] for self-hosting my own AI model, or
maybe even for AI training. The problem is, none of my home lab devices have a
connection compatible with the GPU's PCIe interface. My current setup includes...
What @mierdabird@lemmy.dbzer0.com said, but the adapters aren't cheap. You're going to end up spending more than the 1060 is worth.
A used desktop to slap it in, one you turn on as needed, might make sense? Doubly so if you can find one with an RTX 3060, which would open up 32B models with TabbyAPI instead of Ollama. Some people configure them to wake-on-LAN and boot straight into an LLM server.
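The wake-on-LAN part is easy to script. Here's a minimal sketch using only the Python standard library; the MAC address is a placeholder for your own machine's:

```python
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target's MAC address repeated 16 times, over UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the GPU desktop
```

You'd also need to enable WoL in the desktop's BIOS and NIC settings for this to do anything.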
I haven't used any of these kits myself, but I've researched them some:
The Minisforum DEG1 looks like the most polished option, but you'd have to add an M.2-to-OCuLink adapter and cable.
ADT-Link makes a wide variety of kits as well, with varying PCIe generations and varying included equipment.
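Once whichever box is awake, checking that Ollama is serving and firing off a prompt is just two HTTP calls against its REST API. A rough sketch, again standard library only; the hostname and model name are placeholders for your own setup:

```python
import json
import time
import urllib.request

HOST = "http://gpu-box:11434"  # placeholder hostname; 11434 is Ollama's default port

# Poll the model-list endpoint until the server comes up after wake-on-LAN.
for _ in range(60):
    try:
        urllib.request.urlopen(f"{HOST}/api/tags", timeout=2)
        break
    except OSError:
        time.sleep(5)

# Run one non-streaming generation against a model you've already pulled.
req = urllib.request.Request(
    f"{HOST}/api/generate",
    data=json.dumps({"model": "llama3", "prompt": "hello", "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```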