September 18th, 2023: Nomic Vulkan launches supporting local LLM inference on NVIDIA and AMD GPUs.
Problem: In GPT4All, under Settings > Application Settings > Device, I've selected my AMD graphics card, but I'm seeing no improvement over CPU performance. In both cases (AMD graphics card or CPU), it crawls along at about 4-5 tokens per second. The interaction in the screenshot below took 174 seconds to generate the response.
Question: Do I have to use a specific model to benefit from this advancement? Do I need to install a different AMD driver? What steps can I take to troubleshoot this?
Sorry if this is an obvious question. Sometimes I feel like the answer is right in front of me, but I'm unsure which keywords from the documentation should jump out at me.
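In case it helps, here's a minimal sketch of how I've been trying to force GPU inference from the Python bindings and eyeball the speed. It assumes the gpt4all package's `device` parameter ("gpu"/"amd") from the Vulkan release; the model filename is just a placeholder for whatever model you have locally.

```python
# Rough check: load a model via the Python bindings, request the GPU
# explicitly, and time a short generation.
# Assumptions: gpt4all bindings with the Vulkan `device` parameter;
# "your-local-model.gguf" is a placeholder, not a real model name.
import time
from gpt4all import GPT4All

model = GPT4All("your-local-model.gguf", device="gpu")

prompt = "Explain what Vulkan is in one paragraph."
start = time.time()
reply = model.generate(prompt, max_tokens=200)
elapsed = time.time() - start

print(reply)
# Very rough speed estimate (word count, not a real token count)
print(f"~{len(reply.split()) / elapsed:.1f} words/sec over {elapsed:.0f}s")
```

Separately, running `vulkaninfo --summary` (from the vulkan-tools package) should show whether the AMD driver exposes a Vulkan device at all; if it doesn't, I'd guess GPT4All is silently falling back to CPU.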
Rats: according to their System Requirements (Linux) page, they don't support Fedora. Even if I were to switch to a supported distro, it looks like only a small set of graphics cards is supported, and unfortunately, mine is not one of them. 😢