September 18th, 2023: Nomic Vulkan launches supporting local LLM inference on NVIDIA and AMD GPUs.
Problem: In GPT4All, under Settings > Application Settings > Device, I've selected my AMD graphics card, but I'm seeing no improvement over CPU performance. In both cases (AMD graphics card or CPU), it crawls along at about 4-5 tokens per second. The interaction in the screenshot below took 174 seconds to generate the response.
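One way to tell whether inference is actually hitting the GPU is to watch the utilization counter the amdgpu driver exposes in sysfs while a prompt is generating. This is a sketch under a couple of assumptions: the GPU uses the amdgpu kernel driver, and it shows up as card0 (the index can differ on multi-GPU machines).

```shell
# Assumption: AMD GPU on the amdgpu driver, exposed as card0 (index may differ).
BUSY=/sys/class/drm/card0/device/gpu_busy_percent
if [ -r "$BUSY" ]; then
    # Sample GPU utilization once per second; run this while GPT4All is generating.
    # Values staying near 0 suggest inference is still on the CPU.
    watch -n 1 cat "$BUSY"
else
    echo "gpu_busy_percent not found; is the amdgpu driver in use?"
fi
```

If the number stays at or near zero during generation, the Vulkan backend likely isn't being used despite the Device setting.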
Question: Do I have to use a specific model to benefit from this advancement? Do I need to install a different AMD driver? What steps can I take to troubleshoot this?
Sorry if this is an obvious question. Sometimes I feel like the answer is right in front of me, but I'm unsure which keywords from the documentation should jump out at me.
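As a first troubleshooting step, it may be worth confirming that the Vulkan loader can see the AMD card at all, since Nomic Vulkan depends on it. A minimal check, assuming the `vulkaninfo` tool from the vulkan-tools package (the package name may vary by distro):

```shell
# Check whether the Vulkan loader enumerates the AMD GPU.
# 'vulkaninfo' ships in vulkan-tools on Ubuntu/Fedora (name may vary by distro).
if command -v vulkaninfo >/dev/null 2>&1; then
    # --summary prints one entry per physical device Vulkan can use
    vulkaninfo --summary | grep -i 'deviceName' || echo "No Vulkan devices found"
else
    echo "vulkaninfo not installed; try: sudo apt install vulkan-tools"
fi
```

If the GPU doesn't appear here, the problem is at the driver/ICD level rather than in GPT4All itself.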
Update: I thought I'd report back on my progress. I tried installing GPT4All in distrobox containers with several different images (Ubuntu 24.04, Ubuntu 22.04, and Fedora 41), but in every case the installation script fails due to missing dependencies, so I can't get to the installer GUI. Upon further investigation, it appears that GPT4All does not support Wayland. There is an open feature request from last year, but I'm not holding my breath. I did some cursory searches for workarounds but couldn't figure it out in the time I had available today.
[me@UbuntuTestingGpt4All ~]$ ./gpt4all-installer-linux.run
./gpt4all-installer-linux.run: error while loading shared libraries: libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory
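The immediate failure is just a missing shared library. On the Ubuntu-based containers, installing the X11/xcb libraries that Qt installers typically need might get past it; the package names below are my assumption based on the error message and Qt's usual runtime dependencies, not something from the GPT4All docs:

```shell
# Assumed Debian/Ubuntu package names for the Qt installer's X11 dependencies.
# libxkbcommon-x11-0 provides the libxkbcommon-x11.so.0 named in the error.
sudo apt update
sudo apt install -y libxkbcommon-x11-0 libxcb-icccm4 libxcb-image0 \
    libxcb-keysyms1 libxcb-render-util0 libxcb-xinerama0
```

On the Fedora container the equivalent package is named differently (likely libxkbcommon-x11 via dnf).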
I wonder if I would have the same issue if I tried this while running an X session on the host machine. I'll post another update if I test this scenario.
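Before testing a full X session, one possible shortcut (untested) is forcing Qt's X11 backend so the installer runs under XWayland instead of trying native Wayland. `QT_QPA_PLATFORM` is Qt's standard environment variable for selecting a platform plugin:

```shell
# Untested workaround: force the Qt installer to use the xcb (X11) platform
# plugin, which most Wayland desktops can host through XWayland.
QT_QPA_PLATFORM=xcb ./gpt4all-installer-linux.run
```

This only helps if the shared-library dependencies are satisfied first; it addresses the Wayland issue, not the missing libxkbcommon-x11.so.0.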