Guide to installing SDXL locally on Fedora with an AMD system...?
  • ROCm is basically AMD's answer to CUDA. Just (as usual) more open, less polished, and harder to use. Using something called HIP, CUDA applications can be translated to work with ROCm instead (and therefore run on AMD cards without a complete rewrite of the app); there is a small sketch of what that looks like in practice at the end of this comment.

    AFAIK they started working on it 6 or 7 years ago as the replacement for OpenCL. Not sure why exactly, but OpenCL apparently wasn't getting enough traction (and I think Blender even recently dropped OpenCL support).

    After all this time, the HW support is still spotty (mostly only supporting the Radeon Pro cards, and still having no proper support for RDNA3 I think), and the SW support focuses mainly on Linux (and only three blessed distros, Ubuntu, RHEL and SUSE, get official packages, so it can be a pain to install anywhere else due to missing or conflicting dependencies).

    So ROCm basically does work, and keeps getting better, but nVidia clearly has a larger SW dev team that makes the CUDA experience much more polished and painless.
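
    To give a rough idea of what the HIP "translation" means in practice for something like SDXL: the big frameworks ship ROCm builds that keep the CUDA-flavoured API, so the same Python code runs on either vendor's cards. The snippet below is only a sketch, assuming a ROCm build of PyTorch is already installed; nothing in it is specific to any particular guide.

    ```python
    # Minimal sketch (assumes a ROCm build of PyTorch is installed).
    # The ROCm build keeps the torch.cuda API, so code written for
    # nVidia cards runs unchanged; HIP/ROCm does the work underneath.
    import torch

    print(torch.__version__)             # ROCm builds carry a "+rocmX.Y" suffix
    print(torch.cuda.is_available())     # True on a supported AMD GPU, despite the name

    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))   # e.g. some Radeon model

        # Exactly the code you would write for a CUDA card:
        x = torch.randn(1024, 1024, device="cuda")
        y = x @ x.t()
        print(y.shape, y.device)
    ```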

  • My experience of using AV1 so far.
  • My major discontent with AV1 has been how the encoder blurs some details completely out

    That's the main reason I haven't personally switched to AV1 yet either. (The second reason being that my laptop struggled with playback too much.) The last time I tested it was 2 years back or so, and only using libaom, so I'd definitely hope it is better by now. I was so hyped about Daala (and then AV1) all those years back, so it's a bit disappointing how it turned out to be amazing and "not good enough" at the same time. :)

    Losing detail seems to be a common problem with "young" encoders. HEVC had similar problems for quite some time; I remember many people preferred x264 for transparent encodings, because HEVC encoders tended to blur fine-grained textures even at high bitrates. It may still be true even today; I haven't really paid attention to the topic for the last few years.

    IIRC, it has to do mainly with perceptual optimizations: x264 was tweaked over many years to look good, even if it hurts objective metrics like PSNR or SSIM. On the other hand, new encoders are optimized for those metrics first, because that's how you know whether a change you made to the code helped or made things worse. The catch is that PSNR is essentially just log-scaled mean squared error, so smoothing away fine grain that the eye clearly notices can cost almost nothing by that measure (there's a toy snippet at the end of this comment showing the metrics in question).

    I suppose only once the encoder reaches maturity, and you know it preserves as much real detail as possible, can you go wild and start adding fake detail or allocating bits to areas more important to subjective quality. I'm sure some (many? most?) of such techniques are already supported and used by AV1 encoders, but getting the most out of them may still take some time.
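
    Just to illustrate the kind of objective metric I mean: a toy sketch using scikit-image to compare a reference frame against its encoded-and-decoded version. The file names are placeholders, not from any real test.

    ```python
    # Toy illustration of the objective metrics mentioned above: PSNR and SSIM
    # between a reference frame and its encoded/decoded version.
    # "ref.png" / "enc.png" are placeholder file names.
    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    ref = io.imread("ref.png")   # original frame, uint8, shape (H, W, 3)
    enc = io.imread("enc.png")   # same frame after encode/decode, same size

    # PSNR is log-scaled mean squared error, so blurring fine grain is
    # cheap by this measure even when the eye clearly notices it.
    print("PSNR:", peak_signal_noise_ratio(ref, enc, data_range=255))

    # SSIM compares local structure instead of raw per-pixel error.
    print("SSIM:", structural_similarity(ref, enc, channel_axis=-1, data_range=255))
    ```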
