Transcoding on Intel -- integrated (e.g. N100) vs. discrete (e.g. A310)
TL;DR: Does the Arc A310 have any important advantage over recent Intel low-power CPUs with integrated graphics (e.g. N100/N150/N350/N355) specifically for use with Jellyfin, in terms of the number of streams it can transcode simultaneously or something like that?
Even if they do differ, is it something I would notice in a household context (e.g. with probably never more than 4 users at a time), or would the discrete GPU just be overkill?
How often do you actually transcode? Most Jellyfin clients can decode almost all codecs natively. It might be worth checking whether you need to transcode frequently, let alone multiple streams at once, before worrying about how many streams different hardware can handle.
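If you'd rather check than guess, here's a minimal sketch that polls Jellyfin's /Sessions API and prints whether each active session is direct playing or transcoding. The server address and API key are placeholders, and the exact field names can shift a bit between Jellyfin versions, so treat it as a starting point:

```python
# Quick check of whether current Jellyfin sessions are actually transcoding.
# Assumes a Jellyfin API key (Dashboard -> API Keys) and that /Sessions
# returns a PlayState.PlayMethod field ("DirectPlay", "DirectStream" or
# "Transcode"); field names may vary slightly between Jellyfin versions.
import requests

JELLYFIN_URL = "http://192.168.1.10:8096"   # hypothetical server address
API_KEY = "your-api-key-here"               # placeholder

resp = requests.get(
    f"{JELLYFIN_URL}/Sessions",
    headers={"X-Emby-Token": API_KEY},
    timeout=5,
)
resp.raise_for_status()

for session in resp.json():
    item = session.get("NowPlayingItem")
    if not item:
        continue  # idle session, nothing playing
    method = session.get("PlayState", {}).get("PlayMethod", "unknown")
    print(f"{session.get('UserName', '?')}: {item.get('Name', '?')} -> {method}")
```

If that mostly prints "DirectPlay", the whole "how many streams can it transcode" question matters a lot less.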
To answer your question: the A310 and N100 appear to be pretty evenly matched when it comes to the maximum number of streams. Intel claims that all Arc hardware encoders can encode 4 AV1 streams at 4K60, but that actual performance may be limited by the amount of VRAM on the card. Since any iGPU has access to normal system RAM, which is probably a lot more than 4 GB, these iGPUs might even be capable of running more parallel streams.
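If you want to sanity-check parallel-stream capacity on whatever box you end up with, a rough probe along these lines works: launch several hardware transcodes at once and compare the wall time to the clip length. It assumes an ffmpeg build with QSV support; the file name and encoder choice are placeholders for your own setup.

```python
# Rough probe of how many simultaneous hardware transcodes a QSV device keeps
# up with: launch N parallel ffmpeg jobs and compare wall time to clip length.
# "sample.mkv" and the hevc_qsv encoder name are placeholders -- adjust for
# your ffmpeg build and hardware (e.g. av1_qsv on Arc / newer iGPUs).
import subprocess
import time

INPUT = "sample.mkv"        # hypothetical ~60 s test clip
STREAMS = 4                 # how many parallel transcodes to attempt

cmd = [
    "ffmpeg", "-hide_banner", "-loglevel", "error",
    "-hwaccel", "qsv",
    "-i", INPUT,
    "-c:v", "hevc_qsv", "-b:v", "8M",
    "-an", "-f", "null", "-",   # discard output, we only care about speed
]

start = time.monotonic()
procs = [subprocess.Popen(cmd) for _ in range(STREAMS)]
for p in procs:
    p.wait()
elapsed = time.monotonic() - start

# If elapsed stays well under the clip's duration, the GPU handled all
# STREAMS transcodes faster than real time.
print(f"{STREAMS} parallel transcodes finished in {elapsed:.1f} s")
```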
One thing you might want to consider: the A310 has significantly more compute power than the iGPUs in these processors. This matters if you ever decide to run a local ML model. For example, I back up all my pictures to Nextcloud on the same machine that runs Jellyfin, and I use the Recognize app to perform facial and object recognition. You can also run this model in CPU mode, though, and the performance is "good enough" on my i5-3470, so a dGPU might be overkill for this purpose. You could also run local LLMs, text-to-speech, speech-to-text, or similar models, should you care about that.
If I may add a third option, though: consider upgrading to a 5600G or something similar. It has more CPU power than an N350 (3x according to PassMark), and the iGPU probably has more than enough hardware acceleration (though software encoding is also very viable with that much CPU power). You wouldn't free up the AMD hardware this way, and the 5600G doesn't support AV1, which could be a dealbreaker I guess.
Thanks, that indeed answers my question!
If I were to decide I need compute, I could just put my AMD GPU back in. Even at 8 years old, my Vega 56 should still be way better than an A310 for that.
Or if I were going to get an Intel discrete GPU and had compute as a use case, I'd be talking about the B580 instead of the A310.

Edit: I just found out the Arc Pro B50 is a thing. I'd definitely be going for that instead of a B580.

(Or if I got really desperate, I'm pretty sure there'd be a way to let a parallel-compute-hungry service running on my Proxmox server remotely access the RX 9070 XT on my gaming PC, with enough fiddling around.)
According to this, "AMD also chose to reuse the 7nm Vega graphics engine instead of incorporating newer RDNA variants," which means it isn't any better for the purpose of transcoding than the discrete Vega I already have (except for using less power). Also, and more to the point, the Jellyfin Hardware Selection guide straight-up says "AMD is NOT recommended." Any AMD, whether integrated or discrete and no matter how new. And then it says it again in bold text! That's why I'd pretty much ruled out that option before I posted my question.
(In contrast, the same page says "if you do not need CUDA for other applications, it is highly recommended that you stick with Intel Graphics on Linux" and specifically recommends the Intel N100/12th-gen N-series in a couple of places.)
I appreciate the thinking outside the box, though!
Yeah... It was pretty late in my timezone when I replied, which I'll use as an excuse for not considering that. That would be a good solution.
I thought reducing power usage was the main goal, that's why I suggested this. Though once again, pretty decent chance this is a misunderstanding on my part.
I personally use AMD graphics in both a laptop and a desktop, and have never had any problems with decode or encode; I don't understand what the docs mean by "poor driver support".
What I will confess (and, once again, forgot to consider yesterday) is that Intel and Nvidia hardware encoders generally provide better quality at the same bitrate than AMD's*. I do believe software encoders beat all hardware encoders in this respect, which is why I never cared too much about the differences between hardware encoders. If I need good quality for the bitrate, I'll just use the CPU. That is less energy-efficient, though, so I guess having a good hardware encoder could be pretty relevant to you.
*I happen to have hardware from AMD, Intel and Nvidia, so I might do my own tests to see if this still holds true.
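For reference, the test I have in mind is roughly this: encode the same clip with each hardware encoder at a fixed bitrate and score the results against the source with ffmpeg's libvmaf filter. It's only a sketch; it assumes an ffmpeg build with libvmaf and the listed encoders compiled in, and "source.mkv" is a placeholder clip.

```python
# Same-bitrate quality comparison: encode one source clip with several
# encoders at a fixed bitrate, then score each result against the original
# using ffmpeg's libvmaf filter. Assumes ffmpeg was built with libvmaf and
# the listed encoders; drop any your box doesn't have.
import re
import subprocess

SOURCE = "source.mkv"   # placeholder test clip
BITRATE = "6M"
ENCODERS = ["hevc_qsv", "hevc_nvenc", "hevc_amf", "libx265"]

def vmaf(distorted: str, reference: str) -> float:
    """Run libvmaf and pull the aggregate score out of ffmpeg's log output."""
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", distorted, "-i", reference,
         "-lavfi", "libvmaf", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    match = re.search(r"VMAF score:\s*([\d.]+)", out.stderr)
    return float(match.group(1)) if match else float("nan")

for enc in ENCODERS:
    encoded = f"test_{enc}.mkv"
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-loglevel", "error", "-y",
         "-i", SOURCE, "-c:v", enc, "-b:v", BITRATE, "-an", encoded],
        check=True,
    )
    print(f"{enc}: VMAF {vmaf(encoded, SOURCE):.2f} at {BITRATE}")
```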
I usually transcode because I use Jellyfin from outside my network, which has a 30 Mbps uplink, and that is barely enough for 2K.
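Which roughly checks out if you run the numbers. With purely illustrative bitrates (not measurements from my library), 30 Mbps doesn't leave room for much:

```python
# Back-of-the-envelope check on a 30 Mbps uplink: how many simultaneous
# remote streams fit at a few typical transcode bitrates. The bitrates are
# illustrative assumptions, not measurements.
UPLINK_MBPS = 30
HEADROOM = 0.8  # leave ~20% for overhead and other traffic

typical_bitrates = {
    "1080p H.264 @ 8 Mbps": 8,
    "1440p HEVC @ 12 Mbps": 12,
    "4K HEVC @ 25 Mbps": 25,
}

for label, mbps in typical_bitrates.items():
    streams = int(UPLINK_MBPS * HEADROOM // mbps)
    print(f"{label}: ~{streams} concurrent stream(s)")
```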