ipify.org is yet another.
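For anyone scripting this, ipify exposes a plain HTTP endpoint. Here's a quick Python sketch; as documented on their site, https://api.ipify.org returns the address as plain text by default (or JSON with ?format=json):

```python
# Quick sketch: fetch your public IP from ipify.
import urllib.request

with urllib.request.urlopen("https://api.ipify.org") as resp:
    print(resp.read().decode())  # plain-text IP address
```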
First thing, back up your data in case something goes south.
With that out of the way, I have done this exact thing many times with both zfs and an array managed by mdadm. Depending on the filesystem on top of your raid array, there may be some additional commands you'll need to run to extend your partition to use all of the available space in the larger array. You do have to wait for the array to rebuild between each disk swap, so it might be quicker to just make a backup, remove the old drives, build a new array with the new drives, and copy the data back over. That being said, it is fun to hot swap all of those disks and not lose any data or have downtime, assuming everything goes smoothly.
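For the mdadm case, a minimal sketch of one swap cycle looks roughly like the below, assuming an array at /dev/md0 with ext4 on top. The device names are placeholders, so adjust for your system (and again, back up first):

```python
# Rough sketch of one mdadm disk-swap cycle (placeholders, not a turnkey script).
import subprocess

ARRAY = "/dev/md0"
OLD, NEW = "/dev/sdb1", "/dev/sdc1"  # hypothetical old/new array members

def run(*cmd, check=True):
    """Print and run a command, raising on failure unless check=False."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=check)

# 1. Fail and remove the old member, then add the larger replacement.
run("mdadm", "--manage", ARRAY, "--fail", OLD)
run("mdadm", "--manage", ARRAY, "--remove", OLD)
run("mdadm", "--manage", ARRAY, "--add", NEW)

# 2. Block until the rebuild finishes (--wait exits non-zero if there was
#    nothing to wait for, so don't treat that as fatal). Repeat steps 1-2
#    for each remaining disk before moving on.
run("mdadm", "--wait", ARRAY, check=False)

# 3. Once every member has been replaced, grow the array to the new capacity
#    and extend the filesystem to match.
run("mdadm", "--grow", ARRAY, "--size=max")
run("resize2fs", ARRAY)
```

With zfs the last step is different: typically you set autoexpand=on on the pool before the swaps, or run zpool online -e on each replaced disk afterwards.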
I logged in just to downvote.
Now for a relevant comment. I used to love those high uptime values as well, but I'll echo the security sentiments of others in this thread. On the other hand, as you said it's not public facing, so not as big a deal. I still think it's kinda cool!
I used Debian years ago, but switched to Ubuntu. After using that for many years, I got frustrated with Canonical forcing snaps and switched back to Debian a few months ago.
This is exactly what I am seeing. I just tried upping federation_worker_count in the postgres database. I saw someone in another thread mention trying that, so we’ll see.