I need a Kubernetes cluster with high availability, load balancing, and horizontal pod autoscaling, because that is something I want to learn. I don't care that it's just for my wife's home-made dog collar webshop.
Switched from a Raspberry Pi 3 to a second-hand x86 thin client (Lenovo ThinkCentre M920q) because Raspberry Pi 4s were not available at the time. It made me learn Proxmox and a bunch of other cool stuff my Pi couldn't handle.
I'm rooting for ARM / RISC-V to become more popular in desktop computing / servers though.
I've found that a Pi is good enough computationally, but not reliability-wise.
A lot of things like advanced light control go through my host, so any lockups or crashes are bad. My Pi held up for about 18 months before it began to play up. I've found a small NUC system has higher reliability for the same price and power usage.
See, I don't pay for the electric bill to keep my collection of old enterprise equipment running because I need the performance. I keep them running because I have no resistance to the power of blinkenlights.
So close. Started on a Raspberry Pi. Went for a cluster with Docker Swarm. Finished with a NAS and a 10-year-old gaming computer as a media center. (It was the electricity bill which made me stop the cluster.)
The only problem I've had with Raspberry Pi is that some apps want to write a lot of stuff to "disk", and the default "disk" on a Pi is a MicroSD card which dies if you keep writing things to it. Sure, you can always plug something into a USB slot, but that adds a bit of friction to the whole process.
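One common mitigation for the SD-card wear described above (just a sketch, assuming a Debian-based Pi OS where logs are the main write load; the sizes are arbitrary) is to keep the chattiest paths on tmpfs via /etc/fstab:

```
# /etc/fstab — keep disposable, write-heavy paths in RAM instead of on the SD card
tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,size=64m  0  0
tmpfs  /var/tmp  tmpfs  defaults,noatime,nosuid,size=32m  0  0
```

The trade-off is that logs vanish on reboot; tools like log2ram exist that hold logs in RAM and periodically sync them back to the card instead.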
Oh, also, I wish it were easy to power a whole bunch of Pi units. Each one needing its own wall wart is a bit annoying, and I've had iffy results using weaker, less steady power supplies with multiple ports intended for things like phones.
Yes, you can optimize a lot, especially with Linux. I did the same and even started to replace programs that did too much and were bloated with my own programs. To speed up the development I did it with AI and Cursor.
I've discovered that there are a lot of medium-tier software engineers who will immediately go straight to horizontal scaling (i.e., just throw hardware at it), while I've seen instances where very highly skilled engineers just write their code better, set things up on a bare-metal server, cache things, etc., and manage with just a single badass server.
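The "cache things" part of the comment above can be sketched in a few lines of Python using the standard-library `functools.lru_cache`; `expensive_lookup` is a made-up stand-in for a slow database query or API call, not anything from a real codebase:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=1024)
def expensive_lookup(key: str) -> str:
    # Stand-in for a slow backend call (DB query, remote API, ...).
    time.sleep(0.01)
    return key.upper()

expensive_lookup("foo")  # first call: pays the full cost
expensive_lookup("foo")  # repeat call: served from the in-process cache
print(expensive_lookup.cache_info())
```

On a single server, this kind of in-process caching is often all it takes to turn "we need more nodes" into "one box is plenty".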
I have been in for a couple months now, Proxmox cluster with two machines.
Self-built PC that was my daily driver for a while: RTX 3080 Ti, 32GB RAM, Ryzen 7 3700X. Runs the heavy stuff like a Mac VM, LLM stuff, game servers.
Rando open-box mini PC I picked up on a whim from Best Buy: Intel 300 (didn't even know these existed...) with an iGPU and 32GB of RAM. Hosts my DHCP/DNS, my main Traefik instance, and all the light services like Dozzle and such.
Works out nicely, as I crash the first one too often and the DHCP going down was unacceptable. I wish I'd got a slightly better CPU for the mini PC, but meh, maybe I can upgrade it later.
I spend all day at work exploring the inside of the k8s sausage factory so I'm inured to the horrors and can fix basically anything that breaks. The way k8s handles ingress and service discovery makes it absolutely worth it to me. The fact that I can create an HTTPProxy and have external-dns automagically expose it via DNS is really nice. I never have to worry about port conflicts, and I can upgrade my shit whenever with no (or minimal) downtime, which is nice for smart home stuff. Most of what I run tends to be singleton statefulsets or single-leader deployments managed with leases, and I only do horizontal for minimal HA, not at all for perf. If something gives me more trouble running in HA than it does in singleton mode then it's being run as a singleton.
k8s is a complex system with priorities that diverge from what is ideal for usage at home, but it can be really nice. There are certain things that just get their own VM (Home Assistant is a big one) because they don't containerize/k8serize well though.
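For anyone curious what the ingress flow described above looks like, here's a rough sketch, assuming Contour's HTTPProxy CRD with external-dns watching the cluster; `my-app` and `app.home.example.com` are made-up names, not from the original comment:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: my-app                      # hypothetical app name
  annotations:
    # external-dns can pick up the hostname and create the DNS record for you
    external-dns.alpha.kubernetes.io/hostname: app.home.example.com
spec:
  virtualhost:
    fqdn: app.home.example.com      # placeholder domain
  routes:
    - services:
        - name: my-app              # backing Service; no host port juggling needed
          port: 80
```

Every service gets its own hostname behind the shared ingress, which is what removes the port-conflict problem the commenter mentions.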
I ran lots of containers on a Pi 4 but recently purchased two cheap Chinese mini PCs with 16GB RAM and an SSD. They're so much faster and only a bit dearer than a Pi. I run Proxmox on both.
Absolutely nothing wrong with the Pi though. The Pi 4 lives on with a USB drive attached. I have NFS configured on it to back up my Proxmox VMs to it. It also hosts all the media for Jellyfin.
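For anyone wanting to replicate the NFS backup target, the Pi side is a single line in /etc/exports (a sketch; `/mnt/usb/backups` and the `192.168.1.0/24` subnet are placeholders for your own path and LAN):

```
# /etc/exports on the Pi — export the USB drive to the local network
/mnt/usb/backups  192.168.1.0/24(rw,sync,no_subtree_check)
```

After running `exportfs -ra`, the share can be added in the Proxmox UI under Datacenter -> Storage as an NFS storage with the backup content type, and vzdump jobs can then target it.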
Wait, you can host a website on a Raspberry Pi!?
But is it really cheaper than shared hosting, for instance? And even then, quality-wise, it cannot be that good, can it?
I'm actually just about to start up my server again on an RPi 4. It's been like 5 years since I've used it. Is DietPi still the best way to go about making a Plex media server / bare-bones desktop environment that I can access with NoMachine?
I swear NoMachine just broke my autoboot setup one day and I never got around to fixing it. What do you nerds think?
I'm not interested in video streaming, just hosting my music collection and audiobooks. I remember FTP being a pain for transferring music files from my phone.
Do you run Docker in a VM or on the host node? I'm running a lot of LXC at home on Proxmox but sometimes it'd be nice to run Docker stuff easily as well.
I bought a decade-old Z840 and it's great for VMs, Plex, the Arr stack, and a few other services, but it is so overkill with 2 GPUs. I think what I should've done was buy a couple of used desktops or laptops to expand my homelab as I needed.