7 websites, Jellyfin for 6 people, Nextcloud, CRM for work, email server for 3 domains, NAS, and probably some stuff I've forgotten on a $4 computer from a tiny thrift store in BFE Kansas. I'd love to upgrade, but I'm always just filled with joy whenever I think of that little guy just chugging along.
Maybe not shit, but exotic at the time, back in 2012.
The first Raspberry Pi, a Model B with 512 MB RAM, with an external 40 GB 3.5" HDD connected over USB 2.0.
It was running ARM Arch BTW.
Next, a cheap, second-hand mini desktop, an Asus Eee Box.
A 32-bit Intel Atom, something like an N270, with a max of 1 GB DDR2 RAM I think.
Real metal under the plastic shell.
Could even run without active cooling (I broke a fan connector).
I was for a while. Hosted a LOT of stuff on an i5-4690K overclocked to hell and back. It did its job great until I replaced it.
Now my servers don't lag anymore.
EDIT: CPU usage was almost always at max. I was just redlining that thing for ~3 years. Cooling was a beefy Noctua air cooler, so it stayed at ~60 °C. An absolute powerhouse.
4 gigs of RAM is enough to host plenty of single-purpose projects - your own backup server or VPN, for instance. It's only when you want to run many things simultaneously that things get slow.
I'm sure a lot of people's self-hosting journeys started on junk hardware... "try it out", followed by "oh this is cool", followed by "omg I could do this, that, and that", followed by dumping that hand-me-down garbage hardware for something new and shiny bought specifically for the server.
My unRAID journey was exactly this. I now have a 12-hot-swap-bay rack-mounted case with a multi-core Ryzen 9 and ECC RAM, but it started out as my 'old' PC with a few old, small HDDs.
I had an old Acer SFF desktop machine (circa 2009) with an AMD Athlon II X3 435 (roughly equivalent to the Intel Core i3-560) with a 95 W TDP, 4 GB of DDR2 RAM, and two 1 TB hard drives running in RAID 0 (both HDDs had over 30k hours on them by the time I put them in). The clunker consumed 50 W at idle. I planned on running it into the ground so I could finally send it off to a computer recycler without guilt.
I thought it was nearing death anyway, since the power button only worked if the computer was flipped upside down. I have no idea why; once turned right side up again, it would keep running normally.
The thing would not die. I used it as a dummy machine to run one-off scripts I wrote, a seedbox that seeded new Linux ISOs as they were released (genuinely - it was RAID 0, so I wouldn't have downloaded anything I'd mind losing), a Tor relay, and at one point a script that just endlessly downloaded Linux ISOs overnight to measure bandwidth over the Chinanet backbone.
It was a terrible machine by 2023, but I found I used it the most because it was my playground for all the dumb things I wouldn't subject my regular home production environment to. I finally recycled it last year, after 5 years of use, when it became apparent it wasn't going to die and far better USFF 1L Tiny PCs (i5-6500T CPUs) were going on eBay for $60. The power usage and wasted heat of an ancient 95 W TDP CPU just couldn't justify its continued operation.
You can do quite a bit with 4 GB of RAM. A lot of people use VPSes with 4 GB (or less) for web hosting, small database servers, backups, etc. Big providers like DigitalOcean tend to offer 1 GB of RAM in their lowest plans.
Your hardware ain't shit until it's a first-gen Core 2 Duo in a random Dell office PC with 2 GB of memory that you only use because it's a cheaper way to get x86 when you can't use your Raspberry Pi.
Also, the official specs lie most of the time: a machine may technically run fine with more memory than the stated maximum, especially on older hardware from when DIMM capacities were a lot lower than they can be now. It just won't be "supported".
3x 6th-gen Intel NUC i5s (2 cores each) with 32 GB RAM. Proxmox cluster with Ceph.
I just ignored the limitation and once tried a single 32 GB SO-DIMM (out of a laptop) and it worked fine, but I went back to 2x 16 GB DIMMs since the real limit was still the 2-core CPU. Lol.
I've been running that cluster for 7 or so years now, since I bought them new.
I suggest running off shit tier only, since three nodes gives you redundancy and enough performance. I've run entire proofs of concept for clients off them: dual domain controllers and a full RD Gateway/broker/session-host setup with FSLogix, etc., back when Microsoft had only just bought that tech. Meanwhile my home *arr stack just chugs along in Docker containers. Even my OPNsense router runs as a VM on them. Just get a proper managed switch and bring the internet in on a VLAN to the guest VM on a separate virtual NIC.
I started my self-hosting journey on a Dell all-in-one PC with 4 GB RAM, a 500 GB hard drive, and an Intel Pentium, running Proxmox, Nextcloud, and I think Home Assistant. I upgraded it eventually; now I'm on a build with a Ryzen 3600, 32 GB RAM, a 2 TB SSD, and 4x 4 TB HDDs.
7th-gen Intel, 96 GB of mismatched RAM, four used 10 TB HDDs, one 12 TB with a broken SATA connector that only works because it's sitting just right in a sled, a couple of 14 TBs, one M.2 and two SATA SSDs. It's running Unraid with two VMs (Plex and Home Assistant), one of which has corrupted itself three times. A 1080 and a 2070.
I can get several streams off it at once, but not while it's running a parity check, and it can't handle 4K transcoding.
It's not horrible, but I couldn't do what I do now with less :)
I'm self-hosting on a ThinkCentre with a 500 GB HDD, a 2-core AMD A6, and 8 GB RAM (LAN access only) that I got very cheap.
It could be better, but I'm going to buy a new computer for personal use first, and since I'm the only one in my family who uses the hosted services, upgrades will come later 😴
Aw yep, bought an old HP ProLiant something-something with 2 old-ass Intel Xeons and 64 GB RAM for practically nothing. Thing's been great. It's a bit loud, but it runs anything I throw at it.
Yep, mspencer dot net (what little of it is currently up, I suck at ops stuff) is 2012-vintage hardware, four boxes totaling 704 GB RAM, 8x10TB SAS disks, and a still-unused LTO-3 tape drive. I’ll upgrade further when I finally figure out how to make proper use of what I already have. Until then it’s all a fancy heated cat tree, more or less.
My home server runs on an old desktop PC bought at a discounter. But since we bought several identical ones, we have both parts to upgrade them (RAM!) and organ donors for everything else.
I used to self-host on a Core 2 Duo ThinkPad R60i. It had a broken fan, so I had to hide it in a storage room, otherwise it would wake people from their sleep at night with its weird noises. It was pretty damn slow. Even opening the Proxmox UI remotely took time. KrISS feed worked pretty well, though.
I have since upgraded to... well, nothing. The fan is KO now and the laptop won't boot. It's a shame, because not having access to Radicale is making my life more difficult than it should be. I use CalDAV from disroot.org, but it would be nice to share a calendar with my family too.
I'm hosting a MinIO cluster on my brother-in-law's old gaming computer, which he spent $5k on in 2012, and three five-year-old mini PCs with 1 TB external drives plugged into them. Works fine.
I met someone who was throwing out old memory modules - literally boxes full of DDR and DDR2 modules. I got quite excited, hoping to upgrade my server's memory. Yeah, DDR2 only goes up to 2 GiB per module. So I am stuck with 2x 2 GiB. But I'm only using 85% of that anyway, so it's fine.
The oldest hardware I'm still using is an Intel Core i5-6500 with 48 GB of RAM running our Palworld server. I have an upgrade in the pipeline to help with the lag, because the CPU is constantly stressed, but it can still run game servers.
All my stuff is running on a 6-year-old Synology DS918+, which has a Celeron J3455 (4 cores, 1.5 GHz) but has been upgraded to 16 GB RAM.
Funnily enough, my router is far more powerful - a Core i3-8100T - but I was picking from the ThinkCentre Tiny options and was paranoid about the performance needed for a 10 Gbit internet connection.
Kind of...
An AMD GX-420GI SoC (the quad-core APU with no L3 cache) in a thin client with 8 GB RAM, and an old 128 GB laptop SSD for storage.
Nextcloud is usable but not fast.
My NAS is on an embedded Xeon that's close to a decade old at this point, and one of my Proxmox boxes is on an Intel 6500T. I'm not really running anything on really low-spec machines anymore, though earlyish in the pandemic I was running BOINC with the OpenPandemics project on 4 Raspberry Pis.
It's not absolutely shit: it's a ThinkPad T440s with an i7, 8 gigs of RAM, and a completely broken trackpad, which I ordered to use as a PC when my desktop wasn't working in 2018. Started with a bare server OS, then quickly realized the value of virtualization and deployed Proxmox on it in 2019. It's been my modest little server ever since. But I realize it's now 10 years old. And it might be my server for another 5 years, or more if it can manage it.
In the host OS I tweaked some value to ensure the battery never charges over 80%. And while I don't know exactly how much electricity it consumes at idle, I believe it's not too much. Works great for what I want. The most significant issue is some error message (I can't remember the text) that would pop up, I think related to the NIC. I guess Linux and the NIC in this laptop have, or had, some kind of mutual misunderstanding.
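(For anyone wanting to do the same: on many ThinkPads the "some value" is probably the kernel's charge-threshold knob in sysfs. A minimal sketch, assuming the thinkpad_acpi driver and a battery named BAT0 - both of which may differ on your machine:)

```python
# Sketch: cap charging at 80% via the kernel's charge_control_end_threshold
# attribute (exposed by thinkpad_acpi on many ThinkPads). Needs root, and
# the battery name BAT0 is an assumption - check /sys/class/power_supply/.
from pathlib import Path

knob = Path("/sys/class/power_supply/BAT0/charge_control_end_threshold")

if knob.exists():
    knob.write_text("80")  # resets on reboot unless persisted (udev rule, TLP, etc.)
    print("charging capped at 80%")
else:
    print("no charge threshold knob found; this driver may not expose one")
```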
Running a bunch of services here on an i3 PC I built for my wife back in 2010. I've since upgraded the RAM to 16 GB, added as many hard drives as there are SATA ports on the mobo, re-bedded the heatsink, etc.
It's pretty much always run Debian, but all the services are in Docker these days, so the base distro doesn't matter as much as it used to.
I'd like to get a good backup solution going for it so I can actually use it for important data, but realistically I'm probably just going to replace it with a NAS at some point.
I'm still interested in self-hosting, and I actually tried getting into it a year or so ago. I bought a s***** desktop computer from Walmart and installed Windows Server 2020 on it to practice.
I thought I could use it to put some bullet points on my resume, and maybe get into self-hosting later with Nextcloud. I ended up not fully following through, because I felt like I needed to first buy new editions of the server-administration and network-infrastructure textbooks I had learned from a decade prior before I could continue with giving it an FQDN, setting it up as a primary DNS server (or pointing it at one), etc.
So it was only accessible on my LAN: I was afraid of making it remotely accessible before I knew I had good firewall rules and had set up the primary DNS server correctly, and I ultimately just never finished setting it up. The most I ever accomplished was getting it working as a file server for personal storage and creating local accounts with usernames and passwords for myself and my mom, whom I was living with at the time. It could authenticate access over our local Wi-Fi, but I never got further.
Plex server is running on my old Threadripper 1950X. Thing has been a champ. It's due for a rebuild since I've got newer hardware to cycle into it, but I've been dragging my heels. Not looking forward to it.
My first @home server was an old defective iMac G3, but it did the job (and then died for good).
A while back, I got a Raspberry Pi 3 and then a small thin client with some small AMD CPU. They (barely) got the job done.
I replaced them with an HP EliteDesk G2 micro with an i5-6500T. I don't know what to do with all the extra power.
Look for a processor for the same socket that supports more RAM, and make sure the motherboard can handle it - maybe you're lucky and it's not a limit of that architecture.
If that won't work, break up your self-hosting needs across multiple machines and add another second-hand or cheap machine to the pile.
I've worked on designing computer systems that handle tons of data and requests, and often the only reasonable solution is to break up the load and throw more machines at it. For example, when serving millions of requests on a website, you put a load balancer in front that assigns user sessions and their associated requests to multiple machines: the load balancer pretty much just routes requests by user session, the heavy processing is done by the machines behind it, and you can expand the whole thing by simply adding more machines.
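(To make that concrete, here's a minimal sketch of the routing idea with made-up backend addresses. Real load balancers like HAProxy or nginx handle this for you, but the core of session affinity is just a stable hash:)

```python
# Sketch: session-affinity routing - hash the session ID to pick a backend,
# so a given user's requests always land on the same machine.
import hashlib

BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]  # made up

def route(session_id: str) -> str:
    digest = hashlib.sha256(session_id.encode()).digest()
    return BACKENDS[int.from_bytes(digest[:8], "big") % len(BACKENDS)]

print(route("user-42"))  # same session, same backend, every time
```

(Plain modulo hashing reshuffles sessions whenever the backend list changes; consistent hashing avoids that, but the principle is the same.)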
In a self-hosting scenario, I suspect you'll have a lot of margin for expansion by splitting services across multiple hosts and using things like network-shared drives for shared data, before you ever have to fully upgrade a host machine because you hit that architecture's maximum memory.
Granted, if a single service needs more memory than you can put in any of your machines and its load can't be broken up to run as a cluster, then you're stuck getting a new machine. But even then, having split your services, you can buy a machine with a newer architecture that handles more memory but is still cheap (such as a cheap mini-PC) and move just the memory-heavy service to it, leaving the CPU-intensive services on the old but more powerful machine.
I moved from a Dell R710 with dual-socket Xeons to a rack-mount desktop case with a single Ryzen 5 5600G. I doubled the performance and halved the power consumption in one go. I do miss having iDRAC, though. I need a KVM-over-IP solution but haven't stomached the cost yet. For how often I need it, it's not a big issue.
Wow, it's been a long time since I had hardware that awful.
My old NAS was a Phenom II X4 from 2009, and I only retired it a year and a half ago when I upgraded my PC. I put 8 GB of RAM in it since it was a 64-bit processor (could've gone up to 32 GB I think, since it had 4 DDR3 slots). My NAS currently runs a Ryzen 1700, but I still have that old Phenom in the closet in case the Ryzen dies. I prefer the newer hardware because it's lower power.
That said, I once built a web server on an Arduino that even supported WebSockets (max 4 connections). That was more of a PoC than anything, though.
The oldest I've got is limited to 16 GB (excluding RPis). My main desktop is limited to 32 GB, which is annoying because I sometimes need more. But I have a home server with 128 GB of RAM that I can use when it's not doing other stuff. I once needed more than 128 GB (to run optimizations on a large ONNX model, IIRC), so I had to spin up an EC2 instance with 512 GB of RAM.