

Selfhosted
- Rough draft server/NAS is complete!
Just got all the hardware set up and working today, super stoked!
In the pic:
- Raspberry Pi 5
- Radxa Penta SATA hat for Pi
- 5x WD Blue 8TB HDD
- Noctua 140mm fan
- 12V -> 5V buck converter
- 12V (red), 5V (white), and GND (black) distribution blocks
I went with the Raspberry Pi to save some money and keep my power consumption low. I'm planning to use the NAS for streaming TV shows and movies (probably with Jellyfin), replacing my Google Photos account (probably with Immich), and maybe streaming music (not sure what I might use for that yet). The Pi is running Raspberry Pi Desktop OS, might switch to the server version. I've got all 5 drives set up and I've tested out streaming some stuff locally including some 4K movies, so far so good!
For those wondering, I added the 5V buck converter because some people online said the SATA hat doesn't do a great job of supplying power to the Pi if you're only providing 12V to the barrel jack, so I'm going to run a USB C cable to the Pi. Also using it to send 5V to the PWM pin on the fan. Might add some LEDs too, fuck it.
Next steps:
- Set up RAID 5
- 3D print an enclosure with panel mount connectors
Any tips/suggestions are welcome! Will post again once I get the enclosure set up.
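Since RAID 5 is on the next-steps list, here's a minimal mdadm sketch for a 5-drive array (device names are assumptions, check yours with lsblk first, and note the create step wipes the drives). Some folks would use ZFS or btrfs instead, but mdadm is the simple option on a Pi:

```
# Assumed device names -- verify with lsblk; this is destructive!
sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mkfs.ext4 /dev/md0                                            # format the new array
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf     # persist across reboots
sudo update-initramfs -u                                           # so the array assembles at boot
cat /proc/mdstat                                                   # watch the initial sync
```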
- Custom remote backup
Hi everyone,
I’m visiting some family in another area of the country soon, and have the opportunity to set up a little remote backup server.
Essentially I would like to set something up that I can SSH into and back up photos/videos/documents from my main server periodically, once a month or so. Ideally it would be off until I need to turn it on.
I’m looking for ideas on how to best approach this. What kind of hardware would you use in my shoes? I have a couple of spare Raspberry Pis I was thinking of using with an external drive. I was also considering something like those UGREEN NAS devices that have been popping up. I would ideally set it up and do a sync before I head there, and then just plug it in. Would Wake-on-LAN be advised for this?
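One low-effort pattern, sketched here with made-up names and paths: drive everything from the main server on a monthly cron/systemd timer. Note the Pi's onboard Ethernet doesn't support Wake-on-LAN, so if you want the box off between runs you'd need hardware whose NIC supports it (most NUC/UGREEN-type boxes do), plus a way to deliver the magic packet remotely (forwarded broadcast, or a tiny always-on device on that LAN):

```
# Runs on the main server once a month; MAC/hostnames/paths are placeholders.
wakeonlan AA:BB:CC:DD:EE:FF                      # only if the remote NIC supports WoL
sleep 120                                        # give the box time to boot
rsync -aHAX --delete /srv/photos/ backup-box:/mnt/backup/photos/
ssh backup-box 'sudo poweroff'                   # back to sleep until next month
```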
- KDE Plasma Bigscreen (Android TV alternative) is back from the dead
www.neowin.net KDE's Android TV alternative, Plasma Bigscreen, rises from the dead with a better UI
Another KDE project, Plasma Bigscreen, is back from the dead with a much better UI spread across the entire shell.
Not exactly self-hosted but I know many jellyfinners here would cherish this as well.
- Dedicated service user or not?
Hi all!
As of today, I am running my services with rootless podman pods and containers. Each functional stack gets its own dedicated user (the user cloud runs a pod with nextcloud-fpm, nginx, postgresql...) with user namespace mapping. Now, my thinking was that if an attacker can escape a container, they should at least be contained to a specific user.
Is it really meaningful? With the service users' homes set up in /var/lib, it makes a lot of small stuff annoying, and I wonder if the current setup is really worth it.
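For anyone reading along, the per-stack pattern described above looks roughly like this (a hedged sketch, names and paths are just examples):

```
# One dedicated user per stack, home under /var/lib; useradd also allocates a subuid/subgid
# range (check /etc/subuid) that rootless podman maps container root into.
sudo useradd --create-home --home-dir /var/lib/cloud cloud
sudo loginctl enable-linger cloud        # its pods keep running without a login session
# Get a real session for that user (plain `sudo -u` often trips over XDG_RUNTIME_DIR):
sudo machinectl shell cloud@
podman pod create --name cloud-pod
podman run -d --pod cloud-pod --name cloud-db -e POSTGRES_PASSWORD=changeme docker.io/library/postgres:16
podman run -d --pod cloud-pod --name cloud-app docker.io/library/nextcloud:fpm
```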
- Sonarr - How to troubleshoot fake downloads
Hi! In the last few months, the number of fake torrents automatically added to my downloads has become really annoying. I want to find out which source is the culprit so I can remove it... How can I find the source of an added torrent? Where did it come from? What line do I need to look for in the logs? Or what event?
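In case it helps: the quickest check is usually Activity > History in the Sonarr UI, which records which indexer each grab came from. If you'd rather dig through files, a hedged sketch (log paths depend on your install; this assumes a Docker config volume):

```
# Grep the logs for the release name of a known-fake grab to see which indexer supplied it:
grep -ri "Name.Of.The.Fake.Release" /path/to/sonarr/config/logs/
# Rotated log files (sonarr.0.txt, sonarr.1.txt, ...) live in the same folder.
```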
- Help me figure out the way to scale laterally?
I have a home lab I use for learning and to self host a couple of services for me and my extended family.
- Nextcloud instance with about 1TB
- Couple of websites
- Couple of game servers
I'm running off an R430 with twin E5-2620 v3s, 128GB of RAM, and spinning-rust storage.
When I deployed NC I did not think it through, and I stored all the data locally, which makes the instance too big to back up normally.
As a solution, I've split the NC software into its own LXC and a NAS into another, and I'm thinking about hosting a cheap NUC NAS to rsync the files to.
I would also like to distribute the load of my server into separate nodes so I can get more familiar with HA and possibly hyper converged infrastructure.
I would also like to have two nodes locally to be able to work on one without bringing down services.
Any advice / tips?
Should I skip the NAS and go straight into Ceph?
Would 3x NUCs with Intel i5s or i7s and 32GB of RAM be enough?
Would I be better off with 3x pizza box servers like R220s or DL20s?
Storage-wise, I'm trying to decide between an M.2-to-SATA adapter and a mixture of SSDs and spinning rust. Or would I be better off with SFF?
Otherwise I was considering a single 24 bay disk array with an LSI card in IT mode, but I'm inexperienced with those and I'm not sure about power usage / noise. (the rack does sit next to my workstation)
And yes you can put an LSI card on a NUC surprisingly (This looks like a VERY fun project) https://github.com/NKkrisz/HomeLab/blob/main/markdown%2FLenovo_M720Q_Setup.md
Plus, most likely I would not expand the storage past 5 or 10 TB on each node.
Additionally, I'm looking at cost per watt (the current server runs at 168W 90% of the time; those tiny NUCs look like they run about 25W or so, the SFF boxes 50-75W depending on what they have, and the shallow-depth servers also idle at 25-50W depending on storage and processor options).
I also have a 12U rack at home and I would very much like to keep things racked and neat. It seems a lot easier to rack the NUCs than it would be to do with SFF cases.
Obviously I'm OK with buying new hardware (I'll be selling the current one once I migrate); that's part of the "learning" experience.
Any advice or experience you can share would be highly appreciated.
Thanks /c/selfhosted
- Release 1.7.2 · LibreTranslate - A FOSS, self-hosted, offline capable Machine Translation API
github.com Release 1.7.2 · LibreTranslate/LibreTranslate
What's Changed add blitzw.in instance by @gigirassy in #770 🌏 i18n: Improve Catalan translation by @seicifarre in #777 Support for ISO 639-1 - 15924 codes by @pierotofy in #780 Update Dockerfile #...
Excerpts from the Changelog:
> ## What's Changed
> * add blitzw.in instance by @gigirassy in https://github.com/LibreTranslate/LibreTranslate/pull/770
> * 🌏 i18n: Improve Catalan translation by @seicifarre in https://github.com/LibreTranslate/LibreTranslate/pull/777
> * Support for ISO 639-1 - 15924 codes by @pierotofy in https://github.com/LibreTranslate/LibreTranslate/pull/780
> * Update Dockerfile #737 by @jpralves in https://github.com/LibreTranslate/LibreTranslate/pull/785
> * Update Requirements: argos-translate-files srt support by @theUnrealSamurai in https://github.com/LibreTranslate/LibreTranslate/pull/786
> * Add fingerprinting mechanism by @pierotofy in https://github.com/LibreTranslate/LibreTranslate/pull/787
> * Attack mode support by @pierotofy in https://github.com/LibreTranslate/LibreTranslate/pull/788
> * Mark Chinese as reviewed in the README's UI Languages list by @LTSlw in https://github.com/LibreTranslate/LibreTranslate/pull/789
> * Add 2 variations of Dockerfile by @warren-bank in https://github.com/LibreTranslate/LibreTranslate/pull/790
> * Added full Spanish translation (README.es.md) and link in main README by @KerySeverino in https://github.com/LibreTranslate/LibreTranslate/pull/793
> * Added full spanish translation (TRADEMARK.es.md) and small bug fixes. by @KerySeverino in https://github.com/LibreTranslate/LibreTranslate/pull/795
> * Added full spanish translation (CONTRIBUTING.es.md) and Synced latests. by @KerySeverino in https://github.com/LibreTranslate/LibreTranslate/pull/799
> * Synced latests. by @KerySeverino in https://github.com/LibreTranslate/LibreTranslate/pull/802
> * Add Kazakh locale to libretranslate/locales/kk/ by @AnmiTaliDev in https://github.com/LibreTranslate/LibreTranslate/pull/805
> * Support LT_PORT in healtcheck script by @DL6ER in https://github.com/LibreTranslate/LibreTranslate/pull/810
> * Add new option --frontend-language by @DL6ER in https://github.com/LibreTranslate/LibreTranslate/pull/811
> * Add --hide-api to hide the API request/response fields from the web interface by @DL6ER in https://github.com/LibreTranslate/LibreTranslate/pull/812
> * Add user-customizable <title> by @DL6ER in https://github.com/LibreTranslate/LibreTranslate/pull/814
> * Hide API-related extra buttons if --hide-api is specified by @DL6ER in https://github.com/LibreTranslate/LibreTranslate/pull/813
> * healthcheck.py: Fail on non 200 status by @736-c41-2c1-e464fc974 in https://github.com/LibreTranslate/LibreTranslate/pull/822
> * Improve swagger definitions by @pierotofy in https://github.com/LibreTranslate/LibreTranslate/pull/823
> * Link to docs, move readme information by @pierotofy in https://github.com/LibreTranslate/LibreTranslate/pull/825
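If you want to kick the tires on this release, the project publishes a Docker image; a hedged quick-start (check the repo README for the exact tags and flags):

```
# Run the web UI + API on port 5000 (language models download on first start):
docker run -d --name libretranslate -p 5000:5000 libretranslate/libretranslate
# Quick API smoke test once it's up:
curl -s -X POST http://localhost:5000/translate \
  -H 'Content-Type: application/json' \
  -d '{"q":"Hallo Welt","source":"de","target":"en"}'
```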
- Tape drive backups
Hey folks, being the family IT man, I've held onto all of my family's photos/videos over the last 20 years.
I've been pretty careless with the backups, and I know if I don't do anything it's only a matter of time before I lose them.
Although I've never used them, tape drives seem to be the best option, so I thought I'd ask here if anyone uses them for their homelab?
It might be overkill for a few GB of photos, but I'd also use the tape drives for data hoarding purposes, so it's a win-win in my book.
- Very large amounts of gaming gpus vs AI gpus
| GPU | VRAM | Price (€) | Bandwidth (TB/s) | TFLOP16 | €/GB | €/TB/s | €/TFLOP16 |
|---|---|---|---|---|---|---|---|
| NVIDIA H200 NVL | 141GB | 36284 | 4.89 | 1671 | 257 | 7423 | 21 |
| NVIDIA RTX PRO 6000 Blackwell | 96GB | 8450 | 1.79 | 126.0 | 88 | 4720 | 67 |
| NVIDIA RTX 5090 | 32GB | 2299 | 1.79 | 104.8 | 71 | 1284 | 22 |
| AMD RADEON 9070XT | 16GB | 665 | 0.6446 | 97.32 | 41 | 1031 | 7 |
| AMD RADEON 9070 | 16GB | 619 | 0.6446 | 72.25 | 38 | 960 | 8.5 |
| AMD RADEON 9060XT | 16GB | 382 | 0.3223 | 51.28 | 23 | 1186 | 7.45 |
This post is part "hear me out" and part asking for advice.
Looking at the table above, AI GPUs are a pure scam, and it would make much more sense (at least on paper) to use gaming GPUs instead, either through a frankenstein of PCIe switches or a high-bandwidth network.
So my question is whether somebody has built a similar setup and what their experience has been, what the expected overhead/performance hit is, and whether it can be made up for by having way more raw performance for the same price.
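For what it's worth, the common way people glue consumer cards together for inference is plain layer splitting rather than anything exotic; a hedged llama.cpp example (flags from memory, model path made up):

```
# Spread a big quantized model across all visible GPUs by splitting layers.
# Works over plain PCIe (no NVLink), at the cost of some inter-GPU transfer overhead.
#   -ngl 999            -> offload all layers
#   --split-mode layer  -> distribute whole layers across devices
#   --tensor-split      -> relative share per GPU (four equal cards here)
./llama-server -m ./models/llama-70b-q4_k_m.gguf -ngl 999 \
  --split-mode layer --tensor-split 1,1,1,1 --port 8080
```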
- How to use a domain I own to self-host services?
I'm not really sure how to ask this because my knowledge is pretty limited. Any basic answers or links will be much appreciated.
I have a number of self-hosted services on my home PC. I'd like to be able to access them safely over the public Internet. There are a couple of reasons for this: there is an online calendar scheduling service I would like to have access to my CalDAV/CardDAV setup, and I'd also like to set up Nextcloud, which seems to more or less require HTTPS. I am using HTTP connections secured through Tailscale at the moment.
I own a domain through an old Squarespace account that I would like to use. I currently have zero knowledge or understanding of how to route my self hosted services through the domain that I own, or even if that's the correct way to set it up. Is there a guide that explains step by step for beginners how to access my home setup through the domain that I own? Should I move the domain from Squarespace to another provider that is better equipped for this type of setup?
Is this a bad idea for someone without much experience in networking in general?
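The usual beginner path: create a DNS A record for a subdomain pointing at your public IP, forward ports 80/443 on your router to one box, and let a reverse proxy handle the HTTPS certificates for you. A hedged Caddy sketch (domain names, internal IPs, and ports are placeholders):

```
# Caddy fetches and renews Let's Encrypt certificates automatically.
cat > Caddyfile <<'EOF'
cloud.example.com {
    reverse_proxy 192.168.1.50:8080
}
cal.example.com {
    reverse_proxy 192.168.1.50:5232
}
EOF
docker run -d --name caddy -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" caddy:latest
```

Whether you move the domain off Squarespace mostly comes down to how easy its DNS editor is; any registrar that lets you add A records works.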
- 14th Gen iGPU encoding woes
Hi, I'm trying to pass the UHD 770 iGPU of an i5-14600K through to a Jellyfin container that's on OpenMediaVault. I keep thinking I have the correct firmware downloaded, but I guess that's not the case.
The GPU is fully passed through via Proxmox.
Hypervisor: 6.14.5-1-bpo12-pve
OpenMediaVault: 6.12.32+bpo-amd64
lspci | grep -i vga:
00:10.0 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
dmesg | grep -i drm:
[ 2.084954] i915 0000:00:10.0: [drm] Failed to find VBIOS tables (VBT)
[ 2.093499] i915 0000:00:10.0: [drm] *ERROR* DMC firmware has wrong CSS header length (1097158924 bytes)
[ 2.093502] i915 0000:00:10.0: [drm] Failed to parse DMC firmware i915/adls_dmc_ver2_01.bin (-EINVAL). Disabling runtime power management.
[ 3.998600] i915 0000:00:10.0: [drm] [ENCODER:240:DDI A/PHY A] failed to retrieve link info, disabling eDP
[ 4.001294] i915 0000:00:10.0: [drm] *ERROR* GT0: GuC firmware i915/tgl_guc_70.bin: size (2134KB) exceeds max supported size (2048KB)
[ 4.003303] i915 0000:00:10.0: [drm] GT0: GuC firmware i915/tgl_guc_70.1.1.bin: unexpected header size: 1841953 != 128
[ 4.003305] i915 0000:00:10.0: [drm] *ERROR* GT0: GuC firmware i915/tgl_guc_70.1.1.bin: fetch failed -EPROTO
[ 4.003307] i915 0000:00:10.0: [drm] GT0: GuC firmware(s) can be downloaded from https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/i915
[ 4.004270] i915 0000:00:10.0: [drm] GT0: GuC firmware i915/tgl_guc_70.1.1.bin version 0.0.0
[ 4.004331] i915 0000:00:10.0: [drm] *ERROR* GT0: GuC initialization failed -ENOEXEC
[ 4.004333] i915 0000:00:10.0: [drm] *ERROR* GT0: Enabling uc failed (-5)
[ 4.004334] i915 0000:00:10.0: [drm] *ERROR* GT0: Failed to initialize GPU, declaring it wedged!
I have adls_dmc_ver2_01.bin, tgl_guc_70.1.1.bin, and tgl_guc_70.bin all within /lib/firmware/i915/
This docker command returns the following:
docker run --rm \
  --device /dev/dri:/dev/dri \
  --entrypoint ffmpeg \
  ghcr.io/linuxserver/ffmpeg \
  -init_hw_device qsv=hw:/dev/dri/renderD128 \
  -hwaccel qsv -hwaccel_device hw -hwaccel_output_format qsv \
  -f lavfi -i testsrc=duration=3:size=1280x720:rate=30 \
  -vf 'format=nv12,hwupload=extra_hw_frames=64' \
  -c:v h264_qsv -f null -
[AVHWDeviceContext @ 0x56479ec06c00] libva: /usr/local/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
[AVHWDeviceContext @ 0x56479ec06c00] Failed to initialise VAAPI connection: 1 (operation failed).
Device creation failed: -5.
Failed to set value 'qsv=hw:/dev/dri/renderD128' for option 'init_hw_device': Input/output error
Error parsing global options: Input/output error
I'm at a loss and pulling my hair out.
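One hedged observation: "wrong CSS header length" and "unexpected header size" usually mean the .bin files themselves are mangled (e.g. an HTML error page saved under a .bin name), not that you picked the wrong firmware; adls_dmc + tgl_guc_70 are the expected blobs for Raptor Lake-S. A quick sanity check, assuming OMV's Debian base:

```
# Legit i915 firmware should show up as "data", not ASCII/HTML:
file /lib/firmware/i915/adls_dmc_ver2_01.bin /lib/firmware/i915/tgl_guc_70*.bin
# If anything looks wrong, reinstall the packaged firmware instead of hand-copied files
# (needs non-free-firmware in your apt sources):
apt install --reinstall firmware-misc-nonfree
update-initramfs -u && reboot
```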
- Reitti (v1.1.0) Update: Family mode, faster processing, colors!
github.com Releases · dedicatedcode/reitti
Contribute to dedicatedcode/reitti development by creating an account on GitHub.
After an intensive week of development, I'm proud to present Reitti v1.1.0 with a big list of improvements!
✨ What's New in This Update:
✓ Family & Friends Tracking: Now see multiple users on the same map - perfect for keeping tabs on your whole family or group adventures!
✓ Faster Processing: Experience significantly faster data crunching after importing new location data
✓ Redesigned Settings: Completely overhauled UI makes customization simpler and more intuitive
✓ Imperial Unit Support: Added miles and feet for our friends in the US and elsewhere
✓ Enhanced Maps: New color mode to personalize your viewing experience
✓ Google Timeline Import 2.0: Now supports legacy formats plus both iOS and Android variants
✓ OwnTracks Integration: Direct connection to your existing OwnTracks Recorder
✓ Docker Optimization: New arm64 images for efficient deployment
🔍 New to Reitti? Reitti is your ultimate privacy-focused location companion that:
📍 Builds smart maps of your travels
📊 Uncovers your movement patterns
🔐 Keeps all data securely on YOUR device
💙 Stays 100% free and open-source
🚦 Ready to Upgrade?
📲 Get the Latest Version
☕ Support Development on Ko-fi
Huge thanks to this amazing community for your suggestions and feedback!
- Home server advice
Hello /c/selfhosted!
I'm sorry if this is not the place to ask but I figured I'd give it a shot. Mods, feel free to delete if I should post elsewhere.
I'm currently contemplating building an actual home server. My problem is I have no idea what to prioritize in a server. My main concerns are probably power consumption and price; it doesn't really need to be a beast. I currently self-host a media center on my gaming rig which I'd like to move over, and I'd like to be able to host stuff like Immich and maybe some game servers from time to time.
I'm fairly confident in my building skills since I've built a fair share of gaming rigs over the years, but I don't really know what's optimal in a server setting. So I come to you to ask about this landscape.
I'm thinking a good amount of RAM and a fairly recent AMD processor on an unspecified motherboard. I do have an M.2 and an extra HDD lying around, and also an old GPU (GTX 960), but I don't know if the GPU matters. In any case, how would one go about reducing power consumption? My first idea was underclocking the CPU, even though I know AMD's recent CPUs should be pretty efficient. But are there any other, better solutions to bring down idle consumption?
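On the idle-consumption point, software tuning usually gets you further than underclocking; a hedged starting point people commonly suggest (on a Debian/Ubuntu-style install):

```
# See what's keeping the machine awake and apply the suggested tunables:
sudo apt install powertop
sudo powertop --auto-tune          # runtime PM, USB autosuspend, SATA link power management, etc.
sudo powertop --html=report.html   # detailed per-device report to dig into afterwards
```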
As stated, I'm pretty fresh on this. The closest I've gotten to a home server is a couple of RPis. Any information or tips are very welcome!
(Edits: typos)
- Question about traffic using Cloudflare tunnel
Let's say I set up some subdomains and then point them to my home server via a Cloudflare tunnel.
If I use one of those subdomains from my personal PC on the same network as my home server, to watch a movie for example, is all of that traffic going out to the internet and then back? Or does all the traffic stay internal once the connection has been made?
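With a Cloudflare Tunnel the traffic normally does leave your LAN and hairpin through Cloudflare's edge, because the subdomain resolves to Cloudflare IPs. If you want LAN clients to stay local, the usual trick is split-horizon DNS: have your internal resolver answer the subdomain with the server's LAN address (you then also need something local serving that hostname). A hedged dnsmasq/Pi-hole-style example with placeholder names and IPs:

```
# e.g. /etc/dnsmasq.d/10-local-overrides.conf (Pi-hole's "Local DNS records" does the same thing)
address=/jellyfin.example.com/192.168.1.50
# restart dnsmasq / pihole-FTL, then verify from a LAN client:
#   dig +short jellyfin.example.com   -> should print 192.168.1.50, not a Cloudflare IP
```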
- signald on a Pi?
I currently self-host a Matrix server including a matrix-to-signal bridge. For that, I need to run signald in a Docker container on an Intel box.
I would like to migrate to a full Raspberry Pi setup (I have purchased a dedicated Pi 5 for that), however the signald container image is Intel-only. Does somebody have a solution to run signald on a Pi? (Perhaps as a native application or so?)
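If no arm64 image turns up, one stopgap is running the amd64 image under emulation; it's slow but can be enough for a light bridge. A hedged sketch (image name from memory, substitute whatever you run on the Intel box):

```
# Register qemu binfmt handlers so the Pi can execute amd64 binaries inside containers:
docker run --privileged --rm tonistiigi/binfmt --install amd64
# Then force the platform when running the existing image:
docker run -d --platform linux/amd64 --name signald \
  -v ./signald:/signald signald/signald
```

Since signald is a Java application, building/running it natively on aarch64 may also be worth a try.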
- Anubis is awesome! Stopping (AI) crawlbots
Incoherent rant.
I've, once again, noticed Amazon and Anthropic absolutely hammering my Lemmy instance to the point of the lemmy-ui container crashing. Multiple IPs all over the US.
So I've decided to do some restructuring of how I run things. Ditched Fedora on my VPS in favour of Alpine, just to start with a clean slate. And started looking into different options on how to combat things better.
Behold, Anubis.
"Weighs the soul of incoming HTTP requests to stop AI crawlers"
From how I understand it, it works like a reverse proxy per each service. It took me a while to actually understand how it's supposed to integrate, but once I figured it out all bot activity instantly stopped. Not a single one got through yet.
My setup is basically just a home server -> tailscale tunnel (not funnel) -> VPS -> caddy reverse proxy, now with anubis integrated.
I'm not really sure why I'm posting this, but I hope at least one other goober trying to find a possible solution to these things finds this post.
Edit: Further elaboration for those who care, since I realized that might be important.
- You don't have to use caddy/nginx/whatever as your reverse proxy in the first place, it's just how my setup works.
- Anubis sits between Caddy and my local server, inside the Caddy reverse proxy docker compose stack. So when a request is made, Caddy redirects to Anubis from its Caddyfile, and Anubis decides whether or not to forward the request to the service or stop it in its tracks (rough sketch of the wiring after this list).
- There are some minor issues, like it requiring javascript enabled, which might get a bit annoying for NoScript/Librewolf/whatever users, but considering most crawlbots don't do js at all, I believe this is a great tradeoff.
- The most confusing part was the docs and understanding what it's supposed to do in the first place.
- There's an option to apply your own rules via json/yaml, but I haven't figured out how to do that properly in docker yet. As in, there's a main configuration file you can override, but there's apparently also a way to add additional bots to block in separate files in a subdirectory. I'm sure I'll figure that out eventually.
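For anyone wanting to replicate the chain, a rough sketch of the Anubis hop (image tag and env var names are from memory, double-check the Anubis docs; assumes the containers share a compose network):

```
# Anubis listens on one port and forwards "clean" traffic to the real app:
docker run -d --name anubis \
  -e BIND=:8923 \
  -e TARGET=http://lemmy-ui:1234 \
  ghcr.io/techarohq/anubis:latest
# The Caddyfile then points the site at Anubis instead of the app:
#   lemmy.example.com {
#       reverse_proxy anubis:8923
#   }
```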
Edit 2 for those who care: Well crap, turns out lemmy-ui crashing wasn't due to crawlbots, but something else entirely. I've just spent maybe 14 hours troubleshooting this thing, since after a couple of minutes of running, lemmy-ui container healthcheck would show "unhealthy" and my instance couldn't be accessed from anywhere (lemmy-ui, photon, jerboa, probably the api as well). After some digging, I've disabled anubis to check if that had anything to do with it, it didn't. But, I've also noticed my host ulimit -n was set to like 1000.... (I've been on the same install for years and swear an update must have changed it) After changing ulimit -n (nofile) and shm_size to 2G in docker compose, it hasn't crashed yet. fingerscrossed Boss, I'm tired and I want to get off Mr. Bones' wild ride. I'm very sorry for not being able to reply to you all, but it's been hectic.
Cheers and I really hope someone finds this as useful as I did.
- Calendar app
I'm looking for a selfhostable calendar web app that I could connect to my already running Baikal setup. I know nextcloud has a calendar, but I don't necessarily want to bother with a whole nextcloud installation. Anyone know a webapp for that?
- Pi-hole client filtering without DHCP?
Hi all - please tell me if I'm doing this wrong:
My 12yo spends all day on YouTube shorts. I want to block it, but can only block YouTube entirely. Blocking for everyone would upset my 15yo, so I need per-client domain filtering.
That was easy on Pi-hole. But my Raspberry Pi died and I heard praise for AdGuard Home, so now I run that as a Docker container.
- I can't figure out how to block YouTube for only some devices. Is that not possible with Adguard? Claude gives me complicated nonsense; you can easily do better.
I want to ditch Adguard and go back to Pihole. The caveat is that I must let Pihole run the DHCP server, in order to get correct per-client blocking. That's a pity, as I have a neat UniFi network set up.
- Can I get Pihole's per-client blocking without Pihole as DHCP?
I don't mind setting it all up in Pihole again (as yet another container) because I know it works (it's how I had it before the Raspberry died). But I would love to know if I am going about this the wrong way? Thank you!
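To the direct question: per-client blocking should work without Pi-hole doing DHCP, as long as (a) every device sends DNS queries to Pi-hole directly rather than through the router (a forwarding router makes every query look like it comes from one client), and (b) the kids' devices keep fixed addresses, which UniFi DHCP reservations handle fine. A hedged way to verify (command is from the v5-era CLI; paths/UI may differ on v6):

```
# In UniFi: reserve fixed IPs for the kids' devices and hand out the Pi-hole IP as the LAN DNS server.
# Then confirm Pi-hole sees individual client IPs, not just the gateway:
pihole -t        # tail the query log; you want 192.168.x.y per device, not one router IP
# After that, Group Management in the web UI: add each device by IP/MAC, put it in a "kids"
# group, and attach a youtube.com blocklist/regex to that group only.
```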
- Self host sff project
I want to self-host a media server with Jellyfin, Sonarr, Radarr, Prowlarr, and qBittorrent, plus Immich and Nextcloud for file storage, on TrueNAS SCALE. What SFF build do you recommend that can accommodate 2 SSDs and 4x 4TB HDDs? Thank you!
- github.com GitHub - voidauth/voidauth: An Easy to Use and Self-Host Single Sign-On Provider 🐈⬛🔒
An Easy to Use and Self-Host Single Sign-On Provider 🐈⬛🔒 - voidauth/voidauth
A new open-source Single Sign-On (SSO) provider designed to simplify user and access management.
Features:
- 🙋♂️ User Management
- 🌐 OpenID Connect (OIDC) Provider
- 🔀 Proxy ForwardAuth Domains
- 📧 User Registration and Invitations
- 🔑 Passkey Support
- 🔐 Secure Password Reset with Email Verification
- 🎨 Custom Branding Options
Screenshot of the login portal:
- I self hosted a World of Warcraft server.
The title really says it all, but I'm self-hosting World of Warcraft: Wrath of the Lich King.
I’m just so shocked that it all works to be honest. It’s blowing my mind still.
I've always wanted to play Classic WoW, but I play so infrequently that it's not worth paying a subscription.
It never really occurred to me that I could just host my own server until ChatGPT recommended it while I was researching things to self-host.
It’s not public yet as my upload speeds are too slow.
I think I’m going to set the server up on my laptop so I can play wow while on my 14 hour flight coming up.
I’ve always played the game solo anyway due to my casualness.
- An Immich LXC came up on community scripts
community-scripts.github.io Proxmox VE Helper-Scripts
The official website for the Proxmox VE Helper-Scripts (Community) Repository. Featuring over 300+ scripts to help you manage your Proxmox VE environment.
Hi all,
For everyone waiting for an LXC to self-host Immich, the time has come. The LXC came up a month ago, sorry if it's a repost.
- What are the advantages/disadvantages of the different backup solutions?
Lots of people have mentioned rsync, restic, borgbackup, and others, but which would be best for backing up Nextcloud, Immich, and Radicale? Do all of them have a method of automatically backing up every X days/weeks? Why use one over the other, and what are the differences?
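All of them can be scheduled with cron or a systemd timer; the real differences are deduplication, encryption, and retention handling (restic/borg) versus plain file copies (rsync). A hedged restic sketch with placeholder paths:

```
# One-time repo setup (a local path here; SFTP/S3/B2 backends work too):
export RESTIC_REPOSITORY=/mnt/backup/restic RESTIC_PASSWORD_FILE=/root/.restic-pass
restic init
# Run something like this nightly from cron or a systemd timer
# (dump the Nextcloud/Immich databases to a file first so they're captured consistently):
restic backup /srv/nextcloud /srv/immich /srv/radicale
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```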
- What else should I self-host?
Today I set up my old laptop as a Debian server, hosting Immich (for photos), Nextcloud (for files), and Radicale (for calendar). It was surprisingly easy to do so after looking at the documentation and watching a couple videos online! Tomorrow I might try hosting something like Linkwarden or Karakeep.
What else should I self-host, aside from HA (I don’t have a smart home), Calibre (physical books are my jam), and Jellyfin (I don’t watch too many movies + don’t have a significant DVD/Blu-ray collection)?
I would like to keep my laptop confined to my local network since I don’t trust it to be secure enough against the internet.
edit: I forgot, I’m also hosting Tailscale so I can access my local network remotely!
- Laboratory notebook for my new research group
I won a new grant (yaay!) and am dipping my toes into the role of PI at my university. For now, I will have a PhD student, a postdoc, and a couple of master's students in my team.
In all my previous labs, everything was on paper and very poorly documented (...don't ask). I myself used to use LaTeX to keep a "neat" lab notebook. Obviously, it is not easy to collaborate and work with others that way.
Any researchers here who have experience hosting their own e-lab book in their labs?
- Which guides to trust for novice / normie getting started?
I'm a good chemist, but not IT advanced. Started using Debian out of the box last year on a miniPC. Running Jellyfin only on that local machine. Don't understand coding, but copy/paste terminal instructions from trusted sites. Have 1TB of music, films and documents. Want to move all photos from Google.
- Looking for DLNA Renderer software
Hello everyone, I am currently looking for a software solution to use my home server as a DLNA renderer which can output audio to my stereo amplifier.
The only solution I found was called gmrender-resurrect which seems like it would do exactly what I want but I was unable to get a docker container of it working. While I was able to find and connect to the DLNA Renderer, playback would fail every time and I was unable to get any information from the logs regarding why.
Do any of you know another solution to stream audio from my phone to my server (I am using Symfonium on the phone side)? Ideally it would be something I can deploy as a docker container on my server.
Thanks.
- DietPi is great!
dietpi.com Lightweight justice for your SBC!
Optimised | Simplified | For everyone - Backed by community, DietPi is a minimal OS image for SBCs - Raspberry Pi, Odroid, PINE64 etc. Install software optimised for you!
Do you guys know about DietPi? I use it on two Raspberry Pis, just installed it on a Wyse mini-PC, and I think it's really great:
Truly Optimised: DietPi is an extremely lightweight Debian OS, highly optimised for minimal CPU and RAM resource usage, ensuring your SBC always runs at its maximum potential.
Simple interface: DietPi programs use lightweight Whiptail menus. Spend less time staring at the command line, more time enjoying your Pi.
DietPi-Software: Quickly and easily install popular software "ready to run" and optimised for your system. Only the software you need is installed.
DietPi-Config: Quickly and effortlessly customise your device's hardware and software settings for your needs, including network connection and localisation setup.
DietPi-Backup: Quickly and easily back up or restore your DietPi system.
Logging System Choices: You decide how much logging you need. Get a performance boost with DietPi-RAMlog, or use rsyslog and logrotate for log-critical servers.
DietPi-Services Control: Control which installed software has higher or lower priority levels: nice, affinity, policy scheduler and more.
DietPi-Update System: DietPi automatically checks for updates and informs you when they are available. Update instantly, without having to write a new image.
DietPi-Automation: Allows you to completely automate a DietPi installation with no user input, simply by configuring dietpi.txt before powering on.
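The automation bit is the killer feature for headless boxes; a hedged example of what pre-seeding looks like (key names from memory; the dietpi.txt shipped on the image documents every option):

```
# Set these in the existing dietpi.txt on the boot (FAT) partition before first power-on:
AUTO_SETUP_AUTOMATED=1
AUTO_SETUP_GLOBAL_PASSWORD=ChangeMe123
AUTO_SETUP_SSH_SERVER_INDEX=-2          # OpenSSH instead of the default Dropbear
AUTO_SETUP_INSTALL_SOFTWARE_ID=162      # example ID only; see the dietpi-software list
```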
- Trouble setting Let's Encrypt certificates for Pangolin
I've recently gotten into self hosting. I have a VPS and a domain name and decided to set up Pangolin as a reverse proxy to my local homelab.
During the installation options, I was asked to provide an email address for "generating Let's Encrypt certificates". I don't have a clue what role my email address plays in this, nor what email I should provide for the setup, so I just gave one of my personal email addresses. Everything worked fine and the service was completely set up on the VPS.
However, logging into the dashboard, I was informed by my browser that the certificate of the website is self signed and visiting the page may be dangerous. Although I was later able to access the panel with https enabled, I felt this setup is not okay and decided I would need to fix it.
Unfortunately I have no idea how certificate issuing works. I tried to search for a solution online and read the docs for Pangolin and Traefik as well as rewatch the tutorial through which I set up Pangolin, but either they tend to skip explaining the email thing or go too much into detail without even explaining where to start. I also checked my inbox to see if the CA pinged me or something but to no avail.
I feel like I'm missing something in my setup which was apparent to everybody else. I would really appreciate if someone could help me ELI5 what the root cause of this 'email' problem is and how to fix it. I am willing to set up the service all over again or edit the config files if needed but I just need to know what to do.
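Short version, hedged: the email is only used by Let's Encrypt for expiry and problem notices, it doesn't affect issuance at all. A self-signed cert from Traefik almost always means the ACME challenge failed, usually because the domain doesn't resolve to the VPS yet or port 80/443 isn't reachable from the outside. A rough way to check (domain and container name are placeholders):

```
# 1. Does the domain actually point at the VPS?
dig +short pangolin.example.com          # should print the VPS's public IP
# 2. Is anything answering on port 80 from outside? (404 is fine, timeouts are not)
curl -I http://pangolin.example.com/.well-known/acme-challenge/test
# 3. What does Traefik say about ACME?
docker logs traefik 2>&1 | grep -i acme
```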
- [Recommendation Request] Selfhosted Fileserver on OpenWRT for LAN?
cross-posted from: https://slrpnk.net/post/24568506
> Hi!
>
> I'm supplying a small camp I'm participating in with Internet/Wifi, so I built an x86 OpenWRT router with an LTE modem... it took forever, but now it's working. (The camp is quite outback for open wifi routers.) So now I thought: what if we could share files for... anything, easily, via the router, without setting up SAMBA on their phones or whatever.
>
> So I thought of services like Sharedrop, or drop.lol, or litterbox.moe, or pastebin, or whatever. And that it would be super convenient to fileshare without the Internet.
>
> There are a lot of self-hosted options available, but which ones run on that 8GB OpenWRT router I set up? (Should be easy - that's a powerhouse of writeable drive space for a router.)
>
> So: what's the best idea here? I can set up an http server, but I guess an ftp server would work as well. Although it would be perfect if it worked with phones and ad-hoc filesharing (download and upload, preferably with QR-code generation).
>
> I know stuff like magic wormhole or localsend or warp, but all of those are a bit of a hassle for noobs to set up (i.e. opening a firewall, which you shouldn't do if you don't know what you're doing). That's why I was thinking: hosted on the router.
>
> You got any ideas what I can run on my potato of a server / beefcake of a router?
- selfh.st Self-Host Weekly (11 July 2025)
Self-hosted news, updates, launches, and content for the week ending Friday, July 11, 2025
- www.omgubuntu.co.uk The Way Ubuntu Boots on Raspberry Pi is Changing
Ubuntu will use an A/B boot approach on Raspberry Pi devices to improve reliability and boot failures, albeit with a minor drawback for users.
- 🔒 Setting Up Headscale & Tailscale on NixOS: A Zero-Trust Networking Guide for ❄️ NixOS - YouTube
YouTube Video
Click to view this content.
Cross-posted from: https://programming.dev/post/33674513
Any general suggestions when getting started with headscale?
- From Docker with Ansible to k3s: I don't get it...
Hey! I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pi for a while now and it's working great, but I want to learn MOAR and I need help...
Recently, I've been considering migrating to bare metal K3S for a few reasons:
- To learn and actually practice K8S.
- To have redundancy and to try HA.
- My RPis are all already running MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
- Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!
Here is my problem: I don't understand how things are supposed to be done. All the examples I find feel wrong. More specifically:
- Am I really supposed to have a collection of small yaml files for everything, that I use with `kubectl apply -f`?? It feels wrong and way too "by hand"! Is there a more scripted way to do it? Should I stay with everything in Ansible?
- I see little to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?
- Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and fiddle with SSL certs just to get Rancher and its dashboard?!
I feel that having a K3S + Traefik + Longhorn + Rancher on MicroOS should be straightforward, but it's really not.
It's very much a noob question, but I really want to understand what I am doing wrong. I'm really looking for advice and especially configuration examples that I could try to copy, use and modify!
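On the first bullet, yes, raw YAML applied with kubectl is the baseline, but nobody hand-applies dozens of files: you keep them in one folder (or a kustomization) in git and apply the folder, or let Ansible/Flux do exactly that. A hedged minimal example of what a "docker-compose service" looks like as manifests (names are placeholders; the navidrome image/port are the upstream defaults as I remember them):

```
mkdir -p manifests && cat > manifests/navidrome.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata: { name: navidrome }
spec:
  replicas: 1
  selector: { matchLabels: { app: navidrome } }
  template:
    metadata: { labels: { app: navidrome } }
    spec:
      containers:
        - name: navidrome
          image: deluan/navidrome:latest
          ports: [ { containerPort: 4533 } ]
---
apiVersion: v1
kind: Service
metadata: { name: navidrome }
spec:
  selector: { app: navidrome }
  ports: [ { port: 4533 } ]
EOF
kubectl apply -f manifests/   # applies the whole folder, re-runnable like `docker compose up`
```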
Thanks in advance,
Cheers!
- WhisperX — Automated Transcripts w/ Timestamps and Speaker Tagging
github.com GitHub - m-bain/whisperX: WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization) - m-bain/whisperX
I think a lot of people have heard of OpenAI’s local-friendly Whisper model, but I don’t see enough self-hosters talking about WhisperX, so I’ll hop on the soapbox:
Whisper is extremely good when you have lots of audio with one person talking, but fails hard in a conversational setting with people talking over each other. It’s also hard to sync up transcripts with the original audio.
Enter WhisperX: WhisperX is an improved whisper implementation that automatically tags who is talking, and tags each line of speech with a timestamp.
I’ve found it great for DMing TTRPGs — simply record your session with a conference mic, run a transcript with WhisperX, and pass the output to a long-context LLM for easy session summaries. It’s a great way to avoid slowing down the game by taking notes on minor events and NPCs.
I’ve also used it in a hacky script pipeline to bulk download podcast episodes with yt-dlp, create searchable transcripts, and scrub ads by having an LLM sniff out timestamps to cut with ffmpeg.
Privacy-friendly, modest hardware requirements, and good at what it does. WhisperX, apply directly to the forehead.
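For anyone wanting to try it, the CLI is close to a drop-in for plain whisper; a hedged example (flags from memory; diarization needs a free Hugging Face token for the pyannote models):

```
pip install whisperx
# Transcribe + align + tag speakers; writes .srt/.vtt/.json transcripts to the output dir
whisperx session.mp3 --model large-v2 --diarize --hf_token hf_xxx \
  --compute_type int8 --output_dir transcripts/
```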
- Got my first script kiddy
Nice big old port scan. Brand new server too. Just a few days old so there is nothing to find. Don't worry I contacted AWS. Stay safe out there.
- How big is your media library?
Background: I've been writing a new media server like Jellyfin or Plex, and I'm thinking about releasing it as an OSS project. It's working really well for me already, so I've started polishing up the install process, writing getting started docs, stuff like that.
I'm interested in how other folks have set up their media libraries. Especially the technical details around how files are encoded and organized.
My media library currently has about 1,100 movies and just shy of 200 TV shows. I've encoded everything as high quality AV1 video with Opus audio, in a WebM container. Subtitles and chapters are in a separate WebVTT file alongside the video. The whole thing is currently about 9TB. With few exceptions, I sourced everything directly from Blu-ray or DVD using MakeMKV. It's organized pretty close to how Jellyfin wants it.
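For the curious, roughly the kind of ffmpeg invocation that produces that layout (illustrative settings, not necessarily the exact recipe used here):

```
# AV1 video + Opus audio in WebM:
ffmpeg -i source.mkv \
  -map 0:v:0 -c:v libsvtav1 -preset 5 -crf 24 \
  -map 0:a:0 -c:a libopus -b:a 192k \
  output.webm
# Text-based subtitles go to a WebVTT sidecar instead of the container:
ffmpeg -i source.mkv -map 0:s:0 output.en.vtt
```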
What about you?
- Torrent for books
Looking for book torrents - anything really. I've come across a number of sites from other forums - not sure if they work or are safe to use: https://annas-archive.org/ https://x1337x.cc/ Anyone know any more?
- [CLOSED] Podman, Peertube, AMD VAAPI
i’m starting to think it’s the debian base of this container image. it may just be too out of date for my GPU.
i think i'm giving up on this for now.
thanks all!
----------------------------
hey all!
for the life of me, i cannot get VAAPI hardware accelerated encoding to work. i always get this error:
Error: ffmpeg exited with code 234: Device creation failed: -22.
Failed to set value '/dev/dri/renderD128' for option 'vaapi_device': Invalid argument
Error parsing global options: Invalid argument`
at ChildProcess.<anonymous> (/app/node_modules/fluent-ffmpeg/lib/processor.js:180:22)
at ChildProcess.emit (node:events:524:28)
at ChildProcess._handle.onexit (node:internal/child_process:293:12)
- AMD Radeon RX 9060 XT
- the peertube vaapi transcoding plugin is installed
- i have mesa-va-drivers and mesa-libgallium installed from bookworm backports.
- the container is rootful.
- /dev/dri is mapped
- the render group id matches between host and container.
- SELinux is set to allow containers access to devices.
no joy.
vainfo
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.17.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/radeonsi_drv_video.so
libva info: Found init function __vaDriverInit_1_17
amdgpu: os_same_file_description couldn't determine if two DRM fds reference the same file description.
If they do, bad things may happen!
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.17 (libva 2.12.0)
vainfo: Driver version: Mesa Gallium driver 25.0.4-1~bpo12+1 for AMD Radeon Graphics (radeonsi, gfx1200, ACO, DRM 3.63, 6.15.4-1-default)
vainfo: Supported profile and entrypoints
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointEncSlice
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
      VAProfileAV1Profile0            : VAEntrypointVLD
      VAProfileAV1Profile0            : VAEntrypointEncSlice
      VAProfileNone                   : VAEntrypointVideoProc
i've also tried updating the packages from trixie and sid, and installing the firmware-linux-nonfree.
i've tried disabling SELinux. i've tried making the container permissive.
no change.
any help is appreciated! thank you!
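one way to split the problem in half: check whether VAAPI encoding works from inside any container at all, independent of PeerTube's bundled ffmpeg. a hedged test with the linuxserver ffmpeg image (if this succeeds, the host/podman/device side is fine and the suspect really is the PeerTube image; if it fails the same way, it's the device mapping or host driver):

```
podman run --rm --device /dev/dri:/dev/dri \
  --entrypoint ffmpeg ghcr.io/linuxserver/ffmpeg \
  -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_hw_device va \
  -f lavfi -i testsrc=duration=3:size=1280x720:rate=30 \
  -vf 'format=nv12,hwupload' -c:v h264_vaapi -f null -
```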
> i’m starting to think it’s the debian base of this container image. it may just be too out of date for my GPU.