Posts 2 · Comments 1,641 · Joined 2 yr. ago

  • Maybe Ukraine couldn't retake the areas occupied by Russia, but they could deliver a Pyrrhic blow to the Kremlin.

    They have delivered that Pyrrhic victory.
    Russia thought they could take Kyiv (Ukraine?) in 3 days.
    The fact that Ukraine has resisted so hard, has redefined the modern battlefield, has conducted huge deep strikes...
    Ukraine is winning.

    The reason Ukraine may not be "winning" is that the Russian war machine is huge. Like, really, really big.
    The reason that Ukraine is "winning" is that the Russian war machine is outdated and corrupt.

    The Western opinion of Russia has been devastated. Russia tested itself, and failed.
    Russia is only holding on because of its nukes.

  • A NAS as bare metal makes sense.
    It can then correctly interact with the raw disks.

    You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
    Let a storage device be a storage device, and let a hypervisor be a hypervisor.

  • especially once a service does fail or needs any amount of customization.

    A failed service gets killed and restarted. It should then work correctly.
    If it fails to recover after being killed, then it's not a service that's fully ready for containerisation.
    So, either build your recovery process to account for this... or fix it so it can recover.
    That's often why databases are run separately from the services: databases can recover from this, and the services stay stateless, so it doesn't matter how many you run or restart.

    As for customisation, if it isn't exposed via env vars then it can't be altered at runtime.
    If you need something beyond the env vars, then you use that container as a starting point and make your customisation part of your container build process via a Dockerfile (or equivalent), as in the rough sketch below.
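    A minimal example, with the base image, package, and config file as hypothetical stand-ins for whatever you actually need to change:

    ```dockerfile
    # Bake the customisation into your own image instead of patching the running container
    FROM nginx:1.27

    # Extra tooling that no env var exposes
    RUN apt-get update && \
        apt-get install -y --no-install-recommends curl && \
        rm -rf /var/lib/apt/lists/*

    # Replace a baked-in config file with your own
    COPY my-custom.conf /etc/nginx/conf.d/default.conf
    ```

    Build it with docker build -t my-nginx . and point your compose file (or run command) at my-nginx instead of the upstream image.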

    It's a bit like saying "chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel".
    It's using a chisel incorrectly.

  • I would always run proxmox to set up docker VMs.

    I found Talos Linux, a dedicated distro for Kubernetes, which aligned with my desire to learn k8s.
    It was great. I ran it bare metal on a 3-node cluster. I learned a lot, I got my project finished, and everything went fine.
    I will use Talos Linux again.
    However, next time I'm running proxmox with 2 VMs per node: 3 Talos control-plane VMs and 3 Talos worker VMs.
    I imagine running 6 servers with Talos is the way to go. Running them hyperconverged was a massive pain. Separating the control plane and the worker plane makes sense; it's the way k8s is designed.
    It wasn't the hardware that had issues, but various workloads. Being able to restart or wipe a control node or a worker node would've made things so much easier.

    Also, why wouldn't I run proxmox?
    The overhead is minimal, I get a nice overview, a nice UI, and I get snapshots and backups.

  • The remaster of Myst 1 is good, and the remaster of Riven is good.
    Myst 3-5 felt... thin. Like the games were about being 3D and the tech... not the puzzles.

    I feel a true successor to the Myst 1 & 2 games is Quern: Undying Thoughts.
    It felt like the original premise, but in a modern game engine.

    Another game that gave me the same hook as Myst is Blue Prince, a roguelite puzzle game that is amazing.

  • In that case, maybe look into proxmox and VMs.
    Then run docker inside a VM. Have multiple docker VMs for different environments (e.g. a VM for containers that should only use a VPN, another for media server stuff, another for experimenting... whatever).

    Learning proxmox (or another hypervisor) is well worthwhile, because the base installer sets things up to just work for virtualization. And VMs are great for learning to run services.
    Then you can spin up VMs to isolate environments, with the benefit of oversight and management tools as well as snapshots. Snapshots mean you can take a snapshot, tinker and break things, then roll back to a known good snapshot and try again.
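    On proxmox that workflow is just a couple of commands (or clicks in the UI); the VM ID 101 here is a hypothetical example:

    ```bash
    qm snapshot 101 before-tinkering    # take a snapshot before experimenting
    # ...tinker, break things...
    qm rollback 101 before-tinkering    # roll back to the known good state
    qm listsnapshot 101                 # list the snapshots a VM has
    ```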

    I use proxmox on any bare metal before I start setting up VMs for services. Even if it's just a single VM with the majority of resources allocated to it.

    Is proxmox overkill for running a server for some docker containers? Yes.
    Does it make things easier? IMO, yes. At least operationally safer/easier.

  • Imo, only services that require a VPN exit node should use a VPN exit node.

    https://github.com/qdm12/gluetun is a well-known VPN container that people use, and it works with ProtonVPN.

    I don't know anything about how to do this, but a cursory search for "gluetun qbittorrent docker" suggests that gluetun runs as its own service (publishing whatever ports you need), and any container that has to use the VPN exit node gets network_mode: "service:gluetun". A depends_on entry pointing at gluetun will ensure that any service that should use the VPN exit node will not start unless gluetun is running.
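    Something like this minimal compose sketch, assuming the linuxserver.io qBittorrent image and ProtonVPN over OpenVPN; the credentials and port are placeholders, and the exact gluetun variables for your provider are in its wiki:

    ```yaml
    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          - VPN_SERVICE_PROVIDER=protonvpn
          - OPENVPN_USER=placeholder_user        # placeholder credentials
          - OPENVPN_PASSWORD=placeholder_pass
        ports:
          - "8080:8080"                          # qBittorrent web UI, published via gluetun

      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent
        network_mode: "service:gluetun"          # all traffic leaves through the VPN container
        depends_on:
          - gluetun                              # start only after gluetun is up
    ```

    If gluetun isn't running, the containers sharing its network have no route out, which is the fail-closed behaviour you want for torrenting.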

    Then it's a matter of getting the data out of the qbittorrent container into whatever you are using as a media server.

  • Ah, gotcha.

    So... You generally have to pay a VPN company to get access to their VPN exit nodes, and "hide" among all the other traffic.
    There is nothing you can self-host to do that.

    ProtonVPN used to be a popular recommendation; however, they are slipping out of favour due to their behaviour over the last couple of years.
    If you are looking for a VPN for anonymity, be careful of "review" articles posted on blogs owned by dodgy VPN providers.
    I'm not sure who the "go to" VPN provider is these days.

    If you rent a VPS (virtual private server) in order to run your own VPN exit node, and the VPS provider gets a letter regarding illegal activity, then your VPS will be deleted.
    I don't know of a VPS provider that will protect customers' privacy WRT legal requests (maybe some exist, but they will be exceptionally expensive).

    So everyone pays a VPN provider that doesn't keep logs in order to hide amongst the herd.

    Making sure that your file-downloading setup uses the VPN instead of the default gateway for internet access is a huge field.
    So you need to describe exactly which software you want to use the VPN exit node, and how it's installed.
    The solution could be a host firewall, docker networking, isolated networks... I'm pretty sure there are many others.

  • You can't hide your public IP. It's public.

    I presume your servers sit on your home network, and it's a basic flat network. And you have a basic home router. And you forward a port on your router to your server that's running wireguard.
    Sound about right?

    You already use a VPN to access your homelab/home-servers.
    So the only ports you are forwarding (presumably) relate to wireguard. So the only accessible ports are secured sensibly (by wireguard, because that's what it is).

    So you are already doing everything right.
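    For reference, the server side of that setup is tiny; a sketch with placeholder keys and addresses, where 51820/udp is the single port forwarded by the router:

    ```ini
    # /etc/wireguard/wg0.conf on the home server
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820              # the one UDP port forwarded by the home router
    PrivateKey = <server-private-key>

    [Peer]
    # a roaming client (laptop/phone)
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32
    ```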

    If you want a fancier router/firewall, then OPNsense or OpenWrt are good options.
    But I wouldn't run everything through your server. Let your server serve, and use a router to do network things.
    If you really want to hyperconverge onto a single server like that, then I'd do it inside different VMs (probably running on a proxmox host). Have a VM running OPNsense that only does networking and routing, then VMs for the other services.
    You're directly coupling your home internet access to the proxmox host and that VM, though.
    Which is why I prefer a more embedded/dedicated router appliance (I'm a huge fan of MikroTik stuff, but my home network is TP-Link Omada, though I think I'll move to UniFi).

  • Software Gore @programming.dev

    Am I going mad, or is this an entirely hallucinated article?

    Memes @lemmy.ml

    let me sleep