
Anyone else running Lemmy with Kubernetes?

I just spun up Lemmy on my Kubernetes cluster with nginx-unprivileged and ingress-nginx. All is well so far! I'm thinking about posting the Kustomization manifests and continuing to maintain and publish OCI images for each Lemmy version release.
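
A rough, untested sketch of how that Kustomize layout might look is below; the resource file names and image tags are placeholders, not the actual manifests I'd publish:

```yaml
# kustomization.yaml (sketch only: resource file names and tags are placeholders)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: lemmy

resources:
  - lemmy-deployment.yaml      # backend API
  - lemmy-ui-deployment.yaml   # frontend
  - pictrs-statefulset.yaml    # media server
  - postgres-statefulset.yaml  # database
  - services.yaml
  - ingress.yaml

# pin image tags per Lemmy release
images:
  - name: dessalines/lemmy
    newTag: 0.17.4             # placeholder tag
  - name: dessalines/lemmy-ui
    newTag: 0.17.4             # placeholder tag
```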

20 comments
  • I'm currently running the instance I'm responding from on kubernetes. I published a helm chart, and others are working on them too. I feel that being able to quickly deploy a kubernetes instance will help a lot of smaller instances pop up, and it will eventually be a good way of handling larger instances once horizontal scaling is figured out.

    • Is there a place I can read more about the horizontal scaling issues lemmy has?

      • Saved this comment. It claims that the Lemmy frontend and backend are stateless and can be scaled arbitrarily, as can the web server. The media server (pict-rs) and the Postgres database are the limiting factors for scaling. I'm working on deploying Lemmy with external object storage to solve media storage scaling, and there are probably some database experts figuring out Postgres optimization and scaling as well. None of the instances are big enough to run into serious issues with vertical scaling yet, so this won't be a problem for a while.
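
        On the Kubernetes side, wiring pict-rs up to external object storage is mostly just a Secret plus some env vars on the container; a rough, untested sketch is below. The PICTRS__STORE__* variable names here are from memory, so verify them against the pict-rs docs for your version:

        ```yaml
        # Sketch only: the PICTRS__STORE__* names and the image tag are
        # assumptions, so check the pict-rs docs before using this.
        apiVersion: v1
        kind: Secret
        metadata:
          name: pictrs-s3
        stringData:
          PICTRS__STORE__ACCESS_KEY: "changeme"
          PICTRS__STORE__SECRET_KEY: "changeme"
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: pictrs
        spec:
          replicas: 1
          selector:
            matchLabels: { app: pictrs }
          template:
            metadata:
              labels: { app: pictrs }
            spec:
              containers:
                - name: pictrs
                  image: asonix/pictrs:0.4   # placeholder tag
                  ports:
                    - containerPort: 8080
                  env:
                    - name: PICTRS__STORE__TYPE
                      value: object_storage
                    - name: PICTRS__STORE__ENDPOINT
                      value: https://s3.example.com
                    - name: PICTRS__STORE__BUCKET_NAME
                      value: lemmy-media
                  envFrom:
                    - secretRef:
                        name: pictrs-s3
        ```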

      • I'm not sure there really are issues; I think it's just new ground, since most lemmy instances have been able to run on a single node due to their low populations. It seems most large public instances are just adding bigger servers to deal with the problem short term.

        From what I can tell (I am not an expert in this field), most of the architecture should spread horizontally without much issue. I haven't seen this done anywhere yet, but I could be missing the obvious.

        The lemmy backend api just takes HTTP requests (and, at present, websockets, but 0.18 is changing this to HTTP requests only), and it uses postgres as the backend storage. Using a kubernetes postgres operator to scale the database and then running multiple lemmy backend api instances (and frontend instances as needed) seems like it would work, or would require minimal work to get running.
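
        As a concrete (untested) sketch of what I mean: the backend would just be a Deployment with more than one replica, and the database could come from any Postgres operator. CloudNativePG is shown below purely as an example, and all names, tags, and sizes are placeholders:

        ```yaml
        # Horizontal scaling sketch; names, tags, and sizes are placeholders.
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: lemmy
        spec:
          replicas: 3                # several stateless backend API pods behind one Service
          selector:
            matchLabels: { app: lemmy }
          template:
            metadata:
              labels: { app: lemmy }
            spec:
              containers:
                - name: lemmy
                  image: dessalines/lemmy:0.17.4   # placeholder tag
                  ports:
                    - containerPort: 8536
        ---
        # Example only: a replicated Postgres cluster managed by the CloudNativePG operator
        apiVersion: postgresql.cnpg.io/v1
        kind: Cluster
        metadata:
          name: lemmy-db
        spec:
          instances: 3               # one primary plus two replicas
          storage:
            size: 20Gi
        ```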

    • Very neat! I also considered writing a helm chart with my close friend's amazing helm library. In the end I decided against it, since this is a pretty simple deployment as of today. Tomorrow I will clean up the Kustomize manifests and some CI with a non-federated config file and post it :)

    • I tested your helm chart and it just worked :)

  • I run mine in a microk8s cluster.

    I live in an RV and I'm using T-Mobile 5G internet, so I have to deal with double-NAT / CGNAT networking.

    To overcome this, I've rented a VPS with nginx on it and run ZeroTier on both machines for VPN-like reverse proxying. It works very, very well. I use this method for my other services too, like Plex.
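
    The VPS side is just plain nginx proxying to the cluster's ZeroTier address, roughly like the sketch below; the domain, certificate paths, and the 10.147.x.x address are placeholders rather than my actual setup:

    ```nginx
    # Sketch only: VPS nginx reverse-proxying over ZeroTier to the home cluster.
    server {
        listen 443 ssl;
        server_name lemmy.example.com;

        ssl_certificate     /etc/letsencrypt/live/lemmy.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/lemmy.example.com/privkey.pem;

        location / {
            # ZeroTier address of the machine running ingress-nginx at home
            proxy_pass http://10.147.17.10:80;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    ```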

  • Please do! You could even open an issue and/or PR on GitHub to propose Kubernetes as an alternate deployment model.

  • I am! @gabe565@lemmy.cook.gg and I worked on setting this up yesterday. He mentioned building a Helm chart for the whole shebang.

    • Yep I'm still working on a helm chart. Currently, each service is deployed with the bjw-s app-template helm chart, but I'd like to combine it all into a single chart.

      The hardest part was getting ingress-nginx to pass ActivityPub requests to the backend, but we settled on a hack that seems to work well. We had to add the following configuration snippet to the frontend's ingress annotations:

      ```yaml
      nginx.ingress.kubernetes.io/configuration-snippet: |
        if ($http_accept = "application/activity+json") {
          set $proxy_upstream_name "lemmy-lemmy-8536";
        }
        if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
          set $proxy_upstream_name "lemmy-lemmy-8536";
        }
        if ($request_method = POST) {
          set $proxy_upstream_name "lemmy-lemmy-8536";
        }
      ```

      The value of the $proxy_upstream_name variable is $NAMESPACE-$SERVICE-$PORT, so here that's the lemmy namespace, the lemmy service, and port 8536.
      I tested this pretty thoroughly and haven't been able to break it so far, but please let me know if anybody has a better solution!

  • No but I’d love to learn how

  • 👋 I'm not using Kustomize, just throwing Deployment manifests and such at the cluster manually. Works pretty nicely, though I had some trouble setting up the custom nginx config to proxy requests in. I ended up running a separate nginx instance and pointing the Ingress at that rather than at the Lemmy pods directly. Maybe there's a more elegant solution I'm missing?
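
    In case it's useful, a rough sketch of that intermediate nginx as a ConfigMap is below. It mirrors the Accept-header/POST routing from the ingress snippet above plus the usual backend paths; the service names and ports (lemmy on 8536, lemmy-ui on 1234) and the exact location list are assumptions that may need adjusting for your cluster:

    ```yaml
    # Sketch only: nginx.conf for an in-cluster proxy that the Ingress points at.
    # Service names, ports, and the location regex are assumptions to adjust.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: lemmy-proxy-conf
    data:
      nginx.conf: |
        events {}
        http {
          upstream lemmy    { server lemmy:8536; }
          upstream lemmy-ui { server lemmy-ui:1234; }

          server {
            listen 80;

            # ActivityPub requests and POSTs go to the backend, everything else to the UI
            set $proxpass "http://lemmy-ui";
            if ($http_accept = "application/activity+json") {
              set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
              set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
              set $proxpass "http://lemmy";
            }

            location / {
              proxy_pass $proxpass;
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }

            # API, media, feeds, and well-known endpoints always go to the backend
            location ~ ^/(api|pictrs|feeds|nodeinfo|\.well-known) {
              proxy_pass http://lemmy;
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
          }
        }
    ```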
