What are the benefits of not owning a car?
  • Or, instead of your crazy idea, we could update our cities so that cars aren't required, and I suspect people might just choose to save on car payments, insurance, registration fees, gas/electricity, the inconvenience of parking, the hours wasted on daily commutes, etc, etc...

  • PostgreSQL Optimizations
  • If you like, you can give me access to the Grafana dashboard so I can take a look and we can take it from there. It's going to be totally free of charge, of course, as I am quite interested in your problem: it's both a challenge for me and a way to help a fellow Lemmy user. The only thing I ask is that we report the results and solution back here so that others can benefit from the work.

    No problem. PM me an IP (v4 or v6) or an email address (disposable is fine) and I'll reply with a link to access Grafana with the above in the allow list.

  • PostgreSQL Optimizations
  • I guess I could just use rsync to periodically sync the RAM drive to disk, or just rely on backups to restore running state after a failure or a restart. On a mostly idle server with few users this could probably work, but I don't think I'm quite ready for such a risky setup. The server is still perfectly usable at 15% iowait; I was just hoping I could reduce it with mechanisms built into PostgreSQL. Appreciate the suggestion though.

    Edit: I just want to say it would be an awesome feature for Docker/Podman to offer an in-memory volume that syncs to disk, either periodically or on container termination, as an option.

  • PostgreSQL Optimizations
    • I never manually VACUUMed the DB; I just assumed it does it automatically at regular intervals. VACUUMing manually didn't seem to make any difference, and after a few minutes of running on various tables it gave me the following error: ERROR: could not resize shared memory segment "/PostgreSQL.1987530338" to 67128672 bytes: No space left on device. I'm not 100% sure where it ran out of space, but I'm assuming one of the configured buffers, since there was still plenty of space left on disk and in RAM. I didn't notice any difference in iowait while it was running or afterwards.
    • Yes, seeding is mostly inserts, but I see a roughly equal number of selects. I did increase shared_buffers and effective_cache_size with no effect.
    • https://ctxt.io/2/AABQciw3FA https://ctxt.io/2/AABQTprTEg https://ctxt.io/2/AABQKqOaEg

    I did install Prometheus with the PG exporter and Grafana. I'm not a DB expert and certainly not a PostgreSQL expert, but I don't see anything that would indicate an issue. Anything specific you can suggest that I should focus on? (A couple of starting queries are sketched after this comment.)

    Thanks for all the suggestions!
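
    For the monitoring question above, a couple of starting points can be pulled straight from PostgreSQL's statistics views rather than Grafana. This is only a rough sketch, assuming psql access to the Lemmy database; the views and columns are standard PostgreSQL, the things to look for are just suggestions:

        -- Which tables autovacuum last touched, and how many dead tuples remain
        SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup, n_live_tup
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10;

        -- Rough buffer cache hit ratio; a low percentage means reads are going to disk
        SELECT datname,
               round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
        FROM pg_stat_database
        WHERE datname = current_database();

    As an aside, the "could not resize shared memory segment ... No space left on device" error from the manual VACUUM is, in containerized setups, often the container's /dev/shm limit (64 MB by default in Docker) rather than disk or RAM, though that's a guess without seeing the compose file.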

  • PostgreSQL Optimizations
  • I'll try adjusting wal_buffers.

    I think I was hoping there's a magic setting that would allow PostgreSQL to operate more like Redis, which uses RAM for everything until it dumps it to disk at specific intervals.
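
    For reference, the settings that push PostgreSQL toward that "keep it in RAM, flush later" behaviour are mostly the ones already in the posted config (synchronous_commit, wal_writer_delay, checkpoint spacing). A hedged sketch of adjusting them via SQL follows; the values are illustrative rather than recommendations, and combined with fsync = off they trade crash safety for less disk traffic:

        -- Let commits return before WAL is flushed; the WAL writer flushes in the background
        ALTER SYSTEM SET synchronous_commit = off;
        ALTER SYSTEM SET wal_writer_delay = '800ms';

        -- Spread checkpoints out so dirty pages are written less often and more smoothly
        ALTER SYSTEM SET checkpoint_timeout = '15min';
        ALTER SYSTEM SET checkpoint_completion_target = 0.9;
        ALTER SYSTEM SET max_wal_size = '4GB';

        -- Reload to apply (shared_buffers and friends would still need a restart)
        SELECT pg_reload_conf();

    wal_buffers itself only needs to hold the WAL generated between flushes, so raising it much beyond the default is unlikely to move iowait on its own.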

  • Paid Servers?
  • I didn't read the story about how exactly he lost the JWT, but is it still as big of an issue now that 2FA has been introduced?

    I guess existing JWTs will still bypass 2FA, but I'm not super worried since my instance has 3 users.

  • Paid Servers?
  • He's talking about just hosting fees, though, and those could easily be covered for a few $/mo per user unless the instance becomes massive. That isn't likely, since most people hate subscriptions, and most people who are even aware of Lemmy are technical enough to host their own instance if they're willing to invest actual money into it.

  • Paid Servers?
  • I did the same thing for the same reason. Admin approval for everything and I'm the only admin. Basically a personal instance for me and my friends if they're too lazy to host but want to try Lemmy.

  • PostgreSQL Optimizations

    cross-posted from: https://lemmy.daqfx.com/post/24701

    > I'm hosting my own Lemmy instance and trying to figure out how to optimize PSQL to reduce disk IO at the expense of memory.
    >
    > I accept increased risk this introduces, but need to figure out parameters that will allow a server with a ton of RAM and reliable power to operate without constantly sitting with 20% iowait.
    >
    > Current settings:
    >
    > > # DB Version: 15
    > > # OS Type: linux
    > > # DB Type: web
    > > # Total Memory (RAM): 32 GB
    > > # CPUs num: 8
    > > # Data Storage: hdd
    > > max_connections = 200
    > > shared_buffers = 8GB
    > > effective_cache_size = 24GB
    > > maintenance_work_mem = 2GB
    > > checkpoint_completion_target = 0.9
    > > wal_buffers = 16MB
    > > default_statistics_target = 100
    > > random_page_cost = 4
    > > effective_io_concurrency = 2
    > > work_mem = 10485kB
    > > min_wal_size = 1GB
    > > max_wal_size = 4GB
    > > max_worker_processes = 8
    > > max_parallel_workers_per_gather = 4
    > > max_parallel_workers = 8
    > > max_parallel_maintenance_workers = 4
    > > fsync = off
    > > synchronous_commit = off
    > > wal_writer_delay = 800
    > > wal_buffers = 64MB
    >
    > Most load comes from LCS script seeding content and not actual users.

    Solution: My issue turned out to be really banal: Lemmy's PostgreSQL container was pointing at the default location for the config file (/var/lib/postgresql/data/postgresql.conf) and not at the location where I had actually mounted the custom config file (/etc/postgresql.conf). Everything is working as expected after I updated the docker-compose.yaml file to point PostgreSQL at the correct config file. Thanks @bahmanm@lemmy.ml for pointing me in the right direction!
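
    For anyone double-checking the same thing on their own instance, PostgreSQL will report which config file the running server actually loaded and where each setting came from. A minimal sketch, assuming psql access (shared_buffers is just an example setting):

        -- Path of the configuration file the server read at startup
        SHOW config_file;

        -- Value of a setting plus the file and line it was loaded from
        SELECT name, setting, sourcefile, sourceline
        FROM pg_settings
        WHERE name = 'shared_buffers';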

    daq @lemmy.daqfx.com
    Posts 1
    Comments 9