Some very popular communities on very popular servers I can't find at all (even though those servers are in my “federate” list). What am I missing?
Lemmy is not scaling well, even with 0.18, and a lot of federation activity is timing out or otherwise failing. There are a few open GitHub issues: https://github.com/LemmyNet/lemmy/issues/3203
Lemmy.ml, Lemmy.world, and Beehaw.org are often the ones that give the most trouble. Try some small servers and confirm it isn't failing with every server.
So far these logs don't even seem to point to where to look for the issue.
I agree; I don't see anything in that log that indicates what is going on.
I don't see any logging about database migrations, so maybe it is failing before that point in the logs. How did you install the server, with Docker? Is PostgreSQL running and working?
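If it helps, a quick way to check whether migrations ever ran is to query Diesel's bookkeeping table directly. This assumes Lemmy's stock Diesel setup; `__diesel_schema_migrations` is Diesel's default table name, not something Lemmy-specific:

```sql
-- Run inside the lemmy database (e.g. via psql).
-- __diesel_schema_migrations is Diesel's default migration-tracking table
-- (assumption: Lemmy has not renamed it). If it is missing or empty, the
-- server probably failed before migrations ever ran.
SELECT version, run_on
FROM __diesel_schema_migrations
ORDER BY run_on DESC
LIMIT 5;
```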
Where do people interested in talking about developing Lemmy go to communicate, other than GitHub?
I made the same observation: there was no history of discussion to be found.
!lemmycode@lemmy.ml - but it's mostly empty so far.
!lemmyperformance@lemmy.ml is about how the components of a server work together, notably the SQL issues that are a hot topic right now.
Lemmy developers seem to not "eat their own dogfood": they don't use Lemmy to discuss Lemmy. It also concerns me that Discord and Matrix are the kind of platforms that don't show up in Google Search, while Lemmy is meant as a Reddit alternative, where search engine hits on technical topics land in the discussions.
Lemmy runs fine when you start out, with only a few comments and posts in the PostgreSQL database. The real problems come in 0.18 when you start to join communities from other servers and get more and more data in the system. Then your performance needs start to climb rapidly.
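One way to watch that growth yourself is to track table sizes in PostgreSQL. This is a standard catalog query, nothing Lemmy-specific:

```sql
-- Ten largest tables by total size (heap + indexes + TOAST).
-- Worth re-running periodically as federation pulls in more data.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```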
A Lemmy server does not "backfill" the postings and comments in a community when your account is the very first on your local server to subscribe to it. Content only flows from the time of the first successful subscribe forward.
The code to back-fill does not currently exist, so it isn't a choice by the people running the server; it's a missing capability.
Query speed is Lemmy’s main performance bottleneck, so we really appreciate any help database experts can provide.
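For anyone who wants to dig in, the usual starting point is EXPLAIN on a slow statement. A sketch follows; the SELECT itself is illustrative, and the column names are an assumption about Lemmy's post table rather than one of its real hot paths:

```sql
-- EXPLAIN (ANALYZE, BUFFERS) executes the statement and reports the actual
-- plan, row counts, timings, and buffer usage.
-- The query below is only an example; id/name/published are assumed columns
-- of Lemmy's post table, not a known hot path.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, name, published
FROM post
ORDER BY published DESC
LIMIT 20;
```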
I have been pleading with Lemmy server operators to install the pg_stat_statements extension and share metrics from PostgreSQL: https://lemmy.ml/post/1361757 - a restart of the PostgreSQL server is required for the extension to be loaded. I suggest this be part of the 0.18 upgrade; a minimal setup sketch is below. Thank you.
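The sketch assumes the column names of PostgreSQL 13 and later (total_exec_time/mean_exec_time; older releases call them total_time/mean_time):

```sql
-- 1. In postgresql.conf, then restart PostgreSQL:
--      shared_preload_libraries = 'pg_stat_statements'
-- 2. Once, in the lemmy database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- 3. After some traffic, list the ten most expensive statements overall:
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query_start
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```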
I think lemmy.ml restarting the server helped the 'pending' subscribe problem, but it started coming back for me once the server had been running an hour or more post-upgrade. It's better, but I still have some subscriptions getting stuck.
Missing comments and postings are also not as glaring in the user interface as a 'pending' subscribe.
The primary reason, I suspect, is that it was buggy code. You would be reading a post, and the votes, even the title and body, would just change in front of you to those of the wrong post. The server wasn't keeping its index of clients correct, or something like that. It was a very uncommon way to build a webapp.
I've been testing for hours, and "Subscribe Pending" went away for the first hour or so after the restart of lemmy.ml - but now I'm getting them again. The underlying issue still seems to be there.
Someone mentioned it in the !lemmywishlist@lemmy.ml community: https://lemmy.ml/post/1485921