Posts: 0 · Comments: 90 · Joined: 2 yr. ago

  • There are myriad ways to make money off third-party apps that benefit both reddit and the apps. Spez is an absolute moron who has thrown a spanner in the works of both when he was sitting on a golden opportunity. I don't think he has any business sense at all.

  • For SD, pretty much any modern CPU will be sufficient -- SD runs primarily on the GPU. All the CPU really has to do is shunt data back and forth and encode/decode images. An i5-13400 will be more than sufficient -- arguably overkill, as SD will not make use of all the cores. Much of the time the CPU will just be waiting for the GPU to finish!

    32 GB of main RAM is more than sufficient. The main thing you will need is not system RAM, but GPU RAM: 8 GB of GDDR6 at minimum, ideally 12 GB. (6 GB is, practically speaking, the absolute minimum for any decent performance at reasonably high output resolution, but it will restrict you to certain workflows and workarounds, with somewhat reduced performance, and you will encounter occasional RAM-allocation issues in certain workflows.) For the best Stable Diffusion experience, focus your spending on the GPU, even if it means compromising on other components. (This won't help your gaming experience, but SD just needs GPU RAM plus raw number-crunching power on the GPU -- pretty much everything else is secondary.)

    Last time I checked, SD doesn't allow processing on multiple GPUs simultaneously unless you run multiple instances (possibly there are experimental forks out there that do this to an extent). So you can't (currently) take a single job and run it on 2x GPUs, but you can take two jobs and run each on a different GPU (i.e. running two instances of SD). Multi-GPU support is being worked on, though, so maybe by the time you save up for a beefier GPU, who knows...? But to be honest, go straight for the 4060 if you can, even if it means lowering the specs of your other components. Just make sure you have a decent, reliable PSU rated for whatever GPU + other components you have, with headroom.
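
As a rough summary of the VRAM tiers above -- a hypothetical helper, where the 6/8/12 GB cutoffs come from this comment but the function name and messages are made up:

```python
# Hypothetical helper summarising the VRAM tiers discussed above.
# The 6/8/12 GB cutoffs come from the comment; names/wording are invented.

def sd_vram_advice(vram_gb: float) -> str:
    """Rough Stable Diffusion guidance based on GPU VRAM in GB."""
    if vram_gb < 6:
        return "below practical minimum: expect allocation failures"
    if vram_gb < 8:
        return "bare minimum: restricted workflows and workarounds needed"
    if vram_gb < 12:
        return "good minimum: most workflows run comfortably"
    return "ideal: high resolutions and heavier workflows are fine"

print(sd_vram_advice(8))  # good minimum: most workflows run comfortably
```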

  • This needs to be fixed, IMO.

    It's not at all obvious to newcomers. If you signed up on a smaller server (as you're advised to do), it makes it look like there's not much going on on Lemmy. It also makes it harder to find active communities and discourages participation.

    So now everyone and their dog is building Lemmy community explorers. This functionality should be baked into Lemmy itself, and available on every instance, so you can just browse and search all communities (seeing the true community sizes) and simply click join and be done. No confusing redirection to other instances, or having to copy and paste weird snippets of text into search boxes in other tabs.

  • Funny how you say it's not a problem, then go on to describe the problem that needs to be dealt with. Dealing with scaling is a problem, and it's a problem that costs money.

    Posts like this: https://lemm.ee/post/58472 suggest it is a problem. The rise in traffic seen by Lemmy in the last few days is absolutely tiny compared to a site like reddit, and already instances are struggling to cope. The recent growth in user registrations represents only about 0.007% of reddit's active user base. (~60K new Lemmy users vs 861,000,000 active monthly reddit users). A site like reddit costs millions to run.

    There are 190+ Lemmy instances last time I checked, yet almost all the brunt of this load has been borne by a handful of servers, which see an inordinate amount of traffic while 100+ other servers sit around idle. Why should a handful of "lucky" servers have to pay all the hosting costs? What if a volunteer-run instance explodes to reddit-like levels of popularity? It will simply fold, unless the volunteer has serious money to throw at the problem.
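
For what it's worth, the ~0.007% figure above checks out -- a quick back-of-the-envelope calculation using the numbers from this comment:

```python
# Sanity-check of the percentage quoted above, using the comment's figures.
new_lemmy_users = 60_000             # ~60K new Lemmy registrations
reddit_monthly_active = 861_000_000  # claimed reddit monthly actives

share = new_lemmy_users / reddit_monthly_active * 100
print(f"{share:.4f}%")  # prints 0.0070%
```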

  • Active has a 48-hour cut-off, and the ranking function it uses seems to encourage the same few posts to stay at the top for 48 hours. It's basically the same ranking as "Hot", but using the timestamp of the last comment instead of the time of posting to decay its ranking over time.

    This means any comment activity whatsoever on a popular thread bumps it back up the rankings significantly, and I suspect leads to a kind of snowballing effect that keeps posts higher up. Ideally, it would use some metric based on user interactions over a time period to calculate a score of activity rather than solely the latest comment. In effect, it seems to act more like a "top from last 48 hours". (Although I would add I'm a newbie to Lemmy, so might not yet have an accurate picture of its behaviour).

    Lemmy seemed to get much livelier for me when I switched my default to Hot, but I wish there was a way to disable the auto-updates (I'd rather see new items only on browser refresh). Active sort feels pretty stale to me.
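
A minimal sketch of the difference described above -- this illustrates the idea only, not Lemmy's actual ranking code; the gravity constant and formula shape are assumptions:

```python
import math
import time

# Both sorts decay a score over time; "Hot" decays from the posting
# time, while "Active" (as described above) decays from the latest
# comment, so any new comment bumps the post back up the rankings.

def rank(score: int, event_time: float, now: float, gravity: float = 1.8) -> float:
    hours = max(0.0, (now - event_time) / 3600)
    return math.log(max(1, score)) / (hours + 2) ** gravity

def hot_rank(score: int, created: float, now: float) -> float:
    return rank(score, created, now)

def active_rank(score: int, created: float, last_comment: float, now: float) -> float:
    # decay from the most recent comment instead of the posting time
    return rank(score, last_comment or created, now)

now = time.time()
posted_40h_ago = now - 40 * 3600

# A fresh comment restores the post to near its original rank:
print(active_rank(100, posted_40h_ago, now, now) > hot_rank(100, posted_40h_ago, now))  # True
```

With this shape, a single comment on a 40-hour-old post resets its effective age to zero, which is the snowballing effect described above.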

  • The more time I spend on lemmy, the less I'm missing reddit.

    Lemmy feels fresher, more positive, and faster. It's a bit rough around the edges, but things will only improve, and there seems to be a large number of people willing to get involved and help out.

    Even if the current blackout amounted to nothing, at least Lemmy has had a boost in users and engagement. Lemmy and the fediverse will learn lessons, improve, fix bugs, and be here for the next time reddit fucks up -- and will gain even more users.

    I think I'll be staying regardless; reddit has been pissing me off for the last couple of years. It was mainly the lack of a viable alternative that kept me there.

    Sp*z has fucked up badly this time. And he will continue to fuck up.

  • Lots of traffic, lots of posts, lots of comments, ... That's going to need more storage, more bandwidth, more CPU power, higher running costs. The original instance hosting the community bears a higher load than the instances that duplicate it.

    Ideally, there would be a way to more evenly distribute this load across instances according to their resources, but from my (currently limited) knowledge, I don't think Lemmy/ActivityPub is really geared for that kind of distributed computing, and currently I don't believe that there's a way to move subs between instances to offload them (although I believe some people may be working on that).

    Perhaps the Lemmy back-end could use a distributed architecture for serving requests and storage, such that anyone could run a backend server to donate resources without necessarily hosting an instance.

    For example, I currently have access to a fairly powerful spare server. I'm reluctant to host a Lemmy instance on it as I can't guarantee its availability in the long term (so any communities/user accounts would be lost when it goes down), but while it's available I'd happily donate CPU/storage/bandwidth to a Lemmy cloud, if such a thing existed.

    There are pros and cons to this approach, but it might be worth considering as Lemmy grows in popularity.

  • I'd never heard of kbin at all until I actually signed up to a Lemmy instance.

    I'd heard Lemmy mentioned somewhere before (I've searched for reddit alternatives a few times in the past as I got increasingly annoyed by their pushiness towards the app), but only really took notice of it a few days before the blackout when I saw it mentioned many times on reddit.

  • Working for me (from UK).

    EDIT: seems they migrated to a new server earlier today to handle their increased load. Maybe a hiccup related to that, or your DNS resolver is still pointing to the old server (in which case it should eventually fix itself).