Selfhosted LLM (ChatGPT)
  • Check out the localllama community. Lots of info there.

    I use oobabooga + exllama.

    Things are a bit budget dependent. If you can afford an RTX 3090 off eBay, you can run some decent models (30B) at very good speed. I ended up with a 3090 + 4090. You can use system RAM with GGML, but it's slow (see the sketch at the end of this comment). A Mac M1 is not bad for this either.

    Where did you get the Reddit dataset?
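
    A rough illustration of the GPU/system-RAM trade-off above, not something from the thread: llama-cpp-python is one common way to run GGML files, and its `n_gpu_layers` parameter controls how much of the model sits in VRAM versus system RAM. The model path and layer count below are placeholders; pick a quant that fits your hardware.

    ```python
    # pip install llama-cpp-python  (built with cuBLAS for GPU offload)
    from llama_cpp import Llama

    # Placeholder path and layer count. With 24 GB of VRAM (e.g. a 3090),
    # a 30B 4-bit quant fits almost entirely on the GPU; any layers that
    # don't fit run from system RAM, which is the slow path.
    llm = Llama(
        model_path="./models/llama-30b.ggmlv3.q4_K_M.bin",  # placeholder file
        n_gpu_layers=60,  # lower this if you run out of VRAM; 0 = CPU only
        n_ctx=2048,       # context window
    )

    out = llm("Q: What is a GGML file? A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])
    ```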

  • supert @lemmy.fmhy.ml
