What have you been up to recently with your local LLMs?
  • I used it quite a lot at the start of the year, for software architecture and development. But the number of areas where it was useful was small, and running it locally (which I do for privacy reasons) is quite slow.

    I noticed that much of what was generated needed to be double-checked and was sometimes just wrong, so I've basically stopped using it.

    Now I'm hopeful for better code-generation models, and will spend the fall building a framework around a local model, to see if that helps guide the model's generation. One possible approach is sketched below.
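
    A minimal sketch of what such guiding could look like, assuming the llama-cpp-python bindings and a hypothetical model path; GBNF grammars are llama.cpp's built-in way to constrain what the model is allowed to emit:

    ```python
    # Guide generation by constraining output with a GBNF grammar.
    # Assumes llama-cpp-python is installed; the model path is hypothetical.
    from llama_cpp import Llama, LlamaGrammar

    llm = Llama(model_path="./models/codellama-7b.Q4_K_M.gguf", n_ctx=4096)

    # Tiny grammar: the model may only emit {"answer": "<string>"}.
    grammar = LlamaGrammar.from_string(r'''
    root   ::= "{" ws "\"answer\":" ws string ws "}"
    string ::= "\"" [^"]* "\""
    ws     ::= [ \t\n]*
    ''')

    out = llm(
        "Summarize the trade-offs of running LLMs locally:",
        grammar=grammar,  # sampling rejects tokens the grammar disallows
        max_tokens=256,
    )
    print(out["choices"][0]["text"])
    ```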

  • Faster hardware is a bad first solution to slow software
  • Hmm. I'd actually argue it's a good solution in some cases. We run multiple services where load is intermittent, services are short-lived, or the code is complex and hard to refactor. Just adding hardware resources can be a much cheaper solution than optimizing code.

  • llama.cpp for GPU only

    I’ve been using llama.cpp, gpt-llama and chatbot-ui for a while now, and I’m very happy with it. However, I’m now looking into a more stable setup using only the GPU. Is llama.cpp still a good candidate for that?
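
    For reference, llama.cpp can offload every layer to the GPU when built with GPU support (e.g. CUDA or Metal). A minimal sketch of a GPU-only configuration, again via the llama-cpp-python bindings and with a hypothetical model path:

    ```python
    # GPU-only inference with llama.cpp via llama-cpp-python.
    # Assumes a GPU-enabled build; the model path is hypothetical.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-13b.Q4_K_M.gguf",  # hypothetical GGUF file
        n_gpu_layers=-1,  # -1 offloads all layers to the GPU
        n_ctx=4096,
    )

    out = llm("Q: Why run models locally? A:", max_tokens=64)
    print(out["choices"][0]["text"])
    ```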

What OS do you use on your PC and why?
  • I've been running Debian stable on my work laptop, gaming PC and servers for years now. Can confirm it just works!

    Debian 12 upgrade coming up soon. Probably (maybe not) some effort to upgrade everything, and then back to smooth sailing. :)
