Posts: 0 · Comments: 228 · Joined: 8 mo. ago

  • X11 has a shitload of unwanted and unused features that your favorite X11 compositor is actively fighting against just to render your GUI.

    I implore you to pick up the X.Org source code and your favorite X11 shitshow's source code and realize why Wayland follows the same paradigms that Apple adopted in 2001 and Microsoft in 2006.

  • I don't think we should work with scum like DHH and vaxry just because some asshole lib might accuse us of purity tests

    If "not working with people who are maniacs who want you dead" is a purity test I'm dusting off my Inquisition book

  • Permanently Deleted

  • One of the absolute best uses for LLMs is generating quick summaries of massive amounts of data. It is pretty much the only use case where, if the model doesn't overflow and become incoherent immediately [1], it is extremely useful.

    But nooooo, this is luddite.ml, where saying anything good about AI gets you burnt at the stake.

    Some of y'all would've lit the fire under Jan Hus if you lived in the 15th century

    [1] This is more of a concern for local models with smaller parameter counts and running quantized. For premier models it's not really much of a concern.

  • Permanently Deleted

  • That is different: it's because you're interacting with token-based models, which split numbers into arbitrary chunks of digits. There has been new research on giving byte-level data to LLMs to solve this issue.

    That is a separate issue from LLMs' general weakness at numerical calculation.

    It would be best to couple an LLM to a tool-calling system for rudimentary numerical calculations. Right now the only way to do that yourself is to cook up a Python script with HF transformers and a finetuned model; I am not aware of any commercial model doing this. (And this is not what Microshit is doing)
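    To make the tool-calling idea concrete, here's a minimal sketch. The model call is stubbed out with a regex (a real setup would use HF transformers with a function-calling finetune), and all names and routing logic are illustrative, not any library's actual API:

```python
import ast
import operator
import re

# Toy calculator tool: evaluates basic arithmetic by walking the AST,
# instead of trusting the LLM to do the math in its weights.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def fake_llm(prompt):
    # Stub standing in for a finetuned model: a real model would emit a
    # structured tool call when it recognizes arithmetic in the prompt.
    m = re.search(r"\d[\d\s\.\+\-\*/\(\)]*", prompt)
    return f'CALL calc("{m.group().strip()}")' if m else "no tool needed"

def answer(prompt):
    reply = fake_llm(prompt)
    call = re.match(r'CALL calc\("(.+)"\)', reply)
    # Route the tool call to the calculator instead of the model's weights.
    return calc(call.group(1)) if call else reply

print(answer("What is 173 * 46?"))  # → 7958
```

    The point is the division of labor: the model only decides *that* a calculation is needed and emits a structured call; the deterministic tool produces the actual number.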

  • That's for quanting a model yourself. You can instead (read that as "should") download an already quantized model. You can find quantized models from the HuggingFace page of your model of choice. (Pro tip: quants by Bartowski, Unsloth and Mradermacher are high quality)

    And then you just run it.

    You can also use Kobold.cpp or OpenWebUI as friendly front ends for llama.cpp.

    Also, to answer your question, yes.
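    For orientation, the quant type is encoded in the GGUF filename (Q4_K_M, Q8_0, and so on; lower numbers mean smaller and lossier). A toy helper, with made-up filenames that follow the real naming convention, that picks a download by a preference order:

```python
# Illustrative only: the filenames below are invented, but mirror the
# GGUF naming style you'll see on Bartowski/Unsloth/Mradermacher pages.
PREFERRED = ["Q5_K_M", "Q4_K_M", "Q4_K_S", "Q3_K_M"]  # best-first tradeoff

def pick_quant(filenames, preferred=PREFERRED):
    """Return the first filename matching the preference order, else None."""
    for quant in preferred:
        for name in filenames:
            if quant in name:
                return name
    return None

files = [
    "Example-8B-Instruct.Q3_K_M.gguf",
    "Example-8B-Instruct.Q4_K_M.gguf",
    "Example-8B-Instruct.Q8_0.gguf",
]
print(pick_quant(files))  # → Example-8B-Instruct.Q4_K_M.gguf
```

    Nothing here downloads anything; it just shows how to read the quant suffixes so you grab one file instead of the whole repo.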