Here’s Why I Decided To Buy ‘InfoWars’
  • The article itself muddies the waters even more:

    Today we celebrate a new addition to the Global Tetrahedron LLC family of brands. And let me say, I really do see it as a family. Much like family members, our brands are abstract nodes of wealth, interchangeable assets for their patriarch to absorb and discard according to the opaque whims of the market. And just like family members, our brands regard one another with mutual suspicion and malice.

    All told, the decision to acquire InfoWars was an easy one for the Global Tetrahedron executive board.

    Founded in 1999 on the heels of the Satanic “panic” and growing steadily ever since, InfoWars has distinguished itself as an invaluable tool for brainwashing and controlling the masses. With a shrewd mix of delusional paranoia and dubious anti-aging nutrition hacks, they strive to make life both scarier and longer for everyone, a commendable goal. They are a true unicorn, capable of simultaneously inspiring public support for billionaires and stoking outrage at an inept federal state that can assassinate JFK but can’t even put a man on the Moon...

    It's a parody post, but also real, but also The Onion. It's a shining, thoroughbred Onion article, but also not.

    We need !sortoftheonion

  • "My biggest concern is staying out of war so I voted for the guy I think is Hitler."
  • You should submit this as a post. Or I might just repost it. It's insightful!

    I really dig the explanation of how the two parties have kind of worked for so long, in spite of everything, and how that balance has been disrupted now that the "cranks" moved to one party.

  • Bluesky Might End Up Defeating Twitter Once and for All
  • I'd posit the algorithm has turned it into a monster.

    Attention should be dictated more by chronological order and what others retweet, not by what some black box thinks will keep you glued to the screen, and it felt like more of the former in the old days. This is a subtle but very significant change.

  • Trump taps Rep. Matt Gaetz as attorney general
  • What do these chodes expect to happen after 2028?

    Uh, nothing bad? Just like after the screw-ups during COVID-19?

    America collectively has the attention span of a flea, and an even shorter memory. No one even cares about what happened in 2016-2020 anymore.

  • AMD Confirms Laying Off 4% Of Its Employees To Align Resources With "Largest Growth Opportunities"
  • The localllama/local LLM community ridicules AMD basically every single day.

    They have the hardware. They have 90% of the software. Then they waste it with absolutely nonsensical business decisions, as if they are actively trying to avoid the market.

    Two phone calls from Lisa Su (one to OEMs lifting VRAM restrictions, another to engineers yelling "someone fix these random bugs with flash attention and torchtune, now") would absolutely revolutionize the AI space, just to start... and apparently they couldn't care less. It's mind-boggling.

  • "My biggest concern is staying out of war so I voted for the guy I think is Hitler."
  • No, because Dems are stuck on a high horse and burned $1 billion campaigning like it's the 1950s. Fff, they could have won the election spending a tenth of that on bots and paying off influencers.

    We absolutely need money for a shameless 'oppositional' propaganda apparatus.

  • Young people are struggling to deal with their MAGA parents — again
  • I know every generation says it, but I really think there was a "peak" generation that grew up on the old web and learned critical thinking the hard way: the Internet is a lie.

    Those that never leave apps and their feeds? Not learning that lesson.

  • Bluesky adds 700,000 new users in a week / A ‘majority' of the new users are from the US, indicating that people are searching for a new platform as an alternative to X.
  • The Facebook/Mastodon format is much better for individuals, no? And Reddit/Lemmy for niches, as long as they're supplemented by a wiki or something.

    And Tumblr. The way content gets spread organically, rather than with an algorithm, is actually super nice.

    IMO Twitter's original premise, of letting novel, original, but very short thoughts fly into the ether, has been so thoroughly corrupted that it can't really come back. It's entertaining and engaging, but an awful format for actually exchanging important information, like Discord.

  • Elon Musk Dragged After His Own Chatbot Admits He's A 'Significant Spreader' Of Misinformation
  • This is called prompt engineering, and it's been studied objectively and extensively. There are papers where many different personas are benchmarked, or even generated dynamically, genetic-algorithm style.

    You're still limited by the underlying LLM though, especially something so dry and hyper sanitized like OpenAI's API models.
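    To make the persona idea concrete, here's a minimal sketch against an OpenAI-compatible chat endpoint. The base URL, model name, and personas are made-up placeholders, not from any specific paper or service:

    from openai import OpenAI

    # Point the client at whatever OpenAI-compatible server you have;
    # both values here are placeholders.
    client = OpenAI(base_url="http://localhost:5000/v1", api_key="example-key")

    # Each "persona" is just a different system prompt for the same question.
    personas = {
        "neutral": "You are a careful, neutral fact-checker.",
        "skeptic": "You are a skeptical analyst who demands primary sources.",
        "partisan": "You are an enthusiastic promoter of your own side.",
    }

    question = "Who are the biggest spreaders of misinformation on this platform?"

    for name, system_prompt in personas.items():
        reply = client.chat.completions.create(
            model="local-model",  # whatever model name the server exposes
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
            temperature=0.0,
        )
        print(f"[{name}] {reply.choices[0].message.content}")

    Benchmarking is then, roughly speaking, just scoring each persona's answers; the genetic-algorithm-style papers mutate and recombine the system prompts instead of writing them by hand.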

  • Guide to Self Hosting LLMs Faster/Better than Ollama

    I see a lot of talk of Ollama here, which I personally don't like because:

    • The quantizations they use tend to be suboptimal

    • It abstracts away llama.cpp in a way that, frankly, leaves a lot of performance and quality on the table.

    • It abstracts away things that you should really know for hosting LLMs.

    • I don't like some things about the devs. I won't rant, but I especially don't like the hints that they're cooking up something commercial.

    So, here's a quick guide to get away from Ollama.

    • The first step is to pick your OS. Windows is fine, but if you're setting up something new, Linux is best. I favor CachyOS in particular for its great Python performance. If you use Windows, be sure to enable hardware-accelerated scheduling and disable shared memory.

    • Ensure the latest version of CUDA (or ROCm, if using AMD) is installed. Linux is great for this, as many distros package them for you.

    • Install Python 3.11.x, 3.12.x, or at least whatever your distro supports, and git. If on Linux, also install your distro's "build tools" package.

    Now for actually installing the runtime. There are a great number of inference engines supporting different quantizations; forgive the Reddit link, but see: https://old.reddit.com/r/LocalLLaMA/comments/1fg3jgr/a_large_table_of_inference_engines_and_supported/

    As far as I am concerned, 3 matter to "home" hosters on consumer GPUs:

    • Exllama (and by extension TabbyAPI): a very fast, very memory-efficient "GPU only" runtime that supports AMD via ROCm and Nvidia via CUDA: https://github.com/theroyallab/tabbyAPI

    • Aphrodite Engine. While not strictly as VRAM-efficient, it's much faster with parallel API calls, reasonably efficient at very short context, and supports just about every quantization under the sun and more exotic models than exllama. AMD/Nvidia only: https://github.com/PygmalionAI/Aphrodite-engine

    • This fork of kobold.cpp, which supports more fine-grained KV cache quantization (we will get to that). It supports CPU offloading and, I think, Apple Metal: https://github.com/Nexesenex/croco.cpp

    Now, there are also reasons I don't like llama.cpp, but one of the big ones is that sometimes its model implementations have... quality-degrading issues, or odd bugs. Hence I would generally recommend TabbyAPI if you have enough VRAM to avoid offloading to CPU, and can figure out how to set it up. So:

    • Open a terminal, run git clone https://github.com/theroyallab/tabbyAPI.git

    • cd tabbyAPI

    • Follow this guide for setting up a python venv and installing pytorch and tabbyAPI: https://github.com/theroyallab/tabbyAPI/wiki/01.-Getting-Started#installing

    This can go wrong; if anyone gets stuck, I can help with that. A quick sanity check after the install is sketched below.
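    For what it's worth, here's a minimal sanity-check sketch; run it inside the venv you just made, and if it prints False, the problem is the PyTorch/CUDA (or ROCm) install rather than TabbyAPI itself:

    import torch

    print("torch version:", torch.__version__)
    print("GPU available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        # ROCm builds of PyTorch expose the same torch.cuda API.
        print("device:", torch.cuda.get_device_name(0))
        free, total = torch.cuda.mem_get_info()
        print(f"free VRAM: {free / 1e9:.1f} GB of {total / 1e9:.1f} GB")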

    • Next, figure out how much VRAM you have.

    • Figure out how much "context" you want, aka how much text the LLM can ingest. If a model has a context length of, say, "8K", that means it can support 8K tokens as input, or a bit less than 8K words. Not all tokenizers are the same; some, like Qwen 2.5's, can fit nearly a word per token, while others are more in the ballpark of half a word per token or less. (A quick way to check is sketched just below.)
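    If you want to check the token-to-word ratio yourself, here's a tiny sketch with the transformers library (assuming it's installed; swap in whatever tokenizer you care about):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")
    text = "How many tokens does this sentence actually turn into?"
    tokens = tok.encode(text)
    print(len(text.split()), "words ->", len(tokens), "tokens")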

    • Keep in mind that the actual context length of many models is an outright lie, see: https://github.com/hsiehjackson/RULER

    • Exllama has a feature called "kv cache quantization" that can dramatically shrink the VRAM the "context" of an LLM takes up. Unlike llama.cpp's, its Q4 cache is basically lossless, and on a model like Command-R, an 80K+ context can take up less than 4GB! It's essential to enable Q4 or Q6 cache to squeeze as much LLM as you can into your GPU. (Some back-of-the-envelope math follows below.)
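    The back-of-the-envelope math, if you're curious: the KV cache stores two vectors per layer per token, so its size scales linearly with context length and with bytes per element. The layer/head numbers below are hypothetical, just to show the shape of the calculation; pull the real ones from your model's config:

    def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem):
        # K and V each hold n_layers * n_kv_heads * head_dim values per token.
        return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

    ctx = 64 * 1024
    print(kv_cache_gb(40, 8, 128, ctx, 2.0))  # FP16 cache: ~10.7 GB
    print(kv_cache_gb(40, 8, 128, ctx, 0.5))  # ~Q4 cache:  ~2.7 GB

    That's also why aggressive GQA (few KV heads) plus Q4 cache is what lets long contexts fit on a single consumer GPU.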

    • With that in mind, you can search huggingface for your desired model. Since we are using tabbyAPI, we want to search for "exl2" quantizations: https://huggingface.co/models?sort=modified&search=exl2

    • There are all sorts of finetunes... and a lot of straight-up garbage. But I will post some general recommendations based on total vram:

    • 4GB: A very small quantization of Qwen 2.5 7B. Or maybe Llama 3B.

    • 6GB: IMO llama 3.1 8B is best here. There are many finetunes of this depending on what you want (horny chat, tool usage, math, whatever). For coding, I would recommend Qwen 7B coder instead: https://huggingface.co/models?sort=trending&search=qwen+7b+exl2

    • 8GB-12GB: Qwen 2.5 14B is king! Unlike its 7B counterpart, I find the 14B version of the model incredible for its size, and it will squeeze into this VRAM pool (albeit with very short context/tight quantization for the 8GB cards). I would recommend trying Arcee's new distillation in particular: https://huggingface.co/bartowski/SuperNova-Medius-exl2

    • 16GB: Mistral 22B, Mistral Coder 22B, and very tight quantizations of Qwen 2.5 34B are possible. Honorable mention goes to InternLM 2.5 20B, which is alright even at 128K context.

    • 20GB-24GB: Command-R 2024 35B is excellent for "in context" work, like asking questions about long documents, continuing long stories, anything involving working "with" the text you feed to an LLM rather than pulling from its internal knowledge pool. It's also quite good at longer contexts, out to 64K-80K more or less, all of which fits in 24GB. Otherwise, stick to Qwen 2.5 34B, which still has a very respectable 32K native context, and a rather mediocre 64K "extended" context via YaRN: https://huggingface.co/DrNicefellow/Qwen2.5-32B-Instruct-4.25bpw-exl2

    • 32GB: same as 24GB, just with a higher-bpw quantization. But this is also the threshold where lower-bpw quantizations of Qwen 2.5 72B (at short context) start to make sense.

    • 48GB: Llama 3.1 70B (for longer context) or Qwen 2.5 72B (for 32K context or less)

    Again, browse huggingface and pick an exl2 quantization that will cleanly fill your vram pool + the amount of context you want to specify in TabbyAPI. Many quantizers such as bartowski will list how much space they take up, but you can also just look at the available filesize.
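    A rough rule of thumb for estimating whether a quant will fit, ignoring the bit of overhead from embeddings and quantization metadata:

    def weights_gb(params_billion, bpw):
        # bits per weight / 8 = bytes per weight; billions of weights -> GB
        return params_billion * bpw / 8

    print(weights_gb(14, 4.25))  # ~7.4 GB for a 14B model at 4.25 bpw
    print(weights_gb(32, 4.25))  # ~17 GB for a 32B model at 4.25 bpw

    Add the KV cache estimate from earlier and a little working-buffer overhead on top of that, and it should land comfortably inside your VRAM pool.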

    • Now... you have to download the model. Bartowski has instructions here, but I prefer to use this nifty standalone tool instead: https://github.com/bodaay/HuggingFaceModelDownloader
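    If you'd rather stay in Python, the huggingface_hub package can do the same job; the repo and branch below are just examples (exl2 quantizers like bartowski usually keep each bpw on its own branch, so pick the revision that matches what you want):

    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="bartowski/SuperNova-Medius-exl2",
        revision="6_5",  # example branch name; check the repo for the real ones
        local_dir="models/SuperNova-Medius-exl2-6_5",
    )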

    • Put it in your TabbyAPI models folder, and follow the documentation on the wiki.

    • There are a lot of options. Some to keep in mind are chunk_size (higher than 2048 will process long contexts faster but take up lots of VRAM, less will save a little VRAM), cache_mode (use Q4 for long context, Q6/Q8 for short context if you have room), max_seq_len (this is your context length), tensor_parallel (for faster inference with 2 identical GPUs), and max_batch_size (parallel processing if you have multiple users hitting the TabbyAPI server, but more VRAM usage).

    • Now... pick your frontend. The TabbyAPI wiki has a good compilation of community projects, but Open Web UI is very popular right now: https://github.com/open-webui/open-webui I personally use exui: https://github.com/turboderp/exui

    • And be careful with your sampling settings when using LLMs. Different models behave differently, but one of the most common mistakes people make is using "old" sampling parameters for new models. In general, keep temperature very low (<0.1, or even zero) and rep penalty low (1.01?) unless you need long, creative responses. If available in your UI, enable DRY sampling to tamp down repetition without "dumbing down" the model with too much temperature or repetition penalty. Always use a MinP of 0.05 or higher and disable other samplers. This is especially important for Chinese models like Qwen, as MinP cuts out "wrong language" answers from the response. (A sketch of sane API-side settings follows.)
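    If you're hitting the server directly instead of going through a UI, the same settings look something like this; the exact names of the extra parameters vary by backend, so treat "min_p" and "repetition_penalty" here as examples to check against your server's docs:

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:5000/v1", api_key="example-key")

    reply = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": "Summarize the document above."}],
        temperature=0.1,  # low for factual work; raise it for creative writing
        extra_body={
            "min_p": 0.05,               # cuts off unlikely/wrong-language tokens
            "repetition_penalty": 1.01,  # keep barely above 1.0
        },
    )
    print(reply.choices[0].message.content)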

    • Now, once this is all setup and running, I'd recommend throttling your GPU, as it simply doesn't need its full core speed to maximize its inference speed while generating. For my 3090, I use something like sudo nvidia-smi -pl 290, which throttles it down from 420W to 290W.

    Sorry for the wall of text! I can keep going, discussing kobold.cpp/llama.cpp, Aphrodite, exotic quantization and other niches like that if anyone is interested.

    qwenlm.github.io Qwen2.5: A Party of Foundation Models!

    cross-posted from: https://lemmy.world/post/19925986

    qwenlm.github.io Qwen2.5: A Party of Foundation Models!

    https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e

    Qwen 2.5 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B just came out, with some variants in some sizes just for math or coding, and base models too.

    All Apache licensed, all 128K context, and the 128K seems legit (unlike Mistral).

    And it's pretty sick, with a tokenizer that's more efficient than Mistral's or Cohere's and benchmark scores even better than llama 3.1 or mistral in similar sizes, especially with newer metrics like MMLU-Pro and GPQA.

    I am running 34B locally, and it seems super smart!

    As long as the benchmarks aren't straight up lies/trained, this is massive, and just made a whole bunch of models obsolete.

    Get usable quants here:

    GGUF: https://huggingface.co/bartowski?search_models=qwen2.5

    EXL2: https://huggingface.co/models?sort=modified&search=exl2+qwen2.5

    How does Lemmy feel about "open source" machine learning, akin to the Fediverse vs Social Media?

    Obviously there's not a lot of love for OpenAI and other corporate API generative AI here, but how does the community feel about self hosted models? Especially stuff like the Linux Foundation's Open Model Initiative?

    I feel like a lot of people just don't know there are Apache/CC-BY-NC licensed "AI" they can run on sane desktops, right now, that are incredible. I'm thinking of the most recent Command-R, specifically. I can run it on one GPU, and it blows expensive API models away, and it's mine to use.

    And there are efforts to kill the power cost of inference and training with stuff like matrix-multiplication-free models, open source and legally licensed datasets, cheap training... and OpenAI and such want to shut down all of this because it breaks their monopoly, where they can just outspend everyone scaling, stealing data, and destroying the planet. And it's actually a threat to them.

    Again, I feel like corporate social media vs. the Fediverse is a good analogy, where one is kinda destroying the planet and the other, while still niche, problematic, and a WIP, kills a lot of the downsides.

    Cohere Drops Command-R 35B 08-2024 Update, Just About a Perfect Local LLM for 24GB GPUs.
    huggingface.co CohereForAI/c4ai-command-r-08-2024 · Hugging Face

    cross-posted from: https://lemmy.world/post/19242887

    Cohere Drops Command-R 35B 08-2024 Update, Just About a Perfect Local LLM for 24GB GPUs.
    huggingface.co CohereForAI/c4ai-command-r-08-2024 · Hugging Face

    I can run full 131K context with a 3.75bpw quantization, and still a very long one at 4bpw. And it should barely be fine-tunable in unsloth as well.

    It's pretty much perfect! Unlike the last iteration, they're using very aggressive GQA, which makes the context small, and it feels really smart at long context stuff like storytelling, RAG, document analysis and things like that (whereas Gemma 27B and Mistral Code 22B are probably better suited to short chats/code).

    Pressure grows as "last chance" negotiations for Gaza deal resume

    > Senior U.S., Qatari, Egyptian and Israeli officials will meet on Thursday under intense pressure to reach a breakthrough on the Gaza hostage and ceasefire deal.

    > The heads of the Israeli security and intelligence services told Netanyahu at the meeting on Wednesday that time is running out to reach a deal and emphasized that delay and insistence on certain positions in the negotiations could cost the lives of hostages, a senior Israeli official said.

    Alleged AMD Strix Halo APU Appears in Benchmark

    HP is apparently testing these upcoming APUs in a single, 8-core configuration.

    The Geekbench 5 ST score is around 2100, which is crazy... but not what I really care about. Strix Halo will have a 256-bit memory bus and 40 CUs, which will make it a monster for local LLM inference.

    I am praying AMD sells these things in embedded motherboards with a 128GB+ memory config. Especially in an 8-core config, as I'd rather not burn money and TDP on a 16 core version.
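    The reason the bus width matters: token generation is mostly memory-bandwidth bound, so a crude ceiling on speed is bandwidth divided by the bytes you have to read per token. The memory speed below is an assumption for the sketch, not a confirmed Strix Halo spec:

    # Crude, bandwidth-bound estimate; real numbers will be lower.
    bus_bits = 256
    transfer_gts = 8.0           # assumed LPDDR5X transfer rate in GT/s
    bandwidth_gbs = bus_bits / 8 * transfer_gts   # ~256 GB/s

    model_gb = 70 * 4.5 / 8      # e.g. a 70B model at ~4.5 bits per weight
    print(bandwidth_gbs, "GB/s ->", round(bandwidth_gbs / model_gb, 1), "tokens/s ceiling")

    That's a few times what a normal dual-channel desktop gets, which is why a 128GB+ config would be such a big deal for local inference.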

    Paramount Acquisition Deal Falls Through

    cross-posted from: https://lemmy.world/post/16629163

    I cross posted this from c/Avatar, but I am a Trekkie too and don't like this one bit.

    FYI previous articles seemed to imply the Sony deal is dead.

    Paramount Acquisition Deal Falls Through

    Supposedly for petty personal reasons:

    > The woman who controls the company, Shari Redstone, snatched defeat from the jaws of victory last week as she scuttled a planned merger with David Ellison's Skydance Media.

    > Redstone had spent six months negotiating a complicated deal that would have given control of Paramount to Ellison and RedBird Capital, only to call it off as it neared the finish line.

    > The chief reason for her decision: Her reluctance to let go of a family heirloom she fought very hard to get.

    The fandom doesn't want to talk about it, but the Avatar franchise is in trouble.

    Paramount Sony Deal Not Looking Like A Bid For Whole Company Anymore
    deadline.com Sony & Paramount Sign Non-Disclosure Agreement Allowing Deal Talks To Start, But It’s Not Looking Like A $26 Billion Bid For Whole Company Anymore

    Sony signed a non-disclosure agreement with Paramount allowing deal talks to begin but they'll not be focused on a $26 billion bid for whole company.

    Avatar Studios seems to be part of Paramount Media, aka the "pay television channels" that I assume Sony is not interested in: https://en.wikipedia.org/wiki/Paramount_Global

    And in light of this article: https://deadline.com/2024/05/paramount-sale-hollywood-studio-takeover-history-lessons-1235910245/

    That doesn't look good for Avatar Studios. If they are left behind in a Sony sale, it seems the probability of them getting shut down (or just going down with whatever is left of Paramount) is very high.

    Paramount (Avatar IP owner) sale reopens, as Sony-Apollo swoops in

    The article is a very fast read because it's Axios, but in a nutshell, either:

    • Skydance gets Paramount intact, but possibly with financial trouble and selling some IP.

    • Sony gets Paramount, but restructures the company and also possibly sells some parts.

    • Nothing happens... and Paramount continues its downward spiral, probably accelerated by a failed sale.

    The can of worms opened today, as now Paramount is officially open to a buyout from Sony.

    I don't like this at all. Avatar is a high-budget, animesque fantasy IP, and not historically, provably profitable like Star Trek/SpongeBob. Avatar Studios is a real candidate to be chopped off.

    How do y'all watch Avatar? What screen? What source?

    As the title says. This includes any visual media, including all 7 Books and other stuff.

    What kind of screen do you watch it on? What sound setup? What source?

    Screen poll: https://strawpoll.com/e6Z28M9aqnN

    Source poll: https://strawpoll.com/Q0ZpRmzaVnM

    I'm asking this because:

    A: I'm curious how this fandom generally consumes the shows

    B: I theorize this may have an impact on the experience. Avatar is an audiovisual feast, and I find I get caught up in the art/music more than many viewers seem to. LoK in particular is like a totally different show with high-bitrate HD vs. a bad stream.

    brucethemoose @lemmy.world
    Posts 14
    Comments 787