  • Cloudflare is bad. You're right.
  • Once configured, Tor Hidden Services also just work (you may need to use some fresh bridges in certain countries if ISPs block Tor there though). You don't have to trust any specific third party in this case.
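
    For reference, the whole setup can be as small as the sketch below. Paths follow a Debian-style Tor install and the local port is just an example; the bridge lines only matter where Tor itself is blocked, and the actual bridge values come from bridges.torproject.org.

    ```bash
    # Append a hidden service pointing at a local web service to torrc
    # (Debian-style paths; the local port 8080 is just an example).
    printf '%s\n' \
        'HiddenServiceDir /var/lib/tor/my_service/' \
        'HiddenServicePort 80 127.0.0.1:8080' \
        | sudo tee -a /etc/tor/torrc >/dev/null

    # Where the ISP blocks Tor, also add bridge lines (values come from
    # bridges.torproject.org; the obfs4proxy path may differ per distro):
    #   UseBridges 1
    #   ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
    #   Bridge obfs4 <address>:<port> <fingerprint> cert=<cert> iat-mode=0

    sudo systemctl reload tor
    sudo cat /var/lib/tor/my_service/hostname   # prints the .onion address
    ```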

  • [Paper] Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in SOTA Large Language Models
  • Don't know much about the stochastic parrot debate. Is my position a common one?

    As I see it, current language models don't have any understanding or reflection themselves, but the probabilistic distributions of the languages they learn do - at least to some extent. In this sense, there's some intelligence inherently associated with language itself, and language models are just tools that help us see more aspects of nature than we could before, like X-rays or sonar, except that this part of nature is a bit closer to the world of ideas.

  • Chrome: 72 hours to update or delete your browser.
  • xkcd.com is best viewed with Netscape Navigator 4.0 or below on a Pentium 3±1 emulated in Javascript on an Apple IIGS at a screen resolution of 1024x1. Please enable your ad blockers, disable high-heat drying, and remove your device from Airplane Mode and set it to Boat Mode. For security reasons, please leave caps lock on while browsing.

  • Chrome: 72 hours to update or delete your browser.
  • CVEs are constantly found in complex software; that's why security updates are important. If not these, it would have been others a couple of weeks or months later. And government users can't exactly opt out of security updates, even if they come with feature regressions.

    You also shouldn't keep using software with known vulnerabilities. You can find a maintained fork of Chromium with continued Manifest V2 support or choose another browser like Firefox.

  • AI training data has a price tag that only Big Tech can afford
  • You can get your hands on books3 or any other dataset that was exposed to the public at some point, but large companies have private human-filtered high-quality datasets that perform better. You're unlikely to have the resources to do the same.

  • Any of you have a self-hosted AI "hub"? (e.g. for LLM, stable-diffusion, ...)
  • Mostly via terminal, yeah. It's convenient when you're used to it - I am.

    Let's see, my inference speed now is:

    • ~60-65 tok/s for an 8B model in Q5_K/Q6_K (entirely in VRAM);
    • ~36 tok/s for a 14B model in Q6_K (entirely in VRAM);
    • ~4.5 tok/s for a 35B model in Q5_K_M (16/41 layers in VRAM);
    • ~12.5 tok/s for an 8x7B model in Q4_K_M (18/33 layers in VRAM);
    • ~4.5 tok/s for a 70B model in Q2_K (44/81 layers in VRAM);
    • ~2.5 tok/s for a 70B model in Q3_K_L (28/81 layers in VRAM).

    As for quality, I try to avoid quantisation below Q5, or at least below Q4. I also don't see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
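
    To make the "N/M layers in VRAM" part concrete: with llama.cpp that's just the --n-gpu-layers flag. A rough sketch (the model file name is only an example; recent builds call the binary llama-cli, older ones main):

    ```bash
    # Partial GPU offload with llama.cpp: 16 of the model's layers go to VRAM,
    # the rest stay in system RAM (the model file name is just an example).
    ./llama-cli -m models/35b.Q5_K_M.gguf \
        --n-gpu-layers 16 --ctx-size 4096 \
        -p "Explain K-quants in one paragraph."
    ```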

  • Any of you have a self-hosted AI "hub"? (e.g. for LLM, stable-diffusion, ...)
  • Have been using llama.cpp, whisper.cpp and Stable Diffusion for a long while (most often the first one). My "hub" is a collection of bash scripts and a running SSH server.
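
    Not my actual scripts, but for illustration a wrapper in such a collection can be as thin as this (the paths and model name are placeholders):

    ```bash
    #!/usr/bin/env bash
    # llm.sh - hypothetical thin wrapper around llama.cpp; paths are placeholders.
    set -euo pipefail
    MODEL="${MODEL:-$HOME/models/llama3-8b.Q6_K.gguf}"
    exec "$HOME/llama.cpp/build/bin/llama-cli" -m "$MODEL" --n-gpu-layers 99 -p "$*"
    ```

    The SSH side is then just calling it remotely, something like ssh hub './llm.sh "translate this paragraph: ..."'.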

    I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.

    I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don't rent hardware - don't want any data to leave my machine.
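
    For example, building llama.cpp against ROCm looks roughly like the sketch below; the exact build flag has been renamed across versions (LLAMA_HIPBLAS / GGML_HIPBLAS / GGML_HIP), and the gfx override is only needed for cards ROCm doesn't officially support.

    ```bash
    # Rough ROCm build sketch for llama.cpp; check your checkout's docs for
    # the current flag name (LLAMA_HIPBLAS / GGML_HIPBLAS / GGML_HIP).
    cmake -B build -DGGML_HIPBLAS=ON
    cmake --build build --config Release -j

    # Consumer cards not on ROCm's official list often need a gfx override
    # (the value depends on the card; 10.3.0 is a common RDNA2 choice):
    HSA_OVERRIDE_GFX_VERSION=10.3.0 ./build/bin/llama-cli -m model.gguf -ngl 32 -p "test"
    ```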

    My use isn't intensive enough to warrant measuring energy costs.

  • every worsening cascade of problems...
  • After shopping for solutions online, I cleared CMOS via the button on the mobo. I hoped it would either help the keyboard get recognised by GRUB, or at least deactivate fast boot. But after powering the PC on again, the screen stays blank and the DRAM and BOOT indicator LEDs are lit.

    I've had to boot from a USB stick and regenerate UEFI entries after things like that, though in my case it specifically said it couldn't boot.
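
    If you end up in the same situation, recreating an entry from a live system is short; the disk, partition number and loader path below are examples, so adapt them to your layout.

    ```bash
    # From a live USB: list existing UEFI boot entries, then create a new one.
    # Disk, partition number and loader path are examples - adjust to your setup.
    sudo efibootmgr
    sudo efibootmgr -c -d /dev/nvme0n1 -p 1 \
        -L "GRUB" -l '\EFI\grub\grubx64.efi'
    ```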

    What does your motherboard's manual say about this pattern of LEDs?

    Try booting a live OS and running memtest? (disconnect all bootable drives first)

    Can you double-check your keyboard works with other devices?

  • Sheet music resources

    How do you acquire sheet music?

    There are IMSLP and MuseScore, but many things are just not there.

    Bonus points if you know anywhere xenharmonic/microtonal music is well represented.

  • Downloading a subreddit

    There are some subreddits that may never come back online. There are also some that are very valuable because of their old posts and responses. Alas, the intersection isn't empty (I'm personally anxious about r/suggestmeabook and r/TrueLit).

    Naturally, one would like to download all posts and comments to offline storage. Just as naturally, the usual methods are useless when the subreddit is private.

    Are there any good options for the pessimistic scenario? Scraping the web archive? Filtering ML datasets? Anything else?
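
    For the web-archive route I'm thinking of things like the Wayback Machine's CDX API, which can at least enumerate what's been captured under a subreddit; a sketch (the subreddit, filters and limit are just examples, and old.reddit.com captures tend to be easier to parse than www.reddit.com ones):

    ```bash
    # List Wayback Machine captures under a subreddit via the CDX API
    # (the subreddit, filters and limit here are just examples).
    curl -s 'https://web.archive.org/cdx/search/cdx?url=old.reddit.com/r/TrueLit/*&output=json&filter=statuscode:200&collapse=urlkey&limit=100'
    ```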

    Audalin @lemmy.world
    Posts 2
    Comments 107