Microsoft to test “new features and more” for aging, stubbornly popular Windows 10
  • Windows laptops generally get trashy battery life, and if this is going to tank it further, I'd just run Linux full-time on my family laptop and call it a day.

    The only reason we had Windows was my wife's comfort with it, and Zoom sometimes glitches out on Linux.

  • Whistleblower Josh Dean of Boeing supplier Spirit AeroSystems has died
  • Absolutely, my toddler had MRSA within a few days after he was born, and it's most likely due to some contamination or something to that effect.

    Hospitals are severe breeding grounds for resistant bacterial strains via sewage.

  • meme, and reality.
  • Absolutely. My wife flew to her parents' place with our toddler, and I don't have any idea what to watch.

    All I'm watching is nursery rhymes since they're catchy as all hell.

  • What does your current setup look like?
  • ThinkPad T450s (my old laptop)

    OS: Arch Linux
    DE: Plasma

    Services: Arr stack (gluetun, Sonarr, Radarr and Jackett), Jellyfin for videos, Gonic for audio

    All 3 of them run using docker compose
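
    For reference, a minimal docker-compose sketch of this kind of stack (the image names, VPN routing, and port are assumptions for illustration, not my exact config):

    ```yaml
    services:
      gluetun:
        image: qmcgaw/gluetun        # VPN container the Arr apps route through
        cap_add:
          - NET_ADMIN
      sonarr:
        image: lscr.io/linuxserver/sonarr
        network_mode: "service:gluetun"  # share gluetun's network namespace
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"              # Jellyfin web UI
    ```

    Radarr, Jackett, and Gonic would be added the same way as Sonarr.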

  • inflation rule
  • Funny story: we are moving out of our apartment to another in the same city, because it's close to $300 cheaper.

    We had an issue in the bathroom, and when the maintenance guy came, we made small talk. I found out that he lives in another apartment complex, and the property managers live in yet another one, because the complex I live in is expensive, and housing in our city is so expensive that none of us can afford to buy a place.

  • Kids Tablet recommendations.
  • This, and the no-questions-asked two-year replacement policy, is amazing if you have a toddler.

    The bundled foam case is also really great.

    We slapped in a 256 GB SD card and have it almost full of videos that he watches when we travel.

  • GET TO THE POINT
  • My dad and my wife begin telling their anecdotes, and it finally ends in me having a mental and emotional breakdown trying to figure out their point.

    I used to fight them at first to get to the point in a few seconds, to no avail. These days, I just tune in enough to catch every 10th word.

  • What's your dream job?
  • You seem to assume that half the Indian subcontinent's populace doesn't exist.

    People will marry off their kids to donkeys, frogs and cows if it means no drought for a season.

    Astrology runs rampant there.

    Source: Am Indian.

  • One of my back teeth is aching at the moment
  • This. They just need you for follow-up visits, since they get graded on how completely the procedure was done.

    Unfortunately, dental work is one of those things where everything takes multiple sittings.

  • Auditory torture
  • Gods, I relate to this.

    My wife watches Bigg Boss, which is like Big Brother, every day.

    My toddler has some song or the other playing all day long.

    I can't put on my headphones because one of the two is always talking.

    When my in-laws visit:

    My dad-in-law is a media person, so he's on the phone the whole darn day.

    My mom-in-law sing-songs her words.

    My sis-in-law doesn't remember any song past the first two lines, so she sings those two lines the whole fucking day.

    I live in a hellscape of my own devising.

  • Small guide to run Llama.cpp on windows with discrete AMD GPU

    Hi!

    I have an ASUS AMD Advantage Edition laptop (https://rog.asus.com/laptops/rog-strix/2021-rog-strix-g15-advantage-edition-series/) that runs Windows. I still haven't gotten around to installing Linux and setting it up the way I like, even after more than a year.

    I'm just dropping a small write-up for the setup that I'm using with llama.cpp to run on the discrete GPU using CLBlast.

    You can use Kobold, but it's meant more for role-playing stuff and I wasn't really interested in that. Funny thing is, Kobold can also be set up to use the discrete GPU if needed.

    1. For starters you'd need llama.cpp itself from here: https://github.com/ggerganov/llama.cpp/tags.

      Pick the CLBlast version, which will help offload some computation to the GPU. Unzip the download to a directory; I unzipped it to a folder called "D:\Apps\llama\".

    2. You'd need an LLM now, and that can be obtained from HuggingFace or wherever else you'd like. Just note that it should be in GGML format; models from HuggingFace will have "ggml" written somewhere in the filename. The ones I downloaded were "nous-hermes-llama2-13b.ggmlv3.q4_1.bin" and "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin"

    3. Move the models to the llama directory you made above. That makes life much easier.

    4. You don't really need to navigate to the directory using Explorer. Just open PowerShell anywhere and cd into the directory: cd D:\Apps\llama\

    5. Here comes the fiddly part. You need to get the device IDs for the GPU. An easy way to check this is to use "GPU Caps Viewer": go to the tab titled OpenCL and check the dropdown next to "No. of CL devices".

      The discrete GPU is normally listed second, after the integrated GPU. In my case the integrated GPU was gfx90c and the discrete one was gfx1031c.

    6. In the PowerShell window, set the variables that tell llama.cpp which OpenCL platform and device to use. If you're using the AMD driver package, OpenCL is already installed, so you needn't uninstall or reinstall drivers and such.

      $env:GGML_OPENCL_PLATFORM = "AMD"

      $env:GGML_OPENCL_DEVICE = "1"

    7. Check that the variables are set properly

      Get-ChildItem env:GGML_OPENCL_PLATFORM
      Get-ChildItem env:GGML_OPENCL_DEVICE

      This should return the following:

      Name                  Value
      ----                  -----
      GGML_OPENCL_PLATFORM  AMD
      GGML_OPENCL_DEVICE    1

      If GGML_OPENCL_PLATFORM doesn't show AMD, try setting it again: $env:GGML_OPENCL_PLATFORM = "AMD"

    8. Once these are set properly, run llama.cpp using the following:

      D:\Apps\llama\main.exe -m D:\Apps\llama\Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin -ngl 33 -i --threads 8 --interactive-first -r "### Human:"

      OR

      replace Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin with nous-hermes-llama2-13b.ggmlv3.q4_1.bin or whatever LLM you'd like. I like to play with 7B and 13B LLMs with q4_0 or q5_0 quantization. You might need to trawl through the fora here to find parameters for temperature, etc., that work for you.

    9. To check that these work, I've posted the output at pastebin, since formatting it here was a paaaain: https://pastebin.com/peSFyF6H

      Salient features @ gfx1031c (6800M discrete graphics):

      llama_print_timings:        load time =  60188.90 ms
      llama_print_timings:      sample time =      3.58 ms /   103 runs (   0.03 ms per token, 28770.95 tokens per second)
      llama_print_timings: prompt eval time =   7133.18 ms /    43 tokens ( 165.89 ms per token,     6.03 tokens per second)
      llama_print_timings:        eval time =  13003.63 ms /   102 runs ( 127.49 ms per token,     7.84 tokens per second)
      llama_print_timings:       total time = 622870.10 ms

      Salient features @ gfx90c (Cezanne architecture integrated graphics):

      llama_print_timings:        load time =  26205.90 ms
      llama_print_timings:      sample time =      6.34 ms /   103 runs (   0.06 ms per token, 16235.81 tokens per second)
      llama_print_timings: prompt eval time =  29234.08 ms /    43 tokens ( 679.86 ms per token,     1.47 tokens per second)
      llama_print_timings:        eval time = 118847.32 ms /   102 runs (1165.17 ms per token,     0.86 tokens per second)
      llama_print_timings:       total time = 159929.10 ms

    Edit: added pastebin since I actually forgot to link it. https://pastebin.com/peSFyF6H
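
    If you'd rather not retype steps 6-8 every session, they can be scripted. Here's a rough Python sketch using the paths and model filename from this post (adjust them to your own setup; this is an illustration, not a tested script):

```python
import os
import subprocess

# Paths from this post; change them to wherever you unzipped llama.cpp
# and put your model.
LLAMA_DIR = r"D:\Apps\llama"
MODEL = os.path.join(LLAMA_DIR, "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin")

# Step 6: tell llama.cpp which OpenCL platform/device CLBlast should use.
env = dict(os.environ)
env["GGML_OPENCL_PLATFORM"] = "AMD"
env["GGML_OPENCL_DEVICE"] = "1"  # the discrete GPU; "0" was my integrated one

# Step 8: the same main.exe invocation as above.
cmd = [
    os.path.join(LLAMA_DIR, "main.exe"),
    "-m", MODEL,
    "-ngl", "33",
    "-i", "--interactive-first",
    "--threads", "8",
    "-r", "### Human:",
]

# Only launch if the binary actually exists, so the sketch is safe to run anywhere.
if os.path.isfile(cmd[0]):
    subprocess.run(cmd, env=env)
```

    Since the child process inherits `env`, you don't have to set the variables in your own shell first.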

    Viewing magazines from kbin.social

    Hi!

    I subscribed to a few magazines from kbin.social. Is there something I need to check/do so that the subscriptions get synced across the two instances?

    If not, does this mean we are not federated with them yet?

    Regards, fatboy93

    fatboy93 @lemm.ee
    Posts 2
    Comments 67