
Posts: 18 · Comments: 587 · Joined: 2 yr. ago

  • users trade off decision quality against effort reduction

    They should put that on the species' gravestone.

  • What if quantum, but magically more achievable at nearly current technology levels? Instead of qubits they have pbits (probabilistic bits, apparently), and this is supposed to help you fit more compute in the same data center.

    Also they like to use the word thermodynamic a lot to describe the (proposed) hardware.

  • I feel the devs should just ask the chatbot themselves before submitting, if they feel it helps; automating the procedure invites a slippery slope in an environment where doing it the wrong way is being pushed extremely strongly and executives' careers are made on 'I was the one who led AI adoption in company X (but left before any long-term issues became apparent)'.

    Plus the fact that it's always weirdos like the 'hating AI is xenophobia' person who are willing to go to bat for AI doesn't inspire much confidence.

  • As far as I can tell there's absolutely no ideology in the original transformers paper, what a baffling way to describe it.

    James Watson was also a cunt, but calling "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" one of the founding texts of eugenicist ideology or whatever would be just dumb.

  • Hey it's the character.ai guy, a.k.a. first confirmed AI assisted kid suicide guy.

    I do not believe G-d puts people in the wrong bodies.

    Shazeer also said people who criticized the removal of the AI Principles were anti-Semitic.

    Kind of feel the transphobia is barely scratching the surface of all the things wrong with this person.

  • So if a company does want to use LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for processing ML tasks.

    Eh, local LLMs don't really scale; you can't do much better than one person per computer unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren't currently on work laptops and VMs.

    Spark-type machines will do better eventually, but for now they're supposedly geared more towards training than inference; it says here that running a 70B model on one returns around one word per second (three tokens), which is a snail's pace.

  • It definitely feels like the first draft said for the longest time we had to use AI in secret because of Woke.

  • only have 12-days of puzzles

    Obligatory oh good I might actually get something job-related done this December comment.

  • What's a government backstop, and does it happen often? It sounds like they're asking for a preemptive bail-out.

    I checked the rest of Zitron's feed before posting, and it's weirder in context:

    Interview:

    She also hinted at a role for the US government "to backstop the guarantee that allows the financing to happen", but did not elaborate on how this would work.

    Later at the jobsite:

    I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word "backstop" and it muddled the point.

    She then proceeds to explain she just meant that the government 'should play its part'.

    Zitron says she might have been testing the waters, or it's just the cherry on top of an interview where she said plenty of bizarre shit.

  • it often obfuscates from the real problems that exist and are harming people now.

    I am firmly on the side of 'it's possible to pay attention to more than one problem at a time', but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so their trying to suck all the oxygen out of the room is a legitimate problem.

    Yudkowsky and his ilk are cranks.

    That 'Yud is the Neil Breen of AI' is the best thing ever written about rationalism in a YouTube comment.

  • this seems counterintuitive but... comments are the best; 'name of the function but longer' comments are the worst. A plain-text summary of a huge chunk of code that I really should have taken the time to break up, instead of writing a novella about it, is somewhere in the middle.

    I feel a lot of bad comment practices are downstream of JavaScript relying on JSDoc to act like a real language.
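    To make the contrast above concrete, here's a minimal, made-up Python sketch; the function, data, and the "legacy importer" backstory are all hypothetical. The first comment is the 'name of the function but longer' kind, and the second records a non-obvious 'why' the code itself can't express.

```python
users = {"42": "alice"}

def get_user_by_id_bad(user_id):
    # Gets the user by their id.  <- restates the name, adds nothing
    return users.get(str(user_id))

def get_user_by_id(user_id):
    # IDs from the (hypothetical) legacy importer arrive zero-padded
    # ("0042"), so normalize before the lookup or those accounts
    # silently "disappear".
    return users.get(str(user_id).lstrip("0"))
```

    The useful comment survives a refactor of the function name; the redundant one just rots.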

  • Managers gonna manage, but having a term for bad code that works that is more palatable than 'amateur hour' isn't inherently bad imo.

    Worst I've heard is some company forbidding LINQ in C#, which in Python terms is like forcing you to always use for-loops in place of filter/map/reduce, comprehensions, and other stuff like pandas.groupby.
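    A quick sketch of the Python analogy, with made-up data: the same filter-and-map written as the mandated for-loop and as the comprehension such a ban would forbid. They compute identical results; the ban only buys you extra lines.

```python
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# The "always use for-loops" style the policy would force:
squares_of_evens = []
for n in numbers:
    if n % 2 == 0:
        squares_of_evens.append(n * n)

# The one-line equivalent such a ban would forbid:
banned_version = [n * n for n in numbers if n % 2 == 0]

assert squares_of_evens == banned_version
```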

  • My impression from reading the stuff posted here is that omarchy is a nothing project that's being aggressively astroturfed so a series of increasingly fashy contributors can gain clout and influence in the foss ecosystem.

  • Definitely, it's just code for I'm ok with nazis at this point.

  • pro-AI but only self hosted

    Like being pro-corporatism but only with regard to the breadcrumbs that fall off the oligarchs' tables.

    We should start calling so-called open source models trickle-down AI.

  • This improved my mood considerably, thank you.

  • and actually use an AI that cites it’s sources

    make the hallucinotron useful with this one weird trick

  • Zitron is catching strays in the comments for having too much of a bullying tone (I guess against billionaires and tech writers) and for being too insistent on his opinion that the whole thing makes no financial sense. It's also lamented that the entire field of ML avoids bsky because it has a huge AI-hostility problem.

    Concern trolling notwithstanding, the eigenrobot stuff is worrisome, if not specifically for him then for how extremely online the ideological core of the administration seems to be: as close to the lunatics running the asylum as you'll get in a modern political setting.

  • Microsoft will be adding numbers on total meetings summarised, total hours summarised, and various classes of prompts.

    So the manager types are also affected? This might be interesting.