
  • Either way it's a circus of incompetence.

  • Something something Poe's law, something something. Honestly, some of the shit I've read should have been satire, but noooooo.

  • Absolutely this, shuf would easily come up in a normal Google search (even with Google's deteriorated relevance); see the one-liner below.
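
    For reference, a minimal shuf version (assuming the task is picking 100 random dictionary words, as the jq toy below does):

    ```bash
    # sample 100 distinct random lines from the system word list
    shuf -n 100 /usr/share/dict/words
    ```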

    For fun, "two" lines of bash + jq can easily achieve the result even without shuf (yes I know this is pointlessly stupid)

    ```bash
    # turn the dictionary into one JSON string per line
    cat /usr/share/dict/words | jq -R . > words.json
    # read /dev/urandom as a stream of unsigned 32-bit ints, map each onto an
    # index into the word list, and print 100 distinct random words
    cat /dev/urandom | od -A n -t u4 | jq -r -n --slurpfile w words.json '
      ($w | length) as $l |
      label $out | foreach ( inputs * $l / 4294967296 | floor ) as $r (
        {i: 0, a: [], new: false} ;
        .new = (.a[$r] | not) | .a[$r] = true | .i += (if .new then 1 else 0 end) ;
        if .i > 100 then break $out elif .new then $w[$r] else empty end
      )
    '
    ```

    Incidentally, this is code that ChatGPT would be utterly incapable of producing, even as a toy example, given the niche use of jq.

  • Almost always sneerious Yud.

  • Ah, but each additional sentence drives home the point of absurd over-abundance!

    Quite poetically, the sin of verbosity is committed to create the illusion of considered thought and intelligence; in the case of HPMOR, literally by stacking books.

    Amusingly, his describing his attempt as "striking words out" rather than "rewording" or "distilling" illustrates, I think, his lack of editing ability.

  • Fair enough. I will note he fails to specify the actual car-to-Remote-Assistance-operator ratio. Here's hoping that the burstiness-readiness staff are not paid pennies when on "stand-by".

  • It makes you wonder about the specifics:

    • Did the 1.5 workers assigned to each car mostly handle issues with the same cars?
    • Was it a big random pool?
    • Or did each worker have their own geographic area with known issues?

    Maybe they could have solved the context issues and possible latency issues by seating the workers in the cars, and, for extra-quick intervention speed, putting them in the driver's seat. Revolutionary. (Shamelessly stealing Adam Something's joke format about trains.)

  • Possible countermeasure: Insist on “crediting” the LLM as the commit author, to regain sanity when doing git blame (see the sketch below).

    I agree that worse docs are a bad enough future, though I remain optimistic that including an LLM in the compile step is never going to be mainstream enough (or anything approaching stable enough, beyond some dumb, useless smoke and mirrors) for me to have to deal with THAT.
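
    A minimal sketch of that countermeasure (the author name and email here are made up):

    ```bash
    # record the LLM as the commit author, so `git blame`
    # points at the bot rather than at a human
    git commit --author="ChatGPT <llm@example.invalid>" -m "generated: add CSV parser"
    ```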

  • In such an (unlikely) future of build-tooling corruption, actual plausible terminology (a sketch follows the list):

    • Intent Annotation Prompt (though sensibly, this should be for doc and validation analysis purposes, not compilation)
    • Intent Pragma Prompt (though sensibly, the actual meaning of the code should not change, and it should purely be optimization hints)
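
    A sketch of what the first might look like; the `@intent:` marker is entirely hypothetical syntax for illustration:

    ```bash
    #!/usr/bin/env bash
    # @intent: "List the five largest files under the given directory."
    # (hypothetical Intent Annotation Prompt: read by doc/validation tooling
    #  only, never fed to the compiler or interpreter)
    largest_files() {
      du -a "$1" | sort -rn | head -n 5
    }
    ```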
  • Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we're comfortable hearing. I wish I could ask it what the hell people were thinking back then.

    I think this part conveys the root insanity of Yud: failing to understand that language is a cooperative game between humans, who have to trust in common shared lived experience to believe the message was conveyed successfully.

    But noooooooo, magic AI can extract all the possible meanings and internal states of all possible speakers in all possible situations from textual descriptions alone, because: ✨Bayes✨

    The fact that such an (LLM-based) system would almost certainly not be optimal for any conceivable loss-function/training-set pair seems to completely elude him.

  • The fact that “artificial intelligence” suggests any form of quality is already a paradox in itself ^^. Would you want to eat an artificial potato? The smoke and mirrors should be baked in.

  • I need eye and mind bleach; it's all very ironic, really.

  • Unhinged is another suitable adjective.

    It's noteworthy how the operations plan seems to boil down to "follow your gut" and "trust the vibes", placed above "Communicating Well" or even "fact-based" and "discussion-based" problem solving. It's all very don't think about it, let's all be friends and serve the company like obedient drones.

    This reliance on instinct, or on the aesthetics of relying on instinct, is a disturbing aspect of Rats in general.

  • ^^ Quietly progressing from "humans are not the only ones able to do true learning" to "machines are the only ones capable of true learning".

    Poetic.

    PS: Eek at the cough extrapolation rules lawyering 😬.

  • Not even that! It looks like a blurry jpeg of those sources if you squint a little!

    Also I’ve sort of realized that the visualization is misleading in three ways:

    1. They provide an animation from shallow to deep layers to show the dots coming together, making the final result more impressive than it is (look at how many dots are in the ocean).
    2. You see blobby clouds over sub-continents, with nothing to gauge the error within the cloud blobs.
    3. Sorta relevant, but obviously the borders, helpfully drawn for the viewer to conform to “our” world knowledge, aren't actually in the data at all; it's still holding up a mirror (dare I say a parrot?) to our cognition.
  • ~~Brawndo~~ Blockchain has got what ~~plants~~ LLMs crave, it's got ~~electrolytes~~ ledgers.

  • That's the dangerous part:

    • The LLM being just about convincing enough
    • The language being unfamiliar

    You have no way of judging how correct or how wrong the output is, and no one to hold responsible or be a guarantor.

    With the recent release of HeyGen's drag-and-drop tool for video translation and lip-syncing, I saw enough people say: "Look, isn't it amazing, I can speak Italian now."

    No: something makes it look like you can, and you have no way of judging how convincing the illusion is. Even if the output is convincing to a native speaker, you still can't immediately check that the translation is correct. And again, no one to hold accountable.

  • I said I wouldn't be confident about it, not that enshittification would not occur ^^.

    I oscillate between optimism and pessimism frequently, and for sure many companies will make bad doo-doo decisions. Ultimately, trying to learn the grift is not the answer for me, though; I'd rather work for a company with at least some practical sense and a pretense of an attempt at some form of sustainability.

    The mood comes; please forgive the following indulgent poem:
    Worse before better
    Yet comes the AI winter
    Ousting the fever

  • I wouldn't be so confident in replacing junior devs with "AI":

    1. Even if it did work without wasting time, it's unsustainable, since junior devs need somewhere to acquire these skills: senior devs aren't born from the void, and existing ones will eventually retire.
    2. A junior dev willing to engage their brain would still iterate to the correct implementation more cheaply (and potentially faster) than a senior dev spending time reviewing bullshit implementations and making arcane attempts at unreliable "AI" prompting.

    It's copy-pasting from Stack Overflow all over again. The main consequence I see from LLM-based coding assistants is a new source of potential flaws to watch out for when doing code reviews.