Ah, but each additional sentence drives home the point of absurd over-abundance!
Quite poetically, the sin of verbosity is committed to create the illusion of considered thought and intelligence; in the case of HPMOR, literally by stacking books.
Amusingly, his describing the attempt as "striking words out" rather than "rewording" or "distilling" illustrates, I think, his lack of editing ability.
Fair enough; I will note he fails to specify the actual car-to-Remote-Assistance-operator ratio. Here's hoping the burstiness-readiness staff aren't paid pennies while on "standby".
Did the 1.5 workers assigned per car mostly handle issues with the same cars?
Was it a big random pool?
Or did each worker have their own geographic area with known issues?
Maybe they could have solved the context issues and possible latency issues by seating the workers in the cars, and, for extra-quick intervention speed, putting them in the driver's seat. Revolutionary. (Shamelessly stealing Adam Something's joke format about trains.)
Possible countermeasure: Insist on “crediting” the LLM as the commit author, to regain sanity when doing git blame.
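A minimal sketch of what that could look like (the bot identity and file path below are made up; `--author` is a standard git flag, and the `Co-authored-by:` trailer is a convention forges like GitHub recognize):

```sh
# Option 1: record the LLM as the commit author outright
# (you remain the committer), so git blame names it directly.
git commit --author="LLM Assistant <llm@example.invalid>" -m "Add input parsing"

# Option 2 (softer): stay the author, append a Co-authored-by trailer.
# Plain git blame will still show you; forges like GitHub surface the co-author.
git commit -m "Add input parsing" -m "Co-authored-by: LLM Assistant <llm@example.invalid>"

# Sanity restored:
git blame src/parser.py
```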
I agree that worse docs are a bad enough future, though I remain optimistic that including an LLM in the compile step is never going to be mainstream enough (or anything approaching stable enough, beyond some dumb, useless smoke and mirrors) for me to have to deal with THAT.
Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we're comfortable hearing. I wish I could ask it what the hell people were thinking back then.
I think this part conveys the root insanity of Yud: he fails to understand that language is a cooperative game between humans, who have to trust in common shared lived experience to believe a message was conveyed successfully.
But noooooooo, magic AI can extract all possible meanings and internal states of all possible speakers in all possible situations from textual descriptions alone, because: ✨bayes✨.
The fact that such an (LLM-based) system would almost certainly not be optimal for any conceivable loss-function/training-set pair seems to completely elude him.
The fact that “artificial intelligence” suggests any form of quality is already a paradox in itself ^^.
Would you want to eat an artificial potato?
The smoke and mirrors should be baked in.
It's noteworthy how the operations plan seems to boil down to "follow your gut" and "trust the vibes", placed above "Communicating Well" or even fact-based, discussion-based problem solving. It's all very "don't think about it": let's all be friends and serve the company like obedient drones.
This reliance on instinct, or on the aesthetics of relying on instinct, is a disturbing aspect of Rats in general.
Not even that! It looks like a blurry JPEG of those sources if you squint a little!
Also I’ve sort of realized that the visualization is misleading in three ways:
- They provide an animation from shallow to deep layers to show the dots coming together, making the final result more impressive than it is (look at how many dots are in the ocean).
- You see blobby clouds over sub-continents, with nothing to gauge error within the cloud blobs.
- Sorta relevant, but the borders, as helpfully drawn for the viewer to conform to “our” world knowledge, obviously aren’t even there at all; it’s still holding up a mirror (dare I say a parrot?) to our cognition.
You have no way of judging how correct or wrong the output is, and no one to hold responsible or to act as guarantor.
With the recent release of HeyGen's drag-and-drop video-translation and lip-syncing tool, I saw enough people say:
"Look isn't it amazing, I can speak Italian now"
No, something makes it look like you can, and you have no way of judging how convincing the illusion is. Even if the output is convincing to a native speaker, you still can't independently check that the translation is correct. And again, no one to hold accountable.
I said I wouldn't be confident about it, not that enshittification would not occur ^^.
I oscillate between optimism and pessimism frequently, and for sure many companies will make bad doo-doo decisions.
Ultimately, trying to learn the grift is not the answer for me though; I'd rather work for a company with at least some practical sense and a pretense at some form of sustainability.
The mood strikes; please forgive the following indulgent poem:
Worse before better
Yet comes the AI winter
Ousting the fever
I wouldn't be so confident in replacing junior devs with "AI":
- Even if it did work without wasting time, it's unsustainable: junior devs need somewhere to acquire those skills, since senior devs aren't born from the void and will eventually graduate/retire.
- A junior dev willing to engage their brain would still iterate to a correct implementation more cheaply (and potentially faster) than a senior dev spending time reviewing bullshit implementations and arcane attempts at unreliable "AI" prompting.
It's copy-pasting from Stack Overflow all over again. The main consequence I see of LLM-based coding assistants is a new source of potential flaws to watch out for when doing code reviews.
Either way it's a circus of incompetence.