Algernoq (the blogpost author):
I assume this is a "Nope, because of secret author evidence that justifies a one-word rebuttal" or a "Nope, you're wrong in several ways but I have higher-value things to do than retype the sequences".
(Also, it's an honor; I share your goal but take a different road.)
[...]
Richard_Kennaway:
What goal do you understand yourself to share with Eliezer, and what different road?
Algernoq:
I don't deserve to be arrogant here, not having done anything yet. The goal: I had a sister once, and will do what I can to end death. The road: I'm working as an engineer (and, on reflection, failing to optimize) instead of working on existential risk-reduction. My vision is to build realistic (non-nanotech) self-replicating robots to brute-force the problem of inadequate science funding. I know enough mechanical engineering but am a few years away from knowing enough computer science to do this.
And the extension of this to characters (and I don't actually remember at this point if this exact way of phrasing it is original to me or not) is that you might think of a three-dimensional character as one who contains at least two two-dimensional characters.
Ahhh! No! I can't! Just... NO. Two stereotypes don't make a full person! (screams into a pillow)
Funnily enough, it isn't even required by their purported Bayesian doctrine (which proves none of them do the math): you could simply "update forward" again based on the new evidence that the text is part-fictional.
Counter-theory: the now completely irrelevant search results and the idiotic summaries are a one-two punch combo that plunges the user into despair and makes them close the browser out of disgust.
Pre-LLM summaries were for the most part actually short.
They were more directly lifted from human-written sources. I vaguely remember lawsuits, or the threat of lawsuits, by newspapers over Google infoboxes and copyright infringement in pre-2019 days, but I couldn't find anything very conclusive with a quick search.
They didn't have the sycophantic—hey look at me I'm a genius—overly-(and wrong)-detailed tone that the current batch has.
I mean if you want to be exceedingly generous (I sadly have my moments), this is actually remarkably close to the "intentional acts" and "shit happens" distinction, in a perverse Rationalist way. ^^
But code that doesn’t crash isn’t necessarily code that works. And even for code made by humans, we sometimes do find out the hard way, and it can sometimes impact an arbitrarily large number of people.
Did you read any of what I wrote? I didn't say that human interactions can't be transactional, I quite clearly—at least I think—said that LLMs are not even transactional.
EDIT:
To clarify, and maybe to put it in terms which are closer to your interpretation.
With humans: indeed you should not have unrealistic expectations of workers in the service industry, but you should still treat them with human decency and respect. They are not there to fit your needs; they have their own self, which matters. They are more than meets the eye.
With AI: while you should also not have unrealistic expectations of chatbots (which I would recommend avoiding altogether, really), where humans are more than meets the eye, chatbots are less. Inasmuch as you still choose to use them, by all means remain polite (for your own sake, rather than for the bot's). There's nothing below the surface.
I don't personally believe that taking an overly transactional view of human interactions is desirable or healthy; I think it's more useful to frame it as respecting other people's boundaries and recognizing when you might be a nuisance (or when to be a nuisance, when there is enough at stake). Indeed, I think (not that this appears to be the case for you) that being overly transactional could lead you to believe that affection can be bought, or that you can be owed affection.
And I especially don't think it's healthy to essentially be saying: "have the same expectations of chatbots and service workers".
TLDR:
You should avoid catching feelings for service workers because they have their own world and wants, and bringing unsolicited advances makes you a nuisance; it's not just about protecting yourself, it's also about protecting them.
You should never catch feelings for a chatbot, because it doesn't have its own world or wants; projecting feelings onto it cuts you off from humanity. It is mostly about protecting yourself, although I would also argue it protects society (by staying healthy).
Don't besmirch the oldest profession by likening it to a soulless vacuum. It's not even a transaction! The AI gains nothing and gives nothing. It's alienation in its purest form (no wonder the rent-seekers love it): the ugliest and least faithful mirror.
✨The Vibe✨ is indeed getting increasingly depressing at work.
It's also killing my parents' freelance translation business. There is still money in live interpreting, in prestige work, and in jobs where highly technical accuracy very obviously matters, but a lot of the rest is drying up.
Jinsatsu Zetsubō (人殺・絶望, but his thralls call him Ginny) was not your ordinary vampire goth demon lord... He delighted in his garments of true terror and dread. What better source of inescapable despair than his beige ulster coat, barely held together by off-yellow gold pins, with a salmon pink napkin in the outer pocket, an ensemble designed to inspire drudgery, sucking all soul and joy from any passerby...
The movement attracted the attention of the founder culture of Silicon Valley, leading to many shared cultural shibboleths and obsessions, especially optimism about the ability of intelligent capitalists and technocrats to create widespread prosperity.
At first I was confused about what kind of moron would try using "shibboleth" positively, but it turns out it's just terribly misquoting a citation:
Rationalist culture — and its cultural shibboleths and obsessions — became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.
Also lol at insisting on "exonym" as a descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the clear intention of criticism from the term; it doesn't really even make sense to use the acronym unless you're doing critical analysis of the movement(s). (Also removing mentions of the especially strong overlap between EA and rationalists.)
It's a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.
Ah, but not everyone's taste is the same, therefore the best conceivable plate of nachos is made worse by existing, because it can then be confronted with people's preferences instead of staying in the platonic realm!
I'm under the impression that he essentially stated as much, though I'm a bit too lazy to go quote mining.