Dude, you hit the nail on the head. This should only be done with questionable books, ones without the best plot, idea, or premise, to find out whether they're worth reading at all, and only if you don't want to ask people and then wait hours or days for a response lol. :3
Well, it's hard to argue with that, since people choose whatever is easiest for them. It's a pity, though: if these people actually read the book, especially some of these books, they would not only absorb the information but also feel it. Besides, GPT can sometimes stay silent on sensitive topics. Right now it may say truthful things, but in the future it could be made more deceitful, and it would begin to distort the essence of some books (although it may already be doing this).
Not to mention that such easy, quick and superficial assimilation of information leads to degradation.
This guy made a joke that reads identically to the kinds of things people have been saying, without a hint of humour, since the ignoble days of Reader's Digest Condensed Books, right up to people today saying almost exactly what he said here, and so people took him at face value. This despite Poe's Law being a well-known thing.
How terrible.
Generally, if people don't "get" your joke, one of two things is likely happening:
1. Your joke wasn't funny.
2. This was a Schrödinger's Joke: serious until someone says something bad about it, after which it becomes "Gosh, all y'all just can't take a joke!"
2 definitely does happen a lot with conservatives, but I think it's a stretch to suggest it happened here. The evidence @kirk@startrek.website provided seems a little inconclusive to me (I'd really want to see a broader history of satirical comments and/or anti-AI-hype comments prior to this tweet as the real proof, not an after-the-fact comment which could be taken either way), but on the face of it, taking the first tweet seriously is a bit ridiculous. Had they used some self-help book or a piece of genre fiction (even excellent-quality genre fiction), it might have been a bit more ambiguous (even then, the idea that someone would sincerely hold up AI summaries as equivalent to actually reading a book is a fucking stretch), but using Tolstoy? Someone famous for the quality of his prose? Give me a break. Nobody believes that.
1 is obviously just subjective and meaningless. Personally, had I seen the original tweet without context, I think I would have found it funny as a parody of the AI-hyping techbros. You're welcome to disagree, but only insofar as you disagree that you personally found it funny. You are not welcome to make a generic sweeping statement that "it was not funny".
To be sort of fairish, I get the impression that anyone who would say that is the sort of person who could read a book cover to cover and manage to not get anything more than a rough outline of the plot out of it anyway.
The best part is that they don't even need to be real books! Here's one from DeepSeek: "The book 'Lunar Employment for Undergraduates' by Kurt Langer offers practical advice and strategies for finding employment after completing undergraduate studies in Southern Africa."
There was an article recently about how he "enjoys podcasts"... by feeding the transcript of the podcast into the AI, letting it summarise it, and having a conversation with the AI about the podcast on his commute to work.
Comically missing the point that a podcast is a performative medium; the presenter(s) telling you the story is part of the artform, and that's exactly what you've just lost. Turn off tech-bro brain, just for a minute, and actually engage with the product as it was intended.
It just boggles the mind. Do they really think they've stumbled on some sort of secret the rest of us have been sleeping on?
I think that's the whole thing people love about AI; it was the same with the expensive pictures. Tech lads thinking they were early with the secret sauce no one else had found. The boys just wanna feel like they're the smart ones for once.
This is kind of like me when I don't really want to watch a movie or show but I want to know what it's about, so I just watch a summarized commentary on YouTube for a fraction of the time
... only I'm aware I don't really want to watch it in the first place
I always discover that one or two episodes in. It's always that it's a good idea executed poorly.
The fan wiki is great when you just want more of the idea but to skip the cruddy details.
Yes, that's the case. Good direction can turn the most banal story into something interesting, but that's a rare trait, and on top of that, shows and films are team efforts that also have to answer to the interests and requirements of producers/investors/broadcasters. Keeping an idea fresh, well paced, and interesting while taking all of that into account is very hard.
We are flirting with Poe's Law, yes. But I have seen people express similar thoughts in dead earnestness dating as far back as Reader's Digest condensed books, so for decades people have been looking for shortcuts to comprehension of art.
Give me an elevator pitch of the top 10,000 works of literature and philosophy throughout history. Ima speed-run me into a sage this afternoon.
Humanity wrestles with meaning, morality, power, suffering, love, and the search for truth—across every age and culture, we tell stories and ask questions to understand ourselves, each other, and the world, forever torn between hope and despair, freedom and fate, reason and mystery.
It was the best of times, it was the worst of times, and amid revolution and resurrection, two cities bore witness to sacrifice as Sydney Carton, seeking redemption, found "a far, far better thing that I do, than I have ever done."
Summaries and shortcuts can provide surface-level knowledge, but the true benefits of reading—expanded perspective, personal growth, and the joy of discovery—are only realized through immersive, attentive reading. In a world that values "time efficiency" above all else, the richness and depth of art are flattened, and the very qualities that make us human—our capacity for reflection, connection, and wonder—are diminished.
OP, LLMs don't "know" shit. When one says something that conforms to a preexisting bias of yours, that means nothing; it shouldn't strengthen your argument at all. It's not a knowledge base; it's a transformer model that exists to tell you what you're most likely to want to hear given what's come before.
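To make that concrete, here's a deliberately tiny toy sketch of "predict the most likely continuation given what's come before." It's a bigram frequency table, not a transformer (real LLMs use learned attention over subword tokens), but the objective is the same shape: no facts are stored, only statistics about what tends to follow what.

```python
from collections import Counter, defaultdict

# Toy "corpus" (hypothetical, purely illustrative).
corpus = "the model predicts the next token the model predicts text".split()

# Count, for each word, which words follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, steps=4):
    """Greedily extend `word` by always picking the statistically
    most common next word. Nothing here 'knows' anything; it only
    reproduces what was frequent in the training text."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # emits whatever was most frequent, true or not
```

The point of the toy: if the corpus repeats a claim, the model will happily continue with it, which is why an LLM agreeing with you (including agreeing with anti-LLM sentiment) is evidence of nothing.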
The part of the anti-AI crowd who denounce rampant, uncritical use of LLMs but who also shit their pants and clap every time an LLM says something against LLMs tells me they don't have even a bare minimum understanding of machine learning or of cognitive biases like confirmation bias.
(Your link results in an internal runtime error btw.)
Perplexity does those weird runtime errors all the time. Just hit refresh. It eventually wakes up.
OP, LLMs don't "know" shit.
You'll find me making this exact point, incidentally, right here in this forum. I'm well aware that LLMbeciles know literally nothing. And that the "reasoning" models don't do anything that even slightly resembles reasoning.