Yeah, I found his credulousness about "AI" quite amusing. Like when he went to the wrong station in Japan because he asked ChatGPT. And posted about it, not realizing how much of a dumbass it made him look. But it's starting to wear off.
But hey, he at least admits there is a bubble.
And also, I haven't unfollowed/muted/blocked Mike Masnick yet. And he's at least twice as annoying about AI.
Unrelated: Did Chrome just detect me writing about AI to shill Gemini to me? (puts on tinfoil hat)
If we're talking about the West in general, then yeah, new nuclear there is probably fucked. The rest of the world still builds it for reasonable costs. Not nuclear bro amounts, but still.
I think we could see a future where nuclear makes 5-10% of the world's electricity, which would technically make it a niche source of power, but it would also be a massive increase from today.
The fact that I can log in every morning and ask an AI to review all my emails and chats from yesterday, and then, given what it knows about my goals and my role, suggest what I could have done better, is amazing.
Bubble or not, AI is huge for personal productivity and overall improvement.
Simon Willison writes a fawning blog post about the new "Claude skills" (which are basically files with additional instructions for specific tasks for the bot to use).
How does he decide to demonstrate these awesome new capabilities?
By making a completely trash, seizure inducing GIF...
He even admits it's garbage. How do you even get to the point that you think that's something you want to advertise? Even the big slop monger companies manage to cherry pick their demos.
I wondered if this should be called a shitpost or an effortpost, then I wondered what something that is both would be called, and I came up with "constipationpost".
Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.
It talked about how it's almost impossible to detect whether a model was deliberately trained to produce some "bad" output (like vulnerable code) for some specific set of inputs.
Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such an LLM as a "sleeper agent". But maybe some of y'all will find it interesting.
It's two guys in London and one guy in San Francisco. In London there's presumably no OpenAI office; in SF, you can't be in two places at once, and Anthropic has more true believers/does more critihype.
Unrelated: a few minutes before writing this, a bona fide cultist replied to the programming dev post. A cultist with the handle "BussyGyatt@feddit.org". Truly the dumbest timeline.
Could be, I don't follow it that closely. I'm not aware of any that come close to the level of shitshow of, say, Hinkley Point C. That matters.