
Posts: 10 · Comments: 1,002 · Joined: 2 yr. ago

  • Wasn't there a big deal about Kurzgesagt being associated with shady rationalist-like nonsense a long time ago? I remember my normie friends being like "what a shame, I thought it was such a good channel"...

    Haven't heard about the other two but always happy to discover more popular wrong people to sneer at

  • The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.

    When I read HPMOR (years ago, before I knew who tf Yud was, and I thought Harry was intentionally written as a deeply flawed character and not a fucking self-insert), my favourite part was Hermione's death. Harry then falls into grief that he is unable to cope with, dissociating to such an insane degree that he stops viewing most other people as thinking and acting individuals. He quite literally goes insane as his world, his friend and his illusion of being the smartest and always in control of the situation, ends.

    Of course, now in hindsight I know this is just me inventing a much better character and story, and Yud is full of shit, but I find it funny that he inadvertently wrote a character behaving insanely and probably thought he was actually a turborational guy completely in control of his own feelings.

  • To say you are above tropes means you don’t live and exist.

    To say you are above tropes is actually a trope

  • Also, you can explain in one step from this guide why people with working bullshit detectors tend to immediately clock LLM output, while the executive class, whose whole existence is predicated on not discerning bullshit, is its greatest fan. A lot of us have seen A Guy In A Suit do this: intentionally avoid specifics to make himself/his company/his product look superficially better. Hell, the AI hype itself (and the blockchain and metaverse nonsense before it) relies heavily on this: never say specifics, always say "revolutionary technology, future, here to stay", and quickly run away if anyone tries to ask a question.

  • TechTakes @awful.systems

    Oh shit, Steph is back

  • Jesus what an opening shot

    I find myself periodically queried for my thoughts on artificial intelligence. On the one hand, this is very silly because I’m a humanities PhD who mostly writes about comic books. On the other hand, we’re talking about a field in which one of the most influential thinkers is a Harry Potter fanfic writer who never attended high school, so it’s not like I’ve got imposter syndrome.

    Now I gotta read the rest

  • Help, I asked AI to design my bathroom and it came up with this, does anyone know where I can find that wallpaper?

  • can we cancel Mozilla yet

    Sure! Just build a useful browser not based on chromium first and we'll all switch!

  • Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

    Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don't like and you'll make it

  • Here's a little lesson in trickery / This is going down in history / If you wanna be a sneerer number one / You have to chase a lesswronger on the run!

  • I mean if you ever toyed around with neural networks or similar ML models you know it's basically impossible to divine what the hell is going on inside by just looking at the weights, even if you try to plot them or visualise in other ways.

    There's a whole branch of ML about explainable or white-box models because it turns out you need to put extra care and design the system around being explainable in the first place to be able to reason about its internals. There's no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.

    In other words, "engineers don't know how it works" can have two meanings: that they're hitting computers with wrenches, hoping for the best with no rhyme or reason; or that they don't have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it's not really possible to figure out which specific training data it comes from, or how to stop the model from producing that output on a fundamental level.

    The former is demonstrably false and almost a strawman; I don't know who believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn't collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I'm aware, largely true, or at least I haven't seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem, it'd be a major achievement everyone would be talking about.
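    The weight-inspection point can be sketched in a toy setting (a hypothetical NumPy example, nothing OpenAI-specific): even for a four-neuron network trained on XOR, the learned weights come out as an opaque soup of floats that says nothing human-readable about why the net produces a given output.

    ```python
    # Minimal sketch: train a tiny 2-4-1 MLP on XOR with plain gradient
    # descent, then dump its weights. The numbers are correct behaviour-wise
    # but carry no legible "explanation" of what the net learned.
    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(X):
        h = sigmoid(X @ W1 + b1)      # hidden activations
        return h, sigmoid(h @ W2 + b2)

    _, out0 = forward(X)
    loss0 = np.mean((out0 - y) ** 2)  # MSE before training

    lr = 1.0
    for _ in range(5000):
        h, out = forward(X)
        # backprop through MSE loss and the sigmoid nonlinearities
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

    _, out1 = forward(X)
    loss1 = np.mean((out1 - y) ** 2)  # MSE after training

    # The "explanation" of the trained model is just this blob of floats:
    print(W1)
    print(f"loss before: {loss0:.4f}, after: {loss1:.4f}")
    ```

    Scaling that opacity up from 8 weights to hundreds of billions is exactly why the explainable-ML branch exists as its own research area.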

  • Is it a single person or a worker co-op? Their copyright is sacred.

    Is it a corporation? Lol, lmao, and also yarrr

  • I saw like a couple articles and a talk about Bell's theorem 5 years ago and I immediately clocked this as a vast, vast oversimplification

  • They already had the Essential thing in the Nothing 3, but funnily enough, when I was shopping for a phone, it looked like the least obtrusive and annoying "AI feature" across the board, because every single fucking phone is now "AI powered" or whatever the shit.

    But if they turn their OS into "AI native" and it actually sucks ass then great, I don't think there's literally any non-shitty tech left with Framework turning fash.

  • I still refuse to learn what an ezra is, they will have to drag my ass to room 101 to force that into my brain

  • Happy that we graduated from making military decisions based on what the Oracle of Delphi hallucinated to making military decisions based on what Oracle® DelPhi® Enterprise hallucinated

  • Oh look, it's literally "we're still early", I missed the classics

  • My completely PIDOOMA take is that if you're self-interested and manipulative, you're already treating most if not all people as lesser: less savvy, less smart than you. So just the fact that you can half-ass shit with a bot and declare yourself an expert in everything, without needing such things as "collaboration with other people", ew, is like a shot of cocaine into your eyeball.

    LLMs' tone is also very bootlicking, so if you're already narcissistic and you get a tool that tells you yes, you are just the smartest boi, well... To quote a classic, it must be like being repeatedly kicked in the head by a horse.

  • TechTakes @awful.systems

    Does AI make researchers more productive? What? Why would it? Apparently you can just say that and almost get published!

    FreeAssembly @awful.systems

    Give me your best software engineer blogs

    TechTakes @awful.systems

    None of those words are in the Bible 2.0 (Tossed Salads And Scrumbled Eggs — Ludicity)

    TechTakes @awful.systems

    Devin, the obviously fake "AI Developer", turns out to be fake

    TechTakes @awful.systems

    Zuckerberg ordered Snapchat to literally man-in-the-middle attack customers