CO
Posts 28 · Comments 346 · Joined 2 yr. ago

  • There was a Dilbert TV show. Because it wasn't written wholly by Adams, it was funny and engaging, with character development and a critical eye toward business management, and it treated minorities like Alice and Asok with a modicum of dignity. While it might have been good compared to the original comic strip, it wasn't good TV or even good animation. There wasn't even a plot until the second season. It originally ran on UPN; when they dropped it, Adams accused UPN of pandering to African-Americans. (I watched it as reruns on Adult Swim.) I want to point out the only episodes written by Adams alone:

    1. An MLM hypnotizes people into following a cult led by Wally
    2. Dilbert and a security guard play prince-and-the-pauper

    That's it! He usually wasn't allowed to write alone. I'm not sure if we'll ever have an easier man to psychoanalyze. He was very interested in the power differential between laborers and managers because he always wanted more power. He put his hypnokink out in the open. He told us that he was Dilbert but he was actually the PHB.

    Bonus sneer: Click on Asok's name; Adams put this character through literal multiple hells for some reason. I wonder how he felt about the real-world friend who inspired Asok.

    Edit: This was supposed to be posted one level higher. I'm not good at Lemmy.

  • He's not wrong. Previously, on Awful, I pointed out that folks would have been on the wrong side of Sega v. Accolade as well, to say nothing of Galoob v. Nintendo. This reply really sums it up well:

    [I]t strikes me that what started out as a judo attack against copyright has made copyright maximalists out of many who may not have started out that way.

    I think that the turning point was Authors Guild v. Google, also called Google Books, where everybody involved was avaricious. People want to support whatever copyright makes them feel good, not whatever copyright is established by law. If it takes the example of Oracle to get people to wake up and realize that maybe copyright is bad, then so be it.

  • Previously, on Awful, we considered whether David Chapman was an LSD user. My memory says yes but I can't find any sources.

    I do wonder what you're aiming at, exactly. Psychedelics don't have uniform effects; rather, what unifies them is that they put the user into an atypical state of mind. I gather that Yud doesn't try them because he is terrified of not being in maximum control of himself at all times.

  • Over on Lobsters, Simon Willison and I have made predictions for bragging rights, not cash. By July 10th, Simon predicts that there will be at least two sophisticated open-source libraries produced via vibecoding. Meanwhile, I predict that there will be five-to-thirty deaths from chatbot psychosis. Copy-pasting my sneer:

    How will we get two new open-source libraries implementing sophisticated concepts? Will we sacrifice 5-30 minds to the ELIZA effect? Could we not inspire two teams of university students and give them pizza for two weekends instead?

  • I guess. I imagine he'd turn out like Brandon Sanderson and make lots of YouTube videos ranting about his writing techniques. Videos on Timeless Diction Theory, a listicle of ways to make an Evil AI character convincing, an entire playlist on how to write ethical harem relationships…

  • Kernel developer's perspective: The kernel is just software. It doesn't have security bugs, just bugs. It doesn't have any opinions on userspace, just contracts for how its API will behave. Its quality control is determined by whether it boots on like five machines owned by three people; it used to be whether it booted Linus' favorite machine. It doesn't have a contract for its contributors aside from GPLv2 and an informal agreement not to take people to court over GPLv2 violations. So, LLM contributions are… just contributions.

    It might help to remember that the Linux development experience includes lots of aggressive critique of code. Patches are often rejected. Corporations are heavily scrutinized for ulterior motives. Personal insults are less common than they used to be but still happen, egos clash constantly, and sometimes folks burn out and give up contributing purely because they cannot stand the culture. It's already not a place where contributors are assumed to have good faith.

    More cynically, it seems that Linus has recently started using generative tools, so perhaps his reluctance to craft special contributor rules is part of his personal preference towards those tools. I'd be harsher on that preference if it weren't also paying dividends by e.g. allowing Rust in the kernel.

  • Catching up, and I want to leave a Gödel comment. First, correct usage of Gödel's Incompleteness! Indeed, we can't write down a finite set of rules that tells us everything that is true about the world; we can't even do it for the natural numbers, which is Tarski's Undefinability. These are all instances of the same theorem, Lawvere's fixed-point theorem, and Cantor's theorem is yet another instance. In my framing, previously, on Awful, postmodernism in mathematics was a movement from 1880 to 1970 characterized by finding individual instances of Lawvere's theorem. This all deeply undermines Rand's Objectivism by showing that it must either be uselessly simple and unable to deal with real-world scenarios, or so complex that it carries incompleteness and paradoxes which cannot be mechanically resolved.
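
    For reference, here is the set-level form of Lawvere's theorem I have in mind; this is a sketch, not the full categorical statement, which replaces "surjective onto functions" with point-surjectivity in a cartesian closed category.

    ```latex
    % Lawvere's fixed-point theorem, set-level form.
    \textbf{Theorem.} Let $\varphi : A \to B^A$ be surjective onto functions,
    i.e. every $g : A \to B$ equals $\varphi(a)$ for some $a \in A$. Then every
    $f : B \to B$ has a fixed point.

    \textbf{Proof sketch.} Define $q : A \to B$ by $q(a) = f(\varphi(a)(a))$ and
    pick $a_0$ with $\varphi(a_0) = q$. Then
    $q(a_0) = f(\varphi(a_0)(a_0)) = f(q(a_0))$, so $q(a_0)$ is fixed by $f$.
    Cantor and Tarski follow by contrapositive (negation and ``is not true''
    have no fixed points), while G\"odel's sentence is the fixed point of
    ``is not provable''.
    ```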

  • Something useful to know, which I'm not saying over there because it'd be pearls before swine, is that Glyph Lefkowitz and many other folks core to the Twisted ecosystem are extremely Jewish and well-aware of Nazi symbols. Knowing Glyph personally, I'd guess that he wanted to hang a lampshade on this particular symbol; he loves to parody overly-serious folks and he spends most of his blogposts gently provoking the Python community into caring about software and people. This is the same guy who started a PyCon keynote with, "Friends, Romans, countrymen, lend me your ears; I come to bury Python, not to praise it."

  • Complementing sibling comments: Swift requires an enormous amount of syntactic ceremony to get things done, and it lacks a powerful standard library to abbreviate common tasks. The generative tooling does so well here because Swift is designed for an IDE which provides generative tools of the sort invented in the 80s and 90s; when a Swift developer's editor already generates most of the boilerplate, predicts the types, and tab-completes the very long method/class names, that developer is already on auto-pilot.

    The actual underlying algorithm should be a topological sort, using either Kahn's algorithm or Tarjan's algorithm. It should take fewer than twenty lines total when ceremony is kept to a minimum; the same algorithm appears in my Monte-in-Monte compiler, sorting modules by their dependencies in fifteen lines, and a Python sketch closes this comment. Also, a good standard library should have a routine or module implementing topological sorting and other common graph algorithms; for example, Python's graphlib.TopologicalSorter was added in 2020, and POSIX tsort dates back to 1979. I would expect students to memorize this algorithm upon grokking it during third-year undergrad, as part of learning graph-traversal algorithms more broadly; the idea of Kahn's algorithm is merely to repeatedly pull out vertices with no remaining incoming edges and to error out if none can be found (Tarjan's depth-first variant is no harder), not an easy concept to forget or to fail to rediscover when needed. Congrats, the LLM can do your homework.

    If there are any Swifties here: hi! I love Taytay; I too was born in the late 80s and have trouble with my love life. Anyway, the nosology here is pretty easy: Swift's standard library doesn't include algorithms in general, only algorithms associated with data structures, which themselves are associated with standardized types. Since Swift descends from Smalltalk, its data structures include Collections, so a reasonable fix would be to add a Graph collection and make topological sorting a method; see Python's approach for an example. Another possibility is to abuse the builtin sort routine, but that costs O(n lg n) path lookups and is much more expensive; it's not a long-term solution.
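
    The promised sketch: not the Monte code, but the same idea in Python with invented module names; Kahn's algorithm in roughly fifteen lines, plus the stdlib graphlib call mentioned above.

    ```python
    # Kahn's algorithm: deps maps each module to the set of modules it needs.
    # (Toy example; assumes every module appears as a key.)
    from collections import deque

    deps = {"app": {"http", "json"}, "http": {"socket"}, "json": set(), "socket": set()}

    def toposort(deps):
        remaining = {m: set(ds) for m, ds in deps.items()}   # unresolved dependencies
        dependents = {m: [] for m in deps}                   # who is waiting on m
        for m, ds in deps.items():
            for d in ds:
                dependents[d].append(m)
        ready = deque(m for m, ds in remaining.items() if not ds)
        order = []
        while ready:
            m = ready.popleft()
            order.append(m)
            for waiter in dependents[m]:
                remaining[waiter].discard(m)
                if not remaining[waiter]:
                    ready.append(waiter)
        if len(order) != len(deps):
            raise ValueError("dependency cycle detected")
        return order

    print(toposort(deps))  # e.g. ['json', 'socket', 'http', 'app']

    # Or, since Python 3.9, let the standard library do it:
    from graphlib import TopologicalSorter
    print(list(TopologicalSorter(deps).static_order()))
    ```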

  • One important nuance is that there are, broadly speaking, two ways to express a formal proof: it can either be fairly small but take exponential time to verify, or it can be fairly quick to verify but exponentially large. Most folks prefer to use the former sort of system. However, with extension by definitions, we can have a polynomial number of polynomially-large definitions while still verifying quickly. This leads to my favorite proof system, Metamath, whose implementations measure their verification speed in kiloproofs/second. If you give me a Metamath database then I can confirm any statement in moments with any of several independent programs, and there is programmatic support for looking up the axioms associated with any statement; I can throw more compute at the problem. While LLMs do know how to generate valid-looking Metamath in context, it's safe to try to verify their proofs because Metamath's kernel is literally one (1) string-handling rule (a toy sketch of that rule closes this comment).

    This is all to reconfirm your impression that e.g. Lean inherits a "mediocre software engineering" approach. Junk theorems in Lean are laughably bad due to type coercions. The wider world of HOL is more concerned with piles of lambda calculus than with writing math proofs. Lean as a general-purpose language with I/O means that it is no longer safe to verify untrusted proofs, which makes proof-carrying Lean programs unsafe in practice.

    @Seminar2250@awful.systems you might get a laugh out of this too. FWIW I went in the other direction: I started out as a musician who learned to code for dayjob and now I'm a logician.
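
    The promised toy sketch of that one rule, in Python; this is my own illustration, not real Metamath tooling, and the axiom and substitution below are invented. Applying a proof step means uniformly substituting symbol strings for variables and checking that the results match statements already on hand.

    ```python
    # Toy illustration of Metamath's single kernel rule: uniform substitution
    # of symbol strings for variables. A real verifier also maintains a proof
    # stack and checks disjoint-variable conditions, but the core is this.

    def substitute(symbols, subst):
        """Replace each variable with its assigned string of symbols."""
        out = []
        for sym in symbols:
            out.extend(subst.get(sym, [sym]))
        return out

    # A modus-ponens-shaped axiom over the wff variables 'ph' and 'ps'.
    hypotheses = [["|-", "ph"], ["|-", "(", "ph", "->", "ps", ")"]]
    conclusion = ["|-", "ps"]

    # A proof step supplies a substitution; the verifier only checks that the
    # substituted hypotheses match statements already proven, then accepts the
    # substituted conclusion as a new statement.
    subst = {"ph": ["x", "=", "x"], "ps": ["x", "=", "y"]}
    print([substitute(h, subst) for h in hypotheses])
    print(substitute(conclusion, subst))   # ['|-', 'x', '=', 'y']
    ```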

  • I don't have any good lay literature, but get ready for "steering vectors" this year. It seems like two or three different research groups (depending on whether I count as a research group) independently discovered them over the past two years, and they are very effective at guardrailing because they can e.g. make slurs unutterable without compromising reasoning; a minimal sketch of the idea closes this comment. If you're willing to read whitepapers, try Dunefsky & Cohan (2024), which builds that example into a complete workflow, or Konen et al. (2024), which considers steering as an instance of style transfer.

    I do wonder, in the engineering-disaster-podcast sense, exactly what went wrong at OpenAI because they aren't part of this line of research. HuggingFace is up-to-date on the state of the art; they have a GH repo and a video tutorial on how to steer LLaMA. Meanwhile, if you'll let me be Bayesian for a moment, my current estimate is that OpenAI will not add steering vectors to their products this year; they're already doing something like it internally, but the customer-facing version will not be ready until 2027. They just aren't keeping up with research!
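
    The sketch, in plain NumPy with invented shapes and numbers and no real model: capture hidden states at one layer for prompts that do and don't exhibit a trait, take the difference of the means as the steering vector, and add a scaled copy of it back into the hidden state at inference.

    ```python
    # Contrastive steering-vector sketch (toy data, no real model).
    import numpy as np

    rng = np.random.default_rng(0)
    d_model = 16

    # Pretend these are hidden states captured at one layer for two prompt sets.
    acts_with_trait = rng.normal(loc=0.5, size=(8, d_model))
    acts_without_trait = rng.normal(loc=0.0, size=(8, d_model))

    # The steering vector is the difference of the two means.
    steering_vector = acts_with_trait.mean(axis=0) - acts_without_trait.mean(axis=0)

    def steer(hidden_state, vector, alpha=4.0):
        """Nudge a hidden state along the steering direction."""
        return hidden_state + alpha * vector

    h = rng.normal(size=d_model)           # one token's hidden state
    h_steered = steer(h, steering_vector)  # same state, pushed toward the trait
    print(np.dot(h, steering_vector), np.dot(h_steered, steering_vector))
    ```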

  • Steve Yegge has created Gas Town, a mess of Claude Code agents forced to cosplay as a k8s cluster with a Mad Max theme. I can't think of better sneers than Yegge's own commentary:

    Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from. I had to get my second Claude Code account, finally; they don’t let you siphon unlimited dollars from a single account, so you need multiple emails and siphons, it’s all very silly. My calculations show that now that Gas Town has finally achieved liftoff, I will need a third Claude Code account by the end of next week. It is a cash guzzler.

    If you're familiar with the Towers-of-Hanoi problem, then you can appreciate the contrast between Yegge's solution and a standard one; in general, recursive solutions are fewer than ten lines of code (one is sketched at the end of this comment).

    Gas Town solves the MAKER problem (20-disc Hanoi towers) trivially with a million-step wisp you can generate from a formula. I ran the 10-disc one last night for fun in a few minutes, just to prove a thousand steps was no issue (MAKER paper says LLMs fail after a few hundred). The 20-disc wisp would take about 30 hours.

    For comparison, solving for 20 discs in the famously-slow CPython programming system takes less than a second, with most time spent printing lines to the console. The solution length is exponential in the number of discs, and that's over one million lines total. At thirty hours, Yegge's harness solves Hanoi at fewer than ten lines/second! Also I can't help but notice that he didn't verify the correctness of the solution; by "run" he means that he got an LLM to print out a solution-shaped line.
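
    The standard solver, for scale; this is the whole thing, and CPython prints all of the moves for 20 discs in under a second.

    ```python
    # Recursive Towers-of-Hanoi: under ten lines, printing every move.
    # For 20 discs that is 2**20 - 1 = 1,048,575 lines of output.

    def hanoi(n, source="A", target="C", spare="B"):
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)            # park n-1 discs on the spare peg
        print(f"move disc {n}: {source} -> {target}")  # move the largest remaining disc
        hanoi(n - 1, spare, target, source)            # stack the n-1 discs back on top

    hanoi(20)  # just over one million moves
    ```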

  • NEOM is a laundry for money, religion, genocidal displacement, and the Saudi reputation among Muslims. NEOM is meant to replace Wahhabism, the Saudi family's uniquely violent fundamentalism, with a much more watered-down secularist vision of the House of Saud where the monarchs are generous with money, kind to women, and righteously uphold their obligations as keepers of Mecca. NEOM is not only The Line, the mirrored city; it is multiple different projects, each set up with the Potemkin-village pattern to assure investors that the money is not being misspent. In each project, the House of Saud has targeted various nomads and minority tribes, displacing indigenous peoples who are inconvenient for the Saudi ethnostate, with the excuse that those tribes are squatting on holy land which NEOM's shrines will further glorify.

    They want you to look at the smoke and mirrors in the desert because otherwise you might see the blood of refugees and the bones of the indigenous. A racing team is one of the cheaper distractions.

  • I clicked through too much and ended up finding this. Congrats to jdp for getting onto my radar, I suppose. Are LLMs bad for humans? Maybe. Are LLMs secretly creating a (mind-)virus without telling humans? That's a helluva question, you should share your drugs with me while we talk about it.

  • Nah, it's more to do with stationary distributions. Most tokens move the model towards its stationary distribution; only very surprising tokens can move it away. (Insert physics metaphor here.) Most LLM architectures are Markov, so once they get near that distribution they cannot escape on their own. There can easily be hundreds of thousands of orbits near the stationary distribution, each fixated on a simple token sequence and unable to deviate. Moreover, since most LLM architectures have some sort of meta-learning (e.g. attention), part of a simulated conversation can get stuck while the rest of it continues, e.g. only one chat participant is stationary and the others are not.
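
    A toy NumPy illustration of the pull (the transition matrix and numbers are invented): repeatedly applying a Markov transition matrix squeezes nearly any starting distribution onto the same fixed point, and once the chain is there, it stays.

    ```python
    # Power iteration toward the stationary distribution of a toy Markov chain.
    import numpy as np

    # Rows are the current state, columns the next state; each row sums to 1.
    P = np.array([
        [0.90, 0.05, 0.05],
        [0.10, 0.80, 0.10],
        [0.25, 0.25, 0.50],
    ])

    dist = np.array([0.0, 0.0, 1.0])  # start entirely in the "surprising" state
    for _ in range(50):
        dist = dist @ P               # one step of the chain

    # The stationary distribution pi satisfies pi = pi @ P.
    print(dist)
    print(dist @ P)  # essentially unchanged: the chain has nowhere else to go
    ```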

  • NotAwfulTech @awful.systems

    A Nix flake for detecting and removing fascist software

    TechTakes @awful.systems

    CATGIRL Officially Banned For Cheating!!!

  • It's a power play. Engineers know that they're valuable enough that they can organize openly; also, as in the case of Alphabet Workers Union, engineers can act in solidarity with contractors, temps, and interns. I've personally done things like directly emailing CEOs with reply-all, interrupting all-hands to correct upper management on the law, and other fun stuff. One does have to be sufficiently skilled and competent to invoke the Steve Martin principle: "be so good that they can't ignore you."

  • SneerClub @awful.systems

    Your favorite science YouTubers are misleading you about AI — how to spot lies

    TechTakes @awful.systems

    Ai told me to kіӏӏ 17 people (and myself)!

    SneerClub @awful.systems

    Anil Seth: Can AI Be Conscious?

    SneerClub @awful.systems

    The Biggest, Craziest Wikipedia Drama Ever

    SneerClub @awful.systems

    ChatGPT made me delusional

    NotAwfulTech @awful.systems

    Are You Under the Influence? The Tail That Wags The Dog - Dhole Moments

    NotAwfulTech @awful.systems

    Busy Beaver Gauge

    SneerClub @awful.systems

    Bag of words, have mercy on us

    MoreWrite @awful.systems

    System 3

    SneerClub @awful.systems

    OpenAI investor falls for GPT's SCP-style babble

    SneerClub @awful.systems

    A non-anthropomorphized view of LLMs

    TechTakes @awful.systems

    Linux users failing to respect trans Linux developers

    TechTakes @awful.systems

    Leopard-trainer J. Tunney now scared of leopards

    TechTakes @awful.systems

    Why has Emperor Zuck given us this bounty?

    TechTakes @awful.systems

    HN has no opinions on memetics

    TechTakes @awful.systems

    It's not a death threat, you're just unfamiliar with 90s hip-hop

    TechTakes @awful.systems

    Overly libertarian crypto-bro vs AML regulations: EU edition

    SneerClub @awful.systems

    Big Yud and the Methods of Compilation