  • I can’t find any good sources right now, and I’m absolutely not going to dig up a copy of that particular screed to look up the relevant bits, but I think the oil wells in Galt’s Gulch were automated, with self-driving trucks? It might also be implicit in the goods that are produced there, but that is drifting a bit into fanfic, I admit.

    Anyway, between Boston Dynamics and the endless supply of Rand fans on the internet, it’s very hard to research without reading the damn thing.

    If I find a new source, I’ll report back.

  • They had robots in Galt’s Gulch, which means that all businesses need them. If you aren’t randmaxxing 24/7, can you really call yourself a technological visionary at the vanguard of the libertarian master race?

  • Of all the environments that you might want to rearrange to facilitate non-humanoid labour, surely warehouses are the easiest. There’s even a whole load of pre-existing automated warehousing stuff out there already. Wheels, castors, conveyors, scissor lifts… most humans don’t have these things, and they’re ideal for moving box-like things around.

    Industrialisation and previous waves of automation have led to workplaces being rearranged to make things cheaper or faster to make, or both, but somehow the robot companies think this won’t happen again? The only thing that seems to be different this time around is that llms have shown that the world’s c-suites are packed with deeply gullible people, and we now have a load of new technology for manipulating and exploiting them.

  • Much of the content of The Mythical Man-Month is still depressingly relevant, especially in conjunction with Brooks’ later stuff like “No Silver Bullet”. A lot of senior tech management either never read it, or read it so long ago that they forgot the relevant points beyond the title.

    It’s interesting that Clausewitz doesn’t appear in LessWrong discussions. That seems like a big point in favour of his writing.

  • A second post on software project management in a week, this one from deadsimpletech: failed software projects are strategic failures.

    A window into another IT disaster I wasn’t aware of, but clearly there is no shortage of those. An Australian one this time.

    And of course, without having at least some of that expertise in-house, they found themselves completely unable to identify whether Accenture was incompetent, actively gouging them, or both.

    (spoiler alert, it was both)

    Interesting mention of Clausewitz in the context of management, which gives me a bit of pause because techbros famously love “The Art of War”, probably because Sun Tzu was patiently explaining obvious things to idiots, and that works well on them. “On War” might be a better text, I guess.

    https://deadsimpletech.com/blog/failed_software_projects

  • I chose to believe the evidence in front of my eyes over the talking points: SpaceX is decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc.

    I suspect that part of the problem is that there is a company in there that’s doing a pretty amazing job of reusable rocketry at lower prices than everyone else under the guidance of a skilled leader who is also technically competent, except that leader is Gwynne Shotwell, who is ultimately beholden to an idiot manchild who wants his flying cybertruck just the way he imagines it and cannot be gainsaid.

  • Bleugh, I’ve been using Crucial RAM and flash for a hell of a long time, and they’ve always been high quality and reasonably priced. I dislike having to find new manufacturers who don’t suck, especially as the answer increasingly seems to be “lol, there are no such companies”.

    Thanks to the ongoing situation in the US, it doesn’t look like the ai bubble is going to pop soon, but I can definitely see it causing more damage like this before it finally does.

  • For a lot of this stuff at the larger end of the scale, the problem mostly seems to be a complete lack of accountability and consequences, combined with there being, like, four contractors capable of doing the work and three giant accountancy firms able to audit the books.

    Giant government projects always seem to be a disaster, be they construction, healthcare, or IT, and no heads ever roll. Fujitsu was still getting contracts from the UK government even after it was clear they’d been covering up the absolute clusterfuck that was their Post Office system, which resulted in people being driven to poverty and suicide.

    At the smaller scale, well. “No warranty or fitness for any particular purpose” is the whole of the software industry outside of safety-critical firmware sorts of things. We have to expend an enormous amount of effort to get our products at work CE certified so we’re allowed to sell them, but the software that runs them? We can shovel that shit out of the door and no-one cares.

    I’m not sure we’ll ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

  • Reposted from Sunday, for those of you who might find it interesting but didn’t see it: here’s an article about the ghastly state of IT project management around the world, with a brief reference to ai which grabbed my attention and made me read the rest, even though it isn’t about ai at all.

    Few IT projects are displays of rational decision-making from which AI can or should learn.

    Which, haha, is a great quote, but it highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what “good” actually is, then even if your llm could tell the difference between good and bad (which it can’t), you’re still going to get mediocrity out at best. Whole new vistas of inflexible managerial fashion are opening up ahead of us.

    The article continues to talk about how we can’t do IT, and wraps up with

    It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined

    It is probably healthy to be reminded that the software industry was in a sorry state before the llms joined in.

    https://spectrum.ieee.org/it-management-software-failures

  • It is important to note that the reviews were detected as being ai-generated by an ai tool.

    This is a marketing puff piece.

    I mean, I expect that loads of the submissions are by slop extruders… under the circumstances, how could they not be? But until someone does the legwork of checking this, it’s just another magic-eight-ball-says-maybe, dressed up as science.

  • Stuff like this is particularly frustrating because this is one of the places where I have to grudgingly admit that llm coding assistants could actually deliver… it turns out that having to state a problem unambiguously, and having a way in which answers can be automatically checked for correctness, means that you don’t have to worry about bullshit engines bullshitting you so much (a toy sketch of what I mean is below).

    No llm is going to give good answers to “solve the Riemann hypothesis in the style of Euler, Cantor, Tao, 4k 8k big boobies do not hallucinate”, and for everything else the problem then becomes “can you formally specify the parameters of your problem such that correct solutions are unambiguous”, and now you still need your professional mathematicians and computer scientists and cryptographers…
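
    By “automatically checked” I mean something like this toy sketch (my own illustration, not from anything linked here; candidate_sort just stands in for whatever an assistant might spit out): you never read or trust the generated code, you run it against an executable, unambiguous spec and let wrong answers fail on their own.

        import random

        def candidate_sort(xs):
            # stand-in for whatever the assistant produced; the point is that
            # we never have to trust it, only run it
            return sorted(xs)

        def looks_correct(fn, trials=1000):
            # executable, unambiguous spec: the output must match a known-good
            # sort on a pile of random inputs
            for _ in range(trials):
                xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
                if fn(list(xs)) != sorted(xs):
                    return False
            return True

        print(looks_correct(candidate_sort))  # a bullshit implementation fails this on its own

    Swap the random-input check for a property tester or a proof assistant and the same trick scales up, which is exactly why you still need people who can write the spec in the first place.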

  • And whilst we’re in that liminal space where no-one reads the old substack but the new one hasn’t yet surfaced, here’s an article about the ghastly state of IT project management around the world, with a brief reference to ai which grabbed my attention and made me read the rest, even though it isn’t about ai at all.

    Few IT projects are displays of rational decision-making from which AI can or should learn.

    It doesn’t get any cheerier, and wraps up with

    It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined

    Oof.

    https://spectrum.ieee.org/it-management-software-failures

  • Noted for the amusing headline: https://www.nature.com/articles/d41586-025-03506-6

    Major AI conference flooded with peer reviews written fully by AI

    Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.

    Do note that it appears to be an advert for ai peer review detection services, but I was still tickled by the whole “why are there leopards at our face-eating conference” surprise being expressed.

  • Given the state of renewables and energy storage, this feels a lot like the final opportunity for nuclear power in its current state to actually do anything at all, and the “move fast and break things” crowd have no idea about building physical things more complex than a datacentre, which, honestly, isn’t that challenging in comparison.

    openai will be a smoking crater well before site for the first plant will get selected

    Other things that might not last that long include the government of the country in which you’re trying to build a massive piece of infrastructure that represents a significant ongoing maintenance burden and risk.

  • Lord grant me the confidence of a mediocre white man, etc.

  • Synergies!

    Tech companies are betting big on nuclear energy to meet AI’s massive power demands and they’re using that AI to speed up the construction of new nuclear power plants.

    Reactor licensing is a simple, mechanisable form-filling exercise, y’know.

    “Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”

    https://www.404media.co/power-companies-are-using-ai-to-build-nuclear-power-plants/

    (Paywalled, at least for me)

    There’s a much longer, drier and more detailed (but unpaywalled) document here that 404 references:

    https://ainowinstitute.org/publications/fission-for-algorithms

  • Stupid chatbots marketed at gullible Christians aren’t new,

    The app Text With Jesus uses artificial intelligence and chatbots to offer spiritual guidance to users who are looking to connect with a higher power.

    but this is certainly an unusual USP:

    Premium users can also converse with Satan.

    https://www.nbcphiladelphia.com/news/tech/religious-chatbot-apps/4302361/

    (via Parker Molloy’s Bluesky)

  • I’m being shuffled sideways into a software architecture role at work, presumably because my whiteboard output is valued more than my code 😭 and I thought I’d try and find out what the rest of the world thought that meant.

    Turns out there’s almost no way of telling anymore, because the internet is filled with genai listicles on random subjects, some of which even have the same goddamn title. Finding anything from the beforetimes basically involves searching reddit and hoping for the best.

    Anyway, I eventually found some non-obviously-ai-generated work and books, and it turns out that even before llms flooded the zone with shit, no-one knew what software architecture was, and the people who opined on it were basically in the business of creating bespoke hammers and declaring everything else to be the specific kind of nails that they were best at smashing.

    Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

  • I generally read stuff like that NetBSD policy as “please ask one of our ancient, grumpy, busy and impatient grognards, who hate people in general and you in particular, to say nice things about your code”.

    I guess you can only draw useful conclusions if anyone actually clears that particular obstacle.