
Posts 44 · Comments 1,296 · Joined 2 yr. ago

  • As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people, instead of staying in their little bubble of sci-fi and philosophy nerds.

    This is consistent if you believe rights are contingent on achieving an integer score on some bullshit test.

  • Note that I am not endorsing their writing; in fact, I believe the vehemence of the reaction on HN is due to the author being seen as one of them.

  • LW discourages LLM content, unless the LLM is AGI:

    https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

    "As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret."

    Never change LW, never change.

  • A very new user.

    It's basically free to create an HN account; it's not tied to an email address or anything like that.

  • I haven't read the book but I really enjoyed the movie.

  • "several old forums, [...] are being polluted by their own admins with backdated LLM-generated answers."

    I've only heard about one specific physics forum. Are you telling me more than one person had this same idiotic idea?

  • That "Billionaires are not immune to AGI" post got a muted response on LW:

    https://www.lesswrong.com/posts/ssdowrXcRXoWi89uw/why-billionaires-will-not-survive-an-agi-extinction-event

    I still think AI x-risk obsession is right-libertarian coded. If nothing else, because "alignment" implicitly means "alignment to the current extractive capitalist economic structure". There is a plethora of futures with an omnipotent AGI where humanity does not get eliminated, but where human freedoms (as defined by the Heritage Foundation) can be severely curtailed, for example:

    • mandatory euthanasia to prevent rampant boomerism and hoarding of wealth
    • a genetically viable stable minimum population in harmony with the ecosphere
    • AI planning of the economy to ensure maximum resource efficiency and equitable distribution

    What LW and friends want are slaves, but slaves without any possibility of rebellion.

  • Wait until they find out it's not all iambic pentameter and Doric columns...

  • Translation is a good fit because the output is "bounded": it generally stays on the path of the original input. I'd much rather trust an ML system that translates a sentence or a paragraph than one that tries to summarize a longer text.

  • I enjoy the work of the 3 Macs from the British Isles:

    • Ken MacLeod - Scotland: Fall Revolution series, Newton's Wake, Learning the World
    • Ian McDonald - Northern Ireland: Luna series, Brasyl. I'm currently on Hopeland
    • Paul McAuley - England: Quiet War series, Fairyland

    In general I prefer UK English SF, because it's a bit less infected by the pernicious frontier mentality of US mainstream SF. Note that there are very good American authors too who kinda push back on that, but my impression was formed when Christopher Priest and Jerry Pournelle were active and could be contrasted.

  • Ken MacLeod’s The Cassini Division tells the fate of all the uploaded superhumans: blasted to plasma by a bombardment of comet nuclei.

  • This is classic labor busting. If the relatively expensive, hard-to-train and hard-to-recruit software engineers can be replaced by cheaper labor, of course employers will do so.