
Posts: 18 · Comments: 569 · Joined: 2 yr. ago

  • 'Genetic engineering to merge with machines' is both a stream of words with negative meaning and something I don't think he could come up with on his own, like the solar-system-sized Dyson sphere or the lab leak stuff. He just strikes me as too incurious to have come across the concepts he mashes together on his own.

    Simplest explanation I guess is he's just deliberately joeroganing the CEO thing and that's about as deep as it goes.

  • Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”

    Fun article.

    Altman, though quieter on the subject, has blogged about the impending “merge” between humans and machines – which he suggested would happen either through genetic engineering or plugging “an electrode into the brain”.

    Occasionally I feel that Altman may be plugged into something that's even dumber and more under the radar than vanilla rationalism.

  • users trade off decision quality against effort reduction

    They should put that on the species' gravestone.

  • What if quantum, but magically more achievable at nearly current technology levels. Instead of qubits they have pbits (probabilistic bits, apparently), and this is supposed to help you fit more compute in the same data center.

    Also they like to use the word thermodynamic a lot to describe the (proposed) hardware.

  • I feel the devs should just ask the chatbot themselves before submitting if they feel it helps; automating the procedure invites a slippery slope in an environment where doing it the wrong way is being pushed extremely strongly and executives' careers are made on 'I was the one who led AI adoption in company x (but left before any long-term issues became apparent)'.

    Plus the fact that it's always weirdos like the hating AI is xenophobia person who are willing to go to bat for AI doesn't inspire much confidence.

  • As far as I can tell there's absolutely no ideology in the original transformers paper, what a baffling way to describe it.

    James Watson was also a cunt, but calling "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" one of the founding texts of eugenicist ideology or whatever would be just dumb.

  • Hey it's the character.ai guy, a.k.a. first confirmed AI assisted kid suicide guy.

    I do not believe G-d puts people in the wrong bodies.

    Shazeer also said people who criticized the removal of the AI Principles were anti-Semitic.

    Kind of feel the transphobia is barely scratching the surface of all the things wrong with this person.

  • So if a company does want to use LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for processing ML tasks.

    Eh, local LLMs don't really scale: you can't do much better than one person per computer unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren't currently on work laptops and VMs.

    Spark-type machines will do better eventually, but for now they're supposedly geared more towards training than inference; it says here that running a 70b model on one returns around one word per second (three tokens), which is a snail's pace.
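    As a rough sanity check on why 70b-class models crawl on unified-memory boxes (all numbers below are ballpark assumptions, not measured benchmarks): decoding is memory-bandwidth bound, so each generated token has to stream roughly the whole set of weights through memory.

```python
# Back-of-envelope estimate, not a benchmark. Assumed numbers:
# a 70B-parameter model quantized to 8 bits, and DGX Spark-class
# unified memory bandwidth of roughly 273 GB/s.
model_params = 70e9      # 70B parameters
bytes_per_param = 1      # 8-bit quantization (assumed)
bandwidth = 273e9        # bytes/sec of memory bandwidth (assumed)

# Decoding streams ~all weights per token, so:
tokens_per_sec = bandwidth / (model_params * bytes_per_param)
print(f"~{tokens_per_sec:.1f} tokens/sec")  # ≈ 3.9
```

    Which lands right around the "three tokens per second" figure quoted above; a discrete GPU with ~1 TB/s of bandwidth would be proportionally faster, but then the model has to fit in its VRAM.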

  • It definitely feels like the first draft said for the longest time we had to use AI in secret because of Woke.

  • only have 12-days of puzzles

    Obligatory oh good I might actually get something job-related done this December comment.

  • What's a government backstop, and does it happen often? It sounds like they're asking for a preemptive bail-out.

    I checked the rest of Zitron's feed before posting and it's weirder in context:

    Interview:

    She also hinted at a role for the US government "to backstop the guarantee that allows the financing to happen", but did not elaborate on how this would work.

    Later at the jobsite:

    I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word "backstop" and it muddled the point.

    She then proceeds to explain she just meant that the government 'should play its part'.

    Zitron says she might have been testing the waters, or it's just the cherry on top of an interview where she said plenty of bizarre shit.

  • it often obfuscates from the real problems that exist and are harming people now.

    I am firmly on the side of it's possible to pay attention to more than one problem at a time, but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so them trying to suck all the oxygen out of the room is a legitimate problem.

    Yudkowsky and his ilk are cranks.

    That Yud is the Neil Breen of AI is the best thing ever written about rationalism in a youtube comment.

  • this seems counterintuitive but... no comments are the best, 'name of the function but longer' comments are the worst. A plain-text summary of a huge chunk of code that I really should have taken the time to break up, instead of writing a novella about it, is somewhere in the middle.

    I feel a lot of bad comment practices are downstream of JavaScript relying on JSDoc to act like a real language.

  • Managers gonna manage, but having a term for bad code that works that is more palatable than 'amateur hour' isn't inherently bad imo.

    Worst I've heard is some company forbidding LINQ in C#, which in Python terms is like forcing you to always use for-loops in place of filter/map/reduce, comprehensions, and other stuff like pandas.groupby.
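    For anyone who doesn't write C#, here's roughly what that ban amounts to, sketched in Python (the data and names are made up for illustration):

```python
# The same filter-and-sum task written both ways.
orders = [("alice", 120), ("bob", 80), ("carol", 200)]

# Comprehension style -- the Python analogue of a LINQ one-liner:
big_total = sum(amount for _, amount in orders if amount >= 100)

# The mandated for-loop style:
big_total_loop = 0
for _, amount in orders:
    if amount >= 100:
        big_total_loop += amount

assert big_total == big_total_loop == 320
```

    Same result either way; the ban just outlaws the one-liner.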

  • My impression from reading the stuff posted here is that omarchy is a nothing project that's being aggressively astroturfed so a series of increasingly fashy contributors can gain clout and influence in the foss ecosystem.

  • Definitely, it's just code for I'm ok with nazis at this point.

  • pro-AI but only self hosted

    Like being pro-corporatism but only with regard to the breadcrumbs that fall off the oligarchs' tables.

    We should start calling so-called open source models trickle-down AI.

  • This improved my mood considerably, thank you.

  • and actually use an AI that cites it’s sources

    make the hallucinotron useful with this one weird trick

  • TechTakes @awful.systems

    Peter Thiel Antichrist lecture: We asked guests what the hell it is

    TechTakes @awful.systems

    Albania appoints AI bot as minister to tackle corruption

    SneerClub @awful.systems

    Where Scoot makes the case about how an AGI could build an army of terminators in a year if it wanted.

    TechTakes @awful.systems

    OpenAI scuttles for-profit transformation

    TechTakes @awful.systems

    "If a man really wants to make a million dollars, the best way would be to start his own social network." -- L. Ron Altman

    TechTakes @awful.systems

    UK creating ‘murder prediction’ tool to identify people most likely to kill

    NotAwfulTech @awful.systems

    Advent of Code 2024 - Historian goes looking for history in all the wrong places

    SneerClub @awful.systems

    New article from reflective altruism guy starring Scott Alexander and the Biodiversity Brigade

    TechTakes @awful.systems

    It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (wsj)

    TechTakes @awful.systems

    Generating (often non-con) porn is the new crypto mining

    SneerClub @awful.systems

    SBF's effective altruism and rationalism considered an aggravating circumstance in sentencing

    SneerClub @awful.systems

    Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next

    SneerClub @awful.systems

    Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability.

    SneerClub @awful.systems

    Reply guy EY attempts incredibly convoluted offer to meet him half-way by implying AI body pillows are a vanguard threat that will lead to human extinction...

    SneerClub @awful.systems

    Existential Comics on rationalism and parmesan

    TechTakes @awful.systems

    Turns out Altman is a lab-leak covid truther, calls virus 'synthetic' according to Spectator piece on AI risk.

    SneerClub @awful.systems

    Rationalist literary criticism by SBF, found on the birdsite