
AI should exist only for its own sake

I've been active in the field of AI since 2012, around the beginning of the GPGPU revolution.

I feel like many, though not most, of the experts and scientists from before and during the early stages of the GPGPU revolution shared a sentiment similar to what I'm stating in the title.

If asked by the public and by investors what it's all actually good for, most would respond with something along the lines of "idk, medicine or something? Probably climate change?", when actually many were really just trying to make Data from TNG a reality, and many others were trying to be first in line for AI immortality and other transhumanist dreams. And these are the S-tier dinosaur savants of AI research I'm talking about, not just the underlings. See e.g. Kurzweil and Schmidhuber.

The moment AI went commercial it all went to shit. I see AI companies selling dated methods on new compute to badly solve X, Y, Z and other things that weren't even problems. I see countless people hate and criticize, and I can't even complain, because for the most part I agree with them.

I see some people vastly overstate what it is and isn't, and others trivialize it. There's little in between, and of the people who want AI for its own sake alone, virtually none are left, save for mostly vulnerable people who've been manipulated into parasocial relationships with AI, and a handful of experts who face brutal consequences and opposition from all sides the moment they speak openly.

Call me an idiot for ideologically defending a technology that, in the long term, in 999,999 out of 1,000,000 scenarios will surely harm us. But AI has been inevitable since the invention of the transistor, and every major post-commercialization mindset steers us away from the one-in-a-million paths where we'd still be fine in 2100.

21 comments
  • AI doesn't have "its own sake." The LLM boom has very little in common with "AI" as you described it. The product called "AI" doesn't live up to a utopian sci-fi fantasy because we do not live in a sci-fi utopia, and fantasies are not real.

  • But AI has been inevitable since the invention of the transistor

    If the thoughts and opinions of the people who developed AI are irrelevant to its existence, why should we value their thoughts and opinions about how it's used?

    The Manhattan Project scientists were writing hand-wringing op-eds, making policy suggestions, and lobbying the government basically until they died. It didn't amount to much.

  • If you mean that AI as a field of study, as an endeavor, as a pursuit and goal... should exist?

    Then yes, in theory, I agree.

    ... If done properly.

    Unfortunately, as you point out, humans broadly appear to be too stupid to pursue this goal properly. We seem to just want a magic money machine, or a new god to worship and usher in heaven on earth, or a fully automated kill chain, or a global panopticon spying system.

    Clearly, we are not ready for this yet. We need to seriously reform ourselves and our societies before we throw more resources at attempting to invent a superintelligence. Instead of trying to summon a techno god from the Eldritch plane and then being surprised to find that we fucked up the invocation ritual through our greed, haste and laziness... we should perhaps shore up, reform, or even revolutionize the foundations of every interlinked system that even allows us to seriously ask whether or not AI is a 'good' goal.

    If our current iteration of LLM-based AI proliferates through all of human society and more or less destroys and undermines it... that's on us, ultimately.

    So... in practice?

    Well, it'd be nice if any of our governments were immune to being corrupted by promises of wealth and harmony, rather than just believing this shit blindly.

    But that appears to be a similar magnitude of fanciful pipedream.

    So what should be the case?

    I don't know.

    Were I more optimistic, I would say no, and that we should put those minds and resources toward something like a globalized Manhattan Project to figure out the most cost-effective ways, built out of proven technologies, to brace for the impacts of climate change.

    But after seeing humanity's attempts toward something approximating that fail, for basically my entire lifetime... I am not optimistic.

    Maybe if we could construct a thinking machine based around the concept of defaulting to 'I don't know' when it isn't sure of something, we'd be in a better spot. But at the moment, as best as I can tell, my opinion doesn't matter at this scale anyway: we've already irrevocably, fundamentally broken our planet's climate; we've already built our own Great Filter and are past the point of being able to meaningfully deconstruct it.

    Ask again in a century, if technological civilization still exists.

    • I don't disagree with most of what you wrote, just one nitpick and a comment:

      If you mean that AI as a field of study, as an endeavor, as a pursuit and goal… should exist?

      No, but the product of all that; the endeavor itself would only be a means to the end that is its product. I elaborated on this in a reply to the comment you wrote just previously.

      Maybe if we could construct a thinking machine based around the concept of defaulting to ‘I don’t know’ when it isn’t sure of something, we’d be in a better spot

      That would undoubtedly be very good, but let me take this opportunity to clarify something of what AI is and isn't: LLMs are indeed just autocomplete on steroids. And humans are indeed just [replicate] on steroids. LLMs are just transistors switching, and humans are just molecular dynamics.

      The real question is what succeeding in the objective (replicate for humans, predict text for LLMs) implies. That holds irrespective of the underlying nature (molecular dynamics, semiconductors), unless we want to make this debate religious, which I am not qualified to participate in. The human objective implied, clearly, everything you can see of humanity. The LLM objective implies modeling and emulating human cognition: not perfectly, not all of it, but enough of it that it should be a greater ethical issue than most people, on any side (commercial: deny because business; anti-AI: deny because trivializing), are willing to admit.
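
      To make "predict text" concrete, here's a minimal toy sketch (Python; the corpus and every name in it are purely illustrative, and this is nothing like how a real LLM is built): a bigram character model "trained" by counting, then scored with the same next-token cross-entropy objective that LLMs minimize at vastly larger scale.

        # Toy illustration of the "predict text" objective, not a real LLM:
        # a bigram character model trained by counting, scored with
        # next-token cross-entropy (average negative log-likelihood).
        from collections import Counter, defaultdict
        import math

        corpus = "the cat sat on the mat. the dog sat on the log."

        # "Training": count how often each character follows each character.
        counts = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1

        def next_char_probs(prev):
            # Probability distribution over the next character, given the previous one.
            total = sum(counts[prev].values())
            return {c: n / total for c, n in counts[prev].items()}

        # The objective: average negative log-likelihood of the actual next character.
        nll = 0.0
        for prev, nxt in zip(corpus, corpus[1:]):
            p = next_char_probs(prev).get(nxt, 1e-12)  # tiny floor for unseen pairs
            nll -= math.log(p)
        print("average next-char cross-entropy:", nll / (len(corpus) - 1))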

      • No, but the product of all that; the endeavor itself would only be a means to the end that is its product.

        Ok, ok, minor misphrasing or misunderstanding on my part, but yes, in theory I believe that the actual thing produced by a properly pursued/conducted... endeavor of AI research ... would be a good thing, yes.

        I elaborated on this in a reply to the comment you wrote just previously.

        Yep, and that clarified that as well, thank you for that.

        That would undoubtedly be very good,

        First off, glad you agree on that lol.

        but let me take this opportunity to clarify something of what AI is and isn't: LLMs are indeed just autocomplete on steroids.

        No argument there.

        And humans are indeed just [replicate] on steroids. LLMs are just transistors switching, and humans are just molecular dynamics.

        Eh, disagree here.

        The methods by which the synapses fire and construct memories and evaluate inputs and make decisions... they are very, very different from how LLMs... let's say, attempt to simulate the same.

        They are functionally, mechanistically distinct, in many ways.

        I've always been a fan of the 'whole brain emulation' approach to AI, and... while I am no expert, my layman understanding is that we are consistently shocked and blown away by how much more complicated brains actually are at this mechanistic level... and again, that LLMs are really a very poor, simplified attempt at emulating this, just with gazillions more compute power and training data.

        The real question is what succeeding in the objective (replicate for humans, predict text for LLMs) implies.

        I would argue (and have argued) that these processes are so distinct that we should, at bare minimum, be asking this question separately for different approaches to generating an AI; there are more approaches than just LLMs, and I think they would or could imply vastly different things, should one or multiple methods... be pursued, perfected, hybridized... all different questions with different implications.

        That holds irrespective of the underlying nature (molecular dynamics, semiconductors), unless we want to make this debate religious, which I am not qualified to participate in. The human objective implied, clearly, everything you can see of humanity. The LLM objective implies modeling and emulating human cognition: not perfectly, not all of it, but enough of it that it should be a greater ethical issue than most people, on any side (commercial: deny because business; anti-AI: deny because trivializing), are willing to admit.

        I fundamentally do not agree that LLMs can or will ever emulate the totality of human cognition.

        I see no evidence they can do metacognition in a robust, consistent, useful way.

        I see no evidence they can deduce implications from higher-level concepts across fields and disciplines, or that they can propose ways to test their own hypotheses, or even really just check their own output, half the time, for actual correctness.

        They seem to me to be maxing out at a rough average of... a perma-online human of approximately average intelligence, but one that either has access to instant recall of all the data on the internet, or exists in some kind of time bubble where they can read things a billion times faster than a human can, yet can't really do critical thinking.

        ...

        But perhaps I am missing your point.

        Should LLMs achieve an imperfect emulation of, or a kind of imitation of, human intelligence, what does that mean?

        Well, uh... that's the world we currently live in, and yes, I do agree most people simplify this all way too much, but my answer to this hypothetical (which I do not think is actually a hypothetical) is as I already said:

        We are basically just building a machine god, which we will largely worship, love, fear, respect, and learn from, citing our own interpretations of what it says as backing for our own subjective opinions, worldviews, and policy prescriptions.

        Implications of this?

        Neo Dark Age, the masses relinquish the former duties of their minds to the fancy autocomplete, pandemonium ensues.

        The elites don't care; they'll be fine so long as it makes them money and keeps us too stupid and distracted to do anything meaningfully effective about the collapsing biosphere and increasingly volatile climate. They'll hide away in bunkers and corporate enclaves while we basically all kill each other when the food starts to run out.

        Which is a net win for them, because we are all 'useless eaters' anyway; they'll figure out how to build a humanoid robot that can do menial labor one of these days, and they'll be fine.

        Or at least they think that.

        Lots could go wrong with that plan, but I'd bet they're closer to being correct than incorrect.

        Uh yeah, yeah, that is, I think, the implication of the actual current path we are on, the current state of things.

        Sorry if you were asking a slightly different question and I missed it.

        EDIT:

        it now occurs to me that we may be unironically reproducing an approximation of a comment thread or post from somewhere in the bowels of LessWrong, and... this causes me discomfort.

  • I agree that the premature commercialization will do more harm to the field than good. For now, it needed funding, and our system requires investment and return on investment. All of which I'm sure you know. But I don't know how you inspire someone to give you large sums of money without connecting to their emotional core. And most don't have a connection to AGI, a thing that is far from guaranteed, or to transhumanism, an ideology filled with some of the nuttiest, least relatable people I've ever met.

    Ultimately, how do you fund this very expensive enterprise?

  • In your professional opinion, how long until we have an AI-powered Morris Worm situation?
