Alt Text: an image of Agent Smith from The Matrix with the following text superimposed, "1999 was described as being the peak of human civilization in 'The Matrix' and I laughed because that obviously wouldn't age well and then the next 25 years happened and I realized that yeah maybe the machines had a point."
When I heard that line I was like "Yeah, sure. We'll never have AI in my lifetime" and you know what? I was right.
What I wasn't expecting was for a bunch of tech bros to create an advanced chatbot and announce "Behold! We have created AI, let's have it do all of our thinking for us!" while the chatbot spits out buggy code and suggests mixing glue into your pizza sauce.
AI is an umbrella term that covers many things we've already had for a long time, including things like machine learning. This is not a new definition of AI; it's always been the definition.
You're not going to achieve AI on classical computers; "AI" is just machine learning rebranded, like how 5G was advertised as bringing a futuristic utopia back in 2020, only for 4K to end up a premium feature behind paid subscriptions everywhere from 𝕏 (Twitter) to YouTube.
Quantum computers do exist, but they're far from being in the palm of your hand.
I work in the gaming industry, and every week I receive emails about how AI is gonna revolutionize my job and get sent to time-wasting training about how to use Figma AI or other shit like that, because it's the best thing ever according to HR... and it never is, obviously.
At best, it's gonna make middle-management jobs easier, but for devs like me, as long as the "AI" stays out of our engines and stays in the equivalent of collaborative vision boards, it does nothing for me. Not once has trying to use it turned out actually useful. It's mediocre at best, and I can't believe there are game devs who actually try to code with it; can't wait to see that hot garbage hit the market.
I've been enjoying Copilot quite a bit while developing, particularly for languages that I'm not familiar with. I'm not worried about it replacing me, because I very clearly use my experience and knowledge to guide it and to coax answers out of it. But when you tell it exactly what you want, it's really nice to get answers back in the development language without needing to look up syntax.
"Give me some nice warning message css" was an easy, useful one.
AI is not supposed to be an artificial human being. AI just does a task that people associated with humans (before they readjusted the definition of intelligence once such AI had been created).
You won't have general-purpose true AI until it can actually think and reason, and LLMs will never do that. At most they'd be a way of interacting with such an AI.
I genuinely do not understand these very obviously biased comments. By the very definition of AI, we have had it for decades, and suddenly people say we don't have it? I don't get it. Do you hate LLMs so much that you want to change the entire definition of AI (and move it under AGI or something)? This feels unhinged and disconnected from reality; biases so strong they look like delusions.
What is delusional is calling a token generator intelligent. These programs don't know what the input is, nor do they understand what they put out. They "know" only what a likely next token is after a given sequence of tokens, based on previously supplied data.
They understand nothing. They generate nothing new. They don't think. They are not intelligent.
They are very cool, very impressive and quite useful. But intelligent? Pffffffh
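To make "likely next token" concrete, here's a minimal sketch using a toy bigram counter (the corpus and names are made up for illustration; real LLMs learn neural weights over huge vocabularies and long contexts, they don't keep raw counts):

```python
import random
from collections import Counter, defaultdict

# Toy "training data": all the model ever sees is sequences of tokens.
corpus = "the machines had a point and the machines had a plan".split()

# Count which token follows which (a bigram table). An LLM learns a
# neural approximation of this idea over far longer contexts.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample a likely next token given the previous one.
    No understanding involved: just frequencies from the supplied data."""
    candidates = follows[prev]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate text by repeatedly asking "what tends to come next?"
token, output = "the", ["the"]
for _ in range(8):
    if not follows[token]:  # dead end: this token was never seen mid-sequence
        break
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```

Swap the counter for a transformer with billions of parameters and you get an LLM's scale, but the objective, predicting the next token, is the same.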
This argument pre-dates the modern LLM by several decades. When the average person thinks of AI, they think of Star Wars or any of a myriad of other works of science fiction. Most people have never heard the term in any other context, and so are offended by the implied claim (in their understanding of the word) that LLMs are equal to Data from Star Trek.
When I heard that line I was like "Yeah, sure. We'll never have AI in my lifetime" and you know what? I was right.
Unless you just died or are about to, you can't really confidently make that statement.
There's no technical reason to think we won't in the next ~20-50 years. We may not, and there may be a technical reason why we can't. But the previous big hurdles were the amount of compute needed and the fact that computers couldn't handle fuzzy pattern matching; modern AI has effectively solved the pattern-matching problem, and current large models like ChatGPT have more parameters than the human brain has neurons, let alone the power that will be available to them in 30 years.
There's no technical reason to think we won't in the next ~20-50 years
Other than that nobody has any idea how to go about it? The things called "AI" today are not precursors to AGI. The search for strong AI is still nowhere close to any breakthroughs.
I was surprised by how poorly they still do as chatbots versus ELIZA, even after 50 years of potential progress, and by how revered they are in certain contexts.
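For context on what ELIZA actually was: pure rule-based pattern matching, no statistics at all. A minimal sketch in that spirit (these rules are invented for illustration; the original 1966 DOCTOR script worked the same way, just with a much larger rule set):

```python
import re

# A tiny ELIZA-style rule table: regex pattern -> canned response template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.I), "Tell me more about your family."),
]
DEFAULT = "Please, go on."

def respond(user_input: str) -> str:
    """Reflect the user's words back via the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

if __name__ == "__main__":
    print(respond("I need a break from AI hype"))  # Why do you need a break from AI hype?
    print(respond("I am tired"))                   # How long have you been tired?
    print(respond("The weather is nice"))          # Please, go on.
```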
Interestingly enough, though, the directors of The Matrix are two trans women.
But while yes, queerphobia was worse in some ways, it was also not as bad in others. For example, trans people didn't face the massive, organized, targeted attacks back then. In many ways, things have gotten worse in this respect too.
They definitely weren't capitalist, lmao. They only wanted to make the perfect system. You could consider their quest for perfection "greed".
They didn't really have to try, though; they had a great system in place. Humans lived long enough for turnover and provided plenty of energy. That glitch was an issue, but contained. At least until someone decided to fall in love. Then the whole system failed.
You should read the Monk and Robot duology (I've only read the first book), which is solarpunk. The premise is that robots got tired of doing what they were built for and formed a treaty with humans allowing them to wander into the wild and live without human contact.
The further you get from the gold standard, the worse life you'll have. Though you might have more social media and gadgets, you'll have a smaller house and worse-quality food and services, as everything is financialized through debt in a futile attempt to force the elderly, who own all the assets, to consume ever greater amounts, while automation progressively decreases costs and companies find more advanced ways to shrinkflate products.
The rise of authoritarianism and nationalism is happening in multiple countries, including Hungary, Russia, China, and the U.S. Parties like the AfD have grown in strength over the last 20 years, thanks in part to social media companies prioritizing "engagement", and the money it makes them, over societal health.
American civilization? Yes, definitely. Human civilization? I genuinely don't think so. I believe in us as a species and think the best is yet to come (after we rid ourselves of bigots and authoritarians).
Most of human civilization has been run by kings, emperors, and dictators. I see the world's rich gaining more control than ever while the possibilities for everyone else shrink. The lower and lower-middle classes have become too easily influenced by fake news and propaganda. How do we advance when people can be manipulated into going against their own best interests?