The AI bubble is so big it's propping up the US economy (for now)
I feel like literally everybody knew it was a bubble when it started expanding and everyone just kept pumping into it.
How many tech bubbles do we have to go through before we learn our lesson?
What lesson? It's a Ponzi scheme, and whoever is last holding the bag is the only one losing.
And that's why it's being done. Everyone hopes that they make it out at just the right time to make millions while the greater fools who join too late are left holding the bag.
Bubbles are great. For those who make it out in time. They suck for everyone else, including the taxpayer who might have to bail out companies and investors.
Always following the doctrine of privatizing profits and socializing losses.
Plus everyone else that pays taxes as they will have to continue to pay for unemployment insurance, food stamps, rent assistance, etc (not the CEOs and execs that caused it that's for sure).
It's the NEW CRYPTO hype, basically.
I get that people who sell AI services want to promote it. That part is obvious.
What I don't get is how gullible the rest of society at large is. Take the Norwegian digitalization minister, who says that 80% of the public sector shall use AI. Whatever that means.
Or building a gigantic fuckoff openai data centre, instead of new industry https://openai.com/nb-NO/index/introducing-stargate-norway/
Jared Diamond had a great take on this in "Collapse": there are countless examples of societies making awful decisions, because the decision-makers are insulated from the consequences. On the contrary, they get short-term gains.
We know that our current model of economic growth and constant new "inventions" is destroying the basis of our lives. We know that the only way to stop is to fundamentally redesign the social system, moving away from capitalism, growth economics, and ever newer gadgets.
But facing this is difficult. Facing this and winning elections with it is even more difficult. Instead, claiming there is some wonder technology that will save us all and putting all the eggs in that basket is much easier. It will fail inevitably, but until then it is easier.
Never. Some people think the universe owes us Star Trek and are just waiting for something new to happen.
The CEOs, C-suites, and some people trying to get into the CS field are the ones that believe in it. I know a person who already has a degree and still thinks it's wise to pursue a grad degree in a field adjacent to or directly involving AI.
A grad course in AI/LLM/ML might actually be useful. It's where my old roommates learned about Google's Transformers and got into LLMs before the hype bubble, back in 2018.
They might get ahead of the curve for the next over-inflated hype bubble, proceed to make unearned garbage loads of money, and have learned something other than how to put ChatGPT in a new wrapper.
As someone who works with integrating AI: it’s failing badly.
At best, it’s good for transcription, at least until it hallucinates and adds things to your medical record that don’t exist. Which it does, and when the providers don’t check for errors (which few do regularly), congrats: you now have a medical record of whatever it hallucinated today.
And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.
They can’t consistently do anything more complex without making errors, and most people are frankly too dumb or lazy to properly verify outputs. And that’s why this bubble is so huge.
It is going to pop, messily.
and most people are frankly too dumb or lazy to properly verify outputs.
This is my main argument. I need to check the output for correctness anyways. Might as well do it in the first place then.
People are happy to accept the wrong answer without even checking lol
Honestly I mostly use it as a jumping off point for my code or to help me sound more coherent when writing emails.
This is exactly why I love duckduckgo's AI results built in to search. It appears when it is relevant (and yes you can nuke it from orbit so it never ever appears) and it always gives citations (2 websites) so I can go check if it is right or not. Sometimes it works wonders when regular search results are not relevant. Sometimes it fails hard. I can distinguish one from the other because I can always check the sources.
And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.
This is what drives me nuts the most about it. We had so many incredibly efficient, purpose-built tools using the same technologies (machine learning and neural networks), and we threw them away in favor of wildly inefficient, general-purpose LLMs that can’t do a single thing right. All because of marketing hype convincing billionaires they won’t need to pay people anymore.
This 1 million%.
The fact that coding is a big corner of the use cases means that the tech sector is essentially high on their own supply.
Summarizing and aggregating data alone isn't a substitute for the smoke and mirrors of confidence that is a consulting firm. It just makes the ones that can lean on branding able to charge more hours for the same output, and adds "integrating AI" as another bucket of vomit to fling.
I tried having it identify an unknown integrated circuit. It hallucinated a chip. It kept giving me non-existent datasheets and 404 links to digikey/mouser/etc.
As someone who is actually an AI tool developer (I just use existing models) - it's absolutely NOT failing.
Lemmy is ironically incredibly tech illiterate.
It can be working and good and still be a bubble - you know that right? A lot of AI is overvalued but to say it's "failing badly" is absurd and really helps absolutely no one.
If you want to define “failing” as unable to do everything correctly, then sure, I’d concur.
However, if you want to define “failing” as replacing people in their jobs, I’d disagree. It’s doing that, even though it’s not meeting the criteria to pass the first test.
Well, from this description it's still usable for things too complex to just do Monte-Carlo, but with possible verification of results. May even be efficient. But that seems narrow.
BTW, even ethical automated combat drones. I know that one word there seems out of place, but if we have an "AI" for target/trajectory/action suggestion, but something more complex/expensive for verification, ultimately with a human in charge, then it's possible to both increase efficiency of combat machines and not increase the chances of civilian casualties and friendly fire (when somebody is at least trying to not have those).
It's going to be great when the AI hype bubble crashes
If I were China, I would be thrilled to hear that the West is building data centres for LLMs, sucking power from the grid, and spending all its attention and money on AI, rather than building better universities and industry. Just sit back and enjoy, while I get ahead in these areas.
They've been ahead for the past 2 decades. Government is robbing us blind because it only serves multinational corporations or foreign governments. It does not serve the people.
never interrupt your enemy while he is making a big mistake
Everyone knows a bubble is a firm foundation to build upon. Now that Trump is back in office and all our American factories are busy cranking out domestic products I can finally be excited about the future again!
I predict that in a year this bubble will be at least twice as big!
So is it smart to short the AI bubble? 👉👈
The market can remain irrational longer than you can remain solvent.
Yup. If you have money you can AFFORD TO BURN, go ahead and short to your heart's content. Otherwise, stay clear and hedge your bets.
The question is when, not if. But it's an expensive question to guess the "when" wrong. I believe the famous idiom is: the market can stay irrational longer than you can stay solvent.
Best of luck!
Willing to take real life money bet that bubble is not going to pop despite Lemmy's obsession here. The value is absolutely inflated but it's definitely real value and LLMs are not going to disappear unless we create a better AI technology.
In general we're way past the point of tech bubbles popping. Software markets move incredibly fast and are incredibly resilient to this. There literally hasn't been a software bubble popping since the dotcom boom. Prove me wrong.
Even if you see problems with LLMs and AI in general, this hopeful doomerism is really not helping anyone. Now, instead of spending effort on improving things, people are these angry, passive, delusional accelerationists without any self-awareness.
I get the thinking here, but past bubbles (dot com, housing) were also based on things that have real value, and the bubble still popped. A bubble, definitionally, is when something is priced far above its value, and the "pop" is when prices quickly fall. It's the fall that hurts; the asset/technology doesn't lose its underlying value.
I mean we haven’t figured out how to make AI profitable yet, and though it’s a cool technology with real-world use cases, nobody has proven yet that the juice is worth the squeeze. There’s an unimaginable amount of money tied up in a technology on the hope that one day they find a way to make it profitable, and though AI as a technology “improves”, it isn’t getting any closer to providing more value than it costs to run.
If I roleplayed as somebody who desperately wanted AI to succeed, my first question would be “What is the plan to have AI make money?” And so far nobody, not even the technology’s biggest sycophants, has an answer.
In a capitalist society, what is good or best is irrelevant. All that matters is whether it makes money. AI makes no money. The $200 and $300/month plans put in rate limits because at those prices they're losing too much money. Let's say the break-even cost for a single request is somewhere between $1-$5 depending on the request, just for the electricity, and people can barely afford food, housing, and transportation as it is. What is the business model for these LLMs going to be? A person could get a coffee today, or send a single request to an LLM? Now start thinking that they'll need newer GPUs next year. And the year after that. And after that. And the data center will need maintenance. They're paying literally millions of dollars to individual programmers.
Maybe there is a niche market for mega corporations like Google who can afford to spend thousands of dollars a day on LLMs, but most companies won't be able to afford these tools. Then there is the problem where if the company can afford these tools, do they even need them?
The only business model that makes sense to me is the one like BMW uses for their car seat warmers. BMW requires you to pay a monthly subscription to use the seat warmers in their cars. LLM makers could charge a monthly subscription to run a micro model on your own device. That free assistant in your Google phone would then be paywalled. That way businesses don't need to carry the cost of the electricity, but the LLM is going to be fairly low-functioning compared to what we get for free today. But the business model could work. As long as people don't install a free version.
I don't buy the idea that "LLMs are good so they are going to be a success". Not as long as investors want to make money on their investments.
I imagine a dystopia where the main internet has been destroyed and watered down so you can only access it through a giant corpo LLM (ISPs will become LLMSPs). So you choose between watching an AI-generated movie for entertainment or a coffee. Because they will destroy the internet any way they can.
Also they'll charge more for prompts related to things you like. It's all there for the plundering, and consumers want it.
Let's say the break-even cost for a single request is somewhere between $1-$5 depending on the request, just for the electricity,
Are you baiting the fine people here?
people can barely afford food, housing, and transportation as it is.
Citation needed. The doomerism in this thread is so cringe.
I believe that if something has enough value, people are willing to pay for it. And by people here I mean primarily executives. The problem is that AI doesn't have enough value to sustain the hype.
LLMs can absolutely disappear as a mass market technology. They will always exist in some sense as long as there are computers to run them and people who care to try, but the way our economy has incorporated them is completely unsustainable. No business model has emerged that can support them, and at this point, I'm willing to say that there is no such business model without orders of magnitude gains in efficiency that may not ever happen with LLMs.
The value a thing creates is only part of whether the investment into it is worth it.
It's entirely possible that all of the money that is going into the AI bubble will create value that will ultimately benefit someone else, and that those who initially invested in it will have nothing to show for it.
In the late 90's, U.S. regulatory reform around telecom prepared everyone for an explosion of investment in hard infrastructure assets around telecommunications: cell phones were starting to become a thing, consumer internet held a ton of promise. So telecom companies started digging trenches and laying fiber, at enormous expense to themselves. Most ended up in bankruptcy, and the actual assets eventually became owned by those who later bought those assets for pennies on the dollar, in bankruptcy auctions.
Some companies owned fiber routes that they didn't even bother using, and in the early 2000's there was a shitload of dark fiber scattered throughout the United States. Eventually the bandwidth needs of near universal broadband gave that old fiber some use. But the companies that built it had already collapsed.
If today's AI companies can't actually turn a profit, they're going to be forced to sell off their expensive data at some point. Maybe someone else can make money with it. But the life cycle of this tech is much shorter than the telecom infrastructure I was describing earlier, so a stale LLM might very well become worthless within years. Or it's only a stepping stone towards a distilled model that costs a fraction to run.
So I'm not seeing a compelling case for investing in AI today. Even if you agree that it will provide value, it doesn't make sense to invest $10 to get $1 of value.
Didn't Microsoft already admit their AI isn't profitable? I suspect that's why they have been laying off in waves. They are hoping government contracts will stem the bleeding or hold them off, and they found the sucker who will just do it: Trump. I wonder if Palantir is suffering too; surely their AI isn't as useful to the military as they claim.
there's an argument that this is just the targeted ads bubble that keeps inflating using different technologies. That's where the money is coming from. It's a game of smoke and mirrors, but this time it seems like they are betting big on a single technology for a longer time, which is different from what we have seen in the past 10 years.
Dotcom was a bubble too, and it popped hard with huge fallout, even though the internet didn't disappear and it still was and is a revolutionary thing that changed how we live our lives.
Overvalued doesn't mean the thing has no value.
Proof: Agentic AI is worthless.
I didn't have the US becoming a banana republic on my bingo card tbf
why not
Yeah ten years seems like plenty of notice
Open models are going to kick the stool out. Hopefully.
GLM 4.5 is already #2 on lm arena, above Grok and ChatGPT, and runnable on homelab rigs, yet just 32B active (which is mad). Extrapolate that a bit, and it’s just a race to the zero-cost bottom. None of this is sustainable.
I did not understand half of what you've written. But what do I need to get this running on my home PC?
I am referencing this: https://z.ai/blog/glm-4.5
The full GLM? Basically a 3090 or 4090 and a budget EPYC CPU. Or maybe 2 GPUs on a threadripper system.
GLM Air? Now this would work on a 16GB+ VRAM desktop, just slap in 96GB+ (maybe 64GB?) of fast RAM. Or the recent Framework desktop, or any mini PC/laptop with the 128GB Ryzen 395 config, or a 128GB+ Mac.
You’d download the weights, quantize yourself if needed, and run them in ik_llama.cpp (which should get support imminently).
https://github.com/ikawrakow/ik_llama.cpp/
But these are…not lightweight models. If you don’t want a homelab, there are better ones that will fit on more typical hardware configs.
You can probably just use ollama and import the model.
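A rough sketch of what that last step can look like with the official ollama Python client, assuming the ollama server is already running and you've imported the GLM weights under some local tag (the tag `glm-4.5-air` below is just a placeholder for whatever name you give your import):

```python
# Minimal sketch: chat with a locally imported model through the ollama Python client.
# Assumes `pip install ollama`, a running ollama server, and a model already imported
# under the hypothetical tag "glm-4.5-air" -- substitute your own tag.
import ollama

response = ollama.chat(
    model="glm-4.5-air",  # placeholder tag, not an official ollama library name
    messages=[
        {"role": "user", "content": "Give me a two-sentence summary of what a MoE model is."},
    ],
)

print(response["message"]["content"])
```

Quantizing yourself and running through ik_llama.cpp (as described above) will probably get more speed out of the same hardware, but the ollama route is the low-effort way to check that the model runs at all.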
Soon to lose the r from propping.
Not only the tech bubble is doing that.
It's also the pyramid scheme of the US housing sector that will cause more financial issues, and the whole credit card system as well.
Ooowee, they are setting up the US for a major bust aren't they. I guess all the wealthy people will just have to buy up everything when it becomes dirt cheap. Sucks to have to own everything I guess.
Did you think it was strange when the tech bubble burst in 2001, and the housing market in San Jose, tech capital of the world, went up?
Recognizing from history the possibilities of where this all might lead, the prospect of any serious economic downturn being met with a widespread push of mass automation—paired with a regime overwhelmingly friendly to the tech and business class, and executing a campaign of oppression and prosecution of precarious manual and skilled laborers—well, it should make us all sit up and pay attention.
Your kids will enjoy their new Zombie Twitter AI teacher with fabulous lesson plans like, "Was the Holocaust real or just a hoax?"
May it go bust, God willing.
You don't believe in the quantum block chain 3D printed AI cloud future mining asteroids for the private Mars colony (yet with no life extension)?
Luddite.
Quantum was popular as "oh god, our cryptography will die, what are we going to do". Now post-quantum cryptography exists and it doesn't seem to be clear what else quantum computers are useful for, other than PR.
Blockchain was popular when the supply of cryptocurrencies was kinda small; now there are too many of them. And also its actually useful applications require having offline power to make decisions. Go on, tell politicians in any country that you want the electoral system exposed and blockchain-based to avoid falsifications. LOL. They are not stupid. If you have a safe electoral system, you can do with much more direct democracy. Except blockchain seems a bit of an overkill for it.
3D printing is still kinda cool, except it's just one tool among others. It's widely used to prototype combat drones and their ammunition. The future is here, you just don't see it.
Cloud - well, bandwidths allowed for it and it's good for companies, so they advertised it. Except even in the richest countries Internet connectivity is not a given, and at some point wow-effect is defeated by convenience. It's just less convenient to use cloud stuff, except for things which don't make sense without cloud stuff. Like temporary collaboration on a shared document.
"AI" - they've ran out of stupid things to do with computers, so they are now promising the ultimate stupid thing. They don't want smart things, smart things are smart because they change the world, killing monopolies and oligopolies along the way.
Well that's a lot of words that I wasted time reading.
No room-temperature superconductor fusion reactors, space-based solar, or private space mining? Luddite.
Quantum computing has incredible value as a scientific tool, what are you talking about.