Linus Torvalds reckons AI is ‘90% marketing and 10% reality’
I had a professor in college who said that once an AI problem is solved, it is no longer AI.
Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they're just tools we use without thinking about them.
I'm sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on a statistical likelihood of what comes next from the group of possible words that my gesture could be. This would have been the realm of AI once, but now it's just the keyboard app on my phone.
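Roughly, as a toy sketch, the keyboard is doing something like this (every word and number here is invented for illustration; real keyboards use far fancier models):

```python
# Rank the candidate words a swipe gesture could mean by combining how well
# each word matches the gesture path with how likely it is to follow the
# previous word. All scores below are made up for illustration.

# Hypothetical scores for how closely the swipe path matches each word.
gesture_match = {"ship": 0.70, "shop": 0.92, "snip": 0.55}

# Toy bigram model: P(word | previous word), again invented.
bigram_prob = {
    ("coffee", "ship"): 0.01,
    ("coffee", "shop"): 0.60,
    ("coffee", "snip"): 0.001,
}

def rank_candidates(prev_word, candidates):
    """Score each candidate as gesture fit times language-model likelihood."""
    scored = [
        (word, gesture_match[word] * bigram_prob.get((prev_word, word), 1e-6))
        for word in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(rank_candidates("coffee", ["ship", "shop", "snip"]))
# 'shop' wins: 0.92 * 0.60 = 0.552 dominates the other candidates
```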
LLMs without some sort of symbolic reasoning layer aren't actually able to hold a model of their context and its relationships. They predict the next token, but fall apart when you change the numbers in a problem or add some negation to the prompt.
Awesome for protein research, summarization, speech recognition, speech generation, deep fakes, spam creation, RAG document summary, brainstorming, content classification, etc. I don't even think we've found all the patterns they'd be great at predicting.
There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That's not from me, that's Noam Chomsky: https://youtu.be/axuGfh4UR9Q?t=9271.
There's a name for the phenomenon: the AI effect.
I make DNNs (deep neural networks), the current trend in artificial intelligence modeling, for a living.
Much of my ancillary work consists of deflating/tempering the C-suite's hype and expectations of what "AI" solutions can solve or completely automate.
DNN algorithms can be powerful tools and muses in scientific endeavors, engineering, creativity and innovation. They aren't full replacements for the power of the human mind.
I can safely say that many, if not most, of my peers in DNN programming and data science are humble in our approach to developing these systems for deployment.
If anything, studying this field has given me an even more profound respect for the billions of years of evolution required to display the power and subtleties of intelligence as we narrowly understand it in anthropological, neuroscientific, and/or historical frameworks.
Yup.
I don't know why. The people marketing it have absolutely no understanding of what they're selling.
Best part is that I get paid if it works as they expect it to and I get paid if I have to decommission or replace it. I'm not the one developing the AI that they're wasting money on, they just demanded I use it.
That's true software engineering folks. Decoupling doesn't just make it easier to program and reuse, it saves your job when you need to retire something later too.
> The people marketing it have absolutely no understanding of what they're selling.
Has it ever been any different? Like, I'm not in tech, I build signs for a living, and the people selling our signs have no idea what they're selling.
The worrying part is the implications of what they're claiming to sell. They're selling an imagined future in which there exists a class of sapient beings with no legal rights that corporations can freely enslave. How far that is from the reality of the tech doesn't matter, it's absolutely horrifying that this is something the ruling class wants enough to invest billions of dollars just for the chance of fantasizing about it.
Sounds about right. There are some valid and good use cases for "AI", but the majority is just buzzword marketing.
I have lots of uses for Attack Insects….
That's about right. I've been using LLMs to automate a lot of cruft work from my dev job daily; it's like having a knowledgeable intern who sometimes impresses you with their knowledge but needs a lot of guidance.
Watch out; I learned the hard way in an interview that I do this so much that I can no longer create Terraform and Ansible playbooks from scratch.
Even a basic API call from scratch was difficult to remember, and I'm sure I looked like a hack to them, since they treated me as such.
In addition, there have been studies released lately (not sure how well established, so take them with a grain of salt) indicating that using LLMs for dev work correlates with increased perceived efficiency/productivity, but with a strongly linked decrease in actual efficiency/productivity.
After some initial excitement, I've dialed back using them to zero, and my contributions have been on the increase. I think it just feels good to spitball, which translates to a heightened sense of excitement while working. But it's really much faster and more convenient to do the boring stuff with snippets and templates etc., if not as exciting. We've been doing pair programming lately with humans, and while that's slower and less efficient too, it seems to contribute to a rise in quality and fewer problems in code review later, while also providing the spitballing side. In a much better format too, I think, though I guess that's subjective.
I mean, interviews have always been hell for me (often with multiple rounds of leetcode) so there's nothing new there for me lol
What happened to Linus? He looks so old now...
He got old.
Not especially old, though; he looks like a 54yo dev. Reminds me of my uncles when they were 54yo devs.
I guess having 3 kids will do that to you.
[citation needed]/s
It's the excessive amount of aging that folks are reacting to, not just that he's old.
He's lost a lot of weight in 4 years so that's probably exacerbating the wtf.
he aged
Source?
If you find out what happened, let me know, because I think it's happening to me too.
He's 54 years old
Oxidative stress is a bitch
He has a real Michael McKean vibe
Wow, yeah that's a big difference from how I remember him
It's like he aged 10 years in the past 2 years... damn
I think when the hype dies down in a few years, we'll settle into a couple of useful applications for ML/AI, and a lot will be just thrown out.
I have no idea what will be kept and what will be tossed but I'm betting there will be more tossed than kept.
AI is very useful in medical sectors if coupled with human intervention. The very tedious work radiologists do to rule out normal imaging and its variants (which accounts for over 80% of cases) can be automated with AI. Many of the common presenting symptoms can be guided toward a diagnosis with some meticulous use of AI tools. Some BCIs, such as bioprostheses, can also benefit immensely from AI.
The key is that its work must be monitored by clinicians. And given how valuable patients' private information is, blindly feeding everything to an AI can have disastrous consequences.
Maybe in some places, but I just found this:
A marketplace where people can generate their jewellery ideas and then order them. It makes life way easier for goldsmiths and customers. I don't think AI will leave this project, for example.
I recently saw a video of AI designing an engine, and then simulating all the toolpaths to be able to export the G code for a CNC machine. I don't know how much of what I saw is smoke and mirrors, but even if that is a stretch goal it is quite significant.
An entire engine? That sounds like a marketing plot. But if you take smaller chunks, let's say the shape of a combustion chamber or of an intake or exhaust manifold: it's going to take white noise and just start pattern matching, monkeys-on-typewriters style, churning out horrible pieces through a simulator until it finds something that tests out as a viable component. It has a pretty good chance of turning out individual pieces that are either cheaper or more efficient than what we've dreamed up.
> and then simulating all the toolpaths to be able to export the G code for a CNC machine. I don't know how much of what I saw is smoke and mirrors, but even if that is a stretch goal it is quite significant.
<sarcasm> Damn, I ascended to become an AI and I didn't realise it. </sarcasm>
Snort might actually be a good real world application that stands to benefit from ML, so for security there's some sort of hopefulness.
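For instance (a hypothetical sketch with made-up connection features, not Snort's actual data model), the ML side could be as simple as an anomaly detector that flags weird connections for the humans or the rule engine to inspect:

```python
# Hypothetical sketch of ML-assisted network monitoring: train an anomaly
# detector on "normal" connection features, then flag outliers for a human
# (or a rule engine like Snort) to inspect. Feature layout is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend features per connection: [bytes_sent, duration_s, dest_port].
normal_traffic = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),   # typical payload sizes
    rng.normal(2.0, 0.5, 1_000),       # typical durations
    rng.choice([80, 443], 1_000),      # typical ports
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A suspicious connection: huge transfer, long-lived, odd port.
suspect = np.array([[5_000_000, 600.0, 4444]])
print(detector.predict(suspect))  # [-1] means "anomalous, take a look"
```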
The only time I've seen AI work well is for things like game development, mainly upscaling textures and filling in missing frames of older games so they can run at higher frame rates without being choppy. It might even have applications for getting more voice acting done, if SAG and Silicon Valley can find an arrangement that works out well for both parties.
If not for that, I'd say 10% reality is being... incredibly favorable to the tech bros.
^^
^^^
he isn't wrong
If anything he's being a bit generous.
I am thinking of deploying a RAG system to ingest all of Linus's emails, commit messages and pull request comments, and we will have a Linus chatbot.
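The retrieval half of that could be sketched in a few lines. The snippets below are invented stand-ins for the ingested corpus, and a real system would use learned embeddings rather than TF-IDF, but the shape is the same:

```python
# Sketch of the retrieval step in a RAG pipeline (hypothetical corpus):
# vectorize the documents, vectorize the question, pull the closest
# matches, and stuff them into the prompt of the generation model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # stand-ins for ingested emails / commit messages
    "Please don't send patches that break userspace.",
    "This pull request has whitespace damage, resend it.",
    "The scheduler change looks fine, applied to mainline.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(question, k=2):
    """Return the k corpus snippets most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

context = retrieve("what should I do about whitespace damage in my pull request?")
prompt = "Answer in Linus's voice, citing:\n" + "\n".join(context)
print(prompt)  # this prompt would then go to the generation model
```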
Hold on there Satan... let's be reasonable here.
I play around with the paid version of ChatGPT and I still don't have any practical use for it. It's just a toy at this point.
I used ChatGPT over the weekend to help look up some syntax in a niche scripting language, to cut down the time I spent working so I could get back to my weekend.
Then, yesterday, I spent time talking to a colleague who was familiar with the language to find the real syntax, because ChatGPT just made shit up and doesn't seem to have been accurate about any of the details I asked about.
Though it did help me realize that this whole time when I thought I was frying things, I was often actually steaming them, so I guess it balances out a bit?
I use shell_gpt with an OpenAI API key so that I don't have to pay a monthly fee for their web interface, which is way too expensive. I topped up my account with $5 back in March and I still haven't used it up. It's OK for getting very well-established info where doing a web search would be more exhausting than asking ChatGPT. But every time I try something more esoteric it will make up shit, like non-existent options for CLI tools.
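For reference, a minimal sketch of what tools like shell_gpt do under the hood, assuming the openai v1 Python SDK and an OPENAI_API_KEY in the environment (the model name is just an example):

```python
# Pay per request via the API instead of a flat subscription.
import sys
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any cheap chat model works here
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask(" ".join(sys.argv[1:]) or "Say hello."))
```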
ugh hallucinating commands is such a pain
It's useful for my firmware development, but it's a tool like any other. Pros and cons.
Like with any new technology. Remember the blockchain hype a few years back? Give it a few years and we will have a handful of areas where it makes sense and the rest of the hype will die off.
Everyone sane probably realizes this. No one knows for sure exactly where it will succeed so a lot of money and time is being spent on a 10% chance for a huge payout in case they guessed right.
There's an area where blockchain makes sense!?!
Git is a sort of proto-blockchain -- well, it's a ledger anyway. It is fairly useful. (Fucking opaque compared to subversion or other centralized systems that didn't have the ledger, but I digress...)
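A toy illustration of the ledger point (simplified: real git hashes whole commit objects, not strings like these):

```python
# Why git's history behaves like a ledger: every commit hashes its
# content *plus* its parent's hash, so editing any old entry changes
# every hash after it.
import hashlib

def commit(parent_hash: str, message: str) -> str:
    return hashlib.sha1(f"{parent_hash}:{message}".encode()).hexdigest()

h1 = commit("", "initial commit")
h2 = commit(h1, "add feature")
h3 = commit(h2, "fix bug")

# Tamper with the middle of the history and the tip no longer matches.
h2_tampered = commit(h1, "add backdoor")
h3_tampered = commit(h2_tampered, "fix bug")
print(h3 == h3_tampered)  # False: the chain exposes the rewrite
```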
Cryptocurrencies can be useful as currencies. Not very useful as investment though.
Yep, I know, AI should die someday.
I'm waiting for the part that it gets used for things that are not lazy, manipulative and dishonest. Until then, I'm sitting it out like Linus.
AI has been used for these things for decades, they are just in the background and not noticed by laypeople
Though the biggest issue is that when people say "AI" today, they mean specifically LLMs, but the world of AI is so much larger than that
This is where I'm at. The push right now has NFT pump-and-dump energy.
The moment someone says ai to me right now I auto disengage. When the dust settles, I'll look at it seriously.
No, AI is a very real thing... just not LLMs; those are pure marketing.
The latest LLMs get a perfect score on the South Korean SAT and can pass the bar exam. That's more than pure marketing if you ask me. It doesn't mean the 90% of businesses that claim AI are anything more than marketing, or more than a front end for GPT APIs. LLMs like Claude even check their work for hallucinations. Even if we limited all AI to LLMs, they would still be groundbreaking.
I feel like they snuck in a little square of reasonable terms with
Best practices, Optimization, Industry standard, Authenticate
But now that I’ve typed it, I’m scared that optimization and authenticate have gross business-speak definitions I just don’t know about yet.
Copilot by Microsoft is completely and utterly shit but they're already putting it into new PCs. Why?
Investors are saying they'll back out if there's no AI in the products, so tech leaders will all talk the talk and deal in AI.
Copilot+ PCs, though...
Just chiming in as another guy who works in AI who agrees with this assessment.
But it's a little bit worrisome that we all seem to think we're in the 10%.
Mr. Torvalds is truly a generous man; crediting the current AI market with 10% usefulness is probably a decimal place or two more than will end up panning out once the hype bubble pops.
And then people will complain about that saying it’s almost all hype and no substance.
Then that one tech bro will keep insisting that lemmy is being unfair to AI and there are so many good use cases.
No one is denying the 10% use cases, we just don’t think it’s special or needs extra attention since those use cases already had other possible algorithmic solutions.
Tech bros need to realize that even if there are some use cases for AI, there has not been any revolution. Stop trying to make it happen and enjoy your new, slightly better tool in silence.
Hi! It's me, the guy you discussed this with the other day! The guy that said Lemmy is full of AI wet blankets.
Omg you found me in another post. I’m not even mad; I do like how passionate you are about things.
Since there isn't any room for nuance on the Internet, my comment seemed to ruffle feathers. There are definitely some folks out there that act like ALL AI is worthless and LLMs specifically have no value. I provided a list of use cases that I use pretty frequently where it can add value. (Then folks started picking it apart with strawmen).
What you’re talking about is polarization and yeah, it’s a big issue.
This is a good example. I never used a strawman, nor did I disagree with the fact that it can be useful in some shape or form. I was trying to say its value is much, much lower than what people claim it to be.
But that's the issue with polarization: me saying there is much less value can be read as me saying there is absolutely zero, and I apologize for contributing to the polarization.
Nice replacement topic after the maintainer drama last week
I think the drama came from when the Russian forces started killing civilians 🤷
Not a company following the law.
Sucks to ~~suck~~ work for companies run by a wartime government.
Yea this is so blatant I'm not even going to click on that shit.
I agree with Mr. Torvalds
That's my usual feeling with Linus takes.
Well, I agree, but he could be nicer about it.
Seems generous, might be more like 5% reality.
Just like Furbys
That's probably true about all new technology that VCs throw billions at.
We lived through more than a decade of those decisions, back when borrowing money was cheap and VCs were investing in startups selling juice machines.
AI is nothing more than a way for big businesses to automate more work and fire more people.
and do that at the expense of 30+ years of power-reduction and efficiency gains, to the point that private companies are literally buying/building/restarting old power plants just to cover the insane power demand, because operating a power plant themselves is cheaper than paying the energy costs.
For the common everyday person, it's 3D TV and every other bullshit fad that burned brilliantly for all of 3 seconds before snuffing itself out, leaving people having paid for overpriced garbage that's no longer useful.
> AI is nothing more than a way for big businesses to automate more work and fire more people.
All technology in human history has done that. What are you proposing? Reject technology to keep people employed on inefficient tasks?
At some point people need to start thinking that it's better to end capitalism than to return to monke.
There was a great article in the Journal of Irreproducible Results years ago about the development of Artificial Stupidity (AS). I always do a mental translation to AS whenever I see AI.
Game devs are gonna have to use different language to describe what used to be simply called "enemy AI", where exactly zero machine learning is involved.
Logic and Path-finding?
CPU
Yeah, he's right. AI is mostly used by corps to enshittify their products just for extra profit.
I dunno about him; but genuinely I'm excited about AI. Blows my mind each passing day ;)
So basically just like Linux. Except Linux has no marketing... So 10% reality, and 90% uhhhhhhhhhh...
That says more about your ignorance than anything about AI or Linux.
> So basically just like Linux. Except Linux has no marketing
Except for the most popular OS on the Internet, of course.
Never heard of Android I guess?
You're aware Linux basically runs the Internet, right?
> You're aware Linux basically runs the ~~Internet~~ World, right?
Billions of devices run Linux. It is an amazing feat!
90% angry nerds fighting each other over what answer is “right”
In a way he’s right, but it depends! If you take even a common example like Chat GPT or the native object detection used in iPhone cameras, you’d see that there’s a lot of cool stuff already enabled by our current way of building these tools. The limitation right now, I think, is reacting to new information or scenarios which a model isn’t trained on, which is where all the current systems break. Humans do well in new scenarios based on their cognitive flexibility, and at least I am unaware of a good framework for instilling cognitive flexibility in machines.
I admit I understand nothing about AI and haven't used it in any way, nor do I plan to. It feels wrong to me, and I believe it might fuck us harder than social media ever could.
But the pictures it creates, the stories and conversations, don't seem like hot air. And I guess, compared to the internet, we are at the stage where the modem is still singing the songs of its people. There is more to come.
I heard it can code at a level where entry-level positions might be in danger of being swapped for AI. It detects cancer visually, and in China it recognizes people by the way they walk. I also fear that vulnerable people might fall for those conversation bots in a world where there is less and less personal contact.
Gotta admit I'm a little afraid it will make most of us useless in the future.
It makes somewhat passable mediocrity, very quickly, when used directly for such things. The stories it writes from the simplest of prompts are always shallow and full of cliché (and over-represented words like "delve"). To get it to write good prose basically requires breaking writing, the activity, down into its stream of constituent, tiny tasks and then treating the model like the machine it is. And this hack generalizes out to other tasks too, including writing code.

It isn't alive. It isn't even thinking. But if you treat these things as rigid robots getting specific work done, you can make them do real things. The problem is asking experts to do all of that labor to hyper-segment the work and micromanage the robot. Doing that is actually more work than just asking the expert to do the task themselves.

It is still a very rough tool. It will definitely not replace the intern just yet. At least my interns submit code changes that compile.
Don't worry, human toil isn't going anywhere. All of this stuff is super new and still comparatively useless. Right now, the early adopters are mostly remixing what has worked reliably. We have yet to see truly novel applications.

What you will see in the near future is lots of "enhanced" products that you can talk to, whether you want to or not. The human jobs lost to the first wave of AI automation will likely be in the call center. The important industries such as agriculture are already so hyper-automated that it will take an enormous investment to close the 2% left. Many, many industries will be that way, even after AI.

And for a slightly more cynical take: human labor will never go away, because having power over machines isn't the same as having power over other humans. We won't let computers make us all useless.
Thanks for easing my mind a little. You definitely did with respect to labor.
You also reminded me that I already had my first encounter with a call center AI, by Telekom, and it was just as useless as the human equivalent; they seem to get similar training!
I just hope it won't hinder or replace human connection on a larger scale, because in this sphere mediocrity might be enough, and we are already lacking there.
The small but present virtual-girlfriend culture in Japan really shocked me, and I feel we are not far away from things like AI-droid wives, for example.
He is correct. It is mostly people cashing in on stuff that isn't there.
"duh."
That makes sense. He's old enough and close enough thematically to have seen a few of these tech hype cycles.
It's basically like how self-improvement folks are using "quantum".
As a fervent AI enthusiast, I disagree.
...I'd say it's 97% hype and marketing.
It's crazy how much fud is flying around, and legitimately buries good open research. It's also crazy what these giant corporations are explicitly saying what they're going to do, and that anyone buys it. TSMC's allegedly calling Sam Altman a 'podcast bro' is spot on, and I'd add "manipulative vampire" to that.
Talk to any long-time resident of localllama and similar "local" AI communities who actually digs into this stuff, and you'll find immense skepticism; quite unlike the crypto-like AI bros you find on LinkedIn, Twitter and such, who blot everything out.
For real. Being a software engineer with basic knowledge in ML, I'm just sick of companies from every industry being so desperate to cling onto the hype train they're willing to label anything with AI, even if it has little or nothing to do with it, just to boost their stock value. I would be so uncomfortable being an employee having to do this.
For sure, it seems like 90% of AI startups are nothing more than front-end wrappers for a GPT instance.
As someone who was working really hard to get my company to use some classical ML (with very limited amounts of data), who has some knowledge of how AI works and just generally wants to do some cool math stuff at work, being asked incessantly to shove AI into any problem our execs think is a "good sell", and being pressured to think about how we can "use AI", was a terrible feeling. They now think my work is insufficient and have been tightening the noose on my team.
TSMC are probably making more money than anyone in this goldrush by selling the shovels and picks, so if that's their opinion, I feel people should listen...
There's little in the AI business plan other than hurling money at it and hoping job losses ensue.
TSMC doesn't really have official opinions, they take silicon orders for money and shrug happily. Being neutral is good for business.
Altman's scheme is just a whole other level of crazy though.
Seriously, I'd love to be enthusiastic about it because it's genuinely cool what you can do with math.
But the lies that are shoved in our faces are just so fucking much and so fucking egregious that it's pretty much impossible.
And on top of that LLMs are hugely overshadowing actual interesting approaches for funding.
I think we should indict Sam Altman on two sets of charges:
He's out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there's a good chance they won't be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?
So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he's telling the truth, he's endangering us all. If he's lying, then he's committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.
"When you're rich, they let you do it."
The saddest part is, this is going to cause yet another AI winter. The first few ones were caused by genuine over-enthusiasm but this one is purely fuelled by greed.
The AI ecosystem is flooded, we need a good bubble pop to slow down the massive waste of resources that our current info-remix-based-on-what-you-will-likely-react-positively-to shit-tier AI represents.
Agreed that’s why it’s so dangerous. These tech bros are going to do damage with their shitty products. It seems like it's Altman's goal, honestly.
He wants money/power, and he is getting it. The rest of the AI field will forever be haunted by his greed.
After getting my head around the basics of how LLMs work, I thought, "people rely on this for information?" The models seem OK for tasks like summarisation, though.
I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.
Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.
In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.
It's good for coding if you train it on your own code base. Not great for writing very complex code since the models tend to hallucinate, but it's great for common patterns, and straightforward questions specific to your code base that can be answered based on existing code (eg "how do I load a user's most recent order given their email address?")
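For example, the answer to that question might be a helper like this, where the schema (users, orders, created_at) is entirely hypothetical, made up just to illustrate the question above:

```python
# The kind of boilerplate an LLM tuned on your code base can fill in.
import sqlite3

def most_recent_order(conn: sqlite3.Connection, email: str):
    """Return the newest order row for the user with the given email."""
    return conn.execute(
        """
        SELECT orders.*
        FROM orders
        JOIN users ON users.id = orders.user_id
        WHERE users.email = ?
        ORDER BY orders.created_at DESC
        LIMIT 1
        """,
        (email,),
    ).fetchone()
```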
That, and retrieval, are the business use cases so far, and even then only where it's acceptable for the results to be wrong somewhat frequently.
Ya, it's like machine learning but better. That's about it IMO.
Edit: As I have to spell it out: as opposed to (machine learning with) neural networks.
I mean... it is machine learning.
It's selling the future, but nobody knows if we can actually get there
It's selling an anticompetitive dystopia. It's selling a Facebook monopoly vs selling the Fediverse.
We don't need 7 trillion dollars of datacenters burning the Earth; we need collaborative, open source innovation.
The first part is true... no one cares about the second part of your statement.
What's the source for that? It sounds hilarious
https://web.archive.org/web/20240930204245/https://www.nytimes.com/2024/09/25/business/openai-plan-electricity.html
Yep the current iteration is. But should we cross the threshold to full AGI… that’s either gonna be awesome or world ending. Not sure which.
Current LLMs cannot be AGI, no matter how big they are. The fundamental architecture just isn't right.
I know nothing about anything, but I unfoundedly believe we're still very far away from the computing power required for that. I think we still underestimate the power of biological brains.
What makes you think there's a threshold?