Might not be efficient, but at least it... Uhhh, wait, what good does it provide again?
it's really good at writing termination notices without making middle managers feel bad about letting their employees go.
We use about 20% of our caloric intake (at rest, not doing math) for our bio intelligence. Having superpowers of social organization is expensive and power hungry.
So it's really no surprise that the computation machines that can run AI require tens of megawatts to think.
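The "tens of megawatts vs. a brain" comparison can be sanity-checked with round numbers. A minimal sketch, assuming a 2000 kcal/day intake, the ~20% brain share quoted above, and a 20 MW datacenter (all assumed round figures, none measured):

```python
# Back-of-envelope: average power draw of a human brain vs. a datacenter.
KCAL_PER_DAY = 2000                  # assumed typical daily intake
SECONDS_PER_DAY = 24 * 3600

watts_body = KCAL_PER_DAY * 4184 / SECONDS_PER_DAY  # ~97 W average draw
watts_brain = 0.20 * watts_body                     # the ~20% quoted above

datacenter_watts = 20e6              # "tens of megawatts", taken as 20 MW
brains_equivalent = datacenter_watts / watts_brain

print(round(watts_brain))            # ~19 W per brain
print(round(brains_equivalent))      # roughly a million brains' worth
```

So one such datacenter draws on the order of a million brains' worth of power, which is the scale the rest of the thread argues about.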
"Pretend to think" at that lmao.
Yeah, it's nowhere near thinking. More like arranging things into a pattern.
yeah i mean ofc if you also put everyone in the world that that datacentre is serving in a human datacentre, I'm sure it'd also consume tons of power (in food)
"Or so I've heard"
Isn't it more like they're comparing all the hamburgers and everything else you have eaten since you were born?
That's what they're doing with AI energy usage, isn't it? I thought it included the training, which is where the greatest costs come from, versus just daily running.
No. "In practice, inference [which is to say, queries, not training] can account for up to 90% of the total energy consumed over a model’s lifecycle." Source.
1 gram of cocaine equals roughly 150 grams of CO2 emissions due to production and shipping etc., plus the effect wears off very quickly. Cocaine also destroys your nostrils; it's really, really bad. I would advise amphetamine instead. It can also be taken orally, for instance in the medicinal form of dexamphetamine. Another side effect is that you aren't hungry anymore, so you don't need the Twix. Just dexamphetamine and you are able to achieve your goals better, like becoming a dictator (like Hitler, he got daily shots of amphetamine) or invading France if you want (the German army had amphetamine pills which helped them advance into France day and night. The French assumed they would stop during the night to rest, but since they didn't, the French greatly miscalculated and were completely overrun. That's why you should use amphetamine, kids). It also really helps with ADHD to focus on things and think clearly.
Valid, but not the first two things that I'd come up with.
So, you are saying, I should mix my cocaine with twix bars for maximum efficiency? (Would still be stupid, but now more efficiently)??
Same stupid, more fun
The left Twix has cocaine in it. The right one does, too, but it comes from the Right Cocaine Twix factory.
I think if you add some Twix bars into the mix, you have good chances that it won't get worse because of it. The only logical choice is to go the Twix route.
To be fair a lot of people think they're intelligent and they really really aren't.
Especially if they've had cocaine.
I'm not trying to grade on potential but betting on human potential vs AI potential feels like it rewards ourselves for being better vs a machine. Would we have Albert Einstein if we didn't have Isaac Newton?
That's kind of a false dichotomy. They may be separate today, but there's no reason to believe we won't augment human minds with artificial neural networks in the future. Not in the magical cure-all, fix-all way techbros like to sell it, but for really boring and mundane things initially. Think replacing a small damaged part of some brain region, like the visual or auditory cortex, to repair functional deficiencies.

Once they get the basic technology worked out to be reliable, repeatable, and not require too much maintenance (cough, subscriptions and software licenses), there's no reason to believe we won't progress rapidly to other augmentations and improvements. A simple graphical interface for, like, a heads-up display, or a simple audio interface for direct communications both come to mind, but I'm sure our imaginations will be comically optimistic about some things and comically pessimistic about others.

All that to say that any true AI potential will be human potential in time. We won't stop at making super-intelligent AGI. We will want to BE super-intelligent AGI. Since we already know highly efficient and capable intelligence is possible (see yourself), it's only a matter of time until we make it ourselves, provided we don't kill ourselves somehow along the way.
Why do people keep telling me this?
See, the thing is, I watch piss porn. Hear me out. I told my friend that the thing is, to do piss porn, you kind of have to be into it. You could try and fake it, but it wouldn't be very convincing. So, my contention is, piss porn is more genuine than other types of porn, because the people partaking are statistically more likely to enjoy doing that type of porn. Which is great, I think, because then they really get into it, which is hot. It's that enjoyment that gets me off. Their enjoyment.
She said, "Krooklochurm, you're an idiot. Anyone can fake liking getting pissed in the face."
So I said, "Well, if you're so adamant, get in the tub and I'll piss in your mouth, and let's see if it's as easy as you claim."
So she said, "All right. If I can fist you in the ass afterwards."
Which I felt was a fair deal, so I took it.
My (formal) position was strengthened significantly by the former event. And I can also attest that I could not convincingly fake enjoying being ass-fisted.
What does that have to do with anything, you ask? Genuinity. The real deal. That's what.
I could see it. A lot of people don't like it, and it's not my personal thing, but sex can get messy, so it's pretty much whatever. Always makes me think of this though:
What the fuck did I just read
AI poison.
Fresh lemmy copypasta
Some lost green text post or the internet comment etiquette guy.
Plot twist, she really did like getting pissed on, and she knew it ahead of time. She was gaming you for that golden shower.
This is not to be confused with shit porn. Which is just not very good.
In other words, it is shit.
What a good piece of meal, thank you
and some of the most intelligent people are cast out from society because they don't fit the culture of arrogance.
R.I.P. Alan Turing...
And some of the most intelligent people ARE arrogant twits, unfortunately.
It's not artificial intelligence. A Large Language Model is not intelligent.
And yes, yes, scientifically LLMs belong there and whatnot. But what matters is what people expect.
That's the typical discrepancy between "definition of technical term" and "popular expectations evoked by term". The textbook example used to be "theory", but I guess AI is set to replace that job too...
Not to be pedantic, but the original use of the word intelligence in this context was “gathered digested information.”
Unfortunately, during the VC funding rounds for this, “intelligence” became the “thinky meat brain” type, and a marketing term associated with personhood, and the intense personalization along with it.
I completely agree that LLMs aren't intelligent. On the other hand, I'm not sure most of what we call intelligence in human behavior is any more intelligent than what LLMs do.
We are certainly capable of a class of intelligence that LLMs can't even approach, but most of us aren't using it most of the time. Even much (not all) of our boundary pushing science is just iterating algorithms that made the last discoveries.
On the other hand, I’m not sure most of what we call intelligence in human behavior is any more intelligent than what LLMs do.
Human intelligence is analog and predicated on a complex, constantly changing, highly circumstantial manifestation of consciousness rooted in brain chemistry.
Artificial Intelligence (a la LLMs) is digital and predicated on a single massive pre-compiled graph that seeks to approximate existing media from descriptive inputs.
The difference is comparable to the gulf between a body builder's quad muscle and a piston.
Wasn't there an article posted yesterday about a group trying to create a biological computer out of living cells, due to their efficient use of less power? (They are far from close; they basically took skin cells, ionized them, and had no idea yet how they were going to keep them alive long term.)
Even that won't be anywhere close to the efficiency of neurons.
And actual neurons are not comparable to transistors at all. For starters the behaviour is completely different, closer to more complex logic gates built from transistors, and they're multi-pathway, AND don't behave as binary as transistors do.
Which is why AI technology needs so much power. We're basically virtualising a badly understood version of our own brains. Think of it like, say, PlayStation 4 emulation: it's kinda working, but most details are unknown and therefore don't work well, or at best have a "close enough" approximation of behaviour, at the cost of more resource usage. And virtualisation will always be costly.
Or, I guess, a better example would be one of the many currently trending translation layers (e.g. SteamOS's Proton or macOS' Rosetta or whatever Microsoft was cooking for Windows for the same purpose, but also kinda FEX and Box86/Box64), versus virtual machines. The latter being an approximation of how AI relates to our brains (and by AI here I mean neural network based AI applications, not just LLMs).
There's already been some work on direct neural network creation to bypass the whole virtualization issue. Some people are working on basically an analog FPGA style silicon based neural network component you can just put in a SOM and integrate into existing PCB electronics. Rather than being traditional logic gates they directly implement the neural network functions in analog, making them much faster and more efficient. I forget what the technology is called but things like that seem like the future to me.
What's with all the AI hate? I use it for work and it significantly decreases my workload. I'm getting stuff done in minutes instead of hours. AI slop aside.
The massive corporate AI (LLMs for the most part) are driving up electricity and water usage, negatively impacting communities. They are creating a stock market bubble that will eventually burst. They are sucking up all the hardware, from GPUs to memory, to hard drives and SSDs.
On top of all of that they are in such a rush to expand that a lot of them are installing fossil fuel power on top of running the local grid ragged so they pollute, drive up costs, and all for a 45% average rate of incorrect results.
There are a lot of ethical problems too, but those are the direct negatives to tons of people.
If AI can do your job in minutes you're either: A fool pumping out AI slop someone else has to fix and you don't realize it.
Or
Doing a job that really shouldn't exist.
LLMs can't do more than shove out a watered down average of things it's seen before. It can't really solve problems, it can't think, all it can do is regurgitate what it's seen before. Not exactly conducive to quality.
The effect on the environment, and the fact that we know it will definitely lose whatever good it has, like TV/cable, the Internet, and any honest, useful invention that has been raped by the dark side of human culture throughout history.
Within the structure of ego driven society we live in I don't think we are capable of being a good species.
Would be cool if things were different, but I've never seen it not turn out bad.
People got roped into a media campaign spearheaded by copyright companies.
Hilarious to think nobody could notice how dogshit AI is without being handheld into it.
I hope analog hardware or some other trick will help us in the future to make at least local inference fast and low power.
Local inference isn't really the issue. Relatively low power hardware can already do passable tokens per sec on medium to large size models (40b to 270b). Of course it won't compare to an AWS Bedrock instance, but it is passable.
The reason why you won't get local AI systems - at least not completely - is due to the restrictive nature of the best models. Most actually good models are not open source. At best you'll get a locally runnable GGUF, but not open weights, meaning re-training potential is lost. Not to mention that most of the good and usable solutions tend to have complex interconnected systems so you're not just talking to an LLM but a series of models chained together.
But that doesn't mean that local (not hyperlocal, aka "always on your device" but local to your LAN) inference is impossible or hard. I have a £400 node running 3-4b models at lightning speed, at sub-100W (really sub-60W) power usage. For around £1500-2000 you can get a node that gets similar performance with 32-40b models. For about £4000, you can get a node that does the same with 120b models. Mind you I'm talking about lightning fast performance here, not passable.
For example of course
I think we're at a point where the hardware doesn't fit the algorithm being used, since it takes so much power due to our computers being digital. Having a transistor only capable of holding one of two states (0 V or 5 V, usually) is inefficient. The heat adds up as you multiply, especially with LLMs. There seems to be potential for analog, where a transistor acts more on a range of 0–5 V, which in theory could store more information or directly represent what LLMs run on (floating point). For context, 1 float tends to be 32 bits. 1 bit is 1 transistor, so 1 float = 32 transistors, while an analog transistor could be 1 float = 1 analog transistor.
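As a tiny illustration of the "1 float = 32 bits" point, here's a sketch (Python stdlib only) showing the raw 32-bit pattern a digital chip has to hold for a single float, where an analog cell would in principle store one value:

```python
import struct

def float_bits(x: float) -> str:
    """Return the IEEE-754 single-precision bit pattern of x (32 digits)."""
    # Pack to 4 big-endian bytes, reinterpret as an unsigned int, show bits.
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    return format(n, "032b")

bits = float_bits(1.0)
print(len(bits))  # 32 -> thirty-two one-bit storage elements per float
print(bits)       # 00111111100000000000000000000000 (sign, exponent, mantissa)
```

So each stored float costs 32 binary storage elements on a digital chip, which is exactly the multiplier the comment above is pointing at.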
You need a nuclear power plant not for a single AI, but for several million instances of it.
Don’t forget that you can run full OSS ChatGPT on a single Mac Mini.
AI is sick, and only silly people can deny that. Yes, the future is a little scary, but scary things should not stop us from progress.
Yes, we do have a lot of AI slop, but don't forget that AI can be an exceptionally good tool in the right hands, and its spheres of usage grow every month. I can't wait for generative tools for Gaussian splatting, for example, because it could take the game development process to another level.
I'd trade cocaine for massive amounts of caffeine!
How much have you got? I've got about 3kg of coffee.
I have about a gallon of liquid caffeine; it comes with a pump so you can add it to homemade soda one dose at a time.
I suspect you could do the same with coke....
Compare 1 human to one LLM session, instead of one human to all LLMs on Earth, and you’ll see that we’re way less efficient
Also, one human brain training takes feeding for years before it can do useful things.
Does that actually add up, though?
Google released stats recently that the median Gemini prompt consumes about 0.24 watt hours of electricity.
For humans performing knowledge based labor, how many prompts is that worth per hour? Let's say that the average knowledge worker is about as productive as one good prompt every 5 minutes, so 12 per hour or 96 per 8-hour workday.
Let's also generously assume that about 25% of the prompts' output are actually useful, and that the median is actually close to the mean (in real life, I would expect both to be significantly worse for the LLM, but let's go with those assumptions for now).
So on the one hand, we have a machine doing 384 prompts (75% of which are discarded), for 92 watt hours of energy, which works out to be 80 kilocalories.
On the other hand, we have a human doing 8 hours of knowledge work, probably burning about 500 kilocalories worth of energy during that sedentary shift.
You can probably see that, depending on the specific tasks, some classes of workers might be worth many, many LLM prompts, and some people might be worth more or less energy.
But if averages are within an order of magnitude, we should see that plenty of people are still more energy efficient than the computers. And plenty aren't.
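Putting the arithmetic above in one place (all numbers are the thread's stated assumptions, not measurements):

```python
WH_PER_PROMPT = 0.24      # Google's reported median for a Gemini prompt
USEFUL_PER_DAY = 96       # one good prompt every 5 minutes over 8 hours
USEFUL_FRACTION = 0.25    # generous: 1 in 4 prompt outputs is usable

total_prompts = USEFUL_PER_DAY / USEFUL_FRACTION   # 384 prompts fired
llm_wh = total_prompts * WH_PER_PROMPT             # ~92 Wh for the day
llm_kcal = llm_wh * 3600 / 4184                    # Wh -> joules -> kcal

human_kcal = 500                                   # sedentary 8-hour shift
print(round(llm_wh, 1), round(llm_kcal), round(human_kcal / llm_kcal, 1))
# -> 92.2 79 6.3
```

Under these assumptions the machine comes out roughly 6x cheaper in raw energy per day of "knowledge work", which is why the conclusion hinges so heavily on the assumed hit rate and prompts-per-hour figures.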
You completely forgot that workers only work one shift per day. If you account for the total energy required to do a project, and assume that a human alone would do X in 5 days, then wouldn't it be better to use prompting as well, which would theoretically, in this model, make X feasible in 2.5 days? Sure, the non-work calorie consumption of a human is inevitable, but strictly talking about productivity, you can make an individual 2x more productive for a lot less than their daily calorie consumption.
I however appreciate your analysis and the time you took to write this. It’s really nice to see a comparison. I would have imagined it consumed a lot less than that.
If you compare this to letting a desktop computer run though… it’s negligible. I know mine uses about 80W when idle
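For the idle-desktop comparison, a quick sketch using the 80 W idle figure above and the earlier per-prompt estimate (same assumed numbers as in that comment):

```python
IDLE_WATTS = 80            # the desktop's idle draw mentioned above
SHIFT_HOURS = 8

desktop_wh = IDLE_WATTS * SHIFT_HOURS     # 640 Wh just sitting idle
prompts_wh = 384 * 0.24                   # ~92 Wh for a full day of prompting

print(desktop_wh, round(prompts_wh, 1))   # 640 92.2
```

So a day's worth of prompting (under those assumptions) costs less than two hours of an idle desktop, which is the sense in which it's "negligible" here.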
All it really takes is the Limitless pill. The protagonist in the movie Limitless gets so much done after taking the Limitless pill in the movie Limitless. Limitless is a movie about a man who discovers the Limitless pill to help him accomplish almost limitless amount of things. But he learns that there is a limit that the Limitless pill can do if he takes it for too long. Because the Limitless pill isn’t really limitless.
Limitless = Adderall
I think it was called 'The Man Whose Brain Couldn't Slow Down'
It gets somewhat retconned in the TV series though where he's basically just this political genius who secretly runs everything.
The TV show is a lot better than it has any right to be, it's a shame it only got one season.
He learns the hard way that he needs to find a way to limit his dependency on the Limitless pill.
It's an efficient, if somewhat finicky intelligence. It checks out commander.
At this point, I feel like they actually excel at classifying people by political views, and all that data is no doubt coveted by spy agencies... call it a conspiracy, but it's my shot in the dark.
Natural intelligence would not consume Twix and cocaine.
Real Genius runs on cigarettes, coffee, and cheating on your cousin-wife.
How can we know if the AI is intelligence unless we can prove it is horny?