Every tech company rn
There's even rumours that the next version of Windows is going to inject a bunch of AI buzzword stuff into the operating system. Like, how is that going to make the user experience any more intuitive? Sounds like you're just going to have to fight an overconfident ChatGPT wannabe that thinks it knows what you want to do better than you do, every time you try opening a program or saving a document.
This is what pisses me off about the whole endeavour. We can't even get a fucking search algo right any more, so why the fuck do I want a machine blithely failing to do what it's told as it stumbles off a cliff?
It'll be like if they brought Clippy back, but this time he's even more of an asshole and now he can fuck up your OS too.
Windows Co-pilot just popped up on my Windows 11 machine. Its disclaimer said it could provide surprising results. I asked it what kind of surprising results I could expect, it responded that it wasn't comfortable talking about that subject and ended the conversation.
That's incredible. Certified "Directive #4" moment.
I'm so glad I use Linux at home and Mac at work.
They brought Cortana back in Halo Infinite and they're gonna bring Cortana back for Windows Infinite
I sure hope not. I still hate Cortana so much. I don't mind that she dumped me for an evil slime. I mind that she wouldn't stop calling me while I was busy killing her new boyfriend.
I agree that it's going to be every bit as awful as you say, but if it brings back Clippy, I'm down for it.
Google Bard, everyone.
It's sad to see it spit out text from the training set without any actual knowledge of the date and time. Like, it would be more awesome if it could call time.Now(), but that'll be a different story.
If you ask it today's date, it actually does that.
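FWIW, the model isn't "knowing" the date there — the frontend typically either injects it into the system prompt or exposes a tool the model can ask for. A minimal sketch of the tool-dispatch idea (the `CALL:` protocol and `TOOLS` registry here are hypothetical, not any vendor's actual API):

```python
from datetime import datetime, timezone

# Hypothetical tool registry: the model emits a tool name, the host runs it
# and pastes the result back into the conversation.
TOOLS = {
    "current_date": lambda: datetime.now(timezone.utc).strftime("%Y-%m-%d"),
}

def handle_model_output(output: str) -> str:
    """If the model asked for a tool, run it; otherwise pass the text through."""
    if output.startswith("CALL:"):
        tool_name = output.removeprefix("CALL:").strip()
        if tool_name in TOOLS:
            return TOOLS[tool_name]()
        return f"unknown tool: {tool_name}"
    return output

# The model itself never computes the date; the host program does.
print(handle_model_output("CALL: current_date"))
```

The point being: the "knowledge of today's date" lives entirely outside the language model.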
It just doesn't have any actual knowledge of what it's saying. I asked it a programming question as well, and each time it would make up a class that doesn't exist, I'd tell it it doesn't exist, and it would go "You are correct, that class was deprecated in {old version}". It wasn't. I checked. It knows what the excuses look like in the training data, and just apes them.
It spouts convincing sounding bullshit and hopes you don't call it out. It's actually surprisingly human in that regard.
Well, obviously a language model is trained on old data. Google has been web-scraping data to provide this!
None of it is even AI. Predicting desired text output isn't intelligence.
We never called if statements AI until the last year or so. It's all marketing buzz words. It has to be more than just "it makes a decision" to be AI, or else rivers would be AI because they "make a decision" on which path to take to the ocean based on which dirt is in the way.
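To illustrate how shallow pure text prediction can be, here's a toy bigram predictor (my own throwaway sketch, nothing like a real transformer) that "writes" by always emitting the most frequent next word from its training data:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count which word most often follows each word in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        following[cur][nxt] += 1
    return following

def predict(model: dict, start: str, length: int) -> str:
    """Greedily emit the most common next word -- no understanding involved."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat slept on the mat")
print(predict(model, "the", 4))  # parrots frequent patterns from the training data
```

It reproduces patterns it has seen and nothing else. Scale the same objective up by billions of parameters and the output gets a lot more convincing, but it's still next-token prediction underneath.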
Yeah, and highlighting that difference is what is important right now.
This is the first AI to masquerade as general artificial intelligence and people are getting confused.
This current thing doesn't have or need rights or ethics. It can't produce new intellectual property. It's not going to save Timmy when he falls into the well. We're going to need a new Timmy before all this is over
I do agree, but on the other hand...
What does your brain do while reading and writing, if not predict patterns in text that seem correct and relevant based on the data you have seen in the past?
I've seen this argument so many times and it makes zero sense to me. I don't think by predicting the next word, I think by imagining things both physical and metaphysical, basically running a world simulation in my head. I don't think "I just said predicting, what's the next likely word to come after it". That's not even remotely similar to how I think at all.
Inject personal biases :)
AI is whatever machines can't do yet.
Playing chess was the sign of AI, until a computer beat Kasparov; then it suddenly wasn't AI anymore. Then it was Go, then classifying images, then holding a conversation, but whenever each of these was achieved, it stopped being AI and became "machine learning" or "a model".
Machine learning is still AI. Specifically, it's a subset of AI.
Language is a method for encoding human thought. Mastery of language is mastery of human thought. The problem is, predictive text heuristics don't have mastery of language and they cannot predict desired output
I thought this was an insightful comment. Language is a kind of 'view' (in the model-view-controller sense) of intelligence. It signifies a thought or meme. But language is imprecise and flawed. It's a poor representation, since it can be misinterpreted or distorted. I wonder if language-based AIs are inherently flawed, too.
Edit: grammar, ironically
"Mastery of language is mastery of human thought." is easy to prove false.
The current batch of AIs is an excellent data point. These things are very good at language, and they still can't even count.
The average celebrity provides evidence that it is false. People who excel at science often suck at talking, and vice-versa.
We didn't talk our way to the moon.
Even when these LLMs master language, it's not evidence that they're doing any actual thinking, yet.
Always remember that it will only get better, never worse.
They said "computers will never do x" and now x is assumed.
There's a difference between "this is AI that could be better!" and "this could one day turn into AI."
Everyone is calling their algorithms AI because it's a buzzword that trends well.
It usually also gets worse while it gets better.
But I take your point. This stuff will continue to advance.
But the important argument today isn't over what it can be, it's an attempt to clarify for confused people.
While the current LLMs are an important and exciting step, they're also largely just a math trick, and they are not a sign that thinking machines are almost here.
Some people are being fooled into thinking general artificial intelligence has already arrived.
If we give these unthinking LLMs human rights today, we expand corporate control over us all.
These LLMs can't yet take a useful ethical stand, and so we need to not rely on them that way, if we don't want things to go really badly.
Depends on your definition of AI, and everyone's definition is different.
My cousin got a new TV and I was helping to set it up for him. During the setup, it had an option to enable AI-enhanced audio and visuals. Turning the AI audio on turned the decent, if maybe a little subpar, audio into an absolute garbage shitshow. It sounded like the audio was being passed through an "underwater" filter, then transmitted through a tin-can-and-string telephone. Idk who decided this feature was ready to be added to consumer products, but it was absolutely moronic.
Coupled with laying off a few thousand employees
We'll make it broken!
Unlike the previous bullshit they threw everywhere (3D screens, NFTs, metaverse), AI bullshit seems very likely to stay, as it is actually proving useful, if with questionable results... Or rather, questionable everything.
If only it were AI and not just LLMs, machine learning, or plain algorithms. But yeah, let's call everything AI from here on. NFTs could be useful if used as proof of ownership instead of expensive pictures, etc.
The NFT as ownership should really become the standard. Instead of having people "authorizing" yada yada, it's done completely by machine and is traceable.
No middlemen needed. Just I own x, this says I own x. I can sell you x, and you get ownership of x immediately. No "waiting 45 days to close" or "2 day transaction close" or even "title search verification." Too many middlemen benefitting from the current system to allow NFT to replace them though. That's the actual challenge.
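Whatever you think of NFTs, the transfer mechanism described above is basically an append-only ownership ledger. A toy in-memory sketch (my own illustration — `ToyLedger` is made up, no actual blockchain or consensus involved):

```python
class ToyLedger:
    """Append-only record of who owns what -- a stand-in for an on-chain ledger."""

    def __init__(self):
        self.owner = {}      # asset id -> current owner
        self.history = []    # permanent transfer log

    def mint(self, asset: str, owner: str) -> None:
        if asset in self.owner:
            raise ValueError(f"{asset} already exists")
        self.owner[asset] = owner
        self.history.append(("mint", asset, owner))

    def transfer(self, asset: str, seller: str, buyer: str) -> None:
        # The ledger itself enforces ownership; no title company needed.
        if self.owner.get(asset) != seller:
            raise ValueError(f"{seller} does not own {asset}")
        self.owner[asset] = buyer
        self.history.append(("transfer", asset, seller, buyer))

ledger = ToyLedger()
ledger.mint("deed-123", "alice")
ledger.transfer("deed-123", "alice", "bob")
print(ledger.owner["deed-123"])  # ownership changed the moment the entry landed
```

Of course, the hard part isn't the data structure — it's getting courts, registries, and everyone else to treat the ledger entry as the legally binding record.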
As a programmer and 3D artist, getting almost-instant art for reference and using ChatGPT to help me solve complex coding problems has sped up production significantly. There are even plugins that generate and texture 3D models for you now, which means I can do way more by myself.
God it’s exhausting. Okay, I’ll buy a 3d television if that’s what I have to do, let’s bring that back instead. Please?
If you take out the AI part it still holds true. 2023 is full of bullshit.
The year of enshittification.
I'm bookmarking this for the next time my supervisor plugs ChatGPT.
I had a manager tell me some stuff was being scanned by AI for one of my projects.
No, you are having it scanned by a regular program to generate keyword clouds that can be used to pull it up when humans type their stupidly-worded questions into our search. It’s not fucking AI. Stop saying everything that happens on a computer that you don’t understand is fucking AI!
I’m just so over it. But at least they aren’t trying to convince us chatGPT is useful (it definitely wouldn’t be for what they would try to use it for)
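For anyone curious, the "regular program" can really be this mundane — a toy keyword-cloud extractor (my own sketch, not the actual product's code):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "if", "of", "to", "and", "in", "on"}

def keyword_cloud(doc: str, top_n: int = 5) -> list[str]:
    """Rank the most frequent non-stopword terms -- plain search indexing, not AI."""
    words = re.findall(r"[a-z]+", doc.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

doc = "Reset the printer driver. If the printer driver fails, reinstall the driver."
print(keyword_cloud(doc, 3))
```

Word counting and stopword filtering — techniques that predate "AI" branding by decades.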
Snapchat AI. My friends don't want it, they can't block it, and it has been proven to lie about certain things, like whether it has your location.
What companies are you people working for?
We are being asked not to use AI.
Ain't gotta use it to sell it or slap AI stickers on top of whatever products you're selling
Pretty much this. Just another buzzword. Three months from now it will be something else the media doesn’t understand to spam the public with.
I’m predicting … rubs crystal ball and nipples … it’s going to be some kind of cybernetic brain-interface thing. Haven’t heard about those in a while. Or maybe nanobots that kill cancer or fix the paint scratches on your car.
Larger companies have been working fast to sandbox the models used by their employees. Once they're safe from spilling data, they go all in. I'm currently on a platform team enabling generative AI capabilities at my company.
Not surprising for North Korea
More Ads and tracking systems, Now With AI!
Commercial...
This makes me think I should stay in IT infrastructure and not move to a developer position.
You didn’t hear? AI can read your mind and create the stuff you desire from all the data they collected on you. Yeah, it might be racist but it’s 100% shit.
I'm actually pleasantly surprised on what ChatGPT can generate for me. It doesn't usually take care of the detailed parts, but like I was able to have it spin up an android application skeleton that I could throw a couple of actions on I needed to test something with.
I've seen it generate very useful YAML and such. I still have to do a fair amount of work to make it behave how I need, but I really enjoy the ability to skip the filler bullshit in my work.
We can thank the open-source Git repos they've been stealing from to train their models!
It's not just every tech company, it's every company. And it's terrifying — like giving people who don't know how to ride a bike a 1000hp motorcycle! The industry does not have guardrails in place, and the public mindset of "ChatGPT can do it" without any thought of checking the output is horrifying.
Basically the internet.
We really need some sort of regulation to prevent AI from catastrophically fucking up everything lol