People with less AI literacy often see the technology as ‘magical’ and awe-inspiring.
The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.
Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
I am a system admin and one of our appliances is an HPE Alletra. The AI in it is awesome, and it never tries to interact with me. This is what I want. Just do your fucking job, AI; I don't want you to pretend to be a person.
Even using LLMs isn't an issue; they're just another tool. I've been messing around with local models, and while you certainly have to use them knowing their limitations, they can help with certain things, even if it's just parsing data or rephrasing text.
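For the curious, the kind of "local stuff" I mean is trivial to wire up. Here's a minimal sketch, assuming an Ollama server running on its default port (11434) with a model already pulled; the model name and prompt are just placeholders:

```python
# Sketch: ask a locally hosted LLM (via Ollama's HTTP API, assumed to be
# running on localhost:11434 with a model already pulled) to rephrase text.
import json
import urllib.request

def rephrase(text: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": f"Rephrase the following sentence more formally:\n{text}",
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(rephrase("gotta check them logs before we ship this thing"))
```

Point being: it's a plain HTTP call to a tool running on your own box, no cloud account or vendor lock-in involved.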
The issue with neural nets is that while they can in theory approximate almost anything, in practice they can't actually do everything well.
And it's the same with a lot of tools like this: people don't understand the limitations or flaws, and corporations want to use it to replace workers.
There are also the tech bros who believe creative works can be generated entirely by AI because, like AI, they don't understand art or storytelling.
But we also have others who don't understand what AI is or how broad the field is, thinking it's only LLMs and other neural nets churning out garbage.
How exactly is this a surprise to anyone when the same applied to crypto and NFTs? AI and blockchain technologies are useful to experts in narrow niches so far, but that's not the typical tech-savvy user. For the end user it's just a toy with few use cases.
i think we give silicon valley too much linguistic power. there should really be more pushback on them rebranding LLMs as AI. it’s just a bunch of marketing nonsense that we’re letting them get away with.
(i know that LLMs are studied in the field of computer science that’s known as artificial intelligence, but i really don’t think that subtlety is properly communicated to the general public.)
I actually think in this case it's the opposite: your expectations of the term "AI" don't match actual research and industry usage. Now, if we want to talk about what people have been trying to pass off as "AGI"...
i think that’s a fair point. language does work both ways, and i’m certainly not in the majority with this opinion. but what bothers me is that it feels like they’re changing the definition of the word and piggybacking off of its old meaning. i know this kind of thing isn’t all that uncommon, but it still rubs me the wrong way.
there should really be more pushback on them rebranding LLMs as AI.
That's because the target of the language is the know-nothing speculative investor class. The distinction doesn't matter to us because we're not being sold a service, we're being packaged as a product.
The increasingly-impossible-to-opt-out-of nature of LLMs/AI illustrates as much. We're getting force-fed a "free" service that's fundamentally worse than what came before it, because it's an extractive service.
What form of AI are we talking about? Most of the ones exposed to the public are glorified toys with shady business models, while tools like AlphaFold are genuinely useful.
I suspect it's really more of a Dunning-Kruger situation. When you know nothing, you're down to use it for everything. When you start to understand the problems, the limits, and the morality of it, you back off some. And as you approach the ability to host it yourself and do actual work with it, you fully welcome the useful bits into your workflow.
Yeah, if you're a pro in something, most of the time it only tells you what you already know. (I sometimes use it as a sort of sanity check, by writing prompts where I think I already know what the output will be.)
That tracks for sure. The most enthusiastic guys at work also happen to be the ones who put in the least actual work. Sure, it has some uses… but the things it gets wrong are significant enough that no sane individual should rely on anything that AI is involved with making/running. The intelligence part just isn’t there yet. People are effectively getting wowed by a glorified ELIZA chat bot.
the things it gets wrong are significant enough that no sane individual should rely on anything that AI is involved with making/running
The fundamental use cases for AI are almost never customer-oriented, either. You don't see these tools deployed to reduce wait times or improve authentication or approve access, because the people who deploy them don't actually trust them with positive-scope client interactions. What you see them doing is robo-calls, front-line customer service, claims denials, and (in the bleakest use cases) military targeting operations: instances where the efficiencies of scale accrue to the operator while errors and problems rebound onto the target of the service rather than the vendor.
People are effectively getting wowed by a glorified ELIZA chat bot.
An ELIZA chatbot that double-processes your credit card and then keeps denying you a refund when you manually catch and report it.
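For anyone who never looked under the hood: the original ELIZA was basically regex matching plus pronoun swapping. A toy version, just to show how shallow the trick is (these rules are made up for illustration, not Weizenbaum's actual DOCTOR script):

```python
# Toy ELIZA-style responder: a handful of regex rules plus pronoun
# reflection. The rules below are illustrative, not the 1966 original.
import re
import random

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    # Swap first/second person so the echo reads as a reply
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I need a refund"))  # e.g. "Why do you need a refund?"
```

That's the entire mechanism. Today's models are vastly more capable, but the "wow, it talks like a person" reaction is the same one ELIZA got in 1966.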