Not all AI is bad, just most of it
cross-posted from: https://slrpnk.net/post/12723593
With respect to the original-original poster, this is wrong. AI plant identification is terrible. It gives you confidence but not enough nuance to know that there are similar plants, some of which look almost but not quite identical, some of which will provide some really nice sustenance, some of which will literally kill you and it'll hurt the whole time you're dying.
It's almost as bad as those AI-written foraging guides that give you enough information to feel confident but not enough to tell toxic or even deadly plants apart from the edible ones.
Word to the wise.
If it looks like a carrot, don't touch it, don't dig it up, and especially don't eat it. There are tons of plants in the same family that look nearly identical or extremely similar and will give you an extremely bad day, month, year, or death.
analytical AI is great
generative AI is cancer
Even analytical AI needs to be questioned and validated before use.
I've seen a similar thing, where a machine learning model started associating rulers with cancer because the images of known cancers it was fed almost always also had a ruler in them to provide scale for measuring the size of the tumor.
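That failure mode (shortcut learning on a spurious correlation) is easy to reproduce with a toy example. A minimal sketch, assuming a made-up dataset where a "ruler present" flag happens to track the label during training but not at deployment:

```python
# Toy illustration (not the actual study): a classifier latching onto a
# spurious "ruler present" feature instead of the real medical signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

label = rng.integers(0, 2, n)              # 1 = cancer, 0 = benign
texture = label + rng.normal(0, 2.0, n)    # the real signal: noisy and hard to learn

# Spurious signal: in training, a ruler appears in ~95% of cancer images.
ruler_train = np.where(rng.random(n) < 0.95, label, 1 - label)

X_train = np.column_stack([texture, ruler_train])
clf = LogisticRegression().fit(X_train, label)

# At deployment the ruler no longer tracks the label.
ruler_test = rng.integers(0, 2, n)
X_test = np.column_stack([texture, ruler_test])
print("train accuracy:", clf.score(X_train, label))  # looks great
print("deploy accuracy:", clf.score(X_test, label))  # a lot worse once the shortcut disappears
```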
It's like those geoguesser AIs that don't guess the country by the plants, streets, or houses, but by the camera angle and imperfections that only occur in pictures taken in that country.
"When a measure becomes a target, it ceases to be a good measure"
This kind of wacky absolutism is part of the problem. Esp since "AI" doesn't even exist.
https://www.downtoearth.org.in/governance/lavender-wheres-daddy-and-the-ethics-of-ai-driven-war
Plant ID is soooo disappointing - works sometimes though.
Always gotta run the ID, web search for images of the recommendation, compare images to plant.
Semantic search can be helpful (rough sketch below).
Guess the OP image could be about, e.g., Perplexity repeatedly HAMMERING (no caching?) the beautiful open web and slopping out poor syntheses.
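For the curious, here's roughly what that looks like. A minimal sketch, assuming the sentence-transformers package; the model name and documents are just placeholders:

```python
# Rank documents by meaning (embedding similarity) rather than keyword overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Queen Anne's lace looks a lot like poison hemlock.",
    "Wild carrot roots are edible when young.",
    "Water hemlock is among the most toxic plants in North America.",
]
query = "which lookalike plant can kill you"

doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]
for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The useful part is the ranking by meaning; it still shouldn't be the thing that decides whether you eat the plant.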
If you ever are an executive and you need to explain a product idea you made up and don't want to bother with actual proof of concept, then AI has got you.
If you want custom porn, AI has got you.
These are its two competing functions. And judging by my last foray into what AI is about, the latter is winning hard.
Definitely do not use AI or AI-written guidebooks to differentiate edible mushrooms from poisonous mushrooms
Honestly don’t use any guide book or advice. If you aren’t 100 percent sure on your own maybe just walk away.
Me personally… even if I was 99.999 percent sure, it still wouldn't be worth the risk. I'll just buy some mushrooms.
People are saying you shouldn't use AI to identify edible mushrooms, which is absolutely correct, but remember that people forage fruits and greens too. Plants are deadly poisonous at a higher rate than mushrooms, so plant ID AI has the potential to be more deadly too.
And then there's the issue that these ID models are very America and/or Europe centric, and will fail miserably most of the time outside of those contexts. And if they do successfully ID a plant, they won't provide information about it being a noxious invasive in the habitat of the user.
Like essentially all AI, even when it works it's barely useful at the most surface level only. When it doesn't work, which is often, it's actively detrimental.
I actually think AI for mushroom identification is okay, but as a step in the process. Sometimes you see a mushroom and you're like "what is that?" Do a little scan to see what it is. Okay, now you have an idea of what it is, but then comes the next part! https://mushroomexpert.com/ there you can go through the list and see if you get a positive ID.
Like, if you're not 100% positive you know what you're foraging, why would you take the risk?
rage against the machine learning
Good nerdcore band name
Just dealt with an AI bot this morning when I called a law office. They try so hard to mimic humans; they even added background people-talking sounds. But it gave itself away 100% when it repeated the same response to my asking to speak with a human: "I will gladly pass on your message to (insert weird pause) 'Bill'"
There was an interesting story on NPR last week about someone experimenting with AI agent clones of himself. Even his best attempt sounded pretty obvious thanks to stuff like that.
They even add fake keyboard typing sounds now 💀
It’s a decent evolution of the search engine, but you have to ask it for sources and it’s way too expensive for its use case.
decent
You've misspelled "descent"
Is it?
I've found so many fucking errors in AI summaries that I don't trust shit from AI searches when a direct link to a source or wiki could give me better summarized info.
I guess it's an evolution, but I'm really hoping these mutations prove inferior and it dies off already. But capitalism won't have that with their sunk cost fallacy driven insistence that I just use the inferior product.
That’s why I said you have to ask for the source. Its summary isn’t good, but the fact that you can describe something to it in human language instead of focusing on keywords to start your search, and that the sources it gives aren’t just ads yet, is useful.
Idk, when I Google Lensed a Sunflower plant, the AI told me it was a Peruvian ground apple....
It also has a lot of trouble identifying lambs quarters and other common wild weeds.
Probably because Google Lens is made to be all-purpose. If you had a model that had been specifically trained to recognize plants, it wouldn't make such obvious mistakes.
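"Specifically trained" here just means taking a general pretrained vision model and fine-tuning it on plant photos only. A rough sketch, assuming a hypothetical plants/<species>/*.jpg folder and PyTorch/torchvision:

```python
# Fine-tune a general-purpose image model so it only has to tell plants apart.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical folder layout: plants/<species_name>/*.jpg
data = datasets.ImageFolder("plants", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                  # freeze the general-purpose backbone
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new plant-only head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:               # one pass for illustration
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

Even then, the earlier point stands: a confident label from a model is not the same as a safe ID.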
Honestly good rule about Machine Learning is just "predicting = good, generating = bad." Rest are case by case but usually bad.
Predict inflation in 3 years - cool
Predict chance of cancer - cool.
Generate image or mail or summary or tech article - fuck you.
Generating speech from text/image is also cool but it's kind of a special case there.
It didn’t mention the star finder apps!
Save yourself the money and time. It’s Venus. That cool star you’re looking at? Yeah that’s Venus. Just trust me.
You don't need AI to show a star map. This is the one and only use for Augmented Reality though.
Yeah I actually paid for the full version of mine… even though it’s always Venus
genAI is the enemy. other kinds are useful.
To be fair, some of the memes are bangers as well.
I've yet to see a good one.
I think memes are okay too. Memes are usually recycled jokes anyway.
Friendly reminder that automatic transmissions were sometimes considered to be artificial intelligence.
“Fuck business idiots waxing poetic about the inestimable value of LLMs” isn’t a good community name though.
Automatic transmissions aren't trying to take your creative job.
That's known as image identification with ML though, not "AI". The difference? Capitalism.
The difference is that plant identification is no longer an interesting area for AI research. It was "AI" 10 years ago, but now it's more or less a solved problem.
Primarily, it's not interesting financially and therefore for marketing.
They're all known as "apps" because that's all they are. Like Angry Birds if we told people the piggies were "AI".
AI noise reduction and spot removal tools for photo editing get a pass too.
There is no "AI".
But there is endless technology that grifters label as "AI". Ofc some of this technology will be useful. But under capitalism all technology is developed by and for the benefit of the disgustingly privileged via violent control.
Honestly I like when it writes for me too. I tend to be very blunt and concise in my messaging and AI just puts that corporate shine and bubbliness on my messages that everyone seems to feel is important.
+1 for WhoBird
One of the issues with LLMs is that they attracted all the attention. Classifiers are generally cool, cheap, and saved us from multiple issues (ok, face recognition aside 🙂)
When the AI bubble bursts (because LLMs are expensive and not good enough to replace a person, even if they're good at pretending to be a person), all AI will slow down… including classifiers, NLP, etc.
All this because the AI community was obsessed with the Turing test/imitation game 🙄
Turing was a genius, but heck if I'm not upset with him for coming up with this BS 🤣
It made sense in the context it was devised in. Back then we thought the way to build an AI was to build something that was capable of reasoning about the world.
The notion that there'd be this massive amount of text, generated by a significant percentage of the world's population typing their thoughts into networked computers for a few decades, coupled with the digitisation of every book ever written, and that it could all be stitched together into a 1,000,000,000,000-byte model that just spits out the word with the highest chance of coming next based on what everyone else has written, producing the illusion of intelligence (a toy version of that loop is sketched below), would have been very difficult for him to predict.
Remember, Moore's Law wasn't coined for another 15 years, and personal computers didn't even exist as a sci-fi concept until later still.
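A toy version of that "highest-chance next word" loop, with a bigram model standing in for the trillion-byte one:

```python
# Pick whichever word most often followed the current one in the training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

word = "the"
output = [word]
for _ in range(6):
    if word not in following:
        break
    word = following[word].most_common(1)[0][0]  # most likely next word
    output.append(word)

print(" ".join(output))  # prints something like "the cat sat on the cat sat"
```

Scale the counts up to most of the written internet and the output starts to look like thinking, which is exactly the illusion described above.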
I dunno about that. We got a pile of architecture research out of it just waiting for some more tests/implementations.
And think of how cheap renting compute will be! It’s already basically subsidized, but imagine when all those A100s/H100s are dumped.