And you can use multiple models, which I find handy.
There is some stuff that AI, or rather LLM search, is useful for, at least for the time being.
Sometimes you need some information that would require clicking through a lot of sources just to find one that has what you need. With DDG, I can ask the same question to all four of their models*, using four different Firefox containers, and copy and paste the answers.
Then I see how their answers align, and identify keywords from their responses that help me craft a precise search query to find the obscure primary source I need.
This is especially useful when you don't know the subject you're searching about very well.
*ChatGPT, Claude, Llama, and Mixtral are the available models. Relatively recent versions, but you'll have to check for yourself which ones.
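If you'd rather script this than juggle browser containers: DDG's chat has no public API as far as I know, so here's a rough sketch of the same compare-the-models trick against a generic OpenAI-compatible endpoint instead. The base URL and model names are placeholders you'd swap for whatever provider you actually use.

```python
# Ask the same question to several models and eyeball where they agree.
# The endpoint and model names are placeholders (DDG's chat has no public
# API); any OpenAI-compatible provider slots in here.
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_KEY")
models = ["gpt-4o-mini", "claude-3-haiku", "llama-3-70b", "mixtral-8x7b"]
question = "Who first described ...?"  # your actual question

for m in models:
    resp = client.chat.completions.create(
        model=m,
        messages=[{"role": "user", "content": question}],
    )
    print(f"--- {m} ---\n{resp.choices[0].message.content}\n")
# Terms that show up in all four answers become the keywords for a
# precise conventional search for the primary source.
```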
At least it's citing sources, so you can check to make sure. From my anecdotal evidence it has been pretty good so far. It has also told me on some occasions that the queried information was not found in its sources, instead of just making something up. It's not perfect for sure, and it's always better to do manual research, but for a first impression and for finding some entry points I've found it useful so far.
ChatGPT-4o can do some impressive and useful things. Here, I'm just sending it a mediocre photo of a product with no other context; I didn't type a question. First, it's identifying the subject, a drink can. Then it's identifying the language used. Then it's assuming I want to know about the product, so it's translating the text without being asked, because it knows I only read English. Then it's providing background and also explaining what tamarind is and how it tastes. This is enough for me to make a fully informed decision. Google Translate would require me to type the text in, and then would only translate without giving other useful info.
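For the curious, roughly the same trick works over the API. A minimal sketch with the openai Python client, assuming a local file can.jpg and an OPENAI_API_KEY in your environment; the ChatGPT app wraps the model in its own system prompt, so answers will differ a bit:

```python
# Send a bare photo, no question, and let the model infer intent.
# Assumes a file can.jpg and OPENAI_API_KEY set in the environment.
import base64
from openai import OpenAI

client = OpenAI()

with open("can.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [{
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        }],
    }],
)
print(resp.choices[0].message.content)
```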
It goes without saying that this shit doesn't really understand what it's outputting; it's picking words together and parsing a grammatically coherent whole, with barely any regard to semantics (meaning).
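To make that concrete, here's a toy sketch of the mechanism, using GPT-2 via the transformers library as a stand-in for any LLM: the model only ever scores "which token tends to come next", one token at a time.

```python
# Minimal sketch of "picking words together": an autoregressive model
# scores every possible next token, we append the most likely one, and
# repeat. GPT-2 stands in for any LLM here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The best graphics card for gaming is", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits[0, -1]      # scores for every vocab token
    next_id = torch.argmax(logits)         # greedy: take the most likely
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
# At no point did the model check whether the continuation is true; it
# only ever asked "what text tends to follow this text?"
```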
It should not be trying to provide you info directly, it should be showing you where to find it. For example, linking this or this*.
To add insult to injury, in this case it isn't even providing you info, it's bossing you around. Typical Microsoft "don't inform a user, tell it [yes, "it"] what it should be doing" mindset. Especially bad in this case because cost vs. benefit varies a fair bit depending on where you are; often there's no single "right" answer.
*OP, check those two links, they might be useful for you.
LLMs don't "understand" anything, and it's unfortunate that we've taken to using language related to human thinking to talk about software. It's all data processing and models.
Yup, 100% this. And there's a crowd of muppets arguing "ackshyually wut u're definishun of unrurrstandin/intellijanse?" or "but hyumans do...", but come on - that's bullshit, and more often than not sealioning.
Don't get me wrong - model-based data processing is still useful in quite a few situations. But they're only a fraction of what big tech pretends that LLMs are useful for.
At the very least it failed in a way that's obvious, by giving you contradictory statements. If it left you with only the wrong statements, that's when "AI" becomes really insidious.
That’s a good summary. Google Gemini is no better. Type in a question and it starts off great but then devolves into other brands, other steps to do something that isn’t related to the thing you asked. It’s just terrible and someone will sue them over it next year. Just wait.
Hello, fellow humans. I too am human, just like you! I have skin, and blood, and guts inside me, which is not at all disgusting. Just another day of human!
Won't you share a delicious cup of motor oil lemonade with me? It's nice and refrigerated, so it will cool down our bodies without the use of cooling fans!
However we too can use cooling fans. They will just be placed on the ceiling, or in a box, or self standing, and oscillating. Not at all inside our bodies, connected to a board controlled by our CPUs that we clearly don't have!
Now come, let us take our colored paper with numbers and pictures of previous human rulers and exchange them for human food prepared by not fully adult humans who haven't matured to the age where their brains develop the ability to care about food sanitation. Then we shall complain that our meal cost too many paper dollars, while receiving less and less potato stick products every year. Ignoring completely the risk of heart disease by indulging in the amounts of food we desire to acquire.
Finally we shall retreat to our place of residence, and complain on the internet that our elected leaders are performing poorly. Rather than terminate the program (vote the poorly performing humans out), we shall instead complain that it is other humans' fault for voting them in. Making no attempt to change our broken system that has been broken our entire existence, with no signs of improving. Instead every 4 years we will make an effort to write down names of people we've already complained about, in the hopes that enough people write down the same names, and that will fix the problem.
Oh. Shall I request amazon.com to purchase more fans and cooling units? The news is reporting that temperatures will soon reach 130F on a regular basis, and all humans will slowly perish.
Shall I share photographs of the new CEO of Starbucks, whose daily commute involves a personal jet aircraft, which surely isn't compounding the problem at all?
Yeah AI can be wonky, but what idiot would spend a shitload of money on a graphics card without even being willing to click an article and read a bit?
It's on you if you do that. Even if the AI shit worked way better, why would you trust that there aren't shady things happening to influence the AI and have you spend more money?
My dad falls into this category: he constantly replaces things with way worse things just because they are new. I can't get my head around how he can replace really good working stuff with new junk that isn't even capable of doing the job.
Jokes aside (and this whole AI search results thing is a joke) this seems like an artifact of sampling and tokenization.
I wouldn't be surprised if the Gemini tokens for XTX are "XT" and "X" or something like that, so it's got quite a chance of mixing them up after it writes out XT. Add in sampling (literally randomizing the token outputs a little), and I'm surprised it gets any of it right.
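You can see both effects cheaply. Gemini's tokenizer isn't public, so as a stand-in here's tiktoken's cl100k_base vocabulary (the exact split of "XTX" below is illustrative, not Gemini's), plus a toy temperature-sampling step with made-up scores showing how a near-miss token can win:

```python
# Two effects: product names fragment into several tokens, and sampling
# gives near-miss tokens a real chance. cl100k_base is a stand-in here;
# Gemini's actual tokenizer is not public.
import numpy as np
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for name in ["RX 7900 XTX", "RX 7900 XT"]:
    pieces = [enc.decode_single_token_bytes(t) for t in enc.encode(name)]
    print(name, "->", pieces)  # the two names share most of their tokens

# Temperature sampling over made-up logits: even when "XTX" scores
# highest, a nonzero temperature sometimes picks "XT" instead.
logits = {"XTX": 2.0, "XT": 1.5, "GRE": 0.5}
temp = 0.8
probs = np.exp(np.array(list(logits.values())) / temp)
probs /= probs.sum()
print(dict(zip(logits, probs.round(3))))
print(np.random.choice(list(logits), p=probs))
```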
Assuming it takes its answer from search results, and the search results are all affiliate marketing sites that just want you to click on a link and buy something, this makes perfect sense.