7.44 + 1.3 ≠ 8.23
Quick math is the one thing computers were always good at. Not anymore, apparently.
Luckily, China and India have both dropped below 2 children per couple now.
Not to support Google, but this really is a bad use of AI. AI is meant to do research and find logical answers to questions.
So it's "normal" that an AI is bad at pure calculation: it tries to find the most probable answer, whereas math operations aren't option A with 20% and option B with 80%, but pure, linear logic.
So in a way Google (and others) are really dumb to let pure AI do the whole job rather than fetching and organizing the work with proper tools (in this case it should look up the world population in 2015 and 2025, then hand the subtraction to a regular calculator-like program).
AI is great (sometimes), but the overhype gets it used in every domain rather than just the useful ones...
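For what it's worth, the "hand it to a calculator" step is trivial. A quick sketch using the population figures quoted in this thread (7.44 billion in 2015, 8.23 billion in 2025, taken from the screenshot, not independently verified):

```python
# Toy version of "fetch the numbers, then let real code do the arithmetic".
# Figures are the ones quoted in this thread, not verified.
pop_2015 = 7.44  # billions, 2015
pop_2025 = 8.23  # billions, 2025

increase = pop_2025 - pop_2015
print(f"Increase: {increase:.2f} billion")  # 0.79, nowhere near the ~1.3 claimed
```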
I don't see how you're supporting Google here. Yes it's a bad use of AI. Google is the one that chose to make their search engine respond with AI slop.
Yeah, that was just to protect myself from the people who only read the first two words, and since we're in a fuck AI community I'd get downvoted instantly...
It's almost like it's more accurate and efficient to do a basic computation than to calculate the "average" written response over the entire internet...
“Approximately” doing some Olympian weightlifting
I think it's trying to say that 1.3 billion people were born between 2015 and 2025. Poor word choice though
They literally developed this function by averaging over the entire internet for the best word choice.
Yeah it increased by that much! Decreased a little too, but that's totally unrelated.
Especially during that 2020 thing.
APPROXIMATELY... Come on, guys! Don't hurt the clanker's feelfeels, it's gonna go Skynet on us because of some fucking internet troll.
Predictive language models are bad at this because they are not actually parsing meaning from the text. They just output patterns they have seen before in training data, based on your inputs. The patterns are complex and the training data is often immense enough that it has seen just about any kind of pattern plenty of times. That is often good enough to get sensible output, but not always.
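A cartoonishly simplified illustration of the point (real models are vastly more sophisticated than a lookup table, but the failure mode is the same): answer with the most frequent continuation seen in training, with zero understanding of arithmetic.

```python
# Caricature of pattern completion: return the most common continuation
# from "training data". No arithmetic is ever performed.
from collections import Counter

training_data = [
    ("2 + 2 =", "4"),
    ("2 + 2 =", "4"),
    ("2 + 2 =", "22"),          # a bad example in the corpus still casts a vote
    ("7.44 + 1.3 =", "8.23"),   # a wrong answer someone wrote online
]

def predict(prompt):
    seen = Counter(ans for p, ans in training_data if p == prompt)
    return seen.most_common(1)[0][0] if seen else "?"

print(predict("2 + 2 ="))        # "4" — frequent patterns usually work out
print(predict("7.44 + 1.3 ="))   # "8.23" — the corpus was wrong, so is the "model"
```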
There are models that handle this better through a few different strategies. One is a team of specialized models: a router model categorizes the prompt and the generated data, then hands it off to other models specifically trained on that kind of data, or even just a basic, dumb calculator in cases like this, to parse it and produce results. A final model then organizes all of that output into one cohesive answer.
Alternatively, you can have a series of models that successively break the prompt and generated data down into finer details and steps, so that instead of guessing at math problems like this, the system literally "shows its work", so to speak, applying step-by-step arithmetic instead of just guessing with "good enough" language modeling.
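The "shows its work" flow, reduced to a toy function (my own illustration, not how any particular product implements it), using the sum from the screenshot:

```python
# "Show your work": emit explicit steps and compute exactly, instead of
# predicting the final number in one shot.
from decimal import Decimal

def add_step_by_step(a, b):
    a, b = Decimal(a), Decimal(b)  # exact decimal arithmetic, no float fuzz
    total = a + b
    steps = [
        f"line up the decimals: {a} + {b}",
        f"add digit by digit: {a} + {b} = {total}",
    ]
    return steps, total

steps, total = add_step_by_step("7.44", "1.3")
for s in steps:
    print(s)
print(f"7.44 + 1.3 = {total}")  # 8.74 — not 8.23
```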