They've ruined a near-perfect calculator
I bet we could save a few gajillion watts if we just put that through an if statement. If math, then just be a calculator. Else, make some bullshit up.
Apparently that's the new way to do math in AI. The AI works out you're trying to do math, tries to write some Python code to do the math, runs the Python code, gets the answer, and writes a response around the numeric answer.
I can't think of any possible issues with this; it's infallible. /s
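The routing idea in the comment above can be sketched in a few lines. This is a toy, not any real product's pipeline: `call_llm` is a hypothetical placeholder, and the "math detector" is just a safe arithmetic evaluator built on Python's `ast` module.

```python
import ast
import operator

# Toy router: if the prompt parses as plain arithmetic, compute it
# deterministically; otherwise hand it to the language model.
# call_llm() is a hypothetical stand-in, not a real API.

OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression, or return None if it isn't one."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not arithmetic")
    try:
        return walk(ast.parse(expr, mode="eval"))
    except (ValueError, SyntaxError):
        return None

def answer(prompt):
    result = safe_eval(prompt)
    if result is not None:
        return str(result)      # "If math, then just be a calculator"
    return call_llm(prompt)     # "Else, make some bullshit up"

def call_llm(prompt):
    return "some bullshit"      # placeholder for the model call

print(answer("2+3*4"))  # 14
```

The real tool-use pipelines are fuzzier than this, of course: the model itself decides when to emit code, which is exactly where it can go wrong.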
they've finally invented a machine that can't run DOOM
AI: YOU PICKED UP A SHOT GUN. WHAT DO YOU DO NOW?
Me: I left click and fire it at the enemy.
AI: bleep bloop YOU ARE LIKELY AN IDIOT. NOW YOU ARE DEAD.
It can in fact pretend to run DOOM.
AI does not handle anything as words. It handles everything as tokens. https://blogs.nvidia.com/blog/ai-tokens-explained/
Layman's terms:
If you want to ask a question to an LLM, you need to go down to an arcade and exchange your quarters for tokens. Then you can feed those tokens into your computer every time you want to ask a question.
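Slightly less layman: a tokenizer chops text into pieces from a fixed vocabulary and maps each piece to an integer ID. Here's a toy greedy longest-match tokenizer to show the shape of it; real tokenizers (e.g. BPE) learn their vocabularies from data, and this hand-made vocab is purely illustrative.

```python
# Toy tokenizer: greedily match the longest vocabulary entry at each
# position and emit its integer ID. The vocabulary here is made up.

VOCAB = {"un": 0, "believ": 1, "able": 2, "token": 3, "s": 4, " ": 5}

def tokenize(text):
    ids, i = [], 0
    while i < len(text):
        # try the longest possible piece first, shrinking until one matches
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in VOCAB:
                ids.append(VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

print(tokenize("unbelievable tokens"))  # [0, 1, 2, 5, 3, 4]
```

The model never sees "unbelievable"; it sees `[0, 1, 2]`, which is part of why digit-by-digit arithmetic is awkward for it.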
AI
The pure embodiment of "Fake it til you make it"
"Fake it over and over even if you never make it."
I feel like a really intelligent AI would recognize the context of the text it gets.
Like, if you or I heard someone saying a bunch of numbers we’d know to switch into math mode and start mathing.
But these things just dawdle along and start repeating nursery rhymes or whatever because they’re not smart enough to know what they’re doing.
Kinda surprised wolfram alpha hasn't done more in the ai space
Yeah, LLMs aren't AI. They're just a fancy Markov model... You need controllers on top to decide when you want to make sentences and when you need to do something else. A controller could be an LLM, but an LLM by itself is just a tool, not a system.
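For the "fancy Markov model" framing, here's the un-fancy version: a word-level Markov chain that learns which word follows which and samples from that. LLMs condition on far longer contexts via learned attention, so this is a loose analogy, but the generate-one-token-at-a-time loop is the same basic shape.

```python
import random
from collections import defaultdict

# Word-level Markov chain: record which word follows which, then sample.

def train(text):
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = train("the cat sat on the mat the cat ran")
print(generate(chain, "the"))
```

Plausible-looking word salad with no model of meaning underneath, which is the commenter's point.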
ChatGPT managed to see through my shenanigans:
If John has six apples and Bob had six oranges, and Bob gave John 2 apples, how many oranges does Betty have if it's a summer day in January?
Classic trick question! Let’s break it down:
John has 6 apples.
Bob has 6 oranges.
Bob gives John 2 apples — but wait, Bob didn’t have any apples, only oranges.
So Bob can’t give John 2 apples.
Meanwhile, Betty hasn’t even been mentioned until the last sentence, and there’s no info about her oranges.
Also, “a summer day in January” only makes sense in the southern hemisphere.
Conclusion: We still have no idea how many oranges Betty has — the question is nonsense on purpose.
So the answer is either: “Indeterminate”, “Nonsense question”, or “Depends on how much Betty likes oranges in the summer.”
I think the original message is true for older versions of GPT, though, and AI being thrust into everything has produced a lot of the errors I've seen.
Why can't Bob give John 2 apples?
The restriction is only implied: the model presumes Bob had no apples just because none were mentioned. Maybe Bob already had some. Bad AI. Lol
“Depends on how much Betty likes oranges in the summer.”
It did come up with a pretty accurately human and sassy response to the original question.
The HAL 9000 series has never made a mistake. Except that one time.
I agree with that point of view, but at the same time it's mostly false. AI solves problems that can't be solved by a dumb calculator. Sure, AI isn't the best at pure calculation, but that's not its main goal (unless it's explicitly designed for it).
In my opinion you're partially right. Yeah, AI doesn't solve any new problems by itself; humans can already do it. AI is just humans but faster and more efficient (though this creates other serious problems, like employment and more...).
But by that logic the computer is the same, solving no new problems. Before computers, "computer" was a job title for humans, who did the work of today's iron computers. Those machines didn't solve new problems either, just did the work faster, while creating other important social issues.
AI is a shit show because of how terribly it's being implemented, and math is the prime example.
the other wonderful irony?
(basically) the only thing a computer can do is math.
so it's doing a SHITLOAD of math, to do a terrible job, at doing some very basic math.
bravo!
Yup. Look up the calculus and linear algebra that neural networks use to train. It's an insane amount of calculations. So many calculations that it requires hundreds of processing units to crunch at a reasonable speed. All that to get simple math questions wrong.
All that to hallucinate every response in ways that make people feel like it knows what it's talking about.
Which it doesn't, and LLMs never will - unless someone programs in some responses, which then defeats the entire point.
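For a minimal taste of that training math: gradient descent on a single weight, fitting y = 3x from a few examples. The "calculus" is just the chain rule (backpropagation applies it layer by layer); real networks do the same thing across millions of weights as huge matrix multiplications.

```python
# Fit y = 3x by gradient descent on one weight w, minimizing
# mean squared error 0.5*(w*x - y)^2. Its gradient w.r.t. w is (w*x - y)*x.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # samples of y = 3x
w = 0.0       # the one weight we're learning
lr = 0.02     # learning rate

for step in range(200):
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to 3.0
```

Scale that loop up by ten or eleven orders of magnitude and you get the "hundreds of processing units" bill.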
They should just use lookup tables
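Tongue-in-cheek, but a lookup table really is how you'd make small-integer arithmetic bulletproof: precompute once, and every query is an O(1) read with zero chance of hallucination.

```python
# Precomputed addition table for small non-negative integers.
TABLE = {(a, b): a + b for a in range(100) for b in range(100)}

def add(a, b):
    return TABLE[(a, b)]

print(add(2, 2))  # 4, every single time
```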
Not only that: it's also super inefficient, orders of magnitude more compute-intensive than a plain calculator.