Yup. Look up the calculus and linear algebra that neural networks use to train. It's an insane amount of calculation. So many calculations that it takes hundreds of processing units to crunch them at a reasonable speed. All that to get simple math questions wrong.
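For anyone curious what that "insane amount of calculation" actually looks like: it's basically matrix multiplication (linear algebra) plus the chain rule (calculus), repeated billions of times. A toy sketch of one gradient-descent step on a tiny two-layer network (all sizes and names here are made up for illustration):

```python
import numpy as np

# Toy illustration: one gradient-descent step for a tiny 2-layer network.
# Real models do this with billions of parameters, millions of times over.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))        # 4 samples, 3 features
y = rng.normal(size=(4, 1))        # targets
W1 = rng.normal(size=(3, 5))       # layer 1 weights
W2 = rng.normal(size=(5, 1))       # layer 2 weights

# Forward pass: linear algebra (matrix products) plus a nonlinearity.
h = np.tanh(X @ W1)
pred = h @ W2
loss = ((pred - y) ** 2).mean()

# Backward pass: calculus (the chain rule), again all matrix products.
d_pred = 2 * (pred - y) / len(y)
dW2 = h.T @ d_pred
dh = d_pred @ W2.T
dW1 = X.T @ (dh * (1 - h ** 2))    # tanh'(z) = 1 - tanh(z)^2

# One tiny step downhill.
lr = 0.01
W1 -= lr * dW1
W2 -= lr * dW2
```

Now scale those matrices up to thousands of dimensions and you see why it needs a warehouse of GPUs.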
Apparently that's the new way to do math in AI. The AI works out that you're trying to do math, writes some Python code to do the math, runs the Python code, gets the answer, and writes a response around the numeric answer.
I can't think of any possible issues with this; it's infallible. /s
If you want to ask a question to an LLM, you need to go down to an arcade and exchange your quarters for tokens. Then you can feed those tokens into your computer every time you want to ask a question.
Yeah, LLMs aren't AI. They're just a fancy Markov model... You need controllers on top to decide when you want to make sentences and when you need to do something else. A controller could be an LLM, but an LLM by itself is just a tool, not a system.
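For the "fancy Markov model" point: a plain word-level Markov chain already has the same sampling loop as an LLM (predict the next token from context, append it, repeat); an LLM just conditions on a huge context instead of one word. A toy sketch:

```python
import random
from collections import defaultdict

# Toy word-level Markov model: the next word depends only on the current
# word. An LLM's big context window and learned weights make it far
# "fancier", but the generation loop has the same shape:
# predict next token, append, repeat.
def train(text: str) -> dict:
    model = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model: dict, start: str, length: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)
```

Notice there's no "decide what to do" step anywhere in that loop; that's the controller the comment above says has to sit on top.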
I agree with that point of view, but at the same time it's mostly false.
AI solves problems that can't be solved by a dumb calculator.
Sure, AI isn't the best at pure calculation, but that's not its main goal (unless it's explicitly designed for it).
In my opinion you're partially right.
Yeah, AI doesn't solve any problems by itself; humans can already solve them.
AI is just humans but faster and more efficient (and sure, this creates other serious problems, like unemployment and more...).
But by that logic the same goes for the computer: it solves no new problems either. Before electronic computers, "computer" was a job title for humans, who did the same work today's machines do.
They didn't solve new problems either, just solved them faster, while creating other serious social issues.