An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates. Experts say the problem is bigger than that.
As much as I agree with the sentiment, and as much as I despise the current state of tech and LLMs, software and tech in general are very brittle, riddled with problems and human mistakes (a "bug" is just a made-up word that allows displacement of responsibility).
I think yes. Look at how long they've been trying to cram voice assistants down our throats. There's no point at which they'll say, "No, I don't think these are ready yet, let's pull them back."
It is clear to all that a bubble has been growing.
If you're insisting the bubble will never burst, then there has to eventually be an actual use case for this that makes back the billions they're investing, no? What's that use case? A Copilot subscription?
I mean, I still don't use them, so...? And knowing they're infected with AI, I wouldn't use them for anything other than the simplest, statistically-improbable-to-get-wrong tasks.