I wouldn't say that. It's a tool like anything else. You don't say a hammer is useless because it's bad at driving screws, no matter how much your terrible coworker insists she just hits the screws in with it and it's fine. I learned programming very quickly with ChatGPT, and I still use LLMs all the time for help with coding. They're also good for proofreading, learning new languages, and a few other things. The hype is exaggerated, but these things are quite useful when used correctly.
Username checks out! :D Yeah, it has some narrow use cases, which isn't the worst thing ever. If it weren't destroying the entire tech industry, and to some extent the global economy, with utter lies and deceit, it might even be kind of okay sometimes.
Well, for once I have to stand up for Apple. What makes them different in the AI space is that the inference actually happens on-device and is very privacy-focused. Probably why it sucks.
Nailed it. I've tried feeding it notification contexts to get a feel for how hard the task actually is. Their foundation model (AFM) is, I think, a 4-bit quantized, 3-billion-parameter model.
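Quick back-of-envelope on why that even fits on a phone (my own ballpark numbers, not Apple's published specs):

```python
# Rough weight footprint of a ~3B-parameter model at 4-bit quantization.
# Ballpark only: ignores KV cache, activations, and runtime overhead.
params = 3_000_000_000      # ~3 billion parameters
bits_per_param = 4          # 4-bit quantized weights
weight_bytes = params * bits_per_param / 8
print(f"~{weight_bytes / 1e9:.1f} GB of weights")  # prints ~1.5 GB
```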
So I loaded up Llama, Phi, and picoLLM to run some unscientific tests, and honestly the results were way better than I expected. Phi and Llama both handled notification summaries great (I modeled the context window myself, nothing official). I have no idea wtf AFM is doing, but it's awful.
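If anyone wants to poke at the same thing, my setup was roughly like this, here with llama-cpp-python and a quantized Phi GGUF. The model filename, the prompt, and the fake notifications are all mine, nothing to do with Apple's actual pipeline:

```python
# Unscientific notification-summary test against a small local model.
# Uses llama-cpp-python; the model file and prompt are my own stand-ins,
# not whatever Apple's AFM pipeline actually does.
from llama_cpp import Llama

llm = Llama(model_path="phi-3-mini-4k-instruct-q4.gguf", n_ctx=2048, verbose=False)

# A mocked-up notification thread (made up, roughly what I fed each model).
notifications = [
    "Mom: Dinner moved to 7pm, bring the salad",
    "Mom: Also grandma is coming",
    "Mom: Never mind, 6:30 after all",
]

prompt = (
    "Summarize these notifications in one short sentence:\n"
    + "\n".join(notifications)
    + "\nSummary:"
)

out = llm(prompt, max_tokens=40, temperature=0.2)
print(out["choices"][0]["text"].strip())
```

Swap the GGUF for a Llama 3.2 3B quant to compare like with like; both handled this kind of thread fine for me.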
It sucks for a lot of reasons, but mostly because AI is always a “black box” (DeepSeek being the exception) running “magic proprietary code”. You think “Tim Apple” isn’t working with the Trump admin to ID people for El Salvador?
That’s the problem. It wasn’t quick. If it had been released quickly and been a failure, that would be one thing. But to hype it and hype it and pre-sell it into new devices for 9 months only THEN to release a failure… now that’s fucked up. Apple hardware has been crushing it for years. Software is a mess. Services couldn’t piss themselves if their pants were on fire.