I stumbled upon the Gemini page by accident, so I figured, let's give it a try.
I asked him in Czech if he could also generate pictures. He said sure and gave me examples of what to ask.
So I asked him, again in Czech, to generate a cat drinking a beer at a party.
His reply was that features for some languages were still under development and that he couldn't do that in that language.
So I asked him in English.
He replied: "I can't create images for you yet, but I can still find images from the web."
OK, so I asked if he could find me the picture on the web, then.
His answer: "I'm sorry, but I can't provide images of a cat drinking beer. Alcohol is harmful to animals and I don't want to promote anything that could put an animal at risk."
Great, now I have to argue with my search engine, which is giving me lessons on morality and deciding what is and isn't acceptable. I told him to get bent, that this was the worst first impression I've ever had with any LLM, and that I'm never using that shit again. And this is what's integrated into Google Search (which I haven't used for years, having stuck with Kagi) and what now replaces Google Assistant...
Good, that's what people get for sticking with Google. It brings me joy to see Google dig its own grave with such success.
Wow, that is pretty damning. I hope Google adds all this functionality back in as Gemini replaces Assistant, but it's Google, so I guess they won't. I replaced Assistant with Gemini a while back, but I only use it for super basic stuff like setting timers, so I didn't realise it was this bad.
They did the same shit with Google Now: rolled it into Assistant, but it was nowhere near as useful, IMO. Now we get yet another downgrade, swapping Assistant for Gemini.
As I like to say, there's nobody Google hates more than the people who love and use their products.
Such great products. Now we get... image generation, inpainting, and a conversational AI. All technically impressive, but those older products were actually functional and solved everyday problems.
When Google asked if I wanted to try Gemini, I gave it a shot. The first time I asked it to navigate home, something I use Assistant for almost daily, it said it couldn't access that feature but we could chat about navigating home instead. Fuck that!
Even though I switched back to Assistant, it's still getting dumber and losing functionality. Yesterday I asked it to add something to my grocery list (in Keep) and it put it on the wrong list, told me the list I wanted didn't exist, then asked if I wanted to create the list, and then told me it couldn't create it because it already exists.
I was thinking about this while posting and absolutely agree.
It just boils down to how enshittification is defined, though, and using AI for everything, even where it doesn't fit or degrades a service, might well qualify too.
So an interesting thing about this is that the reasons Gemini sucks are... kind of entirely unrelated to LLM stuff. It's just a terrible assistant.
And I get the overlap there; it's probably hard to keep an LLM reined in enough to let it have access to a bunch of the stuff that Assistant did, maybe. But still, why Gemini is unable to take notes seems entirely unrelated to any AI crap; that's probably the top thing a chatbot should be great at. In fact, for tasks like that, which just amount to integrating a set of actions in an app, the LLM should just be the text parser. Assistant was already doing enough machine-learning work to handle text commands; nothing there is fundamentally different.
So yeah, I'm confused by how much Gemini sucks at things that have nothing to do with its chatbotty stuff, and if Google is going to start phasing out Assistant I sure hope they fix those parts at least. I use Assistant for note taking almost exclusively (because frankly, who cares about interacting with your phone using voice for anything else, barring perhaps a quick search). Gemini has one job and zero reasons why it can't do it. And it still really can't do it.
LLMs on their own are not a viable replacement for assistants, because you need a working assistant core to integrate with other services. An LLM layer on top of the assistant, for better handling of natural-language prompts, is what I imagined would happen. What Gemini is doing seems ridiculous, but I guess that's Google developing multiple competing products again.
1. Convert voice to text.
2. Pre-parse the text against the library of exact voice commands. If any match, execute them, pass the confirmation along, and jump to step 6.
3. If no valid commands matched, pass the text to the LLM.
4. Have the LLM, heavily trained on the commands and the API output for them, respond; if no command applies, fall back to other responses.
5. Check the response for API outputs, handle them appropriately and send the confirmation forward; otherwise pass the output on.
6. Convert to voice.
The LLM part obviously also needs all kinds of sanitization on both sides, like they do now, but exact commands should preempt the LLM entirely if you're insisting on using one; a rough sketch of that dispatch step follows below.
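To make step 2 concrete, here's a minimal Python sketch of the dispatch logic. Everything in it is hypothetical: COMMAND_LIBRARY, handle_utterance, and ask_llm are made-up names standing in for whatever action handlers and model call a real assistant stack would provide.

    import re
    from typing import Callable

    # Hypothetical library of exact voice commands: regex pattern -> handler
    # that performs the action and returns a confirmation string.
    COMMAND_LIBRARY: dict[str, Callable[[re.Match], str]] = {
        r"^set (?:a )?timer for (?P<minutes>\d+) minutes?$":
            lambda m: f"Timer set for {m.group('minutes')} minutes.",
        r"^add (?P<item>.+) to my (?P<list>.+) list$":
            lambda m: f"Added {m.group('item')} to your {m.group('list')} list.",
    }

    def ask_llm(text: str) -> str:
        """Placeholder for the LLM fallback (steps 3-5); not implemented here."""
        raise NotImplementedError

    def handle_utterance(text: str) -> str:
        """Step 2: try exact commands first; fall back to the LLM only if none match."""
        normalized = text.strip().lower()
        for pattern, handler in COMMAND_LIBRARY.items():
            match = re.match(pattern, normalized)
            if match:
                # Exact command matched: run it and jump straight to
                # text-to-speech (step 6), skipping the LLM entirely.
                return handler(match)
        # No exact command matched: hand the raw text to the LLM.
        return ask_llm(text)

    print(handle_utterance("Add milk to my grocery list"))
    # -> Added milk to your grocery list.

The point of this ordering is that the deterministic command path can never regress when the model changes: core actions like timers and lists stay reliable, and the LLM only ever sees utterances nothing else could handle.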
It is a replacement for a specific portion of a very complicated ecosystem-wide integration involving a ton of interoperability sandwiched between the natural language bits. Why this is a new product and not an Assistant overhaul is anybody's guess. Some blend of complicated technical issues and corporate politics, I bet.
The best part is if you have Google Home/Nest products throughout your house and initiate a voice request: your phone uses Gemini to answer, the nearest speaker or display uses Assistant to answer, and they frequently hear each other and take that as further input (having a stupid "conversation" with each other). With Assistant as the default on a phone, the system knows which individual device should reply via proximity detection, and you get a sane outcome. This happened at a friend's house while I was visiting; they were frustrated until I had them switch their phone's default voice assistant back to Assistant and set up a home-screen shortcut to the web-app version of Gemini in lieu of the native Gemini app (because the native app doesn't work unless you agree to set Gemini as the default and disable Assistant).
Missing features aside, the whole experience would feel way less schizophrenic if enabling Gemini on your phone also enabled it on each smart device in the household ecosystem via Home. Google (via what they tell journalists writing articles on the subject) acts like it's a processing-power issue with existing Home/Nest devices, and the implication until very recently was that new hardware would need to roll out. That's BS, given that very little of Gemini's functionality is processed on-device, and given that they've now said they'll begin retroactively rolling out a beta of Gemini to older hardware in fall/winter. Google simply hasn't felt like taking the time to write and push a code update to existing Home/Nest devices for a more cohesive experience.
They should have just merged the two products: instead of coming up with Gemini, they could have added LLM features to Assistant. On a Samsung phone you now have Bixby, Assistant, and Gemini, lol.
It's the normal corporate lifecycle. Founders build it up. Workers expand it. Suits take over to monetize everything. A private equity firm squeezes the last life out of it.
I tried it out for a while, and yes, it really is as bad as the article implies. I gave it a fair chance for a few weeks and then went back to the old Assistant (a task with which Gemini was also completely unable to help me, at one point even gaslighting me and saying I wasn't using Gemini).
It's kind of crazy to think about, but it seems like Google is just somehow really terrible at AI.