Honestly, that's a smart thing for AI companies to do. AI is surprisingly decent at extrapolating from existing codebases, but it's useless at starting from scratch. If one model says "I can't do that, Dave" and another spits out garbage, you're getting the same amount of useful code out of both and a much better signal-to-noise ratio from the first.
No, it's not smart. I pay for Cursor to generate code, not to patronize me. I would stop paying for it and switch to something that at least tries different ideas to get my feature working.
(Also, it's definitely not useless at starting from scratch; you just need a strong design and a good understanding of what's possible.)
What does it help you with? I can definitely see having snippets or "modular" code on hand as useful, but you don't really need an LLM for that. What sets it apart? Is there a big time commitment to get to the desired outputs, or does it just do what you want right away?
imo it's the opposite: AI is good at starting projects by giving you boilerplate code, but bad at considering the full context of an existing project. Better to handle the larger structural stuff yourself and only give the LLM self-contained tasks.
Did they train this one on redditors too? Next it's gonna talk to a lawyer and hit up the gym. Maybe we'll get lucky and Skynet will get confused and delete all of Facebook.