'occasionally incorrect' like any other AI out there?
ChatGPT and Gemini have persistent memory, meaning they can remember conversations across different chats. Copilot can remember certain things, too. DeepSeek’s memory, on the other hand, is limited to each chat, meaning you need to repeat yourself to continue talking about topics from previous ones.
I don't think this is on by default for Gemini or Copilot either, and it could probably be implemented easily for DeepSeek (but they likely have better things to do than market it as a selling point).
DeepSeek might be somewhat clunky, but it eventually gets the job done. However, I don’t recommend relying on its text extraction and document analysis for anything mission-critical because it sometimes misunderstands files, just like ChatGPT and Gemini can. Make sure you double-check its responses for anything important.
Actually, my experience is different
Out of every AI I tried, including Claude, DeepSeek was the most accurate and detailed at reading a huge file and following the user prompt to a T, while still remaining very capable for other tasks.
It's weird that this is rated 2/5, though if you exclusively use DeepSeek as a casual user I can see why.
Currently I use DeepSeek for anything that requires some thinking, and Gemini for everything else.
DeepSeek has a good search system, in my opinion. While it doesn't explicitly tell you which parts of the sources it used for its output, it still makes sure to only use relevant information, and I felt it was more reliable than Gemini.
Ironically enough, the censorship seems limited to text generation, and I'm unsure whether this applies to the API, but it takes a fairly progressive stance (compared to Gemini) and doesn't parrot CCP talking points like you might have expected.
It also has the most capable model for conversing in Chinese, and it was very useful for translating and for searching in Chinese contexts (it can search in Chinese and reply in English, allowing for easy research on anything Chinese).
This article makes it seem extremely subpar, but the model behind it is great and worth using even if you have the budget for Claude. You would get more out of it by using a third-party client with the API, though, as in the sketch below.
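For illustration, here's a minimal sketch of what calling it through a client might look like, assuming DeepSeek's OpenAI-compatible endpoint (https://api.deepseek.com) and the "deepseek-chat" model name; the API key and prompt are placeholders:

```python
# Minimal sketch: talk to DeepSeek via its OpenAI-compatible API.
# Assumes: base_url https://api.deepseek.com and model "deepseek-chat"
# (or "deepseek-reasoner" for the R1-style reasoning model).
# pip install openai
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder, not a real key
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this document for me: ..."},
    ],
)

# Print the model's reply text
print(response.choices[0].message.content)
```

Any OpenAI-compatible desktop or web client should work the same way; you just point it at the DeepSeek base URL and model name instead of OpenAI's.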
The main issue with DeepSeek is censorship and privacy, as the review suggests.
I don't use AI myself and have not read the article, but isn't there a censorship and privacy issue at play with every single non-Chinese AI out there as well?
I mean, could I ask one of those non-Chinese AIs to make me, say, a pornographic image based on some famous person, or would it refuse? Could I ask a non-Chinese AI 'how can I make a bomb powerful enough to blow up this or that (whatever one could not legally own)', or 'How should I mount a coup to take hold of power in my country?', or would it refuse to answer any of that? And then, as a subsidiary question, would any of these questions be reported to legal authorities?
Chinese AI startup DeepSeek’s newest AI model, an updated version of the company’s R1 reasoning model [...] might also be less willing to answer contentious questions, in particular questions about topics the Chinese government considers to be controversial [...] China’s openly available AI models, including video-generating models such as Magi-1 and Kling, have attracted criticism in the past for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. In December, Clément Delangue, the CEO of AI dev platform Hugging Face, warned about the unintended consequences of Western companies building on top of well-performing, openly licensed Chinese AI.
More than 230 pages of censorship instructions prepared by Chinese social media platforms were shared by industry insiders with the [independent investigators]. The files reveal deep anxiety among Chinese authorities about the spread of any reference to the most violently suppressed pro-democracy movement in the country's history [...]
There are many more from different, very reliable sources.
Feel free to whatabout further; I won't respond to such comments anymore.