China’s DeepSeek AI model represents a transformative development in China’s AI capabilities, and its implications for cyberattacks and data privacy are particularly alarming. By leveraging DeepSeek, China is on its way to revolutionizing its cyber-espionage, cyberwarfare, and information operations.
[...]
DeepSeek’s advanced AI architecture, built on access to vast datasets and cutting-edge processing capabilities, is particularly suited for offensive cybersecurity operations and large-scale exploitation of sensitive information. It is designed to operate in complex and dynamic environments, potentially making it superior in applications like military simulations, geopolitical analysis, and real-time decision-making.
DeepSeek was founded by Liang Wenfeng, co-founder of High-Flyer, a quantitative hedge fund [...] Liang developed DeepSeek more cheaply and quickly than U.S. companies by exploiting China’s vast datasets [...]
[...]
Liang’s close ties to the Chinese Communist Party (CCP) raise the specter that he had access to the fruits of CCP espionage, [...] Over the past decade, Chinese state-sponsored actors and affiliated individuals have come under heightened scrutiny for targeting U.S. AI startups, academic labs, and technology giants in attempts to acquire algorithms, source code, and proprietary data that power machine learning systems.
[...]
Within the U.S., several high-profile criminal cases have placed a spotlight on the theft of AI-related trade secrets. Although many investigations involve corporate espionage more generally, AI has become a particularly attractive prize due to its utility in strategic industries such as autonomous vehicles, facial recognition, cybersecurity, and advanced robotics.
One well-known incident involved the alleged theft of autonomous vehicle technology from Apple’s secretive self-driving car project, where a Chinese-born engineer was accused of downloading large volumes of proprietary data shortly before he planned to relocate to a Chinese competitor. In another case, a separate Apple employee was charged with attempting to smuggle similar self-driving car information out of the country. Both cases underscored the vulnerability of AI research to insider threats, as employees with privileged access to code or algorithms can quickly copy crucial files.
[...]
DeepSeek also poses a unique threat in the realm of advanced persistent threats (APTs) – long-term cyber-espionage campaigns often attributed to state actors. The model could be used to sift through massive volumes of encrypted or obfuscated data, correlating seemingly unrelated pieces of information to uncover sensitive intelligence. This might include classified government communications, corporate trade secrets, or personal data of high-ranking officials. DeepSeek’s ability to detect hidden patterns could supercharge such campaigns, enabling more precise targeting and greater success in exfiltrating valuable information.
DeepSeek’s generative capabilities add another layer of danger, particularly in the realm of social engineering and misinformation. For example, it could create hyper-realistic phishing emails or messages, tailored to individuals using insights derived from breached datasets. These communications could bypass traditional detection systems and manipulate individuals into revealing sensitive information, such as passwords or financial data. This is especially relevant given the growing use of AI in creating synthetic identities and deepfakes, which could further deceive targets into trusting malicious communications.
[...]
China’s already substantial surveillance infrastructure and relaxed data privacy laws give it a significant advantage in training AI models like DeepSeek. This includes access to domestic data sources as well as data acquired through cyber-espionage and partnerships with other nations.
[...]
DeepSeek has the potential to reshape the cyber-threat landscape in ways that disproportionately harm the U.S. and the West. Its ability to identify vulnerabilities, enhance social engineering, and exploit vast quantities of sensitive data represents a critical challenge to cybersecurity and privacy.
If left unchecked, DeepSeek could not only elevate China’s cyber capabilities but also redefine global norms around data privacy and security, with long-term consequences for democratic institutions and personal freedoms.
The "open" AI tech comes with censorship and political bias baked in. Once again we must note that the base for China's AI development is the so-called "AI Capacity Building and Inclusiveness Plan":
[Chinese] Government rhetoric draws a direct line between AI exports and existing initiatives to expand China’s influence overseas, such as Xi Jinping’s signature Belt and Road Initiative (BRI) and Global Development Initiative (GDI). In this case, the more influence China has over AI overseas, the more it can dictate the technology’s development in other countries [...]
[According to the Chinese government] AI must not be used to interfere in another country’s internal affairs — language that the PRC has invoked for as long as it has existed, both to bring nations of the global south on board in China’s ongoing efforts to seize Taiwan and to deflect international criticism of its human rights record [...]
The whole article makes a good read. If you want "open technology" free of oligarchical or similar political power, you need to look elsewhere.
I’ve been playing around with the 70b DeepSeek R1 model on my AI rig this morning. It is most definitely biased on certain topics. But, as with other open models, uncensored versions will soon arise. I do appreciate that most folks don’t have AI rigs capable of running the latest models, and that privilege is not lost on me.
Is it open source? Another article I read earlier said R1 is open weight, not open source. This article only says the org uses open source practices. No other mention of "open".
You are free to learn ‘Xi Jinping thought.’ Doubt this is for the progress of humanity.
That's not how Open Source works. Is this Chinese version of the AI likely biased? Yes...almost certainly.
But Open Source means that anyone can download and use the same source code and same technology to tinker with it and create one that isn't biased and has nothing to do with the Chinese government.
The power of Open Source is that regardless of who creates the software originally, a million eyes are looking at the code. It's nearly impossible to hide any shenanigans.
I might just be uninformed, but this all sounds like, "Asian people are collecting the data that white people have been profiting off of for years!" Is that a fair take?
DeepSeek is actually much more "open" than a certain "OpenAI"
And this project has nothing to do with the Chinese state, otherwise you wouldn't see it open itself up like that.
This is blatant misinformation. Everything from China has to do with the Chinese state, including software made by private companies. There is ample evidence for this. Please see also my comment and the source in this thread.
AI in the hands of the ruling class is a collective threat to the world. Apple Intelligence, Copilot+ PCs: they are all gateways for corporations to circumvent security, scouring every piece of data on your devices in order to process it all. It’s going to become a shitty Trojan horse: it offers nothing of value to people, we handwave it away, and meanwhile it casually peeks into all of our “secure” apps on a daily basis.
The particular AI model this article is talking about is actually openly published for anyone to freely use or modify (fine-tune). There is a barrier in that it requires several hundred gigs of RAM to run, but it is public.
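As a rough illustration of that RAM barrier, here is a back-of-the-envelope sketch (my own estimate, not an official figure) of the memory needed just to hold the model weights, using the publicly stated parameter counts for DeepSeek-R1 (roughly 671B parameters, a mixture-of-experts model) and its 70B distilled variant, at a few common quantization levels:

```python
# Rough memory-footprint estimate for open-weight LLMs.
# Parameter counts below are the publicly stated sizes for DeepSeek-R1
# and its 70B distill; bytes-per-parameter values are generic
# quantization levels, not DeepSeek-specific figures.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just to hold the weights (no KV cache or runtime overhead)."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for name, params in [("DeepSeek-R1 (671B)", 671), ("70B distill", 70)]:
    for precision, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        print(f"{name} @ {precision}: ~{weight_memory_gb(params, nbytes):.0f} GiB")
```

Even at 4-bit quantization the full model needs on the order of 300 GiB for weights alone, consistent with the "several hundred gigs" figure; the 70B distill drops to a few tens of GiB, which is why it is the variant hobbyist rigs can actually run.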