What they're actually in panic over is companies using a Chinese service instead of US ones. The threat here is that DeepSeek becomes the standard that everyone uses, and it would become entrenched. At that point nobody would want to switch to US services.
I keep seeing this sentiment, but in order to run the model on a high end consumer GPU, doesn't it have to be reduced to like 1-2% of the size of the official one?
Edit: I just did a tiny bit of reading and I guess model size is a lot more complicated than I thought. I don't have a good sense of how much it's being reduced in quality to run locally.
Just think of it this way. Fewer digital neurons in smaller models means a smaller “brain”. It will be less accurate, more vague, and make more mistakes.
You're on the right track still. All these people touting it as an open model likely haven't even tried to run it locally themselves. The hosted version is not the same as what is easily runnable locally.
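To put rough numbers on the size gap: the full DeepSeek-R1 release is around 671B parameters, while the commonly distributed distill variants are in the 7B range, which is about 1% of the full size and consistent with the "1-2%" estimate above. A back-of-envelope sketch of the VRAM needed just to hold the weights (the parameter counts and byte-widths here are illustrative assumptions, not official requirements, and this ignores activation/KV-cache memory):

```python
def vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just to store the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# Full model at 16-bit precision (2 bytes per parameter)
full = vram_gb(671, 2)
# A 7B distilled variant at 4-bit quantization (0.5 bytes per parameter)
distilled = vram_gb(7, 0.5)

print(f"full R1 @ 16-bit:        ~{full:.0f} GiB")
print(f"distilled 7B @ 4-bit:    ~{distilled:.1f} GiB")
```

The full model lands well over a terabyte of weights, versus a few GiB for a quantized 7B distill, which is why only the latter fits on a high-end consumer GPU.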
My work has already blocked it, but has no problems using AI hosted by a country whose leader is a convicted criminal with close ties to Russia and North Korea
They're gonna ban access to the official service provided by a Chinese company. That's what this is about. The biggest fear is that everybody starts using DeepSeek, and then it will muscle out US companies that fell behind. Once people start using their service, they'll have little reason to switch to something else going forward. Banning it is a protectionist measure that allows US companies to catch up.
You know, if the mainstream news spent this amount of oxygen explaining that it can be downloaded in various sizes and run securely, the whole world would be in a much better situation.
I am not saying that China doesn't do some of the things it's accused of, but the amount of anti-China fear in Western nations is only hurting ourselves.
That's generally true under the paradigm of profit maximization unless you reach some sort of insane tech breakthrough, which DeepSeek seems to have accomplished.