The Australian government says the Chinese AI app is a threat to it and its assets.
Australia has banned DeepSeek from all government devices and systems over what it says is the security risk the Chinese artificial intelligence (AI) startup poses.
...
Growing - and familiar - concerns
Western countries have a track record of being suspicious of Chinese tech - notably the telecoms firm Huawei and the social media platform TikTok - both of which have been restricted on national security grounds.
...
Australia's science minister said in January that countries needed to be "very careful" about DeepSeek, citing "data and privacy" concerns.
In Italy, the chatbot was removed from app stores after regulators questioned its privacy policy. The Italian government had previously temporarily blocked ChatGPT over privacy concerns in March 2023.
Regulators in South Korea, Ireland and France have all begun investigations into how DeepSeek handles user data, which it stores on servers in China.
...
Generally, AI tools will analyse the prompts sent to them to improve their product.
This is true of apps such as ChatGPT and Google Gemini as much as it is DeepSeek.
All of them gather and keep information, including email addresses and dates of birth.
Meanwhile our government services and tenders practically demand US software and services provided by US companies on US-controlled hosting.

I haven't seen any good use for LLMs beyond being an amusement, but downloading the DeepSeek model to run locally is absolutely safe, and local models are all anyone should be using with any data where they have a responsibility, ethical or legal, to maintain privacy and security. And if you are doing things properly and everything is local, then DeepSeek reportedly has some efficiency advantages that make it worth considering over alternatives.
Preventing exfiltration of Australian data to foreign jurisdictions is absolutely the correct thing to do, but block OpenAI and Microsoft and other US companies as well. Once again Australia does whatever it's told. I kind of understand when it's the mining barons or real estate developers, given they do at least make some economic contribution to the country. But I have no idea why we suck up to US tech bros when all they do is lower our productivity by addicting us to crap products, corrupt our democracy and extort rent from us for the privilege.
... doing things properly and everything is local then Deepseek reportedly has some efficiency advantages that make it worth considering over alternatives
Again I ask the question: why am I left with the perception that end users have the ability to access or install this in the workplace in the first place?
Any IT department worth its paycheque would already have everything locked down to hell. I work with a lot of local councils and they've grasped this concept; why hasn't the federal government?
The APS has pretty clear policies on the use of these tools in general. Some experiments are being run but largely policy is "no, it's a liability nightmare"
And yet Copilot is busy burrowing into the flesh of the government like a growing hookworm, while a large swathe of big business simply trusts Microsoft's "Oh no, we keep your data entirely separate and safe. We don't use it to train the LLM, pinky promise." Meanwhile ChatGPT keeps showing up in the hands of the most clueless people: "Oh, I gave it all my personal info so it could rewrite my resume. How great is AI!"
I feel like this could be solved immediately and easily: make every privacy breach by any company subject to a fine totalling a single-digit percentage of the company's global turnover. So for each privacy breach where Copilot is involved, that will be... say... 3 billion dollars. They would yank their "AI Solution" from the local market so quickly you would hear a cracking sound.
Our government banning wealthy off-shore interests just because they happen to be highly toxic and detrimental with negligible benefits to the citizens they are exploiting...
Sounds like a slippery slope there.
I imagine there are more than a few companies/industries that would see that as a dangerous precedent.
You'd have to be mad to put important information into any AI model unless you're hosting it locally and know it isn't sending info anywhere (the latter being the hard part to verify). All of the online AI services really should be blocked if departments/companies are taking security seriously.
Yes, but at the same time, an astounding amount of people are mad when it comes to tech.
My mate in IT says just this month someone in their corporate office used their work email to sign up to a malicious fake copy of a piracy website. If they were reusing the same password, that could let a hacker into the company account, to say nothing of anything else that employee signed up to with that work email.
That doesn't even cover the people posting things they shouldn't on facegram.
That is unfortunately true, for example I find it sadly impressive that one has a decent chance of getting classified info simply by starting an argument on the War Thunder forums...