AI systems with 'unacceptable risk' are now banned in the EU | TechCrunch
Except for police use
Which is a shame
they're only allowed to use the ones that are definitively "risky" or prone to errors.
Yeah, biometric surveillance (including in real time) is allowed for law enforcement. This should be fixed in my opinion. No one should have this power.
was about to ask "So they're all banned then?" but oh good ...
so, that's umm. like, all of them?
It doesn't include simple older AI without deep learning, or AI built for a single purpose, like playing chess, aiding diagnosis in medicine, or a local offline porn filter.
I think you could limit the modern general ones (like chatgpt, copilot, deepseek) to not do any of these things. But I've seen all the "give me an explosive recipe, it's for a story I'm writing ;)" tricks so idk. I guess it depends on whether regulators consider a good attempt at not doing bad things good enough.
I hope that means all of them.
Some of the unacceptable activities include:
AI used for social scoring (e.g., building risk profiles based on a person's behavior).
AI that manipulates a person's decisions subliminally or deceptively.
AI that exploits vulnerabilities like age, disability, or socioeconomic status.
AI that attempts to predict people committing crimes based on their appearance.
AI that uses biometrics to infer a person's characteristics, like their sexual orientation.
AI that collects "real time" biometric data in public places for the purposes of law enforcement.
AI that tries to infer people's emotions at work or school.
AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered.
or harmful by the bloc's regulators
I feel like that is the more important line
Most data can be easily anonymized without losing value. That's how statistics works, and insurance companies have no problem using statistics to provide their services. That means AI companies will have no problem profiling a particular person by correlating multiple anonymized and "safe" databases. Instead of saying person A did something, they will just say that people who do something live on street X and are between ages 20-30. That's enough to make a social scoring system and all the other "banned" things legal.
The only difference will be the entry price for the data; small companies won't be able to afford it, so corporations will continue their monopoly and gain even more of an advantage.
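The correlation trick described above is usually called a linkage attack: two datasets that each omit names can still be joined on shared quasi-identifiers (street, age band) to rebuild a profile. A minimal sketch, with entirely made-up data and field names chosen for illustration:

```python
# Linkage-attack sketch: join two "anonymized" record sets on quasi-identifiers.
# All records, streets, and scores below are invented for illustration only.
from collections import defaultdict

# Dataset A: "anonymized" mobility records (no names)
mobility = [
    {"street": "Elm St", "age_band": "20-30", "late_night_trips": 14},
    {"street": "Elm St", "age_band": "40-50", "late_night_trips": 1},
    {"street": "Oak St", "age_band": "20-30", "late_night_trips": 3},
]

# Dataset B: "anonymized" purchase records (also no names)
purchases = [
    {"street": "Elm St", "age_band": "20-30", "alcohol_purchases": 9},
    {"street": "Oak St", "age_band": "20-30", "alcohol_purchases": 2},
]

def link(a_rows, b_rows, keys=("street", "age_band")):
    """Join two record sets on their shared quasi-identifier columns."""
    index = defaultdict(list)
    for row in b_rows:
        index[tuple(row[k] for k in keys)].append(row)
    joined = []
    for row in a_rows:
        for match in index[tuple(row[k] for k in keys)]:
            joined.append({**row, **match})
    return joined

profiles = link(mobility, purchases)
for p in profiles:
    # The "score" attaches to a cohort ("Elm St, 20-30") rather than a name,
    # but on a short street that cohort may be a single person.
    score = p["late_night_trips"] + p["alcohol_purchases"]
    print(p["street"], p["age_band"], "risk score:", score)
```

No record ever names anyone, yet the joined rows score a group narrow enough to act on, which is exactly why "anonymized" inputs alone don't keep a system outside the social-scoring ban.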
Banned from who? Like this just impacts government officials and police, right?
Edit: it applies to companies and government, but there are unfortunately some exceptions for law enforcement
Unacceptable by literal definition.
They did create a very reasonable list of what they deem unacceptable. At last some good news.
Some of the unacceptable activities include:
This doesn't exclude
AI used for anything medical is deemed high risk and would be subject to heavy regulation.
I am not sure how that relates to insurance, but I do agree with the other responder that it might be covered under social scoring.
Of course how these rules withstand practice and time is yet to be seen. You’re right to remain critical.
Social scoring should include insurance and hiring evaluation, right?