Meta plans to replace human reviewers with AI, automating up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content
For years, when Meta launched a new feature for Instagram, WhatsApp or Facebook, teams of reviewers evaluated possible risks: Could it violate users' privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?
Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.
Really? Humans? Maybe even qualified humans? Huh! Never would've thought that.
Set your timers. We're going to hear about an unethical decision made by this system in 5, 4, 3, ...