Google didn't tell Android users much about Android System SafetyCore before it hit their phones, and people are unhappy. Fortunately, you're not stuck with it.
According to GrapheneOS, a security-oriented Android Open Source Project (AOSP)-based distro: "The app doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine-learning models that are usable by applications to classify content as spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users."
So it does not scan your photos itself; it's a library that apps can use. I think the point is to let apps reject or tag a picture without ever sending it to a server for scanning, shifting that load from their servers onto the client.
Like, you're trying to post a story on Instagram: the app asks SafetyCore whether the picture contains porn, violence, or something else they don't want, SafetyCore answers yes or no, and Instagram accepts, tags, or refuses the picture accordingly.
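To make that flow concrete, here's a rough Kotlin sketch of what the hand-off could look like. SafetyCore's actual API isn't documented here, so `ContentClassifier`, `classify()`, and `Verdict` are hypothetical names I'm making up for illustration; the only point is that the image gets classified on-device and is never uploaded just for the check.

```kotlin
// Hypothetical sketch of the client-side flow described above.
// SafetyCore's real API isn't public; ContentClassifier, classify(),
// and Verdict are invented names used only to illustrate the idea.
import android.graphics.Bitmap

enum class Verdict { ALLOW, WARN, BLOCK }

interface ContentClassifier {
    // Runs an on-device ML model; the image never leaves the phone for this check.
    suspend fun classify(image: Bitmap): Verdict
}

suspend fun postStory(
    image: Bitmap,
    classifier: ContentClassifier,
    upload: suspend (Bitmap) -> Unit
) {
    when (classifier.classify(image)) {
        Verdict.ALLOW -> upload(image)                         // post as usual
        Verdict.WARN  -> { tagAsSensitive(image); upload(image) } // post with a warning label
        Verdict.BLOCK -> showRejectionMessage()                // refused locally, nothing uploaded
    }
}

// Stubs so the sketch is self-contained.
fun tagAsSensitive(image: Bitmap) { /* mark the post as sensitive in the app's own metadata */ }
fun showRejectionMessage() { /* tell the user the picture was refused */ }
```

Either way, the decision happens before the picture touches the network, which is the whole pitch compared to server-side scanning.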
The danger here IMO is less about privacy and more about censorship: we know that every time something is pushed to fight child porn, it ends up being used to control activists and political opponents. People could be prevented from sharing evidence of police violence, for example.
No problem if you're using Lemmy: you can use any front-end, so you can pick an app that doesn't use SafetyCore.