Blizzard bans 250,000 Overwatch 2 cheaters, says its AI that analyses voice chat is warning naughty players and can often 'correct negative behaviour immediately'
It's also been identifying accounts that notably group with cheaters, and has banned thousands of these.
Overwatch 2 has not been having a good time of it in 2023. May brought the unwelcome news that Blizzard's original plans for the game's post-launch support were being changed, mainly the pro…
Why though?
What expectation of privacy do you have for a moderated public platform?
Why is this substantially different from in-game chat logs?
How could they moderate voice chat otherwise?
In the context of online games specifically, harassment over voice chat is an enormous problem, and it drives players away from VC (decreasing their ability to communicate with a team in a team game), or drives them away from the game entirely.
If it works even just /decently/ well and has a functional appeal process, this is an unqualified win. We need to start filtering out the harassment endemic to the broader gaming community.
How long until information from our conversations is recorded and sold to data firms that will in turn use it to sell to us, or hacked and our personal conversations exposed?
No, I just refuse to have some shitty company monitoring me when I'm trying to be relaxed and have fun. It's bad enough you can't do shit online without invasive bullshit, but I'll be fucked if I'm going to deal with it when I'm playing a game.
Idgaf what anyone else is doing or not doing, that's not my problem.
Agreed to a point. I think the ideal system would be to locally store voice chats. If you report someone on your team for being toxic, the recording is sent over for review but made anonymous (like "Blue Player 1" and "Red Player 3" instead of "xXxGunKiller69xXx" and "Purple Dream Flute"). If whatever system is used to review the chats (preferably humans, but it'll probably be AI) determines that there was an actionable offense, the match identifier is pulled up and then, and only then, individual player names get seen so action can be taken; otherwise it's fully deleted.
That's about the best system I can think of to balance privacy with the banning of toxic players. You could use this same system for text interactions as well. While I would love for 100% of my data in mp games to be private, it isn't possible to do that and not be surrounded by toxic assholes 24/7 afaik. And I'm personally fine giving up a bit of privacy if it means not being surrounded by assholes every match.
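Roughly, that review flow could look something like the sketch below (a toy Python example, not anything Blizzard has described; every type and function name here is made up just to illustrate the idea):

```python
# Toy sketch of an anonymized review pipeline (hypothetical, not Blizzard's actual system).
from dataclasses import dataclass
from typing import Callable, Optional
import itertools


@dataclass
class VoiceClip:
    player_id: str   # real account name, e.g. "xXxGunKiller69xXx"
    team: str        # "Blue" or "Red"
    transcript: str  # locally stored recording / transcript


@dataclass
class Report:
    match_id: str
    clips: list[VoiceClip]


def anonymize(report: Report) -> tuple[dict[str, str], list[VoiceClip]]:
    """Swap real account names for per-team aliases before anyone reviews the clips.
    The alias -> real-name mapping stays sealed until an offense is confirmed."""
    counters = {"Blue": itertools.count(1), "Red": itertools.count(1)}
    mapping: dict[str, str] = {}
    scrubbed: list[VoiceClip] = []
    for clip in report.clips:
        alias = f"{clip.team} Player {next(counters[clip.team])}"
        mapping[alias] = clip.player_id
        scrubbed.append(VoiceClip(alias, clip.team, clip.transcript))
    return mapping, scrubbed


def review(report: Report,
           is_actionable: Callable[[list[VoiceClip]], Optional[str]]) -> Optional[str]:
    """The reviewer (human or model) only ever sees aliases. The real name is
    looked up only if an actionable offense is found; otherwise nothing
    identifying is revealed and the report can be deleted."""
    mapping, scrubbed = anonymize(report)
    offender_alias = is_actionable(scrubbed)  # e.g. "Blue Player 1", or None
    if offender_alias is None:
        return None  # no offense: discard the report, never de-anonymize
    return mapping[offender_alias]  # offense confirmed: recover the real account


if __name__ == "__main__":
    report = Report("match-123", [
        VoiceClip("xXxGunKiller69xXx", "Blue", "screaming slurs at teammates"),
        VoiceClip("Purple Dream Flute", "Red", "nice shot!"),
    ])
    # Stand-in reviewer: flags the first clip whose transcript contains "slurs".
    flag = lambda clips: next((c.player_id for c in clips if "slurs" in c.transcript), None)
    print(review(report, flag))  # -> xXxGunKiller69xXx
```

The point is that the alias-to-name mapping only gets opened after a reviewer confirms an offense; if nothing is actionable, the report (and the mapping) just gets discarded.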
Now, the big issue is that gaming companies just want to record everything you say to sell it off for data collection, but that's more to do with the fact that capitalism encourages companies to have 0 ethics.