I've had customers send passwords to me, plain-text, over chat. Frankly, the fact that end-to-end encryption isn't considered basic security for chat applications is just beyond me.
The number of people who've told me Slack is secure because they don't get the difference between transport-layer security and end-to-end encryption is staggering. Slack's marketing muddies the waters on this, too.
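A minimal sketch of the distinction, using symmetric Fernet keys purely for illustration (real protocols like TLS and Signal-style E2EE use asymmetric key exchange, and none of the names below reflect Slack's actual implementation): with transport-layer security the server decrypts every message in order to route and store it, so the provider holds plaintext; with end-to-end encryption the server only ever relays ciphertext it cannot read.

```python
# Toy illustration of transport-layer vs end-to-end encryption.
# Hypothetical example, not anyone's real architecture.
from cryptography.fernet import Fernet

message = b"here is the prod database password"  # the thing people paste into chat

# --- Transport-layer security (what Slack-style services do) ---
# The client encrypts to the *server*; the server decrypts to route/store it.
transport_key = Fernet.generate_key()           # negotiated between client and server
on_the_wire = Fernet(transport_key).encrypt(message)
server_sees = Fernet(transport_key).decrypt(on_the_wire)
assert server_sees == message                   # provider has plaintext: can search, log, train on it

# --- End-to-end encryption ---
# Only the two endpoints share the key; the server relays an opaque blob.
endpoint_key = Fernet.generate_key()            # known to sender and recipient only
relayed_blob = Fernet(endpoint_key).encrypt(message)
# The server stores/forwards relayed_blob but holds no key, so it cannot read it.
recipient_sees = Fernet(endpoint_key).decrypt(relayed_blob)
assert recipient_sees == message
```

The point being: "encrypted in transit and at rest" still means the provider can read, index, and train on everything you send.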
This is another one of those situations where, for them (and every other company sitting on similar content), the upside is just too much money to ignore.
What is the downside? Lost customers? No problem, they'll charge the remaining customers more for new premium features built on the newly trained models. And if they hadn't developed those features in the first place, a competitor would have pulled customers away anyway.
Fines from some government for the egregious violation of a TBD law relating to AI that doesn't even exist yet? Lol, just the cost of doing business.
And policy changes? Who actually believes they'll discard the model parameters they've already spent presumably millions of dollars training?
This is 100% going to get them in legal trouble with companies that used the platform to share data. It'd be like Microsoft starting to train LLMs on Teams messages from companies using it at an enterprise level. This was a really dumb move from Slack, and I have no clue what legal team gave it the green light.