Exactly. There's a reason user-accessible file structures and categories in things like email, OS functions, and even cloud databases prominently feature search bars: there's a lot of data to be had from what people type into them. And the full-on conversations people are having with AI systems are rife with that kind of data.
It's naive to think any company actually deletes data, unless deletion is literally their business or they're destroying evidence. Even then, there are probably multiple copies floating around. The data is simply too valuable. They just flag it as inactive, but I'd bet it's still there for data mining or training.
Nope! We have policies for regularly deleting data, because leftover data can turn into a legal nightmare, especially when it comes to discovery. It's much easier to point opposing counsel to the retention policy explaining why the data isn't there than to compile it, hand it over, and risk having something damaging buried in it. The only things we keep longer are those we're legally obligated to retain.
That would be covered under "destroying evidence": it's just being destroyed before it can be determined to be evidence, which is legal if done under an established retention policy.
(Identifying data and establishing which policies apply to it is part of my work; I just find it ironic that we're effectively pre-deleting evidence.)
I took a whole class on this in library school ("records management"), and it was kind of fun watching the more archives-focused students wrap their heads around getting rid of data/records as soon as legally allowed.
This reminds me of the time Apple made a big show of resisting court efforts to unlock iPhone data; they have every reason to cultivate an impression of caring about privacy, but that isn't actually evidence that they do. Handing them all this information about your life by holding an ongoing conversation about what you're doing on their servers is inherently bad for your privacy, in a way reassurances can't really fix.
There are a lot of reasons to prefer local AI instead, and this is a big one.
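For anyone who wants to try it, here's a minimal sketch of what that can look like, assuming you have Ollama running locally with a model already pulled (the `ollama` Python client and the model name here are just examples, not a specific recommendation):

```python
# Minimal local-inference sketch: the prompt and the response
# never leave your machine. Assumes Ollama is installed and a
# model has been pulled, e.g. `ollama pull llama3.2`.
import ollama

response = ollama.chat(
    model="llama3.2",  # any locally pulled model works here
    messages=[
        {"role": "user", "content": "Summarize the privacy tradeoffs of cloud vs. local AI."}
    ],
)
print(response["message"]["content"])
```

Same conversational interface, but nothing gets logged to somebody else's retention policy.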
You probably aren't. You're on the side of whoever described OpenAI's point of view to you. You won't have an independent opinion until you've read at least one other source, ideally one that isn't owned by a huge media conglomerate. Everyone seems willing to make up their mind after somebody somewhere told them something, without verification. That's how you get poorly informed half-wits with no concept of critical thinking.