You would have to train new AI models to recognize and ignore other AI content. But that would be an admission that AI content is useless and can't be trusted.
Nice detail to use when searching the internet btw:
"But if you're collecting data before 2022 you're fairly confident that it has minimal, if any, contamination from generative AI," he added. "Everything before the date is 'safe, fine, clean,' everything after that is 'dirty.'"
Try running searches restricted to pre-2022 results, at least for older info, to reduce the chance of picking up AI-generated noise.
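If you want to do that programmatically, here's a minimal sketch. It assumes two things I'm fairly sure of: Google's documented `before:` search operator, and the Wayback Machine's CDX API, whose `to=` parameter limits results to snapshots captured before a given date. The topic string and example URL are just placeholders:

```python
# Sketch of "search pre-2022 only", two ways:
#  1. Google's `before:YYYY-MM-DD` search operator in a query string.
#  2. The Internet Archive's Wayback CDX API, restricted to snapshots
#     captured before 2022 via its `to=` parameter.

import json
import urllib.parse
import urllib.request

topic = "transformer language models"  # illustrative placeholder query

# 1. Date-restricted web search: only matches pages dated before 2022.
google_url = "https://www.google.com/search?q=" + urllib.parse.quote(
    f"{topic} before:2022-01-01"
)
print(google_url)

# 2. Wayback Machine CDX API: list snapshots of a page captured
#    before 2022 (`to=20211231`). The first row of the JSON response
#    is a header describing the fields of the rows that follow.
cdx_url = (
    "https://web.archive.org/cdx/search/cdx"
    "?url=en.wikipedia.org/wiki/Language_model"
    "&to=20211231&output=json&limit=5"
)
with urllib.request.urlopen(cdx_url) as resp:
    rows = json.load(resp)

header, snapshots = rows[0], rows[1:]
for snap in snapshots:
    record = dict(zip(header, snap))
    # Each snapshot can be replayed at a stable archived URL.
    print(f"https://web.archive.org/web/{record['timestamp']}/{record['original']}")
```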
Anyway, it's kinda funny to see that these generators may be producing enough noise to make producing more noise somewhat harder. Hopefully this doesn't also impact more productive AI development, such as what's used in scientific research and the like, as that would genuinely suck.
Edit:
Revised from generators "have produced" to "may be producing" to better reflect the lack of concrete info regarding generative AI data pollution, as someone else pointed out. As they note:
"Now, it's not clear to what extent model collapse will be a problem, but if it is a problem, and we've contaminated this data environment, cleaning is going to be prohibitively expensive, probably impossible," he told The Register.
The plus side of actually useful applications of LLM/AI is that the data involved is usually a small, curated subset, and it would have to be tested anyway since it would be used in the real world. I think the main use of LLM/AI in the mainstream is applying it to small datasets like that, rather than racing toward the holy grail of "general" AI.
It makes me think about how low-background steel has become a precious commodity. Steel made prior to the first atomic bomb detonations has unique value because it's uncontaminated by fallout radionuclides.
We have archives of the internet from before AI as we currently know it came into widespread use. It seems like future LLM designers are going to need to be very crafty about their sources of data rather than just ingesting everything they crawl.
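As a rough illustration of what that date-based curation might look like, here's a minimal sketch. The record format is hypothetical (a real pipeline over something like Common Crawl would read WARC capture metadata instead), and the 2022 cutoff is just the "clean before / dirty after" line from the quote above:

```python
# Sketch of the "low-background" data curation idea: keep only crawl
# records captured before generative AI went mainstream. Records here
# are hypothetical dicts, not a real crawl format.

from datetime import datetime, timezone

AI_CONTAMINATION_CUTOFF = datetime(2022, 1, 1, tzinfo=timezone.utc)

def is_low_background(record: dict) -> bool:
    """Treat a crawl record as 'clean' only if it was captured before
    the cutoff, mirroring the low-background steel analogy."""
    captured = datetime.fromisoformat(record["captured_at"])
    return captured < AI_CONTAMINATION_CUTOFF

crawl = [  # illustrative records, not real data
    {"url": "https://example.com/a", "captured_at": "2021-06-01T00:00:00+00:00"},
    {"url": "https://example.com/b", "captured_at": "2023-03-15T00:00:00+00:00"},
]

clean_corpus = [r for r in crawl if is_low_background(r)]
print([r["url"] for r in clean_corpus])  # -> ['https://example.com/a']
```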