Analysis shows that indiscriminately training generative artificial intelligence on real and generated content, usually done by scraping data from the Internet, can lead to a collapse in the ability of the models to generate diverse high-quality output.
Of course it can; that's not really a shocking statement. I do like this research, though: it actually looks at how the underlying data distribution gets forgotten.
(There's also the practice of merging AI models together; many models are now merges of merges with largely unknown lineage, and in that space there's a growing inbreeding problem.)
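For anyone unfamiliar, a lot of merging boils down to weight-space interpolation between same-architecture checkpoints. Here's a rough sketch of the idea; the checkpoint dicts and names are made up for illustration, not any real model:

    import numpy as np

    # Toy illustration of weight-space merging: linear interpolation of two
    # same-architecture checkpoints, parameter by parameter. The checkpoints
    # here are stand-ins for real state dicts.
    def merge(ckpt_a, ckpt_b, alpha=0.5):
        return {name: alpha * ckpt_a[name] + (1 - alpha) * ckpt_b[name]
                for name in ckpt_a}

    rng = np.random.default_rng(0)
    ckpt_a = {"w": rng.normal(size=(4, 4)), "b": rng.normal(size=4)}
    ckpt_b = {"w": rng.normal(size=(4, 4)), "b": rng.normal(size=4)}

    child = merge(ckpt_a, ckpt_b)
    # "Merges of merges": apply this a few generations deep and nobody can
    # say which base models a checkpoint actually descends from.
    grandchild = merge(child, merge(ckpt_b, child))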
"Indiscriminate" is the key word in this paper. No one trains this way. Synthetic data and filtering out bad data are already essential steps in training pipelines and will stay that way. With proper filtering and evaluation, models trained on synthetic data do better than their predecessors.
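To make the contrast with "indiscriminate" concrete, here's a rough sketch of curated data mixing. quality_score, the threshold, and the ratio cap are hypothetical stand-ins for whatever a real pipeline uses (classifier scores, perplexity cutoffs, dedup, human review):

    # Hypothetical sketch of curated (non-indiscriminate) data mixing.
    def build_training_set(real_data, synthetic_data, quality_score,
                           threshold=0.8, max_synthetic_ratio=0.3):
        kept = [x for x in synthetic_data if quality_score(x) >= threshold]
        # Cap the synthetic share so real data keeps anchoring the distribution.
        budget = int(len(real_data) * max_synthetic_ratio)
        return list(real_data) + kept[:budget]

    real = list(range(10))
    synth = list(range(100, 120))
    mixed = build_training_set(real, synth, quality_score=lambda x: 0.9)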
This is not the end of AI, as so many wish it would be.
I don't think this is going to be the end of AI either, and the corpus of data from before AI-generated content became prevalent is huge. So I don't think there's really a lack of training data. I personally find this more interesting from the perspective of how these algorithms work in general. The fact that they collapse when consuming their own content seems to indicate that the content they generate is fundamentally different from content generated by humans.
Yeah, that's completely fair. I think AI models in general do have lots of interesting characteristics that are very different from humans'. I just see a lot of people drawing conclusions from papers like this that aren't justified.
It's obvious enough that this will happen if you cycle one model's output through itself, but they looked at different types of models (LLMs, VAEs, and GMMs) and found the same collapse in all of them. I think that's a big finding.
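You can see the mechanism in a one-parameter version of the GMM case: fit a Gaussian to data, sample a fresh dataset from the fit, refit, repeat. This is a sketch of the general idea, not the paper's exact setup:

    import numpy as np

    # Minimal self-consumption loop. Finite-sample estimation error compounds
    # across generations, and the fitted variance tends to drift downward, so
    # the tails of the original distribution are progressively forgotten.
    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=100)  # generation 0: "real" data

    for gen in range(1, 51):
        mu, sigma = data.mean(), data.std()       # "train" the model
        data = rng.normal(mu, sigma, size=100)    # next gen sees only model output
        if gen % 10 == 0:
            print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
    # Run it longer and sigma keeps shrinking toward zero: diversity collapses
    # even in this simplest possible "model".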