AI models face collapse if they overdose on their own output

You're viewing a single thread.

25 comments
  • Wow, this is a peak bad-science-reporting headline. I hate to be the one to break the news, but no, this is deeply misleading. We all want AI to hit its downfall, but these issues with recursive training data or training on small datasets have been near enough solved for 5+ years now. The Nature paper is interesting because it explains how specific kinds of recursion affect a broad range of model types; it doesn't mean AI is going to crawl back into Pandora's box. The opposite, in fact, since this will let us design even more robust systems.

    • I've read the source Nature article (though I skimmed the parts that were beyond my understanding) and I did not get the same impression.

      I am aware that LLM service providers regularly use AI-generated text for additional training (from my understanding, this is done to "tune" the results toward a certain style). This is not a new development.

      From my limited understanding, LLM model degeneracy is still a relevant concern in the medium to long term. If an increasing percentage of your net new training content is originally LLM-generated (and you have difficulty identifying LLM-generated content), it stands to reason that you would eventually run into model degeneracy (the toy sketch after this comment illustrates the dynamic).

      I am not saying you're wrong. Just looking for more information on this issue.
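
      As a rough illustration of the "increasing share of synthetic data" worry above, here is a toy sketch (my own simplified stand-in, not code from the Nature paper or a real LLM pipeline): repeatedly refit an empirical token distribution on samples drawn only from the previous generation's fit. Any rare token that happens to draw zero samples never comes back, so the tail of the distribution erodes generation by generation, which is the basic mechanism behind the collapse the paper describes.

      ```python
      # Toy model-collapse sketch: refit a categorical "token" distribution on
      # samples drawn from the previous generation's fit. Once a rare token gets
      # zero samples it is gone for good, so the tail shrinks every round.
      import numpy as np

      rng = np.random.default_rng(42)

      vocab_size = 1_000
      # Zipf-like "human" token distribution with a long tail.
      true_probs = 1.0 / np.arange(1, vocab_size + 1)
      true_probs /= true_probs.sum()

      probs = true_probs.copy()
      samples_per_generation = 5_000  # assumed: far too few samples to cover the tail

      for generation in range(1, 21):
          counts = rng.multinomial(samples_per_generation, probs)
          probs = counts / counts.sum()       # refit purely on synthetic output
          surviving = int((probs > 0).sum())  # tokens that still carry any mass
          print(f"gen {generation:2d}: {surviving:4d}/{vocab_size} tokens survive")
      ```

      Mixing a fixed share of genuinely human data back into each generation slows the erosion, which is why the fraction of fresh human content entering the pipeline matters so much.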

    • AI needs human content, and a lot of it. Someone calculated that to be good it would need an extreme amount of data, more than can even be gathered now, hence all the hallucinations and the effort to optimize and get by on scraps of semi-forged data. Semi-forged, artificial data isn't anywhere close to the random gibberish of garbage AI output.

      • Depends on what you do with it. Synthetic data seems to be really powerful if it's human-controlled and well built. Stuff like TinyStories (simple LLM-generated stories that use only the vocabulary of a three-year-old) can be used to make tiny language models produce sensible English output. My favourite newer example is the base data for AlphaProof (LLM-generated translations of proofs from math papers into the proof-validation system Lean), used to teach an LLM the basic structure of mathematical proofs. Validation in Lean itself can be used to keep only the high-quality (i.e. correct) proofs; a sketch of that filter-then-train pattern follows below. Since AlphaProof is basically a reinforcement-learning routine that uses an LLM to generate good candidate proof steps (shrinking the search space of steps to consider), applying it yields new correct proofs that can be used to further improve its internal training data.
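
        The validator-gated loop described above can be sketched generically. This is a minimal sketch of the pattern, not AlphaProof's actual pipeline; generate_candidates and passes_validator are hypothetical placeholders standing in for an LLM sampler and a formal checker such as Lean.

        ```python
        # Minimal sketch of validator-gated synthetic data: generate candidates,
        # keep only those the external checker accepts, and use the survivors as
        # training data. Both callables are hypothetical placeholders.
        from typing import Callable, Iterable, List

        def build_synthetic_corpus(
            generate_candidates: Callable[[int], Iterable[str]],
            passes_validator: Callable[[str], bool],
            target_size: int,
            batch_size: int = 64,
        ) -> List[str]:
            """Collect `target_size` synthetic examples that pass the validator."""
            corpus: List[str] = []
            while len(corpus) < target_size:
                for candidate in generate_candidates(batch_size):
                    if passes_validator(candidate):  # e.g. the proof type-checks in Lean
                        corpus.append(candidate)
                        if len(corpus) >= target_size:
                            return corpus
            return corpus
        ```

        The key point is that the filter supplies an external quality signal, so the synthetic examples that survive are closer to curated data than to raw model output.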

      • AI needs human content

        Obviously I know what you mean here, and I agree with the sentiment, but there's an interesting wrinkle: intelligence doesn't need human content, since it arose without any. It needs stimuli. Life and central nervous systems evolved in the presence of the universe: light sensors, chemical sensors. Eventually we developed lots of other senses, and we used those to build our models (literally mental models!), almost entirely without any human input, since most of that happened before humans existed.

        I think the LLM field might well collapse without new human input--maybe it's too early to be sure one way or the other--but we can still go further by constructing actual senses and ways to explore the world, then making computers learn from that.
