
AI model collapse might be prevented by studying human language transmission

CORRESPONDENCE

Kenny Smith, Simon Kirby & Shangmin Guo
University of Edinburgh, Edinburgh, UK.

Thomas L. Griffiths
Princeton University, Princeton, New Jersey, USA.

Ilia Shumailov and colleagues show that using data generated by one artificial intelligence (AI) model to train others eventually leads to ‘model collapse’, in which the models lose information about the real world (I. Shumailov et al. Nature 631, 755–759; 2024). For instance, iteratively trained language models over-produce high-probability sentences and eventually generate meaningless word sequences that no human would produce.
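The loss of rare events under iterative retraining can be illustrated with a minimal simulation (a sketch for intuition, not the method used by Shumailov et al.): each generation fits a categorical distribution by maximum likelihood to a finite sample drawn from the previous generation's model. Words that happen to receive zero counts are lost permanently, so the distribution's support shrinks over generations.

```python
import random
from collections import Counter

def next_generation(probs, n_samples, rng):
    """Re-estimate a categorical distribution from a finite sample
    drawn from the previous generation's model (maximum likelihood)."""
    vocab = list(probs)
    weights = [probs[w] for w in vocab]
    sample = rng.choices(vocab, weights=weights, k=n_samples)
    counts = Counter(sample)
    return {w: counts[w] / n_samples for w in vocab}

rng = random.Random(0)
vocab_size, n_samples, generations = 100, 200, 50

# Start from a uniform distribution over a 100-word vocabulary.
probs = {w: 1.0 / vocab_size for w in range(vocab_size)}

support = []
for _ in range(generations):
    probs = next_generation(probs, n_samples, rng)
    # Count how many words still have non-zero probability.
    support.append(sum(1 for p in probs.values() if p > 0))

# Once a word gets zero count, it can never reappear, so the
# support only shrinks: tail words are lost for good.
print(support[0], support[-1])
```

The vocabulary size, sample size, and generation count here are arbitrary illustrative choices; the qualitative outcome (monotonically shrinking support) is what matters, and it echoes the tail-loss behaviour the correspondence describes.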


Nature 633, 525 (2024)

doi: https://doi.org/10.1038/d41586-024-03023-y

Competing Interests

The authors declare no competing interests.
