
Researchers warn of ‘catastrophic overtraining’ in LLMs

Date: 2025-03-28 21:01:20

The researchers compared two versions of OLMo-1B: one pre-trained on 2.3 trillion tokens and the other on 3 trillion tokens.


Source: venturebeat.com