Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes

blog.research.google

Cross-posted to:

LocalLLaMA @sh.itjust.works

Generative AI @mander.xyz

Hacker News @lemmy.smeargle.fans

No comments