Hacker News

There is a paper from DeepMind on this:

RETRO - Improving language models by retrieving from trillions of tokens (2021)

> With a 2 trillion token database, our Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25× fewer parameters.

https://www.deepmind.com/publications/improving-language-mod...
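The core idea is that instead of memorising facts in its weights, the model looks up nearest-neighbour chunks from a huge text database and cross-attends to them while decoding. A minimal toy sketch of that retrieval step (purely illustrative, not DeepMind's implementation — RETRO uses frozen BERT embeddings and an approximate nearest-neighbour index, here replaced by bag-of-words cosine similarity over a tiny in-memory list):

```python
# Toy sketch of retrieval-augmented generation: fetch the chunks most
# similar to the query, then hand them to the model as extra context.
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; RETRO uses a frozen BERT encoder
    # and a scalable nearest-neighbour index instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, database, k=2):
    # Rank database chunks by similarity to the query, return top-k.
    q = embed(query)
    scored = sorted(database, key=lambda c: cosine(q, embed(c)), reverse=True)
    return scored[:k]

database = [
    "Retro retrieves from a 2 trillion token database.",
    "The Pile is a large language modelling benchmark.",
    "Unrelated chunk about cooking pasta.",
]
neighbours = retrieve("trillion token retrieval database", database)
print(neighbours[0])  # the decoder would cross-attend to these chunks
```

In the actual paper the retrieved neighbours are not prepended to the prompt but attended to through dedicated cross-attention layers, which is what lets a much smaller model match GPT-3 on the Pile.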


