RETRO - Improving language models by retrieving from trillions of tokens (2021)
> With a 2 trillion token database, our Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25× fewer parameters.
https://www.deepmind.com/publications/improving-language-mod...