
I think it cannot surpass SOTA on some LM evaluation sets, but keep in mind that achieving better results requires a very good training dataset, which not everyone can afford.

On the other hand, the main selling points of Zamba/Mamba are low latency, high generation speed, and efficient memory usage. If that holds up, LLMs could become much easier for everyone to run. All we need to do is wait for someone with a good training dataset to train a SOTA Mamba (see the rough sketch below for why the memory claim is plausible).
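
For intuition on the memory claim, here is a back-of-envelope sketch. The layer count, dimensions, and state size below are illustrative assumptions, not any real model's config; the point is only that a transformer's KV cache grows linearly with context length, while a Mamba-style SSM keeps a fixed-size recurrent state per layer:

    # Illustrative numbers only, not a real model config.
    def transformer_kv_cache_bytes(seq_len, n_layers=32, n_heads=32,
                                   head_dim=128, dtype_bytes=2):
        # keys + values, stored per token, per layer
        return 2 * n_layers * n_heads * head_dim * seq_len * dtype_bytes

    def mamba_state_bytes(n_layers=32, d_model=4096, state_dim=16,
                          dtype_bytes=2):
        # fixed recurrent state per layer, independent of context length
        return n_layers * d_model * state_dim * dtype_bytes

    for seq_len in (1_024, 8_192, 65_536):
        kv = transformer_kv_cache_bytes(seq_len) / 2**20   # MiB
        ssm = mamba_state_bytes() / 2**20                  # MiB
        print(f"{seq_len:>6} tokens: KV cache ~{kv:8.1f} MiB, SSM state ~{ssm:5.1f} MiB")

With these toy numbers the KV cache goes from ~512 MiB at 1K tokens to ~32 GiB at 64K, while the SSM state stays at a few MiB regardless of context length.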


