Would it be possible to scale this up to use LLaMA 30B? Am I right in understanding that larger models need more hardware to fine-tune?
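
For intuition on why larger models need more fine-tuning hardware, here is a back-of-the-envelope sketch. The numbers are my own rule-of-thumb assumptions, not anything specific to this project: ~16 bytes per parameter for full fine-tuning with mixed-precision Adam, ~1 byte per parameter for a frozen 8-bit base, and an assumed ~0.1% trainable fraction for a LoRA-style adapter.

    # Rough GPU memory needed just for model states (weights, gradients,
    # optimizer), ignoring activations, batch size, and CUDA overhead.

    def full_finetune_gb(n_params):
        # Mixed-precision Adam rule of thumb: 2 bytes fp16 weights
        # + 2 bytes fp16 grads + 4 bytes fp32 master weights
        # + 4 + 4 bytes Adam moments = ~16 bytes per parameter.
        return n_params * 16 / 1e9

    def lora_8bit_gb(n_params, trainable_frac=0.001):
        # Frozen base model in int8 (~1 byte/param), plus full
        # training state only for the small adapter fraction
        # (trainable_frac is an assumed ~0.1%).
        return (n_params * 1 + n_params * trainable_frac * 16) / 1e9

    # Approximate parameter counts; the "30B" checkpoint is ~32.5B.
    for name, n in [("7B", 6.7e9), ("13B", 13e9), ("30B", 32.5e9), ("65B", 65.2e9)]:
        print(f'LLaMA {name}: full fine-tune ~{full_finetune_gb(n):.0f} GB, '
              f'LoRA on 8-bit base ~{lora_8bit_gb(n):.0f} GB')

Under these assumptions, a full 30B fine-tune needs on the order of 500 GB for model states alone (many GPUs), while a LoRA-style adapter on a quantized base drops that to roughly 33 GB, which is why adapter methods are the usual route to fine-tuning the larger checkpoints on a single large GPU.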

