
What people actually want is something like GPT-4o/o1 running locally. That's the dream for local LLM people.

Running a 7B model for fun is not what most people actually want; 7B models only serve fairly niche use cases.



Regarding <10B LLMs: yes, they're not that good. However, <10B is a range that lets many people do their own tweaking and fine-tuning on consumer hardware.
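For example, here's a minimal sketch of what that tweaking can look like, assuming the Hugging Face transformers + peft stack; the base model name and the LoRA hyperparameters are illustrative placeholders, not anything from the comment above:

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import LoraConfig, get_peft_model

  # Placeholder base model; any ~7B causal LM works the same way.
  model_name = "mistralai/Mistral-7B-v0.1"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(
      model_name,
      torch_dtype=torch.bfloat16,  # bf16 weights of a 7B model ~= 14 GB
      device_map="auto",
  )

  lora_config = LoraConfig(
      r=16,                                 # adapter rank
      lora_alpha=32,
      target_modules=["q_proj", "v_proj"],  # attention projections only
      lora_dropout=0.05,
      task_type="CAUSAL_LM",
  )
  model = get_peft_model(model, lora_config)
  model.print_trainable_parameters()  # usually well under 1% of all params

Because only the small adapter matrices are trained, this kind of run fits on a single consumer GPU, which is exactly what makes the <10B range attractive for hobbyists.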


For a local LLM, you can't really demand a certain performance level; it is what it is.

What you can choose is the architecture, be it dense or MoE.

Besides, let's assume the best open-weight LLM right now is DeepSeek R1. Is it practical for you to run R1 locally? If not, R1 means nothing to you.

Maybe R1 will be surpassed by Llama 4 Behemoth. Is it practical for you to run Behemoth locally? If not, Behemoth also means nothing to you.
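A back-of-the-envelope estimate shows why. This is just weight storage at different quantization levels (ignoring KV cache and activations), assuming R1's published size of roughly 671B total parameters:

  def weight_gb(params_billion: float, bits_per_weight: int) -> float:
      # GB needed just to store the weights at a given quantization.
      return params_billion * 1e9 * bits_per_weight / 8 / 1e9

  for name, size_b in [("DeepSeek R1 (~671B total, MoE)", 671), ("7B dense", 7)]:
      for bits in (16, 8, 4):
          print(f"{name} @ {bits}-bit: ~{weight_gb(size_b, bits):.0f} GB")

  # Even at 4-bit, R1's weights are ~335 GB -- far beyond any consumer GPU --
  # while a 7B model at 4-bit fits in roughly 4 GB of VRAM.

And although R1 is MoE with only ~37B parameters active per token, all ~671B still have to be resident in memory (or streamed from disk), which is why the architecture choice matters but doesn't make a frontier-scale model practical at home.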




