Hacker News

Congratulations! Any work on optimising efficiency for LLMs is much appreciated.

So far I’ve taken only a lazy approach to optimising local LLMs: I send small queries to my M4 Mac Mini running MLX models and larger queries to my Nvidia 4090. It’s remarkable how efficient the M4 is compared to the Nvidia card, and I think Apple is headed in the right direction with MLX.
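The size-based routing described above can be sketched as a tiny dispatcher. Everything here is an illustrative assumption: the endpoint URLs, the token threshold, and the characters-per-token heuristic are placeholders, not anything the comment specifies.

```python
# Minimal sketch of routing prompts between two local LLM backends by size.
# Endpoints and threshold are hypothetical placeholders.

MLX_ENDPOINT = "http://m4-mini.local:8080/v1"   # hypothetical MLX server on the Mac Mini
CUDA_ENDPOINT = "http://rtx4090.local:8000/v1"  # hypothetical server on the 4090 box
THRESHOLD_TOKENS = 512                          # arbitrary cutoff for "small" queries


def estimate_tokens(prompt: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(prompt) // 4)


def pick_backend(prompt: str) -> str:
    """Send small prompts to the efficient M4, large ones to the 4090."""
    if estimate_tokens(prompt) <= THRESHOLD_TOKENS:
        return MLX_ENDPOINT
    return CUDA_ENDPOINT


if __name__ == "__main__":
    print(pick_backend("What's 2+2?"))
    print(pick_backend("Summarize this document: " + "x" * 10000))
```

Both endpoints could then be called with the same OpenAI-compatible client, so the rest of the workflow doesn’t need to know which machine answered.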

I’ll read about AutoThink and try to integrate it into my workflow.


