baobabKoodaa | 9 months ago | on: Show HN: AutoThink – Boosts local LLM performance ...
Yes, if you only care about correctness, you always use the maximum possible inference compute. Everything that does not do that is trading correctness for speed.
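To illustrate the trade-off being described: with a local model, the inference-compute budget is typically capped by something like a maximum number of generated (reasoning) tokens, and lowering that cap buys latency at the possible cost of a truncated or worse answer. Below is a minimal sketch of that idea, not AutoThink's own method, assuming a Hugging Face transformers-style local model; the model name, prompt, and budget values are placeholder assumptions.

    # Sketch only: capping the generation budget of a local model trades
    # potential correctness for speed. Model name and budgets are placeholders.
    import time

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed placeholder model

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    def answer(prompt: str, max_new_tokens: int) -> tuple[str, float]:
        """Generate under a fixed token budget and report wall-clock time."""
        inputs = tokenizer(prompt, return_tensors="pt")
        start = time.perf_counter()
        with torch.no_grad():
            output = model.generate(
                **inputs,
                max_new_tokens=max_new_tokens,  # the inference-compute cap
                do_sample=False,
            )
        elapsed = time.perf_counter() - start
        text = tokenizer.decode(
            output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        return text, elapsed

    prompt = (
        "A train travels 120 km in 1.5 hours. "
        "What is its average speed? Think step by step."
    )

    # Small budget: fast, but the reasoning may be cut off before the answer.
    # Large budget: slower, but the full reasoning chain fits.
    for budget in (32, 512):
        text, elapsed = answer(prompt, budget)
        print(f"budget={budget:4d}  time={elapsed:5.1f}s  answer={text[:80]!r}")

In that framing, any scheme that allocates less than the largest budget it could afford is accepting some risk of a worse answer in exchange for speed, which is the point the comment makes.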