Hacker News

If you are on the $100 Claude tier, what makes you think the $20 Ollama tier is enough for you?


If your workflow is general enough, you can (and should) switch between models. They all have different styles and blind spots.

Like I had Codex + gpt-5-codex (20€ tier) build me a network connectivity monitor for my very specific use case.

It worked, but had some really weird choices. Gave it to Claude Code (20€ tier again) and it immediately found a few issues and simplifications.


Right. And then there's using an MCP tool that instantiates another agent, but backed by a different model.

Here's a good example: summarizing a page of content. The content might be pulled down by an agentic crawler, so using a local model to summarize is great. It's fast, costs nothing (or very little), and I can run it without guardrails, since it doesn't represent a cost risk if it runs out of control.
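A minimal sketch of that summarization step, assuming a local Ollama server on its default port and hitting its `/api/generate` endpoint (the model name and prompt wording here are my own choices, not anything specific):

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: server running on this port).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(text: str, model: str = "gpt-oss:120b") -> dict:
    """Build the JSON payload for a non-streaming generate call."""
    return {
        "model": model,
        "prompt": f"Summarize the following page in 3 bullet points:\n\n{text}",
        "stream": False,  # one JSON object back instead of a token stream
    }

def summarize(text: str) -> str:
    """POST to the local Ollama server and return the generated summary."""
    payload = json.dumps(build_request(text)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because it's all local, you can run this in a crawler loop without worrying about a runaway bill.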


To state it clearly (and repeat myself a bit), what makes the $20 Ollama tier valuable to me is:

1. Access to specific large open models (Qwen3 235B, DeepSeek 3.1 671B, Llama 3.1 405B, GPT-OSS 120B)

2. Having them available via the Ollama API LOCALLY

3. The ability to set up Codex to use Ollama's API for running tools on different models

I mean, really, nothing else is even close at this point and I would rather eat a bug than use Microsoft's cloud.
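For point 3, a sketch of what that setup can look like: the Codex CLI reads a TOML config where you can define a custom model provider pointing at Ollama's OpenAI-compatible endpoint. The exact keys below (`model_providers`, `base_url`) follow my understanding of Codex's `~/.codex/config.toml` format, and the model name is just an example, so treat this as an assumption to check against the docs:

```toml
# ~/.codex/config.toml (sketch, assuming Codex's custom-provider config)
model = "gpt-oss:120b"          # example model; use whatever you have pulled
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible API
```

The same pattern works for a remote Ollama host by changing `base_url`.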



