Hacker News | lherron's comments

Gross.


Somehow every 15 line shell script I write now turns into a 50kloc bun cli or tui app. Apparently there are many such cases.


Different use cases. I want aws-cli for scripting, repeated cases, and embedding those executions for very specific results. I want this for exploration and ad-hoc reviews.

Nobody is taking away the cli tool and you don't have to use this. There's no "turns into" here.


Oh I think you misinterpreted my comment! I am very much a fan of this, wasn't throwing shade. I am just remarking on how my side-project scope today dwarfs my side-project scope of a year or two ago.


I did :) and from the votes I'm guessing many others did too. Text communication remains hard as usual, sorry about that :(


Terminal electron.


They buried the lede. The last half of the article, with ways to ground your dev environment to reduce the most common issues, should be its own article. (However, implementing the proper techniques somewhat obviates the need for CodeRabbit, so I guess it’s understandable.)


Why would the self-hosted runner fee be per-minute instead of per-job? I don’t get it.


I had the same question — I understand that the Actions control plane has costs on self-hosted runners that GitHub would like to recoup, but those costs are fixed per-job. Charging by the minute for the user’s own resources gives the impression that GitHub is actually trying to disincentivize third-party runners.


A self-hosted runner regularly communicates with the control plane, and the control plane also needs to keep track of job status, logs, job summaries, etc.

An 8h job is definitely more expensive to them than a 1-minute one, but I'd guess that the actual reason is that this way they earn more money, and dissuade users from using a third-party service instead of their own runners.


Might be an estimation of logs storage/bandwidth.


That's generous, but doesn't seem consistent with how Microsoft does business. Also, if that's the case why does self-hosted cost the same as the lowest hosted tier?


Because the competitor services that provide much cheaper hosted runners also charge per minute.

This isn't aimed at people actually self-hosting; it's aimed at alternative hosted-runner providers. See this list:

https://github.com/neysofu/awesome-github-actions-runners


Runner price based on CPU/memory and time makes sense, since those are the costs associated with executing runners.

The costs for GitHub doing action workflows (excluding running) is less related to job duration.

The most charitable interpretation is that per-minute pricing is easier to understand, especially if you already pay runners per minute.

The less charitable interpretation is that they charge that because they can, as they have the mindshare and network effect to keep you from changing.


or some other metric like how many logs your job produces and they have to process

the only rationale visible from the outside is that this has the best financial projections, equitability with the customer be damned

gotta make up for those slumping ai sales somehow, amiright?


Freaking awesome. You should extend clicking on a link, similar to how this article describes infinite content:

https://worksonmymachine.ai/p/solving-amazons-infinite-shelf...


Gemini 3 = Skynet ?


Building an interactive shell inside their CLI seems like a very odd technical solution. I can’t think of any use case where the same context gathering couldn’t be gleaned by examining the file/system state after the session ended, but maybe I’m missing something.

On the other hand, now that I’ve read this, I can see how having some hooks between the code agent CLIs and ghostty/etc could be extremely powerful.


LLMs in general struggle with numbers. It's easy to tell with medium-sized models, which struggle with line replacement commands where they have to count; it usually takes a couple of tries to get right.

I always imagined they'd have an easier time if they could start a vim instance and send search/movement/insert commands instead: no keeping track of numbers or doing calculations, just visually verifying that the right thing happened.

I haven't tried this new feature yet, but that was the first thing that came to mind when seeing it; it might be easier for LLMs to do edits this way.
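The robustness the comment imagines doesn't strictly need vim: the core idea is anchoring an edit on a search pattern instead of a counted line number. A minimal sketch (the function name and example text are purely illustrative, not from any agent's actual tooling):

```python
def edit_by_anchor(text: str, anchor: str, replacement: str) -> str:
    """Replace the first line containing `anchor`, located by search
    rather than by a line number the model would have to count."""
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if anchor in line:
            lines[i] = replacement
            break
    else:
        raise ValueError(f"anchor {anchor!r} not found")
    return "\n".join(lines)

src = "a = 1\nb = 2\nc = 3"
print(edit_by_anchor(src, "b =", "b = 20"))  # → a = 1\nb = 20\nc = 3
```

This is essentially what vim's `/pattern` + `c` motions buy you over `:42s/...`: the edit survives the file shifting by a few lines, which is exactly where counting-based edits break.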


Gotta be better than codex literally writing a python script to edit a file multiple times in a single prompt response.


Personally haven't had that happen to me, been using Codex (and lots of other agents) for months now. Anecdote, but still. I wrote up a summary of how I see the current difference between the agents right now: https://news.ycombinator.com/item?id=45680796


Still a toss-up for me which one I use. For deep work Codex (codex-high) is the clear winner, but when you need to knock out something small Claude Code (sonnet) is a workhorse.

Also CC tool usage is so much better! Many, many times I’ve seen Codex writing a python script to edit a file which seems to bypass the diff view so you don’t really know what’s going on.


I would add to the list of the vibe engineer’s tasks:

Knowing when the agent has failed and it’s time to roll back. After four or five turns of Claude confidently telling you the feature is done while things drift further off course, it’s time to reset and try again.


Need someone to do this with the old THX test sound.


Even better if it’s dynamic. Just start with a number of tones at random frequencies, and bring them towards unison as the screen opens.
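The dynamic version described above can be sketched with nothing but the Python stdlib: start several sine tones at random frequencies and glide each one linearly toward a shared target as time progresses, accumulating phase so the glide stays click-free. All the specific numbers (five tones, 440 Hz target, 2-second duration) are arbitrary choices for illustration:

```python
import math, random, struct, wave

RATE = 44100        # sample rate in Hz
DUR = 2.0           # seconds for the tones to converge
TARGET = 440.0      # unison frequency (illustrative choice)
N_TONES = 5

random.seed(0)
starts = [random.uniform(200.0, 900.0) for _ in range(N_TONES)]

samples = []
phases = [0.0] * N_TONES
total = int(RATE * DUR)
for n in range(total):
    t = n / total                                  # progress 0 -> 1 as the "screen opens"
    s = 0.0
    for k in range(N_TONES):
        f = starts[k] + (TARGET - starts[k]) * t   # linear glide toward unison
        phases[k] += 2 * math.pi * f / RATE        # accumulate phase: no clicks mid-glide
        s += math.sin(phases[k])
    samples.append(s / N_TONES)                    # normalize so the mix stays in [-1, 1]

with wave.open("converge.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(32767 * x)) for x in samples))
```

Played back, the detuned cluster beats audibly at the start and resolves into a single pitch at the end, which is roughly the THX "Deep Note" effect in miniature.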

