
Thank you very much for your work. I think people envious of someone's compensation don't deserve a response

"Also, it [Claude Code] flickers" - it does, doesn't it? Why?.. Did it vibe code itself so badly that this is hopeless to fix?..

Because they target 60 fps refresh, with 11 of the 16 ms budget per frame being wasted by react itself.

They are locked in this naive, horrible framework that would be embarrassing to open source even if they had the permission to do it.


That's what they said, but as far as I can see it makes no sense at all. It's a console app. It's outputting to stdout, not a GPU buffer.

The whole point of react is to update the real browser DOM (or rather their custom ASCII backend, presumably, in this case) only when the content actually changes. When that happens, surely you'd spurt out some ANSI escape sequences to update the display. You're not constrained to do that in 16ms and you don't have a vsync signal you could synchronise to even if you wanted to. Synchronising to the display is something the tty implementation does. (On a different machine if you're using it over ssh!)

Given their own explanation of react -> ascii -> terminal, I can't see how they could possibly have ended up attempting to render every 16ms and flickering if they don't get it done in time.

I'm genuinely curious if anybody can make this make sense, because based on what I know of react and of graphics programming (which isn't nothing) my immediate reaction to that post was "that's... not how any of this works".
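
To make that concrete, here's a minimal sketch of that model in plain Node with raw escape codes (purely illustrative, nothing to do with Ink's or Claude Code's actual internals): repaint only when state changes, write nothing at all when the rendered text is unchanged, and never race a frame deadline.

    // Sketch: event-driven terminal output with no frame loop.
    interface UiState {
      spinner: string;
      message: string;
    }

    let lastFrame = "";

    function render(state: UiState): string {
      return `${state.spinner} ${state.message}`;
    }

    function repaint(state: UiState): void {
      const frame = render(state);
      if (frame === lastFrame) return; // content unchanged: emit zero bytes
      // "\r" returns the cursor to column 0; "\x1b[K" erases to end of line,
      // so the old line is overwritten in place and there is no blank frame.
      process.stdout.write("\r" + frame + "\x1b[K");
      lastFrame = frame;
    }

    // Repaints are driven by events (a token arriving, a task finishing),
    // not by a timer racing a 16 ms deadline. Simulate a few such events:
    const spinnerFrames = ["|", "/", "-", "\\"];
    let tick = 0;
    const events = setInterval(() => {
      repaint({ spinner: spinnerFrames[tick % 4], message: "thinking..." });
      if (++tick > 20) {
        clearInterval(events);
        process.stdout.write("\n");
      }
    }, 200);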


Claude code is written in react and uses Ink for rendering. "Ink provides the same component-based UI building experience that React offers in the browser, but for command-line apps. It uses Yoga to build Flexbox layouts in the terminal,"

https://github.com/vadimdemedes/ink


I figured they were doing something like Ink, but interesting to know that they're actually using Ink. Do you have any evidence that's the case?

It doesn't answer the question, though. Ink throttles to at most 30fps (not 60 as the 16ms quote would suggest, though the at most is far more important). That's done to prevent it churning out vast amounts of ASCII, preventing issues like [1], not as some sort of display sync behaviour where missing the frame deadline would be expected to cause tearing/jank (let alone flickering).

I don't mean to be combative here. There must be some real explanation for the flickering, and I'm curious to know what it is. Using Ink doesn't, on its own, explain it AFAICS.

Edit: I do see an issue about flickering on Ink [2]. If that's what's going on, the suggestion in one of the replies to use the alternate screen sounds reasonable, and it has nothing to do with having to render in 16ms. There are tons of TUI programs out there that manage to update without flickering.

[1] https://github.com/gatsbyjs/gatsby/issues/15505

[2] https://github.com/vadimdemedes/ink/issues/359
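
For what it's worth, a throttle like that is only a few lines: coalesce updates and flush only the most recent frame, at most once per interval. (This is a sketch of the general pattern, not Ink's actual implementation.) There's no deadline to miss; a late flush just carries newer content.

    // Trailing-edge render throttle: at most one flush per interval,
    // always flushing the latest frame. General pattern only.
    function throttleRender(
      flush: (frame: string) => void,
      intervalMs: number = 33 // roughly a 30 fps cap
    ): (frame: string) => void {
      let pending: string | null = null;
      let timer: ReturnType<typeof setTimeout> | null = null;

      return (frame: string) => {
        pending = frame;   // keep only the most recent frame
        if (timer) return; // a flush is already scheduled; it picks this up
        timer = setTimeout(() => {
          timer = null;
          if (pending !== null) {
            flush(pending);
            pending = null;
          }
        }, intervalMs);
      };
    }

    // Usage sketch:
    // const scheduleRender = throttleRender(s => process.stdout.write(s));
    // scheduleRender(renderUi(state)); // call as often as you like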


How about the ink homepage (same link as before), which lists Claude as the first entry under

Who's Using Ink?

    Claude Code - An agentic coding tool made by Anthropic.

Great, so probably a pretty straightforward fix, albeit in a dependency. Ink does indeed write ansiEscapes.clearTerminal [1], which does indeed "Clear the whole terminal, including scrollback buffer. (Not just the visible part of it)" [2]. (Edit: even the eraseLines here [4] will cause flicker.)

Using alternate screen might help, and is probably desirable anyway, but really the right approach is not to clear the screen (or erase lines) at all but just write out the lines and put a clear to end-of-line (ansiEscapes.eraseEndLine) at the end of each one, as described in [3]. That should be a pretty simple patch to Ink.
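
For illustration, here's roughly what that looks like with the same ansi-escapes package (a sketch of the approach in [3], not a drop-in Ink patch; it assumes the redrawn block fits within the terminal height, and output taller than the viewport is presumably what pushes Ink towards clearTerminal):

    // Overwrite-in-place redraw: move the cursor back to the top of the
    // previously drawn block and rewrite each line, erasing only the tail
    // of each line and any stale lines below. No clearTerminal, no
    // eraseLines, so the terminal never shows an intermediate blank region.
    import ansiEscapes from "ansi-escapes";

    let previousLineCount = 0;

    function redraw(lines: string[]): void {
      let out = "";
      if (previousLineCount > 0) {
        // Jump back to the first row of the block drawn last time.
        out += ansiEscapes.cursorUp(previousLineCount);
      }
      for (const line of lines) {
        // cursorLeft = column 0; eraseEndLine wipes whatever a previously
        // longer line left behind on this row.
        out += ansiEscapes.cursorLeft + line + ansiEscapes.eraseEndLine + "\n";
      }
      if (lines.length < previousLineCount) {
        // The new frame is shorter: clear the leftover rows below it.
        out += ansiEscapes.eraseDown;
      }
      previousLineCount = lines.length;
      // A single write keeps the update atomic from the terminal's side.
      process.stdout.write(out);
    }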

Likening this to a "small game engine" and claiming they need to render in 16ms is pretty funny. Perhaps they'll figure it out when this comment makes it into Claude's training data.

[1] https://github.com/vadimdemedes/ink/blob/e8b08e75cf272761d63...

[2] https://www.npmjs.com/package/ansi-escapes

[3] https://stackoverflow.com/a/71453783

[4] https://github.com/vadimdemedes/ink/blob/e8b08e75cf272761d63...


The Claude Code programmers are very open about the fact that they vibe code it.

I don't think they say they vibe code, just that Claude writes 100% of the code.

Which oil producers get named and which get omitted on a given forum in these contexts is always interesting. On HN it is often Saudi Arabia or Russia, and almost never Qatar or Iran.


How dare you question the rigor of the venerable LLM peer review process! These are some of the most esteemed LLMs we are talking about here.


It's about formalization in Lean, not peer review


TFA explains how std::move is tricky to use and this is not a feature reserved for library writers


Of course it is not reserved for library writers - nothing is. But it is not a feature that application writers should worry about overmuch.


std::move is definitely there for optimizing application code and is often used there. Another silly thing you often see is people allocating something with a big sizeof on the stack and then std::moving it to the heap, as if it saves the copying.


> another silly thing you often see is people allocating something with a big sizeof on the stack and then std::moving it to the heap, as if it saves the copying

never seen this - an example?


You could say the same things about assemblers, compilers, garbage collection, higher level languages etc. In practice the effect has always been an increase in the height of a mountain of software that can be made before development grinds to a halt due to complexity. LLMs are no different


In my own experience (and from everything I’ve read), LLMs as they are today don’t help us as an industry build a higher mountain of software because they don’t help us deal with complexity — they only help us build the mountain faster.


I see this response a lot but I think it's self-contradictory. Building faster, understanding faster, refactoring faster — these do allow skilled developers to work on bigger things. When it takes you one minute instead of an hour to find the answer to a question about how something works, of course that lets you build something more complex.

Could you say more about what you think it would look like for LLMs to genuinely help us deal with complexity? I can think of some things: helping us write more and better tests, fewer bugs, helping us get to the right abstractions faster, helping us write glue code so more systems can talk to each other, helping us port things to one stack so we don't have to maintain polyglot piles of stuff (or conversely helping us not worry about picking and choosing the best stuff from every language ecosystem).


> I see this response a lot but I think it's self-contradictory. Building faster, understanding faster, refactoring faster — these do allow skilled developers to work on bigger things. When it takes you one minute instead of an hour to find the answer to a question about how something works, of course that lets you build something more complex.

I partially agree. LLMs don't magically increase a human's mental capacity, but they do allow a given human to explore the search space of e.g. abstractions faster than they otherwise could before they run out of time or patience.

But (to use GGP's metaphor) do LLMs increase the ultimate height of the software mountain at which complexity grinds everything to a halt?

To be more precise, this is the point at which the cost of changing the system gets prohibitively high, because any change you make will likely break something else. Progress becomes impossible.

Do current LLMs help us here? No, they don't. It's widely known that if you vibe code something, you'll pretty quickly hit a wall where any change you ask the LLM to make will break something else. To reliably make changes to a complex system, a human still needs to really grok what's going on.

Since the complexity ceiling is a function of human mental capacity, there are two ways to raise that ceiling:

1. Reduce cognitive load by building high-leverage abstractions and tools (e.g. compilers, SQL, HTTP)

2. Find a smarter person/machine to do the work (i.e. some future form of AI)

So while current LLMs might help us do #1 faster, they don't fundamentally alter the complexity landscape, not yet.


Thanks for replying! I disagree that current LLMs can't help build tooling that improves rigor and lets you manage greater complexity. However, I agree that most people are not doing this. Some threads from a colleague on this topic:

https://bsky.app/profile/sunshowers.io/post/3mbcinl4eqc2q

https://bsky.app/profile/sunshowers.io/post/3mbftmohzdc2q

https://bsky.app/profile/sunshowers.io/post/3mbflladlss26


Rust HashSets are HashMaps with the unit type () as the value type, and because that type is zero-sized, the compiler optimizes away the storage for the values entirely. Go doesn't bother to either define a set type like most languages do, or to optimize the map implementation when an empty type is used as the value type.


The Chinese are ahead at too many things at this point to think they're only good at copying


And it's not like making a copy for cheaper isn't something that requires skill and innovation, or iterating on that copy afterwards. Didn't Roomba just lose out to these copies? If the West were truly so much more innovative and better, shouldn't they, as a company, still be infinitely ahead?


That depends heavily on where the cost saving came from. For a long time China made cheap copies with extremely cheap labor, though that may no longer be the case as it seems they're innovating on the manufacturing process these days.


I never said that, or that there's something wrong with copying. I just said the sentence implies copying. Which it does.

And in fact this "the Chinese only copy" meme is crap, as I point out in my last paragraph. Over the centuries the Chinese were the first at quite a few things.

But the sentence says what it says.


"Why, then, are the reconstructions so ugly? One factor may be that the specialists who execute them lack the skill of classical artists, who had many years of training in a great tradition."

Has he ever met people doing this stuff?.. Why write about something you know so little about? Why do people think that they can talk about things without experience, based on abstract reasoning?


I am very impressed with the kind of things people pull out of Claude's жопа but can't see such opportunities in my own work. Is success mostly the result of it being able to test its output reliably, and of how easy it is to set up the environment for this testing?


> Is success mostly the result of it being able to test its output reliably, and of how easy it is to set up the environment for this testing?

I wouldn't say so. From my experience the key to success is the ability to split big tasks into smaller ones and to help the model with solutions when it's stuck.

Reproducible environments (Nix) help a lot, yes, same for sound testing strategies. But the ability to plan is the key.


One other thing I've observed is that Claude fares much better in a well-engineered pre-existing codebase. It adapts to most of the style and has plenty of "positive" examples to follow. It also benefits from the existing test infrastructure. It will still tend to go into infinite loops or introduce bugs and then oscillate between them, but I've found it to be scarily efficient at implementing medium-sized features in complicated codebases.


Yes, that too, but this particular project was an ancient C++ codebase with extremely tight coupling, manual memory management and very little abstraction.


Claude will also tend to go for the "test-passing" development style where it gets super fixated on making the tests pass with no regard to how the features will work with whatever is intended to be built later.

I had to throw away a couple of days' worth of work because the code it built to pass the tests wasn't able to do the actual thing it was designed for, and the only workaround was to go back and build it correctly while, ironically, still keeping the same tests.

You kind of have to keep it on a short leash but it'll get there in the end... hopefully.


жопа -> jopa (zhopa), Russian for "ass", for those who don't spot the joke

