avikalp's comments | Hacker News

This is only true for "development branch" CRs/pull requests. The whole is greater than the sum of the parts. Every small change in the feature that you are building might make complete sense, so every dev-to-feature branch pull request would get approved easily.

But if you are not also reviewing the feature-to-main branch pull request, you are just inviting problems. That is a bigger CR that deserves a careful review, and there is no way it could be a small one.
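One quick way to put that bigger CR in front of you is to look at the aggregate diff of the feature branch against main. A rough sketch (the branch names "feature" and "main" are just placeholders for your own setup):

    # Sketch: print the combined change set the feature-to-main PR would merge.
    import subprocess

    def feature_diff(base: str = "main", feature: str = "feature") -> str:
        # "base...feature" diffs against the merge-base, i.e. everything the
        # feature branch adds on top of main across all the small dev PRs.
        out = subprocess.run(
            ["git", "diff", "--stat", f"{base}...{feature}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    print(feature_diff())

Even just the --stat view makes it obvious when the "sum of the parts" has grown into something that needs its own careful review.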


I’ve done that gig many times. I take your point but if there’s a problem, figuring out who to blame always means going back to the original dev CR.


I agree with you 100%.

In the maker-checker process, if we are imagining a future where AI will be writing/editing most of the code, then AI-code-review tools will need to integrate into that agentic process.

And the job of a better code-review interface (like the one that I am trying to build) would be to provide a higher level of abstraction to the user so that they can verify the output of the AI code generators more effectively.


What you are saying is true, and this is the feedback I hear every time I talk to a small team of developers (generally fewer than 15 developers).

At this stage, you don't need "another set of eyes" because it is not that big of a problem to break something, as you are not going to lose massive amounts of money because of the mistake.

All these teams need is a sanity check. They also generally (even without the AI code reviewers) do not have a strong code review process.

This is why, in the article, I have clearly mentioned that these are learnings based on conversations with engineers at Series B and Series C startups.


I have had a similar discussion with a fellow On-Deck Founder, and here is where we landed:

- More than being "good enough", it is about taking responsibility.
- A human can make more mistakes than an AI and still be the more appropriate choice, because humans can be held responsible for their actions. AI, by its very nature, cannot be 'held responsible' -- this has been agreed upon based on years of research in the field of "Responsible AI".
- To completely automate anything using AI, you need a way to trivially verify whether it did the right thing or not. If the output cannot be verified trivially, you are just changing the nature of the job, and it is still a job for a human being (like the staff you mentioned who remotely control Waymos when something goes wrong).
- If an action is not trivially verifiable and the AI's output has to reach the end-user directly without a human in the loop, then the creator is taking a massive risk, which usually doesn't make sense for a business when it comes to mission-critical activities.

In Waymo's case, they are taking massive risks because of Google's backing. But it is not about being 'good enough'. It is about the results of the AI being trivially verifiable - which, in the case of driving, is true. You just need a few yes/no answers: Did the customer reach their destination? Did they get there safely? Did they arrive on time? Are they happy with the experience?
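To make "trivially verifiable" a bit more concrete, the acceptance check reduces to a handful of booleans. A minimal sketch (the field names are mine, purely illustrative):

    from dataclasses import dataclass

    @dataclass
    class RideOutcome:
        reached_destination: bool
        arrived_safely: bool
        arrived_on_time: bool
        customer_happy: bool

    def ride_verified(outcome: RideOutcome) -> bool:
        # The whole check is a few yes/no answers -- no expert judgement
        # is needed to decide whether the AI did its job.
        return all((
            outcome.reached_destination,
            outcome.arrived_safely,
            outcome.arrived_on_time,
            outcome.customer_happy,
        ))

Contrast that with reviewing code, where deciding whether the output is "right" is itself the expensive part.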


I'd be really hesitant to say anything involving humans and human judgement under uncertainty is trivial. What if the customer wants the car to drive aggressively, maybe speed a little where it "seems" safe? Should the car stop for an object that might be a plastic bag or a child's backpack? Even manual drivers are difficult to "verify" because accidents and traffic violations depend on interpretations of events, which is why we often have to go to court.


I'm sorry, I didn't mean it to be an ad. I have been interviewing engineering leaders for months, and my startup idea was born out of those conversations. I don't have the product ready yet - it is evolving based on what I am learning.

I just thought it would be a good idea to share what I have learnt.


This is an open-source project. Link to the source code: https://github.com/avikalpg/typing-analyst


Wow, this is a very well-written article. I have run into a lot of this myself as a software developer.

This makes me wonder: the concept of pair programming has been around for a very long time, and yet pull requests have grown in popularity while the use of pair programming remains pretty limited.

Does that mean that companies want to operate like a bunch of individuals instead of a team? Is independence valued more than speed & collaboration when it comes to software development teams?


Pairing doesn't replace code review because the reviewer needs to see the finished branch with fresh eyes, unbiased by discussion and false starts, to know whether it's safe and clearly explained in writing for every oncall in the future.

(I also need peace and quiet to think.)


I have heard the same as well. These days, I am thinking about how AI code gen might affect this. When AI is writing the code, you can't consider yourself the pair programmer, because your speed is not even close to the AI's. You are basically reviewing the code that the AI has written.

So should people be thinking about pair-reviewing AI code so that they get the benefits of pair programming along with the speed of AI?


In the "Knowledge Sharing" section of the article, the author says that "Distributed Practice" is one of the most effective ways to learn, and yet says that code reviews are not effective ways of knowledge sharing.

Isn't a code review EXACTLY a distributed practice? What am I missing?

And then he goes on to say "underlining and summarization while reading are least effective" -- I don't understand how that is even related to reviewing code.


If 42% of the comments on a pull request are related to increasing the understandability of the code, can we assume that just understanding the proposed code changes consumes the majority of the time spent on reviewing a pull request?

