For pretty much every single person you or I personally know, that would be the equivalent of the end of humanity.

Let's not nitpick here. Worldwide human suffering and tragedy are equivalent to the end of humanity for most.

We can sit here and armchair-theorize from the most prosperous, comfortable era of human history. But we also have to recognize that this era is a blip in history. Sure, that's a lot of data showing humanity surviving. But it's also very little data showing any kind of life most would want to live.


This is a pretty discriminatory comment, and I've honestly seen zero hint of it in reality. And this is coming from someone who didn't go to a particularly prestigious school. I rarely even find out what schools my colleagues went to. But the ones I know who did go to those prestigious schools are beyond humble.


Not really. That's bad faith. I've worked at lots of places and probably hired about 200 engineers over my career so far, and I have noticed this pattern.

I stopped looking at educational background years ago for fear that it would bias me either way. We shouldn't base someone's suitability at 40 on what opportunities they were afforded at 17.

I do have a somewhat prestigious pedigree, btw. I removed it from my resume around 2010 and never looked back.


Much less money lost, but Amazon is notorious for not providing free game codes that are supposed to be included with GPU purchases. The customer rep at first apologized and offered a small refund (less than the cost of the game). A later rep started implying I was trying to defraud Amazon.

Many people online share similar experiences. I wonder how much money this wide-scale fraud saves them.


Amazon doing dodgy things with PC parts is why I will no longer purchase them from there. I'll happily take the extra £10-20 hit to buy from a "proper" retailer (i.e., Scan or Overclockers here in the UK), knowing that issues can be resolved more easily.


Basically locks you out of HDR, high frame rates, VRR, or (more importantly) new panel technology like OLED.



I don't think I've ever seen someone seriously argue that personal throwaway projects need thorough code reviews of their vibe code. The problem comes when I'm maintaining a 20-year-old codebase used by anywhere from 1M to 1B users.

In other words, you can't vibe code in an environment where evaluating "does this code work?" is an existential question. This is the case where 7k LOC/day becomes terrifying.

Until we get much better at automatically proving the correctness of programs, we will need review.


My point about my experience with this plugin isn’t that it’s a throwaway or meaningless project. My point is that it might be enough in some cases to verify output without verifying code. Another example: I had to import tens of thousands of records of relational data. I got AI to write the code for the import. All I verified was that the data was imported correctly. I didn’t even look at the code.


In this context I meant throwaway as "low stakes", not "meaningless". Again, evaluating the output of a database import like that could be existential for your company, given the context. Not to mention there are many cases where evaluating the output isn't feasible for a human.


Human code review does not prove correctness. Almost every software service out there contains bugs. Humans have struggled for decades to reliably produce correct software at scale and speed. Overall, humans have a pretty terrible track record of producing bug-free correct code no matter how much they double-check and review their code along the way.


So the solution is to stop doing code reviews and just YOLO-merge everything? After all, everything is fucked already, how much worse could it get?

For the record, there are examples where human code review and design guidelines can lead to very low-bug code. NASA published their internal guidelines for producing safety-critical code[1]. The problem is that the development cost of software when using such processes is too high for most companies, and most companies don't actually produce safety-critical software.
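
To give a flavor of those guidelines: one of the ten rules is that every loop must have a fixed upper bound that a checker can verify statically. A minimal C++ sketch of the style (my own illustration, not NASA's code):

    constexpr int MAX_SENSORS = 64;

    // Power-of-10 style: even if `n` is corrupted, the loop cannot run
    // unbounded, because the compile-time constant caps the iteration count.
    bool find_sensor(const int* ids, int n, int want) {
        for (int i = 0; i < n && i < MAX_SENSORS; ++i) {
            if (ids[i] == want) return true;
        }
        return false;
    }

    int main() { int ids[] = {3, 7}; return find_sensor(ids, 2, 7) ? 0 : 1; }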

My experience with the vast majority of LLM code submitted to projects I maintain is that it has subtle bugs that I manage to find through fairly cursory human review. The copilot code review feature on GitHub also tends to miss actual bugs and report nonexistent bugs, making it worse than useless. So in my view, the death of the benefits of human code review has been wildly exaggerated.

[1]: https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...


No, that's not what I wrote, and it's not the correct conclusion. What I wrote (and what you, in fact, also wrote) is that in reality we generally do not need provably correct software except in rare cases (e.g., safety-critical applications). Suggesting that human review cannot be reduced or phased out at all until we can automatically prove correctness is wrong, because fully 100% correct and bug-free software is not needed for the vast majority of code being produced. That does not mean we immediately throw out all human review, but the bar for changing how we review code is certainly much lower than the above poster suggested.


I don't really buy your premise. What you're suggesting is that all code has bugs, and those bugs have equal severity and distribution regardless of any forethought or rigor put into the code.

You're right: human review and thorough design are a poor approximation of proving assumptions about your code. Yes, bugs still exist. No, you won't be able to prove the correctness of your code.

However, I can pretty confidently assume that malloc will work when I call it. I can pretty confidently assume that my thoroughly tested linked list will work when I call it. I can pretty confidently assume that following RAII will avoid most memory leaks.
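
To make that concrete, here's the kind of invariant I'm talking about (a minimal C++ sketch of my own, not anyone's production code):

    #include <cstdio>
    #include <memory>

    // RAII: the unique_ptr owns the FILE and closes it on every exit path,
    // so the "no leaked handles" invariant holds without a reviewer
    // re-checking each return statement.
    void dump(const char* path) {
        std::unique_ptr<FILE, int (*)(FILE*)> f(std::fopen(path, "r"), &std::fclose);
        if (!f) return;  // early return: still no leak
        // ... read from f.get() ...
    }                    // scope exit: fclose runs automatically

    int main() { dump("example.txt"); }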

Not all software needs meticulous, careful human review. But I believe the compounding cost of lost abstractions and abandoned invariants can be massive, and I don't see any way to maintain them other than human review or proven correctness.


I did suggest all code has bugs (up to some limit -- while I wasn't careful to specify this, as discussed above, there does exist an extraordinary level of caution and review that, if used, can approximate perfectly bug-free code, as in your malloc example and the NASA example, but that standard is not currently applied to 99.9% of human-generated and human-reviewed code, and it doesn't need to be). I did not suggest anything else you said I suggested, so I'm not sure why you made those parts up.

"Not all software needs meticulous careful human review" is exactly the point. The question of exactly what software needs that kind of review is one whose answer I expect to change over the next 5-10 years. We are already at the point where it's so easy to produce small but highly non-trivial one-off applications that one needn't examine the code at all -- I completely agree with the above poster that we're rapidly discovering new examples of software development where output-verification is all you need, just like right now you don't hand-inspect the machine code generated by your compiler. The question is how far that will be able to go, and I don't think anybody really knows right now, except that we are not yet at the threshold. You keep bringing up examples where the stakes are "existential", but you're underestimating how much software development does not have anything close to existential stakes.


Does Taiwan not have healthcare? Verbatim from Wikipedia:

> According to the Numbeo Health Care Index in 2025, Taiwan has the best healthcare system in the world, scoring 86.5 out of 100,[6] a slight increase from 86 the previous year.[7] This marked the seventh consecutive year that Taiwan has ranked first in the Numbeo Health Care Index.[8]


Access to healthcare and a right to healthcare are distinct concepts.


Literally everything will say "AI generated" to avoid potential liability. You'll have a "known to the State of California to cause cancer" situation.


Thank you for sharing this, holy crap. I have this exact monitor and that popup drives me absolutely insane.


I'm always happy to hear this from people. I get maybe an email a month specifically about this post, haha.


It really depends on the engineer. I've seen some engineers in the exact position you describe: their job description says they influence the org broadly, so that's what they set out to do. They struggle against a political and technical machine, vying for power and trying to build a fiefdom.

Other engineers I've seen (a smaller subset) have that job description more as an observation of their skills and influence. Their mandate isn't to influence; they just do. They are respected for their vast knowledge, historical success, and insight, so they are naturally heeded by most and consequently broadly influence the org.

Both cases sound miserable in their own way, but if I had to choose I'd much rather land in the latter. The latter still involves some politics, but at least it sounds like you're not wasting your life playing stupid games.


I would rather land in the latter too.

I actually don't mind that some people are good at influencing others, through well earned respect, good communication skills and technical chops.

I resent it when it becomes a mandate and some official "badge" in the career ladder. I'm suspicious of these principal/architect types who "parachute" out of nowhere into teams and projects, because it's "their mandate", ask lots of questions, mess with stuff, and then leave and don't take responsibility because "the team owns the project, not them". I've seldom seen this work well. A lot of teams end up politely ignoring what these types say, because they know if you're not a true stakeholder, what you're saying doesn't matter.


Aren't you effectively saying that no one will understand the code they're actually deploying? That's always true to an extent, but at least today you mostly understand the code in your own sub-area. If we're saying the future is AI + careful review, how am I going to have enough context to even do that review?


I expect that in most cases you'll review "hot spots" that AI itself identifies while trusting AI review for the majority of code. When you need to go deeper, I expect you'll have to essentially learn the code to fix it, in roughly the same way people will occasionally need to look at the compiler output to hunt down bugs.


Human trust has to be earned; why should AI trust be any different? If I'm supposed to yolo-approve any random code a machine spits out, it had better prove to me it's nearly flawless, otherwise I'm applying the same review regimen I apply to any other code. To do otherwise is to shame the word "engineering" and the field thereof.


Engineering is a game of tradeoffs, and time is one of the things you have to trade off. Given your strong opinions, I expect you've been in the industry long enough to understand this intuitively.

Regarding proof: if you have contracts for your software, write them up. Gherkin specs, API contracts, unit tests, etc. If you care about performance, add stress tests with SLOs. If you care about code organization, create custom lint rules. There are so many ways to take yourself out of the loop rigorously so you can spend your time more efficiently.
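
For example, one small behavioral contract plus a crude SLO, pinned down as an executable test (a C++ sketch; parse_port and the numbers are invented for illustration):

    #include <cassert>
    #include <chrono>
    #include <cstdlib>

    // Hypothetical function under contract: returns a port in [1, 65535], or -1.
    int parse_port(const char* s) {
        char* end = nullptr;
        long v = std::strtol(s, &end, 10);
        return (end != s && *end == '\0' && v > 0 && v <= 65535) ? (int)v : -1;
    }

    int main() {
        assert(parse_port("8080") == 8080);      // happy path is pinned down
        assert(parse_port("not-a-port") == -1);  // failure mode is pinned down

        // Crude SLO: the hot path stays within a time budget.
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < 100000; ++i) parse_port("8080");
        assert(std::chrono::steady_clock::now() - t0 < std::chrono::milliseconds(500));
    }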


> Regarding proof: if you have contracts for your software, write them up. Gherkin specs, API contracts, unit tests, etc.

We really need widespread adoption of stuff like design-by-contract in mainstream PLs before we can seriously talk about AI coding.
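
You can approximate it by hand today with assertions at interface boundaries (a sketch; Eiffel supports this natively, while most mainstream languages make you fake it like this):

    #include <cassert>
    #include <vector>

    // Hand-rolled design-by-contract: the pre/postconditions sit next to
    // the signature and are enforced in debug builds.
    int pop_back_checked(std::vector<int>& v) {
        assert(!v.empty());                // precondition: the caller's obligation
        const auto old_size = v.size();
        int x = v.back();
        v.pop_back();
        assert(v.size() == old_size - 1);  // postcondition: our guarantee
        return x;
    }

    int main() {
        std::vector<int> v{1, 2, 3};
        return pop_back_checked(v) == 3 ? 0 : 1;
    }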

