> Software engineers are scared of designing things themselves.
When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.
Those beliefs aren't always true, but they're often true.
Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.
To be blunt, I think it's a form of mania that drives someone to reject human-written code in favor of LLM-generated code. Every time I read writing from this perspective that exceeds a paragraph, I quickly realize the article itself was written by an LLM. When they automate this much writing, it makes me wonder how much of their own reading they automate away too.
The below captures this perfectly. The author is trying to explain that vibe-coding their own frameworks lets them actually "understand" the code, while not noticing that the LLM-generated text they used to make this point is talking about cutting and sewing bricks.
> But I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time.
I think the bit you quoted is a tie-in with an earlier bit:
> I can be the architect without the wearing act of laying every single brick and spreading the mortar. I can design the dress without the act of cutting and sewing each individual piece of fabric
To me, this text doesn't read as entirely written by an LLM, though there is definitely an air of LLM about it, so maybe the first draft was.
Correct. History is rife with examples of manias taking hold of societies. I recommend "Memoirs of Extraordinary Popular Delusions and the Madness of Crowds" by Charles Mackay[1]; it's an absolutely fascinating book.
Yeah, the “not invented here” syndrome was considered an anti-pattern before the agentic coding boom, and I don't see how these tools make it irrelevant. If you're starting a business, it's still likely a distraction to write all of the components of your stack from scratch. Agentic tools have made development less expensive, but it's still far from zero. By the author's admission, they still need to think through all these problems critically, architect them, and pick the right patterns. You also have to maintain all this code. That's a lot of energy that's not going towards the core of your business.
What I think does change is that you can now more easily write components that are tailor-made to your problem and situation. Some of these frameworks are meant to solve problems at varying levels of complexity and need to worry about avoiding breaking changes. It's nice to have the option to develop alternatives that are as sophisticated as your problem needs and no more. But I'm not convinced that it's always the right choice to build something custom.
The cost of replacement-level software drops a lot with agentic coding. And maintenance tasks are similarly much smaller time sinks. When you combine that with the long-standing benefits of in-house software (customizable to your exact problem, tweakable, often cleaner code because the feature set can be a lot smaller), I think a lot of previously obvious dependencies become viable to write in-house.
It's going to vary a lot by the dependency and scope - obviously, owning your own React is a lot different from owning your own left-pad - but to me it feels like there's no way that agentic coding doesn't shift the calculus somewhat. Particularly when agentic coding makes a lot of nice-to-have mini-features trivial to add, so the developer experience gap between a maintained library and a homegrown solution is smaller than it used to be.
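To make "owning your own left-pad" concrete, here's a minimal sketch of what such an in-house replacement might look like (illustrative only; the name and behavior here are assumptions, not the actual left-pad package):

```ts
// Hypothetical in-house stand-in for a trivial dependency like left-pad:
// a few lines of code, no external package to audit, update, or trust.
function leftPad(input: string, targetLength: number, padChar = " "): string {
  if (input.length >= targetLength) return input;
  return padChar.repeat(targetLength - input.length) + input;
}

console.log(leftPad("42", 5, "0")); // prints "00042"
```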
My problem with frameworks has always been that the moment I want to do something the framework writers aren't interested in, I now have three problems: my problem, how to implement it on the underlying platform, and how to work around the framework so it doesn't break my feature.
Yes, this happens in every framework I've ever used. My approach used to be to try to work around it, but then I'd end up with local exceptions to what the framework does, and that is inevitably where problems/bugs pop up. Now I simply say "we can't implement the feature that way in this framework, we need to rework the specification." I no longer try to work against the framework; it's just a massive time sink and creates problems down the road.
It's like designing a kitchen where you don't make all the spaces some multiple of three inches. Now standard cabinets and appliances will not fit. You will be using filler panels or need custom cabinetry. And anyone who later wants new countertops or different cabinets will be working around this design too. Just follow the established standard practices.
I'm so glad software engineering isn't my job. I love solving problems, and I'm somewhat better at using code to do it than my peers (fellow scientists), but I would hate to have a boss/client that says "it needs to do X" and the framework writer (or SDK, à la Android/Xcode) say "no, that hurts my profits/privacy busting".
I've never found something that was impossible to implement in any framework or SDK. Even in Android SDK land, you can easily get access to an OpenGL surface and import the whole world via the NDK. There's nothing limiting other than the OS itself and its mechanism.
Same with web frameworks. Even React (a library) has its escape hatches to let in the rest of the world.
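For instance, refs are React's documented escape hatch for handing a DOM node over to non-React code. A minimal sketch (the widget and its behavior are made up for illustration):

```tsx
import { useEffect, useRef } from "react";

function LegacyWidgetHost() {
  const containerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const el = containerRef.current;
    if (!el) return;
    // Any imperative, framework-free code can own this subtree:
    // a jQuery plugin, a canvas charting library, a web component, etc.
    el.textContent = "rendered outside React's declarative flow";
    return () => {
      el.textContent = ""; // clean up when the component unmounts
    };
  }, []);

  // React renders the container but leaves its contents alone.
  return <div ref={containerRef} />;
}

export default LegacyWidgetHost;
```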
Where is your copy of the Android source code for the device you're manufacturing? Because that's how you get the full feature set. Otherwise you will be restricted by Android's aggressive suspending and killing policies.
> I would hate to have a boss/client that says "it needs to do X" and the framework writer (or SDK, à la Android/Xcode) say "no, that hurts my profits/privacy busting".
An answer to such a request should be: "We would need to ship a custom version of Android." Just like if you need to set up a web server on a Linux system, you need to be root. You don't choose shared hosting and then complain about the lack of permissions.
That's amazing: shared hosting on the device I bought. No thank you. I'll root the damn thing and do as I please. If future devices don't allow that, I won't have a reason to carry them in my pocket.
Yeah, I'm huge on using LLMs for coding, but one of the biggest wins for me is that the LLM already knows the frameworks. I no longer need to learn whatever newest framework there is. I'll stick to my frameworks, especially when using an LLM to code.
After 3 decades as a SWE, I mostly found both i) and ii) not to be true. A lot of frameworks are not built from the ground up as “I am building a thing to solve x” but as “I had a thing and built something that may (or may not) be generally useful,” so a lot of them carry weight from what they were originally built for. Then people start making requests to mold the framework to their needs; some get implemented, some don't. For those that don't, good teams will build extensions/plugins etc. into the framework, and pretty soon you've got a monster thing inside your codebase that you probably did not need to begin with. I think every single ORM that I've ever used fits this description.
Totally. Frameworks also make it a lot easier for new team members to contribute. React, for example, makes it a lot easier to hire. Any project with moderate size will require some kind of convention to keep things consistent and choosing a framework makes this easier.
Now look at the cross team collaboration and it gets even harder without frameworks. When every team has their own conventions, how would they communicate and work together? Imagine a website with React, Vue, Angular all over the place, all fighting for the same DOM.
And there was a time when using libraries and frameworks was the right thing to do, for that very reason. But LLMs have the equivalent of way more experience than any single programmer, and can generate just the bit of code that you actually need, without having to include the whole framework.
As someone who’s built a lot of frontend frameworks, this isn’t what I’ve found. Instead, I’ve found that you end up with the middle-ground choice, which, while effective, is no better than the externally maintained library of choice. The reason to build your own framework is so it’s tailor-suited to your use cases. LLMs can help with the architecting required to do that, but you have to guide them, and to guide them you need expertise.
I would like a more reliable way to activate this "way more experience."
What I see in my own domain I often recognize as superficially working but flawed in various ways. I have to assume the domains I am less familiar with are the same.
> can generate just the bit of code that you actually need
Design is the key. Codebases (libraries and frameworks not exempt) have a designed uniformity to them. How does a beginner learn to do this sort of design? Can it be acquired completely by the programmer who uses LLMs to generate their code? Can it be beneficial to recognize opinionated design in the output of an LLM? How do you come to recognize opinion?
In my personal history, I've worked alongside many programmers who only ever used frameworks. They did not have coding design sensibilities deeper than a social populist definition of "best practice." They looked to someone else to define what they can or cannot do. What is right to do.
Reducing ambiguity by definition increases effective communication. Any number of social experts would undoubtedly herald an increase in effective communication as an unequivocal boon to human relationships.
Despite the name, many people use "credit cards" simply for rewards and enhanced purchase protections, with only incidental use of the credit facility.
In the US market, it is surprising that someone would choose to use a debit card over a credit card (if they have the choice) because they are giving up the rewards and enhanced purchase protections, which are available at effectively zero cost.
If I used a debit card over a credit card, I'd effectively be paying ~2% more for most things I buy, for no benefit.
Not to mention the grace period. Especially with high interest rates, it's another perk to have thousands of my dollars stay in the bank all month while my credit card bill piles up. This matters less when rates are super low.
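A back-of-envelope sketch of those two perks combined (every number here is an assumption for illustration, not a figure from the thread):

```ts
// Assumed inputs; adjust to taste.
const monthlySpend = 4_000;  // dollars charged to the card each month
const rewardsRate = 0.02;    // ~2% cash back, as mentioned above
const savingsApy = 0.05;     // what idle cash earns in a high-rate environment
const avgFloatDays = 40;     // ~15 days avg until statement close + ~25-day grace

const annualRewards = monthlySpend * 12 * rewardsRate;                      // $960
const annualFloat = monthlySpend * 12 * savingsApy * (avgFloatDays / 365);  // ~$263

console.log(`rewards: $${annualRewards.toFixed(0)}, float: $${annualFloat.toFixed(0)}`);
// => rewards: $960, float: $263 — both forfeited by paying with a debit card.
```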
One thing I didn't truly appreciate until my wife and I consolidated our spending and had children - having nearly every expense flow through a credit card puts total spending into perspective without having to look through bank statements or keep up a spreadsheet. Getting a $10k bill when you're expecting $8k (or a $30k bill when you're expecting $20k) can be a pretty jarring event and is a built-in monthly touch point to review budgeting and spending.
It wouldn't be quite the same impact spread out over 5 cards paid out of multiple checking accounts with slightly different billing cycles.
> One thing I didn't truly appreciate until my wife and I consolidated our spending and had children - having nearly every expense flow through a credit card puts total spending into perspective without having to look through bank statements or keep up a spreadsheet.
This can work amazingly well for some folks, and be a spiral of debt for others. It's generally good advice if you can and do actually pay off your credit cards every month; it quickly gets out of control as soon as you don't or won't, for one reason or another.
Better fraud protection, too. Depending on the bank it can be a real battle to get fraudulent charges dropped and funds restored, but credit card companies go out of their way to make that process easy. Some even offer it as a function of their site/app so you don’t even need to make a call to get things resolved.
I have several cards and don’t keep a balance on any of them. They’re a tool with several uses, and one of mine is to be able to pay for things without exposing my debit card/bank account.
The problem with the housing issue is that real solutions to it are extremely unpopular, even among people who agree about the scale and intensity of the problem.
The regular voting public doesn't even agree that there's a connection between increasing the supply of housing and housing becoming more affordable.
Their position is, roughly, "there's plenty of housing already - it just needs to be more affordable for regular people". Sometimes this even manifests in support for self-defeating demand subsidies like help-to-buy schemes for new homeowners.
This is a position that can never be satisfied because it is fundamentally disconnected from reality. It is equivalent to the meme of the dog with the stick in its mouth who wants you to throw the stick for them, but not take the stick from them.
As property prices increase, developers are more incentivized to build new properties and increase density.
The increase in supply then lowers prices.
The problem comes when local laws and the planning permission system make it hard or impossible to increase the supply of homes. Then there's no balancing force to bring prices down when they go up.
I certainly agree with your last paragraph. However, whilst I believe you're not wrong about the first, I don't believe that is the only option for increasing supply.
For example, if you look at some of the densest cities in the world, they are still predominantly standalone single-family homes, just much more tightly packed, and in forms we can't build here. So I believe zoning and planning are the key issues, and I think property developers would actually play a smaller role in solving the supply problem if you allowed individuals to solve it themselves under less strict zoning and planning.
Obviously big developments still play a role, but at the stage American cities are often at, NYC excluded, I think zoning being more favourable to medium density would go a long way.
To me, writing in full, formally correct sentences, being careful to always use correct punctuation, starts to feel a little pretentious or tryhard in some contexts.
It doesn't feel too much like that here on HN. But on reddit, I use less formal structure most of the time, and that feels natural to me.
> If your database goes down at 3 AM, you need to fix it.
Of all the places I've worked that had the attitude "if this goes down at 3AM, we need to fix it immediately", there was only one where that was actually justifiable from a business perspective. I've worked at plenty of places that had this attitude despite the fact that overnight traffic was minimal and nothing bad actually happened if a few clients had to wait until business hours for a fix.
I wonder if some of the preference for big-name cloud infrastructure comes from the fact that during an outage, employees can just say "AWS (or whatever) is having an outage, there's nothing we can do" vs. being expected to actually fix it
From this perspective, the ability to fix problems more quickly when self-hosting could be considered an antifeature by the employee getting woken up at 3am.
No. You sit on the call and wait to restore your service to your users. There's bullshit toil in disabling scale-in as the outage gets longer.
Eventually, AWS has a VP of something dial in to your call to apologize. They're unprepared and offer no new information. They get handed off to a side call for executive bullshit.
AWS comes back. Your support rep only vaguely knows what’s going on. Your system serves some errors but digs out.
Really? That might be an anecdote sampled from unusually small businesses, then. Between myself and most peers I’ve ever talked to about availability, I heard an overwhelming majority of folks describe systems that really did need to be up 24/7 with high availability, and thus needed fast 24/7 incident response.
That includes big and small businesses, SaaS and non-SaaS, high scale (5M+ rps) to tiny scale (100s-10k rps), and all sorts of different markets and user bases. Even at the companies that were not staffed or providing a user service over night, overnight outages were immediately noticed because on average, more than one external integration/backfill/migration job was running at any time. Sure, “overnight on call” at small places like that was more “reports are hardcoded to email Bob if they hit an exception, and integration customers either know Bob’s phone number or how to ask their operations contact to call Bob”, but those are still environments where off-hours uptime and fast resolution of incidents was expected.
Between me, my colleagues, and friends/peers whose stories I know, that’s an N of high dozens to low hundreds.
IME the need for 24x7 for B2B apps is largely driven by global customer scope. If you have customers in North America and Asia, you now need 24x7 (and x365, because of little holiday overlap).
That being said, there are a number of B2B apps/industries where global scope is not a thing. For example, many providers who operate in the $4.9 trillion US healthcare market do not have any international users. Similarly the $1.5 trillion (revenue) US real estate market. There are states where one could operate where healthcare spending is over $100B annually. Banks. Securities markets. Lots of things do not have 24x7 business requirements.
I’ve worked for banks, multiple large and small US healthcare-related companies, and businesses that didn’t use their software when they were closed for the night.
All of those places needed their backend systems to be up 24/7. The banks ran reports and cleared funds with nightly batches—hundreds of jobs a night for even small banking networks. The healthcare companies needed to receive claims and process patient updates (e.g. your provider’s EMR is updated if you die or have an emergency visit with another provider you authorized for records sharing—and no, this is not handled by SaaS EMRs in many cases) over night so that their systems were up to date when they next opened for business. The “regular” businesses closed for the night generated reports and frequently had IT staff doing migrations, or senior staff working on something at midnight due the next day (when the head of marketing is burning the midnight oil on that presentation, you don’t want to be the person explaining that she can’t do it because the file server hosting the assets is down all the time after hours).
And again, that’s the norm I’ve heard described from nearly everyone in software/IT that I know: most businesses expect (and are willing to pay for or at least insist on) 24/7 uptime for their computer systems. That seems true across the board: for big/small/open/closed-off-hours/international/single-timezone businesses alike.
You are right that a lot of systems at a lot of places need 24x7. Obviously.
But there are also a not-insignificant number of important systems where nobody is on a pager, where there is no call rotation[1]. Computers are much more reliable than they were even 20 years ago. It is an Acceptable Business Choice to not have 24x7 monitoring for some subset of systems.
Until very recently[2], Citibank took their public website/user portal offline for hours a week.
1 - if a system does not have a fully staffed call rotation with escalations, it's not prepared for a real off-hours uptime challenge
2 - they may still do this, but I don't have a way to verify right now.
This lasts right up until an important customer can't access your services. Executives don't care about downtime until they have it, then they suddenly care a lot.
You can often have services available for VIPs, and be down for the public.
Unless there's a misconfiguration, usually apps are always visible internally to staff, so there's an existing methodology to follow to make them visible to VIPs.
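A minimal sketch of that kind of gating, assuming a simple header-based allowlist (the header name and tokens are made up; a real system would usually key off auth or a feature-flag service):

```ts
import http from "node:http";

// Hypothetical allowlist; in practice this would come from your auth system
// or a feature-flag service, not a hardcoded set.
const vipTokens = new Set(["internal-staff", "vip-customer-acme"]);
let outageMode = true; // flipped on during an incident

http.createServer((req, res) => {
  const token = req.headers["x-client-token"];
  const isVip = typeof token === "string" && vipTokens.has(token);

  if (outageMode && !isVip) {
    // The general public sees a maintenance response...
    res.writeHead(503, { "Retry-After": "3600" });
    res.end("Service temporarily unavailable");
    return;
  }
  // ...while staff and VIPs still reach the real service.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("OK");
}).listen(8080);
```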
But sometimes none of that is necessary. I've seen, at a 1B market cap company, a failure case where the solution was manual execution by customer success reps while the computers were down. It was slower, but not many people complained that their reports took 10 minutes to arrive after being parsed by Eye Ball Mk 1s instead of the 1 minute of wait time they were used to.
Uptime is also a sales and marketing point, regardless of real-world usage. Business folks in service-providing companies will usually expect high availability by default, only tempered by the cost and reality of more nines.
Also, in addition to perception/reputation issues, B2B contracts typically include an SLA, and nobody wants to be in breach of contract.
I think the parent you're replying to is wrong, because I've worked at small companies selling into large enterprise, and the expectation is basically 24/7 service availability, regardless of industry.
I would say something like 95% of the code I have been paid to write as a software engineer has 0% test coverage. Like, literally, not a single test on the entire project. Across many different companies and several countries, frontend and backend.
I wonder if I'm an anomaly, or if it's actually more common that one might assume?
Once you realize that automated testing gives you a level of confidence throughout iteration that you can't replicate through manual interaction (nor would you want to), you never go back.
It's just a matter of economics. Where the cost of bugs in production is low, which is probably the vast majority of the software out there, extensive test coverage simply doesn't make economic sense. Something breaks in some niche app, maybe someone is bothered enough to complain about it, it gets fixed at some point, and everybody moves on.
Where the costs are high, like say in safety critical software or large companies with highly paid engineers on-call where 9s of uptime matters, the amount of testing and development rigor naturally scale up.
This is why rigid stances like that from "Uncle Bob" are shortsighted: they have no awareness of the actual economics of things.
Way more common. Tests are at best overrated, and doing them properly is a big PITA. For one thing, the person writing the tests and the person writing the code should be different. And our languages are not really suited to the requirements of testing. Tests can and do save your ass in certain situations, but the false security they provide is probably more dangerous.
This sounds very "the perfect is the enemy of the good". Tests don't need to be perfect, they don't need to be written by different people (!!!), and they don't need to cover 100% of the code. As long as they're not flaky (tests which fail randomly really can be worse than nothing), it really helps in development and maintenance to have some tests. It's really nice when the (frequent) mistakes I make show up on my machine or on the CI server rather than in production, and my (very imperfect, not 100% "done properly") tests account for a lot of those catches.
Obviously pragmatism is always important and no advice applies to 100% of features/projects/people/companies. Sometimes a test is more trouble to write than it's worth, and TDD never worked for me with the exception of specific types of work (it's good when writing parsers, I find!).
From my experience, though, I often make logical errors in my code but not in my tests, and I frequently catch errors because of this. I think that's a fairly normal experience with writing automated tests.
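For example, even a couple of cheap, deterministic tests over a small helper will catch the kind of logical slip described above (the helper and cases here are hypothetical):

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical helper under test: small, pure, and easy to get subtly wrong.
function splitName(full: string): { first: string; last: string } {
  const parts = full.trim().split(/\s+/);
  return { first: parts[0] ?? "", last: parts.slice(1).join(" ") };
}

test("handles a middle name", () => {
  assert.deepEqual(splitName("Ada Augusta Lovelace"), {
    first: "Ada",
    last: "Augusta Lovelace",
  });
});

test("tolerates stray whitespace", () => {
  assert.deepEqual(splitName("  Grace   Hopper "), {
    first: "Grace",
    last: "Hopper",
  });
});
```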
Would having someone else write the tests catch more logical errors? Very possibly, I haven't tried it but that sounds reasonable. It also does seem like that (and the other things it implies) would be a pretty extreme change in the speed of development. I can see it being worth it in some situations but honestly I don't see it as something practical for many types of projects.
What I don't understand is saying "well, we can't do the really extremely hard version, so let's not do the fairly easy version", which is how I took your original comment.