Vibe coding sits on an axis from smart autocomplete to one-shotting a $1B SaaS. Traditional software engineering was about holding the system in your head and translating it into syntax, fighting tooling and architecture decisions along the way. Done properly, I think it removes many of those friction points on the way to validating and implementing an idea.
Now it's easier to traverse a live plan and to quickly make micro pivots as you go.
I also think that architecture needs to change: we need design patterns that provide as much context to the LLM as possible, to increase its understanding.
But the model doesn't need to read the node_modules to write a React app, it just needs to write the React code (which it is heavily post-trained to be able to use). So the fair counter example is like:
function Hello() {
  return <button>Hello</button>;
}
Fair challenge to the idea. But what I am saying is that every line of boilerplate, every import statement, and every configuration file consumes precious tokens.
The more code, the more surface area the LLM needs to cover before understanding or implementing correctly.
Right now the answer to expensive token limits is the most token-efficient technology. Let's reframe it: was React made to help humans organize code better, or machines?
Is the high code-to-functionality ratio really necessary? Are 50 lines of setup justified when only 3 lines do real work?
At current prices you can pretty much get away with murder, even for the most expensive models out there. Say $14/million output tokens: 10k output tokens is 14 cents, which is roughly 40k characters.
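To make that arithmetic concrete, here's a back-of-the-envelope sketch. The $14/million price is the figure from the comment above, and the 4-characters-per-token ratio is just a common rule of thumb, not any provider's actual tokenizer behavior:

```javascript
// Rough output-token cost estimate. The price and the chars-per-token
// ratio are assumed round numbers, not a real provider's rate card.
const PRICE_PER_MILLION_OUTPUT_TOKENS = 14.0; // USD, assumed
const CHARS_PER_TOKEN = 4;                    // common rule of thumb

function outputCostUSD(tokens) {
  return (tokens / 1_000_000) * PRICE_PER_MILLION_OUTPUT_TOKENS;
}

const tokens = 10_000;
console.log(outputCostUSD(tokens));    // ~0.14 USD
console.log(tokens * CHARS_PER_TOKEN); // ~40000 characters
```

Even a long agentic session that emits a few hundred thousand output tokens lands in the single-digit dollars at these assumed rates; the real cost pressure tends to come from repeatedly re-sending large input contexts.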
The way to use LLMs for development is to use the API.
I'm not so worried about the money, but more about context rot. I used spec-driven development for a week and had constant compacting with Claude Code. I burned €200 in one week, and now I'm trying something different: only show diffs, and always talk to me in interfaces.
I do think that at some point there will be frameworks or languages optimised for LLMs.
From punch cards to assembly, to C, to modern languages and web frameworks, each generation raised the abstraction. Agentic frameworks are the next one.
With Visual Studio and Copilot, I like that it runs a command, reads the output back, and automatically continues based on the error message. Say there's a compilation error or a failed test case: it reads it and feeds that back into the system automatically. Once the plan is satisfied, it marks it as completed.
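The loop described above can be sketched in a few lines. Note that `runTests` and `askModelForFix` are hypothetical stubs standing in for the real test runner and model call; Copilot's actual internals aren't public:

```javascript
// Minimal sketch of a run -> read error -> feed back loop.
// Both helpers below are hypothetical stand-ins, not real APIs.
function runTests(code) {
  // Stub: pretend the tests fail until the code contains a fix marker.
  return code.includes("fix")
    ? { ok: true }
    : { ok: false, error: "compilation error: missing fix" };
}

function askModelForFix(code, error) {
  // Stub for an LLM call that patches the code given the error message.
  return code + " // fix for: " + error;
}

function agentLoop(code, maxIterations = 5) {
  for (let i = 0; i < maxIterations; i++) {
    const result = runTests(code);
    if (result.ok) return { code, status: "completed" }; // plan satisfied
    code = askModelForFix(code, result.error);           // feed error back in
  }
  return { code, status: "gave up" }; // bail out rather than loop forever
}
```

The iteration cap is the important design choice: without it, a model that keeps producing the same broken patch will burn tokens indefinitely.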
Have you tried scoped context packages? Basically, for each task I create a .md file that includes relevant file paths, the purpose of the task, key dependencies, a clear plan of action, and a test strategy. It's like a mini local design doc. I found that it grounds the implementation and stabilizes the output of the agents.
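For reference, one such scoped context package might look like the following. The task, paths, and section names here are invented for illustration, not taken from the commenter's actual setup:

```markdown
# Task: Add retry logic to the API client

## Relevant files
- src/api/client.js
- src/api/config.js

## Purpose
Transient network errors currently bubble up to the UI; retry idempotent GETs.

## Key dependencies
- The fetch wrapper in src/api/client.js

## Plan
1. Add a retry helper with exponential backoff.
2. Wire it into GET requests only.

## Test strategy
Unit-test the backoff schedule; mock fetch to fail twice, then succeed.
```

The point is that everything the agent needs sits in one small file, so it never has to search the repo and pull unrelated code into context.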
I read this suggestion a lot: "Make clear steps, a clear plan of action." Which I get. But then, instead of having an LLM flail away at it, could we give it to an actual developer? It seems like we've finally realized that clear specs make dev work much easier for LLMs. But the same is true for a human. The human will ask more clarifying questions and not hallucinate; the LLM will roll the dice and pick a path. Maybe we as devs would just rather talk with machines.
Yes, but the difference is that an LLM produces the result instantly, whereas a human might take hours or days.
So if you can get the spec right, and the LLM+agent harness is good enough, you can move much, much faster. It's not always true to the same degree, obviously.
Getting the spec right, and knowing what tasks to use it on -- that's the hard part that people are grappling with, in most contexts.
I'm using it to help me build what I want and learn how. It being incorrect and needing questioning isn't that bad, so long as you ARE questioning it. It has brought up so many concepts, parameters, etc. that would be difficult to find and learn alone. Documentation can often be very difficult to parse; LLMs make it easier.
ASP.NET Core comes with its own built-in web server named Kestrel, which is very highly optimized. On most projects I use it totally bare-metal, though I figure most run it behind a reverse proxy like nginx or YARP.
YARP stands for Yet Another Reverse Proxy, and Aspire is in a similar space to Testcontainers, i.e. orchestration of multiple executables for testing (and other things). No, it's not an alternative to Kestrel.
These are three different tools that do different things. The point is that they are better examples of the "modern MS ASP infra" space than "nginx, IIS".
> A web server should never be directly exposed to the Internet
That's what web servers are made for, no? Like Apache, Nginx etc. I mean, you could certainly put HAProxy in front but you'd need a good reason to do this.
More often than not, for any serious application backend, you probably want a web application firewall (WAF) in front of it and SSL termination upstream of the web server.
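A minimal sketch of that shape in nginx terms, assuming Kestrel listening on localhost. The hostname, port, and certificate paths are placeholders; a real deployment would also add a WAF layer and tighter TLS settings:

```nginx
# Illustrative only: terminate TLS at the proxy, forward plain HTTP to Kestrel.
server {
    listen 443 ssl;
    server_name example.com;                       # placeholder

    ssl_certificate     /etc/ssl/example.com.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:5000;          # assumed Kestrel port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The `X-Forwarded-*` headers matter because the app behind the proxy otherwise sees every request as plain HTTP from 127.0.0.1.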
This would have been assembly code, probably 6809 or 68000 system I had back then. 6809 would have required dumping intermediate data to a disk. I don't recall just when I first got a hard-disk, which would probably have been a massive 10 megabytes in size.
And yes, I saw that transition. I learned to program using Fortran IV and IBM 1130 assembly in the mid-70s, using punched cards. Wrote a MIXAL assembler and simulator for the minicomputer at the local college around 1976; it was about 7000 punched cards in length, all assembly. Got a Commodore PET in 1978, moved on to SS-50 based 6809 and 68008 systems in the late 70s/early 80s, with a serial terminal.
You just reminded me of when I bought my first PC in 1990. One of my former professors, on learning I had bought it with a 200MB hard drive, declared that I was crazy to buy such a large hard drive because I would never fill it up.¹
One of my more embarrassing memories (technical ones, anyway) was having an argument with a couple of friends in college in 1988/89. I felt that sure, an internal hard drive is a nice feature, but swapping floppies wasn't all that terrible.
You could have made a significant amount of money betting against my technical predictions over the last few decades.