
We have already experimented with letting large neural networks develop software that seems to be correct based on a prompt. They are called developers. This is going to have all the same problems as letting a bunch of green developers go to town on implementation without a design phase.

The point of designing systems is to keep complexity low enough that we can predict all of the system's behaviors, including unlikely edge cases, from the design.

Designing software systems isn't something that only humans can do. It's a complex optimization problem, and someday machines will be able to do it as well as humans, and eventually better. We don't have anything that comes close yet.



> This is going to have all the same problems as letting a bunch of green developers go to town on implementation without a design phase.

Except without all the downsides, because GPT can rewrite the whole program nearly instantly. Do you see why our intuitions around maintenance, "good architecture/design" and good processes may now be meaningless?

It seems a bit premature to say we don't have anything close when we can get working programs nearly instantly out of GPT right now, and that seemed like a laughable fantasy only two years ago.


Let's say I'm a bank, how do I know that my APIs don't allow the unintentional creation of money?

Presumably because the engineers designed the system to prevent that. They didn't build the system by looking at example API calls and constructing something that satisfied the examples but behaved arbitrarily elsewhere. They understood this property as an important invariant. More important than matching the timestamps to a particular ISO format.

I'm not talking about "good" design as "adapting to changing requirements" or adhering to "design principles" or whatever else people say makes a design good.

I'm talking about designing for simplicity so that the behavior of the system can be reliably predicted. This is an objective quality of the system. If you can predict the output, then the system has this quality. If you made it like that on purpose, then you designed it for this quality.

LLMs do not have this simplicity, but a software system you would trust to power a bank does.
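To make the "banks can't create money" invariant concrete, here is a minimal sketch (names and API entirely made up, not from any real banking system) of the double-entry idea: every transfer is a paired debit and credit at a single point of mutation, so the total is conserved by construction rather than by testing:

```python
# Hypothetical sketch: a ledger where money cannot be created,
# because every transfer is a balanced debit/credit pair.
class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        # The invariant is enforced at the only place balances change:
        # a debit is always paired with an equal credit.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.balances[src] < amount:
            raise ValueError("insufficient funds")
        self.balances[src] -= amount
        self.balances[dst] += amount

    def total(self):
        return sum(self.balances.values())

ledger = Ledger({"alice": 100, "bob": 50})
before = ledger.total()
ledger.transfer("alice", "bob", 30)
assert ledger.total() == before  # money is conserved
```

The point is that a reader can verify the invariant by inspecting one small method, which is exactly the kind of predictability-by-design being argued for.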


> Let's say I'm a bank, how do I know that my APIs don't allow the unintentional creation of money?

How do you know now?

> Presumably because the engineers designed the system to prevent that.

How do you know the engineers understood the invariants? How do you know they didn't make a mistake in coding these invariants? Banks still don't use formal methods to prove these invariants last I checked, so no matter what, you need to write tests to check any invariants, and tests still can't achieve 100% certainty.

> I'm talking about designing for simplicity so that the behavior of the system can be reliably predicted.

From the page, it sounds like the system is fairly predictable, generating a program based on a schema and a descriptive method name. If it's not predictable then the model needs to be tuned to make it more predictable, just like how any other software development advances.

If you can design your schema to ensure any invariants are preserved, even better.
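As a hedged illustration of pushing an invariant into the schema itself (table and column names are made up), a database CHECK constraint rejects violating writes no matter what application code, human- or AI-written, attempts:

```python
import sqlite3

# Hypothetical schema: the database itself forbids negative balances,
# independent of any code path that issues the UPDATE.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        name TEXT PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")

try:
    # Overdrawing violates the CHECK constraint and is rejected.
    conn.execute(
        "UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'"
    )
except sqlite3.IntegrityError:
    pass  # the write never happens

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()[0]
```

The balance stays at 100: the invariant survives even a buggy or generated caller.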

Finally, don't confuse the first preview version of the product with where this is going. The project as it stands is fairly simple and predictable, but a bit limited. It does point toward what is possible, though.

You could also have a separate AI trained to do fuzz testing of an API description, automatically and instantly generating thousands of tests checking all possible corner cases. In principle, such systems could be even more robust than human-written ones, simply because of the breadth of testing and the number of iterations you can rapidly go through to converge on a final product.
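The fuzzing idea can be sketched with nothing but the standard library (the `transfer` API here is a hypothetical stand-in for the system under test): generate thousands of random call sequences, including invalid ones, and check a conservation property after every call:

```python
import random

# Hypothetical system under test: a toy accounts API.
balances = {"a": 100, "b": 100, "c": 100}

def transfer(src, dst, amount):
    if amount <= 0 or balances[src] < amount:
        return False  # invalid request, rejected
    balances[src] -= amount
    balances[dst] += amount
    return True

# Fuzzer: many random calls, deliberately including negative,
# zero, and oversized amounts, asserting the "no money created"
# invariant after each one.
rng = random.Random(0)
expected_total = sum(balances.values())
for _ in range(10_000):
    src, dst = rng.sample(list(balances), 2)
    amount = rng.choice([rng.randint(-50, 500), 0, 1])
    transfer(src, dst, amount)
    assert sum(balances.values()) == expected_total
```

A production version would generate the calls from the API schema rather than hard-coding them, which is where a model could plausibly help.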


Great response.

It makes me think of Ben Graham's apt observation from his book "The Intelligent Investor" (which Warren Buffett and Charlie Munger both know and cite religiously):

"You do NOT want your banker to be an 'optimist.'"

If you do not understand what this means, just ask http://perplexity.ai to explain the idiom. No login/signup required [this replaced Google Search, IMHO, for all but the most specific technical inquiries].



