Hacker News

It all boils down to: Javascript developers were already used to this paradigm in the browser.


This. A thousand times, this. It's a really good model for something that is I/O bound because it is easy to reason about. This makes the model perfect for events in the browser (both DOM events and XHR).

It's not that Javascript is especially suited to this model, but thanks to the browser, it was already the most used implementation of this model.


I think this, combined with the fact that JS engines were already relatively isolated from the browser itself and could be embedded in other software (in Node.js's case, alongside libuv/libev), made it a really good option. require + npm added a lot more to the mix.

But what really kicked it over the top was mindshare: the broad pool of developers who already knew at least some JS, who now effectively had JS as a DSL for I/O-bound applications.


Perhaps they're used to it, but are they really fluent in it? Because I keep seeing otherwise bright developers having to pile abstraction upon abstraction to keep from screwing up event-driven callbacks.

Underscore, Flux, promises.js, Async, Angular, Node fibers, Step...

How much is all this abstraction costing us, both in terms of computation time and developer time (both writing and debugging)?
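For concreteness, this is the kind of nesting those abstractions exist to tame, alongside the promise version of the same flow. `readUser` and `readPosts` are hypothetical callback-style functions standing in for any error-first async API (like Node's `fs.readFile`), not any real library:

```javascript
// Hypothetical callback-style I/O functions (error-first convention).
function readUser(id, cb) {
  cb(null, { id, name: 'alice' });
}
function readPosts(user, cb) {
  cb(null, [`post by ${user.name}`]);
}

// Raw callback style: each step nests inside the last ("callback hell"),
// and error handling must be repeated at every level.
readUser(1, (err, user) => {
  if (err) throw err;
  readPosts(user, (err2, posts) => {
    if (err2) throw err2;
    console.log(posts[0]); // "post by alice"
  });
});

// Wrapping the same functions in promises flattens the nesting
// and funnels errors to one place.
const readUserP = id => new Promise((res, rej) =>
  readUser(id, (err, user) => (err ? rej(err) : res(user))));
const readPostsP = user => new Promise((res, rej) =>
  readPosts(user, (err, posts) => (err ? rej(err) : res(posts))));

readUserP(1)
  .then(readPostsP)
  .then(posts => console.log(posts[0])) // "post by alice"
  .catch(err => console.error(err));
```

The promise wrapper is pure bookkeeping, which is exactly the cost the question is asking about: every layer trades some runtime and learning overhead for flatter, more composable control flow.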


Maintenance time, too.

We're in something of a JS framework boom (or hell, if you wish) right now. These people are developing as if their shit libraries won't exist in 3 years. Hence, the total lack of documentation and ongoing maintenance from so many of them. I've already had to maintain code that used abandoned JS frameworks. I'm a little scared of what's around the corner here...



