
Node does have concurrency. It does not have parallelism. This is an important distinction: http://stackoverflow.com/questions/1050222/concurrency-vs-pa...


Well, that's debatable. I suppose if you squint a bit, you can mentally perform a reverse CPS transform on a Node program and end up with a set of tasks that execute concurrently. In that case, I'll rephrase: Erlang has parallelism (on adequate hardware) but no shared state; Node has shared state but no parallelism.

Still, given the way Node forces programmers to manually unravel tasks and write everything as callbacks, I'm not inclined to call it "concurrency" even if, e.g., the processing of a group of web requests overlaps in wall-clock time.
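To make the "manual unraveling" concrete, here's a minimal sketch. The functions readConfig and readData are hypothetical stand-ins for real async I/O; the point is that straight-line sequential logic has to be written as a chain of nested callbacks:

```javascript
// Hypothetical async stand-ins for real I/O calls.
function readConfig(cb) {
  setImmediate(() => cb(null, { table: 'users' }));
}
function readData(table, cb) {
  setImmediate(() => cb(null, ['alice', 'bob']));
}

// What would be two sequential statements in a blocking language
// becomes nested callbacks: each step lives inside the previous one.
function handleRequest(done) {
  readConfig((err, cfg) => {
    if (err) return done(err);
    readData(cfg.table, (err, rows) => {
      if (err) return done(err);
      done(null, rows.length);
    });
  });
}

handleRequest((err, n) => console.log('rows:', n)); // prints: rows: 2
```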


The article is not "gobbledygook". A well known limitation of NodeJS is its lack of support for parallelism. You have to try to take advantage of parallelism of the OS itself by pre-forking the Node server.

NodeJS is great for applications with a lot of clients, but not for CPU intensive apps. That's why I predict similar technologies built on Erlang, Scala, and Go will have more longevity than NodeJS.


NodeJS is great for applications with a lot of clients, but not for CPU intensive apps.

Well, you do with node what you do with anything: if you have 4 CPU cores, run 4 copies of your app. Problem solved.


With Node you have to manage the 4 cores if you're doing something where they need to communicate.


That works really well for handling requests for HTML pages, because they tend to render independently of one another. However, you run into trouble when you want to make "Nodes" communicate, and the comment that the article is addressing specifically mentions interprocess communication.


Clearly it's not that easy unless you plan for it from the beginning. One benefit of Erlang is that you have to structure your code like a distributed application. You can mess that up, but it's harder.


This allows you to serve more clients simultaneously, but not to serve any given client faster, i.e. you get higher throughput but not lower latency.


Imagine your app has 16GiB of precomputed tables in memory...


Node.js has a heap size limit. That's one of its biggest weaknesses, IMO, well before people get around to complaining about CPU cores.


I used the word "gobbledygook" because the article is poorly written, not because I think he's wrong to criticize Node. Reread the paragraph on Node and tell me it doesn't meet the Wikipedia definition: "text containing jargon or especially convoluted English that results in it being excessively hard to understand."


poorly written

I had no trouble understanding the paragraph that you complain about and found the article to be very well written.

Where do you see excessive jargon or convolution?


Since the early days of node there has been a proposal to dispatch tasks via the WebWorker API, with a callback - as is currently done for calls to OS subsystems. Sounds like that would be a great way of dispatching CPU intensive tasks without breaking the semantics of Node. What happened to this?


Since the early days of node?

Node's first commit:

    commit 9d7895c567e8f38abfff35da1b6d6d6a0a06f9aa
    Author: Ryan <ry@tinyclouds.org>
    Date:   Mon Feb 16 01:02:00 2009 +0100
    
        add dependencies
How old is Erlang? 25 years or so?

> What happened to this?

There's been some preliminary work on giving spawned node processes a slicker API, with the intent of then being able to optimize them in some way.

Whether or not it'll end up at the WebWorker API is yet to be seen, but that'd certainly fit with node's "don't reinvent BOM conventions where they fit" pattern.


There is longevity in nodejs. As long as Javascript remains king on the client-side, you can pretty much guarantee nodejs will have a future.


That seems too nitpicky to me. You can use those definitions if you like, but it's not the common usage. To most programmers those terms are synonyms. In my experience, people trying to be precise about architectures like node.js use the term "asynchronous" and not "concurrent".

Calling node.js concurrent obscures the important fact under discussion: namely that it won't scale beyond one CPU in a world where 8-core servers are routine.


The distinction between parallelism and concurrency is extremely important. The guys who wrote Real World Haskell did a good job of explaining it here http://book.realworldhaskell.org/read/concurrent-and-multico... (explanation has nothing to do with Haskell).

In essence, concurrency has to do with systemsy stuff: how to do things that might overlap without causing problems (race conditions). On the other hand, parallelism is about breaking a problem into smaller parts and attacking it in pieces. The problem with most languages is that they require the programmer to worry about both at the same time; however, languages like Erlang alleviate most of these problems, the biggest of which is shared state.


You're arguing semantics: about words, not meaning. My point wasn't that this isn't interesting, but that the jargon you are using (and that book is using, for that matter) is revisionist and confusing. That's just not what "concurrency" means to most working programmers, who have used it for decades to talk about (ahem) "systemy stuff".

Rewriting language via blog posts doesn't work (cf. "hacker"). Doing so as a way to, frankly, cover up a huge design flaw in your favorite library just seems dumb to me.


Seems too fine a point for me too. The number of CPUs doesn't matter; that's up to the scheduler. What limits the scheduler is the coordination of shared resources: CPU, locks, memory, disk, network, IO, etc. The multi-node issue doesn't seem like much of a problem in that you can start a node.js process per core. There's very good performance doing this with MySQL, for example.



