
ES6 adds lambdas, destructuring assignment, default/rest/spread arguments and template strings - all of those reduce verbosity. And there is `let`, which has a "normal" block scope, although I'm not really sure it needs that. Additionally, generators let you use normal control flow constructs for IO, if you prefer that to FP.
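Side by side, those features look something like this (a small illustrative sketch; the names are made up):

```javascript
// Arrow function (lambda) with object destructuring and a default parameter.
const scale = ({ x, y }, factor = 2) => ({ x: x * factor, y: y * factor });

const sum = (...nums) => nums.reduce((a, b) => a + b, 0); // rest parameters
const [first, ...tail] = [1, 2, 3];                       // array destructuring
const { x, y } = scale({ x: first, y: sum(...tail) });    // spread + default factor

console.log(`scaled point: (${x}, ${y})`); // template string: "scaled point: (2, 10)"
```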

While it's no Haskell, it certainly isn't much more verbose than other dynamic languages anymore. And a type system like TypeScript or Flow pretty much eliminates the rest of the gotchas.

Off the top of my head, there are two embarrassing holes: bigger integers and parallelism. Can't think of anything else at the moment (macros maybe, but they're a double-edged sword wrt tooling). Wonder if anything else is missing?
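The integer hole comes from every JavaScript number being an IEEE 754 double, so integers above 2^53 - 1 silently lose precision. A quick demonstration (BigInt, added later in ES2020, closes this particular gap):

```javascript
// Every Number is a 64-bit float, so integer precision ends at 2^53 - 1.
const maxSafe = Number.MAX_SAFE_INTEGER;

console.log(maxSafe);                                 // 9007199254740991 (2^53 - 1)
console.log(9007199254740992 === 9007199254740993);   // true: doubles lose precision
console.log(9007199254740992n === 9007199254740993n); // false: BigInt stays exact
```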



> ES6 adds lambdas, destructuring assignment, default/rest/spread arguments and template strings - all of those reduce verbosity.

Yes. But the existing warts do not go away (and neither will they ever, due to the need for backwards compatibility).


There's Sweet.js if you want macros.


We've got parallelism in the browser (sorta) with web workers.


I'm actually glad there isn't parallelism (well, threads/actors/tasks).

I'd go so far as to consider it a feature.


May I ask why? Not having parallelism seems like a net negative to me...


[Edit] Before starting, I just want to state: I know JavaScript has parallelism/concurrency in its supporting "system calls" (i.e. what would, in other languages, be blocking calls, in both the browser and Node). In fact I like this parallelism model, but I was specifically talking about parallelism built into the language (threads, actors, tasks).
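A minimal sketch of that model: the host performs I/O off the main thread, but JavaScript callbacks run one at a time, and the current turn always runs to completion before any callback fires, so no two pieces of JS race on shared state.

```javascript
// The event loop queues callbacks; nothing preempts running JS code.
const log = [];
setTimeout(() => log.push('timer A'), 0); // queued for a later turn
setTimeout(() => log.push('timer B'), 0);
log.push('sync');                         // runs first, uninterrupted
// Only after this turn finishes does the event loop fire A, then B, in order.
```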

This is a long and complicated answer. I doubt I'll do it justice in a few paragraphs, but I'll try.

In one word: simplicity. Primarily, not having parallelism is far simpler than having it. Yes, some models of parallelism are simpler than others, but at the end of the day I think we can all agree none of these models are as simple as not having parallelism. As for "net negative": yes, of course, not having parallelism is a "negative". However, "net negative" depends more on the applied domain than on the concept itself. Simply put, the domains JavaScript is used in don't heavily rely on parallelism (or rather, don't require the optimization parallelism brings).

Thinking about this now, it's sort of similar in nature to why people like garbage collection. There is an inherent negative to using a garbage collector; however (I think we can both agree), in certain (most) domains it turns out to be a net positive. Why? For the same reason: simplicity.

I mean, I could go on, but I'm trying to be as concise as possible. Hopefully that was useful. I'm happy to elaborate if you have more specific questions about this.

* s/simplicity/developer efficiency + implicit safety guarantees/g -- since "simplicity" is quite vague.


This is an interesting point of view. I personally find the Erlang approach (no shared memory) to be the least error prone, and as a consequence the most efficient for developers.

With Erlang you can't share memory [0], but in exchange you get sequential code (no callbacks or yields), true parallelism, and even distribution over a cluster. With Node.js all the asynchronous calls live in the same memory space, but code is written with nested callbacks (or yields), and of course you get no parallelism.
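The callback style being contrasted here can be sketched as follows; `readConfig` is a hypothetical stand-in for a real async call, and the `async`/`await` form (which landed after this discussion, built on the generators mentioned upthread) shows the same flow written sequentially:

```javascript
// Callback style: each asynchronous step nests inside the previous one.
function readConfig(done) {
  setTimeout(() => done(null, '{"port":8080}'), 0); // stand-in for real I/O
}

readConfig((err, text) => {
  if (err) throw err;
  const { port } = JSON.parse(text);
  console.log('callback style:', port);
});

// Same flow, sequential, via async/await wrapping the callback API.
async function main() {
  const text = await new Promise((resolve, reject) =>
    readConfig((err, t) => (err ? reject(err) : resolve(t))));
  return JSON.parse(text).port;
}
```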

[0] There are shared dictionaries for when it is really absolutely necessary.


Agreed. It's rarely argued, but I think this is a very valid view. Additionally, some languages that do have true parallelism are converging on the same model (i.e. non-blocking IO a la Node), built on top of their crude parallelism primitives.


Honest question: How different is a web worker from an actor?


Surprise answer: It's not.

When people sat down and decided how workers were to act in browsers, actors are what they had in mind.

This is why Web Workers don't have access to the window scope, use explicit message passing etc.


So to reply to both of you:

Correct me please, but from what I understand web workers are not a part of the JavaScript language. The same way AJAX isn't a part of the JavaScript language. I would consider those (AJAX/WebWorkers) more like "system calls" to the browser. And like any system call, it has privileges the application doesn't.


To be precise:

- The JavaScript language is specified under the ECMAScript specification.

- Web workers are indeed not a part of the ECMAScript specification which considers them "host objects".

- Web workers and other browser APIs (timers, AJAX, etc.) are part of what is often called the DOM API. The DOM (Document Object Model) is how JS interacts with the web page and what capabilities it exposes. (document.getElementById isn't any more JavaScript than web workers are.)


A cryptographically secure pseudorandom number generator would be nice, too.


Server-side JS has that already. On the client side it would be useless.


What about p2p?


There is a problem with JavaScript in that no security mechanism is currently in place to ensure the JS file you are running is the one you expect to be running.

Imagine you're on your favourite social network, which uses JS-based encryption in a p2p chat with your friend. On that same page, advertisers are pushing content to you. That content could be a malicious JS file that eavesdrops on your conversation, all the while you think it's encrypted.


Isn't crypto.getRandomValues created just for this purpose ?



