I'm happy for the person who wrote this because they've found a nice impedance match between themselves and Javascript. But I could have written almost the same article and substituted Google Go for Javascript (with some modifications).
The reality of much programming is that once you've got your set of tools and a good familiarity with a language that helps you solve problems, you're happy. You're happy because you're getting stuff done in a manner that you find elegant, or easy.
For example, I really like Go because it's C-like, with an OO paradigm that I'm comfortable with, lightweight 'processes', and neat communication. Also, it builds and tests really, really fast. But you probably don't learn much more from me saying that other than "John likes Google Go because he's comfortable with it".
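For anyone who hasn't seen Go, here's a toy sketch of what I mean by lightweight 'processes' and neat communication: a few goroutines sending results back over a channel (purely illustrative, not code from any real project):

    package main

    import "fmt"

    func main() {
        results := make(chan int)

        // Spin up a few goroutines; each sends its result back over the channel.
        for i := 1; i <= 3; i++ {
            go func(n int) {
                results <- n * n
            }(i)
        }

        // Collect the results as they arrive (order isn't guaranteed).
        for i := 0; i < 3; i++ {
            fmt.Println(<-results)
        }
    }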
Well, given that I've seen 7-15 minute build times for 180,000 lines of code in a popular JVM language ... what I wouldn't give for Go-like build / test iteration times in that language.
I think there's some value in your list of things that you like about Go. It gives others an idea of what they might be missing out on. (And for me with my work on Open Dylan, it gives me a nice target to aim for.)
Not the parent. But the people who signed the checks probably didn't care about those details ("We want a solution to our problem and a reassuring long-term support contract.") and the people who banked the check probably just passed the baby to whoever ended up writing/managing these 180,000 LOC.
The JVM is a proven and stable platform, with nice tooling and exceptional support. It's also polyglot.
And 180,000 LOC is not that much; it depends on the requirements. Perhaps you are thinking "I can do all that in 1/10 the size in language X", but chances are you are wrong.
It's not about doing the same thing in the Turing-complete sense, it's about doing the same thing in the real-world sense.
You might achieve the same functionality with less code. But can you achieve the same interoperability with the other systems the company uses with that other language? Could you utilize the already existing dev team of 20-something JVM coders? Could you have the same support from the vendor? The same toolkits? The same performance characteristics? Etc etc...
As a disclaimer, I wasn't intending to bait anyone. I'm genuinely interested if there are any reasons for writing a 100k+ LOC application, rather than dividing the functionality up between smaller applications and libraries.
Operating systems like the Linux kernel are usually over 100k LOC, but Linus has made some compelling arguments about why you'd want a monolithic architecture in an operating system you want to be reasonably performant.
But that argument doesn't appear to apply to server systems running under the JVM. Building an application out of small, interchangeable, independent components seems rather more sensible than one large monolithic architecture. My current hypothesis is that any server application over, say, 10K LOC is badly designed.
I'm therefore quite interested in anyone who maintains or develops very large server systems, and if they could perhaps offer any reasoning why systems that large are the best solution.
Well-designed large apps actually are divided up into smaller components - there'd be no way to maintain them otherwise. However, it doesn't help as much as you'd think. If it's a layered architecture, sometimes adding a new feature means you have to modify code in every layer. And dividing code up into libraries doesn't help build times when you need to modify a base library that everything uses.
If you have trouble imagining this, pretend that every dependency you use is actually part of your code base, and sometimes you have to modify them because they're not completely baked. If you have non-trivial functionality, you are certainly using way more than 100k LOC.
I'm afraid I still can't imagine a large system that could not conceivably be divided into small, independent components.
Well, examine some large systems then, and see whether you can fare better.
Depends on what you describe as independent components. The 200 KLOC thing is probably also made up of independent components. Same with NGINX, which somebody mentions elsewhere. That doesn't mean those independent components don't have to work together, or that a change in one doesn't affect the others.
Consider a plugin host and a plugin. They are independent, alright, but the plugin takes advantage of certain things the host offers. If you change the host in those areas, you'll have to change the plugins too. Components only provide independence up to the place where they meet each other, i.e. the "joins". (Even in a functional language, a pure function only provides independence up to the point you call it; there you have to adhere to its interface.)
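To make the "join" concrete, here's a rough sketch in Go with made-up names, just for illustration: the plugin is independent of any particular host, but only up to the interface where they meet, and if the host changes that interface every plugin has to change with it.

    package main

    import "fmt"

    // Host is the contract the host offers to plugins. Adding a method is easy;
    // changing an existing signature breaks every plugin at the join.
    type Host interface {
        Log(msg string)
        Config(key string) string
    }

    // greeter is a plugin: independent of any particular host,
    // but only up to the Host interface where they meet.
    type greeter struct{}

    func (greeter) Run(h Host) {
        h.Log("greeting " + h.Config("user"))
    }

    // consoleHost is one concrete host implementation.
    type consoleHost struct{}

    func (consoleHost) Log(msg string)           { fmt.Println(msg) }
    func (consoleHost) Config(key string) string { return "world" }

    func main() {
        greeter{}.Run(consoleHost{})
    }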
> Well, examine some large systems then, and see whether you can fare better.
Examining very large codebases takes time, so this is easier said than done! I have worked with several large projects, but all of them would have been better factored out into smaller services.
> Components only provide independence up to the place where they meet each other, i.e. the "joins".
I don't think that's true. Two components can share the same contract, but still not be dependent on one another.
For example, the Unix applications "wc" and "grep" have no dependency on one another, but they can be piped together because they share the same STDIN/STDOUT interface.
Similarly, the functions "+" and "*" can be used together because they have compatible type signatures, but this does not mean they are not independent.
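To put that in code, here's a rough Go analogue of the wc/grep point (toy functions, names purely illustrative): two components that know nothing about each other, composed only through the shared io.Reader/io.Writer contract, much like STDIN/STDOUT.

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "strings"
    )

    // filterLines writes only the lines containing substr: a toy "grep".
    func filterLines(w io.Writer, r io.Reader, substr string) error {
        scanner := bufio.NewScanner(r)
        for scanner.Scan() {
            if strings.Contains(scanner.Text(), substr) {
                fmt.Fprintln(w, scanner.Text())
            }
        }
        return scanner.Err()
    }

    // countLines counts lines: a toy "wc -l".
    func countLines(r io.Reader) (int, error) {
        scanner := bufio.NewScanner(r)
        n := 0
        for scanner.Scan() {
            n++
        }
        return n, scanner.Err()
    }

    func main() {
        input := strings.NewReader("foo\nbar\nfoobar\n")

        // "Pipe" the filter into the counter; neither knows about the other.
        pr, pw := io.Pipe()
        go func() {
            defer pw.Close()
            _ = filterLines(pw, input, "foo")
        }()

        n, _ := countLines(pr)
        fmt.Println(n) // prints 2
    }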
You have the dependency graph wrong. There's a script that uses both wc and grep, and it depends on both of them. At the lowest level they are just streams of characters, but that doesn't mean anything. If you change the output format of wc or grep in any significant way then you will break every script that depends on them. That's unacceptable, and that's why Unix commands don't change. (Unless it's a new flag or something like that.)
In a well-designed system where the interfaces between components aren't actually public, you can change the API by finding all the usages and fixing them. And frequently you have to do so, because the API is not a frozen standard - it's still being worked on.
There are some functions that never change their functionality. The expression "1 + 1" will always equal "2". The functionality of "+" might be expanded, for instance to cover complex numbers, but it will never be incompatible with previous versions.
So if the API of "+" is not expected to change, why do you expect the API of other components to change? Why assume that an API of a service has to be mutable?
I'd argue that an API can be immutable if the component is simple, by which I mean a component that only attempts to do one thing. The function "+" is simple, because it does only one thing: add numbers together. Because it is simple, the API doesn't have to change.
If your API of each individual component is frozen, this means they can be considered to be independent. You might have components that use other components, but if their APIs are frozen, unchangeable, then you might as well consider them as entirely separate applications.
In my opinion, a web service of 100K+ LOC indicates that the interfaces between components are not frozen, that the components are not simple, and to me this just seems like bad design.
Well sure, and if we wrote programs that have no bugs then we'd never have to debug them. Your position is seriously naive.
Getting an API right the first time is hard. For example, most languages get date libraries wrong the first time (for example, see Java and Python), even with a lot of design effort up front. And most library designers are not that good, especially when they're only part-time library designers whose main concern is writing an app.
Getting an API right is hard, but if your API is simple, then by definition there are fewer things that can be changed, and that means there's less need to refactor.
I'd rather create 3 simple APIs and throw away the two that didn't work well, than create 1 complex API that needs to be constantly refactored. Small, immutable, simple components are preferable over large, mutable, complex components.
Saying I am "seriously naive" implies I don't have experience with designing components in this way, but it is precisely because of my experience that I advocate this position in the first place. The APIs I've had to change and refactor have been complex; the APIs I have not had to have been simple. Over the past few years, I have been slowly moving away from complex APIs, and there has been a dramatic decline in the amount of refactoring work I've done.
This is not to say that it's easy to create simple APIs, but good design is never easy, and I don't think it's naive to say that if you want good design, you need good designers.
> I'm genuinely interested if there are any reasons for writing a 100k+ LOC application, rather than dividing the functionality up between smaller applications and libraries.
IIRC, Google has such 'large' setups (for whatever reason). All the other companies I've seen use the latter approach.
Sorry for off topic, but do you have any recommended resources for learning Go? I've been curious for a while and I think I'd like to finally get my hands dirty with it.
I've found that simply reading the documentation[1] was enough to get started. The package reference[2] was pretty easy to understand after going through it. I would also make sure you're running the current RC and using the information at http://weekly.golang.org/ as it's the most up to date and has some substantial differences compared to the stuff at http://www.golang.org/
I spent the weekend playing with ClojureScript One, and this line really stood out for me:
"Moreover, JS is really fast to write and test. Write – save – refresh[2]; it’s an absurdly fast dev cycle that lets me iterate on questionable portions of the code much faster than any environment I’ve worked in."
With ClojureScript and Emacs, this cycle is replaced with "write - C-x C-e" (For the unfamiliar, Ctrl-x Ctrl-e sends the Lisp form immediately before the cursor to the compiler, and sends the resulting JavaScript to the browser for execution. It lets you evaluate bits of code directly from your editing buffer. It's the classic Lisp and Emacs setup and it's fantastic.)
Is there no browser-connected REPL for regular-ole JavaScript? I don't know the details of the ClojureScript implementation, but I would imagine you could hack it up to use plain JS. I'm sure I sound like a smug Lisp weenie, but actually I'm just hoping that this does exist in case I ever have to use regular JS.
It works really, really well. In fact, it's using exactly the same protocol and Emacs libraries that ClojureScript uses (SLIME). It's really neat to see SLIME still being used today for such cool and practical things.
I rely on LiveReload (http://livereload.com/) for my day-to-day web dev work. Point it at a repo folder and a browser window, and any time any files are changed in the repo it'll intelligently update the browser window (full page refresh for HTML/JS changes, partial refresh for CSS-only changes). It can also be configured to auto-compile SASS/LESS/CoffeeScript/etc, and/or execute arbitrary shell commands on save. The Mac version costs $10 (haven't tried the Windows port yet), but it's unbelievably fantastic.
I have been hacking on ClojureScript recently as well (spent the weekend playing with ClojureScript One also!), but I am not an Emacs user (I use Vim). Can you explain the part about sending JavaScript to the browser without refresh from a normal repl (lein repl)? I see this advertised by ClojureScript but not an explanation as to how it works (probably assuming experienced lispers). My method for working has been save, ctrl+tab to browser, ctrl+f5 to refresh.
Honestly, this can actually be a PITA to set up for vanilla ClojureScript apps because the order of steps is very important. One of the rad things about ClojureScript One for beginners (myself included) is that it seems much easier to get working, so if you're not an Emacs user to begin with, I would clone ClojureScript One and follow along to try it out. There are some nice scripts and Clojure code meant for dev-time that come with it that smooth out the edges a bit.
Then you need to (:require [clojure.browser.repl :as repl]) in your ns, and run (repl/repl "http://localhost:9000 ") (there's a space here to prevent HN from swallowing the close-quote, but it shouldn't be there) at the beginning of your ClojureScript app.
Start the REPL in Emacs (M-x inferior-lisp RET, i.e. press Alt-x, type "inferior-lisp", hit return) and then open the browser and point it at your app. Test it by typing (js/alert) in Emacs. Then, when you're in a ClojureScript buffer (or any Lisp buffer, I think) C-x C-e will send the form before the point to the REPL, which is actually connected to the browser.
I'll leave setting it up for Vim as an exercise for the reader (kidding, kind of; I just don't have any idea if anyone has worked on this yet, although there are Vim modes for regular Clojure, so maybe).
I also kind of misread MatthewPhillips' post and thought he was looking for Emacs instructions. For sending code to the browser from a normal REPL (started with `lein repl`), I think a good place to start is these two files from Cljs One:
I use Vim and did get something similar set up for ClojureScript. I used a lein repl running in GNU Screen and sent ClojureScript from Vim to the browser-based repl using vim-slime https://github.com/jpalardy/vim-slime. It worked very well.
"Moreover, JS is really fast to write and test. Write – save – refresh[2]; it’s an absurdly fast dev cycle that lets me iterate on questionable portions of the code much faster than any environment I’ve worked in."
That is just full of crap. What he forgets to mention is that you have to fix a good deal of the bugs at runtime (spending time in the debugger) that would otherwise be picked up by a compiler, and also figure out workarounds for the shitty toolchain.
"We now send hundreds of kb’s of minified Javascript to the client, and we expect all of it to run smoothly."
Again, these are inherent deficiencies of the toolchain, where you have to compress the textual code, instead of using byte-code.
And on top of that we have CoffeeScript, LESS, SASS, Jade and God knows what else to try to compensate for the amount of legacy, instead of just rebuilding a proper framework from the start, once and for all.
> And on top of that we have CoffeeScript, LESS, SASS, Jade and God knows what else to try to compensate for the amount of legacy, instead of just rebuilding a proper framework from the start, once and for all.
But it's not just a case of rebuilding a proper framework; the "internet" is just too big to scrap everything that is already there. We'd end up with a real mixture of technologies floating around, browsers would become more bloated to support both sets of technologies, we'd end up with those "Best Viewed in..." GIFs all over the place again, AND we would never be able to fully move onto that tech set for a long, long time. How long before HTML5/CSS3 become the de facto standard for every new project you work on? I'm currently working on a site which needs to work with Javascript off AND support IE6 (both of which just suck balls).
I am a bit biased. I love working with JavaScript (I am a C# developer btw) and I have loved seeing it evolve, how much it can do, how well it can do it, and how GOOD javascript code makes it very easy and straightforward to work with... on the flip side of that, I have seen far too many rubbish scripts from non-programmers!
> What he forgets to mention is that you have to fix a good deal of the bugs at runtime (spending time in the debugger) that would otherwise be picked up by a compiler, and also figure out workarounds for the shitty toolchain.
I think this is a subjective position: some people find that the dev cycle is faster, in some cases fast enough to offset the debugging, and yet others don't experience the issue to the degree that it is highlighted. I see this position from a good deal of people who prefer static languages, and for me personally it is just not a huge issue. I just don't run into the problems that static languages are supposed to save me from often enough to forgo the rapid development of languages such as JavaScript and Lisp. I think this issue is born more out of how each of us develops and thinks, rather than some correctness of one over the other.
"So in a strange way, I’m happy that the messy Javascript I sometimes write tend to not last long. By nature, front-end code has a short shelf life: pages are redesigned, A/B tested and overhauled in short succession, and my painstakingly elegant carousel implementation may not have a place in the new design."
And this is why I'm scared of writing anything in node.js.
Just getting modules to work in node seemed less intuitive than C includes. Despite all the hype it felt like a step backwards, and a tool to be used only when its strengths (events/streaming) really demanded it.
I am happy to not be a front-end developer in most of my work :)
> Just getting modules to work in node seemed less intuitive than C includes. Despite all the hype it felt like a step backwards, and a tool to be used only when its strengths (events/streaming) really demanded it.
Node modules are actually well thought out. They resolve circular dependencies, and all code inside a module is confined to that module's namespace.
Whatever tool you pick up, if you start comparing it with another tool that you already use, it's going to seem strange and less intuitive. It's almost human nature to gravitate towards familiar things. If that's your goal, then it's fine. But to discount something just because it's not familiar to the way you do things is not exactly fair.
The problem with client-side Javascript code is largely a result of the horrible DOM. Once you are free of that, server-side Javascript looks a lot more doable.
The fact that front-end javascript often changes shouldn't be a reason to avoid it as a server side language. There might be other reasons why you'd prefer something else on the server, but I don't see any connection between the two, other than they use the same language. In most projects, there's a pretty clear separation between the front-end and server.
> Moreover, JS is really fast to write and test. Write – save – refresh; it’s an absurdly fast dev cycle that lets me iterate on questionable portions of the code much faster than any environment I’ve worked in.
I'm planning on looking into headless testing to eliminate the whole icky business of switching to the browser and hitting refresh. I find this much more of a drag than rerunning server side tests in a terminal.
Almost impossible to set up non-headless testing with Continuous Integration. Any solution for that?
By the way, I used to work with GWT, and the way we tested it was via the MVP pattern + JUnit for any logic code. It works well 99% of the time, minus the cross-browser UI bugs (but then if we started testing the actual UI code we'd be testing GWT itself).
A CI system just needs to be able to ingest the test results (usually as JUnit XML formatted text). You can easily do this with jsTestDriver or Selenium as your bridge from the running browser instances to the CI server.
I'm currently using Bamboo (Atlassian) running in AWS. Everything is done in AWS.
Let me try to digest and understand what you're saying:
- A CI system can be made to work with jsTestDriver/Selenium as long as _somehow_ there is an input file for the build system (i.e. JUnit XML formatted text).
That's true.
My next question is: I'd like to run everything (the unit tests) on the same machine _without_ having to deploy and run the back-end as well. I'd like to mock the back-end communication with the UI so I can write unit tests. Is there a way to do that with the currently available JS test frameworks?
My next question is: if I'm running all of my CI in AWS, without an extra machine _with_ a monitor attached to it, can I still run jsTestDriver/Selenium?
This really only applies if you do 'web scripting', applying some dynamic stuff or AJAX speedups to web pages. Once you start building an actual application (say, Gmail sized), the lack of modules, types, versioning, etc. really starts to hurt. Yes, you can implement all of this with either JavaScript libraries or server-side tools, but that way you lose a lot of the advantages he mentions.
As a side note, does anyone have any good examples of Javascript being used as a scripting language? I keep meaning to learn Javascript, but I don't do any web programming, and it always seems like I'd have to learn a lot of DOM and other browser-specific callouts to do anything with it.
I keep thinking that Javascript would work okay for things that I'd normally try Lua for (or Guile, if I'm feeling ornery), but I've never seen anyone actually doing it. The existing implementations all seem very heavy-weight.
One of these days, I'll try out node.js. Of course, since I can't help but be tragically attracted to crazy duct tape and baling wire solutions, I'd probably try to run either Amber Smalltalk or ClojureScript on it.
A lot of his points seem more to do with his like of and familiarity with the environment, rather than praise of the language. I too enjoy mucking around in node but sometimes get frustrated with the many workarounds one needs to juggle to get simple things done. There's some inspiring (and fun) things going on in the language though which keeps it exciting.
One of the greatest strengths of the language when used for client-side code is that it largely enforces open source by virtue of the architecture. There's a lot of minified and obfuscated code out there, but even that can be deconstructed, and the amount of clear and well-documented code out there pretty much ensures that you can find tips to solve any problem you have, if not an open source library to handle it for you. Add the network effect from all the people using the language, and the resulting wealth of discussions widely available on its every aspect, and it's difficult to beat.
Is there an equivalent of name-dropping that would describe this kind of article? It seems to be clever and up to date, but it is in fact empty and condescending. Remove the string of buzzwords, and what you get is personal opinions of very slim interest.
I don't think we should attack the writer for writing up his opinion and posting it on the Internet. That's what blogging is if you're not some great thinker like pg or Joel Spolsky. Somebody here thought it was worth discussing, no less.