So they are building a server-side app to deliver a usable first-load experience with HTML and CSS while the JavaScript loads and boots; until the JavaScript runs, all the links work like normal anchors.
Typical "progressive enhancement" calls for creating HTML in templates that have no knowledge of the JavaScript, and then using JavaScript to attach behavior to that HTML.
The approach described in this blog post builds a JavaScript app from the get-go, and uses a server-side technique to extract standalone HTML from the JavaScript application.
The net effect is the same (you get HTML on the client before you execute the JavaScript), but the developer paradigm is sharply different.
Typical progressive enhancement techniques require you to carefully construct a version of your application that works without JavaScript, and then find ways to shim in and bring the page alive. The approach I'm working on provides the HTML as a by-product of running the application normally.
Additionally, progressive enhancement techniques almost always involve server-side rendering, which means you lose the benefits of client-side routing I describe in the post.
The goals of progressive enhancement are wonderful. My complaint has always been that previous techniques hamper developer productivity far too much to be realistic. With FastBoot, I'm hoping we can offer the benefits with far fewer costs.
Tom, it's ok to backtrack here. You (and the rest of us) didn't know at the time that progressive enhancement could be possible without making development much more difficult.
I think this might be semantic quibbling at this point.
Yes, progressive enhancement and server-side rendered JS are similar. You can see the latter as a new version of the former. But it is implemented in such a way - a novel way - that it definitely deserves a term of its own.
It's not really useful for the conversation to lump it in with the traditional definition of progressive enhancement that's been done for years.
Even middle-of-the-road efforts I tried a few years ago (such as sharing a templating language between client and server even when the implementation languages differ) were fraught with friction compared to this technique.
Hah! I'm glad others have seen we've come full circle again :)
As a developer of isomorphic (there's another term to pick apart!) React apps, I'm thrilled Ember and others are reaching the same conclusions on why server-side rendering is crucial.
I'll still call it Progressive Enhancement, since a server-generated response is "enhanced" with client-side functionality, and still functions for those who have JavaScript disabled.
I think you're conceptually correct (server-rendered apps don't replace your API/data server); however, it's just as misleading to say that a server-rendered app replaces your CDN.
Instead, server-rendered apps add a new tier (if you were serving your app from S3 before) or replace an existing tier (your old app server). The kinds of people who needed a CDN before will still need one, and your API layer should not be tightly coupled to your app, esp. if you eventually want to support a native app or a developer ecosystem. So you end up with:
- Data layer/API server: exposes your data to a client in a RESTful way
- Browser: a client (one of potentially many) that presents data from your API in an interactive way
- App server: a node/io.js server that renders your app to HTML and sends it to the browser. It optimizes for search and slow devices without duplicating your app logic. This role was previously filled by Flask/Django/Rails/PHP, though sometimes conflated with the data layer.
- CDN: makes media that doesn't change often highly available and low latency. If you needed one of these before you added an app server, you probably still need one.
Yes, I totally agree. Sorry if that was confusing—I was trying to simplify the diagrams and perhaps oversimplified.
You would of course still use a CDN for any static assets. Only the HTML that changes would be served by FastBoot, and of course you probably want to cache certain responses from that as well.
(Throwaway because I don't want my real name associated with the site)
So here's another perspective. I'm running a highly dynamic site[1] with 8m sessions and 650m pageviews per month. The site runs on a single moderately sized server (4 cores, 16gb RAM) for MySQL and PHP. I can only do this, because I offload everything I can to the client.
The site loads a single 80kb JS file (which also contains all templates) and fires off one AJAX request that fetches the requested data. This data is the same for all visitors (for the same resource), so it can easily be cached on the server for a few seconds before it goes stale.
Everything on this site can be voted on (tags, comments, images). These votes are stored client-side and synced with the server as needed. This means I can load my canned data from the server and augment it with client data when rendering.
If I were to render it server side, I'd have to create a custom page for each and every user and load the client's votes from the database for each request. This is simply not feasible with the current hardware.
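The augmentation step described above can be sketched roughly like this (names such as `mergeVotes` and `myVote` are my own, purely illustrative): the server returns the same cached payload for every visitor, and the client overlays its locally stored votes at render time.

```javascript
// Server side: one cached response per resource, identical for all visitors.
const serverItems = [
  { id: 1, score: 10 },
  { id: 2, score: 3 },
];

// Client side: this visitor's own votes, kept locally and synced lazily.
// In the browser this would come from localStorage, e.g.
//   const localVotes = JSON.parse(localStorage.getItem('votes') || '{}');
const localVotes = { 2: 1 };

// Overlay the personal votes on the shared payload at render time,
// so the server never has to build a per-user page.
function mergeVotes(items, votes) {
  return items.map((item) =>
    item.id in votes ? { ...item, myVote: votes[item.id] } : item
  );
}

const personalized = mergeVotes(serverItems, localVotes);
```

The server stays cacheable because personalization happens entirely on the client; only the vote sync (when it happens) ever touches the database per-user.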
It's worth noting that while that's OK for desktop visitors, shifting the workflow to the client means you're putting a lot more stress on mobile visitors and resulting in a worse experience for them.
Network performance and latency is all over the place, leaving AJAX an unpredictable mess. On top of that you're adding the overhead of spinning up the javascript runtime on the phone, running the code etc. etc, all of which eats precious battery time for them, and negatively impacts your time to screen, a very important metric.
Good for the simplest of "rich" interactions, but quickly drops off when responsiveness becomes a factor. For example, you cannot implement optimistic responses using this library (click the "save todo" button and have the saved item immediately appear in its read-only form; that interaction requires reimplementing your templating and validation code on both the client and the server).
It isn't a perfect fit for every app, but most apps have at least a few areas where using it would simplify things considerably.
Also, there are a lot of nice things that are extremely easy to use that are a PITA with most other frameworks. For example, AJAX-aware history is an ic-push-url="true" away:
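A minimal (hypothetical) snippet of what that looks like, assuming intercooler's `ic-get-from` and `ic-target` attributes drive the AJAX fetch and its destination, while `ic-push-url="true"` pushes the new URL onto the history stack so the back button works:

```html
<!-- Clicking the link fetches /articles/2 over AJAX, swaps the response
     into #main, and pushes /articles/2 onto the browser history. -->
<a ic-get-from="/articles/2" ic-target="#main" ic-push-url="true">Next article</a>
<div id="main"><!-- current article --></div>
```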
I realize that most front end people who see this will immediately dismiss it as "simple", but it's a very rich library and can be used to accomplish an awful lot: you can fire client side events using custom HTTP headers, show and hide request indicators using only a CSS selector, etc.
It looks for the standard meta CSRF meta tag, it will include it if the request is within a form with a CSRF hidden input, and you can hook in via the usual jQuery AJAX hooks if you are doing something else.
Whatever architecture you use, it all depends on your requirements. Example: If SEO is important, build server side rendering. Period.
Just make sure you provide best user experience possible.
The author claims Bustle is one of the good JS-rendered sites. If you scroll down http://www.bustle.com/, click on an item, and then use the back button, you won't be at the same position. This is one of the biggest problems with JS-rendered sites.
Now compare this to reddit.com. It works great without client-side rendering and doesn't have the issues that most client-side-rendered sites have.
I am not opposed to JS-only apps; I build client-only apps myself. The problem is that it is hard to build a good JS-only app.
In Chrome, when I scroll to the bottom of an article at Bustle, click a link to another article, and then go back, I'm directed to the top of the first article, not the bottom where I left off. In my experience, this behavior is the norm at sites that use client-side rendering, and it's the main reason why I hate browsing content sites that use client-side rendering.
Great post, but taking your Twitter example, why not make a full HTML version like GMail does so you don't have to scrap the client-side stuff? That way you also don't have to build a framework-specific server library like FastBoot. I feel like FastBoot is a more sensible solution than, let's say, Meteor.js; however, it is still about solving issues for a minority of your users who are on slow internet and older devices.
If you're really using slow internet and old devices, then the experience is still going to be slow no matter what you do unless you really strip things down. So just make multiple experiences that are really good instead of one 'meh' experience.
With that said, Twitter made the right decision at the time because their client side implementation sucked on all machines not just the old ones.
It's not just for users on slow internet or older devices. Bustle was a good example because, as a content site, it generally has a lot of assets to load.
Fastboot can get you over the hump of the initial load when you're linked to a content-heavy page on the site for the first time, and hide the download/boot of the JS app for the moment it becomes ready.
Thank you Tom Dale for continuing to make ember better with Fast Boot. I personally write and live-test a lot of my apps on 4chan's /g/ and /r/programming (which I imagine hosts a large "tinfoil" user base), and the most common feedback I would get from them is "why do you need javascript for this trash?" (well, the actual most common feedback I'd get from them are personal insults regarding my sexual preferences, but they're not constructive and heeding them will not lead to a better product). So thank you so much for working to make ember server-side-enabled.
Also, not to put the cart before the horse, but any chance we'll be getting virtual dom any time soon?
I don't think anyone is missing the point at all. I fully understand what you're attempting to do because I tried to implement an Ember project only to have so many negatives pile up that it was actively working against our goals.
I really wanted to like Ember, but even with those issues aside, the two most common comments on our team about Ember were "Wait, why is that working?" and "I didn't change anything; why isn't it working now?" Neither is something you should be hearing from your dev team.
You can do some pretty cool things pretty quickly, but there are more promises than resolves.
> I don't think anyone is missing the point at all.
Perhaps you are not, but "anyone"? The article quotes two tweets that disagree with the point, and I've heard it misunderstood in internet comments.
Perhaps it's not a common misunderstanding, or perhaps it is - it's hard to tell with anecdotes. Regardless, it's worth clarifying, as the author of the post did.
> If it needs to fetch data from your API server to render, so long as both servers are in the same data center, latency should be very low
Maybe I'm just missing something obvious, but wouldn't FastBoot make things slower if it needs to hit the database to produce the static page, and then the client hits the server again to fetch the unrendered templates and data as part of Ember's normal bootstrapping? I don't think it's that uncommon for the database to be a bigger bottleneck than the network.
Are there any benefits over, say, slapping a screenshot of the page as the background of body? (yes, I know I'm oversimplifying, but for instance, Facebook loads up wireframy graphics as placeholders until ajaxed stuff pops up.)
By having the app running on the server it can take advantage of being primed before the first request arrives. Primed in that the app is already running, and/or the data retrieved is cached and regularly refreshed.
Depending on the nature of the application, if it's largely not client-personalised (a content site like Bustle, as opposed to a Gmail client), one instance of the app on the server can respond to multiple client requests in its lifespan. So basically it's an already-spun-up, ready-to-go instance of the app (or already-processed output) that just needs to squirt the HTML buffer at a response object.
Really? I'd imagine most Ember projects are more like Gmail than Bustle (in the sense that they probably have at least login). If even a single byte of the payload changes, you're looking at O(n=users) instances and then you're potentially back at the problem of having the db hit happen before the first pixel on screen.
Well, I freely concede I am still missing the point. You use client-side javascript because that's what runs in the client. Whatever your server-side code does, why would you write it in javascript?
Because writing an app in one language is easier than writing it in two or more, and since JavaScript is the only language you can do client-side scripting in you're probably going to be writing some JavaScript no matter what. So if you want the ease of writing your whole app in one language, the only language that one language can possibly be is JavaScript.
As the post mentions, the node.js app that serves the initial HTML would be separate from the API server. The code in the API server can, as before, be implemented in whatever language you want.
I think the goal is to have FastBoot use your existing client-side javascript app, the idea being that one codebase in one language is simpler and better.
EDIT: Lots of downvotes for saying that there are other ways to program apps besides doing everything on the client. Render content on the server, enhance content presentation on the client.
---
I don't get it. Is this 1996? Why are people just now 'discovering' that you don't need to throw JS at an application to have it completely usable?
> All modern websites, even server-rendered ones, need JavaScript. There is just a lot of dynamic stuff you need to do that can only be done in JavaScript.
Really? Look at Linode, Amazon, or even Google (including GMail, and probably others). All of which are big names that can and do work ENTIRELY without JavaScript.
> Client-side JavaScript applications are damn fast.
Another (AJAX) HTTP request + rendering the returned data on a tiny mobile phone is faster than one HTTP request for the entire webpage and having dedicated machinery pre-render the content for you? I don't think so. He does talk about this point later on, but which is it? Client-rendered JS apps are, or aren't fast?
---
JS Hipsters offloading rendering everything to the client for absolutely no reason is the 'everything-looks-like-a-nail' or the 'everyone surfs the web like I surf the web and has my specs' problem. Worst of all, in doing so, they completely neglect actual content. You don't know how many million-dollar VC-funded company webpages I've visited that don't even have a damn tagline visible on their landing page without JavaScript enabled. <h1>s with actual content inside of them are too complicated now?
The solution is very simple: render all the data on the server, and progressively enhance subsets of your app with JS, by overriding the defaults of the rendered page. It's literally like we're discovering DHTML all over again.
Linode, Amazon, and Google all absolutely use JavaScript. Tom isn't saying that they use JavaScript on the server, he is suggesting that JavaScript is involved with their website despite using a different language on the server.
Look, saying "client-side JS apps are damn fast" is a vague statement.
What we are talking about here is the architecture of a client-side application in any language. If you can ship application code to a local runtime, then that application code will always be able to respond to user interaction faster than fetching the results of a UI interaction (like a click) from partway across the globe.
The benefits of Client-side JavaScript applications do not come directly from JavaScript. They come from the architecture you can build when treating the Browser as a runtime for applications.
Progressive enhancement (beyond very small apps) is a challenge to maintain, since UI state needs to be shared between a server runtime and client runtime. I don't think there is disagreement that a pure server-rendered app or a pure client-rendered application would be simpler.
And your comments about shitty execution of web pages and apps could apply to any shitty app. They have nothing to do with client-side JavaScript applications. I expect a better argument than "I once saw a webpage that sucked".
> Progressive enhancement (beyond very small apps) is a challenge to maintain, since UI state needs to be shared between a server runtime and client runtime.
It does? Why? The server is handling the data to be rendered either way, and as I say in response to another comment, rendering the data is as simple as:
res.end(render(template, data))
> And your comments about shitty execution of web pages and apps could apply to any shitty app. They have nothing to do with client-side JavaScript applications. I expect a better argument than "I once saw a webpage that sucked".
The argument is: "You are VC-funded, taking $1M+ to build the Next Big (S|P)aaS. You are spending tons of money on A/B testing, designers, mockups, and UI specialists, and your page does not even load for the lowest-common-denominator, and easiest-to-serve (static HTML), users of the web?"
My experiences may be biased, but most startups I work with have target audiences that do not include tin-foil hatted JavaScript disabling individuals.
On the contrary, for them, investing any energy in servicing this minority would be a mistake.
I suspect there exists some subset of startups where this may be reversed. But they are absolutely in the minority.
> (including GMail, and probably others). All of which are big names that can and do work ENTIRELY without JavaScript.
Gmail has an HTML/CSS-only interface that uses no JavaScript. I use it every day, and everything just works. Additionally, on my device it's much snappier even with full page refreshes. This rebuts the original assertion that "All modern websites, even server-rendered ones, need JavaScript."
This is an entirely different frontend that was added years after Gmail launched. The interface that most of us use absolutely won't function without JavaScript.
Adding a pure html view to a mature, profitable application to appeal to a niche audience probably makes sense. Building it that way from the start probably doesn't.
> Why are people just now 'discovering' that you don't need to throw JS at an application to have it completely usable?
I think a new generation of developers for whom the web has always been, essentially, a rich and dynamic applications platform is 'rediscovering' that it's really just document markup with Turing completeness on the side.
You don't seem to have that many downvotes right now, but I downvoted you, not because you said there are other ways to program, or even because I disagree with you that we've swung the pendulum too far toward the client, but because your comment seems to willfully ignore everything the article says. Here is what I mean:
> JS Hipsters offloading rendering everything to the client for absolutely no reason
Whether you think the reasons given are good or not, it is inarguable that the article lays out many reasons, so coming back and simply claiming there is no reason is not in good faith.
What's interesting is that the solution described in the article is functionally equivalent to what you think the solution should be, but you still don't seem to like it, presumably because it retains the advantages of an approach you dislike.
You're getting downvoted because you miss the point. You want the web to cater to you, and big companies with infinite resources will, but smaller companies just won't. The cost to support users such as you is just not worth it, you're too small of a segment. People are going to focus their design + resources on the 99% of users who have decided they want the web to work properly.
I mean, shit, lots of startups completely eschew IE support, and they're a way more significant portion of the market than the neckbeards who disable javascript.
> The cost to support users such as you is just not worth it, you're too small of a segment.
You don't understand. The web is content-based. Like motherfuckingwebsite.com [0] shows, you don't need dynamic content to present your message. If I need to download 1MB of JS to see your freaking tagline, then that's ridiculous.
> Another (AJAX) HTTP request + rendering the returned data on a tiny mobile phone is faster than one HTTP request for the entire webpage and having dedicated machinery pre-render the content for you? I don't think so. He does talk about this point later on, but which is it? Client-rendered JS apps are, or aren't fast?
That's a very large generalization; it depends on the specifics of the implementation. Plus, this isn't as scalable, since you're consuming CPU cycles rendering views on the server.
> The solution is very simple: render all the data on the server, and progressively enhance subsets of your app with JS, by overriding the defaults of the rendered page. It's literally like we're discovering DHTML all over again.
So you're telling me to write a web app that consists of: a server-side application, multiple page-specific JavaScript "widgets", and an API endpoint for AJAX communication. That doesn't sound very fun or efficient for the programmer. It's taking a step back in time in terms of developer happiness and application complexity (and possibly efficiency, but that's implementation-specific). Also, what about saving UI state? Good luck with that...
What sounds better to me is a single RESTful interface and a JS application that renders all of your views. It does have its downsides (SEO + tin-foil-hat people disabling JS), but it's a much more elegant solution.
> Really? Look at Linode, Amazon, or even Google (including GMail, and probably others). All of which are big names that can and do work ENTIRELY without JavaScript.
I think we can all agree, maintaining two discrete code-bases is a sub-optimal experience. Especially for companies without the resource of the "big names" you point out.
> Another (AJAX) HTTP request + rendering the returned data on a tiny mobile phone is faster than one HTTP request for the entire webpage and having dedicated machinery pre-render the content for you? I don't think so. He does talk about this point later on, but which is it? Client-rendered JS apps are, or aren't fast?
Several reasons why this approach results in a faster mobile experience:
- cached data serves extremely quickly ;)
- pre-emptively loading data to prime the cache.
- data is smaller than data + HTML.
- the app remains usable during loading phases.
Additionally, separating the concerns of rendering and data loading affords more creative optimizations as the need arises.
> The solution is very simple: render all the data on the server, and progressively enhance subsets of your app with JS, by overriding the defaults of the rendered page. It's literally like we're discovering DHTML all over again.
Client.js, rendered on the client:
render(template, data)
Server.js, rendered on the server:
render(template, data)
---
They're exactly the same. You use a templating engine library for either JS, or your server language to process the same data. What client.js should do is change things dynamically, progressively.
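A runnable version of that idea might look like this (the regex-based `render` is a toy stand-in for a real engine like Mustache, and the file names are illustrative): one function and one template string serve both the server's initial response and the client's later AJAX re-renders.

```javascript
// shared/render.js — usable verbatim in Node and in the browser.
function render(template, data) {
  // Toy {{key}} substitution; a real engine adds escaping, loops, partials.
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : ''
  );
}

const commentTemplate = '<li class="comment">{{author}}: {{text}}</li>';

// server.js: the first paint ships pre-rendered in the HTTP response,
// e.g. res.end(render(commentTemplate, data))
const serverHtml = render(commentTemplate, { author: 'alice', text: 'First!' });

// client.js: later updates re-render the same template from AJAX JSON.
const clientHtml = render(commentTemplate, { author: 'bob', text: 'Second.' });
```

Both outputs come from identical code paths, which is the whole point: there is nothing to keep in sync except the template file itself.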
One hard part becomes synchronizing ephemeral UI state between client and server, such as not-yet-saved data, or various UI components toggled into some sensible state.
As the complexity increases, this problem explodes.
One can use local storage (when available), but unfortunately it isn't always available or appropriate.
Another thing you can do is bounce state back & forth with requests. Unfortunately this becomes a synchronization nightmare.
It is extremely nice to let ephemeral UI state remain in the UI, merely syncing non-ephemeral state and populating a client-side pool of data that is quickly addressable, and thus "instant" from the perspective of the UI.
Mitigating the latency of mobile networks is key here, and eliminating latency through data locality is the only way. Short of quantum entanglement, physics isn't on the side of server-rendered experiences.
I think you're simplifying too much in order to make a point. And I've no doubt your point is valid in a lot of cases.
However, this idea of a template rendered on the server and then the same template on the client is often fraught with difficulties the more interactive your application gets.
It gets more complicated when you want to 'componentize' your JavaScript. A couple of years ago, to do that I would have brought in something like Backbone.
"I know!" I thought, "I'll switch my templating to mustache, which has a parser in both ruby and javascript. Then I'll have ruby render the page with mustache using a ruby hash. Subsquent AJAX updates then pull down JSON in the same shape and I'll have Backbone use that same mustache template to render it! That way I can send the HTML down from the server pre-rendered for speed and SEO and then progressively enhance!"
All great in theory, but then I had to build awareness into my components so that they could either be instantiated without a pre-existing representation or attach to a pre-rendered version built by the server in Ruby. (You don't want to make a component that NEEDS a real DOM to be tested, do you?)
So while I agree your advice is great for forms, I've found real world cases of highly interactive applications where it reaches its limits. And the proper progressive enhancement we need is only coming now thanks to efforts like this and isomorphic react applications.
I'm not bashing the article - I'm referring to the mindset necessary for the fact that this article even needed to be created. Server-side rendering should have always been first-class.
What you're missing is the whole part of the article that talks about why many people migrated to that mindset. You are dismissing the problems out of hand, as unimportant and easy to work around, which is frustrating for the many of us who changed our mindsets in search of a better solution to those problems, which client-side-heavy apps absolutely are. But the better solution had trade-offs; it spoke to many real problems with the fully server-side rendered approach, but caused new problems. Now we're starting to find solutions to those new problems without giving up the entire approach. That's a good thing. Your comments don't seem to argue that it's a good thing, but rather that the whole endeavor has been folly, which, for many people, just isn't true!
The oft-touted claim is that client-side templating is easier, but they're the same, really. The server solution is literally using the same exact templating engine and code.
What people are conflating is "doing things in the client with JavaScript" and "rendering templates in JavaScript", which are two entirely different domains.
You can't access many cool features client-side without JavaScript, full stop. But rendering everything on the client, and presenting an absolutely unusable design without Turing-completeness, is not content-first design.
There's an option for the basic HTML layout available if you have JS disabled.
That's the second-best approach. The first is using a JS-redirect to the flashy AJAX page, or just overriding all the default handlers necessary with JavaScript (remember, at this point, your content was already (supposed to be) properly rendered for you by the server).
Sure, there is an option for a no-JS version of Gmail, but the default definitely uses Gmail, because the user experience is much better - I think that's pretty clear.
I agree doing some work on the server can also make sense, and that the new pre-rendered JS is an extension of progressive rendering which is not a new technique. It's a new way of doing it, though.
In most cases these are separate apps. "Just build it twice!" is necessary in some cases, but is far from ideal.
Progressive enhancement is hard, at least for complex interfaces that need to maintain state. Hard enough that many sites, if they support disabling JS at all, do so by writing an entirely new frontend with a simplified feature set.
Ember + Fastboot provides many of the advantages of progressive enhancement, but is far more productive in many cases.
They're misusing the term "rendered". Rendering is the graphics operation of turning some non-image representation into an image. Server-side rendering would be generating each page as an image and serving that. The article is just talking about fancier template systems for generating HTML. That's what content management systems do - assemble a page out of parts on the server and deliver it to the client.
I think if we had to do it over again, a better term would have been "serialize". But this has been a commonly used meaning of "render" for more than a decade now, as it was used by the earliest versions of Ruby on Rails.
You gotta love the Orwellian naming on this one. "Ember FastBoot" meaning Ember-we've-been-selling-you-very-slow-booting-for-years-and-now-we're-hoping-to-mitigate-it-slightly-but-it's-still-vaporware.
Meanwhile, the commonsense approach has always been faster and simpler than this model: Use a small JS framework, not a bloated one, and only load the minimal UI and data set that you need to render the initial page at first. Load the rest later. This includes bootstrapping — remembering to put all the data you need to render the page inline as a JSON object in the HTML.
This can often result in less data sent over the wire than a large HTML page with tons of repeated DOM elements. It's also far easier to cache appropriately.
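The inline-bootstrapping trick mentioned above can be sketched like this (the element id and the shape of `bootstrapData` are made up for illustration): the server embeds the initial data as JSON in the page, so the client can render immediately instead of firing a first AJAX request.

```javascript
// Server side: embed the bootstrap payload in the HTML it sends down.
const bootstrapData = { user: { id: 1, name: 'Ada' }, articles: [] };

// Escape "<" so payload contents can't close the script tag early.
const json = JSON.stringify(bootstrapData).replace(/</g, '\\u003c');

const page = `<!doctype html>
<script id="bootstrap-data" type="application/json">${json}</script>
<script src="/app.js"></script>`;

// Client side (inside app.js): hydrate from the inlined payload instead
// of making a round trip for it:
// const data = JSON.parse(
//   document.getElementById('bootstrap-data').textContent
// );
```

Using `type="application/json"` keeps the browser from executing the payload as script; the app reads and parses it on boot.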
I don't want to get into a big discussion on small libraries vs. big frameworks (your use of the word "bloat" gives away your position here ;).
I would ask readers of this discussion to visit any of the web applications they use on a daily basis, open the developer tools, and look at the size of the JavaScript payload. Whatever libraries or frameworks they're using, the payload size is almost always several hundred kilobytes.
I'd argue that using small libraries is a noble goal, but in practice, you just need several hundred KB of JavaScript to build modern apps. If we're honest with ourselves about it, we can try to do a good job of managing it from the beginning. Otherwise you just end up with an ad hoc mess.
The other point I tried to make here is that, yes, of course, you can do all of this by hand. But in practice, most teams are so under the gun to ship features that they don't do it. If we can make great boot performance as easy as installing an npm package, why not?
Lastly, regarding the vaporware claim: we've got a very alpha version up on GitHub already. I invite Ember users to play around with it and give us feedback: https://github.com/tildeio/ember-cli-fastboot.
I'd like to stand up and be counted as saying that both
1. Ember absolutely isn't to my taste and I've successfully avoided it so far and intend to continue doing so in future
2. The work you're doing is really cool and I'm looking forward to seeing it completed, both for the benefit of my friends with different tastes who're happily using ember and for the benefit of everybody else in that it provides a worked example of an awesome idea.
(and people who dislike it just because they dislike ember might, possibly, need to remember that trailblazing is important for new ideas)
I would take it a step further and say that server-rendered JavaScript apps completely changes the game and will have ripple effects throughout the web industry.
As you can see, most of the code is client code. All the server code does is run a function and stringify the results.
What this means is that you no longer need a back-end team and a front-end team. Your front-end team is your team (not counting "API" teams). Sure, some companies have already been doing this but now you can do it with far less code. And you can do it faster. And with fewer developers (there's less work). Which means you can do it cheaper.
All of this means that if you are writing your front-end and back-end in different languages, you are at a real competitive disadvantage.
So any language used for web development is going to see its usage slide if it can't find an answer for transpiling to JavaScript, and hopefully an isomorphic answer the way JavaScript now has.
Sounds like Progressive enhancement [1].
[1] http://tomdale.net/2013/09/progressive-enhancement-is-dead/