
I was lucky to catch a preview presentation of this "Holy Grail" project at SpainJS last summer. It's very interesting stuff. I think that one of the more important caveats to mention is that to the extent that your application is composed of rich interactions, visualizations, and logged-in-user-only content, that stuff tends to remain on the client side alone ... but for your basic rendering of JavaScript models to a flat HTML page, this is a great project to keep a close watch on. In particular:

* Caching of shared static HTML for fast-as-possible page loads.

* Google indexing and searchability.

* Backbone apps that feel more like a series of pages and less like a single-page app.

... are all things that are addressed quite neatly by Rendr.



Author here (spikebrehm):

Jeremy, I think you're mistaking this for Keith Norman's SpainJS presentation (http://www.youtube.com/watch?v=jbn9c_yfuoM). He proposes the same approach, but I don't know if it ever got past a demo. Although it seems like they may be using some form of this at Groupon in production.

Anyway, it is exciting, isn't it? This is just the beginning for us -- we've had to make a few hacky design decisions to be able to ship, but I think we will get the kinks worked out. The trick, and the challenge, seems to lie in choosing the right set of abstractions to hide the code complexity from the application developer. I hope to open source it as soon as I can, to benefit from the input of luminaries such as yourself!

Oh yeah, and the offer to give a Tech Talk at Airbnb next time you're in SF still stands :)


Very interesting stuff :) Can't wait to hear more on this, as it seems like the panacea we've been slowly moving towards for some time.

How does the approach you've taken compare with the architecture outlined in Nodejitsu's concept of isomorphic JS?

http://blog.nodejitsu.com/scaling-isomorphic-javascript-code


Oh my goodness. I totally am mistaking it for that presentation. It's the same general concept, and they also called it the "Holy Grail". Apologies for the confusion.

Y'all should get together and compare notes ;)


Jashkenas, do you have any plans to venture into this area?

I was just talking today about how I'm tempted to try some of the other libraries that are going in this direction. But whenever I look at their code, I'm envious of how clean Backbone is. Seriously, the biggest turn-off to Angular is reading the code and seeing that the people behind it aren't as nit-picky (pseudo-OCD, whatever you want to call it) as you are. Just curious.


I've got no immediate plans to play around with any Backbone-on-the-client-and-server stuff ... the next web app library in the works is only semi-related:

The basic idea is that for many public-facing websites (think NYTimes, natch, or Airbnb's search result listings, for example), the usual Rails convention of "Hey, here comes a request, let me generate a page just for you" is fairly inappropriate. Lots of "publishing" applications will melt very quickly if Rails ever ends up serving dynamic requests. Instead, you cache everything, either on disk with Nginx, in Memcached, or in Varnish.

But you know when the data is changing -- when an article has been updated and republished ... or when you've done another load of the government dataset that's powering your visualization. Waiting for a user request to come in and then caching your response to that (while hoping that the thundering herd doesn't knock you over first) is backwards, right?

I think it would be fun to play around with a Node-based framework that is based around this inverted publishing model, instead of the usual serving one. The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.
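
A rough sketch of the shape I have in mind, in Node (every name here is made up, just to illustrate pages declaring their data dependencies and getting re-baked when that data changes):

    // Made-up sketch: pages declare the data they depend on, and a change
    // to any of that data re-bakes only the pages that reference it.
    var pages = [];

    function registerPage(path, deps, render) {
      pages.push({ path: path, deps: deps, render: render });
    }

    // Called when an editor hits "publish", a user submits a change, or a
    // cron job pulls fresh data -- not when a request comes in.
    function onDataChanged(changedKey, data) {
      pages.forEach(function (page) {
        if (page.deps.indexOf(changedKey) === -1) return;
        var html = page.render(data);
        // A real framework would write this out to disk or a CDN; logging
        // keeps the sketch self-contained.
        console.log('baked ' + page.path + ' (' + html.length + ' bytes)');
      });
    }

    registerPage('/index.html', ['articles'], function () {
      return '<h1>Latest articles</h1>';
    });
    registerPage('/campaign-finance.html', ['articles', 'fec'], function () {
      return '<h1>Campaign finance</h1>';
    });

    onDataChanged('fec', {});  // only the campaign-finance page is re-baked

The important part is that generation is driven by data-change events rather than by incoming requests.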

It's only a small step from there to pushing down the updates to Backbone models for active users ... but one step at a time, right? No need to couple those things together.

ps. Kudos to you for reading the source. It's always enlightening: https://github.com/angular/angular.js/blob/master/src/ng/roo...


>But you know when the data is changing -- when an article has been updated and republished ... or when you've done another load of the government dataset that's powering your visualization. Waiting for a user request to come in and then caching your response to that (while hoping that the thundering herd doesn't knock you over first) is backwards, right?

>I think it would be fun to play around with a Node-based framework that is based around this inverted publishing model, instead of the usual serving one. The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.

You mean most things don't already do this? I've been working on a personal blog engine with this as one of the core ideas (basically, all static assets and pages are compiled on edit), and I thought it was a pretty obvious way to go about it. Looks like I'm indeed not the only one to think of it, but it's a bit surprising to me how new the idea is being presented as.


For simple problem domains (a blog, a mom-and-pop store website, etc.) it's trivial to pre-generate content. For larger content systems you can run into a more complicated dependency tree. Then you have the choice between keeping the dependency logic accurate and regenerating the entire content set on any change.

It also turns out that content sets that change infrequently but unpredictably are a pain to cache. You can cache them for a short time (as long as stale content can be tolerated), but then you lose cache effectiveness. Or you can cache them forever with some sort of generation/versioned cache, but that doesn't interface with named, public resources very well. Telling your visitors and Google that it's yourdomain.com/v12345/pricing and not yourdomain.com/v12344/pricing doesn't really fly.
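
For concreteness, a toy sketch of those two options (the cache object and render function here are entirely made up):

    // Toy in-memory cache, just to illustrate the two options above.
    var store = {};
    var cache = {
      get: function (key) {
        var entry = store[key];
        if (!entry || (entry.expires && Date.now() > entry.expires)) return null;
        return entry.value;
      },
      set: function (key, value, ttlMs) {
        store[key] = { value: value, expires: ttlMs ? Date.now() + ttlMs : null };
      }
    };

    function renderPricingPage() { return '<h1>Pricing</h1>'; }

    // Option 1: cache for a short time and tolerate a little staleness.
    cache.set('/pricing', renderPricingPage(), 60 * 1000);

    // Option 2: cache indefinitely, but build a generation number into the
    // cache key and bump it whenever the underlying content changes.
    var generation = 12345;
    function pricingPage() {
      var key = '/pricing:v' + generation;
      var html = cache.get(key);
      if (html === null) {
        html = renderPricingPage();
        cache.set(key, html);
      }
      return html;
    }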

I definitely share your surprise that it's treated as novel, though. I think that for many situations it's just easier to run extra boxes to handle the increased load of generating dynamic content on the fly, over and over again. It's good for SuperMicro and AWS. It's not so good for the planet.

I'm very excited to see Jeremy's approach to addressing the problem.


> "Hey, here comes a request, let me generate a page just for you"

In a blogging context, stuff like WordPress, where pages are generated per request and then cached to handle any form of serious load, just rubs me the wrong way... Such an infrastructure to display a few pages seems ludicrous.

> The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.

... so this is exactly what my WIP custom blog engine (ultimately meant to replace my Posterous blog) looks like: initially composed of Markdown source and makefiles, then ramped up to some rake tasks and a Ruby library. An entity change (edit post, add comment...) should trigger generation of each page referencing it exactly once, and possibly immediately.


"But you know when the data is changing -- when an article has been updated and republished ... or when you've done another load of the government dataset that's powering your visualization. Waiting for a user request to come in and then caching your response to that (while hoping that the thundering herd doesn't knock you over first) is backwards, right?"

I'm not trying to pick a fight or anything, but it sounds like you're arguing against lazy loading?

Eager caching is very situational, and not something you want to do unless you can reasonably anticipate the thundering herd, have very few items, or have unlimited resources to generate and store a complete cache.

I'm probably misunderstanding though.


Your comment reminds me of a CMS-like system I built over a decade ago based on plain old Unix 'make'. 'make' tracked all the dependencies to determine which parts of the site needed to be updated when new content was added or changed. Content was authored unstyled in a simplified subset of HTML, and make ran it through an XSLT stylesheet to do styling and aggregation, such as building indices. The whole thing worked very well, building over 2k pages in seconds. I still miss it!


The cool part is that Rendr builds a Backbone view hierarchy around server-generated HTML, attaching the rich interactions after the page has been displayed. Although, if the user starts to interact before that view hierarchy has been constructed, for example by clicking on links, the behavior won't be "rich". (However, that would require some fast clicking!)
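
Roughly, it's the classic progressive-enhancement shape: a Backbone view pointed at markup the server already rendered, so only the event wiring happens on the client. This is just the general pattern, not Rendr's actual API:

    // General pattern only -- not Rendr's actual API. Assumes jQuery,
    // Underscore, and Backbone are already loaded on the page.
    var SearchResultsView = Backbone.View.extend({
      // el points at server-rendered markup, so there's no initial
      // render() call; we just attach behavior to what's already there.
      el: '#search-results',

      events: {
        'click .listing': 'showDetails'
      },

      showDetails: function (e) {
        e.preventDefault();
        // Rich client-side behavior kicks in once this view exists.
      }
    });

    $(function () {
      new SearchResultsView();
    });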


The author (spikebrehm) speaking here - that's a good observation! It's possible that the user will click on a link before the JS finishes downloading. Luckily, we've taken care to use real URLs for all links, and our pages render fully on the server as well, so if they click before the Backbone views initialize, they will fall through to the server. Not the fastest experience, but still fine at the end of the day.
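
In rough terms (this is a sketch, not our actual code), the pattern is: anchors carry real hrefs, so until the client-side router boots, a click is just an ordinary request the server can answer; once it boots, the same links are handled client-side:

    // Rough sketch, not our actual code. Assumes jQuery and Backbone are
    // loaded; until this runs, every link is a normal full-page request.
    var AppRouter = Backbone.Router.extend({
      routes: {
        'listings/:id': 'showListing'
      },
      showListing: function (id) {
        // Client-side rendering path, once Backbone has booted.
      }
    });

    $(function () {
      new AppRouter();
      Backbone.history.start({ pushState: true });

      // From here on, same-origin links navigate client-side instead of
      // falling through to the server.
      $(document).on('click', 'a[href^="/"]', function (e) {
        e.preventDefault();
        Backbone.history.navigate($(this).attr('href'), { trigger: true });
      });
    });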


Cool! I think it's a reasonable assumption for Rendr to make, since using "real" URLs is a best practice regardless!



