
I mean, this is a "single file website" in the sense that `<iframe src="https://google.com"></iframe>` is a "search engine implementation in one line of code".

The only semi-interesting thing here is that this demo pulls dependencies from 3rd party registries via HTTP without an explicit install step. It's really not that different than doing regular Node.js development with a committed node_modules (hi, Google), except that if node.land or crux.land go down, you've lost your reproducibility.

The thing about "familiar/modern technologies" seems like superficial vanity. A vanilla Node.js equivalent might look something like this:

    import {createServer} from 'http'
    import {parse} from 'url'

    const route = path => {
      switch (path) {
        case '/': return home()
        case '/about': return about()
        default: return error()
      }
    }

    const home = () => `Hello world`
    // etc...

    createServer((req, res) => {
      res.write(route(parse(req.url).pathname))
      res.end()
    }).listen(80)
Which is really not anything to write home about, nor an intimidating monstrosity by any measure. Serving cacheable HTML is really not rocket science, it simply does not require "the latest and greatest" anything.


I wouldn't say an iframe and this are in any way, shape, or form comparable. This is a "full-fledged" website.

> except that if node.land or crux.land go down, you've lost your reproducibility.

Dependencies are cached. This is no different from npm going down.

> The only semi-interesting thing here is that this demo pulls dependencies from 3rd party registries via HTTP without an explicit install step

Given that this seems interesting to you, it seems you haven't heard of Deno (https://deno.land). It is not related to Node in terms of environment; it's a completely separate runtime.

In regards to your node example, this is fairly different: the dependency pulled in from deno.land is a wrapper around the built-in HTTP server, which does various error handling for you and simplifies the usage. The router isn't a simple switch statement either; it's a minimal router based on URLPattern (the web's version of path-to-regexp). Comparing these to the Node built-ins isn't exactly a fair comparison, I would say.
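To make the difference concrete, here is a hedged sketch of what pattern-based routing buys you over a switch statement: named path params. This is hand-rolled for illustration (the `compile`/`router` helpers are hypothetical, not Deno's actual router API, and it avoids `URLPattern` itself since that isn't available in every runtime):

```javascript
// Sketch of URLPattern/path-to-regexp-style routing. Illustrative only.
const compile = (pattern) => {
  // "/users/:id" -> /^\/users\/(?<id>[^/]+)$/
  const source = pattern.replace(/:(\w+)/g, '(?<$1>[^/]+)');
  return new RegExp(`^${source}$`);
};

const router = (routes) => (path) => {
  for (const [pattern, handler] of Object.entries(routes)) {
    const match = compile(pattern).exec(path);
    if (match) return handler(match.groups ?? {});
  }
  return 'Not found';
};

const route = router({
  '/': () => 'Hello world',
  '/users/:id': ({ id }) => `User ${id}`,
});

route('/users/42'); // -> 'User 42'
```

A switch over literal paths can't express the `:id` capture without extra string surgery, which is the part the Deno-side library is abstracting.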

Also, on top of this, with Node you need configuration to get TypeScript working, then you need a package.json, etc. etc.


Yes, I know what Deno is, and when I say "semi-interesting", I mean I'm trying to find a silver lining to praise Deno for. To clarify, the similarity is that this claims to be a "single file" thing by importing the meat of the functionality from elsewhere. Which is not really interesting at all, because using batteries to make websites was already a thing with PHP in the 90s. Or, as I mentioned, it's not that different from just using express or path-to-regexp or lodash or whatever in a typical Node.js setup.

Caching dependencies is very different from general reproducibility. Committing node_modules guarantees that the app works even if the NPM registry were to implode. Try to deploy your deno thing from a cold state (e.g. maybe you're moving to a different AWS region or a different provider or whatever) while there's a deno.land outage and it will blow up. I'm actually curious what this caching story looks like for large cloud fleet deployments. Hopefully you don't have every single machine individually and simultaneously trying to warm up their own caches by calling out to domains on the internet, because that's a recipe for network flake outs. At least w/ something like yarn PNP, you can control exactly how dep caches get shuttled in and out of tightly controlled storage systems in e.g. a cloud CI/CD setup using AWS spot instances to save money.

These deno discussions frankly feel like trying too hard to justify themselves. It's always like, hey look Typescript out of the box. Um, sure, CRA does that too, and it does HMR out of the box to boot. But so what? There's a bunch of streamlined devexp setups out there, from Svelte to Next.js to vite-* boilerplates. To me, deno is just another option in that sea of streamlined DX options, but it isn't (yet) compatible with much of the larger JS ecosystem. </two-cents>


IMO the silver lining to Deno is incredibly simple: its compatibility with the web platform. I'm not sure what you mean by not compatible with the larger ecosystem, as Deno is basically spec compliant except for the Deno namespace (which you can polyfill out).

If you haven’t experienced any pain authoring isomorphic JS with Node, that’s great! My experience has been the opposite of that. But with Deno, everything feels completely web native. You never need to worry about modules, syntax, platform features (even localStorage!), or packages… it just works.

On top of that, all the built-in tooling is high quality and I've never felt the need to replace it: a formatter, test runner, bundler, type checker, doc generator, benchmarker, and even a built-in deployment platform. In fact, I've never had a smoother deployment experience anywhere. There is nothing this cohesive in Node.

If you need one more reason, Deno is arguably the most secure runtime in the world. For this reason, I would not be surprised to see more corporations start to use Deno for user-submitted programs, as we've seen recently with Supabase and Slack.


To be clear, IMHO, Deno looks fine for what it is. The features are great. The cons ironically mostly boil down to "it's not node", i.e. ejecting a non-trivial app from CRA into some vite setup is doable with some effort, but migrating to Deno is, charitably, likely a monumental task that nobody would ever undertake, even considering the upsides.

At the risk of diving too deep into opinion territory, I'm not all that enthusiastic about isomorphic JS (and I say this as someone who's worked on an isomorphic framework). The promise of a low learning curve is certainly appealing, especially for those still in the learning phase, but at least in my experience I find that isomorphism falls a bit short in practice because server and client semantics are just... different.

When I talk about compatibility, I'm specifically talking about non-platform compatibility, i.e. library authors need to consciously target Deno if they want to support it, and the way to do so may be entirely non-trivial (e.g. the lengths that postgres.js goes to, compared to slonik, comes to mind). But most of the JS ecosystem lives on NPM and imports things willy-nilly with no regard for whether their thing will work in Deno because that's the path of least resistance. This is not Deno's fault of course, just an unfortunate reality.


Leo, your comments save me a lot of typing on threads like this, and since I recently wrote[1] what beeandapenguin wrote above almost to a point (sans security), I feel obliged to expand a bit.

You are right about incompatibility being a major issue; Deno recognizes that as well, hence, they are working on a compatibility mode that allows using Node specific libraries in Deno[2].

> migrating to Deno is, charitably, likely a monumental task that nobody would ever undertake, even considering the upsides.

This is, of course, contingent on the architecture used: for code tightly coupled to frameworks/runtimes it is indeed a monumental task. I have two small-to-mid-size SaaS apps happily running on Node.js, but I'm looking forward to replacing it with Deno solely for the streamlined DX. The apps follow DDD architecture, thus framework-specific stuff is decoupled into a service/adapter and changing it is a day's work. The major technical roadblock for now is indeed the incompatibility of third-party libraries/SDKs written for Node.js (Google SDK, MongoDB driver, etc.).

[1] https://itnext.io/moving-libraries-to-deno-the-whys-and-hows...

[2] https://github.com/denoland/deno/issues/12577


> The major technical road-block

Yeah, this is primarily what I'd expect would hold back migrations, both in actual technical terms (e.g. Deno-flavored libraries for some tasks may not exist at all) and in buy-in from engineers. Don't get me wrong, I'd love to seriously consider Deno for our (very large) codebase (a 1000+ package monorepo with 400+ engineers committing), and I say this as someone who's successfully led a number of massive migrations for this monorepo. But Node -> Deno at even 1/100 of this scale is, in my mind, potentially orders of magnitude more difficult than even a monorepo-wide Flow -> TypeScript conversion, which is already a fairly daunting migration.


> Committing node_modules guarantees that the app works even if the NPM registry were to implode. Try to deploy your deno thing from a cold state (e.g. maybe you're moving to a different AWS region or a different provider or whatever) while there's a deno.land outage and it will blow up

You can just move your DENO_DIR (cache) along with the rest of your code the same way you can move your node_modules folder.

See: https://deno.land/manual/linking_to_external_code


Or you can use `deno vendor` to check in your dependencies into version control, or put a caching HTTP proxy between you and the origin server. Don’t be fooled: Node & NPM have these same problems.


> I wouldnt say an iframe and this are in any way shape or form comparable. this is a "full-fledged" website.

This is what's called an "analogy".

But your other points are valid.


Now, add jsx and ssr to your example, deploy it, then compare with the deno version in terms of performance, code length, and dev time.


Why? Just so you can tell another developer that there's a compiler transpiling non-standard syntax into function calls that concatenate strings at runtime? While the output HTML that the user sees is exactly the same? That's exactly what I'm calling out as library vanity. My example is SSR; that's literally the default baseline. It doesn't make a very strong argument to imply my 5-min thing will somehow be worse if only you get to decide what random garbage to add to it to make the alternative look better. E.g. make Hegel types work in the original and then let's talk loss of productivity from arbitrary decisions.

Deployment for a vanilla node.js thing is as simple as adding `node index` as the entry point in your favorite provider (because they all have node.js images these days), I've had such a thing humming along for years. Again, it's really not rocket science.


Different use cases. You equate SSR to just serving strings. Others need to use JSX + SSR together (be it personal preference or hard requirement).

Imperative vanilla code vs. declarative components: both have their place.


I'm with lhorie. SSR literally is about serving strings... you're the one equating it with server-side JSX. JSX is syntactic sugar over vanilla JS, which in turn renders strings.

Rendering HTML on the server has always been the standard way of doing it, so the whole concept of SSR is funny to me. We've been creating new abstractions that trade old problems for new problems, and then newer abstractions that trade out problems again, since the dawn of time.
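To make the "sugar" point concrete, a JSX factory is at bottom a function that concatenates strings. This is a hedged sketch only: the `h` helper below is illustrative, not any real JSX runtime's implementation (real ones handle escaping, void elements, components, etc.):

```javascript
// What <main><h1 class="title">Hello world</h1></main> compiles down to:
// nested h() calls that build an HTML string at runtime.
// Illustrative only; no escaping or void-element handling.
const h = (tag, props, ...children) => {
  const attrs = Object.entries(props ?? {})
    .map(([key, value]) => ` ${key}="${value}"`)
    .join('');
  return `<${tag}${attrs}>${children.join('')}</${tag}>`;
};

h('main', null, h('h1', { class: 'title' }, 'Hello world'));
// -> '<main><h1 class="title">Hello world</h1></main>'
```

Same string either way; the compiler just lets you write it in angle brackets.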


My point is that it doesn't matter, serving strings or rendering React components. Folks who have to work with JSX + SSR for one reason or another will appreciate what Deno's team has done here.

And yeah, sure, you can always take a simple demo app with declarative components and turn it into a few lines of imperative vanilla code and say it's simpler this way. But then what? How are you tackling scaling, organization, composability, and deployment? (These are the real things the Deno team is trying to show here, are they not?) By the time you design everything out and put all of this in place for your vanilla code, you'll end up spending just as many resources (if not more) as you would have using declarative components with Deno.


What I was trying to get at is that whether you have to work with JSX or whatever, that doesn't really have much correlation with Deno per se. CRA/Next/Remix give you decent JSX setups out of the box too (for scopes where JSX is actually justifiable), and so on for all the popular framework flavors, so it kinda doesn't do Deno any justice to say what amounts to "hey look, it can do the most basic of things when you pull in a bunch of libs".

If the point of the article was to highlight a super simple, no-fuss edge computing deployment thing, maybe it would have been better to lead with that? Because if you lead w/ "A whole static website in a single JS file", then let's not blame me for pointing out that that's a relatively trivial task to accomplish with other technologies.


Yes, you are certainly not alone on that. The headline could be made better. Focus should be more on the composability and tooling side of things.


Agree 100%. The first time I heard the term "Server side rendering" I wondered what the hell it meant! Must have been coined by the new-fangled DOM-manipulator army. Modern web development is a big, clunky, slow mess, and for no good reason.


SSR means server-side rendering. String-ness is irrelevant (everything is a string as far as HTTP is concerned). The difference is between serving HTML vs. JS for the purposes of generating a DOM tree. The article is using nanossr specifically to serve HTML, MPA-style. My thing is using template strings, which is what systems like lit-html use for their flavor of "declarative components".

Whether one wants to squint at this and think of React is neither here nor there, IMHO. Svelte, for example, cannot implement this website in this MPA format with only one `.svelte` file, but I don't think it's necessarily more verbose or slower to develop with than, say, Gatsby.


>The only semi-interesting thing here is that this demo pulls dependencies from 3rd party registries via HTTP without an explicit install step.

We used to call those script tags back in the olden days...


> except that if node.land or crux.land go down, you've lost your reproducibility.

I wouldn’t say you lost it, I’d say you never had it in the first place.



