
I think it's worth noting that back-end web development is an unusually pleasant place. In most other types of programming, many technical decisions are made by the platform, and working with tons of legacy cruft is the norm, not the exception. For example, where else can you so freely choose the programming language?

Linux Driver? Use C. Mobile App? Java or Swift/Objective-C for Android or iPhone, respectively. GUI App? Again, depending on your platform, that will be either C++, Swift/Objective-C, or some .NET thing. Making a neural net? You could do everything from scratch, but it probably makes more sense to just use a platform like TensorFlow and Python.

Backend web development on the other hand: use anything! Wanna use lisp? Go ahead. Wanna store your data in an unproven experimental database? No problem. Wanna use micro-services? Monoliths? Anything goes.

Browsers are extremely complex application platforms. But despite how you may feel, they didn't succeed because of their weaknesses. The browser as an application platform succeeded because of its strengths. Web browsers are the least bad option (and beat out many other worse options).

For better or worse, browsers are what we have. Make the most of their strengths, and deal as best you can with the weaknesses.



This is exactly why I've always been puzzled by the urge to move more and more logic to the frontend. In backend development you have a choice of many mature languages, tools and frameworks which are fairly sane. I'll take that environment as the foundation of a web application any day and apply the mania of client-side JavaScript to it selectively, in cases where we need to minimize server roundtrips.


> why I've always been puzzled by the urge to move more and more logic to the frontend.

This is usually about a mix of scalability, responsiveness, and partition resilience; with the level of importance of those being dependent on the application.

* Scale: if the client makes fewer requests of the server(s) then you have reduced server and network requirements. As server resources get cheaper (and the cheap options more reliable) that side of things is becoming less important, but the network requirements extend far beyond you and almost all the way to the user (see the following two points), who might have a very slow connection at times.

* Responsiveness: the more that you can do client side the quicker your application will feel to the user, even if there is a delay in updates actually hitting the global view you can make it look/feel instantaneous (though you do start to have more problems wrt conflict management when people are collaborating). Have a look at the different methods online games (where real responsiveness and the appearance of instant iteration are vital) manage this.

* Partition resilience: if the client can keep working for a while without needing to talk to other hosts, perhaps queuing updates for replay when communication is restored, your application can keep functioning if there is an issue at your end (glitch at your server provider perhaps) or the user's (on mobile and temporarily in a radio blackspot?) or in between (i.e. a routing issue between ISPs). Even partial resilience is better than none: even if I can't see other people's updates until I'm back in a good signal area, if the app calmly tells me so but allows me to continue to add my work and review already locally cached information, I can continue to work (or play!) in a way that wouldn't be possible if more of the logic were server-side.
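The queuing idea in that last bullet can be sketched in a few lines. This is a hypothetical illustration only; the `OfflineQueue` class and `sendBatch` callback are made-up names, not any particular library's API:

```typescript
// Sketch of partition-resilient client logic: apply updates to a local
// cache immediately (so the UI stays responsive and readable offline),
// queue them, and replay the queue when connectivity returns.

type Update = { key: string; value: string };

class OfflineQueue {
  private pending: Update[] = [];
  private cache = new Map<string, string>();

  // Apply locally first, then remember the update for later replay.
  apply(update: Update): void {
    this.cache.set(update.key, update.value);
    this.pending.push(update);
  }

  // Reads are served from the local cache even while offline.
  read(key: string): string | undefined {
    return this.cache.get(key);
  }

  // Called when connectivity is restored; sendBatch is whatever transport
  // the app uses (a fetch POST, a websocket message, etc.).
  async flush(sendBatch: (batch: Update[]) => Promise<void>): Promise<number> {
    const batch = this.pending.splice(0, this.pending.length);
    if (batch.length > 0) await sendBatch(batch);
    return batch.length;
  }
}
```

A real implementation would also persist the queue (e.g. to IndexedDB) so it survives a page reload, and would need a conflict-resolution story for the collaboration case mentioned above.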


I recently moved more logic out to the frontend for one reason only: responsiveness. The experience was a lot better, with the sole exception of older Android phones on wifi (all these new improvements to cellular data mean we have a lot more bandwidth than we used to, but latency is often no better now than it was a decade ago).


>This is usually about a mix of scalability, responsiveness, and partition resilience; with the level of importance of those being dependent on the application.

I have never seen anyone make those arguments. I've seen people vaguely reference them in a hand wavey "I don't understand this but google facebook I am right" way. But that's as close as it gets. It is usually about following trends.


In some cases it may be about following trends.

But this particular trend started from an effort to push things client-side for good reasons that are vital, or at least useful, to many projects. It being trendy doesn't necessarily mean it is wrong! Don't be so fast to dismiss something because you don't immediately see/understand the benefit.

There is often some confusion between "mobile first" and "offline first" - the two overlap considerably, but being a good mobile app doesn't absolutely necessitate being able to operate offline, and offline capability does not require or imply an app will be usable on mobile (it may be too large and/or fiddly in terms of UI on a small screen, not practical to use in a touch manner, or too demanding of CPU and memory resources). Some of the hand-wavey-ness is people who know they are driving for "mobile first" and think that means they have to drive for "offline first" without really understanding the benefits & implications of that.


You are responding to a strawman. I said nothing about the trend, much less dismissed it. I addressed the reasoning I've seen people use exclusively to push for it.


I think it mainly comes from the fact that web applications are becoming more interactive. If you are going to implement Photoshop in the browser, there's going to be a lot of client-side logic.

Before, web development was more like, "fill in this form and when you're ready hit submit and we'll check it." Now people are implementing word processors and music libraries.
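The shift described above can be made concrete with a tiny example. This is a hypothetical sketch (the `validateForm` function and its fields are invented for illustration): validation that the old "fill in this form and hit submit" flow did on the server now commonly runs client-side, so the user gets instant feedback without a roundtrip.

```typescript
// Client-side validation that would once have required a form POST and a
// server-rendered error page. Field names here are illustrative.
type FieldErrors = Record<string, string>;

function validateForm(fields: { email: string; title: string }): FieldErrors {
  const errors: FieldErrors = {};
  // Deliberately loose email check - just enough for instant UI feedback;
  // the server still does the authoritative validation on submit.
  if (!/^[^@\s]+@[^@\s]+$/.test(fields.email)) errors.email = "invalid email";
  if (fields.title.trim().length === 0) errors.title = "title required";
  return errors;
}
```

Word processors and music libraries push this much further, of course - there the client holds real application state, not just pre-submit checks.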


That's very true, there are web apps where "cases where we need to minimize server roundtrips" are the default case. For these situations it makes sense to run a lot of the logic on the client.

However it seems like this has been taken way too far -- I run into a fair share of developers out there who look at basically any proposed web app and say, "Oh yeah, this needs to be a SPA built with React." Why? "Because it's better" and then a rattling off of generic reasons that SPA and doing everything client-side are superior. This is basically the modern version of the Lisp fanboy, they find a good hammer they want to work with more and now everything they see is a nail.

These are often the younger guys who have not yet seen this pattern carry itself out over a generation or two of developers -- if there's any reason why tech ageism is amazingly dumb it's this one imo (have the guy on your team who's graduated past the fanboy stage make your tool, platform and framework choices, you will save far more than the premium that you have to pay him).


> [...] I've always been puzzled by the urge to move more and more logic to the frontend.

Moreover, we've already had applications written in client-database architecture, and it was nowhere near pretty. Fortunately they died long ago, but unfortunately they're now being resurrected.


Network performance increases and reliability have taken a step back due to the shift to wireless, and client devices have become more powerful relative to servers.

If the 1995-2005 trends had continued, by now internet services would be served by 15 GHz servers and low-latency 1-10 gigabit networks.


The reason for doing that is that you can deliver experiences that are impossible if you do everything in the backend.


What are the current best systems that generate rich data models and runnable processing rules from the backend?


Remember, MS-DOS also succeeded because of its "strengths" in some very special, narrow sense. In the more ordinary sense, it succeeded in spite of its weaknesses, but Windows has slowly recovered from that and is now a good system.

The modern web is somewhat the reverse: it is worse than its progenitor. HTML, HTTP and what would eventually be called REST were all good ideas and succeeded because (a) they were a good way to put hyperlinked multimedia on the internet and (b) hyperlinked multimedia turned out to be the thing that masses of users wanted.

The subsequent effort to retrofit that and turn it into a zero-install software delivery platform is where all the insane hackery comes from; and the hacks that succeeded didn't have to be good. They are just what worked at the time with some combination of IE, Netscape and Flash.


It's not good, just better than anything else.

If things like running apps over an X display server didn't totally suck, then the clients would have been adopted and eventually it would be zero-install. God didn't command that browsers be installed everywhere. Companies and users did so of their own free will.

Maybe I haven't dug deep enough, but what are these alternative, non-hacky delivery platforms that the world passed up in favor of the web? The ones I've heard about have tons of problems, just like browsers.


That was the promise of Java - write once run anywhere. JavaScript managed to achieve it instead.


You mean the browser achieved it instead of the JVM. It could have been something other than JavaScript as the scripting language. What mattered was the platform everyone ended up using.


For practical purposes it's the same either way.


But it's not the same, because not all platforms use a markup language with a separate style language in addition to the actual programming language.

A JVM or .NET based web would have been a very different kind of platform.


What I mean is that the VM and the language are a package, in this case.


IMO the browser's strength is really 'zero-install distribution of applications', I don't think HTML/CSS/JS have anything to do with it.

Having said that, I think your argument is a straw man; back-end web development is just a part (sometimes even an unnecessary one) of browser programming.


This is why I don't understand the purpose of Electron apps. Why not remove the Electron part and just install and run a local HTTP server written in any language and access it from your browser? No need to download the same 50MB+ for Electron every time. The overhead from running multiple browser engines is gone. Basically everything is as it should be.

The Kimchi KVM UI is a pretty good example of how to do it. https://news.ycombinator.com/item?id=12477030
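A hypothetical minimal version of this "local server instead of Electron" idea fits in a few lines of Node. The port number and page content here are made up for illustration; the point is that the user opens the app in whatever browser is already installed, so no bundled Chromium engine has to ship with it:

```typescript
// A tiny local HTTP server that serves the app's UI to the system browser.
import * as http from "http";

const page =
  "<!doctype html><title>Local app</title><h1>Hello from localhost</h1>";

function createAppServer(): http.Server {
  return http.createServer((_req, res) => {
    res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
    res.end(page);
  });
}

// Usage: createAppServer().listen(8080), then open http://localhost:8080
// in any installed browser - no per-app browser engine required.
```

A real app would serve its actual assets and API from here instead of a static string, but the distribution story is the same: one small binary plus the browser the user already has.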


Marketing. People pay more for an application than a web page. Why did YNAB change from being a spreadsheet to an app?


Well I think it has something to do with the web's success.

For one thing, the HTML/CSS/JS separation of concerns made it so that someone could make a non-interactive, static web page with very little knowledge. The basics of HTML and CSS can be taught in a week. This is important! I think we would have far less success training people to create static documents with many of the various programming language UI Kits. Though becoming less important, there was definitely a time when this helped to increase the adoption of browsers (and I would argue still makes it easier to teach).

Another thing is that HTML/CSS/JS were all open standards that have accepted contributions from a wide range of parties. Look at Scheme: beautiful, consistent, and completely unused. HTML/CSS/JS would without question be much more pleasant if they had been designed by a benevolent dictator. But the open nature of the RFC process, I think, both greatly helped adoption and contributed to the inconsistent nature of the APIs.

You only have zero-install (i.e. ships with the OS) because the web got widespread adoption, and I think HTML/CSS/JS contributed to that.


> Making a neural net?

That's not as constrained as you might think[1] but yes, I get your main point and, as a back-end web developer, I agree.

[1] https://en.wikipedia.org/wiki/Comparison_of_deep_learning_so...


JVM can be targeted from Scala or Frege too.

Your argument boils down to transpiler availability.




