This post feels like an uninformed and undifferentiated rant against "things are too complex". Let's start with the first paragraph: What does the JavaScript fetch API have to do with data management? How can you compare the fetch() API with Swagger (an API documentation format) and Protobuf (a serialisation format)? That doesn't even make sense.
Second paragraph: "The UI should automatically update everywhere the data is used". Again, what does this have to do with any of the above? That is state management, yeah, and you can build proper state management with any HTTP library and any message serialisation format.
Request batching: How would that happen "automatically"? By waiting to fire requests and then batching them?
UX when fetching data: What does that have to do with any of the above? You still have to decide how your UI displays the fact that a piece of data is loading. What do you expect there to be in place? Best thing I could imagine is a global operations indicator à la Adobe Lightroom which tells you how many HTTP requests are in flight.
I could go on, but the last paragraph maybe highlights the author's lack of understanding best: "UI Frameworks (at this point, React has won)". If React had "won", then why would we be having this discussion? React hasn't "won" because it solves one piece of the puzzle: rendering. For every other little thing you have to incorporate another library or figure out your own solution: Routing, State Management, CRUD / HTTP API, etc. If anything, Ember.js would most closely fit the bill of incorporating most of the things the author seems to care about yet can't articulate clearly.
Data fetching, caching, consistency, and UX are all closely related. If you treat them as separate problems, you're punting the problem onto product engineers, who won't solve it well. (See the last paragraph of the post, which suggests this same idea.)
> That is state management, yeah, and you can build proper state management with any HTTP library and any message serialisation format.
You're right that doing this manually is just state management. But to automatically update the UI, it means your client data layer (eg. Apollo) needs to know & track the identity of fields; the data layer also needs to be able to subscribe to fetches anywhere in your app, not just local fetches; it also means your protocol needs to support this identity (via agreed-upon ID fields); etc. These problems are all closely related.
> Request batching: How would that happen "automatically"? By waiting to fire requests and then batching them?
eg. Relay does this by statically combining requests throughout a React tree into a single request at the top of the tree. You could also do it dynamically, as you suggest. The tradeoff is often performance: waiting to batch delays the first request.
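Roughly, the dynamic version looks like this (a minimal sketch; the /batch endpoint is made up for illustration):

    // Collect every request issued in the same tick, then fire one batched call.
    type Pending = { query: string; resolve: (data: unknown) => void };

    let queue: Pending[] = [];

    function batchedFetch(query: string): Promise<unknown> {
      return new Promise((resolve) => {
        queue.push({ query, resolve });
        // Flush once the current tick's requests have all been queued.
        if (queue.length === 1) queueMicrotask(flush);
      });
    }

    async function flush() {
      const batch = queue;
      queue = [];
      const res = await fetch("/batch", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(batch.map((p) => p.query)),
      });
      const results: unknown[] = await res.json();
      batch.forEach((p, i) => p.resolve(results[i]));
    }

The microtask delay is exactly the performance tradeoff mentioned above: the first request waits for its siblings.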
> UX when fetching data
Having engineers manually define loading states doesn't scale. React is approaching this problem with Suspense, and you could imagine standard loading states when fetches are in flight.
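For a rough idea of the direction: the stable code-splitting form of Suspense already shows what a standard loading state looks like (data fetching through Suspense is still experimental; ./Feed here is just an illustrative lazy-loaded component):

    import React, { Suspense } from "react";

    // Any suspending child below the boundary gets the same standard fallback,
    // so individual components never hand-roll their own spinners.
    const Feed = React.lazy(() => import("./Feed"));

    function App() {
      return (
        <Suspense fallback={<div>Loading…</div>}>
          <Feed />
        </Suspense>
      );
    }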
State management, for anybody half competent, is stupid easy and not tied to the UI. State management is a data storage/retrieval problem only. Once the UI has the state information it needs you can populate it for the user however you want in many different ways.
State management is also proclaimed as something it isn’t because web UI is full of incompetent expert beginners who can’t write two lines of original code and are helplessly mortified if their colossal framework is taken away.
Here is simple web UI state management:
* have a centrally available object in the UI code that stores state data.
* store that data upon change. That storage can be localStorage for maximum simplicity. It can also be a locally written file if you have a locally running service or an HTTP server. The further away from the local computer that storage location becomes, the slower it is, regardless of access mechanism.
* state changes can occur from user interaction or system updates to remote data. Most important are user interactions because that data is locally available. Keep up with remote system changes as best you can, but unless you own that data as well it’s a window into some distant concern.
* when state changes, update your central state object and save the changed state object. Most of the time UI developers are only concerned with state updates and not saving state, because they don't know what they are doing and hope the framework does everything (or it must obviously be unnecessary).
* once the state object is updated, and optionally saved, update the UI. For most UI developers updating the UI is the ultimate and only struggle. This is such a junior-level basic required skill for UI developers. Knowing that, you can root out the incompetent people during hiring.
But but but what about 2 way data binding... Treat it as an event. Update the central state store. Process the change. Still simple and no framework is needed.
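A minimal sketch of the above, with illustrative names (the data-bind attribute convention is made up):

    // Central state object, restored from and persisted to localStorage.
    const state: Record<string, unknown> =
      JSON.parse(localStorage.getItem("appState") ?? "{}");

    function setState(key: string, value: unknown): void {
      state[key] = value;                                      // 1. update the central object
      localStorage.setItem("appState", JSON.stringify(state)); // 2. save the changed state
      render(key, value);                                      // 3. update the UI
    }

    function render(key: string, value: unknown): void {
      // Old school DOM update; real code would map state keys to elements.
      const el = document.querySelector(`[data-bind="${key}"]`);
      if (el) el.textContent = String(value);
    }

    // User interactions and remote events both funnel through setState:
    document.querySelector("input")?.addEventListener("input", (e) => {
      setState("query", (e.target as HTMLInputElement).value);
    });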
I'm curious about your approach to making UIs. Are there any libraries you do use?
Do you use anything to render your view or do you construct the nodes and insert them into the DOM directly?
Is your centrally available object an instance of a class or is it a plain JavaScript object?
Does it manage the state for all your UI or is it segmented in some way?
How do you drive changes to your UI? Is there an imperative process that renders the UI when you know the data model has changed or is your state object an observable or something that your views subscribe to?
The state object is just a regular JavaScript object. I prefer to avoid classes as they increase complexity for almost no substantive benefit.
Changes to the UI are handled as events whether those events are from user interactions, HTTP responses, or web socket data events.
I update the DOM using old school DOM methods and some custom DOM methods I have written to do things the standards do not provide.
This stuff that people cry tears of blood over is easy and very basic. I really think the root of the evil is addEventListener allowing multiple handlers for the same event, which enables laziness and hugely increased complexity.
What’s more challenging is writing test automation that executes in the browser and coordinates between browsers on different computers.
I have been away from my primary employer for about a year and they are HUGE with many different teams that do many different things in many different ways. I just try to gel to the team and not be disruptive.
I think the post is a bit unfortunate in its wording and this seems to have sent you off on a wrong track.
From how I read it, the post is not specifically about JS or a discussion of the specific technologies mentioned, but rather concerned with the following very general situation:
1. There's a user sitting in front of a browser.
2. There's a backend server providing data and points of interaction with that data.
I think the central point of the post now is that we don't have a satisfying technical solution for this situation.
Let's take a look at one of the points you mentioned. Maybe you'll find there are actually some valid points in the post and give it a more favourable reread.
> UX when fetching data: What does that have to do with any of the above?
Here, the author of the post writes: "It’s a big burden for engineers to have to manually add loading spinners and error states, and engineers often forget."
I think it's more or less clear how you could implement this using only plain vanilla js. Cumbersome but doable: A very manual, imperative process.
Now let's envision a technology from a possible future:
    const twitterFeed = createMagicDataSource("https://twitter.com/...")
    const feedComponent = magicRendererComponent(twitterFeed, state => /* HTML-like declarative description of the visuals */)
Imagine this was everything you had to write in your code to get the following:
* state gets automatically loaded when the first instance of the component is created
* updates are automatically visualized in the client based on the internals of the declaration in the renderer
* you don't have to specify if the updates are done by polling, websockets or whatever: the two magic methods figure this out by themselves.
* you don't have to specify how the data is fetched in the first place: giving a URI to the `createMagicDataSource` function is enough.
* additional instances of the component don't fetch the data again
* updates are efficient: the sync method only exchanges exactly the data required, and only the minimal visual updates are performed
* marking feeds as "seen" by the user is also done by magic and syncs across devices (same for other non-ephemeral ui state).
Now, I'm sure you'd agree that we are not there yet technologically. But I hope you agree that this would be really nice.
> But I hope you agree that this would be really nice.
Black boxes with tons of magic are great if you need to do exactly that one thing the creators had in mind. When you need it to work a little bit differently you're usually either just out of luck, or the added flexibility makes the whole API very complex (and usually buggy, as it goes hand in hand). It's just super hard to make things very high-level and simple, while still flexible enough.
Having a number of simpler functionalities that you're free to compose any way you like is, for me, the preferred approach every time. Most of the things that you mention can be solved with fairly simple wrapper libs doing just one thing (tracking the state of a connection, tracking errors, batching requests, etc.). Perhaps not in a perfectly reusable way, but good enough to allow easy recycling from project to project, and it's way easier to use and debug and adjust than if we had one huge built-in service object to handle it all.
There are a few black boxes most people are very happy not to peek into; having these black boxes is a huge productivity win:
* Compilers
* Garbage Collection
* OS Kernels
* File systems
* Docker
* App Stores / package managers
* Infrastructure-as-code frameworks (e.g. Terraform)
* JS Frontend frameworks
I think there are three things at work here: 1. Familiarity with treating item X as a black box (high for compilers, low for infrastructure as code). 2. Maturity of the interfaces (high for compilers, low for infrastructure as code). 3. Relevance of the details for your use case / business context.
> Having a number of simpler functionalities that you're free to compose any way you like is, for me, the preferred approach every time. Most of the things that you mention can be solved with fairly simple wrapper libs doing just one thing
I totally agree: that's exactly how I imagine the magic functions being defined.
> Black boxes with tons of magic are great if you need to do exactly that one thing the creators had in mind.
This totally depends on what your business context is. For most companies, dealing with CORS in HTTP requests is very far from their business domain. Still, developers spend a lot of time configuring and building endpoints, API clients, etc. at a low level of abstraction.
I think the parent is completely missing the point. In my opinion, the way it should be is that in the backend you export a typed function like "getTasks(filter: Filter): Tasks".
And in the frontend component you should be able to use it immediately, correctly typed, without having to duplicate types or anything. And if the function is not implemented yet, the component will not build until it is (sketch below).
This should be instead of REST and GraphQL, both of which have many issues imo.
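Something like this hypothetical sketch (none of these names are an existing library; the transport is an implementation detail the component never sees):

    // shared/api.ts, imported by both backend and frontend.
    export interface Filter { status?: "todo" | "done" }
    export interface Task { id: string; title: string; status: "todo" | "done" }

    export interface Api {
      getTasks(filter: Filter): Promise<Task[]>;
    }

    // frontend: a thin generated or hand-written client satisfying Api.
    export function createClient(baseUrl: string): Api {
      return {
        async getTasks(filter) {
          const res = await fetch(`${baseUrl}/getTasks`, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(filter),
          });
          return res.json() as Promise<Task[]>;
        },
      };
    }

Renaming a field on Task then breaks the frontend build until it's fixed there too, which is the point.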
So backend and frontend must have a wholly compatible type system? Even good old WSDL only got 90% of the way there. Keep going with fancier approaches and you end up looking at CORBA and its ilk.
Yeah they should be capable of having a compatible type system. So if you change a field name on backend, frontend won't compile unless you fix it there too.
Why? REST doesn't have anything to say about this. You can design your application however you prefer, and publish input and output specs however you prefer, without tailoring your application to the HTTP layer.
Instead of having to implement REST APIs is what I mean.
It could work with REST underneath, or with REST being auto-generated based off your function names, but you shouldn't have to deal with that. And REST in general would probably not be optimal for transferring the data here.
I’m not sure what you think REST is suboptimal for here; the function you described maps trivially to it:
GET /tasks?whatever=filters
I agree the part that wires up the HTTP behavior should be minimal and mostly uninvolved with your application logic. (I’m actually working on such a beast!).
Yeah, I'm saying that you shouldn't have to think of such things as URL mapping. What if all of a sudden you want to post JSON as a filter? You will have to start thinking of new arbitrary ways to handle this query, etc. You have to both parse this new deeply nested filter you thought of in the backend and also stringify it in the frontend. You add so much extra overhead trying to convert the query into some form of string URI that adheres to URL rules.
Let's say you now want to add a filter for frontend where you can choose multiple dynamic fields and their conditions can be either greater than, equal, or IN some values, I think you get the gist.
You then will have to do something like tasks?status=in:todo,progress
Or status=todo&status=done
But then you suddenly need a greaterThan query. Are you going to do tasks?priority=gt:5, for example?
Or things like that. And you have to worry if this even adheres to URL standards. Things eventually get so messy. And you are very constrained in everything you do.
And how do you typecheck all of that? Also, all of this is just error-prone: you have to write documentation for it, every REST API is a little bit different, people often mess up when naming the URLs, etc.
> Yeah, I'm saying that you shouldn't have to think of such things as URL mapping. What if all of a sudden you want to post JSON as a filter? You will have to start thinking of new arbitrary ways to handle this query, etc. You have to both parse this new deeply nested filter you thought of in the backend and also stringify it in the frontend. You add so much extra overhead trying to convert the query into some form of string URI that adheres to URL rules.
I’m not sure I even follow what you’re proposing. That URLs are insufficient for nested structural queries or that you want URL queries and POSTed JSON queries (which is not even REST) at the same time?
If it’s the former, this isn’t something a client or server should need to worry about. Server tools should make defining the API simple in the native language, and generate documentation which can provide client SDKs (again I’m building such a tool).
> Let's say you now want to add a filter for frontend where you can choose multiple dynamic fields and their conditions can be either greater than, equal, or IN some values, I think you get the gist. [...] status=in:todo,progress [...] status=todo&status=done
The most common way this is handled is either your second syntax or status[]=todo&status[]=done (or even with explicit indexes) to make clear it’s multiple values. AFAIK most major URL (de)serializers handle this automatically with no developer effort.
> priority=gt:5
Why not priority=>5? Again a library can trivially handle simple expressions like this, you don’t have to.
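Both halves are mechanical with standard tooling: URLSearchParams already round-trips repeated keys, and an operator prefix is a one-regex parse (the gt/in syntax below is just illustrative):

    // Repeated keys: ?status=todo&status=done
    const params = new URLSearchParams([
      ["status", "todo"],
      ["status", "done"],
    ]);
    params.getAll("status"); // ["todo", "done"]

    // Tiny illustrative parser for operator-prefixed values like "gt:5".
    function parseCondition(raw: string): { op: "eq" | "gt" | "in"; value: string } {
      const m = /^(gt|in):(.*)$/.exec(raw);
      return m
        ? { op: m[1] as "gt" | "in", value: m[2] }
        : { op: "eq", value: raw };
    }

    parseCondition("gt:5"); // { op: "gt", value: "5" }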
> And how do you typecheck all of that?
This is something the library should handle too. And it’s something I know about because I’ve built it (and again I’m working on one I can make open source). For a hint of how this might look and some prior art, check out io-ts.
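For a concrete taste of the io-ts approach: one definition yields both a runtime validator and a static type, so decoding a query both checks it and types it.

    import * as t from "io-ts";
    import { isRight } from "fp-ts/Either";

    const TaskFilter = t.type({
      status: t.union([t.literal("todo"), t.literal("done")]),
      priority: t.number,
    });
    type TaskFilter = t.TypeOf<typeof TaskFilter>;

    // decode returns an Either: a typed value on success, errors on failure.
    const result = TaskFilter.decode({ status: "todo", priority: 5 });
    if (isRight(result)) {
      const filter: TaskFilter = result.right; // fully typed from here on
    }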
> Also, all of this is just error-prone: you have to write documentation for it, every REST API is a little bit different, people often mess up when naming the URLs, etc
I bet you can already predict it, but the library should take care of this and I’m building it.
None of this should be so messy or so much work for people developing services. You got that right! But that doesn’t mean REST is bad, it means the tools for building and consuming REST services aren’t very mature. But they certainly can be.
Should be doable. Applications have only recently needed custom queries, with the rise of BI, so I guess DBs aren't there yet.
BTW IMO most of BI type stuff could be solved by just allowing direct SQL, but for this DBs would need to be written without buffer overflow bugs etc.
I would love to have SQLite that has easy data-at-rest encryption and that supports access control.
Is PouchDB popular at all on the web these days? Things like Realm etc. on mobile solve this problem by just giving you a client-side database to read and write from; transport to a backend is handled by a sync engine. I guess this is sort of what Meteor was trying to do, but it never really took off.
I actually think the problems outlined are pretty valid.
Yeah, there are solutions to them, but I think what the author is saying is that you have to subscribe to something batteries-included, like Meteor, or otherwise implement the solutions yourself from bits and pieces.
"You have to subscribe to something batteries-included, like Meteor, or otherwise implement the solutions yourself from bits and pieces."
...yes; that is, in fact, the conceptual dichotomy in solutions to essentially-complex problems. Either someone else solves them for you, or you have to solve them yourself.
I mean, yes, there is a part of the solution-space in-between these two extremes — a point where there's a batteries-included thing that someone else built but then gives away for free, perhaps as an open-source project you just have to run on your own infra. So it's mostly solved for you, and then you just do a little bit to "get the solution running" for your use-case. That point in solution-space is conspicuously absent for web data sync.
But, in domains with essential complexity, that middle-ground part of the solution-space is usually conspicuously absent. Because it takes continuous dedicated effort (i.e. labor; capital expenditure) to solve the complex problem in a cleanly-abstracted way. And it's very rare that anyone's going to go to the effort, unless they expect a return on their labor investment (by e.g. keeping the solution proprietary, and building a SaaS business around it.)
I might intuit that 'request batching' could benefit from HTTP/2, but that's probably not what they were thinking about. And you'd have to have a lot of simultaneous HTTP requests for it to help at all.
But I'm with you that batching this in your JS should not be a browser thing, or even a built-in standard-library JS function.
One could do it themselves: maybe something like GraphQL's idea of more complex queries, or just throw the requests you want to make into an array and, when it hits length = 5, send them all at once. But I don't see how that would help response times at all.
Almost all of the mentioned issues have solutions out there, but the author is jumping around the stack with no sense of true purpose. How does GraphQL impact your SPA updating views? It doesn't.
Of course it does, because GraphQL allows you to roll up queries for separate entities and get the data back for those in a single HTTP request. And that capability is something Relay and Apollo take advantage of. In this way a component can reference only the data it needs and the query rollup and caching happens automatically.
Half of the stuff he mentioned is covered by apollo-graphql and libs in that ecosystem: query batching, notifications (via subscriptions), automatic state updates via the cache, type safety with TypeScript and graphql-codegen. It doesn't seem like he even looked at the libraries he listed. :(
Yeah, exactly, half of the stuff. You should be able to have all of this available by default. And it should have first class support without having to do complex setup. Also writing gql queries has always felt hacky and uncomfortable to me, without IDEs having the best support for it, and type safety? But maybe I haven't had the perfect setup.
Also, GraphQL forces you into a certain way of resolving your data, which may not always be the best fit.
Sorry if it was unclear, but my argument is that while many libraries solve pieces of the problem, no one library (or standard) solves all, or even most, of these.
Ember.js seems to check off a lot of those boxes, if you are willing to use a complete solution and not do the 'pick and choose' method of React and friends. While it has lost popularity in the last few years, it has been moving forward technically. It has become leaner and meaner.
Really learning the "Ember Way" to do things can reduce some of the friction the author mentions.
Ember Data can allow you to talk to multiple API backends (differing schemas) while presenting the same model to your UI. If you use the default JsonApi[0]-based backend, you get a lot of things for free: powerful filtering, side-loading data (relationships), consistent error handling. Sometimes it can be chatty, but that's a spot where HTTP/2 can help.
Use the ember-concurrency add-on and you have a nice way to manage your requests: things like debouncing, handling loading spinners, etc.
I'm saddened to see that, almost a decade after Ember's release, the front-end world still insists on rolling its own (terrible) reimplementation of it.
I would've expected that a decade later we would've settled on an Ember-like framework for the front-end and moved beyond constructing URLs and parsing JSON responses manually but apparently not.
As someone who's not a front-end developer, but who works near them and sees the (React) code they produce and their general productivity (or lack thereof), what's so bad about it?
To me as a backend developer just reading Ember's docs, it seems like Ember provides most of the conveniences I take for granted in a backend web framework such as Django, Rails or Laravel.
On the other hand, the front-end code I see from the people I work with seems to have no standard for structure (every project has its own), reinvents the wheel all the time (using Axios and building the URLs manually with string concatenation for example), etc. Most of the stuff they do (and redo) from scratch seems like something that would be handled by Ember to begin with.
So what's so bad? I feel like (as an outsider - feel free to prove me wrong) Ember is fine for most purposes, and edge-cases where React or alternative approaches do provide a benefit can be used ad-hoc without having to use it for your entire application.
Ember does indeed include a lot, and it's all designed to work nicely together. I also develop backend much more than front end, but knowing that we are just using the primary Ember way of doing something reduces the number of decisions one has to make. And it should allow other Ember devs to hop into your software more easily, due to its opinionated nature.
I thought Sails.js or something like it would take the JS scene by storm, and React would move into more of a specialty application spot, but instead everyone decided they were special.
Ember is the closest to what I thought JS development should have been like, and I really appreciate what they’ve built. RedwoodJS seems promising too but we’ll see.
I was just talking with someone earlier today about how I sometimes wonder if we traded off "performance" (heavy air quotes intended) for developer productivity too quickly. Ember solves a lot of the big problems of modern web dev and data handling that this blog post points out, and when I think back to my time working with React, I wonder if that trade-off was really worth it.
> You shouldn’t have to ship kilobytes of metadata describing your data schema to clients in order to fetch data.
I feel like this requirement makes every other requirement impossible unless you use some kind of compiled-language feature, and even then you're not going to have type safety without some kind of metadata.
Also, I don't think it's possible to make any one library that solves all "data fetching" scenarios. How could you reasonably fit live data (eg a stock ticker), enormous data sets (eg looking through individual analytics events), slow but largely static lookups (eg searching a library catalog), and fast but uncachable data (eg a SPA forum) into the same API without making it horrendous?
That said, we do need some better standards. I look after a bunch of API integrations at work and we have everything from SOAP to GraphQL to downloading a magically named .xml.gzip file in an FTP directory, with at least 10 other homemade REST implementations in between, and every single one of them is basically returning the same info but in wildly incompatible ways.
At least the SOAP one crashes when they change the API without telling us :/
I'm currently experimenting with React and WebSockets and they seem to be a perfect fit.
No need to write wrappers for Fetch; network errors and reconnects can be handled at a high level, handlers for each message type can be mounted and unmounted in useEffect hooks, all back-end jobs can notify the user in realtime, and all session-based client-side data can be updated in realtime (in single or multiple open tabs).
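The mount/unmount pattern is just a few lines (the endpoint and message shape here are illustrative):

    import { useEffect, useState } from "react";

    function useNotifications(url: string) {
      const [messages, setMessages] = useState<string[]>([]);

      useEffect(() => {
        const ws = new WebSocket(url);
        const onMessage = (e: MessageEvent) =>
          setMessages((prev) => [...prev, e.data as string]);
        ws.addEventListener("message", onMessage);
        // Unmounting removes the handler and closes the socket.
        return () => {
          ws.removeEventListener("message", onMessage);
          ws.close();
        };
      }, [url]);

      return messages;
    }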
I'm also using uWebSockets.js[0] which is great in terms of API design, stability, and performance. Their benchmarks[1] are just convincing. Highly recommend people using ExpressJS / Koa / whatever to try it.
When I was researching this stuff I read a lot of good things about Phoenix; I honestly think the socket.io and ws NodeJS libraries are inferior compared to that.
It's almost pure luck that the guy who wrote uWebSockets (written in C & C++) also wrote NodeJS bindings, otherwise we'd be stuck with the ones with sub-par APIs, docs, and performance.
Also, your load balancer stack needs to support it, which is becoming less of a problem these days, but there are still some stragglers. You could also end up with an imbalance where some servers have a lot of connections and others have few.
Yes, that's a genuine concern. Though for B2B and B2C SaaS where you've got paying users, I think each of them deserves a websocket, and it could alleviate some headaches for developers.
Not to toot our own horn, but while this mentions GraphQL with Relay / Apollo as fetching clients, with urql and its normalised cache Graphcache we started approaching more of these problems.
Solving the problems the article mentions around fragment best practices is on our near-future roadmap, but there are some other points here that we've worked on that Apollo in particular did not (yet).
Request batching is, in my humble opinion, not really needed with GraphQL, especially with HTTP/2 and edge caching via persisted queries; however, we do have stronger guarantees around commutative application of responses from the server.
We also have optimistic updates and a lot of intuitive safe guards around how these and other updates are applied to all normalised data. They're applied in a pre-determined order and optimistic updates are applied in such a way that the optimistic/temporary data can never be mixed with "permanent" data in the cache. It also prevents races by queueing up queries that would otherwise overwrite optimistic data accidentally and defers them up until all optimistic updates are completed, which will all settle in a single batch, rather than one by one.
I find this article really interesting since it seems to summarise a lot of the efforts that we've also identified as "weaknesses" in normalised caching and GraphQL data fetching, and common problems that come up during development with data fetching clients that aren't aware of these issues.
Together with React and their (still experimental / upcoming) Suspense API it's actually rather easy to build consistent loading experiences as well. The same goes for Vue 3's Suspense boundaries too.
Edit: All this being said, in most cases Relay actually does a great job on most of the criticism that the author lays out here, so if the only complaint a reader picks up on is the DX around fragments and nothing else applies, this once again shows how solid Relay can be as well.
Isn't this like someone in 2011 saying "UI frameworks (at this point, jQuery has won)"? I think things like Svelte and other up and coming UI frameworks are still being developed because we all recognize that React is not the ultimate UI Framework. Perhaps we will never get there, but surely we can do better than where we are at.
^ React is just the new jQuery. We have people who basically don't really know HTML, CSS, or JavaScript but make React apps day-in day-out. Ten years ago the exact same thing was happening with jQuery.
I still cringe when I see questions on StackOverflow where they ask "how can I XYZ with jQuery" where XYZ is something that has absolutely nothing to do with the DOM.
I too am curious. All of the platform-specific frameworks which do networking+datastore, that I've used, check even fewer boxes. Especially when it comes to reactive UI and optimistic responses.
This post is hard to digest - it is a rant on 10 different things. All are valid hurdles but I don't see how we can just tie it all up in a bag and call it "data fetching".
Architectures are what help solve these sorts of problems - not tools or libraries. Placing the blame on tools means the real issue has not been identified.
I've found Relay to be pretty solid with error handling, though I think the latitude in how to handle GraphQL errors does hurt Relay here, as it does all GraphQL clients.
Re: parent, I also was surprised to see Relay dismissed in the list; it solves every problem they bring up, though perhaps not the real-time aspects.
I should have clarified that Relay flavored GraphQL from a backend perspective is lacking in this respect.
Yeah, I think relay is pretty good, but I'm not sure the broader ecosystem is really there yet. Outside of node, the backend libraries aren't really fully baked yet, and the standard seems to have some holes. But I'm somewhat optimistic.
This is possible with the Fetch API actually. You can get the chunked-up contents of the response as they arrive. This link has a good example: https://javascript.info/fetch-progress
The issue is with "Content-Encoding: gzip". In that case, "Content-Length" isn't enough. You can make it work if you toss in an extra header server-side.
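The gist of the linked example, for reference (progress is computed against Content-Length, which is why gzip throws it off):

    async function fetchWithProgress(
      url: string,
      onProgress: (received: number, total: number) => void
    ): Promise<Blob> {
      const res = await fetch(url);
      const total = Number(res.headers.get("Content-Length") ?? 0);
      const reader = res.body!.getReader(); // body is null only for e.g. HEAD responses
      const chunks: Uint8Array[] = [];
      let received = 0;

      while (true) {
        const { done, value } = await reader.read();
        if (done || !value) break;
        chunks.push(value);
        received += value.length;
        onProgress(received, total);
      }
      return new Blob(chunks);
    }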
This applies to everything in the browser honestly. Why can’t I bind a variable to the DOM natively? I want the variable X to match the value of <input> and vice versa without having to set up a bunch of listeners and hope they don’t go in a loop.
sounds like you want something like Vue baked into the browser. but i'd prefer React baked into the browser! we can't have both, so it's probably best we have neither :)
seriously though, why not just write a simple wrapper around event listeners + maybe some proxy magic and get the semantics you prefer? or find a library that does that, i'm sure there's a bunch out there
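Here's roughly what that wrapper could look like (a sketch, not a library):

    // Two-way binding via a Proxy: writing model.value updates the input,
    // typing in the input updates model.value. The equality check in the
    // set trap is what keeps the two from looping.
    function bind(input: HTMLInputElement) {
      const model = new Proxy({ value: input.value }, {
        set(obj, prop, value: string) {
          obj[prop as "value"] = value;
          if (input.value !== value) input.value = value;
          return true;
        },
      });
      input.addEventListener("input", () => {
        model.value = input.value;
      });
      return model;
    }

    const model = bind(document.querySelector("input")!);
    model.value = "hello"; // the <input> now shows "hello"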
Server-side React components have the best data-fetching story I've ever seen.
They demoed an application that has a React component powered directly by a SQL query; when the client next fetched anything from that page, changes in the data coming out of the SQL query were automatically propagated to the client.
No data-fetching API, just coding like you would in a server-side app, with React handling the IO.
I hope someone starts implementing something like Drupal on top, for automatic deadlock prevention, declarative data-driven interfaces for forms, schemas, etc.
You can serialize pretty complex graphs of data and UI information in a new format called "HTML."
The software capable of consuming this format is able to merge fragments of it into something called a DOM.
If you're doing something more complex than this in your app then quit whining about browsers not being convenient for you to abuse. The users don't want your crap any more than browser implementers do.
I've been disappointed at how SWR and react-query seem to only support the bare minimum for mutations. As far as I can tell, they don't offer any kind of protection from refreshes overwriting any local mutations that haven't been saved to the server yet. They don't do any handling around updating the data on the server, so you need to handle issues yourself like retrying, deciding when to send to the server, and making sure you never have overlapping update requests racing each other. I've had trouble finding any library in the React or general JS ecosystem that handles this stuff.
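For what it's worth, the serialization half isn't much code; something like this sketch (the save callback is whatever your app uses):

    // Chain saves so at most one is in flight and they apply in order.
    // A refresh can consult hasUnsavedChanges() before overwriting local state.
    let tail: Promise<void> = Promise.resolve();
    let pending = 0;

    export function enqueueSave(save: () => Promise<void>): Promise<void> {
      pending++;
      tail = tail
        .then(save)
        .catch((err) => {
          console.error("save failed", err); // retry/backoff logic goes here
        })
        .finally(() => {
          pending--;
        });
      return tail;
    }

    export const hasUnsavedChanges = (): boolean => pending > 0;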
Safe client-side SQL sounds like what the author wants.
* Consistency when fetching data: define a UI that matches a particular SQL view.
* Request batching: It's probably not hard to send deltas on a large view with server-side support for sessions.
* Avoiding network waterfalls: basically the definition of a SQL view, with some session management to de-duplicate traffic when updating the view. A lot of clients support batching queries.
* UX when fetching data: this seems entirely like an oversight of some UI packages, but when all the other points line up there are very distinct points in the application flow when the UI will know a query and potential update is in progress.
* Colocation: the definition of a view is necessarily local.
* File size: SQL is pretty lightweight on the schema details in results
* Type safety: SQL doesn't support dependent types, but e.g. postgres is pretty type-y
* Versioning: no explicit support in SQL but for changes to the UI a new view can be created. For changes to the data model a compatible change to the view can be committed in the same transaction as the data model change.
* Realtime updates: the glaring hole in the glory of SQL. You'll have to do some user-defined triggers server-side to support this.
* Consistency when updating data: the relational model with constraints enforces this
* Optimistic updates: if the theoretical differential view update described above exists then this is pretty trivial. Compare local and remote deltas when refreshing the view and indicate to the user the expected state that wasn't committed.
* Request queueing: this one seems to be more of a data model problem. If the data model approximates a state machine and queries are on transitions then yes, they need to be serialized. But why not do complex updates all at once with a multi-statement update transaction that commits or fails atomically?
* UX when updating data: presumably a UI tied to a single view is easier to indicate busy-ness for. If an update transaction is in flight, display a spinner. When it fails or commits display an OK or error.
* Durability when updating data: this could almost certainly be improved for most SQL clients. However, idempotence is definitely covered by conditional update transactions.
* Type safety: user defined types if necessary but constraints are where a lot of the type-safety happens in SQL.
* Versioning: backward-compatibility by updating mutable views in the same transaction as data model changes. Forward-compatibility by rendering directly from the view that's returned, not expecting any particular view.
> Realtime updates. Sometimes, you want data to be pushed, not pulled (eg. for things like notifications). But the APIs for pushing and pulling data are often different, both on the client and on the server.
Two and a half months since Google announced the removal of HTTP Push[1]. There'd been a number of threads open (some 5 years old now) asking the Fetch spec to please include some way to allow the page to be notified of Push[2], which would have made HTTP Push super interesting & useful.
After 5 years, we never got anywhere, never got any help from the standards bodies or Google in trying to make Push useful. The topic came up a couple of times when Abort was being worked on, and it seemed like maybe we'd see some progress, but those finer points got drowned out among the noise of the Abort work.
And then Google announced we developers weren't using Push enough, & that they were going to drop it. Without ever having tried to make it useful in the first place.
HTTP/2 seemed to have such promise; there was such elation when it finally got released in 2015. Seeing so much promise go unfulfilled & then get cancelled has been quite dark. Unlike so many of the alternatives we are up to for data fetching, this path was both realtime & still http/resource centric. It still felt like core web architecture, in a way few others do. The unfulfilment of this destiny, this loss, has been jarring, & continues to leave us with these open quandaries of what new, entirely different alternatives we invent, over WebSockets, over SSE, over WebTransport, &c. HTTP (almost) was here.
Edit: there was a deleted comment saying that this kind of use case is not what Push was for, that it is not there to send unprompted data. Technically, Push can only be done in reply to an http request, so there is a factual limitation. Web Push Protocol gets around this by holding a GET open, to allow "unprompted" data. I think a sizable number of engineers at the time were indeed thinking of Push as an optimization, a way to get pages to load faster. But this contingent of folks who saw HTTP/2 Push enabling data-pushing capabilities, who were hungry for it, who counted on its maturation: I think they had reasonable expectations, and that we're in a very weird place on the web, with HTTP technically having the ability to push semi-unprompted, but browsers never delivering that capability that's now built into HTTP.
Using Push was still possible even for so-called unprompted updates! At least it was till Push got removed. The trick is that the server needed some way to tell the browser which resources it had sent: so the server might be pushing resources, then using SSE to notify the page of those updates, for example.
> this path was both realtime & still http/resource centric
There's always long-polling. :) Not quite as efficient as Server Push, but the logical behavior is similar in that the server pushes an HTTP resource at the client.
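The client side of long-polling is a simple loop (the /events?since= endpoint is illustrative):

    async function longPoll(onEvent: (e: unknown) => void): Promise<never> {
      let cursor = 0;
      while (true) {
        try {
          // The server holds this request open until it has events to report.
          const res = await fetch(`/events?since=${cursor}`);
          const { events, next } = await res.json();
          events.forEach(onEvent);
          cursor = next;
        } catch {
          // Network hiccup: back off briefly, then reconnect.
          await new Promise((r) => setTimeout(r, 1000));
        }
      }
    }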
Second paragraph: "The UI should automatically update everywhere the data is used". Again, what does this have to do with any of the above? That is state management, yeah, and you can build proper state management with any HTTP library and any message serialisation format.
Request batching: How would that happen "automatically"? By waiting to fire requests and then batching them?
UX when fetching data: What does that have to do with any of the above? You still have to decide how your UI displays the fact that a piece of data is loading. What do you expect there to be in place? Best thing I could imagine is to have a global operations indicators a la Adobe Lightroom which tells you how many HTTP requests are in flight.
I could go on, but the last paragraphs maybe highlights the lack of understanding the author had: "UI Frameworks (at this point, React has won)". If React had "won" then why would we be having this discussion. React hasn't "won" because it solves one piece of the puzzle: Rendering. For every little other thing you have to incorporate another library or figure out your own solution: Routing, State Management, CRUD / HTTP API, etc. If anything, Ember.js would most closely fit the bill of incorporating most of the things the author seems to care about yet can't articulate clearly.