I was reluctant to use Web Components in any personal project and shooed my coworkers away from using them on commercial projects, because the uptake by browser vendors seemed so lackadaisical. I'm thankful to Mozilla for explaining the reasons why -- it's complicated! No one agrees!
I am loath to polyfill modern browsers, adding completely needless overhead chasing some panacea. At least Chrome and Firefox need to agree on (and be in the process of implementing) a feature for me to consider using and polyfilling it.
The Web Components thing seemed over-engineered and too much work for component developers to gain wide adoption outside of widget toolkits. All that behind-the-scenes offsetTop confusion tells me that the approach Google envisioned is a bad fit for the DOM/CSS/JavaScript hodgepodge cruft-pile that has built up over the years. The fact that there are so many open proposals for the various features should be a signal to architects to run the other way on Web Components.
So, into this vendor mess strides React (and, to a much lesser extent, Angular and Ember, on account of ocean-boiling), which gives us a fancy component model atop standard DOM and JS. And at just the right time -- web apps are far too big this decade to be coded monolithically. I mean, everyone knows that, but what could you do about it before a decent UI component framework appeared?
Being able to compose your app in terms of nested components is the way to build large, high-quality, well-tested web apps, because it lets us reason about parts of the UI rather than the whole. And the DOM performance with React ain't too bad neither. And being able to render the same React component statically on the server and dynamically on the client solves a lot more problems too.
None of the frameworks you have mentioned supports proper sandboxing of component internals. They are leaky [1] abstractions, incompatible with each other. We need at least some basic native shadow DOM support in order to hide implementation details and avoid naming clashes.
Agreed, most libraries and frameworks have nasty leaky abstractions; that is part of the cost of choosing to add those packages to your architecture. And heck, the browser itself is a clown car full of leaky abstractions -- we deal with this problem everywhere.
I think successful open source projects have a pretty good track record of managing leaky abstractions, since they have so many users. jQuery did a good job in this area over time, papering over many of the leaky abstractions in the DOM across browsers. Looking at the bug tracker for React, I think they are doing okay too after a rocky start. Haven't really tracked Angular, but I hear Ember is a pretty well-run project too.
Shadow DOM does sound pretty good, but it's not essential to componentization if you're managing your IDs carefully. Honestly, I don't care whether a framework or the browser manages my component scopes, I just want it taken care of.
I always thought scoped stylesheets would be more important to non-leaky UI component development, but it seems like that is yet another dead-end experiment that didn't catch on. http://caniuse.com/#feat=style-scoped So we fall back to tooling again and get something like Sass to manage and build our monolithic stylesheets.
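The tooling fallback usually looks something like this (a sketch with a made-up `widget` component; the naming convention is one possibility, not a standard): fake the missing scoping by nesting every rule under a component-specific class and letting the preprocessor keep it readable.

```scss
// Hypothetical "widget" component: no real scoping, just a
// component-specific class prefix enforced by convention.
.widget {
  border: 1px solid #ccc;

  .widget__title { font-weight: bold; }
  .widget__body  { padding: 8px; }
}
```

Nothing stops another component from stomping on `.widget__title`; the discipline lives in the build and the style guide, not the platform.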
ID prefixing won't prevent component internals from leaking to event listeners, querySelector results, TreeWalkers/NodeIterators, innerHTML, and many other APIs.
Native shadow DOM allows me to write components with strict encapsulation, on par with built-ins such as <button> or <input>; no framework comes even close to that.
Styles defined inside shadow DOM are always scoped to the local shadow tree, so there is really no need for style[scoped]. Chrome and Opera (and maybe Firefox) support CSS encapsulation inside shadow DOM, making CSS preprocessors less necessary.
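A minimal sketch of both points, using the v0 API Chrome shipped (`createShadowRoot`; the spec the vendors are converging on spells it `attachShadow`), with made-up element and id names:

```html
<my-widget></my-widget>
<p>outside text</p>
<script>
  var host = document.querySelector('my-widget');
  var root = host.createShadowRoot();
  root.innerHTML =
    // This style rule only applies inside the shadow tree;
    // the outside <p> stays unstyled.
    '<style>p { color: red; }</style>' +
    '<p id="label">internal</p>';

  // And the internals do not leak out to page-level queries:
  document.querySelector('#label'); // null
</script>
```

Neither ID discipline nor a framework gives you that second property; `querySelector`, tree walkers, and `innerHTML` on the host all stop at the shadow boundary.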
Shadow DOM does sound pretty cool; I wish its design allowed it to fit in better with the existing DOM.
I think React components give you pretty good isolation. You don't spend any time querying by selector or setting innerHTML; you just update your model, it re-renders the component tree, and a diff of the DOM yields the mutations it will apply on your behalf. Event binding is similarly abstracted. It's a different paradigm from what you're describing, and I think it works well for a lot more use cases than just "componentize the DOM".
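The paradigm can be sketched in a few lines of plain JavaScript (no React here, and grossly simplified versus its real reconciler; all names are made up): render a tree from the model, diff old against new, and only the resulting mutations would touch the real DOM.

```javascript
// Toy virtual tree: render() is a pure function of the model.
function render(model) {
  return { tag: 'ul', children: model.items.map(function (item) {
    return { tag: 'li', text: item };
  }) };
}

// Diff two trees and collect the mutations that *would* be
// applied to the real DOM.
function diff(oldNode, newNode, path, ops) {
  ops = ops || [];
  path = path || 'root';
  if (!oldNode) { ops.push({ op: 'insert', path: path, node: newNode }); return ops; }
  if (!newNode) { ops.push({ op: 'remove', path: path }); return ops; }
  if (oldNode.text !== newNode.text) {
    ops.push({ op: 'setText', path: path, text: newNode.text });
  }
  var len = Math.max((oldNode.children || []).length,
                     (newNode.children || []).length);
  for (var i = 0; i < len; i++) {
    diff((oldNode.children || [])[i], (newNode.children || [])[i],
         path + '.' + i, ops);
  }
  return ops;
}

var before = render({ items: ['a', 'b'] });
var after  = render({ items: ['a', 'c', 'd'] });
var ops = diff(before, after);
// ops: one setText ('b' -> 'c') and one insert ('d'); 'a' is untouched.
```

The component author never enumerates those mutations; they fall out of comparing two declarative descriptions, which is why selector queries and manual DOM surgery largely disappear.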
I'd love to be able to combine shadow DOM and react somehow (assuming wide browser support emerges). I bet a lot of performance optimizations could be derived in the browser from native Shadow DOM.
I'm really surprised to see Apple's name on this list. Their behavior over the last few years has led me to conclude that they are no longer interested in seeing the Web flourish now that they have market power. Glad to see progress (even if it's obscenely slow).
I often wonder to what degree any given web feature is influenced from higher up in vendors' organizations. It would be fascinating to know how high up these decisions go.
It's nice to finally hear about Web Components again. I remember when Polymer started, you heard everywhere that Web Components were the future, and then they just dropped off the radar.
Good to know that all the major vendors are continuing to develop them though.
Web Components were a Google effort, and little negotiation happened with other browser vendors before shipping. Like most negotiations in life, parties that don't feel involved lack enthusiasm and tend not to agree.
Web Components were an ambitious proposal. Initial APIs were high-level and complex to implement (albeit for good reasons), which only added to contention and disagreement between vendors.
Google pushed forward, they sought feedback, gained community buy-in; but in hindsight, before other vendors shipped, usability was blocked.
> but in hindsight, before other vendors shipped, usability was blocked.
Haven't we learned this lesson before?
I think it's more likely that Google didn't want other vendors' interference, to avoid design by committee thwarting whatever they were trying to achieve.
It seems like the way to do it is a bit of both camps. Someone has just got to make it, deploy it, and use it. The critical bit is this: everyone (including the author) needs to learn from it, decide how to improve it, what to change, and what to scrap. For that is where the specification process starts for all involved.
Refer to how SPDY was developed. Lessons were learned because stuff got done and it was out there. Then everybody came back to the table a little wiser. On the whole at least that's how it looked to me.
It's like the difference between TypeScript and ES6. The goals are intentionally aligned, and as the underlying platform adds support for the constructs that power TypeScript, it can continue to delegate more and more to the underlying platform.
Knockout custom elements, for example, are extremely similar to web components. As support for web components proliferates, Knockout can keep the same framework API but use web components under the covers.
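For instance, a Knockout component registered and then used like a custom element (the `like-widget` name, template, and view model here are illustrative, not from any real app):

```html
<like-widget params="chosen: chosenValue"></like-widget>
<script>
  ko.components.register('like-widget', {
    viewModel: function (params) {
      // Expose the bound observable to the template.
      this.chosen = params.chosen;
    },
    template:
      '<button data-bind="click: function () { chosen(true) }">Like</button>'
  });
  ko.applyBindings({ chosenValue: ko.observable(null) });
</script>
```

The page-level API is already "drop in a tag, pass params"; swapping the implementation underneath for native custom elements wouldn't have to change that surface.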
I look forward to using Web Components myself, regardless of the method we end up using to create them.
Just like jQuery, which has become "useless" in many cases but took us a long way to where we are today. Sure, we'll need to worry about SEO, etc., but Googlebot (and hopefully others) is already quite capable of executing JavaScript.
To me the Riot.js web component syntax seems very natural to write - more so than JSX. Here's an example:
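Something along these lines (a made-up `todo` tag sketched in Riot 2's syntax; Riot compiles the tag file into a plain JS component):

```html
<todo>
  <h3>{ opts.title }</h3>
  <ul>
    <li each={ item in items }>{ item }</li>
  </ul>

  this.items = ['buy milk', 'ship it']
</todo>
```

Markup, expressions, and the component's logic live in one tag file, with no separate createClass/JSX layer between you and the HTML.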
React has some obvious advantages when used with TypeScript: with its new TSX support you get type-safe templates, with static analysis and errors highlighted in the IDE. Web Components are not even trying to be type safe. However, they shouldn't be seen as an either/or choice; I don't see why they both couldn't live side by side. Who knows, maybe React will merge with Web Components in the future?
As I understand, the mental models are different enough that it wouldn't make much sense to merge them. You could certainly bundle a React app as a Web Component, but React components only make sense as React components. Because React treats the DOM as effectively write-only and optimizes for performance, I doubt the React core team would ever want to change the internals to make use of Web Components.