Bravo to the React team for the work during the last few months! It's truly amazing, especially for new users.
For some older projects I found a hand-built Flux helper[0] of ~220 lines worked great and didn't require any toolchain modifications. The switching cost to some of the ES6 stuff was too high in that case, and the app is also potentially leaner if you don't need the advanced features of Redux and ES6.
1. good: having a sideeffect-free, serializable application state. That concept can also be used in mobile apps for easy pause/resume.
2. good: being able to render that state from one parent, because you don't need to manage two separate statemachines (the UI and the business logic) anymore.
3. bad: shoe-horning all state transitions into functions/reducers. FP is awesome, but sometimes imperative constructs are more appropriate, especially in a mostly imperative language. Use FP where it feels natural, not everywhere.
4. bad: shoe-horning everything into immutable data-structures.
Just change the state. If you want to prevent errors, freeze and seal your state. If you want time-travelling debugging, copy it via serialization. No need to slow down production code by copying and throwing away data all the time.
5. bad: using Immutable.js, thereby slowing your code down by two or three orders of magnitude, and, I suspect, losing many GC and JIT optimizations by throwing out differently-typed objects and pressing everything into the generic map structures that Immutable.js uses internally.
I wrote multiple sites/apps using Immutable and Redux, but I'm back to pure React and lodash. The code is more concise, simpler, and way faster, even though componentDidUpdate has to use lodash's _.isEqual instead of a simple comparison.
Unless something has changed very recently, you don't have to use Immutable.js and redux together. Redux advises against mutating your state, but doesn't prescribe a way to do that.
You don't need to use Immutable.js (I never do) but making your state immutable/functional is what makes the magic work. You don't have to worry about observing data or manually handling data updates -- everything just works. Compare this to MobX where there's an explicit pubsub model. In my opinion that works best for small apps, but falls apart in large ones (where Redux shines).
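As a sketch of that last point: a plain-object reducer keeps state effectively immutable with spreads alone, no Immutable.js required (the action types here are made up):

```javascript
// A Redux-style reducer over plain objects. Returning new objects
// (instead of mutating) is what keeps reference equality meaningful,
// so shouldComponentUpdate-style checks stay cheap.
const initialState = { todos: [], filter: "all" };

function reducer(state = initialState, action) {
  switch (action.type) {
    case "ADD_TODO":
      return { ...state, todos: [...state.todos, action.todo] };
    case "SET_FILTER":
      return { ...state, filter: action.filter };
    default:
      return state; // unchanged actions return the same reference
  }
}
```

Because unknown actions return the very same object, a consumer can detect "nothing changed" with a single `===` comparison.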
> 2. good: being able to render that state from one parent, because you don't need to manage two separate statemachines (the UI and the business logic) anymore.
This is one of the areas I found to be most disappointing in practice about using a data store built with an immutable state tree and then using self-contained transactions to update it, which is similar to the Redux+Immutable model.
One of my current projects is a somewhat large web application. It has relatively complicated state to maintain for a web app, with very heterogeneous data and lots of relationships and constraints to enforce between data points.
In the early days, using React for rendering was quite convenient, and using Immutable then made writing shouldComponentUpdate quick and reliable, which was necessary almost immediately to achieve acceptable performance with React. Essentially, you’re using references within the immutable data as a proxy for a “dirty” flag on each part of your state.
However, it wasn’t long before the presentation code started to depend on derived state that was expensive to recompute: temporary tables, automatic diagram layouts, etc. This is where I find the immutable data structures really lose out compared to some sort of lazy observer architecture, because you are back to having a synchronisation problem between your derived view state and your underlying data model.
You can set up your derived view state as immutable values as well, and thus keep the reasonably simple shouldComponentUpdate mechanics, but you still need to either push changes from the underlying data model or pull them from the view. In the former case, you’ve effectively given up the declarative rendering that makes React more attractive than actively updating the DOM in the first place. In the latter case, you’ve created cache invalidation problems that undermine the benefits of having a cleanly updated, immutability-based data store in the first place.
There are several other advantages that have proved to be useful with this particular combination of tools, but they do come with some nasty performance and scalability implications, and in particular, they don’t generalise and compose cleanly in the way that a good observer-based design would. Worse, I suspect the difficulty of managing derived view state efficiently is inherent to this sort of architecture, because it seems almost inevitable in any system with data complicated enough that using a separate data store and declarative view rendering is worth the performance overheads in the first place.
My main hesitation with Redux is how awkward it is to insert async calls in the chain of events. Things like redux-thunk and redux-saga are impressive hacks but feel like a lot of gymnastics to do something that pretty much every React app needs to do.
I dislike redux-thunk, because it breaks serialisability. I don't know redux-saga, but this is my way of doing it:
Create a data-structure like "running requests" in your state and put a serializable object in, could be complex or just a string "GET_USER_DATA".
Then write a simple "NetworkManager". React reconciles the UI to match the state, the NetworkManager does the same for your "running requests": make the actual running requests match those in the state.
In some projects I even (ab)used React to do that: add a component that renders a <div/> and manage the XHR in componentDidUpdate etc.
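The reconciliation idea above might be sketched like this (a toy, with `start`/`abort` injected so nothing actually hits the network):

```javascript
// Sketch of the "NetworkManager" idea: make the set of actually-running
// requests match the serializable request descriptors held in app state,
// the same way React makes the DOM match the rendered element tree.
function createNetworkManager(start, abort) {
  const running = new Map(); // request key -> handle returned by start()

  return function reconcile(desiredKeys) {
    const desired = new Set(desiredKeys);
    // Abort requests that are no longer present in the state.
    for (const [key, handle] of running) {
      if (!desired.has(key)) {
        abort(handle);
        running.delete(key);
      }
    }
    // Start requests that appear in the state but aren't running yet.
    for (const key of desired) {
      if (!running.has(key)) running.set(key, start(key));
    }
  };
}
```

Calling `reconcile` after every state change keeps the side effects declarative: the state says *what* should be in flight, and the manager works out *how* to get there.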
Part of these notes, Fiber [0], reminds me of a half-joking "corollary" to Greenspun's Tenth Rule (credit @shriramkmurthi):
Any sufficiently complicated JavaScript program contains an ad hoc,
informally-specified, bug-ridden, slow implementation of delimited
continuations.
Control over continuations and the stack is our main compiler and runtime engineering hurdle in Pyret. Projects like Doppio [1], WeScheme [2] and GopherJS [3] go through staggering amounts of overhead and effort to get pausable, resumable, and stoppable computation in the browser environment.
I'm excited to see how this develops in React with fiber. It's much more application-specific, but it's the same underlying problem.
What about generators? I've built coroutines based on generators, and CSP has been implemented in JS using generators. I'm not sure why those projects went through "staggering amounts of overhead" to get what we get for free with generators. Please educate.
The main difference is that all of the use cases I mentioned can't distinguish between calls/functions that may pause and calls that won't (it's just the semantics of those languages that arbitrary calls might need to pause). So to use generators as a compilation target, every function has to be a generator, and every call a generator instantiation followed by yield*.
I actually don't know if that qualifies as "staggering". Your sibling comment has some truth; I can only speak generally about GopherJS and Doppio, because I know them less intimately, but I know that Pyret and Whalesong were definitely started before generators had widespread adoption. Compiling to generators, rather than to the handwritten stack unwinding we have, is on my list of things to try and measure.
Maybe "significant" overhead would be more obviously true than "staggering," since I don't have clear numbers to back it up.
Self-reply for posterity. Here's a sketched-out comparison of generators vs. the manual strategy we use in Pyret (this isn't _exactly_ what Pyret generated code looks like, but it's close):
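Since the original sketch isn't reproduced here, a minimal reconstruction of that kind of comparison could look like the following (not actual Pyret output; the pause points and the PAUSE sentinel are illustrative):

```javascript
// Strategy 1: compile every function to a generator; every call becomes
// a generator instantiation plus yield*, and pausing is just a yield.
function* fibGen(n) {
  yield; // a potential pause point at every "call"
  if (n < 2) return n;
  return (yield* fibGen(n - 1)) + (yield* fibGen(n - 2));
}

function runGen(gen) {
  let r = gen.next();
  while (!r.done) r = gen.next(); // a scheduler could stop here at any pause
  return r.value;
}

// Strategy 2: manual unwinding; every call site checks the returned value
// for a "stack unwind" sentinel (analogous to checking an
// IteratorResult-style tag on each return).
const PAUSE = Symbol("pause");
let fuel = Infinity;

function fibManual(n) {
  if (--fuel <= 0) return PAUSE; // simplified: real code reifies the stack
  if (n < 2) return n;
  const a = fibManual(n - 1);
  if (a === PAUSE) return PAUSE;
  const b = fibManual(n - 2);
  if (b === PAUSE) return PAUSE;
  return a + b;
}
```

The per-call overhead in strategy 2 (one decrement and two sentinel checks per call) is exactly the cost being weighed against the engine-level cost of generator frames in strategy 1.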
> So to use generators as a compilation target, every function has to be a generator
Sure, but in those cases the functions are transpiled to JS so it doesn't really matter, not like they're coded by hand, right...? I think I understand what you mean.
I don't know what you mean by "it doesn't really matter." Surely there's a cost to using generators instead of regular function calls and returns, right? That's where the overhead would come from, because generators aren't free.
Right now, Pyret and GopherJS (the last time I checked in GopherJS's case) basically manually encode the `IteratorResult` type, and check for "stack unwind" vs "regular result" when each function call returns, if it might pause.
The first question for generators is if they are less overhead than this manual process. There's a bunch of other details, too, but this is the main one. And generators are certainly going to cause _some_ overhead over regular calls and returns, just like the manual checking of return values has overhead.
My original comment was about the lack of something like delimited continuations in JS, which would allow saving portions of the stack while intentionally minimizing the overhead of regular function calls. That's a well-fitting language-level solution to this issue.
No problem. Thanks for asking bluntly about generators. It made me write some more experiments using them.
Right now, the main thing that I think stands in the way of them being a good solution for Pyret is that they have limited stack depth – about 7000 frames on Chrome Canary, for instance. One thing we get out of our strategy, which reifies stack frames onto the heap when pausing, is that we can simulate a much deeper stack.
See http://imgur.com/a/GaBg2 (I don't need to be able to do sum of ten million, but 10,000 would be nice! This is just so a non-tail-recursive map works over reasonable-sized lists; we'd rather not have the concept of tail recursion as a curricular dependency for working with that size of data.)
Another dumb question: doesn't every recursive function have an iterative version? and in the case where you're transpiling to JS couldn't 'map' (whatever the syntax in Pyret) and other functional primitives be converted to imperative code? Generator functions are no different than regular functions when it comes to stack depth. I think that depends on the amount of memory you have, so different from machine to machine. I'm learning by asking dumb questions... :)
> doesn't every recursive function have an iterative version
True in the abstract, yes. But it's a sophisticated compiler indeed that turns something like a recursive binary tree traversal into a loop (it would need to synthesize the stack worklist).
In practice, it's easy to do this for tail recursion (and mutual tail recursion, with a little more sophistication). You can get slightly fancier with "tail recursion modulo cons," which is a little more clever and handles map. Beyond that, it's pretty gnarly to do a good transformation, because recursive code is implicitly using the stack in interesting ways.
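For example, a recursive binary tree sum vs. a version with a synthesized stack worklist:

```javascript
// Recursive form: simple, but each call consumes a real stack frame,
// so a deep or unbalanced tree can overflow the stack.
function sumTree(node) {
  if (node === null) return 0;
  return node.value + sumTree(node.left) + sumTree(node.right);
}

// Iterative form: the compiler (or programmer) must synthesize the
// worklist that the call stack was providing implicitly.
function sumTreeIter(root) {
  let total = 0;
  const stack = [root];
  while (stack.length > 0) {
    const node = stack.pop();
    if (node === null) continue;
    total += node.value;
    stack.push(node.left, node.right);
  }
  return total;
}
```

The transformation is mechanical here because addition is associative and order-insensitive; for traversals where intermediate results depend on the order frames return, synthesizing the worklist gets much hairier.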
> couldn't... functional primitives be converted to imperative code
Indeed, and we do write those in pure JS with carefully-crafted while loops to make those primitives more efficient. But if students are learning to write their own map, or another functional combinator on lists, those need to work, too, and will be implemented recursively by them.
> Generator functions are no different than regular functions when it comes to stack depth.
Yeah, stack depth in general is annoyingly low on modern browsers, IMO, so this isn't just a problem with generators. It's also unpredictable (http://stackoverflow.com/a/28730491/2718315). So we're working around the normal stack limit already. I was sort of hoping that when a generator's continuation was captured, it would stay heap-allocated and not "count" towards stack space when restarted, but that's not the case.
Interesting. Do you also address scheduling? I.e., giving some computations priority over others? And do you address the implicit transfer of those priorities based on a dependency graph?
My conjecture: Scheduling with priority is pretty easy across these systems. The dependency graph transfer would be outside the scope of what they already tackle, and require new engineering.
In Pyret, I prototyped virtual threads at one point, and each thread had the same amount of "fuel" before yielding (at the top of every compiled function in Pyret, there's a decrement to a "fuel" counter, and when it reaches zero, the stack is unwound). That could easily be configured on each start/restart of a thread to provide different amounts of fuel, or to re-order the restarts based on typical thread-scheduling policies.
Whalesong does the same thing with fuel, so I imagine it could be extended similarly.
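A toy version of the fuel mechanism might look like this (the real compilers reify live stack frames onto the heap; here the computation keeps its own progress in a heap object to stand in for that):

```javascript
// Toy sketch of fuel-based yielding: each "compiled" step decrements a
// fuel counter; at zero it throws, and the trampoline refuels and
// resumes. Progress lives on the heap, so no work is lost.
class OutOfFuel extends Error {}
let fuel = 0;

function checkFuel() {
  if (--fuel <= 0) throw new OutOfFuel();
}

// A computation whose "stack frame" is a heap object, so it can be
// resumed after an OutOfFuel unwind.
function makeSumTo(n) {
  const frame = { i: 1, acc: 0 }; // heap-allocated stand-in for a frame
  return function resume() {
    while (frame.i <= n) {
      checkFuel();
      frame.acc += frame.i;
      frame.i++;
    }
    return frame.acc;
  };
}

function runThread(resume, fuelPerSlice) {
  let slices = 0;
  for (;;) {
    fuel = fuelPerSlice;
    slices++;
    try {
      return { result: resume(), slices };
    } catch (e) {
      if (!(e instanceof OutOfFuel)) throw e;
      // A scheduler could run other threads here, or vary fuelPerSlice
      // per thread to implement priorities.
    }
  }
}
```

Varying `fuelPerSlice` per thread, or reordering which thread gets resumed in the catch block, is exactly where the priority scheduling discussed above would slot in.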
I don't know how Doppio and GopherJS do scheduling in detail, but here's where I'd start looking:
I'm a Rails developer. I work on pretty standard web apps for a living. Some more complicated than others, but still, web apps.
I still haven't found a person that was able to give me a concrete reason why someone like me should invest time and resources into learning React and using it in my projects.
This is an honest question, I'm not trying to be sarcastic. It's just that there are so many frameworks that come out that it's very hard to know where to invest your time.
I understand your dilemma very well. I have experience building server-rendered web apps, web apps enhanced with jQuery, Angular apps, and React apps. I resisted React for a while, but I'm glad I finally tried it out. Compared with jQuery or purely server-rendered apps, React elevates what you can accomplish within a short time frame. A complex web form with async validation, client side calculations, and multiple branches can be a real pain to code as a server-rendered web app or using jQuery, but React uses a different abstraction that makes a lot of code easier to reason about.
React achieves that with the virtual DOM concept. On the initial render, the virtual DOM concept is nearly equivalent to the kind of HTML templating you're familiar with; you simply render a document with some substitutions. It's what comes next that's interesting: when you need to change something on the page (for example, when you need to display validation feedback for a form field), you don't add code that finds the DOM element and changes it. Instead, with a virtual DOM, you re-render your components with new data, and the virtual DOM figures out what changed and applies the changes. Event handling is much simpler.
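The essence of that render-and-diff idea can be shown with a toy diff over plain element trees (a drastic simplification of what React DOM actually does; the patch format is invented):

```javascript
// Toy element shape: { type, props, children }. diff() returns a list
// of patch operations instead of touching the DOM directly, which is
// the essence of "re-render with new data and let the library figure
// out what changed".
function diff(prev, next, path = "root") {
  if (prev === next) return [];
  if (prev == null) return [{ op: "create", path, node: next }];
  if (next == null) return [{ op: "remove", path }];
  if (prev.type !== next.type) return [{ op: "replace", path, node: next }];
  const patches = [];
  // Naive prop comparison by serialization, fine for a sketch.
  if (JSON.stringify(prev.props) !== JSON.stringify(next.props)) {
    patches.push({ op: "setProps", path, props: next.props });
  }
  const len = Math.max(prev.children.length, next.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(prev.children[i], next.children[i], path + "." + i));
  }
  return patches;
}
```

Rendering twice and applying only the resulting patches is what lets the declarative "re-render everything" style coexist with minimal DOM mutation.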
React has only a few concepts; once you get it, there's only occasionally a need to read the React documentation. Angular is also powerful, but Angular has many concepts and I found myself referring to the documentation for every little thing. That may have changed with Angular 2.
EDIT: I should also point out that React adheres to the idea of putting HTML tags in code, rather than putting code in HTML tags. For simple templates, it doesn't really make a difference, but for some of the whopper HTML templates I've written before, putting HTML in code (aka JSX) would have been a major benefit. There are plugins for editors like Sublime Text that make JSX smooth.
React only needs a render method, not a render method and an update method for every state transition. Without the virtual dom this would destroy performance, so it is a rather key detail in understanding React's appeal.
FWIW we are trying to avoid "virtual DOM" in the new docs. The thing you create in render() and that describes the tree has been called "a React element" for many versions by now. "Virtual DOM" was more of a marketing term, and I find it misleading because it doesn't make sense with e.g. React Native, and also makes React seem like a performance trick. React is not a performance trick. That elements get compared by the React DOM renderer is an implementation detail. React is an abstraction for dividing UI into predictable pieces, not a performance optimization.
If I wanted to do what React does, but without React, I might start like this:
document.body.innerHTML = myComponent.render();
That way, I can write components similar to React components and it can render very quickly. However, this strategy would work only on the initial render, because this naive strategy would destroy implicit DOM state like scroll positions, focus state, and cursor selections. The React DOM renderer (formerly known as the virtual DOM) lets me apply this rendering strategy without ruining implicit state.
The DOM renderer does not give me components or performance, since I can already achieve those things with raw JavaScript. What it gives me is the ability to use a simple, clean rendering strategy without destroying the DOM state. That's why the React DOM renderer is not just an implementation detail.
For me, once I started declaring my UI as a series of components that simply render what they are provided (the common `v = f(d)` expression), it was hard to go back to the Backbone/jQuery way of mutating things all over the place and manually updating DOM nodes to try and reflect the current app state.
For view/UI, yes (if you can call React a framework). I personally do not use jQuery anymore as there is not much need to deal with the DOM directly when using React and when there is, I just use vanilla JS.
Since React only deals with UI, I use it alongside a bunch of other great libraries such as Redux (for state management).
The backend where I work (Airbnb) runs on Rails but most of my projects use node on the backend since they usually just expose a REST API.
Since it only cares about UI, React is obviously not opinionated about your backend/server environment unless you're trying to do server rendering (then you need something in your stack capable of executing JS).
I've spent some time looking at the 'needs a JS runtime' part and am prototyping an isomorphic framework in Rust which renders with natively compiled code on the server and generated JS on the client, both generated from the same source template and logic. The JS can also be cached and served statically.
The server-rendered page is then instanced in the browser and differential rendering can be performed. I have been interested in doing this concept using React but have decided to prototype with a simpler implementation called Incremental DOM.
Nice. I was always wondering why no one was trying to do isomorphic stuff in other languages, since every language and their mother seems to have a compile-to-JS feature.
consider a complex submission form with updating state
eg you want to show uploaded images while editing parts of it etc
the realistic case:
if you have a Rails app and do "interactive stuff" on the frontend, you end up with some JS frontend framework (say, Backbone), or even just jQuery served as EJS. You'll have some sort of dynamic updates, maybe even frontend duplication of templates. You'll add classes in places and interact with parts of the page; other people will add more features; some parts of a feature get removed, or get moved to another site; to keep it all working you need a certain setup on the page, etc.
pretty soon this gets complex - this is where react shines
reasoning about the frontend is easy because you think in components and in state. not in ui and the interactivity with it over time.
tl;dr: react can help for either a by design complex part of your frontend or your frontend becomes complex over time
it does not help for fast prototyping or small apps imo
"it does not help for fast prototyping or small apps imo"
I can understand this sentiment: React really is meant for serious apps. There is a decent learning curve. There is small overhead. And though it now requires just a tiny bit of setup thanks to create-react-app, it might easily require a very significant change to workflow habits. (And the fact that CSS is still held by some as an ideal way of maintaining UIs, rather than a temporary requirement given what browsers can consume in 2016, is a testament to how hard habits can be to change. But I digress!) If you're not already knee-deep in it, React is almost certainly overkill for a marketing page or a mostly static form-y interface for some Rails backend.
But just to give an alternative opinion: I think React is great for prototyping. The big reason is that React allows you to easily build and use custom abstractions. When you get down to it, that's sorta the whole point of the library.
If you need to do something (maps, a text editor, a flipping clock), there's a decent chance a catalog like js.coach will already have a React component for it, just a single-line import away. Old jQuery "imports" typically required more setup, which then became fragile once you wanted to make a change somewhere else (like, you know, renaming an id).
And over the years, like many others I'm sure, I've evolved my own toolkit which happens to be perfect for prototyping. I can vertically align <Rows /> and <Columns /> just by passing "center". I can add a modal simply by typing <Modal isOpen={isProfileVisible}><Profile /></Modal> right next to the button which opens it. I can add data persistence and real-time updating mostly just by typing <FirebaseWrapper /> and tweaking some config for the new app. I could go on and on. And these are just little personal things which make life a little easier. There's also projects like react-flip-move which smoothly animates transitions in a line of code, react-sortable-hoc which gives, in a line of code, a sortable with pretty good touch support and essentially infinite lists......
Obviously in principle you can get these benefits in other frameworks. But React is one place where lots and lots of people are actually contributing to the ecosystem.
I'm still somewhat scarred from the time I invested a ton of time into Angular.
Not sure you would be able to answer this, but is React something that works well with Rails out of the box? For instance, I found that Angular (when I was learning it) required a fair bit of shoehorning to get it to work well with rails.
Angular and React are like black and white when it comes to learning time. Simply because:
- Angular is a full framework including html templating, directives, components, controllers, services, router, xhr abstractions ($http and $resource), dependency injection, two-way data binding by default, one-way data binding if you want (to fix performance issues) and the list can continue. Plus, you need to set and learn a style guide [1] and some good practices because, if you have 2 developers, none of them will code in Angular in the same way.
=> it took me weeks to learn, and months to master.
- React is just a library to build views. You code 90% of your time in basic JS and 10% in React APIs (basically: state, props, lifecycle methods, and that's it).
=> it took me 1 hour to learn, and 1 week to master.
If you want to learn React, you just need to read the official tutorial [2].
This is pretty misleading as a comparison: yes, you can learn React itself very quickly, but the ecosystem around it will probably bring you much closer to Angular's learning time, and that's without counting the AngularJS documentation, which is top notch.
I prefer React to AngularJS but I don't think it's fair to tell half the story when comparing them.
I'm working on an app for work that's a Rails API backend with a React front end. This stack is a pleasure to work with, once you get past the initial bits of setup.
With React I use Redux for state and the ES7 async/await bits. Productivity is high.
I was in quite the same situation, Angular 1.x was quite hard to grok at first. But React just "clicked" instantly.
For integration with Rails, I found https://github.com/reactjs/react-rails to be quite useful. It integrates the Babel transformer for JSX into the asset pipeline and brings a couple of helper functions, so no additional setup needed.
You start writing your components, "mount" them via the helpers in standard .erb templates (or even without the template directly from a controller action), done.
React in its basic form is pretty simple, you'll grok it in a couple sessions of playing with it. And since it's 100% view, you can fit it over pretty much anything.
This is assuming you're prepared to do rendering in the browser. If you want to do server-side rendering, I think you'll be doing more shoehorning :)
When you separate the web app from the server and build it as a stand-alone React codebase, you can create applications that run in the browser, iOS, Android (via Cordova / reapp) and cross-platform desktop operating systems (via Electron).
Now wait a minute, you're going to say, I can do all of that with my server-side web application. And faster too. Sure, but does your application work when the user is offline? Does it have access to the client runtime's local storage for data persistence? Can it access client hardware like the camera, GPS, and accelerometer? Can it be distributed through Google Play and the Apple App Store? Can it be installed as a desktop application?
How do you use the same codebase across web, mobile, desktop when your app is going to access camera/gps/accelerometer which aren't available on desktop?
There are some impressive numbers (from google I think) about conversion/retention/bounce rates in relation to page load times. I believe it was about a 50% reduction for every extra second.
Page loads are mostly download (bandwidth-limited) and layout. For many actions, you can drastically reduce data transfers if only the data that actually changes is transferred (no markup/images/content already loaded). Similarly, only a small part of the layout may change. That can result in anything from "doesn't really matter" (a newspaper) to "this project wouldn't make any sense without it" (Google Docs).
Given that SPAs tend to have payloads in the MBs, high conversion rates must be based on the end user having a very fast broadband connection. They probably wouldn't stick around for the initial page load otherwise.
Modern browsers are very good at caching resources. For server-side apps the browser is often just downloading the HTML for the page and nothing else. Compare that with downloading JSON from the server and applying changes to the virtual DOM. The difference in speed is negligible and is, IMO, oversold.
Thanks for asking this question. I'm also a Rails dev, pretty much in the same situation. Some people are mentioning that if you want to do some server side rendering then things can get a bit more hairy.
Well I don't want my backend app to be just an API and the frontend to be a client. I'm not trying to argue whether or not that is the correct way to build web apps, but I don't want to do all the rendering on the frontend.
The reason is that I think this would make smaller projects a good deal more complex. Maybe you can convince me I'm wrong. Again, I can imagine that if you are a big company this might be the way to go. But for building small to medium web apps I think Rails (and the Rails way of building apps) works well. What I'm trying to figure out is if I can incorporate React into my toolbelt and have it work well with standard rails apps.
It really depends on what kind of apps you are building. If it's a CRUD application that can be created without much heavy JavaScript lifting, then you'll probably be fine with vanilla JS. However, once your front-end application code gets larger and more complex, something like React will scale very well.
I'm a Rails developer too. I always use Turbolinks with jQuery for everything. If the app requires both mobile and web, I'll just build the REST API in Rails and make the client side a separate app. This way you don't mix ugly JS files with Rails files.
What you need is Vue.js. Other frameworks are a waste of time. Angular 2 is complex and slow. React has its own drama (JSX, React Router).
I have a question, lack of dynamic scope in render functions is causing me major issues in making highly dynamic and complicated UIs in ClojureScript. I understand that Javascript doesn't have proper dynamic scope so you guys probably weren't thinking about it, but I also see you guys moving farther and farther away, there's the whole Context hacks and now the web worker call stack serialization thing. Are there technical reasons that this can't be made to work with dynamic scope? I think you lose a lot of stuff by, well, not being actual function composition, only pretending to be.
I don't know much Clojure, but it sounds like this is exactly the use case that context is meant for.
If you were to use dynamic scope, how would the variables be restored if one of the descendants updated via setState? It doesn't seem to me like it would work.
Also, not sure what "context hacks" you mean or what "web worker call stack serialization" is.
I think dynamic scope can work with setState by allocating a new closure that closes over the dynamic vars, aliasing them into lexical scope of the render function.
The problem with React Context is that it bypasses the general solution offered at language layer for a react-specific solution, essentially breaking any code written without prior knowledge of React.
Can you say what changes would be required in React for this to work? I assume that you are asking for parent components to be on the stack when child components render, but that is simply not how React works – you return a description of what you want to render then React calls into the children. How could the parent be on the stack?
If you have a concrete suggestion for something we could change in React I'd be happy to chat about it. I'd love to support CLJS better.
I'm wondering about something. Let's say I have a listbox with N elements, and one element is appended or changes state. Will the reconciler have to perform O(N) work for these operations?
If elements are added one by one, will the reconciler have performed O(N^2) operations by the time the last element was added?
1. If a component changes state and its parent doesn't, then the other children aren't considered so it would be O(1). If the parent rerenders, then yes, we would consider each child and it would take O(n).
2. Yes, it would. You would really have to go out of your way to make that happen, though. React has a feature for batching state updates together to prevent unnecessary rerenders if all of the updates occur together. Beyond that, it's rarely an issue.
If you have a particularly long list you can use a tool like react-virtualized (https://github.com/bvaughn/react-virtualized) so that only the onscreen rows are rendered at all.
> You would really have to go out of your way to make that happen, though. React has a feature for batching state updates together to prevent unnecessary rerenders if all of the updates occur together.
These batched state updates only happen "within React's walls", right?
So for example, say you had an RxJS observable that emitted a list of items to display. When you first subscribe, it emits, say, ten items, one by one, but all in the same browser tick. It may emit items later in the future as well.
In this case, each of the initial emitted items triggers a full rerender and you get the O(N^2) behavior, correct? My point here isn't that this is the end of the world, but just that it isn't that crazy to say that it might happen.
Yes, you're generally correct. If you can call into ReactDOM.unstable_batchedUpdates at the top of the stack, that's another approach. We might be able to make that the default in the future so synchronous calls are always grouped even if React isn't on the stack.
We generally see the most success when you update your data wholesale with the latest copy from your stores, but your RxJS example is a good one.
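What batching buys can be sketched in plain JS (this is not React's implementation; `createStore` and `batchedUpdates` here are made-up stand-ins):

```javascript
// Sketch of the idea behind batched updates: while a batch is open,
// state updates are queued and a single rerender runs at the end,
// instead of one rerender per setState call.
function createStore(render) {
  let state = {};
  let batching = false;
  let dirty = false;

  function flush() {
    dirty = false;
    render(state);
  }

  return {
    setState(partial) {
      state = { ...state, ...partial };
      if (batching) { dirty = true; } else { flush(); }
    },
    batchedUpdates(fn) {
      batching = true;
      try { fn(); } finally {
        batching = false;
        if (dirty) flush();
      }
    },
  };
}
```

In the RxJS scenario above, wrapping the synchronous burst of emissions in `batchedUpdates` would collapse the N initial rerenders into one.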
I am part of a meet-up of theorists who do build websites... but are also curious how frameworks like React work under the hood. What algorithms they use, frameworks, etc.
It's possible to build a nice website without knowing any code at all, or knowing only some code and algorithms. In that case the code may become very difficult to control, revise, or share with your peers.
If you know some algorithms and code it's sometimes helpful to see what is under the hood.
Awesome that these implementation notes are so simple and laid out over just a few pages. Speaks to the conceptually simple underlying model of React I think. Nice job.
Among other things:
1) Create-React-App[1]: lets everyone start a React app with Babel/Webpack (JSX, classes and import/export) without any knowledge of Babel/Webpack.
2) The new Contributing docs (Codebase Overview, Implementation Notes (discussed here) and Design Principles) offer a starting point to figure out how React works.
3) A revamp of the current documentation is going to be released soon[2]
4) "You might not need Redux"[3]
---
[1] https://github.com/facebookincubator/create-react-app
[2] https://github.com/facebook/react/pulls?q=is%3Apr+is%3Aopen+...
[3] https://medium.com/@dan_abramov/you-might-not-need-redux-be4...