Posted in javascript, Flux, Reactjs, architecture

Flux data-flow architecture

Flux is a unidirectional data-flow architecture for building front-end applications, coined (and used) by Facebook.
Flux is a pattern, not a framework. You can implement it in any existing application, with or without React. React isn't technically a dependency, but it is recommended.

React's components are great, but as your application grows larger and more complex, those components will quickly become bloated. Flux really completes the "React application" story.

Is it needed?

No, of course not. As with anything, if you're only building small or simple UIs, with simple data and validation requirements, it's probably not worth implementing the full Flux pattern.

However, concerns such as data flow, validation and business logic still need to be handled somewhere. React is very good at working with the UI, but adding these complexities really isn't playing to its strengths and will inevitably lead to headaches.

Let's get started

In order to achieve a unidirectional architecture, Flux is heavily pub/sub driven and can be broken down into 4 components:

  • Actions – Event objects which get created as actions occur.
  • Dispatcher – Receives actions and broadcasts payloads to all registered callbacks.
  • Stores – Containers for application state & logic that have callbacks registered to the dispatcher.
  • Views – React Components that grab the state from Stores and pass it down via props to child components.
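To make the first of those concrete, here is a sketch of what an action might look like. The field names (`actionType`, `comment`) are illustrative - Flux doesn't mandate a particular shape:

```javascript
// A plain action object describing "a comment was created".
// The field names here are illustrative, not mandated by Flux.
var createCommentAction = {
  actionType: 'CREATE',
  comment: { text: 'First!' }
};

// Actions are typically wrapped in a payload and handed to the dispatcher,
// which broadcasts them to every registered store callback.
var payload = { action: createCommentAction };
```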


At the centre of this pattern is the Dispatcher. It's the dispatcher's job to receive actions (events) and call all registered callbacks. Yes - it's just an event dispatcher. You can write your own easily enough, but if you're feeling a bit lazy, the Facebook guys offer a dispatcher you can use (it's actually rather handy thanks to its waitFor() function, which allows callbacks to be run in a specific order).

Before you go and write a generic event dispatcher, first understand that the Flux dispatcher differs in two ways:

  1. Callbacks are not subscribed to particular events. Every payload is
    dispatched to every registered callback.
  2. Callbacks can be deferred in whole or part until other callbacks have
    been executed.

The dispatcher needs to have 3 functions:

  • register (function callback) : string
  • unregister (string id) : void
  • dispatch (object payload) : bool

Here is an example of a dispatcher (albeit basic):

var dispatcher = function() {
  this._callbacks = {};
  this._id = 1;

  this.register = function(callback) {
    var id = this._id++;
    this._callbacks[id] = callback;
    return id;
  };

  this.unregister = function(id) {
    delete this._callbacks[id];
  };

  this.dispatch = function(payload) {
    for (var id in this._callbacks) {
      this._callbacks[id](payload);
    }
    return true;
  };
};
You will notice that register(callback) returns a token; this can be anything as long as it's unique. I have used an int in this example for quickness, but you might want to consider something more meaningful in practice. The reason for the token is that you might need to run callbacks in a particular order - this way we have a reference to that callback; in other words, a key.

Another thing worth noticing is that the dispatch(payload) method loops through all registered callbacks and calls each one, passing in the payload. It is at the callback's discretion whether or not to act on it.
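To see both points in action, here is a rough sketch of a dispatcher in use, including a naive waitFor()-style ordering mechanism built on the tokens. This is my own simplified take for illustration, not Facebook's implementation:

```javascript
// A small dispatcher with a naive waitFor() - a sketch, not Facebook's implementation.
function Dispatcher() {
  this._callbacks = {};
  this._handled = {};
  this._pendingPayload = null;
  this._id = 1;
}

Dispatcher.prototype.register = function(callback) {
  var id = this._id++;
  this._callbacks[id] = callback;
  return id; // the token
};

Dispatcher.prototype.waitFor = function(tokens) {
  // Run the named callbacks first, if they haven't handled this payload yet.
  for (var i = 0; i < tokens.length; i++) {
    if (!this._handled[tokens[i]]) {
      this._handled[tokens[i]] = true;
      this._callbacks[tokens[i]](this._pendingPayload);
    }
  }
};

Dispatcher.prototype.dispatch = function(payload) {
  // Every payload goes to every callback - no per-event subscriptions.
  this._handled = {};
  this._pendingPayload = payload;
  for (var id in this._callbacks) {
    if (!this._handled[id]) {
      this._handled[id] = true;
      this._callbacks[id](payload);
    }
  }
  this._pendingPayload = null;
  return true;
};

var dispatcher = new Dispatcher();
var order = [];

var tokenA; // assigned below
var tokenB = dispatcher.register(function(payload) {
  dispatcher.waitFor([tokenA]); // defer until A's callback has run
  if (payload.action.actionType === 'CREATE') order.push('B');
});
tokenA = dispatcher.register(function(payload) {
  if (payload.action.actionType === 'CREATE') order.push('A');
});

dispatcher.dispatch({ action: { actionType: 'CREATE' } });
// order is now ['A', 'B'], even though B's callback was registered first
```

Note that waitFor() marks a callback as handled before invoking it, which keeps a circular dependency from recursing forever.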


Stores hold the state and logic for a particular collection of components. The idea is that these stores are the single source of truth for their data. Views query these stores for updates whenever they are told to (via events) - similar to a repository. Stores look like this:

var EventEmitter = require('events').EventEmitter;
var merge = require('react/lib/merge'); // similar to jQuery's $.extend()
var appDispatcher = require('../dispatcher/appDispatcher'); // your app's dispatcher instance

var CHANGE_EVENT = 'change';
var _comments = {};
var _nextId = 1;

function create(comment) {
  var id = _nextId++; // a simple incrementing id, for the example's sake
  _comments[id] = {
    id: id,
    text: comment.text
  };
}

function destroy(id) {
  delete _comments[id];
}

// here we are extending from Node's built-in event emitter object
var CommentStore = merge(EventEmitter.prototype, {

  getAll: function() {
    return _comments;
  },

  emitChange: function() {
    this.emit(CHANGE_EVENT);
  },

  addChangeListener: function(callback) {
    this.on(CHANGE_EVENT, callback);
  },

  removeChangeListener: function(callback) {
    this.removeListener(CHANGE_EVENT, callback);
  },

  dispatcherIndex: appDispatcher.register(function(payload) {
    var action = payload.action;

    // If you're not a fan of switches (like me), you can always apply the strategy pattern here.
    switch(action.actionType) {
      case 'CREATE':    // actionType constants should be encapsulated if needed.
        create(action.comment);
        CommentStore.emitChange();
        break;
      case 'DESTROY':
        destroy(action.id);
        CommentStore.emitChange();
        break;
    }
  })
});

module.exports = CommentStore;

We only want to export the store itself; we don't want to expose any functions that change state, such as create() and destroy(). This way we ensure correct data flow: if anything other than the store wants to change state, it must send an action through the dispatcher.
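In practice this usually means a small "action creator" module sits in front of the dispatcher. A hypothetical sketch - the names CommentActions and appDispatcher are illustrative, and the inline dispatcher here is a minimal stand-in for the real one:

```javascript
// Stand-in dispatcher with the register/dispatch shape described earlier.
var appDispatcher = {
  _callbacks: [],
  register: function(callback) {
    this._callbacks.push(callback);
    return this._callbacks.length - 1;
  },
  dispatch: function(payload) {
    this._callbacks.forEach(function(cb) { cb(payload); });
    return true;
  }
};

// Hypothetical action creator - the only public route to changing comment state.
var CommentActions = {
  create: function(text) {
    appDispatcher.dispatch({
      action: { actionType: 'CREATE', comment: { text: text } }
    });
  }
};

// The store's registered callback is the sole place state mutates.
var _comments = [];
appDispatcher.register(function(payload) {
  if (payload.action.actionType === 'CREATE') {
    _comments.push(payload.action.comment.text);
  }
});

CommentActions.create('Nice post');
// _comments is now ['Nice post'] - the view never touched the store directly
```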

The store emits its own event when something has changed. Your React views can listen for these change events and re-render the UI as and when needed. You can start to see how easily this scales: any new view that cares about the store's data can simply subscribe to the change event.


On to more familiar ground now, the React components.

function getCommentState() {
  return {
    allComments: CommentStore.getAll()
  };
}

var CommentApp = React.createClass({
  getInitialState: function() {
    return getCommentState();
  },
  componentDidMount: function() {
    CommentStore.addChangeListener(this._onChange);
  },
  componentWillUnmount: function() {
    CommentStore.removeChangeListener(this._onChange);
  },
  render: function() {
    return (
      <div>
        <CommentsList allComments={this.state.allComments} />
        <CommentsForm />
      </div>
    );
  },
  _onChange: function() {
    this.setState(getCommentState());
  }
});

module.exports = CommentApp;

Pretty easy to understand what is going on here.

React's one-way data flow goes hand in hand with Flux: a re-render is enforced when data changes (if need be), keeping the unidirectional data flow intact. Really nice.

Can this be used instead of other SPA frameworks?

Sure. Facebook is using it right now. The only thing we're really missing here is a router, and there are dozens of open-source routers out there to choose from.

Differences to MV* patterns?

  • For those who are used to MV* patterns, coming to a unidirectional data-flow architecture can be daunting, and I'll be the first to admit that unless you thrive on event-driven architectures it may be jarring at first. Oddly enough, though, Facebook say their new developers are much more productive with Flux than they were with an MV* architecture, because the code is much easier to reason about.
  • Incredibly scalable. It makes independent application components work together with ease. The only bottleneck becomes the performance of the dispatcher, and I'd guess you would need A LOT going on for that to be a problem. At that point, you should rethink your solution - there is probably a better way.


It's not another framework we have to deal with; it's just a pattern. We can look at it as an alternative to MVC. Patterns are harder to deprecate and drop support for than frameworks. They also don't bite you in the arse when you try to do something that isn't on their happy path.

I've found React to be pretty damn powerful on its own. I certainly enjoy its approach over other front-end frameworks, but even before I came across Flux I thought that if you chose to build a large application using React, you would naturally evolve an architecture similar to Flux anyway - which reinforces my liking of it.

This scales, and it has proven to scale well at Facebook. I can certainly see why - it just makes more sense to me than an MV* architecture.