SurviveJS - Webpack and React
Juho Vepsäläinen
This book is for sale at http://leanpub.com/survivejs_webpack_react
ISBN 978-1523910502
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.
Contents

Introduction
  What is Webpack?
  What is React?
  What Will You Learn?
  How is This Book Organized?
  What is Kanban?
  Who is This Book for?
  How to Approach the Book?
  Book Versioning
  Extra Material
  Getting Support
  Announcements
  Acknowledgments

I Setting Up Webpack

1. Webpack Compared
  1.1 The Rise of the SPAs
  1.2 Task Runners and Bundlers
  1.3 Make
  1.4 Grunt
  1.5 Gulp
  1.6 Browserify
  1.7 Webpack
  1.8 JSPM
  1.9 Why Use Webpack?
  1.10 Module Formats Supported by Webpack
  1.11 Conclusion

Appendices

Structuring React Projects
  Directory per Concept
  Directory per Component
  Directory per View
  Conclusion

Troubleshooting
  EPEERINVALID
  Warning: setState(): Cannot update during an existing state transition
  Warning: React attempted to reuse markup in a container but the checksum was invalid
  Module parse failed
  Project Fails to Compile
Introduction
Front-end development moves forward fast. A good indication of this is the pace at which new technologies appear on the scene. Webpack and React are two recent newcomers. Combined, these tools allow you to build all sorts of web applications swiftly. Most importantly, learning these tools provides you perspective. That's what this book is about.
What is Webpack?
Web browsers have been designed to consume HTML, JavaScript, and CSS. The simplest way to develop is to write files that the browser understands directly. The problem is that this eventually becomes unwieldy. This is particularly true when you are developing web applications.

There are multiple ways to approach this problem. You can start splitting up your JavaScript and CSS into separate files. You could load dependencies through script tags. Even though this is better, it is still a little problematic.
If you want to use technologies that compile to these target formats, you will need to introduce
preprocessing steps. Task runners, such as Grunt and Gulp, allow you to achieve this, but even then
you need to write a lot of configuration by hand.
What is React?
Facebook's React, a JavaScript library, is a component-based view abstraction. A component could be a form input, button, or any other element in your user interface. This provides an interesting contrast to earlier approaches, as React isn't bound to the DOM by design. You can use it to implement mobile applications, for example.
https://webpack.github.io/
https://facebook.github.io/react/
Kanban application
This book teaches you to build a Kanban application. Beyond this, more theoretical aspects of web
development are discussed. Completing the project gives you a good idea of how to implement
something on your own. During the process you will learn why certain libraries are useful and will
be able to justify your technology choices better.
The final, theoretical part of the book covers more advanced topics. If you are reading the commercial
edition of this book, there's something extra in it for you. I will show you how to deal with typing
in React in order to produce higher quality code. You will also learn to test your components and
logic.
I will also show you how to lint your code effectively using ESLint and various other tools. There
is a chapter in which you learn to author libraries at npm. The lessons learned there will come
in handy for applications as well. Finally, you will learn to style your React application in various
emerging ways.
There are a couple of appendices at the end. They are meant to give you food for thought and explain aspects, such as language features, in greater detail. If there's a bit of syntax that seems weird to you in the book, you'll likely find more information there.
What is Kanban?
Kanban, originally developed at Toyota, allows you to track the status of tasks. It can be modeled in
terms of Lanes and Notes. Notes move through Lanes representing stages from left to right as they
become completed. Notes themselves can contain information about the task itself, its priority, and
so on as required.
The system can be extended in various ways. One simple way is to apply a Work In Progress (WIP)
limit per lane. The effect of this is that you are forced to focus on getting tasks done. That is one of
the good consequences of using Kanban. Moving those notes around is satisfying. As a bonus you
get visibility and know what is yet to be done.
The approach is quite powerful, and you can find use for it in many places.

How to Approach the Book?

There are several ways to go through this book:
From start to end - This would be the traditional way to approach a book. It will also require
the most amount of time. But on the plus side you get a steady progression.
React first, Webpack after - An alternative is to skip the early chapters on Webpack, download
a starting point from the repository, and go through the Kanban demonstration first. Follow
the Webpack chapters after that to understand what the configuration is doing and why. The
Advanced Techniques part and appendices complement this content well.
Webpack only - If you know React very well, maybe it makes sense to go through the Webpack
portions only. You can apply the same skills beyond React after all.
Advanced techniques only - Given React ecosystem is so vast, the Advanced Techniques part
covers interesting niches you might miss otherwise. Pick up techniques like linting or learn to
improve your npm setup. It may be worth your while to dig into various styling approaches
discussed to find something that suits your purposes.
The book doesn't cover everything you need to know in order to develop front-end applications. That's simply too much for a single book. I do believe, however, that it might be able to push you in the right direction. The ecosystem around Webpack and React is fairly large and I've done my best to cover a good chunk of it.

Given the book relies on a variety of new language features, I've gathered the most important ones into a separate Language Features appendix that provides a quick look at them. If you want to understand the features in isolation or feel unsure of something, that's a good place to look.
Book Versioning
Given this book receives a fair amount of maintenance and improvements due to the pace of
innovation, there's a rough versioning scheme in place. I maintain release notes for each new version
at the book blog. That should give you a good idea of what has changed between versions. Also
examining the GitHub repository may be beneficial. I recommend using the GitHub compare tool
for this purpose. Example:
https://github.com/survivejs/webpack_react/compare/v1.9.10...v1.9.17
The page will show you the individual commits that went to the project between the given version
range. You can also see the lines that have changed in the book. This excludes the private chapters,
but it's enough to give you a good idea of the major changes made to the book.
The current version of the book is 2.0.5.
https://github.com/survivejs/webpack_react/tree/master/project_source/03_webpack_and_react/kanban_app
http://survivejs.com/blog/
Extra Material
The book content and source are available at the book's repository on GitHub. Please note that the repository defaults to the dev branch of the project. This makes it convenient to contribute. To find the source matching the version of the book you are reading, use the tag selector in GitHub's user interface.
The book repository contains code per chapter. This means you can start from anywhere you want
without having to type it all through yourself. If you are unsure of something, you can always refer
to that.
You can find a lot of complementary material at the survivejs organization. Examples of this are alternative implementations of the application written using mobservable, Redux, and Cerebral/Baobab. Studying those can give you a good idea of how different architectures work out using the same example.
Getting Support
As no book is perfect, you will likely come across issues and might have some questions related to the
content. There are a couple of options to deal with this:
If you post questions to Stack Overflow, tag them using survivejs so I will get notified of them. You
can use the hashtag #survivejs on Twitter for the same effect.
I have tried to cover some common issues at the Troubleshooting appendix. That will be expanded
as common problems are found.
Announcements
I announce SurviveJS related news through a couple of channels:
Mailing list
Twitter
Blog RSS
Acknowledgments
An effort like this wouldn't be possible without community support. There are a lot of people to
thank as a result!
Big thanks to Christian Alfoni for starting the react-webpack-cookbook with me. That work eventually led to this book.

The book wouldn't be half as good as it is without patient editing and feedback by my editor Jesús Rodríguez Rodríguez. Thank you.
https://gitter.im/survivejs/webpack_react
https://twitter.com/survivejs
https://twitter.com/bebraw
mailto:[email protected]
https://github.com/survivejs/ama/issues
http://eepurl.com/bth1v5
https://twitter.com/survivejs
http://survivejs.com/atom.xml
http://www.christianalfoni.com/
https://github.com/christianalfoni/react-webpack-cookbook
https://github.com/Foxandxss
Special thanks to Steve Piercy for numerous contributions. Thanks to Prospect One and Dixon &
Moe for helping with the logo and graphical outlook. Thanks for proofreading to Ava Mallory and
EditorNancy from fiverr.com.
Numerous individuals have provided support and feedback along the way. Thank you in no
particular order Vitaliy Kotov, @af7, Dan Abramov, @dnmd, James Cavanaugh, Josh Perez,
Nicholas C. Zakas, Ilya Volodin, Jan Nicklas, Daniel de la Cruz, Robert Smith, Andreas Eldh, Brandon
Tilley, Braden Evans, Daniele Zannotti, Partick Forringer, Rafael Xavier de Souza, Dennis Bunskoek,
Ross Mackay, Jimmy Jia, Michael Bodnarchuk, Ronald Borman, Guy Ellis, Mark Penner, Cory House,
Sander Wapstra, Nick Ostrovsky, Oleg Chiruhin, Matt Brookes, Devin Pastoor, Yoni Weisbrod,
Guyon Moree, Wilson Mock, Herryanto Siatono, Héctor Cascos, Erick Bazán, Fabio Bedini, Gunnari
Auvinen, Aaron McLeod, John Nguyen, Hasitha Liyanage, Mark Holmes, Brandon Dail, Ahmed
Kamal, Jordan Harband, Michel Weststrate, Ives van Hoorne, Luca DeCaprio, @dev4Fun, Fernando
Montoya, Hu Ming, @mpr0xy, David @davegomez Gómez, Aleksey Guryanov, Elio Dantoni,
Yosi Taguri, Ed McPadden, Wayne Maurer, Adam Beck, Omid Hezaveh, Connor Lay, Nathan
Grey, Avishay Orpaz, Jax Cavalera, Juan Diego Hernández, Peter Poulsen, Harro van der Klauw,
Tyler Anton, Michael Kelley, @xuyuanme, @RogerSep, Jonathan Davis, @snowyplover, Tobias
Koppers, Diego Toro, George Hilios, Jim Alateras, @atleb, Andy Klimczak, James Anaipakos,
Christian Hettlage, Sergey Lukin, Matthew Toledo, Talha Mansoor, Pawel Chojnacki, @eMerzh,
Gary Robinson, Omar van Galen, Jan Van Bruggen, Savio van Hoi, Alex Shepard, Derek Smith, and
Tetsushi Omi. If I'm missing your name, I might have forgotten to add it.
http://prospectone.pl/
http://dixonandmoe.com/
I Setting Up Webpack
Webpack is a powerful module bundler. It hides a lot of power behind configuration. Once you
understand its fundamentals, it becomes much easier to use this power. Initially, it can be a confusing
tool to adopt, but once you break the ice, it gets better.
In this part, we will develop a Webpack-based project configuration that provides a solid foundation
for the Kanban project and React development overall.
1. Webpack Compared
You can better understand why Webpack's approach is powerful by putting it into historical context.
Back in the day, it was enough just to concatenate some scripts together. Times have changed,
though, and now distributing your JavaScript code can be a complex endeavor.
1.3 Make
You could say Make goes way back. It was initially released in 1977. Even though it's an old tool, it
has remained relevant. Make allows you to write separate tasks for various purposes. For instance,
you might have separate tasks for creating a production build, minifying your JavaScript or running
tests. You can find the same idea in many other tools.
Even though Make is mostly used with C projects, it's not tied to C in any way. James Coglan discusses in detail how to use Make with JavaScript. Consider the abbreviated code below, based on James' post:
Makefile
PATH := node_modules/.bin:$(PATH)
SHELL := /bin/bash

# The bundle and its dependencies (recipe lines must be indented with tabs)
app_bundle := build/app.js
libraries  := vendor/jquery.js

all: $(app_bundle)

build/%.js: %.coffee
	coffee -co $(dir $@) $<

clean:
	rm -rf build
With Make, you model your tasks using Make-specific syntax and terminal commands. This allows
it to integrate easily with Webpack.
https://blog.jcoglan.com/2014/02/05/building-javascript-projects-with-make/
1.4 Grunt
Grunt
Grunt went mainstream before Gulp. Its plugin architecture, especially, contributed towards its
popularity. At the same time, this architecture is the Achilles' heel of Grunt. I know from experience that you don't want to end up having to maintain a 300-line Gruntfile. Here's an example from the Grunt documentation:
module.exports = function(grunt) {
  grunt.initConfig({
    jshint: {
      files: ['Gruntfile.js', 'src/**/*.js', 'test/**/*.js'],
      options: {
        globals: {
          jQuery: true
        }
      }
    },
    watch: {
      files: ['<%= jshint.files %>'],
      tasks: ['jshint']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['jshint']);
};

http://gruntjs.com/sample-gruntfile
In this sample, we define two basic tasks related to jshint, a linting tool that locates possible problem
spots in your JavaScript source code. We have a standalone task for running jshint. Also, we have a
watcher-based task. When we run Grunt, we'll get warnings in real time in our terminal as we edit
and save our source code.
In practice, you would have many small tasks for various purposes, such as building the project. The
example shows how these tasks are constructed. An important part of the power of Grunt is that it
hides a lot of the wiring from you. Taken too far, this can get problematic, though. It can become
hard to thoroughly understand what's going on under the hood.
Note that the grunt-webpack plugin allows you to use Webpack in a Grunt environment.
You can leave the heavy lifting to Webpack.
1.5 Gulp
Gulp
Gulp takes a different approach. Instead of relying on configuration per plugin, you deal with actual code. Gulp builds on top of the tried and true concept of piping. If you are familiar with Unix, it's the same idea here. You simply have sources, filters, and sinks.

Sources correspond to files. Filters perform operations on sources (e.g., convert to JavaScript). Finally, the results get passed to sinks (e.g., your build directory). Here's a sample Gulpfile to give you a better idea of the approach, taken from the project's README. It has been abbreviated a bit:
https://www.npmjs.com/package/grunt-webpack
var gulp = require('gulp');

var paths = {
  scripts: ['client/js/**/*.coffee', '!client/external/**/*.coffee']
};

// The 'watch' and 'scripts' task definitions have been omitted here for brevity

// The default task (called when you run `gulp` from CLI)
gulp.task('default', ['watch', 'scripts']);
Given the configuration is code, you can always just hack it if you run into trouble. You can wrap existing Node.js packages as Gulp plugins, and so on. Compared to Grunt, you have a clearer idea of what's going on. You still end up writing a lot of boilerplate for casual tasks, though. That is where some newer approaches come in.
1.6 Browserify
Browserify
Dealing with JavaScript modules has always been a bit of a problem. The language itself actually
didn't have the concept of modules till ES6. Ergo, we have been stuck in the 1990s when it comes to
browser environments. Various solutions, including AMD, have been proposed.
In practice, it can be useful just to use CommonJS, the Node.js format, and let the tooling deal with
the rest. The advantage is that you can often hook into npm and avoid reinventing the wheel.
Browserify is one solution to the module problem. It provides a way to bundle CommonJS modules
together. You can hook it up with Gulp. There are smaller transformation tools that allow you to
move beyond the basic usage. For example, watchify provides a file watcher that creates bundles
for you during development. This will save some effort and no doubt is a good solution up to a point.
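For instance, a development build through watchify could be triggered roughly like this (the input and output paths are made up for illustration):

watchify app/index.js -o build/bundle.js -v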
https://www.npmjs.com/package/gulp-webpack
http://requirejs.org/docs/whyamd.html
http://browserify.org/
https://www.npmjs.com/package/watchify
The Browserify ecosystem is composed of a lot of small modules. In this way, Browserify adheres
to the Unix philosophy. Browserify is a little easier to adopt than Webpack, and is, in fact, a good
alternative to it.
1.7 Webpack
webpack
You could say Webpack (or just webpack) takes a more monolithic approach than Browserify.
Whereas Browserify consists of multiple small tools, Webpack comes with a core that provides a
lot of functionality out of the box. The core can be extended using specific loaders and plugins.
Webpack will traverse through the require statements of your project and will generate the bundles
you have defined. You can even load your dependencies in a dynamic manner using a custom
require.ensure statement. The loader mechanism works for CSS as well and @import is supported.
There are also plugins for specific tasks, such as minification, localization, hot loading, and so on.
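To illustrate the dynamic side, a require.ensure based split point could look roughly like this (the module path is made up for illustration):

// require.ensure marks a split point. Webpack emits the code below into a
// separate chunk that gets fetched on demand when this branch executes.
require.ensure(['./editor'], function(require) {
  var editor = require('./editor');

  editor.open();
});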
To give you an example, require('style!css!./main.css') loads the contents of main.css and
processes it through CSS and style loaders from right to left. Given that declarations such as this tie the source code to Webpack, it is preferable to set up the loaders in the Webpack configuration. Here
is a sample configuration adapted from the official webpack tutorial:
webpack.config.js
http://webpack.github.io/docs/tutorials/getting-started/
var webpack = require('webpack');

module.exports = {
  entry: './entry.js',
  output: {
    path: __dirname,
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /\.css$/,
        loaders: ['style', 'css']
      }
    ]
  },
  plugins: [
    new webpack.optimize.UglifyJsPlugin()
  ]
};
Given the configuration is written in JavaScript, it's quite malleable. As long as it's JavaScript, Webpack is fine with it.

The configuration model may make Webpack feel a bit opaque at times. It can be difficult to understand what it's doing. This is particularly true for more complicated cases. I have compiled
a webpack cookbook with Christian Alfoni that goes into more detail when it comes to specific
problems.
https://christianalfoni.github.io/react-webpack-cookbook/
1.8 JSPM
JSPM
Using JSPM is quite different from earlier tools. It comes with a little CLI tool of its own that is used
to install new packages to the project, create a production bundle, and so on. It supports SystemJS
plugins that allow you to load various formats to your project.
Given JSPM is still a young project, there might be rough spots. That said, it may be worth a look
if you are adventurous. As you know by now, tooling tends to change quite often in front-end
development, and JSPM is definitely a worthy contender.
Bundle Splitting
Aside from the HMR feature, Webpack's bundling capabilities are extensive. It allows you to split
bundles in various ways. You can even load them dynamically as your application gets executed. This
sort of lazy loading comes in handy, especially for larger applications. You can load dependencies
as you need them.
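As a rough sketch of one splitting approach, separate application and vendor bundles can be produced through multiple entry chunks combined with the CommonsChunkPlugin (entry names and paths below are illustrative):

var path = require('path');
var webpack = require('webpack');

module.exports = {
  entry: {
    app: './app',
    // Everything listed here ends up in a vendor bundle of its own
    vendor: ['react']
  },
  output: {
    path: path.join(__dirname, 'build'),
    // [name] expands to the entry name, i.e., app.js and vendor.js
    filename: '[name].js'
  },
  plugins: [
    // Extract modules shared with the vendor entry out of app.js
    new webpack.optimize.CommonsChunkPlugin('vendor', 'vendor.js')
  ]
};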
Asset Hashing
With Webpack, you can easily inject a hash to each bundle name (e.g., app.d587bbd6e38337f5accd.js).
This allows you to invalidate bundles on the client side as changes are made. Bundle splitting allows
the client to reload only a small part of the data in the ideal case.
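In configuration terms this boils down to a placeholder in the output filename. A minimal sketch:

var path = require('path');

module.exports = {
  entry: {
    app: './app'
  },
  output: {
    path: path.join(__dirname, 'build'),
    // [chunkhash] changes only when the chunk contents change, so the
    // client can keep caching unchanged bundles and refetch only the rest
    filename: '[name].[chunkhash].js'
  }
};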
CommonJS
If you have used Node.js, it is likely that you are familiar with CommonJS already. Here's a brief
example:
https://webpack.github.io/docs/comparison.html
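// A brief sketch of CommonJS syntax; the module name is illustrative
var MyModule = require('./MyModule');

// export at module root
module.exports = function() { ... };

// alternatively, export individual functions
exports.hello = function() {...};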
ES6
ES6 is the format we all have been waiting for since 1995. As you can see from the example below, it resembles CommonJS a little bit and is quite clear!
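A rough ES6 equivalent of the CommonJS sketch above:

import MyModule from './MyModule.js';

// export at module root
export default function () { ... };

// or export individual functions
export function hello() {...}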
AMD
AMD, or asynchronous module definition, was invented as a workaround. It introduces a define
wrapper:
define(['./MyModule.js'], function (MyModule) {
  // export at module root
  return function() {...};
});

// or
define(['./MyModule.js'], function (MyModule) {
  // export as module function
  return {
    hello: function() {...}
  };
});
This approach definitely eliminates some of the clutter. You will still end up with some code that
might feel redundant. Given there's ES6 now, it probably doesn't make much sense to use AMD
anymore unless you really have to.
UMD
UMD, universal module definition, takes it all to the next level. It is a monster of a format that
aims to make the aforementioned formats compatible with each other. I will spare your eyes from
it. Never write it yourself; leave it to the tools. If that didn't scare you off, check out the official
definitions.
Webpack can generate UMD wrappers for you (output.libraryTarget: 'umd'). This is particularly
useful for library authors. We'll get back to this later when discussing npm and library authorship
in detail at the Authoring Packages chapter.
1.11 Conclusion
I hope this chapter helped you understand why Webpack is a valuable tool worth learning. It solves
a fair share of common web development problems. If you know it well, it will save a great deal of
time. In the following chapters we'll examine Webpack in more detail. You will learn to develop a
simple development configuration. Well also get started with our Kanban application.
You can, and probably should, use Webpack with some other tools. It won't solve everything. It does solve the difficult problem of bundling, however. That's one less worry during development.
Just using package.json, scripts, and Webpack takes you far, as we will see soon.
https://github.com/umdjs/umd
2. Developing with Webpack
If you are not one of those people who likes to skip the introductions, you might have some clue
what Webpack is. In its simplicity, it is a module bundler. It takes a bunch of assets in and outputs
assets you can give to your client.
This sounds simple, but in practice, it can be a complicated and messy process. You definitely don't want to deal with all the details yourself. This is where Webpack fits in. Next, we'll get Webpack set
up and your first project running in development mode.
Before getting started, make sure you are using a recent version of Node.js as that will save
some trouble. There are packages available for many platforms. A good alternative is to
set up a Vagrant box and maintain your development environment there.
Especially css-loader has issues with Node 0.10 given its missing native support for
promises. Consider polyfilling Promise through require('es6-promise').polyfill() at
the beginning of your Webpack configuration if you still want to use 0.10. This technique
depends on the es6-promise package.
mkdir kanban_app
cd kanban_app
npm init -y # -y gives you default *package.json*, skip for more control
https://nodejs.org/en/download/package-manager/
https://www.vagrantup.com/
https://github.com/webpack/css-loader/issues/144
https://www.npmjs.com/package/es6-promise
http://nodejs.org/
As a result, you should have package.json at your project root. You can still tweak it manually to
make further changes. We'll be making some changes through the npm tool, but it's fine to tweak the file
to your liking. The official documentation explains various package.json options in more detail. I
also cover some useful library authoring related tricks later in this book.
You can set those npm init defaults at ~/.npmrc. See the Authoring Packages chapter for
more information about npm and its usage.
Setting Up Git
If you are into version control, as you should be, this would be a good time to set up your repository.
You can create commits as you progress with the project.
If you are using git, I recommend setting up a .gitignore to the project root:
.gitignore
node_modules
At the very least, you should have node_modules here as you probably don't want that to end
up in the source control. The problem with that is that as some modules need to be compiled per
platform, it gets rather messy to collaborate. Ideally, your git status should look clean. You can
extend .gitignore as you go.
You can push operating system level ignore rules, such as .DS_Store and *.log, to ~/.gitignore. This will keep your project level rules simpler.
npm maintains a directory where it installs possible executables of packages. You can display the
exact path using npm bin. Most likely it points at .../node_modules/.bin. Try executing Webpack
from there through terminal using node_modules/.bin/webpack or a similar command.
You should see a version, a link to the command line interface guide and a long list of options. We
won't be using most of those, but it's good to know that this tool is packed with functionality, if
nothing else.
https://docs.npmjs.com/files/package.json
kanban_app $ node_modules/.bin/webpack
webpack 1.12.12
Usage: https://webpack.github.io/docs/cli.html
Options:
--help, -h, -?
--config
--context
--entry
...
--display-cached-assets
--display-reasons, --verbose, -v
Webpack works using a global install as well (-g or --global flag during installation). It is preferred
to keep it as a project dependency instead. This way you have direct control over the version you
are running. This is a good practice overall, as keeping tools as project dependencies means you have something that works standalone in other environments.
We can use --save and --save-dev to separate application and development dependencies. The
former will install and write to package.json dependencies field whereas the latter will write to
devDependencies instead. This separation keeps project dependencies more understandable. The
separation will come in handy when we generate a vendor bundle later on at the Building Kanban
chapter.
There are handy shortcuts for --save and --save-dev. -S maps to --save and -D to
--save-dev. So if you want to optimize for characters written, consider using these instead.
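For instance, Webpack itself can be added to the project as a development dependency like this:

npm i webpack --save-dev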
/app
  index.js
  component.js
/build
  index.html
package.json
webpack.config.js
In this case, we'll generate bundle.js using Webpack based on our /app. To make this possible, we
should set up some assets and webpack.config.js.
app/component.js

module.exports = function () {
  var element = document.createElement('h1');

  element.innerHTML = 'Hello world';

  return element;
};
Next, we are going to need an entry point for our application. It will simply require our component
and render it through the DOM:
app/index.js
var component = require('./component');

var app = document.createElement('div');

document.body.appendChild(app);

app.appendChild(component());
We are also going to need some HTML so we can load the generated bundle:
build/index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Kanban app</title>
</head>
<body>
<div id="app"></div>
<script src="./bundle.js"></script>
</body>
</html>
We'll generate this file dynamically at Building Kanban, but the current setup is good enough for
now.
webpack.config.js

const path = require('path');

const PATHS = {
  app: path.join(__dirname, 'app'),
  build: path.join(__dirname, 'build')
};

module.exports = {
  // Entry accepts a path or an object of entries. We'll be using the
  // latter form given it's convenient with more complex configurations.
  entry: {
    app: PATHS.app
  },
  output: {
    path: PATHS.build,
    filename: 'bundle.js'
  }
};
The entry path could be given as a relative one. The context field can be used to configure that
lookup. Given plenty of places expect absolute paths, I prefer to use absolute paths everywhere to
avoid confusion.
I like to use path.join, but path.resolve would be a good alternative. path.resolve is equivalent
to navigating the file system through cd. path.join gives you just that, a join. See Node.js path API
for the exact details.
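A quick illustration of the difference:

const path = require('path');

// With simple relative segments both behave the same way
path.join(__dirname, 'app');    // <project>/app
path.resolve(__dirname, 'app'); // <project>/app

// The difference shows up with absolute segments
path.join('/foo', '/bar');    // '/foo/bar'
path.resolve('/foo', '/bar'); // '/bar', as if running `cd /foo; cd /bar`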
If you execute node_modules/.bin/webpack, you should see output like this:
Hash: 2dca5a3850ce5d2de54c
Version: webpack 1.12.13
Time: 85ms
Asset Size Chunks Chunk Names
bundle.js 1.75 kB 0 [emitted] app
[0] ./app/index.js 144 bytes {0} [built]
[1] ./app/component.js 136 bytes {0} [built]
This means you have a build at your output directory. You can open the build/index.html file
directly through a browser to examine the results. On OS X open ./build/index.html works.
Another way is to serve the contents of the directory through a server, such as serve (npm i
serve -g). In this case, execute serve at the output directory and head to localhost:3000
at your browser. You can configure the port through the --port parameter.
...
"scripts": {
"build": "webpack"
},
...
https://webpack.github.io/docs/configuration.html#context
https://nodejs.org/api/path.html
You can execute the scripts defined this way through npm run. If you execute npm run build now,
you should get a build at your output directory just like earlier.
This works because npm adds node_modules/.bin temporarily to the path. As a result, rather than
having to write "build": "node_modules/.bin/webpack", we can do just "build": "webpack".
Unless Webpack is installed to the project, this can point to a possible global install. That can be
potentially confusing. Prefer local installs over global for this reason.
Task runners, such as Grunt or Gulp, allow you to achieve the same result while operating in a
cross-platform manner. If you go through package.json like this, you may have to be more careful.
On the plus side, this is a very light approach. To keep things simple, we'll be relying on it.
You should use webpack-dev-server strictly for development. If you want to host your
application, consider other, standard solutions, such as Apache or Nginx.
http://livereload.com/
http://www.browsersync.io/
package.json

...
"scripts": {
  "build": "webpack",
  "start": "webpack-dev-server --content-base build"
},
...
If you execute either npm run start or npm start now, you should see something like this at the
terminal:
> webpack-dev-server
http://localhost:8080/
webpack result is served from /
content is served from .../kanban_app/build
404s will fallback to /index.html
The output means that the development server is running. If you open http://localhost:8080/ at your
browser, you should see something. If you try modifying the code, you should see output at your
terminal. The problem is that the browser doesn't catch these changes without a hard refresh. That's something we need to resolve next.
Hello world
If you fail to see anything at the browser, you may need to use a different port through a webpack-dev-server --port 3000 kind of invocation. One reason why the server might fail to run is simply because there's something else running on the port. You can verify this through a terminal command, such as netstat -na | grep 8080. If there's something running on port 8080, it should display a message. The exact command may depend
on your platform.
Maintain configuration in multiple files and point Webpack to each through --config
parameter. Share configuration through module imports. You can see this approach in action
at webpack/react-starter.
Push configuration to a library which you then consume. Example: HenrikJoreteg/hjs-
webpack.
Maintain configuration within a single file and branch there. If we trigger a script through
npm (i.e., npm run test), npm sets this information in an environment variable. We can match
against it and return the configuration we want.
I prefer the last approach as it allows me to understand what's going on easily. It is ideal for small projects, such as this.

To keep things simple and help with the approach, I've defined a custom merge function that concatenates arrays and merges objects. This is convenient with Webpack as we'll soon see. Execute npm i webpack-merge --save-dev to add it to the project.
webpack.config.js

...
const merge = require('webpack-merge');

// npm sets the name of the script being executed to this environment variable
const TARGET = process.env.npm_lifecycle_event;

const common = {
  ...
};

// Default configuration
if(TARGET === 'start' || !TARGET) {
  module.exports = merge(common, {});
}

https://github.com/webpack/react-starter
https://github.com/HenrikJoreteg/hjs-webpack
Now that we have room for expansion, we can hook up Hot Module Replacement to make the
browser refresh and make the development mode more useful.
webpack.config.js

...
const webpack = require('webpack');
...
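The development branch of the configuration then enables the dev server and attaches the Hot Module Replacement plugin. A sketch along these lines (the option values below are common defaults rather than anything definitive):

// Default configuration
if(TARGET === 'start' || !TARGET) {
  module.exports = merge(common, {
    devServer: {
      contentBase: PATHS.build,

      // Enable history API fallback so HTML5 History API based
      // routing works. This is a good default that will come in handy.
      historyApiFallback: true,
      hot: true,
      inline: true,
      progress: true,

      // Display only errors to reduce the amount of output.
      stats: 'errors-only',

      // Parse host and port from env so this is easy to customize.
      host: process.env.HOST,
      port: process.env.PORT
    },
    plugins: [
      new webpack.HotModuleReplacementPlugin()
    ]
  });
}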
"scripts": {
"build": "webpack"
"start": "webpack-dev-server --content-base build"
"start": "webpack-dev-server"
},
...
Execute npm start and surf to localhost:8080. Try modifying app/component.js. It should refresh the browser. Note that this is a hard refresh in case you modify JavaScript code. CSS modifications work in a neater manner and can be applied without a refresh. In the next chapter we discuss how to achieve something similar with React. This will provide us with a somewhat better development experience.

If you are using Windows and it doesn't refresh, see the following section for an alternative setup.
webpack-dev-server can be very particular about paths. If the given include paths don't
match the system casing exactly, this can cause it to fail to work. Webpack issue #675
discusses this in more detail.
If you want to default to some other port than 8080, you can use a declaration like port:
process.env.PORT || 3000.
HMR on Windows
The setup may be problematic on certain versions of Windows. Instead of using devServer and
plugins configuration, implement it like this:
webpack.config.js
https://github.com/webpack/webpack/issues/675
...
...
package.json
...
"scripts": {
"build": "webpack",
"start": "webpack-dev-server"
"start": "webpack-dev-server --watch-poll --inline --hot"
},
...
Given this setup polls the filesystem, it is going to be more resource intensive. It's worth giving it a go if the default doesn't work, though.
dotenv allows you to define environment variables through a .env file. This can be
somewhat convenient during development!
Note that there are slight differences between the CLI and the Node.js API. This is the
reason why some prefer to solely use the Node.js API.
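In case css-loader and style-loader are not installed yet, something like this adds them to the project as development dependencies:

npm i css-loader style-loader --save-dev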
Now that we have the loaders we need, we'll need to make sure Webpack is aware of them. Configure
as follows:
webpack.config.js
...
const common = {
  entry: {
    app: PATHS.app
  },
  output: {
    path: PATHS.build,
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        // Test expects a RegExp! Note the slashes!
        test: /\.css$/,
        loaders: ['style', 'css'],
        // Include accepts either a path or an array of paths.
        include: PATHS.app
      }
    ]
  }
};
...

https://webpack.github.io/docs/webpack-dev-middleware.html
https://webpack.github.io/docs/webpack-dev-server.html#api
https://www.npmjs.com/package/dotenv
https://github.com/webpack/webpack-dev-server/issues/106
The configuration we added means that files ending with .css should invoke given loaders. test
matches against a JavaScript style regular expression. The loaders are evaluated from right to left. In
this case, css-loader gets evaluated first, then style-loader. css-loader will resolve @import and url
statements in our CSS files. style-loader deals with require statements in our JavaScript. A similar
approach works with CSS preprocessors, like Sass and Less, and their loaders.
Loaders are transformations that are applied to source files, and return the new source.
Loaders can be chained together, like using a pipe in Unix. See Webpacks What are
loaders? and list of loaders.
If include isn't set, Webpack will traverse all files within the base directory. This can hurt performance! It is a good idea to always set up include. There's also an exclude option that
may come in handy. Prefer include, however.
app/main.css

body {
  background: cornsilk;
}
Also, we'll need to make Webpack aware of it. Without a require pointing at it, Webpack won't be able to find the file:
app/index.js
http://webpack.github.io/docs/using-loaders.html
http://webpack.github.io/docs/list-of-loaders.html
require('./main.css');
...
Execute npm start now. Point your browser to localhost:8080 if you are using the default port.
Open up main.css and change the background color to something like lime (background: lime).
Develop styles as needed to make it look a little nicer.
An alternative way to load CSS would be to define a separate entry through which we
point at CSS. I discuss that at the Building Kanban chapter.
To enable sourcemaps during development, add a devtool declaration to the development configuration:

webpack.config.js

...
if(TARGET === 'start' || !TARGET) {
  module.exports = merge(common, {
    devtool: 'eval-source-map',
    ...
  });
}
...
If you run the development build now using npm start, Webpack will generate sourcemaps. Webpack provides many different ways to generate them, as discussed in the official documentation. In this case, we're using eval-source-map. It builds slowly initially, but it provides fast rebuild speed
and yields real files.
Faster development specific options, such as cheap-module-eval-source-map and eval, produce
lower quality sourcemaps. All eval options will emit sourcemaps as a part of your JavaScript code.
Therefore they are not suitable for a production environment. Given size isn't an issue during
development, they tend to be a good fit for that use case.
It is possible you may need to enable sourcemaps in your browser for this to work. See Chrome
and Firefox instructions for further details.
https://webpack.github.io/docs/configuration.html#devtool
https://developer.chrome.com/devtools/docs/javascript-debugging
https://developer.mozilla.org/en-US/docs/Tools/Debugger/How_to/Use_a_source_map
https://www.npmjs.com/package/npm-install-webpack-plugin
webpack.config.js

...
const NpmInstallPlugin = require('npm-install-webpack-plugin');
...

// Default configuration
if(TARGET === 'start' || !TARGET) {
  module.exports = merge(common, {
    ...
    plugins: [
      new webpack.HotModuleReplacementPlugin(),
      new NpmInstallPlugin({
        save: true // --save
      })
    ]
  });
}
After this change we can save quite a bit of typing and context switches.
2.13 Conclusion
In this chapter, you learned to build and develop using Webpack. I will return to the build topic
at the Building Kanban chapter. The current setup is not ideal for production. At this point it's the
development configuration that matters. In the next chapter, we will see how to expand the approach
to work with React.
3. Webpack and React
Combined with Webpack, React becomes a joy to work with. Even though you can use React with
other build tools, Webpack is a good fit and quite straightforward to set up. In this chapter, we'll
expand our configuration. After that, we have a good starting point for developing our application
further.
Common editors (Sublime Text, Visual Studio Code, vim, emacs, Atom and such) have good
support for React. Even IDEs, such as WebStorm, support it up to an extent. Nuclide, an
Atom based IDE, has been developed with React in mind.
React
Facebook's React has changed the way we think about front-end development. Also, thanks to React Native the approach isn't limited just to the web. Although simple to learn, React provides plenty of
power.
React isn't a framework like Angular.js or Ember. Frameworks tend to provide a lot of solutions
out of the box. With React you will have to assemble your application from separate libraries. Both
approaches have their merits. Frameworks may be faster to pick up, but they can become harder to
work with as you hit their boundaries. In a library based approach you have more flexibility, but
also responsibility.
https://www.jetbrains.com/webstorm/
http://nuclide.io/
https://facebook.github.io/react/
https://facebook.github.io/react-native/
React introduced a concept known as the virtual DOM to web developers. Unlike all the libraries and frameworks before it, React maintains a DOM of its own. As changes are made to the virtual DOM, React batches the changes to the actual DOM as it sees best.
return (
  <div>
    <h2>Names</h2>
    ...
  </div>
);
If you haven't seen JSX before, it will likely look strange. It isn't uncommon to experience JSX shock until you start to understand it. After that, it all makes sense.
https://github.com/Matt-Esch/virtual-dom
https://github.com/paldepind/snabbdom
https://facebook.github.io/react/docs/top-level-api.html
https://facebook.github.io/jsx/
Cory House goes into more detail about the shock. Briefly summarized, JSX gives us a level of validation we haven't encountered earlier. It takes a while to grasp, but once you get it, it's hard to go back.
Note that render() must return a single node. Returning multiple won't work!
Solutions such as preact and react-lite allow you to reach far smaller bundle sizes
while sacrificing some functionality. If you are size conscious, consider checking out these
solutions.
The interesting side benefit of React's approach is that it doesn't depend on the DOM. In
fact, React can use other targets, such as mobile, canvas, or terminal. The DOM just
happens to be the most relevant one for web developers.
https://medium.com/@housecor/react-s-jsx-the-other-side-of-the-coin-2ace7ab62b98
https://facebook.github.io/react/tips/maximum-number-of-jsx-root-nodes.html
https://github.com/dominictarr/hyperscript
https://www.npmjs.com/package/hyperscript-helpers
https://developit.github.io/preact/
https://github.com/Lucifier129/react-lite
https://facebook.github.io/react-native/
https://github.com/Flipboard/react-canvas
https://github.com/Yomguithereal/react-blessed
There is a semantic difference between React components, such as the one above, and React
elements. In the example, each of those JSX nodes would be converted into an element. In short,
components can have state whereas elements are simpler by nature. They are just pure
objects. Dan Abramov goes into further detail in a blog post of his.
3.2 Babel
Babel
Babel has made a big impact on the community. It allows us to use features from the future of
JavaScript. It will transform your futuristic code to a format browsers understand. You can even use
it to develop your own language features. Babel's built-in JSX support will come in handy here.
Babel provides support for certain experimental features from ES7 beyond standard ES6. Some of
these might make it to the core language while some might be dropped altogether. The language
proposals have been categorized within stages:
Stage 0 - Strawman
Stage 1 - Proposal
https://facebook.github.io/react/blog/2015/12/18/react-components-elements-and-instances.html
https://babeljs.io/
https://babeljs.io/docs/usage/experimental/
Stage 2 - Draft
Stage 3 - Candidate
Stage 4 - Finished
I would be very careful with stage 0 features. The problem is that if the feature changes or gets
removed you will end up with broken code and will need to rewrite it. In smaller experimental
projects it may be worth the risk, though.
You can try out Babel online to see what kind of code it generates.
Configuring babel-loader
You can use Babel with Webpack easily through babel-loader. It takes our ES6 module definition based code and turns it into ES5 bundles. Install babel-loader with:

npm i babel-loader babel-core --save-dev

babel-core contains the core logic of Babel, so we need to install that as well.
To make this work, we need to add a loader declaration for babel-loader to the loaders section of
the configuration. It matches against both .js and .jsx using a regular expression (/\.jsx?$/).
To keep everything performant we should restrict the loader to operate within the ./app directory. This way it won't traverse node_modules. An alternative would be to set up an exclude rule against node_modules explicitly. I find it more useful to use include instead as that's more explicit. You never
know what files might be in the structure after all.
Here's the relevant configuration we need to make Babel work:
webpack.config.js
https://babeljs.io/repl/
https://www.npmjs.com/package/babel-loader
...
const common = {
  entry: {
    app: PATHS.app
  },
  // Add resolve.extensions.
  // '' is needed to allow imports without an extension.
  // Note the .'s before extensions as it will fail to match without!!!
  resolve: {
    extensions: ['', '.js', '.jsx']
  },
  output: {
    path: PATHS.build,
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /\.css$/,
        loaders: ['style', 'css'],
        include: PATHS.app
      },
      // Set up jsx. This accepts js too thanks to RegExp
      {
        test: /\.jsx?$/,
        // Enable caching for improved performance during development
        // It uses default OS directory by default. If you need something
        // more custom, pass a path to it. I.e., babel?cacheDirectory=<path>
        loaders: ['babel?cacheDirectory'],
        // Parse only app files! Without this it will go through entire project.
        // In addition to being slow, that will most likely result in an error.
        include: PATHS.app
      }
    ]
  }
};
...
Note that the resolve.extensions setting will allow you to refer to JSX files without an extension. I'll be using the extension for clarity, but you can omit it if you want.
As resolve.extensions gets evaluated from left to right, we can use it to control which
code gets loaded for given configuration. For instance, you could have .web.js to define
web specific parts and then have something like ['', '.web.js', '.js', '.jsx']. If a
web version of the file is found, Webpack would use that instead of the default.
Setting Up .babelrc
Also, we are going to need a .babelrc. You could pass Babel settings through Webpack (i.e.,
babel?presets[]=react,presets[]=es2015), but then it would be just for Webpack only. That's
why we are going to push our Babel settings to this specific dotfile. The same idea applies for other
tools, such as ESLint.
Babel 6 relies on plugins. There are two types of plugins: syntax and transform. The former allow
Babel to parse additional syntax whereas the latter apply transformations. This way the code that is
using future syntax can get transformed back to JavaScript older environments can understand.
To make it easier to consume plugins, Babel supports the concept of presets. Each preset comes with a set of plugins so you don't have to wire them up separately. In this case we'll be relying on ES2015 and React presets:

npm i babel-preset-es2015 babel-preset-react --save-dev
Instead of typing it all out, we could use brace expansion. Example: npm i
babel-preset-{es2015,react} -D. -D equals --save-dev as you might remember.
In addition, we'll be enabling a couple of custom features to make the project more convenient to
develop:
Property initializers - Example: renderNote = (note) => {. This binds the renderNote
method to instances automatically. The feature makes more sense as we get to use it.
Decorators - Example: @DragDropContext(HTML5Backend). These annotations allow us to
attach functionality to classes and their methods.
Object rest/spread - Example: const {a, b, props} = this.props. This syntax allows us to
easily extract specific properties from an object.
https://babeljs.io/docs/usage/babelrc/
https://github.com/jeffmo/es-class-static-properties-and-fields
https://github.com/wycats/javascript-decorators
https://github.com/sebmarkbage/ecmascript-rest-spread
In order to make it easier to set up the features, I created a specific preset containing them. It
also contains babel-plugin-transform-object-assign and babel-plugin-array-includes plugins. The
former allows us to use Object.assign while the latter provides Array.includes without having to
worry about shimming these for older environments.
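As a quick illustration of what those two provide:

// Object.assign copies the properties of the source objects to the target
const defaults = {completed: false};
const note = Object.assign({}, defaults, {task: 'Learn Webpack'});

// Array.prototype.includes checks whether an array contains a value
const hasTask = ['task', 'id'].includes('task'); // true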
A preset is simply an npm module exporting Babel configuration. Maintaining presets like this can
be useful especially if you want to share the same set of functionality across multiple projects. Get the preset installed:

npm i babel-preset-survivejs-kanban --save-dev
.babelrc

{
"presets": [
"es2015",
"react",
"survivejs-kanban"
]
}
Babel provides stage specific presets. It is clearer to rely directly on any custom features you might
want to use. This documents your project well and keeps it maintainable. You could even drop
babel-preset-es2015 and enable the features you need one by one. There are other possible .babelrc
options beyond the ones covered here.
If you don't like to maintain a .babelrc file, another alternative is to write the configuration below the babel field of package.json. Babel will pick it up from there.
https://github.com/survivejs/babel-preset-survivejs-kanban
https://www.npmjs.com/package/babel-plugin-transform-object-assign
https://www.npmjs.com/package/babel-plugin-array-includes
https://www.npmjs.com/package/object-assign
https://www.npmjs.com/package/object.assign
https://babeljs.io/docs/usage/options/
module.exports = {
  plugins: [
    require('babel-plugin-syntax-class-properties'),
    require('babel-plugin-syntax-decorators'),
    require('babel-plugin-syntax-object-rest-spread'),
    // ... the matching transform plugins and the extra ones go here ...
  ]
};
In this case we're pulling specific plugins into our preset. You could pull the plugins directly into your
.babelrc through the plugins field. That can be handy if you happen to need just one or two for
some reason. When the configuration begins to grow, consider extracting it to a preset. You could
also define presets field here if you want to bring other presets to your project through your preset.
Assuming we named our package as babel-preset-survivejs-kanban, we could then install it to our
project as above and connect it with Babel configuration. Note the babel-preset prefix. The great
advantage of developing a package like this is that it allows us to maintain shared presets across
multiple, similar projects.
The Authoring Packages chapter goes into greater detail when it comes to npm and dealing with
packages. To make it easier for other people to find your preset, consider including babel-preset in your package keywords.
For this to work, you will need to have babel-register installed to your project. Webpack relies
internally on interpret to make this work.
{
  test: /\.jsx?$/,
  loaders: [
    'babel?cacheDirectory,presets[]=react,presets[]=es2015,presets[]=survivejs-kanban'
  ],
  include: PATHS.app
}
Given passing a query string like this isn't particularly readable, another way is to use the
combination of loader and query fields:
{
test: /\.jsx?$/,
loader: 'babel',
query: {
cacheDirectory: true,
presets: ['react', 'es2015', 'survivejs-kanban']
},
include: PATHS.app
}
This approach becomes problematic with multiple loaders since it's limited to just one loader at a time. If you want to use this format with multiple loaders, you need separate declarations.

It's a good idea to keep in mind that Webpack loaders are always evaluated from right to left and
from bottom to top (separate definitions). The following two declarations are equal based on this
rule:
https://www.npmjs.com/package/babel-register
https://www.npmjs.com/package/interpret
{
test: /\.css$/,
loaders: ['style', 'css'],
},
{
test: /\.css$/,
loaders: ['style'],
},
{
test: /\.css$/,
loaders: ['css'],
},
The loaders of the latter definition could be rewritten in the query format discussed above after performing a split like this.
Another way to deal with query parameters would be to rely on the Node.js querystring module and stringify structures through it so they can be passed through a loaders definition.
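As a rough sketch of the idea, mirroring the Babel options used above (the identity encoder keeps the [] suffixes from getting URL encoded):

const qs = require('querystring');

const query = qs.stringify(
  {
    cacheDirectory: true,
    'presets[]': ['react', 'es2015', 'survivejs-kanban']
  },
  ',', '=',
  {encodeURIComponent: s => s}
);

// -> babel?cacheDirectory=true,presets[]=react,presets[]=es2015,presets[]=survivejs-kanban
const babelLoader = 'babel?' + query;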
You can import portions from react using the syntax import React, {Component} from 'react';. Then you can do class App extends Component. It is important that you import React as well, because the JSX will get converted to React.createElement calls. I prefer import React from 'react' simply because it's easier to grep for React.Component than Component, given it's more unique.
https://nodejs.org/api/querystring.html
It may be worth your while to install the React Developer Tools extension to your browser. Currently, Chrome and Firefox are supported. It will make it easier to understand what's going on while developing. Note that the developer tools won't work if you are using the iframe mode (/webpack-dev-server/) of webpack-dev-server!
Setting Up Note
We also need to define the Note component. In this case, we will just want to show some text like Learn Webpack. Hello world would work too if you are into clichés. Given the component is so simple, we can use React's function based component definition:
app/components/Note.jsx
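The listing itself isn't reproduced here. Based on the description, a minimal version could be as small as this (the wrapping element is an assumption):

import React from 'react';

export default () => <div>Learn Webpack</div>;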
Even though we aren't referring to React directly through code here, it is good to remember that the JSX will get transformed into calls going through it. Hence, if you remove the import statement, the code will break. A Babel plugin known as babel-plugin-react-require is able to generate the imports for you automatically if you prefer to avoid writing them yourself.
Note that we're using the jsx extension here. It helps us to tell modules using JSX syntax apart from regular ones. It is not absolutely necessary, but it is a good convention to have.
Just returning Learn Webpack from the component won't work. You will have to wrap it in an element like this. Sometimes it can be convenient to just return null in case you don't want to render anything.
https://github.com/facebook/react-devtools
https://www.npmjs.com/package/babel-plugin-react-require
import './main.css';
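Only the CSS import of the entry point is shown above. Assuming the conventional setup used in this chapter, the whole app/index.jsx could look roughly like this (the element id matches the one discussed below):

import './main.css';

import React from 'react';
import ReactDOM from 'react-dom';
import App from './components/App';

ReactDOM.render(<App />, document.getElementById('app'));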
If you are running the development server, you should see something familiar at localhost:8080:
Hello React
If you try to modify your React components, you can see that Webpack forces a full refresh. This is something we are going to fix next by enabling hot loading.
Before moving on, this is a good time to get rid of the old component.js file in the app root directory. We won't be needing that anymore.
If you aren't seeing the correct result, make sure your index.html has <div id="app"></div> within its body. Alternatively, you can create the element into which to render through the DOM API itself. I prefer to handle this on the template level, though.
Avoid rendering directly to document.body. Relying on it can cause strange problems and React will give you a warning about it.
We have already implemented the development server side setup for this. The problem is that we
are missing a part that allows the client portion to catch the changes and patch the code. Some setup
is needed in order to add it to our project.
babel-plugin-react-transform allows us to instrument React components in various ways. Hot loading is one of these. It can be enabled through react-transform-hmr.
react-transform-hmr will swap React components one by one as they change, without forcing a full refresh. Given it just replaces methods, it won't catch every possible change. This includes changes made to class constructors. There will be times when you will need to force a refresh, but it will work most of the time.
A Babel preset known as babel-preset-react-hmre will keep our setup simple. It comes with reasonable defaults and cuts down the amount of configuration you need to maintain. Install it through:
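The command isn't reproduced above; it amounts to something like npm i babel-preset-react-hmre --save-dev.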
We also need to make Babel aware of HMR. First, we should pass the target environment to Babel through our Webpack configuration. This allows us to control environment specific functionality through .babelrc. In this case we want to enable HMR just for development. If you wanted to enable some specific plugins for a production build, you would use the same idea.
An easy way to control .babelrc is to set the BABEL_ENV environment variable to the npm lifecycle event. This gives us a predictable mapping between package.json and .babelrc:
webpack.config.js
...
process.env.BABEL_ENV = TARGET;
const common = {
...
};
...
In addition, we need to expand our Babel configuration to include the plugin we need during development. This is where that BABEL_ENV comes in. Babel determines the value of env by checking BABEL_ENV first, falling back to NODE_ENV, and finally defaulting to development:
.babelrc
{
"presets": [
"es2015",
"react",
"survivejs-kanban"
]
],
"env": {
"start": {
"presets": [
"react-hmre"
]
}
}
}
Try executing npm start again and modifying the component. Note what doesn't happen this time. There's no flash! It might take a while to sink in, but in practice, this is a powerful feature. Small things like this add up and make you more productive.
Note that sourcemaps won't get updated in Chrome and Firefox due to browser level bugs! This may change in the future as the browsers get patched, though.
capabilities. In later chapters we will go through various alternative approaches. They allow you
to reach roughly equivalent results as you can achieve with mixins. Often a decorator is all you
need.
Also, ES6 class based components won't bind their methods to the this context by default. This is the reason why it can be a good practice to bind the context in the component constructor. Another way to solve the problem is to use property initializers. We'll be using that approach, as it cuts down the amount of code nicely and makes it easier to follow what's going on.
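As a small illustrative sketch of the difference (the component and handler names here are made up for the example):

import React from 'react';

class BindAtConstructor extends React.Component {
  constructor(props) {
    super(props);

    // Bind explicitly so that `this` points at the component instance
    this.handleClick = this.handleClick.bind(this);
  }
  handleClick() {
    console.log('clicked', this.props);
  }
  render() {
    return <button onClick={this.handleClick}>Click</button>;
  }
}

class BindThroughInitializer extends React.Component {
  // A property initializer keeps `this` bound without constructor boilerplate
  handleClick = () => {
    console.log('clicked', this.props);
  };
  render() {
    return <button onClick={this.handleClick}>Click</button>;
  }
}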
The class based approach decreases the amount of concepts you have to worry about. constructor
helps to keep things simpler than in the React.createClass based approach. There you need to
define separate methods to achieve the same result.
3.8 Conclusion
You should understand how to set up React with Webpack now. Hot loading is one of those features
that sets Webpack apart. Now that we have a good development environment, we can focus on React
development. In the next chapter, you will see how to implement a little note-taking application. That
will be improved in the subsequent chapters into a full blown Kanban board.
II Developing a Kanban Application
React, even though a young library, has made a significant impact on the front-end development
community. It introduced concepts, such as the virtual DOM, and made the community understand
the power of components. Its component oriented design approach works well for the web. But React
isn't limited to the web. You can use it to develop mobile and even terminal user interfaces.
In this part, we will implement a small Kanban application. During the process, you will learn the
basics of React. As React is just a view library we will also discuss supporting technology. Alt, a Flux
framework, provides a good companion to React and allows you to keep your components clean.
You will also see how to use React DnD to add drag and drop functionality to the Kanban board.
Finally, you will learn how to create a production grade build using Webpack.
4. Implementing a Basic Note
Application
Given we have a nice development setup now, we can actually get some work done. Our goal here is
to end up with a crude note-taking application. It will have basic manipulation operations. We will
grow our application from scratch and get into some trouble. This way you will understand why
architectures, such as Flux, are needed.
Hot loading isn't always foolproof. Given it operates by swapping methods dynamically, it won't catch every change. This is problematic with property initializers and bind. This means you may need to force a manual refresh at the browser for some changes to show up!
[
{
id: '4a068c42-75b2-4ae2-bd0d-284b4abbb8f0',
task: 'Learn Webpack'
},
{
id: '4e81fc6e-bfb6-419b-93e5-0242fb6f3f6a',
task: 'Learn React'
},
{
id: '11bbffc8-5891-4b45-b9ea-5c99aadf870f',
task: 'Do laundry'
}
];
Each note is an object which will contain the data we need, including an id and a task we want to
perform. Later on it is possible to extend this data definition to include things like the note color or
the owner.
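The ids above are written out by hand. In practice it is more convenient to generate them; later in this chapter the node-uuid package is used for exactly that, roughly like this:

import uuid from 'node-uuid';

const note = {
  // v4 generates a random RFC 4122 identifier like the ones listed above
  id: uuid.v4(),
  task: 'Learn Webpack'
};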
If you are interested in the math behind this, check out the calculations at Wikipedia for details. You'll see that the possibility of collisions is truly minuscule and not something we have to worry about.
Setting Up App
Now that we know how to deal with ids and understand what kind of data model we want, we need to connect our data model with App. The simplest way to achieve that is to push the data directly to render() for now. This won't be efficient, but it will allow us to get started. The implementation below shows how this works out in React terms:
app/components/App.jsx
https://www.ietf.org/rfc/rfc4122.txt
https://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates
id: uuid.v4(),
task: 'Learn React'
},
{
id: uuid.v4(),
task: 'Do laundry'
}
];
We are using various important features of React in the snippet above. Understanding them is invaluable. I have annotated the important parts below:
<ul>{notes.map(note => ...)}</ul> - The {}s allow us to mix JavaScript syntax within JSX. map returns a list of li elements for React to render.
<li key={note.id}>{note.task}</li> - In order to tell React in which order to render the elements, we use the key property. It is important that this is unique or else React won't be able to figure out the correct order in which to render. If not set, React will give a warning. See Multiple Components for more information.
If you run the application now, you can see a list of notes. It's not particularly pretty, but it's a start:
A list of notes
https://facebook.github.io/react/docs/multiple-components.html
If you want to examine your application further, it can be useful to attach a debugger;
statement to the place you want to study. It has to be placed on a line that will get executed
for the browser to pick it up! The statement will cause the browser debugging tools to
trigger and allow you to study the current call stack and scope. You can attach breakpoints
like this through the browser, but this is a good alternative.
...
this.state = {
notes: [
{
id: uuid.v4(),
task: 'Learn Webpack'
},
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/debugger
{
id: uuid.v4(),
task: 'Learn React'
},
{
id: uuid.v4(),
task: 'Do laundry'
}
]
};
}
render() {
const notes = [
{
id: uuid.v4(),
task: 'Learn Webpack'
},
{
id: uuid.v4(),
task: 'Learn React'
},
{
id: uuid.v4(),
task: 'Do laundry'
}
];
const notes = this.state.notes;
...
}
}
After this change and refreshing the browser, our application works the same way as before. We
have gained something in return, though. We can now begin to alter the state through setState.
In the earlier versions of React, you could achieve the same result with getInitialState.
We're passing props to super by convention. If you don't pass it, this.props won't get set! Calling super invokes the same method of the parent class; you see this kind of usage often in object oriented programming.
...
return (
<div>
<button onClick={this.addNote}>+</button>
<ul>{notes.map(note =>
<li key={note.id}>{note.task}</li>
)}</ul>
</div>
);
}
// We are using an experimental feature known as property
// initializer here. It allows us to bind the method's `this`
// to point at our *App* instance.
//
// Alternatively we could `bind` at `constructor` using
// a line, such as this.addNote = this.addNote.bind(this);
addNote = () => {
// It would be possible to write this in an imperative style.
// I.e., through `this.state.notes.push` and then
// `this.setState({notes: this.state.notes})` to commit.
//
// I tend to favor the functional style whenever that makes sense.
// Even though it might take more code sometimes, I feel
// the benefits (easy to reason about, no side effects)
// more than make up for it.
this.setState({
notes: this.state.notes.concat([{
id: uuid.v4(),
task: 'New task'
}])
});
};
If we were operating with a back-end, we would trigger a query here and capture the id from the response. For now it's enough to just generate an entry and a custom id.
If you refresh the browser and click the plus button now, you should see a new item in the list:
We are still missing two crucial features: editing and deletion. Before moving onto these, it's a good idea to make room for them by expanding our component hierarchy. It will become easier to deal with the features after that. Working with React is like this. You develop a component for a while until you realize it could be split up further.
Using autobind-decorator would be a valid alternative for property initializers. In this case
we would use @autobind annotation either on class or method level. To learn more about
decorators, read Understanding Decorators.
App - App retains application state and deals with the high level logic.
Notes - Notes acts as an intermediate wrapper in between and renders individual Note
components.
Note - Note is the workhorse of our application. Editing and deletion will be triggered here.
That logic will cascade to App through wiring in between.
Later on, we can expand the hierarchy to a full Kanban by introducing the concepts of Lane and Lanes to it. These two concepts fit between App and Notes. We don't need to care about this just yet, though.
One natural way to model component hierarchies is to draw out your application on paper.
You will begin to see entities that will map to components. This allows you to identify
especially presentational components that focus on displaying data. You have container
components that connect with data on a higher level. Dan Abramov discusses this in his
Medium post known as Presentational and Container Components.
You can certainly develop components organically. Once they begin to feel too big, refactor
and extract the components you identify. Sometimes finding the right composition may
take some time and patience. Component design is a skill to learn and master.
https://www.npmjs.com/package/autobind-decorator
https://medium.com/@dan_abramov/smart-and-dumb-components-7ca2f9a7c7d0#.q8c68v3ff
Extracting Note
A good first step towards the hierarchy we want is to extract Note. Note is a component which will
need to receive task as a prop and render it. In terms of JSX this would look like <Note task="task
goes here" />.
In addition to state, props are another concept you will be using a lot. They describe the external
interface of a component. You can annotate them as discussed in the Typing with React chapter. To
keep things simple, we are skipping propType annotations here.
A function based component will receive props as its first parameter. We can extract specific props from it through the ES6 destructuring syntax. A function based component is essentially render() by itself. Such components are far more limited than class based ones, but they are perfect for simple presentational purposes, such as this. To tie these ideas together, we can end up with a component definition such as this:
app/components/Note.jsx
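The listing isn't shown above. A minimal sketch consistent with the description, a function based component destructuring task from its props, would be:

import React from 'react';

export default ({task}) => <div>{task}</div>;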
To understand the destructuring syntax in greater detail, check out the Language Features
appendix.
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment#Object_destructuring
return (
<div>
<button onClick={this.addNote}>+</button>
<ul>{notes.map(note =>
<li key={note.id}>{note.task}</li>
<li key={note.id}>
<Note task={note.task} />
</li>
)}</ul>
</div>
);
}
...
}
The application should still look the same. To achieve the structure we are after, we should perform
one more tweak and extract Notes.
Extracting Notes
Extracting Notes is a similar operation. We need to understand what portion of App belongs to the
component and then write a definition for it. It is the same idea as for Note earlier:
app/components/Notes.jsx
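The listing isn't reproduced above. A sketch based on the description, moving the list rendering out of App, could look like this:

import React from 'react';
import Note from './Note';

export default ({notes}) => {
  return (
    <ul>{notes.map(note =>
      <li key={note.id}>
        <Note task={note.task} />
      </li>
    )}</ul>
  );
}

With the component in place, App can drop its inline list and render Notes instead, which is what the next listing does: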
return (
<div>
<button onClick={this.addNote}>+</button>
<ul>{notes.map(note =>
<li key={note.id}>
<Note task={note.task} />
</li>
)}</ul>
<Notes notes={notes} />
</div>
);
}
addNote = () => {
this.setState({
notes: this.state.notes.concat([{
id: uuid.v4(),
task: 'New task'
}])
});
};
}
The application should still behave the same way. Structurally we are far better off than earlier,
though. Now we can begin to worry about adding new functionality to the system.
To support editing, Note will need to track its editing state somehow. In addition, we need to communicate that the value (task) has changed so that App knows to update its state. Resolving these two problems gives us something functional.
return this.renderNote();
}
renderEdit = () => {
// We deal with blur and input handlers here. These map to DOM events.
// We also set selection to input end using a callback at a ref.
// It gets triggered after the component is mounted.
//
// We could also use a string reference (i.e., `ref="input"`) and
// then refer to the element in question later in the code. This
// would allow us to use the underlying DOM API through
// this.refs.input. This can be useful when combined with
// React lifecycle hooks.
return <input type="text"
ref={
(e) => e ? e.selectionStart = this.props.task.length : null
}
autoFocus={true}
defaultValue={this.props.task}
onBlur={this.finishEdit}
onKeyPress={this.checkEnter} />;
};
renderNote = () => {
// If the user clicks a normal note, trigger editing logic.
return <div onClick={this.edit}>{this.props.task}</div>;
};
edit = () => {
// Enter edit mode.
this.setState({
editing: true
});
};
checkEnter = (e) => {
// The user hit *enter*, let's finish up.
if(e.key === 'Enter') {
this.finishEdit(e);
}
};
finishEdit = (e) => {
// `Note` will trigger an optional `onEdit` callback once it
// has a new value. We will use this to communicate the change to
// `App`.
//
// A smarter way to deal with the default value would be to set
// it through `defaultProps`.
//
// See the *Typing with React* chapter for more information.
const value = e.target.value;
if(this.props.onEdit) {
this.props.onEdit(value);

this.setState({
editing: false
});
}
};
}
If you try to edit a Note now, you should see an input and be able to edit the data. Given we haven't set up an onEdit handler, it doesn't do anything useful yet, though. We'll need to capture the edited data next and update App state so that the code works.
It can be a good idea to name your callbacks using an on prefix. This will allow you to distinguish them from other props and keep your code a little tidier.
onEdit flow
As onEdit is defined on the App level, we'll need to pass the onEdit handler through Notes. So for the stub to work, changes in two files are needed. Here's what it should look like for App:
app/components/App.jsx
return (
<div>
<button onClick={this.addNote}>+</button>
<Notes notes={notes} />
<Notes notes={notes} onEdit={this.editNote} />
</div>
);
}
addNote = () => {
...
};
editNote = (id, task) => {
// Don't modify if trying to set an empty value
if(!task.trim()) {
return;
}

const notes = this.state.notes.map(note => {
if(note.id === id) {
note.task = task;
}

return note;
});
this.setState({notes});
};
}
To make the scheme work as designed, we need to modify Notes accordingly. It will bind the id of the note in question so that, when the callback is triggered, the remaining parameter receives the edited value and both get passed to App:
app/components/Notes.jsx
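The listing isn't included above. Extending the earlier sketch, the binding could look roughly like this:

import React from 'react';
import Note from './Note';

export default ({notes, onEdit}) => {
  return (
    <ul>{notes.map(note =>
      <li key={note.id}>
        <Note
          task={note.task}
          onEdit={onEdit.bind(null, note.id)} />
      </li>
    )}</ul>
  );
}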
If you refresh and try to edit a Note now, the modification should stick. The same idea can be used to implement a lot of functionality and this is a pattern you will see a lot.
The current design isn't flawless. What if we wanted to allow newly created notes to be editable straight from the start? Given Note encapsulates this state, we don't have simple means to access it from the outside. The current solution is enough for now. We'll address this issue properly in the From Notes to Kanban chapter and extract the state there.
Edited a note
As before, we'll need to define some logic on the App level. Deleting a note can be achieved by first looking for the Note to remove based on id. After we know which Note to remove, we can construct a new state without it.
Just like earlier, it will take three changes. We need to define logic at the App level, bind the id at Notes, and then finally trigger the logic at Note through its user interface. To get started, the App logic can be defined in terms of filter:
app/components/App.jsx
return (
<div>
<button onClick={this.addNote}>+</button>
<Notes notes={notes} onEdit={this.editNote} />
<Notes notes={notes}
onEdit={this.editNote}
onDelete={this.deleteNote} />
</div>
);
}
deleteNote = (id, e) => {
// Avoid bubbling to edit
e.stopPropagation();
this.setState({
notes: this.state.notes.filter(note => note.id !== id)
});
};
...
}
app/components/Notes.jsx
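Again the listing isn't reproduced; the change amounts to binding onDelete the same way as onEdit, roughly as follows:

import React from 'react';
import Note from './Note';

export default ({notes, onEdit, onDelete}) => {
  return (
    <ul>{notes.map(note =>
      <li key={note.id}>
        <Note
          task={note.task}
          onEdit={onEdit.bind(null, note.id)}
          onDelete={onDelete.bind(null, note.id)} />
      </li>
    )}</ul>
  );
}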
Finally, we need to attach a delete button to each Note and then trigger onDelete when those are
clicked:
app/components/Note.jsx
...
return (
<div onClick={this.edit}>
<span>{this.props.task}</span>
{onDelete ? this.renderDelete() : null }
</div>
);
};
renderDelete = () => {
return <button onClick={this.props.onDelete}>x</button>;
};
...
}
After these changes and refreshing you should be able to delete notes as you like.
Deleted a note
You may need to trigger a refresh at the browser to make these changes show up. Hit
CTRL/CMD-R.
return (
<div>
<button onClick={this.addNote}>+</button>
<button className="add-note" onClick={this.addNote}>+</button>
<Notes notes={notes}
onEdit={this.editNote}
onDelete={this.deleteNote} />
</div>
);
}
...
}
app/components/Notes.jsx
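The Notes changes aren't shown above; they boil down to attaching classes to the list and its items so the styles below have something to target. A sketch:

import React from 'react';
import Note from './Note';

export default ({notes, onEdit, onDelete}) => {
  return (
    <ul className="notes">{notes.map(note =>
      <li className="note" key={note.id}>
        <Note
          task={note.task}
          onEdit={onEdit.bind(null, note.id)}
          onDelete={onDelete.bind(null, note.id)} />
      </li>
    )}</ul>
  );
}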
app/components/Note.jsx
return (
<div onClick={this.edit}>
<span>{this.props.task}</span>
<span className="task">{this.props.task}</span>
{onDelete ? this.renderDelete() : null }
</div>
);
};
renderDelete = () => {
return <button onClick={this.props.onDelete}>x</button>;
return <button
className="delete-note"
onClick={this.props.onDelete}>x</button>;
};
...
}
Styling Components
The first step is to get rid of that horrible serif font.
app/main.css
body {
background: cornsilk;
font-family: sans-serif;
}
Sans serif
A good next step would be to constrain the Notes container a little and get rid of those list bullets.
app/main.css
...
.add-note {
background-color: #fdfdfd;
border: 1px solid #ccc;
}
.notes {
margin: 0.5em;
padding-left: 0;
max-width: 10em;
list-style: none;
}
No bullets
...
.note {
margin-bottom: 0.5em;
padding: 0.5em;
background-color: #fdfdfd;
box-shadow: 0 0 0.3em 0.03em rgba(0, 0, 0, 0.3);
}
.note:hover {
box-shadow: 0 0 0.3em 0.03em rgba(0, 0, 0, 0.7);
transition: 0.6s;
}
.note .task {
/* force to use inline-block so that it gets minimum height */
display: inline-block;
}
Styling notes
I animated the Note shadow in the process. This way the user gets a better indication of which Note is being hovered upon. This won't work on touch based interfaces, but it's a nice touch for the desktop.
Finally, we should make those delete buttons stand out less. One way to achieve this is to hide them by default and show them on hover. The gotcha is that delete won't work on touch, but we can live with that.
app/main.css
...
.note .delete-note {
float: right;
padding: 0;
background-color: #fdfdfd;
border: none;
cursor: pointer;
visibility: hidden;
}
.note:hover .delete-note {
visibility: visible;
}
Delete on hover
After these few steps, we have an application that looks passable. We'll be improving its appearance as we add functionality, but at least it's somewhat visually appealing.
componentWillMount() gets triggered once before any rendering. One way to use it would be
to load data asynchronously there and force rendering through setState.
componentDidMount() gets triggered after initial rendering. You have access to the DOM here.
You could use this hook to wrap a jQuery plugin within a component, for instance.
componentWillReceiveProps(object nextProps) triggers when the component receives new
props. You could, for instance, modify your component state based on the received props.
shouldComponentUpdate(object nextProps, object nextState) allows you to optimize the rendering. If you check the props and state and see that there's no need to update, return false.
componentWillUpdate(object nextProps, object nextState) gets triggered after shouldComponentUpdate and before render(). It is not possible to use setState here, but you can set class properties, for instance. The official documentation goes into greater detail. In short, this is where immutable data structures, such as Immutable.js, come in handy thanks to their easy equality checks.
componentDidUpdate() is triggered after rendering. You can modify the DOM here. This can
be useful for adapting other code to work with React.
componentWillUnmount() is triggered just before a component is unmounted from the DOM.
This is the ideal place to perform cleanup (e.g., remove running timers, custom DOM elements,
and so on).
Beyond the lifecycle hooks, there are a variety of properties and methods you should be aware of
if you are going to use React.createClass:
displayName - It is preferable to set displayName as that will improve debug information. For
ES6 classes this is derived automatically based on the class name.
getInitialState() - In class based approach the same can be achieved through constructor.
getDefaultProps() - In classes you can set these in constructor.
mixins - mixins contains an array of mixins to apply to components.
statics - statics contains static properties and methods for a component. In ES6 you can assign them to the class as below:
https://facebook.github.io/react/docs/advanced-performance.html#shouldcomponentupdate-in-action
https://facebook.github.io/immutable-js/
https://facebook.github.io/react/docs/component-specs.html
class Note {
render() {
...
}
}
Note.willTransitionTo = () => {...};
Some libraries, such as React DnD, rely on static methods to provide transition hooks. They allow you
to control what happens when a component is shown or hidden. By definition statics are available
through the class itself.
Both class and React.createClass based components allow you to document the interface of your
component using propTypes. To dig deeper, read the Typing with React chapter.
Both support render(), the workhorse of React. In the function based definition, render() is the function itself. render() simply describes what the component should look like. In case you don't want to render anything, return either null or false.
React provides a feature known as refs so you can perform operations on React elements through the DOM. This is an escape hatch designed for those cases where React itself doesn't cut it. Performing measurements is a good example. Refs need to be attached to stateful components in order to work.
4.10 Conclusion
You can get quite far with just vanilla React. The problem is that we are starting to mix data related concerns and logic with our view components. We'll improve the architecture of our application by introducing Flux to it.
5. React and Flux
You can get pretty far by keeping everything in components. Eventually, that will become painful, though. The Flux application architecture helps to bring clarity to our React applications. It's not the only solution, but it's a decent starting point.
Flux will allow us to separate data and application state from our views. This helps us to keep them
clean and the application maintainable. Flux was designed with large teams in mind. As a result, you
might find it quite verbose. This comes with great advantages, though, as it can be straightforward
to work with.
So far, we've been dealing only with views. Flux architecture introduces a couple of new concepts to the mix. These are actions, dispatcher, and stores. Flux implements unidirectional flow in contrast to popular frameworks, such as Angular or Ember. Even though two-directional bindings can be convenient, they come with a cost. It can be hard to deduce what's going on and why.
Dispatcher
When we trigger an action, the dispatcher will get notified. The dispatcher will be able to deal with
possible dependencies between stores. It is possible that a certain action needs to happen before
another. The dispatcher allows us to achieve this.
https://facebook.github.io/flux/docs/overview.html
At the simplest level, actions can just pass the message to the dispatcher as is. They can also trigger
asynchronous queries and hit the dispatcher based on the result eventually. This allows us to deal
with received data and possible errors.
Once the dispatcher has dealt with an action, the stores listening to it get triggered. In our case,
NoteStore gets notified. As a result, it will be able to update its internal state. After doing this, it
will notify possible listeners of the new state.
Flux Dataflow
This completes the basic unidirectional, yet linear, process flow of Flux. Usually, though, the unidirectional process has a cyclical flow and it doesn't necessarily end. The following diagram illustrates a more common flow. It is the same idea again, but with the addition of a returning cycle. Eventually, the components depending on our store data become refreshed through this looping process.
This sounds like a lot of steps for achieving something as simple as creating a new Note. The approach does come with its benefits. Given the flow is always in a single direction, it is easy to trace and debug. If there's something wrong, it's somewhere within the cycle.
Advantages of Flux
Even though this sounds a little complicated, the arrangement gives our application flexibility. We
can, for instance, implement API communication, caching, and i18n outside of our views. This way
they stay clean of logic while keeping the application easier to understand.
Implementing Flux architecture in your application will actually increase the amount of code somewhat. It is important to understand that minimizing the amount of code written isn't the goal of Flux. It has been designed to allow productivity across larger teams. You could say that explicit is better than implicit.
Redux has taken the core ideas of Flux and pushed them into a tiny form (2 kB). Despite this, it's quite a powerful approach and worth checking out. There's a Redux implementation of the Kanban board. It can be interesting to compare it to the Alt one.
Alt
In this chapter, well be using a library known as Alt. It is a flexible, full-featured implementation
that has been designed with universal (isomorphic) rendering in mind.
In Alt, you'll deal with actions and stores. The dispatcher is hidden, but you will still have access to it if needed. Compared to other implementations, Alt hides a lot of boilerplate. There are special features that allow you to save and restore the application state. This is handy for implementing persistency and universal rendering.
With this pattern, we reuse the same instance within the whole application. To achieve this we can
push it to a module of its own and then refer to that from everywhere. Set it up as follows:
app/libs/alt.js
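The listing isn't reproduced above. A sketch of what the module might look like, with the debugging hook mentioned in the tip below left commented out (the exact alt-utils path is an assumption):

import Alt from 'alt';
//import chromeDebug from 'alt-utils/lib/chromeDebug';

const alt = new Alt();
//chromeDebug(alt);

export default alt;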
Webpack caches the modules so the next time you import Alt, it will return the same instance again.
If you aren't using npm-install-webpack-plugin, remember to install alt and the utilities we are going to need to your project through npm i alt alt-container alt-utils node-uuid -S.
There is a Chrome plugin known as alt-devtool. After it is installed, you can connect to
Alt by uncommenting the related lines above. You can use it to debug the state of your
stores, search, and travel in time.
Setting Up a Skeleton
As a first step, we can set up a skeleton for our store. We can fill in the methods we need after that.
Alt uses standard ES6 classes, so it's the same syntax as we saw earlier with React components. Here's a starting point:
app/stores/NoteStore.js
import alt from '../libs/alt';
import NoteActions from '../actions/NoteActions';

class NoteStore {
  constructor() {
    this.bindActions(NoteActions);

    this.notes = [];
  }
  create(note) {
  }
  update(updatedNote) {
  }
  delete(id) {
  }
}

export default alt.createStore(NoteStore, 'NoteStore');
We call bindActions to map each action to a method by name. After that we trigger the appropriate
logic at each method. Finally, we connect the store with Alt using alt.createStore.
Note that assigning a label to a store (NoteStore in this case) isn't required. It is a good practice, though, as it protects the code against minification. These labels become important when we persist the data.
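The NoteActions module bound above comes from an earlier step that isn't shown here. With Alt it can be generated through its generateActions helper; a minimal sketch (app/actions/NoteActions.js):

import alt from '../libs/alt';

export default alt.generateActions('create', 'update', 'delete');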
Implementing create
Compared to the earlier logic, create will generate an id for a Note automatically. This is a detail
that can be hidden within the store:
app/stores/NoteStore.js
class NoteStore {
constructor() {
...
}
create(note) {
const notes = this.notes;
note.id = uuid.v4();
this.setState({
notes: notes.concat(note)
});
}
...
}
To keep the implementation clean, we are using this.setState. It is a feature of Alt that allows us
to signify that we are going to alter the store state. Alt will signal the change to possible listeners.
Implementing update
update follows the earlier logic apart from some renaming. Most importantly we commit the new
state through this.setState:
app/stores/NoteStore.js
...
class NoteStore {
...
update(updatedNote) {
const notes = this.notes.map(note => {
if(note.id === updatedNote.id) {
// Object.assign is used to patch the note data here. It
// mutates target (first parameter). In order to avoid that,
// I use {} as its target and apply data on it.
//
// Example: {}, {a: 5, b: 3}, {a: 17} -> {a: 17, b: 3}
//
// You can pass as many objects to the method as you want.
return Object.assign({}, note, updatedNote);
}
return note;
});

this.setState({notes});
}
}
Implementing delete
delete is straightforward. Seek and destroy, as earlier, and remember to commit the change:
app/stores/NoteStore.js
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer
...
class NoteStore {
...
delete(id) {
this.setState({
notes: this.notes.filter(note => note.id !== id)
});
}
}
Instead of slicing and concatenating data, it would be possible to operate directly on it. For example
a mutable variant, such as this.notes.splice(targetId, 1) would work. We could also use
a shorthand, such as [...notes.slice(0, noteIndex), ...notes.slice(noteIndex + 1)]. The
exact solution depends on your preferences. I prefer to avoid mutable solutions (i.e., splice) myself.
It is recommended that you use setState with Alt to keep things clean and easy to understand. Manipulating this.notes directly would work, but that would obscure the intent and could become problematic at a larger scale, as mutation is difficult to debug. setState provides a nice analogue to the way React works, so it's worth using.
We have almost integrated Flux with our application now. We have a set of actions that provide an
API for manipulating Notes data. We also have a store for actual data manipulation. We are missing
one final bit, integration with our view. It will have to listen to the store and be able to trigger actions
to complete the cycle.
The current implementation is naïve in that it doesn't validate parameters in any way. It would be a very good idea to validate the object shape to avoid incidents during development. Flow based gradual typing provides one way to do this. Alternatively, you could write nice tests. That's a good idea regardless.
Our NoteStore provides two methods in particular that are going to be useful. These are NoteStore.listen and NoteStore.unlisten. They will allow views to subscribe to the state changes.
As you might remember from the earlier chapters, React provides a set of lifecycle hooks. We
can subscribe to NoteStore within our view at componentDidMount and componentWillUnmount.
By unsubscribing, we avoid possible memory leaks.
Based on these ideas we can connect App with NoteStore and NoteActions:
app/components/App.jsx
this.state = {
notes: [
{
id: uuid.v4(),
task: 'Learn Webpack'
},
{
id: uuid.v4(),
task: 'Learn React'
},
{
id: uuid.v4(),
task: 'Do laundry'
}
]
};
this.state = NoteStore.getState();
}
componentDidMount() {
NoteStore.listen(this.storeChanged);
}
componentWillUnmount() {
NoteStore.unlisten(this.storeChanged);
}
storeChanged = (state) => {
// Without a property initializer `this` wouldn't
// point at the right context because it defaults to
// `undefined` in strict mode.
this.setState(state);
};
render() {
const notes = this.state.notes;
return (
<div>
<button className="add-note" onClick={this.addNote}>+</button>
<Notes notes={notes}
onEdit={this.editNote}
onDelete={this.deleteNote} />
</div>
);
}
deleteNote = (id, e) => {
// Avoid bubbling to edit
e.stopPropagation();
this.setState({
notes: this.state.notes.filter(note => note.id !== id)
});
};
deleteNote(id, e) {
e.stopPropagation();
NoteActions.delete(id);
}
addNote = () => {
this.setState({
notes: this.state.notes.concat([{
id: uuid.v4(),
task: 'New task'
}])
});
};
addNote() {
NoteActions.create({task: 'New task'});
}
editNote = (id, task) => {
// Don't modify if trying to set an empty value
if(!task.trim()) {
return;
}
return note;
});
this.setState({notes});
};
editNote(id, task) {
// Don't modify if trying to set an empty value
if(!task.trim()) {
return;
}
NoteActions.update({id, task});
}
}
The application should work just like before now. As we alter NoteStore through actions, this leads to a cascade that causes our App state to update through setState. This in turn will cause the component to render. That's Flux's unidirectional flow in practice.
We actually have more code now than before, but that's okay. App is a little neater and it's going to be easier to develop, as we'll soon see. Most importantly, we have managed to implement the Flux architecture for our application.
1. Suppose we wanted to persist the notes within localStorage. Where would you implement
that? One approach would be to handle that at application initialization.
2. What if we had many components relying on the data? We would just consume NoteStore
and display it, however we want.
3. What if we had many, separate Note lists for different types of tasks? We could set up another
store for tracking these lists. That store could refer to actual Notes by id. Well do something
like this in the next chapter, as we generalize the approach.
This is what makes Flux a strong architecture when used with React. It isn't hard to find answers to questions like these. Even though there is more code, it is easier to reason about. Given we are dealing with a unidirectional flow, we have something that is simple to debug and test.
Understanding localStorage
localStorage has a sibling known as sessionStorage. Whereas sessionStorage loses its data when
the browser is closed, localStorage retains its data. They both share the same API as discussed
below:
storage.getItem(k) - Returns the stored string value for the given key.
storage.removeItem(k) - Removes the data matching the key.
storage.setItem(k, v) - Stores the given value using the given key.
storage.clear() - Empties the storage contents.
Note that it is convenient to operate on the API using your browser developer tools. For instance, in
Chrome you can see the state of the storages through the Resources tab. Console tab allows you to
perform direct operations on the data. You can even use storage.key and storage.key = 'value'
shorthands for quick modifications.
localStorage and sessionStorage can use up to 10 MB of data combined. Even though they are well supported, there are certain corner cases with interesting failures. These include running out of memory in Internet Explorer (fails silently) and failing altogether in Safari's private mode. It is possible to work around these glitches, though.
You can support Safari in private mode by trying to write into localStorage first. If that fails, you can fall back to an in-memory store instead, or just let the user know about the situation. See Stack Overflow for details.
https://developer.mozilla.org/en/docs/Web/API/Window/localStorage
https://developer.mozilla.org/en-US/docs/Web/API/Web_Storage_API/Using_the_Web_Storage_API
https://stackoverflow.com/questions/14555347/html5-localstorage-error-with-safari-quota-exceeded-err-dom-exception-22-an
app/libs/storage.js
export default {
get(k) {
try {
return JSON.parse(localStorage.getItem(k));
}
catch(e) {
return null;
}
},
set(k, v) {
localStorage.setItem(k, JSON.stringify(v));
}
};
The implementation could be generalized further. You could convert it into a factory (storage =>
{...}) and make it possible to swap the storage. Now we are stuck with localStorage unless we
change the code.
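A sketch of that factory variant, with the storage backend injected as a parameter (the naming is up to you):

export default storage => ({
  get(k) {
    try {
      return JSON.parse(storage.getItem(k));
    }
    catch(e) {
      return null;
    }
  },
  set(k, v) {
    storage.setItem(k, JSON.stringify(v));
  }
});

// Elsewhere: import createStorage from './storage'; const storage = createStorage(localStorage);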
We can take a snapshot of the entire app state and push it to localStorage every time FinalStore changes. That solves one part of the problem. Bootstrapping solves the remaining part, as alt.bootstrap allows us to set the state of all stores. The method doesn't emit events. To make our stores populate with the right state, we will need to call it before the components are rendered. In our case, we'll fetch the data from localStorage and invoke it to populate our stores.
An alternative way would be to take a snapshot only when the window gets closed. There's a window level beforeunload hook that could be used. The problem with this approach is that it is brittle. What if something unexpected happens and the hook doesn't get triggered for some reason? You'll lose data.
In order to integrate this idea into our application, we will need to implement a little module to manage it. It takes the possible initial data into account and triggers the new logic.
app/libs/persist.js does the hard part. It will set up a FinalStore, deal with bootstrapping (restoring data) and snapshotting (saving data). I have included an escape hatch in the form of a debug flag. If it is set, the data won't get saved to localStorage. The reasoning is that by doing this, you can set the flag (localStorage.setItem('debug', 'true')), hit localStorage.clear(), and refresh the browser to get a clean slate. The implementation below illustrates these ideas:
app/libs/persist.js
import makeFinalStore from 'alt-utils/lib/makeFinalStore';

export default function persist(alt, storage, storeName) {
const finalStore = makeFinalStore(alt);

try {
alt.bootstrap(storage.get(storeName));
}
catch(e) {
console.error('Failed to bootstrap data', e);
}
finalStore.listen(() => {
if(!storage.get('debug')) {
storage.set(storeName, alt.takeSnapshot());
}
});
}
Finally, we need to trigger the persistency logic at initialization. We will need to pass the relevant
data to it (Alt instance, storage, storage name) and off we go.
app/index.jsx
...
import alt from './libs/alt';
import storage from './libs/storage';
import persist from './libs/persist';
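The listing cuts off before the actual invocation. Based on the description above it boils down to a single call, where the storage key below is just an example name:

persist(alt, storage, 'app');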
If you try refreshing the browser now, the application should retain its state. The solution should scale with minimal effort if we add more stores to the system. Integrating a real back-end wouldn't be a problem either. There are hooks in place for that now.
You could, for instance, pass the initial payload as a part of your HTML (universal rendering), load it up, and then persist the data to the back-end. You have a great deal of control over how to do this, and you can use localStorage as a backup if you want.
Universal rendering is a powerful technique that allows you to use React to improve the performance
of your application while gaining SEO benefits. Rather than leaving all rendering to the front-end,
we perform a part of it at the back-end side. We render the initial application markup at back-end
and provide it to the user. React will pick that up. This can also include data that can be loaded to
your application without having to perform extra queries.
Our persist implementation isn't without its flaws. It is easy to end up in a situation where localStorage contains invalid data due to changes made to the data model. This brings you to the world of database schemas and migrations. There are no easy solutions. Regardless, this is something to keep in mind when developing something more sophisticated. The lesson here is that the more state you inject into your application, the more complicated it gets.
http://alt.js.org/docs/components/altContainer/
this.state = NoteStore.getState();
}
componentDidMount() {
NoteStore.listen(this.storeChanged);
}
componentWillUnmount() {
NoteStore.unlisten(this.storeChanged);
}
storeChanged = (state) => {
// Without a property initializer `this` wouldn't
// point at the right context (defaults to `undefined` in strict mode).
this.setState(state);
};
render() {
const notes = this.state.notes;
return (
<div>
<button className="add-note" onClick={this.addNote}>+</button>
<Notes notes={notes}
onEdit={this.editNote}
onDelete={this.deleteNote} />
<AltContainer
stores={[NoteStore]}
inject={{
notes: () => NoteStore.getState().notes
}}
>
<Notes onEdit={this.editNote} onDelete={this.deleteNote} />
</AltContainer>
</div>
);
}
...
}
The AltContainer allows us to bind data to its immediate children. In this case, it injects the notes property into Notes. The pattern allows us to set up arbitrary connections to multiple stores and manage them. You can find another possible approach in the appendix about decorators.
Integrating the AltContainer tied this component to Alt. If you wanted something more forward-looking, you could push it into a component of your own. That facade would hide Alt and allow you to replace it with something else later on.
Redux is a Flux inspired architecture that was designed with hot loading as its primary
constraint. Redux operates based on a single state tree. The state of the tree is manipulated
using pure functions known as reducers. Even though there's some boilerplate code, Redux
forces you to dig into functional programming. The implementation is quite close to the Alt
based one. - Redux demo
Compared to Redux, Cerebral had a different starting point. It was developed to provide insight on how the application changes its state. Cerebral provides a more opinionated way to develop, and as a result, comes with more batteries included. - Cerebral demo
Mobservable allows you to make your data structures observable. The structures can
then be connected with React components so that whenever the structures update, so do
the React components. Given real references between structures can be used, the Kanban
implementation is surprisingly simple. - Mobservable demo
http://rackt.org/redux/
https://github.com/survivejs/redux-demo
http://www.cerebraljs.com/
https://github.com/survivejs/cerebral-demo
https://mweststrate.github.io/mobservable/
https://github.com/survivejs/mobservable-demo
5.9 Relay?
Compared to Flux, Facebook's Relay improves on the data fetching department. It allows you to push data requirements to the view level. It can be used standalone or with Flux depending on your needs.
Given it's still a largely untested technology, we won't be covering it in this book yet. Relay comes with special requirements of its own (a GraphQL compatible API). Only time will tell how it gets adopted by the community.
5.10 Conclusion
In this chapter, you saw how to port our simple application to use Flux architecture. In the process
we learned about basic concepts of Flux. Now we are ready to start adding more functionality to
our application.
https://facebook.github.io/react/blog/2015/02/20/introducing-relay-and-graphql.html
6. From Notes to Kanban
Kanban board
So far we have developed an application for keeping track of notes in localStorage. We still have
work to do to turn this into a real Kanban as pictured above. Most importantly our system is missing
the concept of Lane.
A Lane is something that should be able to contain many Notes within itself and track their order.
One way to model this is simply to make a Lane point at Notes through an array of Note ids. This
relation could be reversed. A Note could point at a Lane using an id and maintain information about
its position within a Lane. In this case, we are going to stick with the former design as that works
well with re-ordering later on.
In addition, we are going to need a LaneStore and a matching create method. The idea is pretty much the same as for NoteStore earlier. create will concatenate a new lane to the list of lanes. After that, the change will propagate to the listeners (i.e., FinalStore and components).
app/stores/LaneStore.js
import uuid from 'node-uuid';
import alt from '../libs/alt';
import LaneActions from '../actions/LaneActions';

class LaneStore {
  constructor() {
    this.bindActions(LaneActions);

    this.lanes = [];
  }
  create(lane) {
    const lanes = this.lanes;

    lane.id = uuid.v4();
    lane.notes = lane.notes || [];

    this.setState({
      lanes: lanes.concat(lane)
    });
  }
}

export default alt.createStore(LaneStore, 'LaneStore');
We are also going to need a stub for Lanes. We will expand this later. For now we just want something
simple to show up.
app/components/Lanes.jsx
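The stub isn't reproduced above; based on the behavior described below, it can be as simple as this:

import React from 'react';

export default ({lanes}) => {
  return (
    <div className="lanes">
      lanes should go here
    </div>
  );
}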
Next, we need to make room for Lanes at App. We will simply replace the Notes references with Lanes, and set up actions and a store as needed:
app/components/App.jsx
>
<Notes onEdit={this.editNote} onDelete={this.deleteNote} />
<Lanes />
</AltContainer>
</div>
);
}
deleteNote = (id, e) => {
e.stopPropagation();
NoteActions.delete(id);
};
addNote = () => {
NoteActions.create({task: 'New task'});
};
editNote = (id, task) => {
// Don't modify if trying to set an empty value
if(!task.trim()) {
return;
}
NoteActions.update({id, task});
};
addLane() {
LaneActions.create({name: 'New lane'});
}
}
If you check out the implementation in the browser, you can see that the current implementation doesn't do much. It just shows a plus button and a lanes should go here text. Even the add button doesn't work yet. We still need to model Lane and attach Notes to it to make this all work.
We are also going to need a Lane component to make this work. It will render the Lane name and
associated Notes. The example below has been modeled largely after our earlier implementation of
App. It will render an entire lane, including its name and associated notes:
app/components/Lane.jsx
return (
<div {...props}>
<div className="lane-header">
<div className="lane-add-note">
<button onClick={this.addNote}>+</button>
</div>
<div className="lane-name">{lane.name}</div>
</div>
<AltContainer
stores={[NoteStore]}
inject={{
notes: () => NoteStore.getState().notes || []
}}
>
<Notes onEdit={this.editNote} onDelete={this.deleteNote} />
</AltContainer>
</div>
);
}
editNote(id, task) {
// Don't modify if trying to set an empty value
if(!task.trim()) {
return;
}
NoteActions.update({id, task});
}
addNote() {
NoteActions.create({task: 'New task'});
}
deleteNote(id, e) {
e.stopPropagation();
NoteActions.delete(id);
}
}
I am using the Object rest/spread syntax (stage 2) (const {a, b, ...props} = this.props) in the example. This allows us to attach a className to Lane and avoid polluting it with HTML attributes we don't need. The syntax expands the remaining Object key-value pairs as props, so we don't have to write out each prop we want separately.
If you run the application and try adding new notes, you can see there's something wrong. Every note you add is shared by all lanes. If a note is modified, the other lanes update too.
https://github.com/sebmarkbage/ecmascript-rest-spread
Duplicate notes
The reason why this happens is simple. Our NoteStore is a singleton. This means every component
that is listening to NoteStore will receive the same data. We will need to resolve this problem
somehow.
Setting Up attachToLane
When we add a new Note to the system using addNote, we need to make sure it's associated with some Lane. This association can be modeled using a method such as LaneActions.attachToLane({laneId: <id>, noteId: <id>}). Before calling this method, we should create a note and get its id. Here's an example of how we could glue it together:
const note = NoteActions.create({task: 'New task'});

LaneActions.attachToLane({
noteId: note.id,
laneId
});
In order to implement attachToLane, we need to find a lane matching the given lane id and then attach the note id to it. Furthermore, each note should belong to only one lane at a time. We can perform a rough check against that:
app/stores/LaneStore.js
class LaneStore {
...
attachToLane({laneId, noteId}) {
const lanes = this.lanes.map(lane => {
if(lane.id === laneId) {
if(lane.notes.includes(noteId)) {
console.warn('Already attached note to lane', lanes);
}
else {
lane.notes.push(noteId);
}
}
return lane;
});
this.setState({lanes});
}
}
We also need to make sure NoteActions.create returns a note so the setup works just like in the
code example above. The note is needed for creating an association between a lane and a note:
app/stores/NoteStore.js
...
class NoteStore {
constructor() {
this.bindActions(NoteActions);
this.notes = [];
}
create(note) {
const notes = this.notes;
note.id = uuid.v4();
this.setState({
notes: notes.concat(note)
});
return note;
}
...
}
...
Setting Up detachFromLane
deleteNote is the opposite operation of addNote. When removing a Note, it's important to remove its association with a Lane as well. For this purpose we can implement LaneActions.detachFromLane({laneId: <id>, noteId: <id>}). We would use it like this:
LaneActions.detachFromLane({laneId, noteId});
NoteActions.delete(noteId);
The implementation will resemble attachToLane. In this case, we'll remove the possibly found Note instead:
app/stores/LaneStore.js
class LaneStore {
...
attachToLane({laneId, noteId}) {
...
}
detachFromLane({laneId, noteId}) {
const lanes = this.lanes.map(lane => {
if(lane.id === laneId) {
lane.notes = lane.notes.filter(note => note !== noteId);
}
return lane;
});
this.setState({lanes});
}
}
Just building an association between a lane and a note isn't enough. We are going to need some way to resolve the note references to data we can display through the user interface. For this purpose, we need to implement a special getter so we get just the data we want per lane.
Just implementing the getter isn't enough. We also need to make it public. In Alt, this can be achieved using this.exportPublicMethods. It takes an object that describes the public interface of the store in question. Consider the implementation below:
app/stores/NoteStore.js
class NoteStore {
constructor() {
this.bindActions(NoteActions);
this.notes = [];
this.exportPublicMethods({
getNotesByIds: this.getNotesByIds.bind(this)
});
}
...
getNotesByIds(ids) {
// 1. Make sure we are operating on an array and
// map over the ids
// [id, id, id, ...] -> [[Note], [], [Note], ...]
return (ids || []).map(
// 2. Extract matching notes
// [Note, Note, Note] -> [Note, ...] (match) or [] (no match)
id => this.notes.filter(note => note.id === id)
// 3. Filter out possible empty arrays and get notes
// [[Note], [], [Note]] -> [[Note], [Note]] -> [Note, Note]
).filter(a => a.length).map(a => a[0]);
}
}
Note that the implementation filters possible non-matching ids from the result.
...
import LaneActions from '../actions/LaneActions';
return (
<div {...props}>
<div className="lane-header">
<div className="lane-add-note">
<button onClick={this.addNote}>+</button>
</div>
<div className="lane-name">{lane.name}</div>
</div>
<AltContainer
stores={[NoteStore]}
inject={{
notes: () => NoteStore.getNotesByIds(lane.notes)
}}
>
<Notes onEdit={this.editNote} onDelete={this.deleteNote} />
</AltContainer>
</div>
);
}
editNote(id, task) {
// Don't modify if trying set an empty value
if(!task.trim()) {
return;
}
NoteActions.update({id, task});
}
addNote = (e) => {
const laneId = this.props.lane.id;
const note = NoteActions.create({task: 'New task'});
LaneActions.attachToLane({
noteId: note.id,
laneId
});
};
deleteNote = (noteId, e) => {
e.stopPropagation();
const laneId = this.props.lane.id;
LaneActions.detachFromLane({laneId, noteId});
NoteActions.delete(noteId);
};
}
Methods where we need to refer to this have been bound using a property initializer. An
alternative way to achieve this would have been to bind at render or at the constructor.
notes: () => NoteStore.getNotesByIds(lane.notes) - Our new getter is used to filter notes.
addNote, deleteNote - These operate now based on the new logic we specified. Note that
we trigger detachFromLane before delete at deleteNote. Otherwise we may try to render
non-existent notes. You can try swapping the order to see warnings.
After these changes, we have a system that can maintain the relations between Lanes and Notes. The
current structure allows us to keep singleton stores and a flat data structure. Dealing with references
is a little awkward, but that's consistent with the Flux architecture.
If you try to add notes to a specific lane, they shouldn't be duplicated anymore. Also, editing a note
should behave as you might expect:
Separate notes
// Triggers waitFor
LaneActions.attachToLane({laneId});
app/stores/LaneStore.js
class LaneStore {
...
attachToLane({laneId, noteId}) {
if(!noteId) {
this.waitFor(NoteStore);
noteId = NoteStore.getState().notes.slice(-1)[0].id;
}
...
}
}
http://alt.js.org/guide/wait-for/
Fortunately, we can avoid waitFor in this case. You should use it carefully. It becomes necessary
when you need to deal with asynchronously fetched data that depends on each other, however.
render() {
const {value, onEdit, onValueClick, editing, ...props} = this.props;
return (
<div {...props}>
{editing ? this.renderEdit() : this.renderValue()}
</div>
);
}
renderEdit = () => {
return <input type="text"
ref={
(e) => e ? e.selectionStart = this.props.value.length : null
}
autoFocus={true}
defaultValue={this.props.value}
onBlur={this.finishEdit}
onKeyPress={this.checkEnter} />;
};
renderValue = () => {
const onDelete = this.props.onDelete;
return (
<div onClick={this.props.onValueClick}>
<span className="value">{this.props.value}</span>
{onDelete ? this.renderDelete() : null }
</div>
);
};
renderDelete = () => {
return <button
className="delete"
onClick={this.props.onDelete}>x</button>;
};
edit = () => {
// Enter edit mode.
this.setState({
editing: true
});
};
if(this.props.onEdit) {
this.props.onEdit(value);
Editable uses an uncontrolled design with its input. This means we pass control over
its state to the DOM and capture it through event handlers. If you wanted to validate the input
while the user is typing, it would be useful to convert it into a controlled design. In this case
you would define an onChange handler and a value prop. It's more work, but it also provides
more control. The React documentation discusses controlled components in greater detail.
https://facebook.github.io/react/docs/forms.html
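To make the difference concrete, here is a minimal sketch of what a controlled variant of the renderEdit method above could look like. It assumes the component keeps the current text in its own state (a hypothetical this.state.value, seeded from this.props.value); the project itself does not do this:
renderEdit = () => {
  return <input type="text"
    // Controlled design: the input always mirrors component state
    value={this.state.value}
    // Every keystroke flows through React, so it could be validated here
    onChange={(e) => this.setState({value: e.target.value})}
    onBlur={this.finishEdit}
    onKeyPress={this.checkEnter} />;
};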
...
.note .value {
/* force to use inline-block so that it gets minimum height */
display: inline-block;
}
.note .delete {
...
}
.note:hover .delete {
visibility: visible;
}
If you refresh the browser, you should see Uncaught TypeError: Cannot read property 'bind'
of undefined. This has to do with the onValueClick definition we added. We will address this
next.
The Typing with React chapter discusses how to use propTypes to work around this problem.
Together with defaultProps, it allows us to set good defaults for props while also checking their
types during development.
...
import Editable from './Editable.jsx';
return (
<div {...props}>
<div className="lane-header" onClick={this.activateLaneEdit}>
<div className="lane-add-note">
<button onClick={this.addNote}>+</button>
</div>
<Editable className="lane-name" editing={lane.editing}
value={lane.name} onEdit={this.editName} />
<div className="lane-delete">
<button onClick={this.deleteLane}>x</button>
</div>
</div>
<AltContainer
stores={[NoteStore]}
inject={{
notes: () => NoteStore.getNotesByIds(lane.notes)
}}
>
NoteActions.update({id, task});
}
addNote = (e) => {
// If a note is added, avoid opening lane name edit by stopping
// event bubbling in this case.
e.stopPropagation();
const laneId = this.props.lane.id;
const note = NoteActions.create({task: 'New task'});
LaneActions.attachToLane({
noteId: note.id,
laneId
});
};
...
editName = (name) => {
const laneId = this.props.lane.id;
If you try to edit a lane name now, you should see a log message at the console:
We are also going to need LaneStore level implementations for these. They can be modeled based
on what we have seen in NoteStore earlier:
app/stores/LaneStore.js
...
class LaneStore {
...
create(lane) {
...
}
update(updatedLane) {
const lanes = this.lanes.map(lane => {
if(lane.id === updatedLane.id) {
return Object.assign({}, lane, updatedLane);
}
return lane;
});
this.setState({lanes});
}
delete(id) {
this.setState({
lanes: this.lanes.filter(lane => lane.id !== id)
});
}
attachToLane({laneId, noteId}) {
...
}
...
}
If a lane is deleted, it would be a good idea to get rid of the associated notes as well. In
the current implementation they are left hanging in the NoteStore. It doesn't hurt the
functionality, but it's one of those details that you may want to be aware of.
Now that we have resolved actions and store, we need to adjust our component to take these changes
into account:
app/components/Lane.jsx
...
export default class Lane extends React.Component {
...
editNote(id, task) {
// Don't modify if trying to set an empty value
if(!task.trim()) {
NoteActions.update({id, editing: false});
return;
}
NoteActions.update({id, task, editing: false});
}
LaneActions.delete(laneId);
};
activateLaneEdit = () => {
const laneId = this.props.lane.id;
Try modifying a lane name now. Modifications should get saved the same way as they do for
notes. Deleting lanes should be possible as well.
If you want lanes and notes to be editable right after they are created, set lane.editing =
true; or note.editing = true; when creating them.
body {
background: cornsilk;
font-family: sans-serif;
}
.lane {
display: inline-block;
margin: 1em;
background-color: #efefef;
border: 1px solid #ccc;
border-radius: 0.5em;
min-width: 10em;
vertical-align: top;
}
.lane-header {
overflow: auto;
padding: 1em;
color: #efefef;
background-color: #333;
border-top-left-radius: 0.5em;
border-top-right-radius: 0.5em;
}
.lane-name {
float: left;
}
.lane-add-note {
float: left;
margin-right: 0.5em;
}
.lane-delete {
float: right;
margin-left: 0.5em;
visibility: hidden;
}
.lane-header:hover .lane-delete {
visibility: visible;
}
background-color: #fdfdfd;
border: 1px solid #ccc;
}
.lane-delete button {
padding: 0;
cursor: pointer;
color: white;
background-color: rgba(0, 0, 0, 0);
border: 0;
}
...
Styled Kanban
As this is a small project, we can leave the CSS in a single file like this. In case it starts growing,
consider separating it into multiple files. One way to do this is to extract CSS per component and then
refer to it there (e.g., require('./lane.css') at Lane.jsx).
Besides keeping things nice and tidy, Webpack's lazy loading machinery can pick this up. As a result,
the initial CSS your user has to load will be smaller. I go into further detail later as I discuss styling
at Styling React.
https://facebook.github.io/react/docs/jsx-in-depth.html#namespaced-components
)}</div>
);
}
app/components/Lane.jsx
...
Now we have pushed the control over Lane formatting to a higher level. In this case, the change isn't
worth it, but it can make sense in a more complex case.
You can use a similar approach for more generic components as well. Consider something like Form.
You could easily have Form.Label, Form.Input, Form.Textarea and so on. Each would contain your
custom formatting and logic as needed.
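As a rough sketch of the idea, a namespaced Form could be put together by attaching subcomponents as statics. The names and markup below are hypothetical and only illustrate the pattern:
import React from 'react';

export default class Form extends React.Component {
  render() {
    return <form {...this.props}>{this.props.children}</form>;
  }
}

// Expose related pieces through the same namespace: Form.Label, Form.Input, ...
Form.Label = ({children}) => <label className="form-label">{children}</label>;
Form.Input = (props) => <input className="form-input" {...props} />;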
6.7 Conclusion
The current design has been optimized with drag and drop operations in mind. Moving notes within
a lane is a matter of swapping ids. Moving notes from one lane to another is again an operation over
ids. This structure leads to some complexity as we need to track ids, but it will pay off in the next
chapter.
There isn't always a clear-cut way to model data and relations. In other scenarios, we could push the
references elsewhere. For instance, the note to lane relation could be inverted and pushed to the Note
level. We would still need to track their order within a lane somehow. We would be pushing the
complexity elsewhere by doing this.
Currently, NoteStore is treated as a singleton. Another way to deal with it would be to create a
NoteStore per Notes dynamically. Even though this simplifies dealing with the relations somewhat,
this is a Flux anti-pattern better avoided. It brings complications of its own as you need to deal with
store lifecycle at the component level. Also dealing with drag and drop logic will become hard.
We still cannot move notes between lanes or within a lane. We will solve that in the next chapter,
as we implement drag and drop.
7. Implementing Drag and Drop
Our Kanban application is almost usable now. It looks alright and there's some basic functionality
in place. In this chapter, I'll show you how to take it to the next level. We will integrate some drag
and drop functionality as we set up React DnD. After this chapter, you should be able to sort notes
within a lane and drag them from one lane to another.
...
import {DragDropContext} from 'react-dnd';
import HTML5Backend from 'react-dnd-html5-backend';
@DragDropContext(HTML5Backend)
export default class App extends React.Component {
...
}
After this change, the application should look exactly the same as before. We are now ready to add
some sweet functionality to it.
https://gaearon.github.io/react-dnd/
https://github.com/yahoo/react-dnd-touch-backend
We also need to tweak Notes to use our wrapper component. We will simply wrap Editable using
Note, and we are good to go. We will pass note data to the wrapper as we'll need that later when
dealing with logic:
app/components/Notes.jsx
<Note className="note" id={note.id} key={note.id}>
<Editable
editing={note.editing}
value={note.task}
onValueClick={onValueClick.bind(null, note.id)}
onEdit={onEdit.bind(null, note.id)}
onDelete={onDelete.bind(null, note.id)} />
</Note>
)}</ul>
);
}
After this change, the application should look exactly the same as before. We have achieved nothing
yet. Fortunately, we can start adding functionality, now that we have the foundation in place.
export default {
NOTE: 'note'
};
This definition can be expanded later as we add new types to the system.
Next, we need to tell our Note that it's possible to drag and drop it. This is done through the @DragSource
and @DropTarget annotations.
const noteSource = {
beginDrag(props) {
console.log('begin dragging note', props);
return {};
}
};
return connectDragSource(
<li {...props}>{props.children}</li>
);
}
}
If you drag a Note now, you should see a debug message at the console.
We still need to make sure Note works as a @DropTarget. Later on, this will allow swapping notes as
we add the logic in place.
Note that React DnD doesn't support hot loading perfectly. You may need to refresh the
browser to see the log messages you expect!
app/components/Note.jsx
const noteSource = {
beginDrag(props) {
console.log('begin dragging note', props);
return {};
}
};
const noteTarget = {
hover(targetProps, monitor) {
const sourceProps = monitor.getItem();
render() {
const {connectDragSource, connectDropTarget,
id, onMove, ...props} = this.props;
return connectDragSource(connectDropTarget(
<li {...props}>{props.children}</li>
));
}
}
Refresh the browser and try to drag a note around. You should see a lot of log messages.
Both decorators give us access to the Note props. In this case, we are using monitor.getItem() to
access them at noteTarget. This is the key to making this work properly.
...
const noteSource = {
beginDrag(props) {
return {
id: props.id
};
}
};
const noteTarget = {
hover(targetProps, monitor) {
const sourceProps = monitor.getItem();
...
If you run the application now, you'll likely get a bunch of onMove related errors. We should make
Notes aware of it:
app/components/Notes.jsx
If you drag a Note around now, you should see log messages like source <id> target <id> in the
console. We are getting close. We still need to figure out what to do with these ids, though.
We should connect this action with the onMove hook we just defined:
app/components/Notes.jsx
It could be a good idea to refactor onMove as a prop to make the system more flexible. In our
implementation the Notes component is coupled with LaneActions. This isn't particularly
nice if you want to use it in some other context.
...
class LaneStore {
...
detachFromLane({laneId, noteId}) {
...
}
move({sourceId, targetId}) {
console.log(`source: ${sourceId}, target: ${targetId}`);
}
}
You should see the same log messages as earlier. Next, we'll need to add some logic to make this
work. We can use the logic outlined above here. We have two cases to worry about: moving within
a lane itself and moving from one lane to another.
https://facebook.github.io/react/docs/update.html
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/splice
...
import update from 'react-addons-update';
class LaneStore {
...
move({sourceId, targetId}) {
const lanes = this.lanes;
const sourceLane = lanes.filter(lane => lane.notes.includes(sourceId))[0];
const targetLane = lanes.filter(lane => lane.notes.includes(targetId))[0];
const sourceNoteIndex = sourceLane.notes.indexOf(sourceId);
const targetNoteIndex = targetLane.notes.indexOf(targetId);
this.setState({lanes});
}
}
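The snippet above only locates the lanes and indexes involved. The actual reordering goes right before the this.setState({lanes}) call. A sketch of it, assuming the react-addons-update and Array.prototype.splice approach referenced above, could look like this:
if(sourceLane === targetLane) {
  // Move within the same lane in a single operation to avoid complications
  sourceLane.notes = update(sourceLane.notes, {
    $splice: [
      [sourceNoteIndex, 1],
      [targetNoteIndex, 0, sourceId]
    ]
  });
}
else {
  // Remove the note from the source lane...
  sourceLane.notes.splice(sourceNoteIndex, 1);

  // ...and insert it at the hovered position of the target lane
  targetLane.notes.splice(targetNoteIndex, 0, sourceId);
}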
If you try out the application now, you can actually drag notes around and it should behave as you
expect. Dragging to empty lanes doesn't work, though, and the presentation could be better.
It would be better if we indicated the dragged note's location more clearly. We can do this by hiding
the dragged note from the list. React DnD provides us the hooks we need for this purpose.
...
render() {
const {connectDragSource, connectDropTarget, isDragging,
onMove, id, ...props} = this.props;
return connectDragSource(connectDropTarget(
<li style={{
opacity: isDragging ? 0 : 1
}} {...props}>{props.children}</li>
));
}
}
If you drag a note within a lane, the dragged note should be shown as blank. If you try moving the
note to another lane, you will see this doesn't quite work, though.
The problem is that our note component gets unmounted during this process. This makes it lose its
isDragging state. Fortunately, we can override the default behavior by implementing an isDragging
check of our own to fix the issue. Perform the following addition:
app/components/Note.jsx
...
const noteSource = {
beginDrag(props) {
return {
id: props.id
};
},
isDragging(props, monitor) {
return props.id === monitor.getItem().id;
}
};
...
This tells React DnD to perform our custom check instead of relying on the default logic. After this
change, unmounting won't be an issue and the feature works as we expect.
There is one little problem in our system. We cannot drag notes to an empty lane yet.
app/components/Lane.jsx
...
import {DropTarget} from 'react-dnd';
import ItemTypes from '../constants/itemTypes';
const noteTarget = {
hover(targetProps, monitor) {
const targetId = targetProps.lane.id;
const sourceProps = monitor.getItem();
const sourceId = sourceProps.id;
return connectDropTarget(
...
);
}
...
}
If you refresh your browser and drag a note to a lane now, you should see log messages at your
console. The question is what to do with this data. Before actually moving the note to a lane, we
should check whether it's empty or not. If it has content already, the operation doesn't make sense.
Our existing logic can deal with that.
This is a simple check to make. Given we know the target lane at our noteTarget hover handler, we
can check its notes array as follows:
app/components/Lane.jsx
...
const noteTarget = {
hover(targetProps, monitor) {
const targetId = targetProps.lane.id;
const sourceProps = monitor.getItem();
const sourceId = sourceProps.id;
if(!targetProps.lane.notes.length) {
console.log('source', sourceId, 'target', targetProps);
}
}
};
...
If you refresh your browser and drag around now, the log message should appear only when you
drag a note to a lane that doesn't have any notes attached to it yet.
...
const noteTarget = {
hover(targetProps, monitor) {
const sourceProps = monitor.getItem();
const sourceId = sourceProps.id;
if(!targetProps.lane.notes.length) {
LaneActions.attachToLane({
laneId: targetProps.lane.id,
noteId: sourceId
});
}
}
};
...
There is one problem, though. What happens to the old instance of the Note? In the current solution,
the old lane will have an id pointing to it. As a result, we will have duplicate data in the system.
Earlier, we resolved this using detachFromLane. The problem is that we don't know which lane
the note belonged to. We could pass this data through the component hierarchy, but that doesn't feel
particularly nice.
We can resolve this by adding a check against the case at attachToLane:
app/stores/LaneStore.js
...
class LaneStore {
...
attachToLane({laneId, noteId}) {
const lanes = this.lanes.map(lane => {
// Get rid of the old reference if the note is attached somewhere already
if(lane.notes.includes(noteId)) {
lane.notes = lane.notes.filter(note => note !== noteId);
}
// Attach the note to the target lane
if(lane.id === laneId) {
lane.notes.push(noteId);
}
return lane;
});
this.setState({lanes});
}
...
}
The check above goes through the LaneStore data. If it finds the note by id in some lane, it gets rid of
the old reference. After that, we have a clean slate, and we can attach the note to the target lane. This
change allows us to drop detachFromLane from the system entirely, but I'll leave that up to you.
After these changes you should be able to drag notes to empty lanes.
...
);
}
...
return dragSource(connectDropTarget(
<li style={{
opacity: isDragging ? 0 : 1
}} {...props}>{props.children}</li>
));
}
}
This small change gives us the behavior we want. If you try to edit a note now, the input should
behave as you would expect. Design-wise it was a good idea to keep the editing
state outside of Editable. If we hadn't done that, implementing this change would have been a lot
harder as we would have had to extract the state outside of the component.
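For reference, the dragSource used above can be defined as a pass-through while editing. This is a sketch that assumes an editing prop is also passed down to Note; only the dragSource line is the actual change:
render() {
  const {connectDragSource, connectDropTarget, isDragging,
    onMove, id, editing, ...props} = this.props;
  // Skip the drag source wrapper while editing so the input behaves normally
  const dragSource = editing ? a => a : connectDragSource;

  return dragSource(connectDropTarget(
    <li style={{
      opacity: isDragging ? 0 : 1
    }} {...props}>{props.children}</li>
  ));
}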
Now we have a Kanban table that is actually useful! We can create new lanes and notes, and edit
and remove them. In addition, we can move notes around. Mission accomplished!
7.8 Conclusion
In this chapter, you saw how to implement drag and drop for our little application. You can model
sorting for lanes using the same technique. First, you mark the lanes to be draggable and droppable,
then you sort out their ids, and finally, you'll add some logic to make it all work together. It should
be considerably simpler than what we did with notes.
I encourage you to expand the application. The current implementation should work as a starting
point for something greater. Besides extending the DnD implementation, you can try adding more
data to the system. You could also improve the visual outlook. One option would be to try
out the various styling approaches discussed in the Styling React chapter.
To make it harder to break the application during development, you can also implement tests
as discussed in Testing React. Typing with React discusses yet more ways to harden your code.
Learning these approaches can be worthwhile. Sometimes it may be worth your while to design
your application test first. It is a valuable approach as it allows you to document your assumptions
as you go.
In the next chapter, we'll set up a production level build for our application. You can use the
techniques discussed in your own projects.
8. Building Kanban
Now that we have a nice Kanban application up and running, we can worry about showing it to
the public. The goal of this chapter is to set up a nice production grade build. There are various
techniques we can apply to bring the bundle size down. We can also leverage browser caching.
> webpack
Hash: 807faffbf966eb7f08fc
Version: webpack 1.12.13
Time: 3967ms
Asset Size Chunks Chunk Names
bundle.js 1.12 MB 0 [emitted] app
+ 337 hidden modules
1.12 MB is a lot! There are a couple of basic things we can do to slim down our build. We can apply
some minification to it. We can also tell React to optimize itself. Doing both will result in significant
size savings. Provided we apply gzip compression on the content when serving it, further gains may
be made.
Minification
Minification will convert our code into a smaller format without losing any meaning. Usually this
means some amount of rewriting code through predefined transformations. Sometimes, this can
break code as it can rewrite pieces of code you inadvertently depend upon. This is the reason why
we gave explicit ids to our stores for instance.
The easiest way to enable minification is to call webpack -p (-p as in production). Alternatively,
we can use a plugin directly as this provides us more control. By default Uglify will output a lot
of warnings, and since they don't provide value in this case, we'll be disabling them. Add the following
section to your Webpack configuration:
webpack.config.js
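The configuration snippet itself did not survive here. A minimal sketch, assuming the TARGET variable defined in this file and a webpack-merge based build block with webpack required at the top, could look like this:
if(TARGET === 'build') {
  module.exports = merge(common, {
    plugins: [
      // Minify the output and silence Uglify's warnings
      new webpack.optimize.UglifyJsPlugin({
        compress: {
          warnings: false
        }
      })
    ]
  });
}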
Uglify warnings can help you to understand how it processes the code. Therefore it may
be beneficial to have a peek at the output every once in a while.
If you execute npm run build now, you should see better results:
> webpack
Hash: ff80bbb1bdd7df271313
Version: webpack 1.12.13
Time: 9159ms
Asset Size Chunks Chunk Names
bundle.js 368 kB 0 [emitted] app
+ 337 hidden modules
Given it needs to do more work, it took longer. But on the plus side the build is much smaller now.
Setting process.env.NODE_ENV
We can perform one more step to decrease build size further. React relies on process.env.NODE_ENV
based optimizations. If we force it to production, React will get built in an optimized manner. This
https://webpack.github.io/docs/list-of-plugins.html#uglifyjsplugin
will disable some checks (e.g., property type checks). Most importantly it will give you a smaller
build and improved performance.
In Webpack terms, you can add the following snippet to the plugins section of your configuration:
webpack.config.js
...
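The snippet is not reproduced above. A sketch of the standard DefinePlugin approach, added to the plugins array of the build configuration, would be:
// Replace process.env.NODE_ENV with the string 'production' at build time
new webpack.DefinePlugin({
  'process.env.NODE_ENV': JSON.stringify('production')
})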
This is a useful technique for your own code. If you have a section of code that evaluates as false
after this process, the minifier will remove it from the build completely.
Execute npm run build again, and you should see improved results:
> webpack
Hash: cde2c1861fbd65f03c3b
Version: webpack 1.12.13
Time: 9032ms
Asset Size Chunks Chunk Names
bundle.js 307 kB 0 [emitted] app
+ 333 hidden modules
So we went from 1.12 MB to 368 kB, and finally, to 307 kB. The final build is a little faster than the
previous one. As that 307 kB can be served gzipped, it is quite reasonable. gzipping will drop around
another 40%. It is well supported by browsers.
We can do a little better, though. We can split app and vendor bundles and add hashes to their
filenames.
https://www.npmjs.com/package/babel-plugin-transform-inline-environment-variables
https://babeljs.io/docs/plugins/transform-inline-environment-variables/
process.env.BABEL_ENV = TARGET;
const common = {
entry: {
app: PATHS.app
},
resolve: {
extensions: ['', '.js', '.jsx']
},
output: {
path: PATHS.build,
// Output using entry name
filename: '[name].js'
},
...
};
},
plugins: [
...
]
});
}
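The part that declares the vendor entry was elided above. A sketch of it, assuming pkg refers to the parsed package.json (as in the test configuration shown later in this book) and that alt-utils is filtered out of the vendor bundle, could look like this:
if(TARGET === 'build') {
  module.exports = merge(common, {
    // Define a vendor entry point based on the project dependencies
    entry: {
      vendor: Object.keys(pkg.dependencies).filter(function(v) {
        // alt-utils won't work with this setup, so leave it out
        return v !== 'alt-utils';
      })
    },
    ...
  });
}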
This tells Webpack that we want a separate entry chunk for our project's vendor level dependencies.
Beyond this, it's possible to define chunks that are loaded dynamically. This can be achieved through
require.ensure.
If you execute the build now using npm run build, you should see something along this:
> webpack
Hash: 192a0643b9245a61a6e0
Version: webpack 1.12.13
Time: 14745ms
Asset Size Chunks Chunk Names
app.js 307 kB 0 [emitted] app
vendor.js 286 kB 1 [emitted] vendor
[0] multi vendor 112 bytes {1} [built]
+ 333 hidden modules
Now we have separate app and vendor bundles. There's something wrong, however. If you examine
the files, you'll see that app.js contains vendor dependencies. We need to do something to tell
Webpack to avoid this situation. This is where CommonsChunkPlugin comes in.
Setting Up CommonsChunkPlugin
CommonsChunkPlugin allows us to extract the code we need for the vendor bundle. In addition, we
will use it to extract a manifest. It is a file that tells Webpack how to map each module to each file.
We will need this in the next step for setting up long term caching. Here's the setup:
webpack.config.js
https://webpack.github.io/docs/code-splitting.html
...
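The plugin configuration itself was lost here. A sketch of the usual setup, extracting both a vendor chunk and a manifest within the build-specific plugins, is shown below:
plugins: [
  // Extract vendor code and the Webpack manifest into chunks of their own
  new webpack.optimize.CommonsChunkPlugin({
    names: ['vendor', 'manifest']
  })
]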
If you run npm run build now, you should see output as below:
> webpack
Hash: 3a08642b633ebeafa62f
Version: webpack 1.12.13
Time: 11044ms
Asset Size Chunks Chunk Names
app.js 21.3 kB 0, 2 [emitted] app
vendor.js 286 kB 1, 2 [emitted] vendor
manifest.js 743 bytes 2 [emitted] manifest
[0] multi vendor 112 bytes {1} [built]
+ 333 hidden modules
The situation is far better now. Note how small the app bundle is compared to the vendor bundle. In order
to really benefit from this split, we should set up caching. This can be achieved by adding cache
busting hashes to the filenames.
Webpack provides placeholders, such as [name] and [chunkhash], for this purpose. Using them you could end up with filenames such as:
app.d587bbd6e38337f5accd.js
vendor.dc746a5db4ed650296e1.js
If the file contents are different, the hash will change as well, thus invalidating the cache. More
accurately, the browser will send a new request for the new file. This means that if only the app bundle gets
updated, only that file needs to be requested again.
An alternative way to achieve the same would be to generate static filenames and invalidate
the cache through a querystring (i.e., app.js?d587bbd6e38337f5accd). The part behind the
question mark will invalidate the cache. This method is not recommended. According to
Steve Souders, attaching the hash to the filename is a more performant way to go.
We can use the placeholder idea within our configuration like this:
webpack.config.js
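The exact snippet is missing here. A sketch of the idea, applied to the output section of the build configuration, would be:
output: {
  path: PATHS.build,
  // Include a content dependent hash in each chunk name for cache busting
  filename: '[name].[chunkhash].js',
  chunkFilename: '[chunkhash].js'
}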
If you execute npm run build now, you should see output like this:
http://www.stevesouders.com/blog/2008/08/23/revving-filenames-dont-use-querystring/
> webpack
Hash: 7ddb226a34540aa401bc
Version: webpack 1.12.13
Time: 8741ms
Asset Size Chunks Chunk Names
app.5b758fea66f30faf0f0e.js 21.3 kB 0, 2 [emitted] app
vendor.db9a3343cf47e4b3d83c.js 286 kB 1, 2 [emitted] vendor
manifest.19a8a5985bb61f546ce3.js 763 bytes 2 [emitted] manifest
[0] multi vendor 112 bytes {1} [built]
+ 333 hidden modules
Our files have neat hashes now. To prove that it works, you could try altering app/index.jsx and
include a console.log there. After you build, only app and manifest related bundles should change.
One more way to improve the build further would be to load popular dependencies, such as React,
through a CDN. That would decrease the size of the vendor bundle even further while adding an
external dependency on the project. The idea is that if the user has hit the CDN earlier, caching can
kick in just like here.
In order to connect html-webpack-plugin with our project, we need to tweak the configuration a notch. While at it,
we can get rid of build/index.html as we won't need that anymore. The system will generate it for us after this
step:
webpack.config.js
https://webpack.github.io/docs/node.js-api.html
https://www.npmjs.com/package/html-webpack-plugin
https://www.npmjs.com/package/html-webpack-template
...
const HtmlWebpackPlugin = require('html-webpack-plugin');
...
const common = {
...
module: {
...
}
},
plugins: [
new HtmlWebpackPlugin({
template: 'node_modules/html-webpack-template/index.ejs',
title: 'Kanban app',
appMountId: 'app',
inject: false
})
]
};
...
If you execute npm run build now, the output should include index.html:
> webpack
Hash: 7ddb226a34540aa401bc
Version: webpack 1.12.13
Time: 9200ms
Asset Size Chunks Chunk Names
app.5b758fea66f30faf0f0e.js 21.3 kB 0, 2 [emitted] app
vendor.db9a3343cf47e4b3d83c.js 286 kB 1, 2 [emitted] vendor
manifest.19a8a5985bb61f546ce3.js 763 bytes 2 [emitted] manifest
index.html 648 bytes [emitted]
[0] multi vendor 112 bytes {1} [built]
+ 333 hidden modules
Child html-webpack-plugin for "index.html":
+ 3 hidden modules
Even though this adds some configuration to our project, we don't have to worry about gluing things
together now. If more flexibility is needed, it's possible to implement a custom template.
webpack.config.js
...
const CleanPlugin = require('clean-webpack-plugin');
...
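The plugin invocation was dropped from the snippet above. A sketch of the usual usage, cleaning the build directory within the build-specific plugins, would be:
plugins: [
  // Wipe the build directory before each production build
  new CleanPlugin([PATHS.build]),
  ...
]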
After this change, our build directory should remain nice and tidy when building. See
clean-webpack-plugin for further options.
If you want to preserve possible dotfiles within your build directory, you can use
[path.join(PATHS.build, '/*')] instead of [PATHS.build].
An alternative would be to use your terminal (rm -rf ./build/) and set that up in the
scripts section of package.json.
Install extract-text-webpack-plugin to get started. Next, we need to get rid of our current CSS related declaration at the common configuration.
After that, we need to split it up between the build and dev configuration sections as follows:
webpack.config.js
https://www.npmjs.com/package/clean-webpack-plugin
https://www.npmjs.com/package/extract-text-webpack-plugin
...
const ExtractTextPlugin = require('extract-text-webpack-plugin');
...
const common = {
entry: {
app: PATHS.app
},
resolve: {
extensions: ['', '.js', '.jsx']
},
module: {
loaders: [
{
test: /\.jsx?$/,
loaders: ['babel?cacheDirectory'],
include: PATHS.app
}
]
},
plugins: [
new HtmlWebpackPlugin({
template: 'node_modules/html-webpack-template/index.html',
title: 'Kanban app',
appMountId: 'app',
inject: false
})
]
};
...
},
module: {
loaders: [
// Define development specific CSS setup
{
test: /\.css$/,
loaders: ['style', 'css'],
include: PATHS.app
}
]
},
plugins: [
...
]
});
}
});
}
Using this setup, we can still benefit from the HMR during development. For a production build, we
generate a separate CSS. html-webpack-plugin will pick it up automatically and inject it into our
index.html.
If you want to pass more loaders to the ExtractTextPlugin, you should use ! syntax.
Example: ExtractTextPlugin.extract('style', 'css!postcss').
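The build-specific half of the split was not captured above. A sketch of it, under the same assumptions as the rest of this configuration, might look like the following:
module: {
  loaders: [
    // Extract CSS into a separate file during the production build
    {
      test: /\.css$/,
      loader: ExtractTextPlugin.extract('style', 'css'),
      include: PATHS.app
    }
  ]
},
plugins: [
  // Emit the extracted CSS with a cache busting hash of its own
  new ExtractTextPlugin('[name].[chunkhash].css'),
  ...
]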
After running npm run build, you should see output similar to the following:
> webpack
If you are getting a Module build failed: CssSyntaxError: error, make sure your common
configuration doesn't have a CSS related section set up!
Now our styling has been pushed to a separate CSS file. As a result, our JavaScript bundles have
become slightly smaller. We also avoid the FOUC problem. The browser doesn't have to wait for
JavaScript to load to get the styling information. Instead, it can process the CSS separately, avoiding a flash
of unstyled content (FOUC).
If you have a complex project with a lot of dependencies, it is likely a good idea to
use the DedupePlugin. It will find possible duplicate files and deduplicate them. Use new
webpack.optimize.DedupePlugin() in your plugins definition to enable it.
import './main.css';
...
webpack.config.js
...
const PATHS = {
app: path.join(__dirname, 'app'),
build: path.join(__dirname, 'build'),
style: path.join(__dirname, 'app/main.css')
};
...
const common = {
entry: {
app: PATHS.app,
style: PATHS.style
},
...
}
...
If you build the project now through npm run build, you should see something like this:
Hash: 22e7d6f1b15400035cbb
Version: webpack 1.12.13
Time: 9271ms
Asset Size Chunks Chunk Names
app.5d41daf72705cb65cd89.js 16.4 kB 0, 3 [emitted] app
style.0688e2aa1fa6c618dcdd.js 38 bytes 1, 3 [emitted] style
vendor.ec174332c803122d2dba.js 286 kB 2, 3 [emitted] vendor
manifest.035b449d16a98df2cb4f.js 788 bytes 3 [emitted] manifest
style.0688e2aa1fa6c618dcdd.css 1.16 kB 1, 3 [emitted] style
index.html 770 bytes [emitted]
[0] multi vendor 112 bytes {2} [built]
+ 333 hidden modules
Child html-webpack-plugin for "index.html":
+ 3 hidden modules
Child extract-text-webpack-plugin:
+ 2 hidden modules
After this step, we have managed to separate the styling from the JavaScript. Changes made to it shouldn't
affect the JavaScript chunk hashes or vice versa. The approach comes with a small glitch, though. If you
look closely, you can see a file named style.64acd61995c3afbc43f1.js. It is a file generated by Webpack
and it looks like this:
webpackJsonp([1,3],[function(n,c){}]);
Technically it's redundant. It would be safe to exclude the file through a check at the HtmlWebpackPlugin
template. But this solution is good enough for the project. Ideally Webpack shouldn't generate
these files at all.
In the future we might be able to avoid this problem by using the [contenthash] placeholder.
It's generated based on the file content (i.e., CSS in this case). Unfortunately it doesn't work as
expected when the file is included in a chunk as in our original setup. This issue has been
reported as Webpack issue #672.
https://github.com/webpack/webpack/issues/672
{
...
"scripts": {
"stats": "webpack --profile --json > stats.json",
...
},
...
}
webpack.config.js
...
...
If you execute npm run stats now, you should find stats.json at your project root after it has finished
processing. We can take this file and pass it to the online tool. Note that the tool works only over
HTTP! If your data is sensitive, consider using the standalone version instead.
Besides helping you to understand your bundle composition, the tool can help you to optimize your
output further.
8.7 Deployment
Theres no one right way to deploy our application. npm run build provides us something static to
host. If you drop that on a suitable server, it will just work. One neat way to deal with it for small
demos is to piggyback on GitHub Pages.
http://webpack.github.io/analyse/
https://github.com/webpack/analyse
{
...
"scripts": {
"deploy": "gh-pages -d build",
...
},
...
}
If you execute npm run deploy now and everything goes fine, you should have your applica-
tion hosted through GitHub Pages. You should find it at https://<name>.github.io/<project>
(github.com/<name>/<project> at GitHub) assuming it worked.
If you need a more elaborate setup, you can use the Node.js API that gh-pages provides.
The default CLI tool it provides is often enough, though.
8.8 Conclusion
Beyond the features discussed, Webpack allows you to lazy load content through require.ensure.
This is handy if you happen to have a specific dependency on some view and want to load it when
you need it.
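As a rough illustration (the module path and component name below are hypothetical), a lazily loaded view could be pulled in like this:
// Load a heavy view only when the user actually needs it
require.ensure([], require => {
  // Depending on the module format, the component may live on .default
  const ChartView = require('./components/ChartView.jsx');

  // ... render ChartView here
});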
Our Kanban application is now ready to be served. We went from a chunky build to a slim one.
Even better, the production version can benefit from caching and it is able to invalidate it.
If you wanted to develop the project further, it could be a good idea to rethink the project structure.
I've discussed the topic at the Structuring React Projects appendix. It can be beneficial to read the
Authoring Packages chapter for more ideas on how to improve the npm setup of your project.
https://www.npmjs.com/package/gh-pages
https://webpack.github.io/docs/code-splitting.html
III Advanced Techniques
There are a variety of advanced Webpack and React techniques that are good to be aware of. Linting
can improve the quality of your code as it allows you to spot potential issues earlier. We will also
discuss various ways you can use Webpack to bundle your application.
Besides consuming libraries, it can be fun to create them. As a result, I will discuss common authoring
related concerns and show how to get libraries out there with minimal effort. There are a bunch of
smaller tricks that you should be aware of and that will make your life as a library author easier.
Styling React is a complicated topic in itself. There are multiple ways to achieve it and there's no
clear consensus on what the correct way is in the context of React. I will give you a good idea of
the current situation.
9. Testing React
Testing allows us to make sure everything works as we expect. It provides reassurance when making
changes to our code. Even a small change could break something crucial, and without tests we might
not realize the mistake until it has been deployed into production. It is still possible to break things,
but tests allow us to catch mistakes early in the development cycle. You can consider them as a safety
net.
Testing pyramid
Levels of testing can be characterized using the concept of the testing pyramid popularized by
Mike Cohn. He splits testing into three levels: unit, service, and user interface. He posits that you
should have unit tests the most, service level (integration) tests next, and on top of it all, user
interface level tests.
Each of these levels provides us information about the way the system behaves. They give us
confidence that the code works the way we imagine it should. On a high level, we can assert that
the application follows its specification. On a low level, we can assert that some particular function
operates the way we mean it to operate.
When studying a new system, testing can be used in an exploratory manner. The same idea is useful
for debugging. We'll make a series of assertions to help us isolate the problem and fix it.
There are various techniques we can use to cover the testing pyramid. This is in no way an exhaustive
list. It's more about giving you some idea of what is out there. We'll use a couple of these against
our project later in this chapter.
Unit Testing
Unit testing is all about testing a piece of code, a unit, in isolation. We can, for instance, perform
unit tests against a single function to assert the way it behaves. The definition of a unit can be larger
than this, though. At some point we'll arrive at the realm of integration testing. By doing that, we
want to assert that parts of a system work together as they should.
Once you have gone far enough, you can refactor your code with confidence.
Testing doesn't come without its cost. Now you have two codebases to maintain. What if your
tests aren't maintained well, or worse, are faulty? Even though testing comes with some cost, it
is extremely valuable, especially as your project grows. It keeps the complexity maintainable and
allows you to proceed with confidence.
Acceptance Testing
Unit level tests look at the system from a technical perspective. Acceptance tests are at the other end
of the spectrum. They are more concerned with how the system looks to the user. Here we
are exercising every piece that lies below the user interface. Integration tests fit between these two
ends.
Acceptance tests allow us to measure qualitative concerns. We can, for example, assert that certain
elements are visible to the user. We can also have performance requirements and test against those.
Tools, such as Selenium, allow us to automate this process and perform acceptance testing across
various browsers. This can help us to discover user interface level issues our tests might miss
otherwise. Sometimes, browsers behave in wildly different manners, and this in turn can cause
interesting yet undesirable behavior.
Property Based Testing
There can be as many of these tests as we want. We can generate them in a pseudo-random way.
This means we'll be able to replay the same tests if we want. This allows us to reproduce possible
bugs we might find.
The biggest advantage of the approach is that it allows us to test against values and ranges we might
not test otherwise. Computers are good at generating tests. The problem lies in figuring out good
invariants to test against.
This type of testing is very popular in Haskell. In particular, QuickCheck made the
approach well-known, and there are implementations for other languages as well. In
the JavaScript environment, JSVerify can be a good starting point.
If you want to check invariants during runtime while developing, a package known as
invariant can come in handy. Facebook uses it for some extra safety with React and Flux.
http://www.seleniumhq.org/
https://hackage.haskell.org/package/QuickCheck
https://jsverify.github.io/
https://www.npmjs.com/package/invariant
Mutation Testing
Mutation testing allows you to test your tests. Mutation testing frameworks will mutate your source
code in various ways and see how your tests behave. It may reveal parts of code that you might be
able to remove or simplify based on your tests and their coverage. It's not a particularly popular
technique, but it's good to be aware of it.
Test Runner
Karma is a test runner. It allows you to execute tests against a browser. In this case, we'll be using
PhantomJS, a popular WebKit based headless browser. Karma performs most of the hard work for
us. It is possible to configure it to work with various testing tools depending on your preferences.
Testem is a valid alternative to Karma. You'll likely find a few others as there's no lack of testing
tools for JavaScript.
Unit Testing
Mocha will be used for structuring tests. It follows a simple describe, it format. It doesn't specify
assertions in any way. We'll be using Node.js assert as that's enough for our purposes. Alternatives,
such as Chai, provide more powerful and expressive syntax.
Facebook's Jest is a popular alternative to Mocha. It is based on Jasmine, another popular tool,
and takes less setup than Mocha. Unfortunately, there are some Node.js version related issues, and
there are a few features (mainly auto-mocking by default) that can be a little counter-intuitive. It's
a valid alternative, though.
rewire-webpack allows you to manipulate your module behavior to make unit testing
easier. It uses rewire internally. If you need to mock dependencies, this is a good way to
go. An alternative way to use rewire is to go through babel-plugin-rewire.
https://karma-runner.github.io
https://mochajs.org/
https://github.com/cesarandreu/web-app
http://phantomjs.org/
https://github.com/airportyh/testem
https://nodejs.org/api/assert.html
http://chaijs.com/
https://facebook.github.io/jest/
https://jasmine.github.io/
https://github.com/jhnns/rewire-webpack
https://github.com/jhnns/rewire
https://www.npmjs.com/package/babel-plugin-rewire
Code Coverage
In order to get a better idea of the code coverage of our tests, we'll be using Istanbul. It will provide
us with an HTML report to study. This will help us to understand what parts of the code could use tests.
It doesn't tell us anything about the quality of the tests, however. It just tells us that we have hit some
particular branches of the code with them.
Installing Dependencies
Our test setup will require a lot of new dependencies. Execute the following command to get them
installed:
{
...
"scripts": {
"stats": "webpack --profile --json > stats.json",
"build": "webpack",
"start": "webpack-dev-server",
"test": "karma start",
"tdd": "karma start --auto-watch --no-single-run"
},
...
}
npm run test, or just npm test, will simply execute our tests. npm run tdd will keep on running the
tests as we work on the project. That's what you'll be relying on a lot during development.
https://gotwarlost.github.io/istanbul/
http://blanketjs.org/
/tests
demo_test.js
editable_test.jsx
note_store_test.js
note_test.jsx
There are multiple available conventions for this. One alternative is to push your tests to the
component level. For instance, you could have a directory per component. That directory would
contain the component, associated styling, and tests. The tests directory and the tests could be named
specs instead. In this case, you would have /specs and demo_spec.js, for example.
Configuring Karma
We are still missing a couple of important bits to make this setup work. We'll need to configure both
Karma and Webpack. We can set up Karma first:
karma.conf.js
// Reference: http://karma-runner.github.io/0.13/config/configuration-file.html
module.exports = function karmaConfig (config) {
config.set({
frameworks: [
// Reference: https://github.com/karma-runner/karma-mocha
// Set framework to mocha
'mocha'
],
reporters: [
// Reference: https://github.com/mlex/karma-spec-reporter
// Set reporter to print detailed results to console
'spec',
// Reference: https://github.com/karma-runner/karma-coverage
// Output code coverage files
'coverage'
],
files: [
// Reference: https://www.npmjs.com/package/phantomjs-polyfill
// Needed because React.js requires bind and phantomjs does not support it
'node_modules/phantomjs-polyfill/bind-polyfill.js',
// Grab all files under tests that contain _test
'tests/**/*_test.*'
],
preprocessors: {
// Reference: http://webpack.github.io/docs/testing.html
// Reference: https://github.com/webpack/karma-webpack
// Convert files with webpack and load sourcemaps
'tests/**/*_test.*': ['webpack', 'sourcemap']
},
browsers: [
// Run tests using PhantomJS
'PhantomJS'
],
singleRun: true,
As you can see from the comments, you can configure Karma in a variety of ways. For example,
you could point it to Chrome or even multiple browsers at once.
We still need to write some Webpack specific configuration to make this all work correctly.
Testing React 169
Configuring Webpack
Webpack will require some special configuration of its own. In order to make Karma find the code
we want, we need to point Webpack to it. In addition, we need to configure
isparta-instrumenter-loader so that our code coverage report generation will work through Istanbul. isparta is needed
given we are using Babel features. Consider the test configuration below:
webpack.config.js
...
const TARGET = process.env.npm_lifecycle_event;
const PATHS = {
app: path.join(__dirname, 'app'),
build: path.join(__dirname, 'build'),
style: path.join(__dirname, 'app/main.css'),
test: path.join(__dirname, 'tests')
};
process.env.BABEL_ENV = TARGET;
const common = {
entry: {
app: PATHS.app,
style: PATHS.style
},
...
};
vendor: Object.keys(pkg.dependencies).filter(function(v) {
return v !== 'alt-utils';
})
}),
style: PATHS.style
},
...
});
}
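The test-specific configuration block itself is not visible here. A sketch of what it might contain, assuming a webpack-merge based block keyed on the test and tdd npm scripts and the isparta-instrumenter-loader mentioned above, is shown below:
if(TARGET === 'test' || TARGET === 'tdd') {
  module.exports = merge(common, {
    devtool: 'inline-source-map',
    module: {
      preLoaders: [
        // Instrument application code so Istanbul can report coverage
        {
          test: /\.jsx?$/,
          loaders: ['isparta-instrumenter'],
          include: PATHS.app
        }
      ],
      loaders: [
        // Process the tests themselves through Babel
        {
          test: /\.jsx?$/,
          loaders: ['babel?cacheDirectory'],
          include: PATHS.test
        }
      ]
    }
  });
}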
Given there are no tests yet, the setup is supposed to fail, so it's all good. Even npm run tdd works.
Note that you can kill that process using ctrl-c.
describe('add', () => {
it('adds', () => {
assert.equal(1 + 1, 2);
});
});
If you trigger npm test now, you should see something like this:
add
adds
Even better, we can try npm run tdd now. Execute it and try tweaking the test. Make it fail, for
example. As you can see, this provides us a nice testing workflow. Now that we've tested that testing
works, we can focus on real work.
Ideally, your unit tests should test only one thing at a time. Keeping them simple is a good idea
as that will help when you are debugging your code. As discussed earlier, unit tests won't prove the
absence of bugs. Instead, they prove that the code is correct for that specific test. This is what makes
unit tests useful for debugging. You can use them to prove your assumptions.
To get started, we should make a test plan for Editable and get some testing done. Note that you can
implement these tests using npm run tdd and type them as you go. Feel free to try to break things
to get a better feel for it.
I am sure there are a couple of extra cases that would be good to test, but these will get us started
and help us to understand how to write unit tests for React components.
describe('Editable', () => {
it('renders value', () => {
const value = 'value';
const component = renderIntoDocument(
<Editable value={value} />
);
// Grab the rendered value element based on its class
const valueComponent = findRenderedDOMComponentWithClass(
component, 'value');
assert.equal(valueComponent.textContent, value);
});
});
There are a couple of important parts here that are good to understand as they'll repeat later:
Given the React test API can be somewhat verbose, people have developed lighter alternatives
to it. See jquense/teaspoon, Legitcode/tests, and react-test-tree, for example.
React provides a lighter way to assert component behavior without the DOM. Shallow
rendering is still missing some functionality, but it provides another way to test React
components.
We can follow a similar idea here as before. This case is more complex, though. First, we need to
enter the edit mode somehow. After that, we need to check that the input displays the correct value.
Consider the implementation below:
tests/editable_test.jsx
https://facebook.github.io/react/docs/test-utils.html
https://github.com/jquense/teaspoon
https://github.com/Legitcode/tests
https://github.com/QubitProducts/react-test-tree
https://facebook.github.io/react/docs/test-utils.html#shallow-rendering
describe('Editable', () => {
...
assert.equal(triggered, true);
});
});
Simulate.click triggers the onClick behavior we've defined at Editable. There are methods like
this for simulating other user input as well.
There's still some work left to do. We'll want to check out the onEdit behavior next.
As per our component definition, onEdit should get triggered after the user triggers a blur event
somehow. We can assert that it receives the input value it expects. This probably could be split up
into two separate tests, but this will do just fine:
tests/editable_test.jsx
...
describe('Editable', () => {
...
Simulate.blur(input);
assert.equal(triggered, true);
});
});
findRenderedDOMComponentWithTag is used to match against the input tag. It's the same idea as
with classes. There's also a scry variant that works in a similar way but returns all the matches instead of exactly one.
Compared to the earlier tests, there isn't much new here. We perform an assertion at onEdit and
trigger the behavior through Simulate.blur, but apart from that we're in familiar territory. You
could probably start refactoring some common parts of the tests into separate functions now, but
we can live with the current solution. At least we're being verbose about what we are doing.
One more test to go.
Checking that Editable allows deletion is a similar case as triggering onEdit. We just check that
the callback gets triggered:
tests/editable_test.jsx
describe('Editable', () => {
...
assert.equal(deleted, true);
});
});
We have some basic tests in place now, but what about test coverage?
Istanbul coverage
Based on the statistics, we're doing quite well. We are still missing some branches, but we cover most of
Editable, so that's nice. You can try removing tests to see how the statistics change. You can also
try to figure out which branches aren't covered.
You can see the component specific report by checking out components/Editable.jsx.html in your
browser. That will show you that checkEnter isn't covered by any test yet. It would be a good idea
to implement the missing test for that at some point.
Testing Note
Even though Note is a trivial wrapper component, it is useful to test it as this will help us understand
how to deal with React DnD. It is a testable library by design. Its testing documentation goes into
great detail.
Instead of HTML5Backend, we can rely on TestBackend in this case. The hard part is in building the
context we need for testing that Note does indeed render its contents. Execute
to get the backend installed. The test below illustrates the basic idea:
tests/note_test.jsx
describe('Note', () => {
it('renders children', () => {
const test = 'test';
const NoteContent = wrapInTestContext(Note);
const component = renderIntoDocument(
<NoteContent id="demo">{test}</NoteContent>
);
assert.equal(component.props.children, test);
});
});
// https://gaearon.github.io/react-dnd/docs-testing.html
function wrapInTestContext(DecoratedComponent) {
@DragDropContext(TestBackend)
class TestContextContainer extends React.Component {
render() {
return <DecoratedComponent {...this.props} />;
}
}
return TestContextContainer;
}
The test itself is easy. We just check that the children prop was set as we expect. The test could be improved by checking the rendered output through the DOM.
Besides components, we should test our store. We could test against special cases and see how NoteStore behaves with various types of input. Covering the common paths gives us a useful minimum.
Creating new notes is simple. We just need to hit NoteActions.create and see that NoteStore.getState results contain the newly created Note:
tests/note_store_test.js
describe('NoteStore', () => {
it('creates notes', () => {
const task = 'test';
NoteActions.create({task});

const state = NoteStore.getState();

assert.equal(state.notes.length, 1);
assert.equal(state.notes[0].task, task);
});
});
Apart from the imports needed, this is simpler than our React tests. The test logic is easy to follow.
In order to test updating, we'll need to create a Note first. After that, we can change its content somehow. Finally, we can assert that the state changed:
tests/note_store_test.js
...
describe('NoteStore', () => {
...
NoteActions.create({task});
assert.equal(state.notes.length, 1);
assert.equal(state.notes[0].task, updatedTask);
});
});
The problem is that assert.equal(state.notes.length, 1); will fail. This is because our NoteStore is a singleton. Our first test already created a Note into it. There are two ways to solve this:
1. Push alt.createStore to a higher level so each test gets a fresh store. Currently we create the association at the module level, and that is what causes the issue.
2. Flush the contents of the Alt store before each test.
The latter is simpler, so we'll go with that:
...
describe('NoteStore', () => {
beforeEach(() => {
alt.flush();
});
...
});
After this little tweak, our tests behave the way we expect them to. This just shows that sometimes we can make mistakes even in our tests. It is a good idea to understand what they are doing under the hood.
Testing delete is straightforward as well. We'll need to create a Note. After that, we can try to delete it by id and assert that there are no notes left:
tests/note_store_test.js
describe('NoteStore', () => {
...
NoteActions.delete(note.id);
assert.equal(state.notes.length, 0);
});
});
It would be a good idea to start pushing some of the common bits to shared functions now. At least this way the tests will remain self-contained even if there's more code.
There's only one test left, for get. Given it's a public method of NoteStore, we can go directly through NoteStore.get:
tests/note_store_test.js
describe('NoteStore', () => {
...
assert.equal(notes.length, 1);
assert.equal(notes[0].task, task);
});
});
This test proves that the logic works on a basic level. It would be a good idea to specify what happens
with invalid data, though. We might want to react to that somehow.
9.5 Conclusion
We have some basic unit tests in place for our Kanban application now. It's far from being tested completely. Nonetheless, we've managed to cover some crucial parts of it. As a result, we can have more confidence that it operates correctly. It would be a nice idea to test the remainder, though, and perhaps refactor those tests a little. There are also special cases that we haven't given a lot of thought to.
We are also missing acceptance tests completely. Fortunately, that's a topic that can be solved outside of React, Alt, and such. Nightwatch is a tool that runs on top of a Selenium server and allows you to write these kinds of tests. It will take some effort to pick up a tool like this. It will allow you to test more qualitative aspects of your application, though.
10. Typing with React
Just like linting, typing is another feature that can make our lives easier, especially when working with larger codebases. Some languages are very strict about this, but as you know, JavaScript is very flexible.
Flexibility is useful during prototyping. Unfortunately, this also means it's going to be easy to make mistakes and not notice them until it's too late. This is why testing and typing are so important.
Typing is a good way to strengthen your code and make it harder to break. It also serves as a form of documentation for other developers.
In React, you document the expectations of your components using propTypes. It is possible to go beyond this by using Flow, a syntax for gradual typing. There are also TypeScript type definitions for React, but we won't go into that.
Annotation Styles
The way you annotate your components depends on the way you declare them. I've given simplified examples using various syntaxes below:
ES5
module.exports = React.createClass({
displayName: 'Editable',
propTypes: {
value: React.PropTypes.string
},
defaultProps: {
value: ''
},
...
});
ES6
Editable.propTypes = {
value: React.PropTypes.string
};
Editable.defaultProps = {
value: ''
};
Annotation Types
propTypes support basic types as follows: React.PropTypes.[array, bool, func, number, object, string]. In addition, there's a special node type that refers to anything that can be rendered by React. any includes literally anything. element maps to a React element. Furthermore, there are functions such as arrayOf, objectOf, oneOf, oneOfType, instanceOf, and shape for describing more complex structures.
It's also possible to implement custom validators by passing a function using the following signature to a prop type: function(props, propName, componentName). If the custom validation fails, you should return an error (i.e., return new Error('Not a number!')).
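As a rough illustration of that signature, a hypothetical age prop could be validated like this (the prop and component names here are made up):

const propTypes = {
  age: function(props, propName, componentName) {
    // Return an Error to signal a failed validation
    if(typeof props[propName] !== 'number') {
      return new Error(
        'Invalid prop `' + propName + '` supplied to `' + componentName + '`. Expected a number.'
      );
    }
  }
};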
The documentation goes into further detail.
Annotating Lanes
Lanes provides a good starting point. It expects an array of lanes. We can make it optional and default to an empty list. This means we can simplify App a little:
app/components/App.jsx
// instead of
lanes: () => LaneStore.getState().lanes || []
// we can do
lanes: () => LaneStore.getState().lanes
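The Lanes annotation itself could look roughly like the sketch below. This is an assumption based on the description above; the exact file contents depend on your implementation:

app/components/Lanes.jsx

Lanes.propTypes = {
  lanes: React.PropTypes.array
};
Lanes.defaultProps = {
  lanes: []
};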
Now we've documented what Lanes expects. App is a little neater as well. This doesn't help much, though, if the lane items have an invalid format. We should annotate Lane and its children to guard against this case.
Annotating Lane
As per our implicit definition, Lane expects an id, a name, and a connectDropTarget hook provided by React DnD. id should be required, as a lane without one doesn't make any sense. The rest can remain optional. Translated to propTypes we would end up with this:
app/components/Lane.jsx
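A sketch of what that could look like follows; treat the exact prop list as an assumption based on the description above:

Lane.propTypes = {
  id: React.PropTypes.string.isRequired,
  name: React.PropTypes.string,
  connectDropTarget: React.PropTypes.func
};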
If our basic data model is somehow wrong now, we'll know about it. To harden our system further, we should annotate the notes contained by lanes.
Annotating Notes
As you might remember from the implementation, Notes accepts notes, onEdit, and onDelete handlers. We can apply the same logic to notes as for Lanes. If the array isn't provided, we can default to an empty one. We can use empty functions as default handlers if they aren't provided. The idea would translate to code as follows:
app/components/Notes.jsx
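A minimal sketch of those annotations, assuming the props described above:

Notes.propTypes = {
  notes: React.PropTypes.array,
  onEdit: React.PropTypes.func,
  onDelete: React.PropTypes.func
};
Notes.defaultProps = {
  notes: [],
  onEdit: () => {},
  onDelete: () => {}
};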
Even though useful, this doesn't give any guarantees about the shape of the individual items. We could document it here to get a warning earlier, but it feels like a better idea to push that to the Note level. After all, that's what we did with Lanes and Lane earlier.
Annotating Note
In our implementation, Note works as a wrapper component that renders its content. Its primary
purpose is to provide drag and drop related hooks. As per our implementation, it requires an
id prop. You can also pass an optional onMove handler to it. It receives connectDragSource and
connectDropSource through React DnD. In annotation format we get:
app/components/Note.jsx
...

// instead of
export default class Note extends React.Component {
// we can do
class Note extends React.Component {
  ...
}
Note.propTypes = {
id: React.PropTypes.string.isRequired,
editing: React.PropTypes.bool,
connectDragSource: React.PropTypes.func,
connectDropTarget: React.PropTypes.func,
isDragging: React.PropTypes.bool,
onMove: React.PropTypes.func
};
Note.defaultProps = {
onMove: () => {}
};
We've annotated almost everything we need. There's just one bit remaining, namely Editable.
Annotating Editable
In our system, Editable takes care of some of the heavy lifting. It is able to render an optional
value. It should receive the onEdit hook. onDelete is optional. Using the annotation syntax we get
the following:
app/components/Editable.jsx
// instead of
if(this.props.onEdit) {
  this.props.onEdit(value);
}
// we can do
this.props.onEdit(e.target.value);
};
}
Editable.propTypes = {
value: React.PropTypes.string,
editing: React.PropTypes.bool,
onEdit: React.PropTypes.func.isRequired,
onDelete: React.PropTypes.func,
onValueClick: React.PropTypes.func
};
Editable.defaultProps = {
value: '',
editing: false,
onEdit: () => {}
};
We have annotated our system now. In case we manage to break our data model somehow, we'll know about it during development. This is very nice considering future efforts. The earlier you catch and fix problems like these, the easier it is to build on top of your work.
Even though propTypes are nice, they are also a little verbose. Flow typing can help us in that regard.
Flow
Facebook's Flow provides gradual typing for JavaScript. This means you can add types to your code as you need them. We can achieve similar results as with propTypes, and we can add additional invariants to our code as needed. To give you an idea, consider the following trivial example:
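A minimal version of that example, matching the demo.js set up later in this chapter:

// add takes two numbers and returns a number
function add(x: number, y: number): number {
  return x + y;
}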
The definition states that add should receive two numbers and return one as a result. This is the way it's typically done in statically typed languages. Now we can benefit from the same idea in JavaScript.
Flow relies on a static type checker that has to be installed separately. As you run the tool, it will evaluate your code and provide recommendations. To ease development, there's a way to evaluate Flow types during runtime. This can be achieved through a Babel plugin.
At the time of this writing, major editors and IDEs have poor support for Flow annotations.
This may change in the future.
Setting Up Flow
There are pre-built binaries for common platforms. You can also install it through Homebrew on
Mac OS X (brew install flow).
As Flow relies on configuration and won't run without it, we should generate some. Execute flow init. That will generate a .flowconfig file that can be used for advanced configuration.
We are going to need some further tweaks to adapt it to our environment. We'll want to make sure it skips ./node_modules while parsing the ./app directory. In addition, we need to set up an entry for Flow interfaces and make Flow ignore certain language features.
Since we want to avoid parsing node_modules, we should tweak the configuration as follows:
.flowconfig
[ignore]
.*/node_modules
[include]
./app
[libs]
./interfaces
[options]
esproposal.decorators=ignore
esproposal.class_instance_fields=ignore
Running Flow
Running Flow is simple: just execute flow check. You will likely see something like this:
$ flow check
find: ..././interfaces: No such file or directory
Found 0 errors
We'll fix that warning in a bit. Before that, we should make it easy to trigger the check through npm. Add the following bit to your package.json:
package.json
{
...
"scripts": {
...
"flow": "flow check"
},
...
}
After this, we can execute npm run flow, and we don't have to care about the exact details of how to call it. Note that if the process fails, it will give a nasty looking npm error (multiple lines of npm ERR!). If you want to disable that, you can run npm run flow --silent instead to suppress it.
To gain some extra performance, Flow can be run in a daemon mode. Simply execute flow to start it. If you execute flow again, you should get instant results. The daemon may be closed using flow stop.
Setting Up a Demo
Flow expects that you annotate the files in which you use it using the declaration /* @flow */ at the
beginning of the file. Alternatively, you can try running flow check --all. Keep in mind that it can
be slow, as it will process each file you have included, regardless of whether it has been annotated
with @flow! We will stick to the former in this project.
To get a better idea of what Flow output looks like, we can try a little demo. Set it up as follows:
demo.js
/* @flow */
function add(x: number, y: number): number {
return x + y;
}
add(2, 4);

// This call has the wrong types on purpose
add('foo', 'bar');
Run npm run flow --silent now. If this worked correctly, you should see something like this:
demo.js:9
9: add('foo', 'bar');
^^^^^^^^^^^^^^^^^ function call
9: add('foo', 'bar');
^^^^^ string. This type is incompatible with
2: function add(x: number, y: number): number {
^^^^^^ number
Found 2 errors
This means everything is working as it should and Flow caught a nasty programming error for us. Someone was trying to pass values of an incompatible type to add. That's good to know before going to production.
Given we know that Flow can catch issues for us, get rid of the demo file before moving on.
Next, we can annotate one of our components. The changes to Editable below illustrate the idea:
app/components/Editable.jsx

/* @flow */
import React from 'react';
value: '',
editing: false,
onEdit: () => {}
};
// instead of
render() {
// we can do
render(): Object {
  ...
}

// instead of
renderEdit = () => {
// we can do
renderEdit: () => Object = () => {
  ...
};

// instead of
renderValue = () => {
// we can do
renderValue: () => Object = () => {
  ...
};

// instead of
renderDelete = () => {
// we can do
renderDelete: () => Object = () => {
  ...
};

// instead of
checkEnter = (e) => {
// we can do
checkEnter: (e: Object) => void = (e) => {
  ...
};

// instead of
finishEdit = (e) => {
// we can do
finishEdit: (e: Object) => void = (e) => {
  ...
};
}
Editable.propTypes = {
value: React.PropTypes.string,
editing: React.PropTypes.bool,
onEdit: React.PropTypes.func.isRequired,
onDelete: React.PropTypes.func,
onValueClick: React.PropTypes.func
};
Editable.defaultProps = {
value: '',
editing: false,
onEdit: () => {}
};
If you execute npm run flow --silent, you shouldn't see any errors:
Found 0 errors
We can annotate Note the same way:
app/components/Note.jsx

/* @flow */
import React from 'react';
import {DragSource, DropTarget} from 'react-dnd';
import ItemTypes from '../constants/itemTypes';
...
onMove: () => {}
};
// instead of
render() {
// we can do
render(): Object {
  ...
}
}
Note.propTypes = {
id: React.PropTypes.string.isRequired,
editing: React.PropTypes.bool,
connectDragSource: React.PropTypes.func,
connectDropTarget: React.PropTypes.func,
isDragging: React.PropTypes.bool,
onMove: React.PropTypes.func
};
Note.defaultProps = {
onMove: () => {}
};
Executing npm run flow --silent should yield an error like this:
Found 1 error
The error means Flow is looking for the react-dnd module, but is failing to find it. This is where that interfaces directory comes in. Given Flow expects a type definition for the module, we should give it one as follows:
interfaces/react-dnd.js
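One possible way to express that is sketched below; the exact declaration syntax accepted by your Flow version may differ:

declare module 'react-dnd' {
  declare function DragSource(...rest: Array<any>): any;
  declare function DropTarget(...rest: Array<any>): any;
}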
This states that react-dnd is a module that exports two functions that can return any type. If we have third party code in our application, we need to implement declarations such as this to interface with it. To give you another example, the interface for alt-container could look like this:
interfaces/alt-container.js
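Again, a sketch of what that declaration could look like:

declare module 'alt-container' {
  declare var exports: Object;
}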
alt-container is a module that exports an object. We could of course be more specific than this. The Flow documentation goes into greater detail about the topic.
If you execute npm run flow --silent now, you shouldn't get any errors or warnings at all:
Found 0 errors
As of writing, Flow doesn't provide nice means to type function based components (#1081). Once the situation changes, I'll show you how to do that. For now, this little demo should be enough to give you a rough idea of how to use Flow.
There are still some rough areas, but when it works, it can help to find potential problems sooner.
The same annotations can be useful for runtime checking during development. A Babel plugin known as babel-plugin-typecheck can achieve that. Set it up through your Babel configuration like this:
.babelrc
{
...
"env": {
"start": {
"presets": [
"react-hmre"
],
"plugins": [
[
"typecheck"
]
]
}
}
}
After this change, the type checks will get executed at runtime during development. The Flow static checker is able to catch more errors than this, but runtime checks have their benefits, too, and they're far better than nothing. You can test the setup by breaking the render() of Editable on purpose. Try returning a string from there and see how it blows up. Flow would catch that as well, but it's nice to get feedback like this during development too.
10.6 TypeScript
Microsoft's TypeScript is a good alternative to Flow. It has supported React officially since version 1.6, which introduced JSX support. It is a more established solution compared to Flow. As a result, you can find a large number of type definitions for popular libraries, React included.
I won't be covering TypeScript in detail as the project would have to change considerably in order to introduce it. Instead, I encourage you to study the available Webpack loaders:
ts-loader
awesome-typescript-loader
typescript-loader
This section may be expanded later depending on the adoption of TypeScript within the React
community.
10.7 Conclusion
Currently, the state of type checking in React is still in a bit of flux. propTypes are the most stable solution. Even if a little verbose, they are highly useful for documenting what your components expect. This can save your nerves during development.
More advanced solutions, such as Flow and TypeScript, are still in a growing stage. There are still some sore points, but both show great promise. Typing is invaluable especially as your codebase grows. Early on, flexibility has more value, but as you develop and understand your problems better, solidifying your design may be worth your while.
11. Linting in Webpack
Nothing is easier than making mistakes when coding in JavaScript. Linting is one of those techniques that can help you to make fewer of them. You can spot issues before they become actual problems.
Better yet, modern editors and IDEs offer strong support for popular tools. This means you can spot possible issues as you are developing. Despite this, it is a good idea to set them up with Webpack. That allows you to cancel a production build that might not be up to your standards, for example.
JSHint is a good example of a traditional linter. jshint-loader connects it to Webpack through a preLoaders definition like this:

var common = {
...
module: {
preLoaders: [
{
test: /\.js?$/,
loaders: ['jshint'],
// define an include so we check just the files we need
include: PATHS.app
}
]
},
};
The preLoaders section of the configuration gets executed before loaders. If linting fails, you'll know about it first. There's a third section, postLoaders, that gets executed after loaders. You could include code coverage checking there during testing, for instance.
JSHint will look into specific rules to apply from .jshintrc. You can also define custom settings within a jshint object at your Webpack configuration. The exact configuration options have been covered in the JSHint documentation in detail. .jshintrc could look like this:
.jshintrc
{
"browser": true,
"camelcase": false,
"esnext": true,
"indent": 2,
"latedef": false,
"newcap": true,
"quotmark": "double"
}
This tells JSHint we're operating within a browser environment, don't care about linting for camelcase naming, want to use double quotes everywhere, and so on.
If you try running JSHint on our project, you will get a lot of output. It's not the ideal solution for React projects. ESLint will be more useful, so we'll be setting it up next for better insights. Remember to remove the JSHint configuration before proceeding further.
ESLint
ESLint is a recent linting solution for JavaScript. It builds on top of ideas presented by JSLint and JSHint. More importantly, it allows you to develop custom rules. As a result, a nice set of rules has been developed for React in the form of eslint-plugin-react.
Since v1.4.0 ESLint supports a feature known as autofixing. It allows you to perform certain
rule fixes automatically. To activate it, pass the flag --fix to the tool.
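The installation itself isn't shown above. Assuming npm, it boils down to something like this:

npm i eslint eslint-plugin-react --save-dev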
This will add ESLint and the plugin we want to use as our project development dependencies. Next, we'll need to do some configuration:
package.json
"scripts": {
...
"lint": "eslint . --ext .js --ext .jsx"
}
...
This will trigger ESLint against all JS and JSX files of our project. That will lint a bit too much. Set up .eslintignore to the project root like this to skip build/:
.eslintignore
build/
Next, activate ESLint rules and eslint-plugin-react through a .eslintrc file at the project root:
.eslintrc
{
"extends": "eslint:recommended",
"parserOptions": {
"ecmaVersion": 6,
"ecmaFeatures": {
"jsx": true
},
"sourceType": "module"
},
"env": {
"browser": true,
"node": true
},
"plugins": [
"react"
],
"rules": {
"no-console": 0,
"new-cap": 0,
"strict": 0,
"no-underscore-dangle": 0,
"no-use-before-define": 0,
"eol-last": 0,
"quotes": [2, "single"],
"jsx-quotes": 1,
"react/jsx-no-undef": 1,
"react/jsx-uses-react": 1,
"react/jsx-uses-vars": 1
}
}
ESLint supports ES6 features through configuration. You will have to specify the features
to use through the ecmaFeatures property.
Some rules, such as quotes, accept an array instead. This allows you to pass extra parameters to
them. Refer to the rules documentation for specifics.
The react/ rules listed above are just a small subset of all available rules. Pick rules from eslint-plugin-react as needed.
Note that you can write ESLint configuration directly to package.json. Set up an eslintConfig field and write your declarations below it.
"scripts": {
...
"lint": "eslint . --ext .js --ext .jsx || true"
}
...
The problem with this approach is that if you invoke lint through some other command, it will
pass even if there are failures. If you have another script that does something like npm run lint &&
npm run build, it will build regardless of the output of the first command!
Note that eslint-loader will use a globally installed version of ESLint unless you have one
included with the project itself! Make sure you have ESLint as a development dependency
to avoid strange behavior.
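If eslint-loader isn't part of your project yet, installing it as a development dependency could look like this:

npm i eslint-loader --save-dev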
Next, we need to tweak our development configuration to include it. Add the following section to
it:
webpack.config.js
var common = {
...
module: {
preLoaders: [
{
test: /\.jsx?$/,
loaders: ['eslint'],
include: PATHS.app
}
]
},
};
We are including the configuration into common so that linting always gets performed. This way you can make sure your production build passes your rules while you also benefit from linting during development.
If you execute npm start now and break some linting rule while developing, you should see that in
the terminal output. The same should happen when you build the project.
Sometimes you'll want to skip linting for a file or a portion of it. ESLint supports special comments for that:
// everything
/* eslint-disable */
...
/* eslint-enable */
// specific rule
/* eslint-disable no-unused-vars */
...
/* eslint-enable no-unused-vars */
// tweaking a rule
/* eslint no-comma-dangle:1 */
Note that the rule specific examples assume you have the rules in your configuration in the first
place! You cannot specify new rules here. Instead, you can modify the behavior of existing rules.
Setting Environment
Sometimes, you may want to run ESLint in a specific environment, such as Node.js or Mocha. These environments have certain conventions of their own. For instance, Mocha relies on custom keywords (e.g., describe, it) and it's good if the linter doesn't choke on those.
ESLint provides two ways to deal with this: local and global. If you want to set it per file, you can use a declaration at the beginning of a file:
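For example, the following enables the Node.js and Mocha environments for a single file:

/* eslint-env node, mocha */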
Global configuration is possible as well. In this case, you can use the env key like this:
.eslintrc
{
"env": {
"browser": true,
"node": true,
"mocha": true
},
...
}
ESLint rules operate on the AST (Abstract Syntax Tree) of your code. Codemod is a related tool that allows you to perform large scale changes to a codebase through AST based transformations.
In ESLint's case, we just want to check the structure and report in case something is wrong. Getting a simple rule done is surprisingly easy:
1. Set up a new project named eslint-plugin-custom. You can replace custom with whatever you want. ESLint follows this naming convention.
2. Execute npm init -y to create a dummy package.json.
3. Set up index.js in the project root with content like this:
eslint-plugin-custom/index.js
module.exports = {
rules: {
demo: function(context) {
return {
Identifier: function(node) {
context.report(node, 'This is unexpected!');
}
};
}
}
};
In this case, we just report for every identifier found. In practice, you'll likely want to do something more complex than this, but it is a good starting point.
Next, you need to execute npm link within eslint-plugin-custom. This will make your plugin
visible within your system. npm link allows you to easily consume a development version of a
library you are developing. To reverse the link you can execute npm unlink when you feel like it.
If you want to do something serious, you should point to your plugin through package.json.
We need to alter our project configuration to make it find the plugin and the rule within.
.eslintrc
{
...
"plugins": [
"react",
"react",
"custom"
],
"rules": {
"custom/demo": 1,
...
}
}
If you invoke ESLint now, you should see a bunch of warnings. Mission accomplished!
Of course, the rule doesn't do anything useful yet. To move forward, I recommend checking out the official documentation about plugins and rules.
You can also check out some of the existing rules and plugins for inspiration to see how they achieve certain things. ESLint allows you to extend these rulesets through the extends property. It accepts either a path to a ruleset ("extends": "./node_modules/coding-standard/.eslintrc") or an array of paths. The entries are applied in the given order and later ones override the earlier ones.
ESLint Resources
Besides the official documentation available at eslint.org, you should check out the following blog
posts:
Lint Like It's 2015 - This post by Dan Abramov shows how to get ESLint to work well with Sublime Text.
Detect Problems in JavaScript Automatically with ESLint - A good tutorial on the topic.
Understanding the Real Advantages of Using ESLint - Evan Schultz's post digs into details.
eslint-plugin-smells - This plugin by Elijah Manor allows you to lint against various JavaScript smells. Recommended.
If you just want a starting point, you can pick one of the eslint-config- packages or go with the standard style. By the looks of it, standard has some issues with JSX, so be careful with that.
stylelint allows you to lint CSS against a set of rules. It can be connected to Webpack through postcss-loader like this:

...
var stylelint = require('stylelint');
...
var common = {
...
module: {
preLoaders: [
{
test: /\.css$/,
loaders: ['postcss'],
include: PATHS.app
},
...
],
...
},
postcss: function () {
return [stylelint({
rules: {
'color-hex-case': 'lower'
}
})];
},
...
}
If you define a CSS rule, such as background-color: #EFEFEF;, you should see a warning in your terminal. See the stylelint documentation for a full list of rules. npm lists possible stylelint rulesets. You consume them as a project dependency like this:
...
stylelint(configSuitcss)
Given stylelint is still under development, there's no CLI tool available yet. .stylelintrc type functionality is planned.
JSCS
Especially in a team environment, it can be annoying if one developer uses tabs and another uses spaces. There can also be discrepancies in space usage. Some like to use two spaces, some four, for indentation. In short, it can get pretty messy without any discipline. To solve this issue, JSCS allows you to define a style guide for your project.
Just like ESLint, JSCS has autofixing capabilities. To fix certain issues, you can invoke jscs --fix and it will modify your code.
jscs-loader provides Webpack hooks for the tool. Integration is similar to that of ESLint. You would define a .jscsrc with your style guide rules and use configuration like this:
module: {
preLoaders: [
{
test: /\.jsx?$/,
loaders: ['eslint', 'jscs'],
include: PATHS.app
}
]
}
.jscsrc
{
"esnext": true,
"preset": "google",
"requireCurlyBraces": true,
"requireParenthesesAroundIIFE": true,
"maximumLineLength": 120,
"validateLineBreaks": "LF",
"validateIndentation": 2,
"disallowKeywords": ["with"],
"disallowSpacesInsideObjectBrackets": null,
"disallowImplicitTypeConversion": ["string"],
"safeContextKeyword": "that",
"excludeFiles": [
"dist/**",
"node_modules/**"
]
}
ESLint implements a large part of the functionality provided by JSCS. It is possible to skip JSCS altogether provided you configure ESLint correctly. There's a large number of presets available for both.
11.7 EditorConfig
EditorConfig allows you to maintain a consistent coding style across different IDEs and editors. Some even come with built-in support. For others, you have to install a separate plugin. In addition to this, you'll need to set up a .editorconfig file like this:
.editorconfig
root = true

[*]
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[app/**.js]
indent_style = space
indent_size = 2
11.8 Conclusion
In this chapter, you learned how to lint your code using Webpack in various ways. It is one of those
techniques that yields benefits over the long term. You can fix possible problems before they become
actual issues.
12. Authoring Packages
npm is one of the reasons behind the popularity of Node.js. Even though it was used initially for
managing back-end packages, it has become increasingly popular for front-end usage as well. As
you saw in the earlier chapters, it is easy to consume npm packages through Webpack.
Eventually, you may want to author packages of your own. Publishing one is relatively easy. There
are a lot of smaller details to know, though. This chapter goes through those so that you can avoid
some of the common problems.
A package comes with a set of common files. The most important ones include:
index.js - On small projects it's enough to have the code at the root. On larger ones you may want to start splitting it up further.
package.json - npm metadata in JSON format.
README.md - README is the most important document of your project. It is written in
Markdown format and provides an overview. For simple projects the whole documentation
can fit there. It will be shown at the package page at npmjs.com.
LICENSE - You should include licensing information within your project. You can refer to it
from package.json.
CONTRIBUTING.md - A guide for potential contributors. How should the code be developed
and so on.
CHANGELOG.md - This document describes major changes per version. If you do major API
changes, it can be a good idea to cover them here. It is possible to generate the file based on
Git commit history, provided you write nice enough commits.
.travis.yml - Travis CI is a popular continuous integration platform that is free for open source
projects. You can run the tests of your package over multiple systems using it. There are other
alternatives of course, but Travis is very popular.
.gitignore - Ignore patterns for Git, i.e., which files shouldn't go under version control. It can be useful to ignore npm distribution files here so they don't clutter your repository.
.npmignore - Ignore patterns for npm. This describes which files shouldn't go to your distribution version. A good alternative is to use the files field at package.json. It allows you to maintain a whitelist of files to include into your distribution version.
.eslintignore - Ignore patterns for ESLint. Again, tool specific.
.eslintrc - Linting rules. You can use .jshintrc and such based on your preferences.
webpack.config.js - If you are using a simple setup, you might as well have the configuration
at project root.
In addition, you'll likely have various directories for source, tests, demos, documentation, and so on.
To give you a better idea, consider the annotated package.json of react-component-boilerplate below:
package.json
{
/* Name of the project */
"name": "react-component-boilerplate",
/* Brief description */
"description": "Boilerplate for React.js components",
/* Who is the author + optional email + optional site */
"author": "Juho Vepsäläinen <email goes here> (site goes here)",
/* Version of the package */
"version": "0.0.0",
/* `npm run <name>` */
"scripts": {
"start": "webpack-dev-server",
"gh-pages": "webpack",
"gh-pages:deploy": "gh-pages -d gh-pages",
"gh-pages:stats": "webpack --profile --json > stats.json",
"dist": "webpack",
"dist:min": "webpack",
"dist:modules": "babel ./src --out-dir ./dist-modules",
"homepage": "https://bebraw.github.io/react-component-boilerplate/",
"bugs": {
"url": "https://github.com/bebraw/react-component-boilerplate/issues"
},
/* Keywords related to package. */
/* Fill this well to make the package findable. */
"keywords": [
"react",
"reactjs",
"boilerplate"
],
/* Which license to use */
"license": "MIT"
}
As you can see, package.json can contain a lot of information. You can attach non-npm specific metadata there that can be used by tooling. Given this can bloat package.json, it may be preferable to keep that metadata in files of its own.
JSON doesn't support comments, even though I'm using them above. There are extended notations, such as Hjson, that do.
When creating a project, npm init respects the values set at ~/.npmrc. Hence it may be worth your while to set reasonable defaults there to save some time.
Publishing a Package
Provided you have logged in, creating new packages is just a matter of executing npm publish. Given
that the package name is still available and everything goes fine, you should have something out
there! After this, you can install your package through npm install or npm i.
An alternative way to consume a library is to point at it directly in package.json. In that case, you can do "depName": "<github user>/<project>#<reference>", where <reference> can be either a commit hash, a tag, or a branch. This can be useful, especially if you need to hack around something and cannot wait for a fix.
If you want to see what files will be published to npm, consider using a tool known as
irish-pub. It will give you a listing to review.
Bumping a Version
In order to bump your package version, you'll just need to invoke one of these commands:
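Assuming you rely on npm's built-in version command, the bumps look like this:

npm version patch   # x.y.z -> x.y.(z + 1)
npm version minor   # x.y.z -> x.(y + 1).0
npm version major   # x.y.z -> (x + 1).0.0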
Invoking any of these will update package.json and create a version commit to git automatically. If
you execute npm publish after doing this, you should have something new out there.
Note that in the example above I've set up version related hooks to make sure a version will contain a fresh version of the distribution build. I also run tests just in case. It's better to catch potential issues early on, after all.
Consider using semantic-release if you prefer a more structured approach. It can take some pain out of the release process while automating a part of it. For instance, it is able to detect possible breaking changes and generate changelogs.
Sometimes you may want to publish preliminary versions for other people to test before the actual release. In that case, you might end up with a series of versions like this:
v0.5.0-alpha1
v0.5.0-beta1
v0.5.0-beta2
v0.5.0-rc1
v0.5.0-rc2
v0.5.0
The initial alpha release will allow the users to try out the upcoming functionality and provide feedback. The beta releases can be considered more stable. The release candidates (rc) are close to an actual release and won't introduce any new functionality. They are all about refining the release until it's suitable for general consumption.
The workflow in this case is straightforward:
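A sketch of that workflow, assuming you rely on npm dist-tags so the pre-release doesn't become the default install:

npm version 0.5.0-alpha1
npm publish --tag alpha1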
In order to consume the test version, your users will have to use npm i <your package name>@alpha1.
It can be useful to utilize npm link during development. That will allow you to use a
development version of your library from some other context. Node.js will resolve to the
linked version unless local node_modules happens to contain a version. If you want to
remove the link, use npm unlink.
On Naming Packages
Before starting to develop, it can be a good idea to spend a little bit of time on figuring out a good name for your package. It's not very fun to write a great package just to notice the name has been taken. A good name is easy to find through a search engine, and most importantly, is available at npm.
As of npm 2.7.0 it is possible to create scoped packages. They follow format @username/project-
name. Simply follow that when naming your project.
If someone consumes your package directly through Git, the pre-built version may be missing. A postinstall hook can generate it on demand:
{
...
"scripts": {
...
"postinstall": "node lib/post_install.js"
},
"devDependencies": {
...
/* You should install sync-exec through `npm i` to get a recent version */
"sync-exec": "^0.6.2"
}
}
In addition, we need to define a little script to do the work for us. It will check whether our package contains the directory we expect and will then act based on that. If it doesn't exist, we'll generate it:
// sync-exec is assumed here, as set up in the package.json above
var execSync = require('sync-exec');
var fs = require('fs');

function exec(command) {
  execSync(command, {
    stdio: [0, 1, 2]
  });
}

// Generate the distribution modules if they are missing
fs.stat('dist-modules', function(error, stat) {
  if (error || !stat.isDirectory()) {
exec('npm i babel-cli babel-preset-es2015 babel-preset-react');
exec('npm run dist-modules');
}
});
You may need to tweak the script to fit your exact purposes, but it gives you the basic idea.
Version Ranges
npm supports multiple version ranges. I've listed the common ones below:
~ - Tilde matches only patch versions. For example, ~1.2 would be equal to 1.2.x.
^ - Caret is the default you get using --save or --save-dev. It matches minor versions. This means ^0.2.0 would be equal to 0.2.x.
* - Asterisk matches major releases. This is the most dangerous of the ranges. Using it recklessly can easily break your project in the future and I would advise against using it.
>= 1.3.0 < 2.0.0 - Range between versions. This can be particularly useful if you are using peerDependencies.
You can set the default range using npm config set save-prefix='~' in case you prefer something other than caret. Alternatively, you can modify ~/.npmrc directly. Especially defaulting to tilde can be a good idea that can help you to avoid some trouble with dependencies.
Sometimes, using version ranges can feel a little dangerous. What if some future version is
broken? npm shrinkwrap allows you to fix your project versions and have stricter control
over the versions you are using in a production environment.
If you want to generate a standalone UMD build of your library through Webpack, configuration along these lines can do it:

...
var config = {
paths: {
dist: '...',
src: '...',
},
filename: 'demo',
library: 'Demo'
};
var commonDist = {
devtool: 'source-map',
output: {
path: config.paths.dist,
libraryTarget: 'umd',
library: config.library
},
entry: config.paths.src,
externals: {
react: 'react',
// Use more complicated mapping for lodash.
// We need to access it differently depending
// on the environment.
lodash: {
commonjs: 'lodash',
commonjs2: 'lodash',
amd: '_',
root: '_'
}
},
module: {
loaders: [
{
test: /\.jsx?$/,
loaders: ['babel?cacheDirectory'],
include: config.paths.src
}
]
}
};
Most of the magic happens thanks to devtool and output declarations. In addition, I have set up
externals as I want to avoid bundling React and lodash into my library. Instead, both will be loaded
as external dependencies using the naming defined in the mapping.
The example uses the same merge utility we defined earlier on. You should check the
boilerplate itself for the exact configuration.
If your library is using ES6 exclusively, rollup can be a valid, simple alternative to Webpack. It provides features such as tree shaking. This means it will analyze the code structure and drop unused parts of it automatically, leading to a smaller size.
Running babel ./lib --out-dir ./dist-modules will walk through the ./lib directory and output a processed file for each module it encounters to ./dist-modules.
Since we want to avoid having to run the command directly whenever we publish a new version, we can connect it to the prepublish hook like this:
"scripts": {
...
"prepublish": "babel ./lib --out-dir ./dist-modules"
}
Make sure you execute npm i babel --save-dev to include the tool into your project.
You probably don't want the directory content to end up in your Git repository. In order to avoid this and to keep your git status clean, consider a .gitignore along these lines:
dist-modules/
...
Besides prepublish, npm provides a set of other hooks. The naming is always the same and follows
the pattern pre<hook>, <hook>, post<hook> where <hook> can be publish, install, test, stop,
start, restart, or version. Even though npm will trigger scripts bound to these automatically, you
can trigger them explicitly through npm run for testing (i.e., npm run prepublish).
There are plenty of smaller tricks to learn for advanced usage. Those are better covered by the official
documentation. Often all you need is just a prepublish script for build automation.
There are a few ways to deal with updating your dependencies:
You can update all dependencies at once and hope for the best. Tools, such as npm-check-updates, can do this for you.
Install the newest version of some specific dependency, e.g., npm i lodash@* --save. This is
a more controlled way to approach the problem.
Patch version information by hand by modifying package.json directly.
It is important to remember that your dependencies may introduce backwards incompatible changes.
It can be useful to remember how SemVer works and study dependency release notes. They might
not always exist, so you may have to go through the project commit history. There are a few services
that can help you to keep track of your project dependencies:
David
versioneye
Gemnasium
These services provide badges you can integrate into your project README.md, and they may email you about important changes. They can also point out possible security issues that have been fixed.
For testing your projects, you can consider solutions such as Travis CI or SauceLabs. Coveralls gives you code coverage information and a badge to include in your README.
These services are valuable as they allow you to test your updates against a variety of platforms quickly. Something that might work on your system might not work in some specific configuration. You'll want to know about that as fast as possible to avoid introducing problems.
See the npm documentation for the most up-to-date information about the topic.
12.8 Conclusion
You should have a basic idea of how to author npm packages with the help of Webpack now. It takes a lot of effort out of the process. Just keep the basic rules in mind when developing and remember to respect SemVer.
13. Styling React
Traditionally, web pages have been split up into markup (HTML), styling (CSS), and logic (JavaScript). Thanks to React and similar approaches, we've begun to question this split. We still may want to separate our concerns somehow, but the split can be on different axes.
This change in mindset has led to new ways to think about styling. With React, we're still figuring out the best practices. Some early patterns have begun to emerge, however. As a result, it is difficult to provide any definite recommendations at the moment. Instead, I will go through various approaches so you can make up your mind based on your exact needs.
Webpack doesn't handle styling out of the box; you set up loaders for it instead. The basic CSS configuration used in this project looks like this:

var common = {
...
module: {
loaders: [
{
test: /\.css$/,
loaders: ['style', 'css'],
include: PATHS.style
}
]
},
...
};
First, css-loader goes through possible @import and url() statements within the matched files and
treats them as regular require. This allows us to rely on various other loaders, such as file-loader
or url-loader.
file-loader generates files, whereas url-loader can create inline data URLs for small resources.
This can be useful for optimizing application loading. You avoid unnecessary requests while
providing a slightly bigger payload. Small improvements can yield large benefits if you depend
on a lot of small resources in your style definitions.
Finally, style-loader picks up css-loader output and injects the CSS into the bundle. As we saw
earlier in the build chapter, it is possible to use ExtractTextPlugin to generate a separate CSS file.
If you want to enable sourcemaps for CSS, you should use ['style', 'css?sourceMap']
and set output.publicPath to an absolute url. css-loader issue 29 discusses this problem
further.
BEM
BEM originates from Yandex. The goal of BEM is to allow reusable components and code sharing. Sites such as Get BEM help you to understand the methodology in more detail.
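To give a rough idea of the block__element--modifier naming scheme BEM relies on, consider the sketch below (the class names are made up):

/* block */
.lane {}
/* element within the block */
.lane__header {}
/* modifier changing the look of the block */
.lane--editing {}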
Maintaining the long class names BEM requires can be arduous. Thus various libraries have appeared to make this easier. For React, examples of these are react-bem-helper, react-bem-render, and bem-react.
Note that postcss-bem-linter allows you to lint your CSS for BEM conformance.
CSS Processors
Vanilla CSS is missing some functionality that would make maintenance work easier. Consider something basic like variables, nesting, mixins, math, or color functions. It would also be nice to be able to forget about browser specific prefixes. These are small things that add up quite fast and make it annoying to write vanilla CSS.
Sometimes, you may see the terms preprocessor or postprocessor. Stefan Baumgartner calls these tools simply CSS processors. The image above, adapted from Stefan's work, gets to the point. The tooling operates both on the authoring and the optimization level. By authoring we mean features that make it easier to write CSS. Optimization features operate on vanilla CSS and convert it into something more optimal for browsers to consume.
The interesting thing is that you may actually want to use multiple CSS processors. Stefan's image illustrates how you can author your code using Sass and still benefit from processing done through PostCSS. For example, it can autoprefix your CSS code so that you don't have to worry about prefixing per browser anymore.
Less
Less is a popular CSS processor that is packed with functionality. In Webpack, using Less doesn't take a lot of effort. less-loader deals with the heavy lifting:
{
test: /\.less$/,
loaders: ['style', 'css', 'less'],
include: PATHS.style
}
There is also support for Less plugins, sourcemaps, and so on. To understand how those work you
should check out the project itself.
Sass
Sass is a popular alternative to Less. You should use sass-loader with it. Remember to install node-sass to your project as the loader has a peer dependency on it. Webpack doesn't take much configuration:
{
test: /\.scss$/,
loaders: ['style', 'css', 'sass'],
include: PATHS.style
}
Stylus
Stylus is a Python inspired way to write CSS. Besides providing an indentation based syntax, it is a full-featured processor. When using Webpack, you can use stylus-loader to enable Stylus within your project. Configure it as follows:
{
test: /\.styl$/,
loaders: ['style', 'css', 'stylus'],
include: PATHS.style
}
You can also use Stylus plugins with it by setting stylus.use: [plugin()]. Check out the loader
for more information.
PostCSS
PostCSS allows you to perform transformations over CSS through JavaScript plugins. You can even find plugins that provide Sass-like features. PostCSS can be thought of as the equivalent of Babel for styling. It can be used through postcss-loader with Webpack as below:
var autoprefixer = require('autoprefixer');
var precss = require('precss');

module.exports = {
module: {
loaders: [
{
test: /\.css$/,
loaders: ['style', 'css', 'postcss'],
include: PATHS.style
}
]
},
// PostCSS plugins go here
postcss: function () {
return [autoprefixer, precss];
}
};
cssnext
cssnext is a PostCSS plugin that allows us to experience the future now. There are some restrictions,
but it may be worth a go. In Webpack it is simply a matter of installing cssnext-loader and attaching
it to your CSS configuration. In our case, you would end up with the following:
{
test: /\.css$/,
loaders: ['style', 'css', 'cssnext'],
include: PATHS.style
}
Alternatively, you could consume it through postcss-loader as a plugin if you need more control.
The advantage of PostCSS and cssnext is that you will literally be coding in the future. As browsers get better and adopt the standards, you don't have to worry about porting.
In our project, we could benefit from cssnext even if we didn't make any changes to our CSS. Thanks to autoprefixing, the rounded corners of our lanes would look good even in legacy browsers. In addition, we could parameterize styling thanks to variables.
React also allows you to attach styling directly at the component level through inline styles. The idea is to define a style object within the component and pass it to the elements being rendered. In Notes, that could look like this:

render(props, context) {
  const notes = this.props.notes;
  const style = {
    margin: '0.5em',
    paddingLeft: 0,
    listStyle: 'none'
  };
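The style object would then be applied through the style prop of the rendered element. A simplified sketch (the real component renders Note children rather than plain list items):

  return (
    <ul style={style}>{notes.map(note =>
      <li key={note.id}>{note.task}</li>
    )}</ul>
  );
}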
Like with HTML attribute names, we are using the same camelcase convention for CSS properties.
Now that we have styling at the component level, we can implement logic that also alters the styles
easily. One classic way to do this has been to alter class names based on the outlook we want. Now
we can adjust the properties we want directly.
We have lost something in the process, though. Now all of our styling is tied to our JavaScript code. It is going to be difficult to perform large, sweeping changes to our codebase as we need to tweak a lot of components to achieve that.
We can try to work against this by injecting a part of styling through props. A component could patch
its style based on a provided one. This can be improved further by coming up with conventions that
allow parts of style configuration to be mapped to some specific part. We just reinvented selectors
on a small scale.
How about things like media queries? This naïve approach won't quite cut it. Fortunately, people have come up with libraries to solve these tough problems for us.
According to Michele Bertoli, the basic features of these libraries are similar: they provide component level styling with support for concerns such as pseudo classes and media queries. I will cover some of the available libraries to give you a better idea how they work. See Michele's list for a more comprehensive outlook of the situation.
Radium
Radium has certain valuable ideas that are worth highlighting. Most importantly, it provides the abstractions required to deal with media queries and pseudo classes (e.g., :hover). It expands the basic syntax as follows:
const styles = {
  button: {
    padding: '1em',

    ':hover': {
      border: '1px solid black'
    },

    // media queries can be nested as well; the query below is illustrative
    '@media (max-width: 200px)': {
      ':hover': {
        background: 'white',
      }
    }
  },
primary: {
background: 'green'
},
warning: {
background: 'yellow'
},
};
...
For the style prop to work, you'll need to annotate your classes using the @Radium decorator.
React Style
React Style uses the same syntax as React Native StyleSheet. It expands the basic definition by
introducing additional keys for fragments.
}
}
});
...
As you can see, we can use individual fragments to get the same effect as Radium modifiers. Also, media queries are supported. React Style expects that you manipulate browser states (e.g., :hover) through JavaScript. CSS animations won't work either. Instead, it's preferred to use some other solution for that.
Interestingly, there is a React Style plugin for Webpack. It can extract CSS declarations into a separate bundle. Now we are closer to the world we're used to, but without cascades. We also have our style declarations on the component level.
JSS
JSS is a JSON to StyleSheet compiler. It can be convenient to represent styling using JSON structures
as this gives us easy namespacing. Furthermore it is possible to perform transformations over the
JSON to gain features, such as autoprefixing. JSS provides a plugin interface just for this.
JSS can be used with React through react-jss. There's also an experimental jss-loader for Webpack.
https://github.com/jsstyles/jss
https://www.npmjs.com/package/react-jss
https://www.npmjs.com/package/jss-loader
You can use JSS through react-jss like this:
...
import classNames from 'classnames';
import useSheet from 'react-jss';

const styles = {
  button: {
    padding: '1em'
  },
  '@media (max-width: 200px)': {
    button: {
      width: '100%'
    }
  },
  primary: {
    background: 'green'
  },
  warning: {
    background: 'yellow'
  }
};

@useSheet(styles)
export default class ConfirmButton extends React.Component {
  render() {
    const {classes} = this.props.sheet;

    return <button
      className={classNames(classes.button, classes.primary)}>
      Confirm
    </button>;
  }
}
The approach supports pseudo selectors as well, i.e., you could define a selector, such as &:hover, within a definition and it would just work.
React Inline
React Inline is an interesting twist on StyleSheet. It generates CSS based on the className prop of the elements where it is used. The example above could be adapted to React Inline with little effort.
https://github.com/martinandert/react-inline
Unlike React Style, the approach supports browser states (e.g., :hover). Unfortunately, it relies on its own custom tooling to generate the React code and CSS it needs to work. As of the time of this writing, there's no Webpack loader available.
jsxstyle
Pete Hunt's jsxstyle aims to mitigate some problems of React Style's approach. As you saw in the previous examples, we still have style definitions separate from the component markup. jsxstyle merges these two concepts. Consider the following example:
// PrimaryButton component
<button
  padding='1em'
  background='green'
>Confirm</button>
The approach is still in its early days. For instance, support for media queries is missing. Instead of defining modifiers as above, you'll end up defining more components to support your use cases.
Just like React Style, jsxstyle comes with a Webpack loader that can extract CSS into a separate file.
https://github.com/petehunt/jsxstyle
https://github.com/css-modules/css-modules
https://medium.com/seek-ui-engineering/the-end-of-global-css-90d2a4a06284
.primary {
  background: green;
}

.warning {
  background: yellow;
}

.button {
  padding: 1em;
}
button.jsx
...
<button className={classNames(
  styles.button, styles.primary
)}>Confirm</button>
As you can see, this approach provides a balance between what people are familiar with and what React specific libraries do. It would not surprise me a lot if this approach gained popularity even though it's still in its early days. See the CSS Modules Webpack Demo for more examples.
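To give you a rough idea of the Webpack side, the scoping is enabled through css-loader. A minimal sketch, assuming an include path along the conventions used earlier in the book, could look like this:
{
  test: /\.css$/,
  // The `modules` query enables local scoping of class names
  loaders: ['style', 'css?modules'],
  include: PATHS.app
}
After this, an import such as import styles from './button.css'; gives you an object containing the scoped class names.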
gajus/react-css-modules makes it even more convenient to use CSS Modules with React. Using it, you don't need to refer to the styles object anymore, and you are not forced to use camelCase for naming.
https://css-modules.github.io/webpack-demo/
https://github.com/gajus/react-css-modules
13.6 Conclusion
It is simple to try out various styling approaches with Webpack. You can do it all, ranging from vanilla CSS to more complex setups. React specific tooling even comes with loaders of its own. This makes it easy to try out different alternatives.
React based styling approaches allow us to push styles to the component level. This provides an
interesting contrast to conventional approaches where CSS is kept separate. Dealing with component
specific logic becomes easier. You will lose some power provided by CSS. In return you gain
something that is simpler to understand. It is also harder to break.
CSS Modules strike a balance between a conventional approach and the React specific ones. Even though the approach is a newcomer, it shows a lot of promise. The biggest benefit seems to be that it doesn't lose too much in the process. It's a nice step forward from what has been commonly used.
There are no best practices yet, and we are still figuring out the best ways to do this in React. You
will likely have to do some experimentation of your own to figure out what ways fit your use case
the best.
Appendices
As not everything that's worth discussing fits a book like this, I've compiled related material into brief appendices. These support the main material and explain certain topics, such as language features, in greater detail. There are also troubleshooting tips at the end.
Structuring React Projects
React doesn't enforce any particular project structure. The good thing about this is that it allows you to make up a structure to suit your needs. The bad thing is that it is not possible to provide you with an ideal structure that would work for every project. Instead, I'm going to give you some inspiration you can use to think about structure.
Directory per Concept
Our application currently uses a flat, directory per concept structure where each type of file ends up in a directory of its own:

actions
  LaneActions.js
  NoteActions.js
components
  App.jsx
  Editable.jsx
  Lane.jsx
  Lanes.jsx
  Note.jsx
  Notes.jsx
constants
  itemTypes.js
index.jsx
libs
  alt.js
  persist.js
  storage.js
main.css
stores
  LaneStore.js
  NoteStore.js
It's enough for this purpose, but there are some interesting alternatives around:

File per concept - Perfect for small prototypes. You can split this up as you get more serious with your application.
Directory per component - It is possible to push components to directories of their own. Even though this is a heavier approach, there are some interesting advantages as we'll see soon.
Directory per view - This approach becomes relevant once you want to introduce routing to your application.
There are more alternatives but these cover some of the common cases. There is always room for
adjustment based on the needs of your application.
Directory per Component
If we pushed the components to directories of their own, the structure could end up looking like this:

actions
  LaneActions.js
  NoteActions.js
components
  App
    App.jsx
    app.css
    app_test.jsx
    index.js
  Editable
    Editable.jsx
    editable.css
    editable_test.jsx
    index.js
  ...
  index.js
constants
  itemTypes.js
index.jsx
libs
  alt.js
  persist.js
  storage.js
main.css
stores
  LaneStore.js
  NoteStore.js
Compared to our current solution, this would be heavier. The index.js files are there to provide easy
entry points for components. Even though they add noise, they simplify imports.
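To give you an idea of what such an entry point can contain, here is a minimal sketch. The component name is only an example:
// components/Editable/index.js
export {default} from './Editable.jsx';

// Elsewhere the directory itself can now be imported:
// import Editable from './components/Editable';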
Even though this is a heavier approach, it comes with some benefits:

We can leverage technology, such as CSS Modules, for styling each component separately.
Given each component is a little package of its own now, it would be easier to extract them from the project. You could push generic components elsewhere and consume them across multiple applications.
We can define unit tests at the component level. The approach encourages you to test. We can still have higher level tests around at the root level of the application, just like earlier.
It could be interesting to try to push actions and stores to components as well. Or they could follow
a similar directory scheme. The benefit of this is that it would allow you to define unit tests in a
similar manner.
This setup isn't enough when you want to add multiple views to the application. Something else is needed to support that.
https://github.com/gajus/create-index
https://github.com/rackt/react-router
Directory per View
A directory per view structure could end up looking like this:

components
  Note
    Note.jsx
    index.js
    note.css
    note_test.jsx
  Routes
    Routes.jsx
    index.js
    routes_test.jsx
  index.js
  ...
index.jsx
main.css
views
  Home
    Home.jsx
    home.css
    home_test.jsx
    index.js
  Register
    Register.jsx
    index.js
    register.css
    register_test.jsx
  index.js
The idea is the same as earlier. This time around we have more parts to coordinate. The application starts from index.jsx, which will trigger Routes, which in turn chooses some view to display. After that it's the flow we've gotten used to.
This structure can scale further, but even it has its limits. Once your project begins to grow, you
might want to introduce new concepts to it. It could be natural to introduce a concept, such as
feature, between the views and the components.
For example, you might have a fancy LoginModal that is displayed on certain views if the session of
the user has timed out. It would be composed of lower level components. Again, common features
could be pushed out of the project itself into packages of their own as you see potential for reuse.
Conclusion
There is no single right way to structure your project with React. That said, it is one of those aspects
that is worth thinking about. Figuring out a structure that serves you well is worth it. A clear
structure helps in the maintenance effort and makes your project more understandable to others.
You can evolve the structure as you go. Too heavy a structure early on might just slow you down. As the project evolves, so should its structure. It's one of those things that's worth thinking about given it affects development so much.
Language Features
ES6 (or ES2015) was arguably the biggest change to JavaScript in a long time. As a result, we received a wide variety of new functionality. The purpose of this appendix is to illustrate the features used in the book in isolation so that it is clearer how they work. Rather than going through the entire specification, I will just focus on that subset.
Modules
ES6 introduced proper module declarations. Earlier, this was somewhat ad hoc and we used formats, such as AMD or CommonJS. See the Webpack Compared chapter for descriptions of those. Both formats are still in use, but it's always better to have something standard in place.
ES6 module declarations are statically analyzable. This is highly useful for tool authors. Effectively,
this means we can gain features like tree shaking. This allows the tooling to skip unused code easily
simply by analyzing the import structure.
index.js
const add = (a, b) => a + b;
const multiple = (a, b) => a * b;

export {add, multiple};
// Equivalent to
//export {add: add, multiple: multiple};

The example leverages fat arrow syntax and property value shorthand.
This definition can be consumed through an import like this:
index.js
...
Especially export default is useful if you prefer to keep your modules focused. The persist function is an example of this. A regular export is useful for collecting multiple functions under the same umbrella.
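As a small sketch of the default export form, a focused module could look like this. The body is omitted as the point is only to illustrate the shape:
// persist.js - a focused module exposing a single default export
export default function persist() {
  // the actual implementation would go here
}

// Consumers may then import it under any name they like:
// import persist from './libs/persist';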
Aliasing Imports
Sometimes it can be handy to alias imports, i.e., pull them into your module under a name of your own choosing.
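A minimal sketch of the syntax is below; the math module and the alias are made up for illustration:
import {add as sum} from './math';

sum(1, 2); // 3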
Webpack resolve.alias
Bundlers, such as Webpack, can provide some features beyond this. You could define a resolve.alias for some of your module directories, for example. This would allow you to use an import, such as import persist from 'libs/persist';, regardless of where you import from. A simple resolve.alias could look like this:
...
resolve: {
  alias: {
    libs: path.join(__dirname, 'libs')
  }
}
https://webpack.github.io/docs/configuration.html#resolve-alias
Classes
Unlike many other languages out there, JavaScript uses prototype based inheritance instead of a class based one. Both approaches have their merits. In fact, you can mimic a class based model through a prototype based one. ES6 classes are about providing syntactical sugar above the basic mechanisms of JavaScript. Internally, it still uses the same old system. It just looks a little different to the programmer.
These days React supports class based component definitions. Not everyone agrees that it's a good thing. That said, the definition can be quite neat as long as you don't abuse it. To give you a simple example, consider the code below:
...
class App extends React.Component {
  render() {
    return <div>Hello world</div>;
  }
}
Perhaps the biggest advantage of the class based approach is the fact that it cuts down some complexity, especially when it comes to the React lifecycle hooks. It is important to note that class methods won't get bound to the instance by default, though! This is why the book relies on an experimental feature known as property initializers.
https://github.com/jeffmo/es-class-static-properties-and-fields
You can also use export class ClassName to export several named classes from a single module.
Traditionally, you would bind class methods in the constructor to get access to this as expected:

class App extends React.Component {
  constructor(props) {
    super(props);

    this.renderNote = this.renderNote.bind(this);
  }
  render() {
    // Use `renderNote` here somehow.
    ...

    return this.renderNote();
  }
  renderNote() {
    // Given renderNote was bound, we can access `this` as expected
    return <div>{this.props.note}</div>;
  }
}
App.propTypes = {
  value: React.PropTypes.string
};
App.defaultProps = {
  value: ''
};
Using class properties and property initializers we could write something tidier instead:
...
    return this.renderNote();
  }
  // Property initializer gets rid of the `bind`
  renderNote = () => {
    // Given renderNote was bound, we can access `this` as expected
    return <div>{this.props.note}</div>;
  };
}
Now that we've pushed the declaration to the method level, the code reads better. I decided to use the feature in this book primarily for this reason. There is simply less to worry about.
Functions
Traditionally, JavaScript has been very flexible with its functions. To give you a better idea, see the implementation of map below:

function map(cb, values) {
  var ret = [];

  for(var i = 0; i < values.length; i++) {
    ret.push(cb(values[i]));
  }

  return ret;
}

map(function(v) {
  return v * 2;
}, [34, 2, 5]); // yields [68, 4, 10]

map((v) => v * 2, [34, 2, 5]); // same result using a fat arrow
The implementation of map is more or less the same still. The interesting bit is the way we call it. Especially that (v) => v * 2 part is intriguing. Rather than having to write function everywhere, the fat arrow syntax provides us a handy little shorthand. To give you further examples of usage, consider the code below:
const double = (v) => v * 2;

console.log(double(2)); // 4

var obj = {
  context: function() {
    return this;
  },
  name: 'demo object 1'
};

var obj2 = {
  context: () => this,
  name: 'demo object 2'
};
As you can see in the snippet above, the anonymous function within obj has its this bound to the object through which it is called. In other words, calling obj.context() binds the caller object obj to the context function.
This happens because a regular function's this doesn't point at the scope that contains the function, but at the scope of the caller object.
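A quick way to see the difference is to call both context methods and compare the results. The exact value of the outer this depends on the environment and module setup, so treat the comments as an illustration:
console.log(obj.context());  // prints obj itself: { context: [Function], name: 'demo object 1' }
console.log(obj2.context()); // prints the outer `this`, e.g., the global object in plain Node.js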
The arrow function in the object obj2 doesn't bind any object to its context. Instead, it follows the normal lexical scoping rules and resolves the reference to the nearest outer scope. In this case it happens to be the Node.js global object.
Even though the behavior might seem a little weird, it is actually useful. In the past, if you wanted to access the parent context, you either needed to bind it or to attach the parent context to a variable (var that = this;). The introduction of the arrow function syntax has mitigated this problem.
Function Parameters
Historically, dealing with function parameters has been somewhat limited. There are various hacks, such as values = values || [];, but they aren't particularly nice and they are prone to errors. For example, using || can cause problems with zeros. ES6 solves this problem by introducing default parameters. We can simply write function map(cb, values=[]) now.
There is more to it than that, and the default values can even depend on each other. You can also accept an arbitrary number of parameters through a rest parameter, as in function map(cb, ...values). In this case, you would call the function through map(a => a * 2, 1, 2, 3, 4). The API might not be perfect for map, but it might make more sense in some other scenario.
There are also convenient means to extract values out of passed objects. This is highly useful with React components defined using the function syntax.
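A sketch of the idea is below; the component and its props are made up for illustration:
// Destructuring picks `task` and `onDelete` straight out of the props object
const Note = ({task, onDelete}) => (
  <div onClick={onDelete}>{task}</div>
);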
String Interpolation
Earlier, dealing with strings was somewhat painful in JavaScript. Usually you just ended up using a syntax like 'Hello' + name + '!'. Overloading + for this purpose wasn't perhaps the smartest move as it can lead to strange behavior due to type coercion. For example, 0 + ' world' would yield the string '0 world' as a result.
Besides being clearer, ES6 style string interpolation provides us with multi-line strings. This is something the old syntax didn't support. Consider the examples below:
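The strings here are arbitrary; the point is the back-tick based syntax:
const name = 'world';

// Interpolation
const hello = `Hello ${name}!`;

// Multi-line strings work as well
const text = `Multiple
lines of
text`;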
The back-tick syntax may take a while to get used to, but it's powerful and less prone to mistakes.
Destructuring
That ... is related to the idea of destructuring. For example, const {lane, ...props} = this.props;
would extract lane out of this.props while the rest of the object would go to props. This object
based syntax is still experimental. ES6 specifies an official way to perform the same for arrays like
this:
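For instance, with arbitrary values the array form could look like this:
const [first, ...rest] = [1, 2, 3, 4];

// first === 1, rest === [2, 3, 4]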
The spread operator (...) is useful for concatenating. You see syntax like this in Redux examples
often. They rely on experimental Object rest/spread syntax:
https://github.com/sebmarkbage/ecmascript-rest-spread
[...state, action.lane];
// This is equal to
state.concat([action.lane])
...
render() {
  const {value, onEdit, ...props} = this.props;
  ...
Object Shorthands
In order to make it easier to work with objects, ES6 provides a variety of features just for this. To
quote MDN, consider the examples below:
const a = 'demo';
const shorthand = {a}; // Same as {a: a}

// Shorthand methods
const o = {
  get property() {},
  set property(value) {},
  demo() {}
};
const, let, var
In JavaScript, variables are global by default. var binds them on the function level. This is in contrast to many other languages that implement block level binding. ES6 introduces block level binding through let.
There's also support for const, which guarantees that the reference to the variable itself cannot change. This doesn't mean, however, that you cannot modify the contents of the variable. So if you are pointing at an object, you are still allowed to tweak it!
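A small sketch with arbitrary values illustrates the difference:
const notes = ['lunch'];
notes.push('coffee'); // Fine - the contents may change
// notes = []; // Error - the binding itself cannot be reassigned

let counter = 0;
counter += 1; // let allows reassignment within its block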
I tend to favor defaulting to const whenever possible. If I need something mutable, let will do fine. It is hard to find any good use for var anymore as const and let cover the need in a more understandable manner. In fact, all of the book's code, apart from this appendix, relies on const. That just shows you how far you can get with it.
Decorators
Given decorators are still an experimental feature and there's a lot to cover about them, there's an entire appendix dedicated to the topic. Read Understanding Decorators for more information.
Conclusion
There's a lot more to ES6 and the upcoming specifications than this. If you want to understand the specification better, ES6 Katas is a good starting point for learning more. Just having a good idea of the basics will take you far.
http://es6katas.org/
Understanding Decorators
If you have used languages such as Java or Python before, you might be familiar with the idea. Decorators are syntactic sugar that allow us to wrap and annotate classes and functions. In their current proposal (stage 1), only class and method level wrapping is supported. Functions may become supported later on.
In Babel 6 you can enable this behavior through babel-plugin-syntax-decorators and babel-plugin-
transform-decorators-legacy plugins. The former provides syntax level support whereas the latter
gives the type of behavior we are going to discuss here.
The greatest benefit of decorators is that they allow us to wrap behavior into simple, reusable chunks
while cutting down the amount of noise. It is definitely possible to code without them. They just
make certain tasks neater, as we saw with drag and drop related annotations.
To give you an example, consider a simple @log decorator that can be attached to class methods like this:

class Math {
  @log
  add(a, b) {
    return a + b;
  }
}

A basic log implementation could wrap the decorated method like this:

function log(target, name, descriptor) {
  const oldValue = descriptor.value;

  descriptor.value = function() {
    console.log(`Calling "${name}" with`, arguments);

    return oldValue.apply(null, arguments);
  };

  return descriptor;
}

https://github.com/wycats/javascript-decorators
https://www.npmjs.com/package/babel-plugin-syntax-decorators
https://www.npmjs.com/package/babel-plugin-transform-decorators-legacy
The idea is that our log decorator wraps the original function, triggers a console.log, and finally calls the original while passing its arguments to it. Especially if you haven't seen arguments or apply before, it might seem a little strange.
apply can be thought of as another way to invoke a function while passing its context (this) and its parameters as an array. arguments receives the function parameters implicitly, so it's ideal for this case.
This logger could be pushed to a separate module. After that, we could use it across our application whenever we want to log some methods. Once implemented, decorators become powerful building blocks.
The decorator receives three parameters: target maps to the class or object we are dealing with, name contains the name of the method being decorated, and descriptor is the property descriptor of that method. The descriptor is the most interesting of the three as it allows us to shape the behavior. It looks like this:
const descriptor = {
  value: () => {...},
  enumerable: false,
  configurable: true,
  writable: true
};
As you saw above, value makes it possible to shape the behavior. The rest allows you to modify behavior on the method level. For instance, a @readonly decorator could limit access. @memoize is another interesting example as it allows you to implement easy caching for methods.
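To give you a rough idea, a minimal @readonly could simply flip the writable flag of the descriptor. This is a sketch rather than a production ready implementation:
const readonly = (target, name, descriptor) => {
  // Prevent further writes to the decorated method
  descriptor.writable = false;

  return descriptor;
};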
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/arguments
Implementing @connect
@connect will wrap our component in another component. That, in turn, will deal with the
connection logic (listen/unlisten/setState). It will maintain the store state internally and then
pass it to the child component that we are wrapping. During this process, it will pass the state
through props. The implementation below illustrates the idea:
app/decorators/connect.js
import React from 'react';

const connect = (store) => {
  return (Component) => {
    return class Connect extends React.Component {
      constructor(props) {
        super(props);

        this.storeChanged = this.storeChanged.bind(this);
        this.state = store.getState();

        store.listen(this.storeChanged);
      }
      componentWillUnmount() {
        store.unlisten(this.storeChanged);
      }
      storeChanged() {
        this.setState(store.getState());
      }
      render() {
        return <Component {...this.props} {...this.state} />;
      }
    };
  };
};

export default connect;
Can you see the wrapping idea? Our decorator tracks the store state. After that, it passes the state to the contained component through props.
... is known as a spread operator. It expands the given object to separate key-value pairs,
or props, as in this case.
https://github.com/sebmarkbage/ecmascript-rest-spread
...
import connect from '../decorators/connect';
...

@connect(NoteStore)
export default class App extends React.Component {
  render() {
    const notes = this.props.notes;

    ...
  }
  ...
}
Pushing the logic to a decorator allows us to keep our components simple. If we wanted to add more
stores to the system and connect them to components, it would be trivial now. Even better, we could
connect multiple stores to a single component easily.
Decorator Ideas
We can build new decorators for various functionalities, such as undo, in this manner. They allow
us to keep our components tidy and push common logic elsewhere out of sight. Well designed
decorators can be used across projects.
Alt's @connectToStores
Alt provides a similar decorator known as @connectToStores. It relies on static methods. Rather than
normal methods that are bound to a specific instance, these are bound on class level. This means
you can call them through the class itself (i.e., App.getStores()). The example below shows how
we might integrate @connectToStores into our application.
...
import connectToStores from 'alt-utils/lib/connectToStores';

@connectToStores
export default class App extends React.Component {
  static getStores(props) {
    return [NoteStore];
  };
  static getPropsFromStores(props) {
    return NoteStore.getState();
  };
  ...
}
This more verbose approach is roughly equivalent to our implementation. It actually does more as
it allows you to connect to multiple stores at once. It also provides more control over the way you
can shape store state to props.
Conclusion
Even though still a little experimental, decorators provide nice means to push logic where it belongs.
Better yet, they provide us a degree of reusability while keeping our components neat and tidy.
Troubleshooting
I've tried to cover some common issues here. This chapter will be expanded as more common issues are found.
EPEERINVALID
...
npm ERR! peerinvalid The package eslint does not satisfy its siblings' peerDepen\
dencies requirements!
npm ERR! peerinvalid Peer [email protected] wants eslint@>=0.8.0
npm ERR! peerinvalid Peer [email protected] wants [email protected] - 0.23
npm ERR! Please include the following file with any support request:
...
In human terms, it means that some package, eslint-loader in this case, has a peerDependency requirement that is too strict. Our project has a newer version installed already. Given the required peer dependency is older than our version, we get this particular error.
There are a couple of ways to work around this:
1. Report the glitch to the package author and hope the version range will be expanded.
2. Resolve the conflict by settling on a version that satisfies the peer dependency. In this case, we could pin eslint to version 0.23 ("eslint": "0.23"), and everyone should be happy.
3. Fork the package, fix the version range, and point at your custom version. In this case, you would have a "<package>": "<github user>/<project>#<reference>" kind of declaration in your dependencies.
Note that peer dependencies are dealt with differently starting from npm 3. After that version, it's up to the package consumer (i.e., you) to deal with them. This particular error will go away.
Another common issue is React warning that it attempted to reuse markup in a container but that the checksum was invalid. This usually boils down to one of the following causes:
You tried to mount React multiple times to the same container. Check your script loading and make sure your application is loaded only once.
The existing markup on your template doesn't match the one rendered by React. This can happen especially if you are rendering the initial markup through a server.
ERROR in ./app/components/Demo.jsx
Module parse failed: .../app/components/Demo.jsx Line 16: Unexpected token <
This means there is something preventing Webpack from interpreting the file correctly. You should check your loader configuration carefully. Make sure the right loaders are applied to the right files. If you are using include, you should verify that the file is covered by the include paths.
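As a point of comparison, a JSX related configuration along the following lines would make sure babel-loader gets applied to the offending file. The PATHS.app convention follows the book's earlier configuration and should be adjusted to match your project:
{
  test: /\.jsx?$/,
  // babel-loader can be referred to as just 'babel'
  loaders: ['babel'],
  include: PATHS.app
}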
Often you are not alone with your problem. Therefore, it may be worth your while to check out the project issue trackers to see what's going on. You can likely find a good workaround or a proposed fix there. These issues tend to get fixed fast for popular projects.
In a production environment, it may be preferable to lock production dependencies using npm
shrinkwrap. The official documentation goes into more detail on the topic.
https://docs.npmjs.com/cli/shrinkwrap