The Node.js Handbook
Preface
The Node.js Handbook follows the 80/20 rule: learn 80% of a topic in 20% of
the time.
Enjoy!
Introduction to Node.js
Node.js is an open-source and cross-platform JavaScript runtime
environment. It is a popular tool for almost any kind of project!
Node.js runs the V8 JavaScript engine, the core of Google Chrome, outside
of the browser. This allows Node.js to be very performant.
A Node.js app runs in a single process, without creating a new thread for
every request. Node.js provides a set of asynchronous I/O primitives in its
standard library that prevent JavaScript code from blocking and generally,
libraries in Node.js are written using non-blocking paradigms, making
blocking behavior the exception rather than the norm.
When Node.js performs an I/O operation, like reading from the network,
accessing a database or the filesystem, instead of blocking the thread and
wasting CPU cycles waiting, Node.js resumes the operation when the
response comes back.
You decide which ECMAScript version to use by changing the
Node.js version, and you can also enable specific experimental features by
running Node.js with flags.
To run this snippet, save it as a server.js file and run node server.js in
your terminal.
The server is set to listen on the specified port and hostname. When the
server is ready, the callback function is called, in this case informing us that
the server is running.
The first argument provides the request details. In this simple example it is not
used, but you could access the request headers and request data. The second
is used to return data to the caller:
res.statusCode = 200
res.setHeader('Content-Type', 'text/plain')
res.end('Hello World\n')
- Express: one of the simplest yet most powerful ways to create a web
server. Its minimalist, unopinionated approach, focused on the core
features of a server, is key to its success.
- Meteor: an incredibly powerful full-stack framework, powering you with
an isomorphic approach to building apps with JavaScript, sharing code
on the client and the server. Once an off-the-shelf tool that provided
everything, it now integrates with frontend libraries such as React, Vue
and Angular. It can be used to create mobile apps as well.
- koa: built by the same team behind Express, it aims to be even simpler
and smaller, building on top of years of knowledge. The project was born
out of the need to make incompatible changes without disrupting the
existing community.
- Next.js: a framework for rendering server-side rendered React
applications.
- Micro: a very lightweight server to create asynchronous HTTP
microservices.
- Socket.io: a real-time communication engine to build network
applications.
- SvelteKit: the successor to Sapper, a framework for building web
applications of all sizes, with a beautiful development experience and
flexible filesystem-based routing. It offers SSR and more!
- Remix: a full-stack web framework for building excellent user
experiences for the web. It comes out of the box with everything you
need to build modern web applications (both frontend and backend) and
deploy them to any JavaScript-based runtime environment (including
Node.js).
- Fastify: a fast and efficient web framework highly focused on providing
the best developer experience with the least overhead and a powerful
plugin architecture, inspired by Hapi and Express.
13 years isn't a very long time in tech, but Node.js seems to have been
around forever.
In this section, we look at the big picture of Node.js history, to put things in
perspective.
Part of the business model of Netscape was to sell web servers, which
included an environment called Netscape LiveWire that could create dynamic
pages using server-side JavaScript. Unfortunately, Netscape LiveWire wasn't
very successful, and server-side JavaScript wasn't popularized until the
introduction of Node.js.
One key factor that led to the rise of Node.js was the timing. Just a few years
earlier, JavaScript had started to be considered as a more serious language,
thanks to "Web 2.0" applications (such as Flickr, Gmail, etc.) that showed the
world what a modern experience on the web could be like.
Node.js happened to be built in the right place at the right time, but luck isn't
the only reason it is popular today. It introduced a lot of innovative
thinking and approaches for JavaScript server-side development that have
already helped many developers.
2009
Node.js is born
The first version of npm
2010
Express is born
Socket.io is born
2011
npm hits version 1.0
Big companies start adopting Node.js: LinkedIn, Uber, etc.
2012
Adoption continues very rapidly
2013
First big blogging platform using Node: Ghost
Koa is born
2014
The Big Fork: io.js is a major fork of Node.js, with the goal of introducing
ES6 support and moving faster
2015
The Node.js Foundation is born
io.js is merged back into Node.js
Node.js 4 (versions 1, 2 and 3 were skipped so the numbering would align
with io.js)
2016
The leftpad incident
Yarn is born
Node 6
2017
npm focuses more on security
Node 8 - 9
HTTP/2
V8 introduces Node in its testing suite, officially making Node a target for
the JS engine, in addition to Chrome
3 billion npm downloads every week
2018
Node 10 - 11
ES modules .mjs experimental support
2019
Node 12 - 13
2020
Node 14 - 15
GitHub (owned by Microsoft) acquired NPM
2021
Node.js 16
Node.js 17
2022
Node.js 18
How to install Node.js
Node.js can be installed in different ways.
Official packages for all the major platforms are available at
https://nodejs.org/en/download/.
There you can choose to download an LTS version (LTS stands for Long
Term Support) or the latest available release. As usual, the latest version
contains the latest goodies.
nvm is a popular way to run Node.js. It allows you to easily switch the Node.js
version, try new versions, and easily roll back if something breaks, for
example.
It is also very useful to test your code with old Node versions.
My suggestion is to use the official installer if you are just starting out and
you don't use Homebrew already. Otherwise, Homebrew is my favorite
solution because I can easily update Node.js by running brew upgrade node .
In any case, when Node.js is installed you'll have access to the node
executable program in the command line.
While learning to code, you might also be confused about where JavaScript
ends and where Node.js begins, and vice versa. I recommend you have a
good grasp of the main JavaScript concepts before diving into Node.js:
Lexical Structure
Expressions
Types
Variables
Functions
this
Arrow Functions
Loops
Loops and Scope
Arrays
Template Literals
Semicolons
Strict Mode
ECMAScript 6, 2016, 2017
With those concepts in mind, you are well on your way to becoming a
proficient JavaScript developer, in both the browser and in Node.js.
Asynchronous programming and callbacks
Timers
Promises
Async and Await
Closures
The Event Loop
Building apps that run in the browser is a completely different thing than
building a Node.js application.
Despite the fact that it's always JavaScript, there are some key differences
that make the experience radically different.
You have a huge opportunity because we know how hard it is to fully, deeply
learn a programming language, and by using the same language to perform
all your work on the web - both on the client and on the server, you're in a
unique position of advantage.
In the browser, most of the time what you are doing is interacting with the
DOM, or other Web Platform APIs like Cookies. Those do not exist in Node,
of course. You don't have the document , window and all the other objects
that are provided by the browser.
And in the browser, we don't have all the nice APIs that Node.js provides
through its modules, like the filesystem access functionality.
Another big difference is that in Node.js you control the environment. Unless
you are building an open source application that anyone can deploy
anywhere, you know which version of Node.js you will run the application on.
Compared to the browser environment, where you don't get the luxury to
choose what browser your visitors will use, this is very convenient.
This means that you can write all the modern ES6-7-8-9 JavaScript that your
Node version supports.
Since JavaScript moves so fast, but browsers can be a bit slow and users a
bit slow to upgrade, sometimes on the web you are stuck using older
JavaScript / ECMAScript releases.
Going forward, ES modules ( import ) are the way to load modules across all
JavaScript, frontend or backend, but Node.js still supports the require
syntax.
The cool thing is that the JavaScript engine is independent of the browser in
which it's hosted. This key feature enabled the rise of Node.js. V8 was
chosen as the engine that powered Node.js back in 2009, and as the
popularity of Node.js exploded, V8 became the engine that now powers an
incredible amount of server-side code written in JavaScript.
Other JS engines
Other browsers have their own JavaScript engine:
- Firefox has SpiderMonkey
- Safari has JavaScriptCore (also called Nitro)
- Edge was originally based on Chakra, but has since been rebuilt on
Chromium and the V8 engine
All those engines implement the ECMA ES-262 standard, also called
ECMAScript, the standard used by JavaScript.
V8 is always evolving, just like the other JavaScript engines around, to speed
up the Web and the Node.js ecosystem.
On the web, there is a race for performance that's been going on for years,
and we (as users and developers) benefit a lot from this competition because
we get faster and more optimized machines year after year.
Compilation
JavaScript is generally considered an interpreted language, but modern
JavaScript engines no longer just interpret JavaScript, they compile it.
This has been happening since 2009, when the SpiderMonkey JavaScript
compiler was added to Firefox 3.5, and everyone followed this idea.
This might seem counter-intuitive, but since the introduction of Google Maps
in 2004, JavaScript has evolved from a language that generally executed a
few dozen lines of code to complete applications with thousands to
hundreds of thousands of lines running in the browser.
Our applications can now run for hours inside a browser, rather than being
just a few form validation rules or simple scripts.
In this new world, compiling JavaScript makes perfect sense because while it
might take a little bit more to have the JavaScript ready, once done it's going
to be much more performant than purely interpreted code.
If your main Node.js application file is app.js , you can call it by typing:
node app.js
Above, you are explicitly telling the shell to run your script with node . You
can also embed this information in your JavaScript file with a "shebang"
line. The "shebang" is the first line in the file, and it tells the OS which
interpreter to use for running the script. For example:
#!/usr/bin/node
Above, we are explicitly giving the absolute path of the interpreter. Not all
operating systems have node in the bin folder, but all should have env .
#!/usr/bin/env node
// your code
To use a shebang, your file should have executable permission. You can give
app.js the executable permission by running:
chmod u+x app.js
While running the command, make sure you are in the same directory that
contains the app.js file.
To restart the application automatically whenever the application files
change, the nodemon module is used. Install nodemon globally with:
npm i -g nodemon
You can also install nodemon as a development dependency:
npm i -D nodemon
This local installation of nodemon can be run by calling it from within an npm
script, such as npm start , or by using npx nodemon .
Then run the application using the nodemon command followed by the file
name:
nodemon app.js
When running a program in the console you can close it with ctrl-C , but
what we want to discuss here is programmatically exiting.
Let's start with the most drastic way, and see why you're better off not using
it.
The process core module provides a handy method that allows you to
programmatically exit from a Node.js program: process.exit() .
When Node.js runs this line, the process is immediately forced to terminate.
This means that any callback that's pending, any network request still being
sent, any filesystem access, or processes writing to stdout or stderr - all
is going to be ungracefully terminated right away.
If this is fine for you, you can pass an integer that signals the operating
system the exit code:
process.exit(1)
By default the exit code is 0 , which means success. Different exit codes
have different meanings, which you might want to use in your own system to
have the program communicate with other programs.
You can read more on exit codes at
https://nodejs.org/api/process.html#process_exit_codes
You can also set the process.exitCode property:
process.exitCode = 1
and when the program ends, Node.js will return that exit code.
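A minimal sketch contrasting the two approaches (the timeout stands in for any pending work):

```javascript
// Some pending asynchronous work:
setTimeout(() => {
  console.log('pending work still ran')
}, 100)

// Soft approach: record the exit code and let the event loop drain.
// The timeout above still fires before the process ends.
process.exitCode = 1

// Hard approach (commented out): calling process.exit(1) here would
// terminate immediately, and the timeout would never fire.
```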
Many times with Node.js we start servers, like this HTTP server:
const express = require('express')
const app = express()
app.get('/', (req, res) => {
  res.send('Hi!')
})
app.listen(3000, () => console.log('Server ready'))
Express is a framework that uses the http module under the hood.
app.listen() returns an instance of http.Server . You would use
https.createServer if you needed to serve your app using HTTPS, as
app.listen only uses the http module.
This program is never going to end. If you call process.exit() , any currently
pending or running request is going to be aborted. This is not nice.
You need to send the command a SIGTERM signal, and handle that with the
process signal handler:
const express = require('express')
const app = express()
app.get('/', (req, res) => {
  res.send('Hi!')
})
const server = app.listen(3000, () => console.log('Server ready'))
process.on('SIGTERM', () => {
  server.close(() => {
    console.log('Process terminated')
  })
})
You can send this signal from inside the program, in another function:
process.kill(process.pid, 'SIGTERM')
Or from another Node.js running program, or any other app running in your
system that knows the PID of the process you want to terminate.
The process core module of Node.js provides the env property which
hosts all the environment variables that were set at the moment the process
was started.
The below code runs app.js and sets USER_ID and USER_KEY :
USER_ID=239482 USER_KEY=foobar node app.js
That will pass the USER_ID as 239482 and the USER_KEY as foobar.
This is suitable for testing, however for production, you will probably be
configuring some bash scripts to export variables.
process.env.USER_ID // "239482"
process.env.USER_KEY // "foobar"
In the same way you can access any custom environment variable you set.
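Environment variables always come through as strings (or undefined when unset), so a defensive-reading sketch — USER_ID and USER_KEY are the illustrative names from above:

```javascript
// process.env values are strings; undefined when the variable is not set.
const userId = process.env.USER_ID || 'anonymous'
const userKey = process.env.USER_KEY || ''

console.log(typeof userId) // always 'string'
```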
If you have multiple environment variables in your node project, you can also
create a .env file in the root directory of your project, and then use the
dotenv package to load them during runtime.
# .env file
USER_ID="239482"
USER_KEY="foobar"
NODE_ENV="development"
In your js file:
require('dotenv').config()
process.env.USER_ID // "239482"
process.env.USER_KEY // "foobar"
process.env.NODE_ENV // "development"
You can also run your js file with node -r dotenv/config index.js
I will list the options from the simplest and most constrained to the more
complex and powerful.
Using a local tunnel such as ngrok , you can expose a local port to the
internet. This option is suited for quick testing, demoing a product, or
sharing an app with a very small group of people.
Using it, you can just type ngrok PORT and the PORT you want is exposed
to the internet. You will get a ngrok.io domain, but with a paid subscription
you can get a custom URL as well as more security options (remember that
you are opening your machine to the public internet).
Glitch
Glitch is a playground and a way to build your apps faster than ever, and see
them live on their own glitch.com subdomain. You cannot currently have a
custom domain, and there are a few restrictions in place, but it's really great
to prototype. It looks fun (and this is a plus), and it's not a dumbed-down
environment - you get all the power of Node.js, a CDN, secure storage for
credentials, GitHub import/export and much more.
Codepen
Codepen is an amazing platform and community. You can create a project
with multiple files, and deploy it with a custom domain.
Serverless
A way to publish your apps, and have no server at all to manage, is
Serverless. Serverless is a paradigm where you publish your apps as
functions, and they respond on a network endpoint (also called FAAS -
Functions As A Service).
Serverless Framework
Standard Library
PAAS
PAAS stands for Platform As A Service. These platforms take away a lot of
things you should otherwise worry about when deploying your application.
Zeit Now
Zeit is now called Vercel
Zeit is an interesting option. You just type now in your terminal, and it takes
care of deploying your application. There is a free version with limitations,
and the paid version is more powerful. You forget that there's a server, you
just deploy the app.
Nanobox
Heroku
Heroku is an amazing platform.
Microsoft Azure
Azure is the Microsoft Cloud offering.
VPS
In this section you find the usual suspects, ordered from more user friendly to
less user friendly:
Digital Ocean
Linode
Amazon Web Services; in particular I mention Amazon Elastic Beanstalk,
as it abstracts away a little bit of the complexity of AWS.
Since they provide an empty Linux machine on which you can work, there is
no specific tutorial for these.
There are lots more options in the VPS category, those are just the ones I
used and I would recommend.
Bare metal
Another solution is to get a bare metal server, install a Linux distribution, and
connect it to the internet (or rent one monthly, as you can do using the Vultr
Bare Metal service).
The node command is the one we use to run our Node.js scripts:
node script.js
If we run the node command without any script to execute or without any
arguments, we start a REPL session:
node
Note: REPL stands for Read Evaluate Print Loop, and it is a
programming language environment (basically a console window) that
takes single expression as user input and returns the result back to the
console after execution. The REPL session provides a convenient way
to quickly test simple JavaScript code.
❯ node
>
The command stays in idle mode and waits for us to enter something.
Tip: if you are unsure how to open your terminal, google "How to open
terminal on your-operating-system".
> console.log('test')
test
undefined
>
The first value, test , is the output we told the console to print, then we get
undefined which is the return value of running console.log() . Node read
this line of code, evaluated it, printed the result, and then went back to
waiting for more lines of code. Node will loop through these three steps for
every piece of code we execute in the REPL until we exit the session. That is
where the REPL got its name.
Node automatically prints the result of any line of JavaScript code without the
need to instruct it to do so. For example, type in the following line and press
enter:
> 5 === '5'
false
>
Note the difference in the outputs of the above two lines. The Node REPL
printed undefined after executing console.log() , while on the other hand, it
just printed the result of 5 === '5' . You need to keep in mind that the
former is just a statement in JavaScript, and the latter is an expression.
In some cases, the code you want to test might need multiple lines. For
example, say you want to define a function that generates a random number,
in the REPL session type in the following line and press enter:
function generateRandom() {
...
The Node REPL is smart enough to determine that you are not done writing
your code yet, and it will go into a multi-line mode for you to type in more
code. Now finish your function definition and press enter:
function generateRandom() {
...return Math.random()
}
undefined
Node will get out of the multi-line mode, and print undefined since there is
no value returned. This multi-line mode is limited. Node offers a more
featured editor right inside the REPL. We discuss it below under Dot
commands.
As you write your code, if you press the tab key the REPL will try to
autocomplete what you wrote to match a variable you already defined or a
predefined one.
Try entering the name of a JavaScript class, like Number , add a dot and
press tab : the REPL will print all the properties and methods you can
access on that class.
The _ special variable
If after some code you type _ , that is going to print the result of the last
operation.
Dot commands
The REPL has some special commands, all starting with a dot . . They are:
- .help : shows the dot commands help
- .editor : enables editor mode, to write multiline JavaScript code with
ease. Once you are in this mode, enter ctrl-D to run the code you wrote.
- .break : when inputting a multi-line expression, entering the .break
command aborts further input. Same as pressing ctrl-C.
- .clear : resets the REPL context to an empty object and clears any
multi-line expression currently being input
- .load : loads a JavaScript file, relative to the current working directory
- .save : saves all you entered in the REPL session to a file (specify the
filename)
- .exit : exits the REPL (same as pressing ctrl-C two times)
The REPL knows when you are typing a multi-line statement without the
need to invoke .editor .
For example, if you start typing something like
[1, 2, 3].forEach((num) => {
and you press enter , the REPL will go to a new line that starts with 3 dots,
indicating you can now continue to work on that block:
... console.log(num)
... })
If you type .break at the end of a line, the multiline mode will stop and the
statement will not be executed.
Node.js comes with a repl module that we can use to start a custom REPL.
Using the repl variable we can perform various operations. To start the REPL
command prompt, type in the following lines in a file (for example repl.js ):
const repl = require('repl')
repl.start()
Then run the file in the command line:
node repl.js
> const n = 10
You can pass a string which shows when the REPL starts. The default is '> '
(with a trailing space), but we can define a custom prompt:
const local = repl.start('$ ')
You can add an exit event handler, fired when the REPL session is exited:
local.on('exit', () => {
  console.log('exiting repl')
  process.exit()
})
You can pass any number of arguments when invoking a Node.js
application, using:
node app.js
Arguments can be standalone or have a key and a value.
For example:
node app.js joe
or
node app.js name=joe
This changes how you will retrieve this value in the Node.js code.
The way you retrieve it is using the process object built into Node.js. It
exposes an argv property, which is an array that contains all the command
line invocation arguments.
The first element is the full path of the node command. The second element
is the full path of the file being executed. All the additional arguments are
present from the third position going forward.
You can iterate over all the arguments (including the node path and the file
path) using a loop:
You can get only the additional arguments by creating a new array that
excludes the first 2 params:
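The two approaches just described can be sketched as:

```javascript
// Iterate over all the arguments, including the node binary path (index 0)
// and the script path (index 1):
process.argv.forEach((val, index) => {
  console.log(`${index}: ${val}`)
})

// Keep only the additional arguments by dropping the first 2 entries:
const args = process.argv.slice(2)
console.log(args)
```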
If the argument is standalone, like in node app.js joe , you can access it
using process.argv .
In the key=value case ( node app.js name=joe ):
args[0] is name=joe , and you need to parse it. The best way to do so is by
using the minimist library, which helps dealing with arguments:
const args = require('minimist')(process.argv.slice(2))
args.name // joe
Install the required minimist package using npm (lesson about the
package manager comes later on).
This time you need to use double dashes before each argument name:
node app.js --name=joe
Node.js provides a console module with many useful ways to interact with
the command line. It is basically the same as the console object you find in
the browser.
The most basic and most used method is console.log() , which prints the
string you pass to it to the console.
const x = 'x'
const y = 'y'
console.log(x, y)
You can also format pretty phrases by passing variables and a format
specifier. For example:
console.log('My %s has %d ears', 'cat', 2)
%s formats a variable as a string and %d formats it as a number, while %o
prints an object representation. Example:
console.log('%o', Number)
Counting elements
console.count() is a handy method: it counts the number of times a string is
printed, and prints the count next to it:
const x = 1
const y = 2
const z = 3
console.count(
'The value of x is ' + x + ' and has been checked .. how many times?'
)
console.count(
'The value of x is ' + x + ' and has been checked .. how many times?'
)
console.count(
'The value of y is ' + y + ' and has been checked .. how many times?'
)
Reset counting
The console.countReset() method resets counter used with console.count().
const oranges = ['orange', 'orange']
const apples = ['just one apple']
oranges.forEach((fruit) => {
console.count(fruit)
})
apples.forEach((fruit) => {
console.count(fruit)
})
console.countReset('orange')
oranges.forEach((fruit) => {
console.count(fruit)
})
console.trace() prints the stack trace of the point where it is called. For
example:
const function2 = () => console.trace()
const function1 = () => function2()
function1()
This will print the stack trace. This is what's printed if we try this in the
Node.js REPL:
Trace
at function2 (repl:1:33)
at function1 (repl:1:25)
at repl:1:1
at ContextifyScript.Script.runInThisContext (vm.js:44:33)
at REPLServer.defaultEval (repl.js:239:29)
at bound (domain.js:301:14)
at REPLServer.runBound [as eval] (domain.js:314:12)
at REPLServer.onLine (repl.js:440:10)
at emitOne (events.js:120:20)
at REPLServer.emit (events.js:210:7)
console.error prints to the stderr stream instead. It will not appear in the
console, but it will appear in the error log.
You can color the output of your text in the console by using escape
sequences. An escape sequence is a set of characters that identifies a color.
Example:
console.log('\x1b[33m%s\x1b[0m', 'hi!')
You can try that in the Node.js REPL, and it will print hi! in yellow.
However, this is the low-level way to do this. The simplest way to go about
coloring the console output is by using a library. Chalk is such a library, and
in addition to coloring it also helps with other styling facilities, like making text
bold, italic or underlined.
You install it with npm install chalk@4 , then you can use it:
const chalk = require('chalk')
console.log(chalk.yellow('hi!'))
Check the project link posted above for more usage examples.
There are many packages to create a progress bar in the console; progress
is one of them. Install it with npm install progress . This snippet creates a
10-step progress bar, and every 100ms one step is completed. When the bar
completes we clear the interval:
const ProgressBar = require('progress')

const bar = new ProgressBar(':bar', { total: 10 })
const timer = setInterval(() => {
  bar.tick()
  if (bar.complete) {
    clearInterval(timer)
  }
}, 100)
A Node.js file can import functionality exposed by another Node.js file using
require :
const library = require('./library')
to import the functionality exposed in the library.js file that resides in the
current file folder.
In this file, functionality must be exposed before it can be imported by other
files. Any other object or variable defined in the file by default is private and
not exposed to the outer world.
This is what the module.exports API offered by the module system allows
us to do.
The first is to assign an object to module.exports , which is an object
provided out of the box by the module system, and this will make your file
export just that object:
// car.js
const car = {
brand: 'Ford',
model: 'Fiesta',
}
module.exports = car
// index.js
const car = require('./car')
The second way is to add the exported object as a property of exports . This
way allows you to export multiple objects, functions or data:
const car = {
brand: 'Ford',
model: 'Fiesta',
}
exports.car = car
or directly
exports.car = {
brand: 'Ford',
model: 'Fiesta',
}
And in the other file, you'll use it by referencing a property of your import:
const items = require('./car')
const car = items.car
or you can use a destructuring assignment:
const { car } = require('./car')
What's the difference between module.exports and exports ? The former
exposes the object it points to. The latter exposes the properties of the
object it points to.
require will always return the object that module.exports points to.
// car.js
exports.car = {
brand: 'Ford',
model: 'Fiesta',
}
module.exports = {
brand: 'Tesla',
model: 'Model S',
}
// app.js
const tesla = require('./car')
const ford = require('./car').car
console.log(tesla, ford)
This will print { brand: 'Tesla', model: 'Model S' } undefined since the
require function's return value has been updated to the object that
module.exports points to, so the property that exports added can't be
accessed.
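This rule can be sketched in a single file by mimicking what the module wrapper does — module_ and exports_ below are illustrative stand-ins for the real module and exports variables:

```javascript
// The wrapper gives every file a `module` object and an `exports`
// alias that initially points at the same object:
const module_ = { exports: {} }
let exports_ = module_.exports

// `exports.car = ...` attaches to that shared object...
exports_.car = { brand: 'Ford', model: 'Fiesta' }

// ...but reassigning module.exports replaces the object that
// require() will return, orphaning what was attached via `exports`:
module_.exports = { brand: 'Tesla', model: 'Model S' }

// require('./car') returns module.exports:
const required = module_.exports
console.log(required) // { brand: 'Tesla', model: 'Model S' }
console.log(required.car) // undefined
```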
npm is the standard package manager for Node.js.
In January 2017 over 350000 packages were reported being listed in the
npm registry, making it the biggest single language code repository on Earth,
and you can be sure there is a package for (almost!) everything.
Yarn and pnpm are alternatives to npm cli. You can check them out as
well.
Downloads
npm manages downloads of dependencies of your project. If a project has a
package.json file, running
npm install
will install everything the project needs, in the node_modules folder. You
can also install a specific package by running npm install <package-name> .
Often you'll see more flags added to this command:
- -D or --save-dev : installs and adds the entry to the package.json file
devDependencies
- --no-save : installs but does not add the entry to the package.json file
dependencies
- --save-optional : installs and adds the entry to the package.json file
optionalDependencies
- --no-optional : will prevent optional dependencies from being installed
Shorthands of the flags can also be used:
- -S : --save
- -D : --save-dev
- -O : --save-optional
Updating packages
Updating is also made easy, by running
npm update
npm will check all packages for a newer version that satisfies your
versioning constraints.
Versioning
In addition to plain downloads, npm also manages versioning, so you can
specify any specific version of a package, or require a version higher or
lower than what you need.
Many times you'll find that a library is only compatible with a major release of
another library.
In all those cases, versioning helps a lot, and npm follows the semantic
versioning (semver) standard.
Running Tasks
The package.json file supports a format for specifying command line tasks
that can be run by using
npm run <task-name>
For example:
{
"scripts": {
"start-dev": "node lib/server-development",
"start": "node lib/server-production"
}
}
{
"scripts": {
"watch": "webpack --watch --progress --colors --config webpack.conf.js",
"dev": "webpack --progress --colors --config webpack.conf.js",
"prod": "NODE_ENV=production webpack -p --config webpack.conf.js"
}
}
When you install a package using npm, you can perform two types of
installation:
- a local install
- a global install
By default, when you type an npm install command, like npm install
lodash , the package is installed in the current file tree, under the
node_modules subfolder.
As this happens, npm also adds the lodash entry in the dependencies
property of the package.json file present in the current folder.
A global installation is performed using the -g flag, like npm install -g
lodash . When this happens, npm won't install the package under the local
folder, but instead, it will use a global location.
Where, exactly?
The npm root -g command will tell you where that exact location is on your
machine.
If you use nvm to manage Node.js versions, however, that location would
differ.
For example, if your username is 'joe' and you use nvm , then the packages
location will show as
/Users/joe/.nvm/versions/node/v8.9.0/lib/node_modules .
Say you install lodash , the popular JavaScript utility library, using
npm install lodash
To use it in your code, you just need to import it into your program using
require :
const _ = require('lodash')
If the package provides an executable - as the cowsay package does - npm
will put the executable file under the node_modules/.bin/ folder: a hidden
folder that contains symbolic links to the package binaries.
How do you execute those? You can type ./node_modules/.bin/cowsay to
run them, but npx , included in recent versions of npm, is a much better
option: just run npx cowsay .
If you work with JavaScript, or you've ever interacted with a JavaScript
project, Node.js or a frontend project, you surely met the package.json file.
What's that for? What should you know about it, and what are some of the
cool things you can do with it?
The package.json file is kind of a manifest for your project. It can do a lot of
things, completely unrelated. It's a central repository of configuration for
tools, for example. It's also where npm and yarn store the names and
versions for all the installed packages.
The file structure
Here's an example package.json file:
{}
It's empty! There are no fixed requirements of what should be in a
package.json file for an application. The only requirement is that it respects
the JSON format.
If you're building a Node.js package that you want to distribute over npm ,
things change radically, and you must have a set of properties that will help
other people use it. We'll see more about this later on.
{
"name": "test-project"
}
It defines a name property, which tells the name of the app, or package,
that's contained in the same folder where this file lives.
Here's a much more complex example, which was extracted from a sample
Vue.js application:
{
"name": "test-project",
"version": "1.0.0",
"description": "A Vue.js project",
"main": "src/main.js",
"private": true,
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.
"start": "npm run dev",
"unit": "jest --config test/unit/jest.conf.js --coverage",
"test": "npm run unit",
"lint": "eslint --ext .js,.vue src test/unit",
"build": "node build/build.js"
},
"dependencies": {
"vue": "^2.5.2"
},
"devDependencies": {
"autoprefixer": "^7.1.2",
"babel-core": "^6.22.1",
"babel-eslint": "^8.2.1",
"babel-helper-vue-jsx-merge-props": "^2.0.3",
"babel-jest": "^21.0.2",
"babel-loader": "^7.1.1",
"babel-plugin-dynamic-import-node": "^1.2.0",
"babel-plugin-syntax-jsx": "^6.18.0",
"babel-plugin-transform-es2015-modules-commonjs": "^6.26.0",
"babel-plugin-transform-runtime": "^6.22.0",
"babel-plugin-transform-vue-jsx": "^3.5.0",
"babel-preset-env": "^1.3.2",
"babel-preset-stage-2": "^6.22.0",
"chalk": "^2.0.1",
"copy-webpack-plugin": "^4.0.1",
"css-loader": "^0.28.0",
"eslint": "^4.15.0",
"eslint-config-airbnb-base": "^11.3.0",
"eslint-friendly-formatter": "^3.0.0",
"eslint-import-resolver-webpack": "^0.8.3",
"eslint-loader": "^1.7.1",
"eslint-plugin-import": "^2.7.0",
"eslint-plugin-vue": "^4.0.0",
"extract-text-webpack-plugin": "^3.0.0",
"file-loader": "^1.1.4",
"friendly-errors-webpack-plugin": "^1.6.1",
"html-webpack-plugin": "^2.30.1",
"jest": "^22.0.4",
"jest-serializer-vue": "^0.3.0",
"node-notifier": "^5.1.2",
"optimize-css-assets-webpack-plugin": "^3.2.0",
"ora": "^1.2.0",
"portfinder": "^1.0.13",
"postcss-import": "^11.0.0",
"postcss-loader": "^2.0.8",
"postcss-url": "^7.2.1",
"rimraf": "^2.6.0",
"semver": "^5.3.0",
"shelljs": "^0.7.6",
"uglifyjs-webpack-plugin": "^1.1.1",
"url-loader": "^0.5.8",
"vue-jest": "^1.0.2",
"vue-loader": "^13.3.0",
"vue-style-loader": "^3.0.1",
"vue-template-compiler": "^2.5.2",
"webpack": "^3.6.0",
"webpack-bundle-analyzer": "^2.9.0",
"webpack-dev-server": "^2.9.1",
"webpack-merge": "^4.1.0"
},
"engines": {
"node": ">= 6.0.0",
"npm": ">= 3.0.0"
},
"browserslist": ["> 1%", "last 2 versions", "not ie <= 8"]
}
- engines : sets which versions of Node.js this package/app works on
- browserslist : tells which browsers (and their versions) you want to
support
All those properties are used by either npm or other tools that we can use.
Properties breakdown
This section describes the properties you can use in detail. We refer to
"package" but the same thing applies to local applications which you do not
use as packages.
name
Sets the package name.
Example:
"name": "test-project"
The name must be less than 214 characters, must not have spaces, and can
only contain lowercase letters, hyphens ( - ) or underscores ( _ ).
This is because when a package is published on npm , it gets its own URL
based on this property.
If you published this package publicly on GitHub, a good value for this
property is the GitHub repository name.
author
Lists the package author name
Example:
{
"author": "Joe <[email protected]> (https://whatever.com)"
}
{
"author": {
"name": "Joe",
"email": "[email protected]",
"url": "https://whatever.com"
}
}
contributors
As well as the author, the project can have one or more contributors. This
property is an array that lists them.
Example:
{
"contributors": ["Joe <[email protected]> (https://whatever.com)"]
}
{
"contributors": [
{
"name": "Joe",
"email": "[email protected]",
"url": "https://whatever.com"
}
]
}
bugs
Links to the package issue tracker, most likely a GitHub issues page
Example:
{
"bugs": "https://github.com/whatever/package/issues"
}
homepage
Sets the package homepage
Example:
{
"homepage": "https://whatever.com/package"
}
version
Indicates the current version of the package.
Example:
"version": "1.0.0"
This property follows the semantic versioning (semver) notation for versions,
which means the version is always expressed with 3 numbers: x.x.x .
The first number is the major version, the second the minor version and the
third is the patch version.
license
Indicates the license of the package.
Example:
"license": "MIT"
keywords
This property contains an array of keywords that relate to what your package does.
Example:
"keywords": [
"email",
"machine learning",
"ai"
]
This helps people find your package when navigating similar packages, or
when browsing the https://www.npmjs.com/ website.
description
This property contains a brief description of the package.

Example:

"description": "A package to work with strings"
repository
This property specifies where this package repository is located.
Example:
"repository": "github:whatever/testing",
Notice the github prefix. There are other popular services baked in:
"repository": "gitlab:whatever/testing",
"repository": "bitbucket:whatever/testing",
"repository": {
"type": "git",
"url": "https://github.com/whatever/testing.git"
}
"repository": {
"type": "svn",
"url": "..."
}
main
Sets the entry point for the package.
When you import this package in an application, that's where the application
will search for the module exports.
Example:
"main": "src/main.js"
private

If set to true , it prevents the app/package from being accidentally published on npm .
Example:
"private": true
scripts
Defines a set of node scripts you can run
Example:
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.de
"start": "npm run dev",
"unit": "jest --config test/unit/jest.conf.js --coverage",
"test": "npm run unit",
"lint": "eslint --ext .js,.vue src test/unit",
"build": "node build/build.js"
}
These scripts are command line applications. You can run them by calling
npm run XXXX or yarn XXXX , where XXXX is the command name. Example:
npm run dev .
You can use any name you want for a command, and scripts can do literally
anything you want.
dependencies
Sets a list of npm packages installed as dependencies.
When you install a package using

npm install <PACKAGENAME>

or

yarn add <PACKAGENAME>

the package is automatically inserted in this list.
Example:
"dependencies": {
"vue": "^2.5.2"
}
devDependencies
Sets a list of npm packages installed as development dependencies.
They differ from dependencies because they are meant to be installed only on a development machine, and are not needed to run the code in production.
Example:
"devDependencies": {
"autoprefixer": "^7.1.2",
"babel-core": "^6.22.1"
}
engines
Sets which versions of Node.js and other commands this package/app works on.
Example:
"engines": {
"node": ">= 6.0.0",
"npm": ">= 3.0.0",
"yarn": "^0.13.0"
}
browserslist
This property is used to tell which browsers (and their versions) you want to support. It's referenced by Babel, Autoprefixer, and other tools, to only add the polyfills and fallbacks needed for the browsers you target.
Example:
"browserslist": [
"> 1%",
"last 2 versions",
"not ie <= 8"
]
This configuration means you want to support the last 2 major versions of all browsers with at least 1% of usage (from the CanIUse.com stats), except IE8 and lower.
Command-specific properties
The package.json file can also host command-specific configuration, for
example for Babel, ESLint, and more.
Each has a specific property, like eslintConfig , babel and others. Those
are command-specific, and you can find how to use those in the respective
command/project documentation.
Package versions
You have seen in the description above version numbers like these: ~3.0.0
or ^0.13.0 . What do they mean, and which other version specifiers can you
use?
That symbol specifies which updates your package accepts, from that
dependency.
Given that using semver (semantic versioning) all versions have 3 digits, the first being the major release, the second the minor release and the third the patch release, you have these rules:
You can combine most of the versions in ranges, like this: 1.0.0 || >=1.1.0
<1.2.0 , to either use 1.0.0 or one release from 1.1.0 up, but lower than
1.2.0.
In version 5, npm introduced the package-lock.json file. What's that? You probably know about the package.json file, which is much more common and has been around for much longer.
if you write ~0.13.0 , you want to only update patch releases: 0.13.1 is ok, but 0.14.0 is not
if you write ^1.13.0 , you will get patch and minor releases: 1.13.1 , 1.14.0 and so on, but not 2.0.0
if you write 0.13.0 , that is the exact version that will be used, always
You don't commit to Git your node_modules folder, which is generally huge,
and when you try to replicate the project on another machine by using the
npm install command, if you specified the ~ syntax and a patch release
of a package has been released, that one is going to be installed. Same for
^ and minor releases.
If you specify exact versions, like 0.13.0 in the example, you are not
affected by this problem.
The npm install command could be run by you, or by another person trying to initialize the project on the other side of the world.
So your original project and the newly initialized project are actually different.
Even if a patch or minor release should not introduce breaking changes, we
all know bugs can (and so, they will) slide in.
An example
This is an example structure of a package-lock.json file we get when we run
npm install cowsay in an empty folder:
{
  "requires": true,
  "lockfileVersion": 1,
  "dependencies": {
    "ansi-regex": {
      "version": "3.0.0",
      "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz",
      "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg="
    },
    "cowsay": {
      "version": "1.3.1",
      "resolved": "https://registry.npmjs.org/cowsay/-/cowsay-1.3.1.tgz",
      "integrity": "sha512-3PVFe6FePVtPj1HTeLin9v8WyLl+VmM1l1H/5P+BTTDkMAjufp+0F9eLjzRnOHzVAYeIYFF5po5NjRrgefnRMQ==",
      "requires": {
        "get-stdin": "^5.0.1",
        "optimist": "~0.6.1",
        "string-width": "~2.1.1",
        "strip-eof": "^1.0.0"
      }
    },
    "get-stdin": {
      "version": "5.0.1",
      "resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-5.0.1.tgz",
      "integrity": "sha1-Ei4WFZHiH/TFJTAwVpPyDmOTo5g="
    },
    "is-fullwidth-code-point": {
      "version": "2.0.0",
      "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-2.0.0.tgz",
      "integrity": "sha1-o7MKXE8ZkYMWeqq5O+764937ZU8="
    },
    "minimist": {
      "version": "0.0.10",
      "resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz",
      "integrity": "sha1-3j+YVD2/lggr5IrRoMfNqDYwHc8="
    },
    "optimist": {
      "version": "0.6.1",
      "resolved": "https://registry.npmjs.org/optimist/-/optimist-0.6.1.tgz",
      "integrity": "sha1-2j6nRob6IaGaERwybpDrFaAZZoY=",
      "requires": {
        "minimist": "~0.0.1",
        "wordwrap": "~0.0.2"
      }
    },
    "string-width": {
      "version": "2.1.1",
      "resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz",
      "integrity": "sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLN",
      "requires": {
        "is-fullwidth-code-point": "^2.0.0",
        "strip-ansi": "^4.0.0"
      }
    },
    "strip-ansi": {
      "version": "4.0.0",
      "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz",
      "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=",
      "requires": {
        "ansi-regex": "^3.0.0"
      }
    },
    "strip-eof": {
      "version": "1.0.0",
      "resolved": "https://registry.npmjs.org/strip-eof/-/strip-eof-1.0.0.tgz",
      "integrity": "sha1-u0P/VZim6wXYm1n80SnJgzE2Br8="
    },
    "wordwrap": {
      "version": "0.0.3",
      "resolved": "https://registry.npmjs.org/wordwrap/-/wordwrap-0.0.3.tgz",
      "integrity": "sha1-o9XabNXAvAAI03I0u68b7WMFkQc="
    }
  }
}
In this file, we installed cowsay , which depends on these packages:

get-stdin
optimist
string-width
strip-eof
In turn, those packages require other packages, as we can see from the
requires property that some have:
ansi-regex
is-fullwidth-code-point
minimist
wordwrap
strip-eof
They are added in alphabetical order into the file, and each one has a
version field, a resolved field that points to the package location, and an
integrity string that we can use to verify the package.
npm list

To see the versions of all installed npm packages, including their dependencies, run npm list . For example:
❯ npm list
/Users/joe/dev/node/cowsay
└─┬ [email protected]
├── [email protected]
├─┬ [email protected]
│ ├── [email protected]
│ └── [email protected]
├─┬ [email protected]
│ ├── [email protected]
│ └─┬ [email protected]
│ └── [email protected]
└── [email protected]
You can also just open the package-lock.json file, but this involves some
visual scanning.
npm list -g is the same, but for globally installed packages.
To get only your top-level packages (basically, the ones you told npm to
install and you listed in the package.json ), run npm list --depth=0 :
You can get the version of a specific package by specifying its name, for example npm list cowsay :
If you want to see what's the latest available version of the package on the
npm repository, run npm view [package_name] version :
1.3.1
You can install an old version of an npm package using the @ syntax:

npm install <package>@<version>

Example:

npm install [email protected]
You might also be interested in listing all the previous versions of a package.
You can do it with npm view <package> versions :
❯ npm view cowsay versions
[ '1.0.0',
'1.0.1',
'1.0.2',
'1.0.3',
'1.1.0',
'1.1.1',
'1.1.2',
'1.1.3',
'1.1.4',
'1.1.5',
'1.1.6',
'1.1.7',
'1.1.8',
'1.1.9',
'1.2.0',
'1.2.1',
'1.3.0',
'1.3.1' ]
npm determines the dependencies and installs their latest versions as well.
Let's say you install cowsay , a nifty command-line tool that lets you make a
cow say things.
When you run npm install cowsay , this entry is added to the package.json
file:
{
"dependencies": {
"cowsay": "^1.3.1"
}
}
The package-lock.json file is also updated, recording the exact version installed:

{
"requires": true,
"lockfileVersion": 1,
"dependencies": {
"cowsay": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/cowsay/-/cowsay-1.3.1.tgz",
"integrity": "sha512-3PVFe6FePVtPj1HTeLin9v8WyLl+VmM1l1H/5P+BTTDkMAj
"requires": {
"get-stdin": "^5.0.1",
"optimist": "~0.6.1",
"string-width": "~2.1.1",
"strip-eof": "^1.0.0"
}
}
}
}
Now those 2 files tell us that we installed version 1.3.1 of cowsay, and our
npm versioning rule for updates is ^1.3.1 . This means npm can update to
patch and minor releases: 1.3.2 , 1.4.0 and so on.
If there is a new minor or patch release and we type npm update , the installed version is updated, and the package-lock.json file is diligently updated with the new version.
Since npm version 5.0.0, npm update updates package.json with newer
minor or patch versions. Use npm update --no-save to prevent modifying
package.json .
You can discover new releases of the packages by running npm outdated . Some of those updates may be major releases. Running npm update won't help here: major releases are never updated in this way because they (by definition) introduce breaking changes, and npm wants to save you trouble.

To update all the packages to their latest major version, install the npm-check-updates package globally:

npm install -g npm-check-updates

then run:

ncu -u
This upgrades all the version hints in the package.json file. Finally, run a standard install:
npm install
When you make a new release, you don't just up a number as you please,
but you have rules:
you up the major version when you make incompatible API changes
you up the minor version when you add functionality in a backward-
compatible manner
you up the patch version when you make backward-compatible bug
fixes
npm sets some rules we can use in the package.json file to choose which versions it can update our packages to, when we run npm update . The rules use these symbols:
^
~
>
>=
<
<=
=
-
||
^ : It will only do updates that do not change the leftmost non-zero number, i.e. there can be changes in the minor or patch version, but not in the major version. If you write ^13.1.0 , when running npm update it can update to 13.2.0 , 13.3.0 , even 13.3.1 , 13.3.2 and so on, but not to 14.0.0 or above.
~ : if you write ~0.13.0 when running npm update it can update to
patch releases: 0.13.1 is ok, but 0.14.0 is not.
> : you accept any version higher than the one you specify
>= : you accept any version equal to or higher than the one you specify
<= : you accept any version equal to or lower than the one you specify
< : you accept any version lower than the one you specify
= : you accept that exact version
- : you accept a range of versions. Example: 2.1.0 - 2.6.2
You can combine some of those notations, for example use 1.0.0 ||
>=1.1.0 <1.2.0 to either use 1.0.0 or one release from 1.1.0 up, but lower
than 1.2.0.
no symbol: you accept only that specific version you specify ( 1.2.1 )
latest : you want to use the latest version available
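The two most common rules can be sketched in code (a minimal illustration, assuming plain x.y.z versions with no prerelease tags; caretSatisfies and tildeSatisfies are hypothetical helper names, not npm APIs — real tooling uses the semver package):

```javascript
const parse = (v) => v.split('.').map(Number)

// ^ rule: the candidate must keep the leftmost non-zero number of the base
function caretSatisfies(base, candidate) {
  const [bMaj, bMin, bPat] = parse(base)
  const [cMaj, cMin, cPat] = parse(candidate)
  if (bMaj !== 0) return cMaj === bMaj && (cMin > bMin || (cMin === bMin && cPat >= bPat))
  if (bMin !== 0) return cMaj === 0 && cMin === bMin && cPat >= bPat
  return cMaj === 0 && cMin === 0 && cPat === bPat
}

// ~ rule: only patch-level changes are allowed
function tildeSatisfies(base, candidate) {
  const [bMaj, bMin, bPat] = parse(base)
  const [cMaj, cMin, cPat] = parse(candidate)
  return cMaj === bMaj && cMin === bMin && cPat >= bPat
}

console.log(caretSatisfies('13.1.0', '13.3.2')) // true
console.log(caretSatisfies('13.1.0', '14.0.0')) // false
console.log(tildeSatisfies('0.13.0', '0.13.1')) // true
console.log(tildeSatisfies('0.13.0', '0.14.0')) // false
```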
Uninstalling npm packages with npm
uninstall
To uninstall a package you have previously installed locally, run npm uninstall <package-name> from the project root folder (the folder that contains the node_modules folder).

This will update the dependencies , devDependencies , optionalDependencies and peerDependencies entries in the package.json file.

Use the --no-save option if you don't want to update the package.json and package-lock.json files.
If the package is installed globally, you need to add the -g / --global
flag:
for example, to remove a globally installed webpack :

npm uninstall -g webpack

You can run this command from anywhere on your system, because for global packages the current directory doesn't matter.
local packages are installed in the directory where you run npm install
<package-name> , and they are put in the node_modules folder under this
directory
global packages are all put in a single place in your system (exactly
where depends on your setup), regardless of where you run npm
install -g <package-name>
In your code, you can only require local packages:

require('package-name')
This makes sure you can have dozens of applications on your computer, all running a different version of each package if needed.
Updating a global package would make all your projects use the new
release, and as you can imagine this might cause nightmares in terms of
maintenance, as some packages might break compatibility with further
dependencies, and so on.
All projects have their own local version of a package, even if this might
appear like a waste of resources, it's minimal compared to the possible
negative consequences.
You can also install executable commands locally and run them using npx,
but some packages are just better installed globally.
Great examples of popular global packages which you might know are
npm
vue-cli
grunt-cli
mocha
react-native-cli
gatsby-cli
forever
nodemon
You probably have some packages installed globally already on your system.
You can see them by running npm list -g --depth 0 on your command line.

When you install a package using npm install <package-name> , it is installed as a regular dependency and automatically listed in the package.json file, under the dependencies list (as of npm 5: before, you had to manually specify the -S / --save flag).
When you add the -D flag, or --save-dev , you are installing it as a
development dependency, which adds it to the devDependencies list.
When you go in production, if you type npm install and the folder contains a package.json file, both sets of packages are installed, since npm assumes this is a development deploy.

You need to set the --production flag ( npm install --production ) to avoid installing those development dependencies.
If you don't want to install npm, you can install npx as a standalone
package
npx lets you run code built with Node.js and published through the npm
registry.
Node.js developers used to publish most executable commands as global packages, so that they were in the path and immediately runnable. This was a pain because you could not really install different versions of the same command.
cowsay "Hello" will print
_______
< Hello >
-------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
This only works if you have the cowsay command globally installed from
npm previously. Otherwise you'll get an error when you try to run the
command.
npx allows you to run that npm command without installing it first. If the
command isn't found, npx will install it into a central cache:
running the vue CLI tool to create new applications and run them: npx @vue/cli create my-vue-app
You can also run code with a different Node.js version, using @ to specify it, for example npx node@10 -e "console.log(process.version)" . This helps to avoid tools like nvm or the other Node.js version management tools.
You can run code that sits in a GitHub gist, for example:
npx https://gist.github.com/zkat/4bc19503fe9e9309e2bfaa2c58074d32
Of course, you need to be careful when running code that you do not control,
as with great power comes great responsibility.
The Node.js JavaScript code runs on a single thread. There is just one thing
happening at a time.
This is a limitation that's actually very helpful, as it greatly simplifies how you program, without worrying about concurrency issues.
You just need to pay attention to how you write your code and avoid anything
that could block the thread, like synchronous network calls or infinite loops.
In general, in most browsers there is an event loop for every browser tab, to
make every process isolated and avoid a web page with infinite loops or
heavy processing to block your entire browser.
You mainly need to be concerned that your code will run on a single event loop, and write code with this in mind to avoid blocking it.
The event loop continuously checks the call stack to see if there's any
function that needs to run.
While doing so, it adds any function call it finds in the call stack and executes
each one in order.
You might be familiar with the error stack trace shown in the debugger or in the browser console: the browser looks up the function names in the call stack to inform you which function originated the current call.
A simple event loop explanation
Let's pick an example:
const bar = () => console.log('bar')

const baz = () => console.log('baz')

const foo = () => {
  console.log('foo')
  bar()
  baz()
}

foo()

This code prints:

foo
bar
baz

as expected.
When this code runs, first foo() is called. Inside foo() we first call
bar() , then we call baz() .
On every iteration, the event loop checks whether there's something in the call stack, and executes it, until the call stack is empty.
Let's see how to defer a function until the stack is clear.
The use case of setTimeout(() => {}, 0) is to call a function, but execute it
once every other function in the code has executed.
const bar = () => console.log('bar')

const baz = () => console.log('baz')

const foo = () => {
  console.log('foo')
  setTimeout(bar, 0)
  baz()
}

foo()

This code prints:

foo
baz
bar
When this code runs, first foo() is called. Inside foo() we first call setTimeout,
passing bar as an argument, and we instruct it to run immediately as fast
as it can, passing 0 as the timer. Then we call baz().
Here is the execution order for all the functions in our program:
Why is this happening?
When setTimeout() is called, the Browser or Node.js starts the timer. Once
the timer expires, in this case immediately as we put 0 as the timeout, the
callback function is put in the Message Queue.
The loop gives priority to the call stack, and it first processes
everything it finds in the call stack, and once there's nothing in there, it
goes to pick up things in the message queue.
We don't have to wait for functions like setTimeout , fetch or other things to
do their own work, because they are provided by the browser, and they live
on their own threads. For example, if you set the setTimeout timeout to 2
seconds, you don't have to wait 2 seconds - the wait happens elsewhere.
Promises that resolve before the current function ends will be executed right
after the current function.
Example:
const bar = () => console.log('bar')

const baz = () => console.log('baz')

const foo = () => {
  console.log('foo')
  setTimeout(bar, 0)
  new Promise((resolve, reject) =>
    resolve('should be right after baz, before bar')
  ).then((resolve) => console.log(resolve))
  baz()
}

foo()

This prints

foo
baz
should be right after baz, before bar
bar
Finally, here's what the call stack looks like for the example above:
Understanding process.nextTick()
As you try to understand the Node.js event loop, one important part of it is
process.nextTick() .
Every time the event loop takes a full trip, we call it a tick.
process.nextTick(() => {
// do something
})
When the current operation ends, the JS engine runs all the functions passed to nextTick calls during that operation.
It's the way we can tell the JS engine to process a function asynchronously
(after the current function), but as soon as possible, not queue it.
Calling setTimeout(() => {}, 0) will execute the function at the end of next
tick, much later than when using nextTick() which prioritizes the call and
executes it just before the beginning of the next tick.
Use nextTick() when you want to make sure that the code is already executed in the next event loop iteration.
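A small sketch of the difference (run with plain node ; the order array is just for illustration):

```javascript
const order = []

// nextTick callbacks run right after the current operation,
// before any timer callbacks
process.nextTick(() => order.push('nextTick'))
setTimeout(() => order.push('timeout'), 0)
order.push('sync')

// Check the result once both have had a chance to run:
setTimeout(() => console.log(order), 10)
// prints [ 'sync', 'nextTick', 'timeout' ]
```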
Understanding setImmediate()
When you want to execute some piece of code asynchronously, but as soon
as possible, one option is to use the setImmediate() function provided by
Node.js:
setImmediate(() => {
// run something
})
How does it compare to setTimeout(() => {}, 0) ? The exact execution order will depend on various factors, but both callbacks will be run in the next iteration of the event loop.
const baz = () => console.log('baz')
const foo = () => console.log('foo')
const zoo = () => console.log('zoo')
const start = () => {
console.log('start')
setImmediate(baz)
new Promise((resolve, reject) => {
resolve('bar')
}).then((resolve) => {
console.log(resolve)
process.nextTick(zoo)
})
process.nextTick(foo)
}
start()
This code will first call start() , then call foo() from the process.nextTick queue. After that, it will handle the promises microtask queue, which prints bar and adds zoo() to the process.nextTick queue at the same time. Then it will call zoo() , which has just been added. In the end, baz() in the macrotask queue is called.
setTimeout(() => {
// runs after 2 seconds
}, 2000)
setTimeout(() => {
// runs after 50 milliseconds
}, 50)
This syntax defines a new function. You can call whatever other function you
want in there, or you can pass an existing function name, and a set of
parameters:
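For example ( myFunction and the parameter names are illustrative):

```javascript
const myFunction = (firstParam, secondParam) => {
  console.log(firstParam, secondParam)
}

// runs after 2 seconds, passing the two extra arguments along
setTimeout(myFunction, 2000, 'first', 'second')
```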
setTimeout returns the timer id. This is generally not used, but you can store this id, and clear it if you want to delete this scheduled function execution:

const id = setTimeout(() => {
  // should run after 2 seconds
}, 2000)

// I changed my mind
clearTimeout(id)
Zero delay
If you specify the timeout delay to 0 , the callback function will be executed
as soon as possible, but after the current function execution:
setTimeout(() => {
  console.log('after')
}, 0)

console.log('before')

This prints:

before
after
This is especially useful to avoid blocking the CPU on intensive tasks and let
other functions be executed while performing a heavy calculation, by queuing
functions in the scheduler.
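As a sketch, a heavy loop can be split into chunks, letting the event loop breathe between them ( processChunk , the chunk size and the data are illustrative):

```javascript
const items = Array.from({ length: 10000 }, (_, i) => i)
let sum = 0

function processChunk(start) {
  // Process 1000 items, then yield back to the event loop
  const end = Math.min(start + 1000, items.length)
  for (let i = start; i < end; i++) sum += items[i]
  if (end < items.length) {
    setTimeout(() => processChunk(end), 0) // other callbacks can run here
  } else {
    console.log('done', sum)
  }
}

processChunk(0)
```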
setInterval()
setInterval is a function similar to setTimeout , with a difference: instead
of running the callback function once, it will run it forever, at the specific time
interval you specify (in milliseconds):
setInterval(() => {
// runs every 2 seconds
}, 2000)
The function above runs every 2 seconds unless you tell it to stop, using
clearInterval , passing it the interval id that setInterval returned:
const id = setInterval(() => {
// runs every 2 seconds
}, 2000)
clearInterval(id)
Recursive setTimeout
setInterval starts a function every n milliseconds, without any
consideration about when a function finished its execution.
If a function always takes the same amount of time, it's all fine. But the function might take a different execution time depending on network conditions, for example, and a long execution could overlap with the next scheduled one.
To avoid this, you can schedule a recursive setTimeout to be called when the callback function finishes:

const myFunction = () => {
  // do something

  setTimeout(myFunction, 1000)
}

setTimeout(myFunction, 1000)
On the backend side, Node.js offers us the option to build a similar system
using the events module.
This module, in particular, offers the EventEmitter class, which we'll use to
handle our events.
This object exposes, among many others, the on and emit methods.
const EventEmitter = require('events')
const eventEmitter = new EventEmitter()

eventEmitter.on('start', () => {
  console.log('started')
})
When we run
eventEmitter.emit('start')
the event handler function is triggered, and we get the console log.
You can pass arguments to the event handler by passing them as additional
arguments to emit() :
eventEmitter.emit('start', 23)
Multiple arguments:
eventEmitter.on('start', (start, end) => {
console.log(`started from ${start} to ${end}`)
})
eventEmitter.emit('start', 1, 100)
The EventEmitter object also exposes several other methods to interact with events, like once() (add a one-time listener), removeListener() / off() (remove an event listener) and removeAllListeners() (remove all listeners for an event).
You can read all their details on the events module page at
https://nodejs.org/api/events.html
const fs = require('fs')
Notice the r we used as the second parameter to the fs.open() call. That flag means we open the file for reading. Other flags you'll commonly use are:
r+ open the file for reading and writing, if file doesn't exist it won't be
created.
w+ open the file for reading and writing, positioning the stream at the
beginning of the file. The file is created if not existing.
a open the file for writing, positioning the stream at the end of the file.
The file is created if not existing.
a+ open the file for reading and writing, positioning the stream at the
end of the file. The file is created if not existing.
You can also open the file by using the fs.openSync method, which returns
the file descriptor, instead of providing it in a callback:
const fs = require('fs')
try {
const fd = fs.openSync('/Users/joe/test.txt', 'r')
} catch (err) {
console.error(err)
}
Once you get the file descriptor, in whatever way you choose, you can
perform all the operations that require it, like calling fs.close() and many
other operations that interact with the filesystem.
You can also open the file by using the promise-based fsPromises.open method.
The fs/promises module is available starting only from Node.js v14. Before
v14, after v10, you can use require('fs').promises instead. Before v10,
after v8, you can use util.promisify to convert fs methods into promise-
based methods.
const fs = require('fs/promises')
// Or const fs = require('fs').promises before v14.
async function example() {
let filehandle
try {
filehandle = await fs.open('/Users/joe/test.txt', 'r')
console.log(filehandle.fd)
console.log(await filehandle.readFile({ encoding: 'utf8' }))
} finally {
await filehandle.close()
}
}
example()
const fs = require('fs')
const util = require('util')
Every file comes with a set of details that we can inspect using the fs.stat() method. You call it passing a file path, and once Node.js gets the file details it will call the callback function you pass, with 2 parameters: an error message, and the file stats:
const fs = require('fs')

fs.stat('/Users/joe/test.txt', (err, stats) => {
  if (err) {
    console.error(err)
    return
  }
  // we have access to the file stats in `stats`
})
Node.js also provides a sync method, which blocks the thread until the file
stats are ready:
const fs = require('fs')
try {
const stats = fs.statSync('/Users/joe/test.txt')
} catch (err) {
console.error(err)
}
The file information is included in the stats variable. What kind of information
can we extract using the stats?
A lot, including: if the item is a file or a directory ( stats.isFile() and stats.isDirectory() ), if it is a symbolic link ( stats.isSymbolicLink() ), and the file size in bytes ( stats.size ). There are other advanced methods, but the bulk of what you'll use in your day-to-day programming is this.
const fs = require('fs')

fs.stat('/Users/joe/test.txt', (err, stats) => {
  if (err) {
    console.error(err)
    return
  }

  stats.isFile() // true
  stats.isDirectory() // false
  stats.isSymbolicLink() // false
  stats.size // 1024000 //= 1MB
})
fsPromises.stat() is the promise-based version:

const fs = require('fs/promises')

async function example() {
  try {
    const stats = await fs.stat('/Users/joe/test.txt')
    stats.isFile() // true
  } catch (err) {
    console.error(err)
  }
}
example()

Every file in the system has a path. On Linux and macOS, a path might look like:

/users/joe/file.txt
while Windows computers are different, and have a structure such as:
C:\users\joe\file.txt
You need to pay attention when using paths in your applications, as this
difference must be taken into account.
Example:

const path = require('path')

const notes = '/users/joe/notes.txt'

path.dirname(notes) // /users/joe
path.basename(notes) // notes.txt
path.extname(notes) // .txt

You can get the file name without the extension by specifying a second argument to basename :

path.basename(notes, path.extname(notes)) // notes
const name = 'joe'
path.join('/', 'users', name, 'notes.txt') // '/users/joe/notes.txt'
You can get the absolute path calculation of a relative path using path.resolve() :

path.resolve('joe.txt') // '/Users/joe/joe.txt' if run from my home folder

In this case Node.js will simply append /joe.txt to the current working directory. If you specify a second parameter folder, resolve will use the first as a base for the second:

path.resolve('tmp', 'joe.txt') // '/Users/joe/tmp/joe.txt' if run from my home folder

If the first parameter starts with a slash, that means it's an absolute path:

path.resolve('/etc', 'joe.txt') // '/etc/joe.txt'
path.normalize() is another useful function, that will try and calculate the
actual path, when it contains relative specifiers like . or .. , or double
slashes:
path.normalize('/users/joe/..//test.txt') // '/users/test.txt'
Neither resolve nor normalize will check if the path exists. They just
calculate a path based on the information they got.
The simplest way to read a file in Node.js is to use the fs.readFile() method, passing it the file path, encoding and a callback function that will be called with the file data (and the error):
const fs = require('fs')

fs.readFile('/Users/joe/test.txt', 'utf8', (err, data) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(data)
})

Alternatively, you can use the synchronous version fs.readFileSync() :
const fs = require('fs')
try {
const data = fs.readFileSync('/Users/joe/test.txt', 'utf8')
console.log(data)
} catch (err) {
console.error(err)
}
The fsPromises.readFile() method is the promise-based version:

const fs = require('fs/promises')

async function example() {
  try {
    const data = await fs.readFile('/Users/joe/test.txt', { encoding: 'utf8' })
    console.log(data)
  } catch (err) {
    console.error(err)
  }
}
example()

All three of fs.readFile() , fs.readFileSync() and fsPromises.readFile() read the full content of the file in memory before returning the data.
This means that big files are going to have a major impact on your memory
consumption and speed of execution of the program.
In this case, a better option is to read the file content using streams.
The easiest way to write to files in Node.js is to use the fs.writeFile() API. Example:

const fs = require('fs')

const content = 'Some content!'

fs.writeFile('/Users/joe/test.txt', content, (err) => {
  if (err) {
    console.error(err)
    return
  }
  // file written successfully
})
Alternatively, you can use the synchronous version fs.writeFileSync() :

const fs = require('fs')

const content = 'Some content!'

try {
  fs.writeFileSync('/Users/joe/test.txt', content)
  // file written successfully
} catch (err) {
  console.error(err)
}
fsPromises.writeFile() is the promise-based version:

const fs = require('fs/promises')

async function example() {
  try {
    await fs.writeFile('/Users/joe/test.txt', 'Some content!')
  } catch (err) {
    console.error(err)
  }
}
example()
By default, this API will replace the contents of the file if it does already
exist.
Append to a file
A handy method to append content to the end of a file is fs.appendFile()
const fs = require('fs')

const content = 'Some content!'

fs.appendFile('file.log', content, (err) => {
  if (err) {
    console.error(err)
    return
  }
  // done!
})

fsPromises.appendFile() is the promise-based version:

const fs = require('fs/promises')

async function example() {
  try {
    await fs.appendFile('/Users/joe/test.txt', 'Some content!')
  } catch (err) {
    console.error(err)
  }
}
example()
Using streams
All those methods write the full content to the file before returning the control
back to your program (in the async version, this means executing the
callback)
In this case, a better option is to write the file content using streams.
Use fs.access() (and its promise-based fsPromises.access() counterpart)
to check if the folder exists and Node.js can access it with its permissions.
Use fs.mkdir() or fs.mkdirSync() or fsPromises.mkdir() to create a new folder:

const fs = require('fs')

const folderName = '/Users/joe/test'

try {
  if (!fs.existsSync(folderName)) {
    fs.mkdirSync(folderName)
  }
} catch (err) {
  console.error(err)
}
This piece of code reads the content of a folder, both files and subfolders,
and returns their relative path:
const fs = require('fs')
const path = require('path')

const folderPath = '/Users/joe'

fs.readdirSync(folderPath)

You can get the full path:

fs.readdirSync(folderPath).map((fileName) => {
  return path.join(folderPath, fileName)
})
You can also filter the results to only return the files, and exclude the folders:
const isFile = (fileName) => {
  return fs.lstatSync(fileName).isFile()
}

fs.readdirSync(folderPath)
  .map((fileName) => {
    return path.join(folderPath, fileName)
  })
  .filter(isFile)
Rename a folder
Use fs.rename() or fs.renameSync() or fsPromises.rename() to rename a folder. The first parameter is the current path, the second the new path:
const fs = require('fs')

fs.rename('/Users/joe', '/Users/roger', (err) => {
  if (err) {
    console.error(err)
    return
  }
  // done
})

fs.renameSync() is the synchronous version:
const fs = require('fs')
try {
fs.renameSync('/Users/joe', '/Users/roger')
} catch (err) {
console.error(err)
}
fsPromises.rename() is the promise-based version:

const fs = require('fs/promises')

async function example() {
  try {
    await fs.rename('/Users/joe', '/Users/roger')
  } catch (err) {
    console.error(err)
  }
}
example()
Remove a folder
Use fs.rmdir() or fs.rmdirSync() or fsPromises.rmdir() to remove a
folder.
Removing a folder that has content can be more complicated than you need.
You can pass the option { recursive: true } to recursively remove the
contents.
const fs = require('fs')

const dir = '/Users/joe/test'

fs.rmdir(dir, (err) => {
  if (err) throw err
  console.log(`${dir} is deleted!`)
})
With { recursive: true } :

const fs = require('fs')

const dir = '/Users/joe/test'

fs.rmdir(dir, { recursive: true }, (err) => {
  if (err) throw err
  console.log(`${dir} is deleted!`)
})
Or you can install and make use of the fs-extra module, which is very
popular and well maintained. It's a drop-in replacement of the fs module,
which provides more features on top of it.
Install it using npm install fs-extra and use it like this:
const fs = require('fs-extra')

const folder = '/Users/joe/test'

fs.remove(folder)
  .then(() => {
    // done
  })
  .catch((err) => {
    console.error(err)
  })
or with async/await:

const fs = require('fs-extra')

async function removeFolder(folder) {
  try {
    await fs.remove(folder)
    // done
  } catch (err) {
    console.error(err)
  }
}

removeFolder('/Users/joe/test')
There is no need to install it. Being part of the Node.js core, it can be used by
simply requiring it:
const fs = require('fs')
Once you do so, you have access to all its methods, which include:
fs.access() : check if the file exists and Node.js can access it with its
permissions
fs.appendFile() : append data to a file. If the file does not exist, it's
created
fs.chmod() : change the permissions of a file specified by the filename
passed. Related: fs.lchmod() , fs.fchmod()
fs.createReadStream() : create a readable file stream
fs.createWriteStream() : create a writable file stream
fs.link() : create a new hard link to a file
fs.mkdir() : create a new folder
fs.mkdtemp() : create a temporary directory
fs.open() : opens the file and returns a file descriptor to allow file
manipulation
fs.readdir() : read the contents of a directory
fs.readFile() : read the content of a file. Related: fs.read()
One peculiar thing about the fs module is that all the methods are
asynchronous by default, but they can also work synchronously by
appending Sync .
For example:
fs.rename()
fs.renameSync()
fs.write()
fs.writeSync()
For example let's examine the fs.rename() method. The asynchronous API
is used with a callback:
const fs = require('fs')

fs.rename('before.json', 'after.json', (err) => {
  if (err) {
    return console.error(err)
  }
  // done
})
The synchronous API can be used like this, with a try/catch block to handle errors:
const fs = require('fs')

try {
  fs.renameSync('before.json', 'after.json')
  // done
} catch (err) {
  console.error(err)
}
The key difference is that in the second example the execution of your script blocks until the file operation has completed.
// Example: Read a file and change its content and read
// it again using callback-based API.
const fs = require('fs')
The callback-based API can lead to callback hell when there are too many nested callbacks. The promise-based API avoids it:
// Example: Read a file and change its content and read
// it again using promise-based API.
const fs = require('fs/promises')
There is no need to install it. Being part of the Node.js core, it can be used by simply requiring it:

const path = require('path')

This module provides path.sep which provides the path segment separator
( \ on Windows, and / on Linux / macOS), and path.delimiter which
provides the path delimiter ( ; on Windows, and : on Linux / macOS).
path.basename()
Return the last portion of a path. A second parameter can filter out the file
extension:
require('path').basename('/test/something') // something
require('path').basename('/test/something.txt') // something.txt
require('path').basename('/test/something.txt', '.txt') // something
path.dirname()
Return the directory part of a path:

require('path').dirname('/test/something') // /test
require('path').dirname('/test/something/file.txt') // /test/something
path.extname()
Return the extension part of a path:

require('path').extname('/test/something') // ''
require('path').extname('/test/something/file.txt') // '.txt'
path.format()
Composes a path string from an object, the inverse of path.parse():

// POSIX
require('path').format({ dir: '/Users/joe', base: 'test.txt' }) // '/Users/joe/test.txt'

// WINDOWS
require('path').format({ dir: 'C:\\Users\\joe', base: 'test.txt' }) // 'C:\\Users\\joe\\test.txt'
path.isAbsolute()
Returns true if the path is an absolute path:

require('path').isAbsolute('/test/something') // true
require('path').isAbsolute('./test/something') // false
path.join()
Joins two or more parts of a path:

require('path').join('/users', 'joe', 'file.txt') // '/users/joe/file.txt'
path.normalize()
Tries to calculate the actual path when it contains relative specifiers like . or .. , or double slashes:

require('path').normalize('/users/joe/..//test.txt') // '/users/test.txt'
path.parse()
Parses a path into an object with the segments that compose it:

root : the root
dir : the folder path starting from the root
base : the file name + extension
name : the file name
ext : the file extension

Example:
require('path').parse('/users/test.txt')
results in
{
root: '/',
dir: '/users',
base: 'test.txt',
ext: '.txt',
name: 'test'
}
path.relative()
Accepts 2 paths as arguments. Returns the relative path from the first path to the second, based on the current working directory.

Example:

require('path').relative('/Users/joe', '/Users/joe/test.txt') // 'test.txt'
require('path').relative('/Users/joe', '/Users/test.txt') // '../test.txt'
path.resolve()
You can get the absolute path calculation of a relative path using path.resolve():

require('path').resolve('joe.txt') // '/Users/joe/joe.txt' if run from the home folder
By specifying a second parameter, resolve will use the first as a base for
the second:
require('path').resolve('tmp', 'joe.txt') // '/Users/joe/tmp/joe.txt' if run from the home folder

If the first parameter starts with a slash, that means it's an absolute path:

require('path').resolve('/etc', 'joe.txt') // '/etc/joe.txt'
const os = require('os')
There are a few useful properties that tell us some key things related to
handling files:
os.EOL gives the line delimiter sequence. It's \n on Linux and macOS, and
\r\n on Windows.
os.arch()
Return the string that identifies the underlying architecture, like arm , x64 ,
arm64 .
os.cpus()
Return information on the CPUs available on your system.
Example:

os.cpus()
/*
[
  {
    model: 'Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz',
    speed: 2400,
    times: {
      user: 281685380,
      nice: 0,
      sys: 187986530,
      idle: 685833750,
      irq: 0,
    },
  },
  {
    model: 'Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz',
    speed: 2400,
    times: {
      user: 282348700,
      nice: 0,
      sys: 161800480,
      idle: 703509470,
      irq: 0,
    },
  },
]
*/
os.freemem()
Return the number of bytes that represent the free memory in the system.
os.homedir()
Return the path to the home directory of the current user.
Example:
'/Users/joe'
os.hostname()
Return the host name.
os.loadavg()
Return the load average calculation made by the operating system, as an array with the 1, 5, and 15 minute averages. It only returns a meaningful value on Linux and macOS.
os.networkInterfaces()
Returns the details of the network interfaces available on your system.
Example:
{ lo0:
[ { address: '127.0.0.1',
netmask: '255.0.0.0',
family: 'IPv4',
mac: 'fe:82:00:00:00:00',
internal: true },
{ address: '::1',
netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
family: 'IPv6',
mac: 'fe:82:00:00:00:00',
scopeid: 0,
internal: true },
{ address: 'fe80::1',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'fe:82:00:00:00:00',
scopeid: 1,
internal: true } ],
en1:
[ { address: 'fe82::9b:8282:d7e6:496e',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: '06:00:00:02:0e:00',
scopeid: 5,
internal: false },
{ address: '192.168.1.38',
netmask: '255.255.255.0',
family: 'IPv4',
mac: '06:00:00:02:0e:00',
internal: false } ],
utun0:
[ { address: 'fe80::2513:72bc:f405:61d0',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'fe:80:00:20:00:00',
scopeid: 8,
internal: false } ] }
os.platform()
Return the platform that Node.js was compiled for:
darwin
freebsd
linux
openbsd
win32
...more
os.release()
Returns a string that identifies the operating system release number.
os.tmpdir()
Returns the path to the assigned temp folder.
os.totalmem()
Returns the number of bytes that represent the total memory available in the
system.
os.type()
Identifies the operating system:
Linux
Darwin on macOS
Windows_NT on Windows
os.uptime()
Returns the number of seconds the computer has been running since it was
last rebooted.
os.userInfo()
Returns an object that contains the current username , uid , gid , shell , and homedir .
emitter.addListener()
Alias for emitter.on() .
emitter.emit()
Emits an event. It synchronously calls every event listener in the order they
were registered.
emitter.eventNames()
Return an array of strings that represent the events registered on the current
EventEmitter object:
door.eventNames()
emitter.getMaxListeners()
Get the maximum number of listeners one can add to an EventEmitter object, which defaults to 10 and can be changed with setMaxListeners():

door.getMaxListeners()
emitter.listenerCount()
Get the count of listeners of the event passed as parameter:
door.listenerCount('open')
emitter.listeners()
Gets an array of listeners of the event passed as parameter:
door.listeners('open')
emitter.off()
Alias for emitter.removeListener(), added in Node.js 10.
emitter.on()
Adds a callback function that's called when an event is emitted.
Usage:
door.on('open', () => {
  console.log('Door was opened')
})
emitter.once()
Adds a callback function that's called when an event is emitted for the first
time after registering this. This callback is only going to be called once, never
again.
ee.once('my-event', () => {
  // call callback function once
})
emitter.prependListener()
When you add a listener using on or addListener , it's added last in the
queue of listeners, and called last. Using prependListener it's added, and
called, before other listeners.
emitter.prependOnceListener()
When you add a listener using once , it's added last in the queue of
listeners, and called last. Using prependOnceListener it's added, and called,
before other listeners.
emitter.removeAllListeners()
Removes all listeners of an EventEmitter object listening to a specific event:
door.removeAllListeners('open')
emitter.removeListener()
Remove a specific listener. You can do this by saving the callback function to a variable, when added, so you can reference it later:

const doSomething = () => {}
door.on('open', doSomething)
door.removeListener('open', doSomething)
emitter.setMaxListeners()
Sets the maximum amount of listeners one can add to an EventEmitter
door.setMaxListeners(50)
The http core module provides some properties and methods, and some classes.
Properties
http.METHODS
This property lists all the HTTP methods supported:

> require('http').METHODS
[ 'ACL',
'BIND',
'CHECKOUT',
'CONNECT',
'COPY',
'DELETE',
'GET',
'HEAD',
'LINK',
'LOCK',
'M-SEARCH',
'MERGE',
'MKACTIVITY',
'MKCALENDAR',
'MKCOL',
'MOVE',
'NOTIFY',
'OPTIONS',
'PATCH',
'POST',
'PROPFIND',
'PROPPATCH',
'PURGE',
'PUT',
'REBIND',
'REPORT',
'SEARCH',
'SUBSCRIBE',
'TRACE',
'UNBIND',
'UNLINK',
'UNLOCK',
'UNSUBSCRIBE' ]
http.STATUS_CODES
This property lists all the HTTP status codes and their description:
> require('http').STATUS_CODES
{ '100': 'Continue',
'101': 'Switching Protocols',
'102': 'Processing',
'200': 'OK',
'201': 'Created',
'202': 'Accepted',
'203': 'Non-Authoritative Information',
'204': 'No Content',
'205': 'Reset Content',
'206': 'Partial Content',
'207': 'Multi-Status',
'208': 'Already Reported',
'226': 'IM Used',
'300': 'Multiple Choices',
'301': 'Moved Permanently',
'302': 'Found',
'303': 'See Other',
'304': 'Not Modified',
'305': 'Use Proxy',
'307': 'Temporary Redirect',
'308': 'Permanent Redirect',
'400': 'Bad Request',
'401': 'Unauthorized',
'402': 'Payment Required',
'403': 'Forbidden',
'404': 'Not Found',
'405': 'Method Not Allowed',
'406': 'Not Acceptable',
'407': 'Proxy Authentication Required',
'408': 'Request Timeout',
'409': 'Conflict',
'410': 'Gone',
'411': 'Length Required',
'412': 'Precondition Failed',
'413': 'Payload Too Large',
'414': 'URI Too Long',
'415': 'Unsupported Media Type',
'416': 'Range Not Satisfiable',
'417': 'Expectation Failed',
'418': 'I\'m a teapot',
'421': 'Misdirected Request',
'422': 'Unprocessable Entity',
'423': 'Locked',
'424': 'Failed Dependency',
'425': 'Unordered Collection',
'426': 'Upgrade Required',
'428': 'Precondition Required',
'429': 'Too Many Requests',
'431': 'Request Header Fields Too Large',
'451': 'Unavailable For Legal Reasons',
'500': 'Internal Server Error',
'501': 'Not Implemented',
'502': 'Bad Gateway',
'503': 'Service Unavailable',
'504': 'Gateway Timeout',
'505': 'HTTP Version Not Supported',
'506': 'Variant Also Negotiates',
'507': 'Insufficient Storage',
'508': 'Loop Detected',
'509': 'Bandwidth Limit Exceeded',
'510': 'Not Extended',
'511': 'Network Authentication Required' }
http.globalAgent
Points to the global instance of the Agent object, which is an instance of the
http.Agent class.
It's used to manage connections persistence and reuse for HTTP clients, and
it's a key component of Node.js HTTP networking.
Methods
http.createServer()
Returns a new instance of the http.Server class.

Usage:

const server = http.createServer((req, res) => {
  // handle every single request with this callback
})
http.request()
Makes an HTTP request to a server, creating an instance of the http.ClientRequest class.

http.get()
Similar to http.request(), but automatically sets the HTTP method to GET and calls req.end() automatically.
Classes
The HTTP module provides 5 classes:
http.Agent
http.ClientRequest
http.Server
http.ServerResponse
http.IncomingMessage
http.Agent
This object makes sure that every request made to a server is queued and a
single socket is reused.
http.ClientRequest
An http.ClientRequest object is created when http.request() or http.get() is called. When a response is received, the response event is fired with the response, an http.IncomingMessage instance.

The returned data of a response can be read in 2 ways: you can call the response.read() method, or you can add an event listener for the data event, catching the data as it streams in.
http.Server
This class is commonly instantiated and returned when creating a new server
using http.createServer() .
Once you have a server object, you have access to its methods: close() stops the server from accepting new connections, and listen() starts the HTTP server and listens for connections.
http.ServerResponse
The method you'll always call in the handler is end(), which closes the response; once called, the message is complete and the server can send it to the client. It must be called on each response.
getHeaderNames() get the list of the names of the HTTP headers already
set
getHeaders() get a copy of the HTTP headers already set
setHeader('headername', value) sets an HTTP header value
getHeader('headername') gets an HTTP header already set
removeHeader('headername') removes an HTTP header already set
hasHeader('headername') returns true if the response has that header set
headersSent() returns true if the headers have already been sent to the
client
After processing the headers you can send them to the client by calling
response.writeHead() , which accepts the statusCode as the first parameter,
the optional status message, and the headers object.
To send data to the client in the response body, you use write() . It will
send buffered data to the HTTP response stream.
If the headers have not been sent yet using response.writeHead(), it will send the headers first, with the status code and message set on the response, which you can edit by setting the statusCode and statusMessage properties:
response.statusCode = 500
response.statusMessage = 'Internal Server Error'
http.IncomingMessage
An http.IncomingMessage object is created by http.Server when listening to the request event, and by http.ClientRequest when listening to the response event. It can be used to access the response status, headers and data.
Node.js Streams
What are streams
Streams are one of the fundamental concepts that power Node.js
applications.
Streams are not a concept unique to Node.js. They were introduced in the
Unix operating system decades ago, and programs can interact with each
other passing streams through the pipe operator ( | ).
For example, in the traditional way, when you tell the program to read a file,
the file is read into memory, from start to finish, and then you process it.
Using streams you read it piece by piece, processing its content without
keeping it all in memory.
The Node.js stream module provides the foundation upon which all
streaming APIs are built. All streams are instances of EventEmitter.
Why streams
Streams basically provide two major advantages over other data handling methods:

Memory efficiency: you don't need to load large amounts of data in memory before you are able to process it
Time efficiency: it takes far less time to start processing data, since you can start as soon as you have it rather than waiting for the whole payload to be available
An example of a stream
A typical example is reading files from a disk.
Using the Node.js fs module, you can read a file, and serve it over HTTP
when a new connection is established to your HTTP server:
readFile() reads the full contents of the file, and invokes the callback
function when it's done.
res.end(data) in the callback will return the file contents to the HTTP client.
If the file is big, the operation will take quite a bit of time. Here is the same
thing written using streams:
Instead of waiting until the file is fully read, we start streaming it to the HTTP
client as soon as we have a chunk of data ready to be sent.
pipe()
The above example uses the line stream.pipe(res) : the pipe() method is
called on the file stream.
What does this code do? It takes the source, and pipes it into a destination.
You call it on the source stream, so in this case, the file stream is piped to the
HTTP response.
The return value of the pipe() method is the destination stream, which is a
very convenient thing that lets us chain multiple pipe() calls, like this:
src.pipe(dest1).pipe(dest2)

This construct is the same as doing:

src.pipe(dest1)
dest1.pipe(dest2)
There are four classes of streams:
Readable : a stream you can pipe from, but not pipe into (you can
receive data, but not send data to it). When you push data into a
readable stream, it is buffered, until a consumer starts to read the data.
Writable : a stream you can pipe into, but not pipe from (you can send
data, but not receive from it)
Duplex : a stream you can both pipe into and pipe from, basically a
combination of a Readable and Writable stream
Transform : a Transform stream is similar to a Duplex, but the output is
a transform of its input
To create a readable stream, instantiate Stream.Readable and implement its _read() method:

const Stream = require('stream')
const readableStream = new Stream.Readable()

readableStream._read = () => {}

Now the stream can receive data with push():

readableStream.push('hi!')
readableStream.push('ho!')
To create a writable stream, instantiate Stream.Writable and implement its _write() method, then pipe a readable stream in:

const Stream = require('stream')
const writableStream = new Stream.Writable()

writableStream._write = (chunk, encoding, next) => {
  console.log(chunk.toString())
  next()
}

process.stdin.pipe(writableStream)
To get data from a readable stream, pipe it into a writable stream:

const Stream = require('stream')

const readableStream = new Stream.Readable()
readableStream._read = () => {}

const writableStream = new Stream.Writable()
writableStream._write = (chunk, encoding, next) => {
  console.log(chunk.toString())
  next()
}

readableStream.pipe(writableStream)

readableStream.push('hi!')
readableStream.push('ho!')
You can also consume a readable stream directly, using the readable event:
readableStream.on('readable', () => {
  console.log(readableStream.read())
})
To send data to a writable stream, use its write() method:

writableStream.write('hey!\n')
To signal a writable stream that you have finished writing, use the end() method:

const Stream = require('stream')

const readableStream = new Stream.Readable()
readableStream._read = () => {}

const writableStream = new Stream.Writable()
writableStream._write = (chunk, encoding, next) => {
  console.log(chunk.toString())
  next()
}

readableStream.pipe(writableStream)

readableStream.push('hi!')
readableStream.push('ho!')

readableStream.on('close', () => writableStream.end())
writableStream.on('close', () => console.log('ended'))

readableStream.destroy()
In the above example, end() is called within a listener to the close event
on the readable stream to ensure it is not called before all write events have
passed through the pipe, as doing so would cause an error event to be
emitted. Calling destroy() on the readable stream causes the close event
to be emitted. The listener to the close event on the writable stream
demonstrates the completion of the process as it is emitted after the call to
end() .
To create a transform stream, implement the transform() method, which receives incoming data and pushes out the transformed version (here it uppercases the input, as an illustration):

const { Transform } = require('stream')

const transformStream = new Transform({
  transform(chunk, encoding, callback) {
    // transform the chunk and push it to the readable side
    this.push(chunk.toString().toUpperCase())
    callback()
  },
})

process.stdin.pipe(transformStream).pipe(process.stdout)
You can enable production mode by setting the environment variable:

export NODE_ENV=production

in the shell, but it's better to put it in your shell configuration file (e.g. .bash_profile with the Bash shell) because otherwise the setting does not persist after a system restart.
You can also apply the environment variable by prepending it to your application start command:

NODE_ENV=production node app.js

For example, Pug, the templating library often used with Express, compiles in debug mode if NODE_ENV is not set to production. Express views are compiled on every request in development mode, while in production they are cached. There are many more examples.
For example, in an Express app, you can use this to set different error
handlers per environment:
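A self-contained sketch of the idea: branch on NODE_ENV and register a verbose error handler in development, a terse one in production. The app object here is a tiny stand-in for an Express app, an assumption made so the sketch runs without installing anything; with real Express the app.use() calls look the same.

```javascript
// Minimal stand-in for an Express app (assumption for this sketch):
// it only records the middleware that gets registered.
const app = {
  handlers: [],
  use(fn) {
    this.handlers.push(fn)
  },
}

// Verbose error handler for development: includes the stack trace
const devErrorHandler = (err, req, res, next) => {
  res.statusCode = 500
  res.end(`${err.message}\n${err.stack}`)
}

// Terse error handler for production: never leaks internals
const prodErrorHandler = (err, req, res, next) => {
  res.statusCode = 500
  res.end('Internal Server Error')
}

if (process.env.NODE_ENV === 'development') {
  app.use(devErrorHandler)
} else {
  app.use(prodErrorHandler)
}

console.log(`registered ${app.handlers.length} error handler(s)`)
```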
Error handling in Node.js
Errors in Node.js are handled through exceptions.
Creating exceptions
An exception is created using the throw keyword:
throw value
As soon as JavaScript executes this line, the normal program flow is halted
and control is handed to the nearest exception handler.
Error objects
An error object is an object that is either an instance of the built-in Error object, or extends the Error class:

throw new Error('Ran out of coffee')

or

class NotEnoughCoffeeError extends Error {
  // ...
}
throw new NotEnoughCoffeeError()
Handling exceptions
An exception handler is a try / catch statement.
Any exception raised in the lines of code included in the try block is
handled in the corresponding catch block:
try {
  // lines of code
} catch (e) {}
You can add multiple handlers, which can catch different kinds of errors.
If an exception goes uncaught during the execution of your program, the program will crash. To solve this, you listen for the uncaughtException event on the process object:
You don't need to import the process core module for this, as it's
automatically injected.
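A minimal sketch of such a listener:

```javascript
// Last-resort handler: log the error, then exit.
// Exiting is recommended because the process may be in an
// inconsistent state after an uncaught exception.
process.on('uncaughtException', (err) => {
  console.error('There was an uncaught error', err)
  process.exit(1)
})
```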
doSomething1()
  .then(doSomething2)
  .then(doSomething3)
  .catch((err) => console.error(err))
How do you know where the error occurred? You don't really know, but you
can handle errors in each of the functions you call ( doSomethingX ), and
inside the error handler throw a new error, that's going to call the outside
catch handler:
doSomething1()
  .then(() => {
    return doSomething2().catch((err) => {
      // handle error
      throw err // break the chain!
    })
  })
  .then(() => {
    return doSomething3().catch((err) => {
      // handle error
      throw err // break the chain!
    })
  })
  .catch((err) => console.error(err))
async function someFunction() {
  try {
    await someOtherFunction()
  } catch (err) {
    console.error(err.message)
  }
}
server.listen(port, () => {
  console.log(`Server running at port ${port}`)
})
The server is set to listen on the specified port, 3000 . When the server is
ready, the listen callback function is called.
The callback function we pass is the one that's going to be executed upon
every request that comes in. Whenever a new request is received, the
request event is called, providing two objects: a request (an
http.IncomingMessage object) and a response (an http.ServerResponse
object).
request provides the request details. Through it, we access the request
headers and request data.
response is used to populate the data we're going to return to the client.
res.statusCode = 200
res.setHeader('Content-Type', 'text/html')
res.end('<h1>Hello, World!</h1>')
The simplest way to perform an HTTP request using Node.js is to use the
Axios library:
const axios = require('axios')

axios
  .get('https://example.com/todos')
  .then((res) => {
    console.log(`statusCode: ${res.status}`)
    console.log(res)
  })
  .catch((error) => {
    console.error(error)
  })
A GET request is possible just using the Node.js standard modules, although
it's more verbose than the option above:
const https = require('https')

const options = {
  hostname: 'example.com',
  port: 443,
  path: '/todos',
  method: 'GET',
}

const req = https.request(options, (res) => {
  console.log(`statusCode: ${res.statusCode}`)

  res.on('data', (d) => {
    process.stdout.write(d)
  })
})

req.on('error', (error) => {
  console.error(error)
})

req.end()
Similar to making an HTTP GET request, you can use the Axios library to
perform a POST request:
const axios = require('axios')

axios
  .post('https://whatever.com/todos', {
    todo: 'Buy the milk',
  })
  .then((res) => {
    console.log(`statusCode: ${res.status}`)
    console.log(res)
  })
  .catch((error) => {
    console.error(error)
  })
Performing a POST request using the Node.js standard modules is more verbose:

const https = require('https')

const data = JSON.stringify({
  todo: 'Buy the milk',
})

const options = {
  hostname: 'whatever.com',
  port: 443,
  path: '/todos',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': data.length,
  },
}

const req = https.request(options, (res) => {
  console.log(`statusCode: ${res.statusCode}`)

  res.on('data', (d) => {
    process.stdout.write(d)
  })
})

req.on('error', (error) => {
  console.error(error)
})

req.write(data)
req.end()
Here is how you can extract the data that was sent as JSON in the request body.

Say you send it with Axios:

axios.post('https://whatever.com/todos', {
  todo: 'Buy the milk',
})

If you are using Express, that's quite simple: use the express.json() middleware, and express.urlencoded() if you also accept URL-encoded form data:

app.use(
  express.urlencoded({
    extended: true,
  })
)

app.use(express.json())

Then you can access the parsed body as req.body in your route handlers.
If you're not using Express and you want to do this in vanilla Node.js, you
need to do a bit more work, of course, as Express abstracts a lot of this for
you.
The key thing to understand is that when you initialize the HTTP server using http.createServer(), the callback is called when the server has received all the HTTP headers, but not the request body.
The request object passed in the connection callback is a stream.
So, we must listen for the body content to be processed, and it's processed
in chunks.
We first get the data by listening to the stream data events, and when the
data ends, the stream end event is called, once:
Starting from Node.js v10, a for await...of syntax is available. It simplifies the example above and makes it look more linear:
const server = http.createServer(async (req, res) => {
  const buffers = []

  for await (const chunk of req) {
    buffers.push(chunk)
  }

  const data = Buffer.concat(buffers).toString()
  console.log(data)
  res.end()
})
Conclusion
Thanks a lot for reading this book.