Node FS Module Collate 030421

Node.js FS Module documentation for the curious programmer


Last updated Saturday, Nov 16, 2019

Writing cross-platform Node.js


A major strength of Node.js is great cross-platform support. With a little
effort you can make sure your code will run on Windows, Linux and OSX.

Estimated reading time: 5 minutes

Table of contents

Cross-platform if you want


Paths
Use path.resolve to traverse the filesystem
Use path.normalize to create reliable paths
Use path.join to join folder names
Scripts in package.json
Join shell commands with a double ampersand instead of a semi-colon
Cross Platform Newline Characters
Temporary files
Home directories
Use the os module for more control
Conclusion
References

Cross-platform if you want

Node.js is cross-platform, meaning it works on Windows, OSX and Linux. Much of the
Node.js community writes Node.js on OSX and then deploys to Linux servers. Because
OSX and Linux are based on UNIX this tends to just work. Windows support is a first-class
citizen in Node.js, though, and if you learn to use Node.js in the right way you can make sure
that you can welcome your Windows friends to your code party.

Paths

The biggest issue you will run into is paths. Node.js does a great job of taking care of most
of this for you, but if you build paths in the wrong way you'll run into problems on Windows.
Say, for example, you are doing some string concatenation to build a path.

var foo = 'foo';
var bar = 'bar';
var filePath = foo + '/' + bar + '/';

Whilst forward slashes will work OK on Windows, if you do string concatenation you miss
out on the protection that the path module in Node.js gives you.

The path (http://nodejs.org/api/path.html) module gives you all of the tools you need to
handle cross-platform paths. For this example we need path.join .

var filePath = path.join(foo, bar);


// 'foo/bar' on OSX and Linux
// 'foo\\bar' on Windows

Use path.resolve to traverse the filesystem

Using path.resolve lets you move around the file system but maintain cross platform
compatibility. As per the documentation you can think of it as a series of cd commands
that output a single path at the end.

path.resolve('../', '/../', '../')


// '/home' on Linux
// '/Users' on OSX
// 'C:\\Users' on Windows

Use path.normalize to create reliable paths

If you find yourself doing things like this

var filePath = '/home/george/../folder/code';

You should be using path.normalize . This will present you with the correct path on
whatever platform you are using.

var filePath = path.normalize('/home/george/../folder/code');


// '/home/folder/code'

Use path.join to join folder names

As we saw with the example above, kittens can die if you use string concatenation to build
paths.

If you need to join paths together use path.join . This will also normalize the result for
you.

path.join('foo', '..', 'bar', 'baz/foo');


// 'bar/baz/foo' on OSX and Linux
// 'bar\\baz\\foo' on Windows

Scripts in package.json

Let’s say you have the following executable script npm-postinstall in the bin folder of
your project.

#!/usr/bin/env node
console.log('node modules installed!');

If you define scripts to be run in your package.json you will find that Windows will choke if
you rely on a Node.js shebang.

{
  "name": "some-app",
  "version": "0.0.1",
  "authors": [
    "George Ornbo <[email protected]>"
  ],
  "scripts": {
    "postinstall": "./bin/npm-postinstall"
  }
}

The solution is to use the node executable.

{
  "name": "some-app",
  "version": "0.0.1",
  "authors": [
    "George Ornbo <[email protected]>"
  ],
  "scripts": {
    "postinstall": "node bin/npm-postinstall"
  }
}

This works for all platforms rather than just OSX and Linux.

Join shell commands with a double ampersand instead of a semi-colon

If you are working with any form of executing command-line programs, and you like to
execute more than one in a single go, you would probably do so like this (let's use the basic
act of creating a folder and cd'ing into it for brevity):

shell.exec('mkdir folder_name; cd folder_name');

Unfortunately, that does not work on Windows. Instead, use this:

shell.exec('mkdir folder_name && cd folder_name');

Cross Platform Newline Characters

We all know how troublesome newline characters are across platforms. Some platforms
use '\n', others use '\r', and the rest use both. If you are struggling to get the newline
character to work in your log statements or strings on multiple platforms, then you might
consider a solution that uses nasty regular expressions to match the correct newline character
that you want. Usually, that would look like this: /(?:\r\n|[\r\n])/ . Yuck. Here's a better
approach. The os module has an EOL constant attached to it that, when referenced, will
output the correct newline character for the operating system.

var os = require('os'),
    EOL = os.EOL;

console.log('This text will print' + EOL + 'on three lines' + EOL + 'no matter the OS');

Thanks to Declan de Wet (http://declandewet.com) for the above two tips.

Temporary files

If you need to write files to a tmp folder use os.tmpdir() to ensure you write to the
correct tmp file location for your platform. Thanks to alessioalex
(https://github.com/alessioalex) for this tip.
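For example, here is a minimal sketch that writes a scratch file to the platform's temporary
directory (the file name myapp.tmp and its contents are hypothetical):

var fs = require('fs');
var os = require('os');
var path = require('path');

// os.tmpdir() returns e.g. '/tmp' on Linux or the user's temp folder on Windows
var tmpFile = path.join(os.tmpdir(), 'myapp.tmp');
fs.writeFileSync(tmpFile, 'scratch data');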

Home directories

On *nix your home directory is process.env.HOME but on Windows the home directory is
process.env.HOMEPATH. You can smooth this out with

var homedir = (process.platform === 'win32') ? process.env.HOMEPATH : process.env.HOME;

The module-smith (https://www.npmjs.org/package/module-smith) module takes care of
this for you, so if you are interested in writing cross-platform modules consider using it.

Thanks to indexzero (https://github.com/indexzero) for this tip.

Use the os module for more control

If you need even more control you can get the operating system platform and CPU
architecture you are running on and react accordingly with the os module
(http://nodejs.org/api/os.html).

var os = require('os');
os.platform(); // equivalent to process.platform
// 'linux' on Linux
// 'win32' on Windows (32-bit / 64-bit)
// 'darwin' on OSX
os.arch();
// 'ia32' on 32-bit CPU architecture
// 'x64' on 64-bit CPU architecture

Conclusion

One of the major strengths of Node.js is the ability to deploy your code on any platform and
to work with almost any development platform. With a bit of knowledge you can make
cross-platform compatibility happen out of the box and avoid having to write the ‘make x
compatible on x’ ticket.

References

Core path module (http://nodejs.org/api/path.html)
Core os module (http://nodejs.org/api/os.html)
Windows and Node: Writing Portable Code (http://dailyjs.com/2012/05/24/windows-and-node-4/)
Tips for Writing Portable Node.js Code (https://www.npmjs.org/package/module-smith)


Node.js Path Module



Summary: in this tutorial, you will learn about the path module in Node.js.

Node.js provides you with the path module that allows you to interact with file paths easily.

The path module has many useful properties and methods to access and manipulate paths in the file
system.

Because path is a core module in Node, you can use it without installing anything:

const path = require('path');



Useful path properties

The path object has the sep property that represents the platform-specific path separator:

path.sep

The path.sep returns \ on Windows and / on Linux and macOS.

The path object also has the delimiter property that represents the path delimiter:

path.delimiter

The path.delimiter returns ; on Windows and : on Linux and macOS.
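As a quick illustration, these properties let you avoid hard-coding separators. A small
sketch (the example path is hypothetical):

// Split a POSIX-style path into segments using the platform separator
// (on Linux and macOS, where path.sep is '/')
const parts = 'public_html/home/js/app.js'.split(path.sep);
console.log(parts); // [ 'public_html', 'home', 'js', 'app.js' ]

// Split the PATH environment variable using the platform delimiter
console.log(process.env.PATH.split(path.delimiter));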

Handy path methods

The following shows some handy methods of the path module that you probably use very often:

path.basename(path[, ext])
path.dirname(path)
path.extname(path)
path.format(pathObj)
path.isAbsolute(path)
path.join(...path)
path.normalize(path)
path.parse(path)
path.relative(from, to)
path.resolve(...path)

path.basename(path[, ext])
The path.basename() returns the last portion of a specified path. For example:

let result = path.basename('/public_html/home/index.html');
console.log(result);

Output:

index.html

Passing the ext parameter strips that extension from the result:

let result = path.basename('/public_html/home/index.html', '.html');
console.log(result);

Output:

index

path.dirname(path)

The path.dirname() method returns the directory name of a specified path. For example:
let result = path.dirname('/public_html/home/index.html');
console.log(result);

Output:

/public_html/home

Note that path.dirname() ignores a trailing directory separator.

path.extname(path)

The path.extname() returns the extension of the path. For example:

console.log(path.extname('index.html'));
console.log(path.extname('app.js'));
console.log(path.extname('node.js.md'));

Output:

.html
.js
.md

path.format(pathObj)
The path.format() method returns a path string from a specified path object.

let pathToFile = path.format({
  dir: 'public_html/home/js',
  base: 'app.js'
});

console.log(pathToFile);

Output (on Linux or macOS):

public_html/home/js/app.js

path.isAbsolute(path)

The path.isAbsolute() returns true if a specified path is an absolute path.

For example, on Windows:

let result = path.isAbsolute('C:\\node.js\\');
console.log(result); // true

result = path.isAbsolute('C:/node.js/');
console.log(result); // true

result = path.isAbsolute('/node.js');
console.log(result); // true

result = path.isAbsolute('home/');
console.log(result); // false

result = path.isAbsolute('.');
console.log(result); // false

On Linux & macOS:

let result = path.isAbsolute('/node/js/');
console.log(result); // true

result = path.isAbsolute('/node/..');
console.log(result); // true

result = path.isAbsolute('node/');
console.log(result); // false

result = path.isAbsolute('.');
console.log(result); // false

path.join(…paths)

The path.join() method does two things:

Join a sequence of path segments using the platform-specific separator as a delimiter

Normalize the resulting path and return it.

For example:

let pathToDir = path.join('/home', 'js', 'dist', 'app.js');
console.log(pathToDir);

Output (on Windows):

\home\js\dist\app.js

path.parse(path)

The path.parse() method returns an object whose properties represent the path elements. The
returned object has the following properties:

root: the root

dir: the directory path from the root

base: the file name + extension


name: the file name

ext: the extension

For example, on Windows:

let pathObj = path.parse('d:/nodejs/html/js/app.js');
console.log(pathObj);

Output:

{
  root: 'd:/',
  dir: 'd:/nodejs/html/js',
  base: 'app.js',
  ext: '.js',
  name: 'app'
}

On Linux or macOS:

let pathObj = path.parse('/nodejs/html/js/app.js');
console.log(pathObj);

Output:

{
  root: '/',
  dir: '/nodejs/html/js',
  base: 'app.js',
  ext: '.js',
  name: 'app'
}

path.normalize(path)
The path.normalize() method normalizes a specified path. It also resolves the '..' and '.'
segments.

For example, on Windows:

let pathToDir = path.normalize('C:\\node.js/module/js//dist');
console.log(pathToDir);

Output:

C:\node.js\module\js\dist

path.relative(from, to)

The path.relative() accepts two arguments and returns the relative path between them based on
the current working directory.

For example, on Linux or macOS:

let relativePath = path.relative('/home/user/config/', '/home/user/js/');
console.log(relativePath);

Output:
../js

path.resolve(…paths)

The path.resolve() method accepts a sequence of paths or path segments and resolves it into an
absolute path. The path.resolve() method prepends each subsequent path from right to left until it
completes constructing an absolute path.

If you don’t pass any argument into the path.resolve() method, it will return the current working
directory.

For example, on Linux or macOS:

console.log("Current working directory:", __dirname);


console.log(path.resolve());
Code language: JavaScript (javascript)

Output:

/home/john
/home/john
Code language: JavaScript (javascript)

In this example, the path.resolve() method returns a path that is the same as the current working
directory.
See another example on Linux or macOS:

// Resolve 2 segments with the current directory
let path1 = path.resolve('html', 'index.html');
console.log(path1);

// Resolve 3 segments with the current directory
let path2 = path.resolve('html', 'js', 'app.js');
console.log(path2);

// Treat the first segment as an absolute path and ignore
// the current working directory
let path3 = path.resolve('/home/html', 'about.html');
console.log(path3);

Output:

/home/john/html/index.html
/home/john/html/js/app.js
/home/html/about.html

Summary
Use the path core module to manipulate the file path effectively.

The path module's behavior is platform-specific: separators and path formats differ between Windows and POSIX systems.



Stream
Stability: 2 - Unstable

A stream is an abstract interface implemented by various objects in Node. For example, a request to an
HTTP server is a stream, as is stdout. Streams are readable, writable, or both. All streams are instances
of EventEmitter.

You can load the Stream base classes by doing require('stream') . There are base classes provided for
Readable streams, Writable streams, Duplex streams, and Transform streams.

This document is split up into 3 sections. The first explains the parts of the API that you need to be
aware of to use streams in your programs. If you never implement a streaming API yourself, you can
stop there.

The second section explains the parts of the API that you need to use if you implement your own
custom streams yourself. The API is designed to make this easy for you to do.

The third section goes into more depth about how streams work, including some of the internal
mechanisms and functions that you should probably not modify unless you definitely know what you
are doing.

API for Stream Consumers


Streams can be either Readable, Writable, or both (Duplex).

All streams are EventEmitters, but they also have other custom methods and properties depending on
whether they are Readable, Writable, or Duplex.
If a stream is both Readable and Writable, then it implements all of the methods and events below. So,
a Duplex or Transform stream is fully described by this API, though their implementation may be
somewhat different.

It is not necessary to implement Stream interfaces in order to consume streams in your programs. If
you are implementing streaming interfaces in your own program, please also refer to API for Stream
Implementors below.

Almost all Node programs, no matter how simple, use Streams in some way. Here is an example of
using Streams in a Node program:

var http = require('http');

var server = http.createServer(function (req, res) {


// req is an http.IncomingMessage, which is a Readable Stream
// res is an http.ServerResponse, which is a Writable Stream

var body = '';


// we want to get the data as utf8 strings
// If you don't set an encoding, then you'll get Buffer objects
req.setEncoding('utf8');

// Readable streams emit 'data' events once a listener is added


req.on('data', function (chunk) {
body += chunk;
});

// the end event tells you that you have the entire body
req.on('end', function () {
try {
var data = JSON.parse(body);
} catch (er) {
// uh oh! bad json!
res.statusCode = 400;
return res.end('error: ' + er.message);
}

// write back something interesting to the user:


res.write(typeof data);
res.end();
});
});

server.listen(1337);

// $ curl localhost:1337 -d '{}'


// object
// $ curl localhost:1337 -d '"foo"'
// string
// $ curl localhost:1337 -d 'not json'
// error: Unexpected token o

Class: stream.Readable

The Readable stream interface is the abstraction for a source of data that you are reading from. In
other words, data comes out of a Readable stream.

A Readable stream will not start emitting data until you indicate that you are ready to receive it.
Readable streams have two "modes": a flowing mode and a paused mode. When in flowing mode, data
is read from the underlying system and provided to your program as fast as possible. In paused mode,
you must explicitly call stream.read() to get chunks of data out. Streams start out in paused mode.

Note: If no data event handlers are attached, and there are no [ pipe() ][] destinations, and the stream
is switched into flowing mode, then data will be lost.

You can switch to flowing mode by doing any of the following:

Adding a [ 'data' event][] handler to listen for data.


Calling the [ resume() ][] method to explicitly open the flow.
Calling the [ pipe() ][] method to send the data to a Writable.

You can switch back to paused mode by doing either of the following:

If there are no pipe destinations, by calling the [ pause() ][] method.


If there are pipe destinations, by removing any [ 'data' event][] handlers, and removing all pipe destinations
by calling the [ unpipe() ][] method.

Note that, for backwards compatibility reasons, removing 'data' event handlers will not
automatically pause the stream. Also, if there are piped destinations, then calling pause() will not
guarantee that the stream will remain paused once those destinations drain and ask for more data.

Examples of readable streams include:

http responses, on the client


http requests, on the server
fs read streams
zlib streams
crypto streams
tcp sockets
child process stdout and stderr
process.stdin

Event: 'readable'

When a chunk of data can be read from the stream, it will emit a 'readable' event.

In some cases, listening for a 'readable' event will cause some data to be read into the internal buffer
from the underlying system, if it hadn't already.

var readable = getReadableStreamSomehow();


readable.on('readable', function() {
// there is some data to read now
});

Once the internal buffer is drained, a readable event will fire again when more data is available.

The readable event is not emitted in the "flowing" mode with the sole exception of the last one, on
end-of-stream.

The 'readable' event indicates that the stream has new information: either new data is available or the
end of the stream has been reached. In the former case, .read() will return that data. In the latter
case, .read() will return null. For instance, in the following example, foo.txt is an empty file:

var fs = require('fs');
var rr = fs.createReadStream('foo.txt');
rr.on('readable', function() {
console.log('readable:', rr.read());
});
rr.on('end', function() {
console.log('end');
});

The output of running this script is:

bash-3.2$ node test.js


readable: null
end

Event: 'data'

chunk {Buffer | String} The chunk of data.

Attaching a data event listener to a stream that has not been explicitly paused will switch the stream
into flowing mode. Data will then be passed as soon as it is available.

If you just want to get all the data out of the stream as fast as possible, this is the best way to do so.

var readable = getReadableStreamSomehow();


readable.on('data', function(chunk) {
console.log('got %d bytes of data', chunk.length);
});

Note that the readable event should not be used together with data because assigning the latter
switches the stream into "flowing" mode, so the readable event will not be emitted.

Event: 'end'

This event fires when there will be no more data to read.

Note that the end event will not fire unless the data is completely consumed. This can be done by
switching into flowing mode, or by calling read() repeatedly until you get to the end.

var readable = getReadableStreamSomehow();


readable.on('data', function(chunk) {
console.log('got %d bytes of data', chunk.length);
});
readable.on('end', function() {
console.log('there will be no more data.');
});

Event: 'close'

Emitted when the underlying resource (for example, the backing file descriptor) has been closed. Not
all streams will emit this.

Event: 'error'

{Error Object}

Emitted if there was an error receiving data.

readable.read([size])

size {Number} Optional argument to specify how much data to read.


Return {String | Buffer | null}

The read() method pulls some data out of the internal buffer and returns it. If there is no data
available, then it will return null .

If you pass in a size argument, then it will return that many bytes. If size bytes are not available,
then it will return null , unless we've ended, in which case it will return the data remaining in the
buffer.

If you do not specify a size argument, then it will return all the data in the internal buffer.

This method should only be called in paused mode. In flowing mode, this method is called
automatically until the internal buffer is drained.

var readable = getReadableStreamSomehow();


readable.on('readable', function() {
var chunk;
while (null !== (chunk = readable.read())) {
console.log('got %d bytes of data', chunk.length);
}
});

If this method returns a data chunk, then it will also trigger the emission of a [ 'data' event][].

Note that calling readable.read([size]) after the end event has been triggered will return null . No
runtime error will be raised.

readable.setEncoding(encoding)

encoding {String} The encoding to use.


Return: this

Call this function to cause the stream to return strings of the specified encoding instead of Buffer
objects. For example, if you do readable.setEncoding('utf8') , then the output data will be interpreted
as UTF-8 data, and returned as strings. If you do readable.setEncoding('hex') , then the data will be
encoded in hexadecimal string format.

This properly handles multi-byte characters that would otherwise be potentially mangled if you simply
pulled the Buffers directly and called buf.toString(encoding) on them. If you want to read the data as
strings, always use this method.

var readable = getReadableStreamSomehow();


readable.setEncoding('utf8');
readable.on('data', function(chunk) {
assert.equal(typeof chunk, 'string');
console.log('got %d characters of string data', chunk.length);
});

readable.resume()

Return: this

This method will cause the readable stream to resume emitting data events.

This method will switch the stream into flowing mode. If you do not want to consume the data from a
stream, but you do want to get to its end event, you can call [ readable.resume() ][] to open the flow of
data.

var readable = getReadableStreamSomehow();


readable.resume();
readable.on('end', function(chunk) {
console.log('got to the end, but did not read anything');
});

readable.pause()

Return: this

This method will cause a stream in flowing mode to stop emitting data events, switching out of
flowing mode. Any data that becomes available will remain in the internal buffer.

var readable = getReadableStreamSomehow();


readable.on('data', function(chunk) {
console.log('got %d bytes of data', chunk.length);
readable.pause();
console.log('there will be no more data for 1 second');
setTimeout(function() {
console.log('now data will start flowing again');
readable.resume();
}, 1000);
});

readable.isPaused()

Return: Boolean

This method returns whether or not the readable has been explicitly paused by client code (using
readable.pause() without a corresponding readable.resume() ).

var readable = new stream.Readable

readable.isPaused() // === false


readable.pause()
readable.isPaused() // === true
readable.resume()
readable.isPaused() // === false

readable.pipe(destination[, options])

destination {Writable Stream} The destination for writing data


options {Object} Pipe options
end {Boolean} End the writer when the reader ends. Default = true
This method pulls all the data out of a readable stream, and writes it to the supplied destination,
automatically managing the flow so that the destination is not overwhelmed by a fast readable stream.

Multiple destinations can be piped to safely.

var readable = getReadableStreamSomehow();


var writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt'
readable.pipe(writable);

This function returns the destination stream, so you can set up pipe chains like so:

var r = fs.createReadStream('file.txt');
var z = zlib.createGzip();
var w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);

For example, emulating the Unix cat command:

process.stdin.pipe(process.stdout);

By default [ end() ][] is called on the destination when the source stream emits end , so that the
destination is no longer writable. Pass { end: false } as options to keep the destination stream open.

This keeps writer open so that "Goodbye" can be written at the end.

reader.pipe(writer, { end: false });


reader.on('end', function() {
writer.end('Goodbye\n');
});

Note that process.stderr and process.stdout are never closed until the process exits, regardless of the
specified options.

readable.unpipe([destination])

destination {Writable Stream} Optional specific stream to unpipe

This method will remove the hooks set up for a previous pipe() call.

If the destination is not specified, then all pipes are removed.

If the destination is specified, but no pipe is set up for it, then this is a no-op.

var readable = getReadableStreamSomehow();


var writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second
readable.pipe(writable);
setTimeout(function() {
console.log('stop writing to file.txt');
readable.unpipe(writable);
console.log('manually close the file stream');
writable.end();
}, 1000);

readable.unshift(chunk)

chunk {Buffer | String} Chunk of data to unshift onto the read queue

This is useful in certain cases where a stream is being consumed by a parser, which needs to "un-
consume" some data that it has optimistically pulled out of the source, so that the stream can be
passed on to some other party.

Note that stream.unshift(chunk) cannot be called after the end event has been triggered; a runtime
error will be raised.

If you find that you must often call stream.unshift(chunk) in your programs, consider implementing a
Transform stream instead. (See API for Stream Implementors, below.)

// Pull off a header delimited by \n\n
// use unshift() if we get too much
// Call the callback with (error, header, stream)
var StringDecoder = require('string_decoder').StringDecoder;
function parseHeader(stream, callback) {
stream.on('error', callback);
stream.on('readable', onReadable);
var decoder = new StringDecoder('utf8');
var header = '';
function onReadable() {
var chunk;
while (null !== (chunk = stream.read())) {
var str = decoder.write(chunk);
if (str.match(/\n\n/)) {
// found the header boundary
var split = str.split(/\n\n/);
header += split.shift();
var remaining = split.join('\n\n');
var buf = new Buffer(remaining, 'utf8');
if (buf.length)
stream.unshift(buf);
stream.removeListener('error', callback);
stream.removeListener('readable', onReadable);
// now the body of the message can be read from the stream.
callback(null, header, stream);
} else {
// still reading the header.
header += str;
}
}
}
}

Note that, unlike stream.push(chunk) , stream.unshift(chunk) will not end the reading process by
resetting the internal reading state of the stream. This can cause unexpected results if unshift is
called during a read (i.e. from within a _read implementation on a custom stream). Following the call
to unshift with an immediate stream.push('') will reset the reading state appropriately, however it is
best to simply avoid calling unshift while in the process of performing a read.

readable.wrap(stream)

stream {Stream} An "old style" readable stream

Versions of Node prior to v0.10 had streams that did not implement the entire Streams API as it is
today. (See "Compatibility" below for more information.)
If you are using an older Node library that emits 'data' events and has a [ pause() ][] method that is
advisory only, then you can use the wrap() method to create a Readable stream that uses the old
stream as its data source.

You will very rarely ever need to call this function, but it exists as a convenience for interacting with
old Node programs and libraries.

For example:

var OldReader = require('./old-api-module.js').OldReader;


var oreader = new OldReader;
var Readable = require('stream').Readable;
var myReader = new Readable().wrap(oreader);

myReader.on('readable', function() {
myReader.read(); // etc.
});

Class: stream.Writable

The Writable stream interface is an abstraction for a destination that you are writing data to.

Examples of writable streams include:

http requests, on the client


http responses, on the server
fs write streams
zlib streams
crypto streams
tcp sockets
child process stdin
process.stdout, process.stderr

writable.write(chunk[, encoding][, callback])

chunk {String | Buffer} The data to write


encoding {String} The encoding, if chunk is a String
callback {Function} Callback for when this chunk of data is flushed
Returns: {Boolean} True if the data was handled completely.

This method writes some data to the underlying system, and calls the supplied callback once the data
has been fully handled.

The return value indicates if you should continue writing right now. If the data had to be buffered
internally, then it will return false . Otherwise, it will return true .

This return value is strictly advisory. You MAY continue to write, even if it returns false . However,
writes will be buffered in memory, so it is best not to do this excessively. Instead, wait for the drain

event before writing more data.

Event: 'drain'

If a [ writable.write(chunk) ][] call returns false, then the drain event will indicate when it is
appropriate to begin writing more data to the stream.

// Write the data to the supplied writable stream 1MM times.
// Be attentive to back-pressure.
function writeOneMillionTimes(writer, data, encoding, callback) {
var i = 1000000;
write();
function write() {
var ok = true;
do {
i -= 1;
if (i === 0) {
// last time!
writer.write(data, encoding, callback);
} else {
// see if we should continue, or wait
// don't pass the callback, because we're not done yet.
ok = writer.write(data, encoding);
}
} while (i > 0 && ok);
if (i > 0) {
// had to stop early!
// write some more once it drains
writer.once('drain', write);
}
}
}

writable.cork()

Forces buffering of all writes.

Buffered data will be flushed either at .uncork() or at .end() call.

writable.uncork()

Flush all data, buffered since .cork() call.
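As a sketch of how cork() and uncork() might be used to batch small writes
(getWritableStreamSomehow() is a placeholder, as in the other examples in this document):

var stream = getWritableStreamSomehow();

stream.cork();
stream.write('some ');
stream.write('data ');
// both buffered chunks are flushed together once uncork() is called
process.nextTick(function() {
  stream.uncork();
});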

writable.setDefaultEncoding(encoding)

encoding {String} The new default encoding


Return: Boolean

Sets the default encoding for a writable stream. Returns true if the encoding is valid and is set.
Otherwise returns false .
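For example (again with the placeholder getWritableStreamSomehow()):

var writer = getWritableStreamSomehow();

// subsequent string chunks passed to write() are interpreted as UTF-8
writer.setDefaultEncoding('utf8');
writer.write('caffè');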
writable.end([chunk][, encoding][, callback])

chunk {String | Buffer} Optional data to write


encoding {String} The encoding, if chunk is a String
callback {Function} Optional callback for when the stream is finished

Call this method when no more data will be written to the stream. If supplied, the callback is attached
as a listener on the finish event.

// write 'hello, ' and then end with 'world!'


var file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');

Calling [ write() ][] after calling [ end() ][] will raise an error:

// end with 'world!' and then write with 'hello, ' will raise an error
var file = fs.createWriteStream('example.txt');
file.end('world!');
file.write('hello, ');

Event: 'finish'

When the [ end() ][] method has been called, and all data has been flushed to the underlying system,
this event is emitted.

var writer = getWritableStreamSomehow();


for (var i = 0; i < 100; i ++) {
writer.write('hello, #' + i + '!\n');
}
writer.end('this is the end\n');
writer.on('finish', function() {
console.error('all writes are now complete.');
});

Event: 'pipe'

src {Readable Stream} source stream that is piping to this writable


This is emitted whenever the pipe() method is called on a readable stream, adding this writable to its
set of destinations.

var writer = getWritableStreamSomehow();


var reader = getReadableStreamSomehow();
writer.on('pipe', function(src) {
console.error('something is piping into the writer');
assert.equal(src, reader);
});
reader.pipe(writer);

Event: 'unpipe'

src {Readable Stream} The source stream that unpiped this writable

This is emitted whenever the [ unpipe() ][] method is called on a readable stream, removing this
writable from its set of destinations.

var writer = getWritableStreamSomehow();


var reader = getReadableStreamSomehow();
writer.on('unpipe', function(src) {
console.error('something has stopped piping into the writer');
assert.equal(src, reader);
});
reader.pipe(writer);
reader.unpipe(writer);

Event: 'error'

{Error object}

Emitted if there was an error when writing or piping data.

Class: stream.Duplex

Duplex streams are streams that implement both the Readable and Writable interfaces. See above for
usage.

Examples of Duplex streams include:


tcp sockets
zlib streams
crypto streams

Class: stream.Transform

Transform streams are Duplex streams where the output is in some way computed from the input.
They implement both the Readable and Writable interfaces. See above for usage.

Examples of Transform streams include:

zlib streams
crypto streams

API for Stream Implementors


To implement any sort of stream, the pattern is the same:

1. Extend the appropriate parent class in your own subclass. (The [ util.inherits ][] method is particularly
helpful for this.)
2. Call the appropriate parent class constructor in your constructor, to be sure that the internal mechanisms are
set up properly.
3. Implement one or more specific methods, as detailed below.

The class to extend and the method(s) to implement depend on the sort of stream class you are
writing:

Use-case                                        Class         Method(s) to implement

Reading only                                    [Readable]    [_read][]
Writing only                                    [Writable]    [_write][]
Reading and writing                             [Duplex]      [_read][], [_write][]
Operate on written data, then read the result   [Transform]   _transform , _flush

In your implementation code, it is very important to never call the methods described in API for
Stream Consumers above. Otherwise, you can potentially cause adverse side effects in programs that
consume your streaming interfaces.

Class: stream.Readable

stream.Readable is an abstract class designed to be extended with an underlying implementation of the


[ _read(size) ][] method.

Please see above under API for Stream Consumers for how to consume streams in your programs.
What follows is an explanation of how to implement Readable streams in your programs.

Example: A Counting Stream

This is a basic example of a Readable stream. It emits the numerals from 1 to 1,000,000 in ascending
order, and then ends.

var Readable = require('stream').Readable;
var util = require('util');
util.inherits(Counter, Readable);

function Counter(opt) {
Readable.call(this, opt);
this._max = 1000000;
this._index = 1;
}

Counter.prototype._read = function() {
var i = this._index++;
if (i > this._max)
this.push(null);
else {
var str = '' + i;
var buf = new Buffer(str, 'ascii');
this.push(buf);
}
};

Example: SimpleProtocol v1 (Sub-optimal)

This is similar to the parseHeader function described above, but implemented as a custom stream. Also,
note that this implementation does not convert the incoming data to a string.

However, this would be better implemented as a Transform stream. See below for a better
implementation.

// A parser for a simple data protocol.
// The "header" is a JSON object, followed by 2 \n characters, and
// then a message body.
//
// NOTE: This can be done more simply as a Transform stream!
// Using Readable directly for this is sub-optimal. See the
// alternative example below under the Transform section.

var Readable = require('stream').Readable;


var util = require('util');

util.inherits(SimpleProtocol, Readable);

function SimpleProtocol(source, options) {


if (!(this instanceof SimpleProtocol))
return new SimpleProtocol(source, options);

Readable.call(this, options);
this._inBody = false;
this._sawFirstCr = false;

// source is a readable stream, such as a socket or file


this._source = source;

var self = this;


source.on('end', function() {
self.push(null);
});

// give it a kick whenever the source is readable


// read(0) will not consume any bytes
source.on('readable', function() {
self.read(0);
});

this._rawHeader = [];
this.header = null;
}

SimpleProtocol.prototype._read = function(n) {
if (!this._inBody) {
var chunk = this._source.read();

// if the source doesn't have data, we don't have data yet.


if (chunk === null)
return this.push('');

// check if the chunk has a \n\n


var split = -1;
for (var i = 0; i < chunk.length; i++) {
if (chunk[i] === 10) { // '\n'
if (this._sawFirstCr) {
split = i;
break;
} else {
this._sawFirstCr = true;
}
} else {
this._sawFirstCr = false;
}
}

if (split === -1) {


// still waiting for the \n\n
// stash the chunk, and try again.
this._rawHeader.push(chunk);
this.push('');
} else {
this._inBody = true;
var h = chunk.slice(0, split);
this._rawHeader.push(h);
var header = Buffer.concat(this._rawHeader).toString();
try {
this.header = JSON.parse(header);
} catch (er) {
this.emit('error', new Error('invalid simple protocol data'));
return;
}
// now, because we got some extra data, unshift the rest
// back into the read queue so that our consumer will see it.
var b = chunk.slice(split);
this.unshift(b);
// calling unshift by itself does not reset the reading state
// of the stream; since we're inside _read, doing an additional
// push('') will reset the state appropriately.
this.push('');

// and let them know that we are done parsing the header.
this.emit('header', this.header);
}
} else {
// from there on, just provide the data to our consumer.
// careful not to push(null), since that would indicate EOF.
var chunk = this._source.read();
if (chunk) this.push(chunk);
}
};

// Usage:
// var parser = new SimpleProtocol(source);
// Now parser is a readable stream that will emit 'header'
// with the parsed header data.

new stream.Readable([options])

options {Object}
highWaterMark {Number} The maximum number of bytes to store in the internal buffer before ceasing to read
from the underlying resource. Default=16kb, or 16 for objectMode streams
encoding {String} If specified, then buffers will be decoded to strings using the specified encoding.
Default=null
objectMode {Boolean} Whether this stream should behave as a stream of objects. Meaning that stream.read(n)
returns a single value instead of a Buffer of size n. Default=false

In classes that extend the Readable class, make sure to call the Readable constructor so that the
buffering settings can be properly initialized.

readable._read(size)

size {Number} Number of bytes to read asynchronously

Note: Implement this method, but do NOT call it directly.

This method is prefixed with an underscore because it is internal to the class that defines it and should
only be called by the internal Readable class methods. All Readable stream implementations must
provide a _read method to fetch data from the underlying resource.

When _read is called, if data is available from the resource, _read should start pushing that data into
the read queue by calling this.push(dataChunk) . _read should continue reading from the resource and
pushing data until push returns false, at which point it should stop reading from the resource. Only
when _read is called again after it has stopped should it start reading more data from the resource and
pushing that data onto the queue.

Note: once the _read() method is called, it will not be called again until the push method is called.

The size argument is advisory. Implementations where a "read" is a single call that returns data can
use this to know how much data to fetch. Implementations where that is not relevant, such as TCP or
TLS, may ignore this argument, and simply provide data whenever it becomes available. There is no
need, for example to "wait" until size bytes are available before calling [ stream.push(chunk) ][].

readable.push(chunk[, encoding])

chunk {Buffer | null | String} Chunk of data to push into the read queue
encoding {String} Encoding of String chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'

return {Boolean} Whether or not more pushes should be performed

Note: This method should be called by Readable implementors, NOT by consumers of Readable
streams.

If a value other than null is passed, the push() method adds a chunk of data into the queue for
subsequent stream processors to consume. If null is passed, it signals the end of the stream (EOF),
after which no more data can be written.

The data added with push can be pulled out by calling the read() method when the 'readable' event
fires.

This API is designed to be as flexible as possible. For example, you may be wrapping a lower-level
source which has some sort of pause/resume mechanism, and a data callback. In those cases, you could
wrap the low-level source object by doing something like this:

// source is an object with readStop() and readStart() methods,
// and an `ondata` member that gets called when it has data, and
// an `onend` member that gets called when the data is over.

util.inherits(SourceWrapper, Readable);

function SourceWrapper(options) {
Readable.call(this, options);

this._source = getLowlevelSourceObject();
var self = this;

// Every time there's data, we push it into the internal buffer.


this._source.ondata = function(chunk) {
// if push() returns false, then we need to stop reading from source
if (!self.push(chunk))
self._source.readStop();
};

// When the source ends, we push the EOF-signaling `null` chunk


this._source.onend = function() {
self.push(null);
};
}

// _read will be called when the stream wants to pull more data in
// the advisory size argument is ignored in this case.
SourceWrapper.prototype._read = function(size) {
this._source.readStart();
};

Class: stream.Writable

stream.Writable is an abstract class designed to be extended with an underlying implementation of the


[ _write(chunk, encoding, callback) ][] method.

Please see above under API for Stream Consumers for how to consume writable streams in your
programs. What follows is an explanation of how to implement Writable streams in your programs.

new stream.Writable([options])

options {Object}
highWaterMark {Number} Buffer level when [ write() ][] starts returning false. Default=16kb, or 16 for
objectMode streams
decodeStrings {Boolean} Whether or not to decode strings into Buffers before passing them to [ _write() ][].
Default=true
objectMode {Boolean} Whether or not the write(anyObj) is a valid operation. If set you can write arbitrary
data instead of only Buffer / String data. Default=false

In classes that extend the Writable class, make sure to call the constructor so that the buffering
settings can be properly initialized.

writable._write(chunk, encoding, callback)

chunk {Buffer | String} The chunk to be written. Will always be a buffer unless the decodeStrings option was
set to false .
encoding {String} If the chunk is a string, then this is the encoding type. Ignore if chunk is a buffer. Note that
chunk will always be a buffer unless the decodeStrings option is explicitly set to false .
callback {Function} Call this function (optionally with an error argument) when you are done processing the
supplied chunk.

All Writable stream implementations must provide a [ _write() ][] method to send data to the
underlying resource.

Note: This function MUST NOT be called directly. It should be implemented by child classes, and
called by the internal Writable class methods only.

Call the callback using the standard callback(error) pattern to signal that the write completed
successfully or with an error.

If the decodeStrings flag is set in the constructor options, then chunk may be a string rather than a
Buffer, and encoding will indicate the sort of string that it is. This is to support implementations that
have an optimized handling for certain string data encodings. If you do not explicitly set the
decodeStrings option to false , then you can safely ignore the encoding argument, and assume that

chunk will always be a Buffer.


This method is prefixed with an underscore because it is internal to the class that defines it, and should
not be called directly by user programs. However, you are expected to override this method in your
own extension classes.
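As a sketch of what a minimal implementation might look like, in the same style as the Counter
example above (the class name MyWritable is hypothetical):

var Writable = require('stream').Writable;
var util = require('util');

util.inherits(MyWritable, Writable);

function MyWritable(options) {
  Writable.call(this, options);
}

MyWritable.prototype._write = function(chunk, encoding, callback) {
  // chunk is a Buffer unless the decodeStrings option was set to false
  console.log('writing %d bytes', chunk.length);
  callback(); // signal that this chunk has been fully handled
};

// Usage:
// getReadableStreamSomehow().pipe(new MyWritable());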

writable._writev(chunks, callback)

chunks {Array} The chunks to be written. Each chunk has the following format: { chunk: ..., encoding: ... } .
callback {Function} Call this function (optionally with an error argument) when you are done processing the
supplied chunks.

Note: This function MUST NOT be called directly. It may be implemented by child classes, and called
by the internal Writable class methods only.

This function is completely optional to implement. In most cases it is unnecessary. If implemented, it


will be called with all the chunks that are buffered in the write queue.
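Continuing the hypothetical MyWritable sketch above, a _writev implementation might look like this:

MyWritable.prototype._writev = function(chunks, callback) {
  // each entry has the form { chunk: ..., encoding: ... }
  var total = 0;
  chunks.forEach(function(c) {
    total += c.chunk.length;
  });
  console.log('writing a batch of %d chunks (%d bytes)', chunks.length, total);
  callback();
};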

Class: stream.Duplex

A "duplex" stream is one that is both Readable and Writable, such as a TCP socket connection.

Note that stream.Duplex is an abstract class designed to be extended with an underlying


implementation of the _read(size) and [ _write(chunk, encoding, callback) ][] methods as you would
with a Readable or Writable stream class.

Since JavaScript doesn't have multiple prototypal inheritance, this class prototypally inherits from
Readable, and then parasitically from Writable. It is thus up to the user to implement both the
low-level _read(n) method as well as the low-level [ _write(chunk, encoding, callback) ][] method on
extension duplex classes.

new stream.Duplex(options)

options {Object} Passed to both Writable and Readable constructors. Also has the following fields:
allowHalfOpen {Boolean} Default=true. If set to false , then the stream will automatically end the readable
side when the writable side ends and vice versa.
readableObjectMode {Boolean} Default=false. Sets objectMode for readable side of the stream. Has no effect if
objectMode is true .
writableObjectMode {Boolean} Default=false. Sets objectMode for writable side of the stream. Has no effect if
objectMode is true .

In classes that extend the Duplex class, make sure to call the constructor so that the buffering settings
can be properly initialized.
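As a sketch of a minimal Duplex implementation in the style of the examples above (the class name
SimpleDuplex is hypothetical):

var Duplex = require('stream').Duplex;
var util = require('util');

util.inherits(SimpleDuplex, Duplex);

function SimpleDuplex(options) {
  Duplex.call(this, options);
  this._count = 0;
}

// readable side: emit three chunks, then signal EOF
SimpleDuplex.prototype._read = function(size) {
  this._count += 1;
  if (this._count > 3)
    this.push(null);
  else
    this.push('chunk ' + this._count + '\n');
};

// writable side: just log whatever is written
SimpleDuplex.prototype._write = function(chunk, encoding, callback) {
  console.log('received: %s', chunk.toString());
  callback();
};

// Usage:
// var d = new SimpleDuplex();
// d.pipe(process.stdout); // prints chunk 1 .. chunk 3
// d.end('hello');         // logs: received: hello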

Class: stream.Transform

A "transform" stream is a duplex stream where the output is causally connected in some way to the
input, such as a zlib stream or a crypto stream.

There is no requirement that the output be the same size as the input, the same number of chunks, or
arrive at the same time. For example, a Hash stream will only ever have a single chunk of output which
is provided when the input is ended. A zlib stream will produce output that is either much smaller or
much larger than its input.

Rather than implement the [ _read() ][] and [ _write() ][] methods, Transform classes must implement
the _transform() method, and may optionally also implement the _flush() method. (See below.)

new stream.Transform([options])

options {Object} Passed to both Writable and Readable constructors.

In classes that extend the Transform class, make sure to call the constructor so that the buffering
settings can be properly initialized.

transform._transform(chunk, encoding, callback)


chunk {Buffer | String} The chunk to be transformed. Will always be a buffer unless the decodeStrings option
was set to false .
encoding {String} If the chunk is a string, then this is the encoding type. (Ignore if chunk is a
buffer.)
callback {Function} Call this function (optionally with an error argument and data) when you are done
processing the supplied chunk.

Note: This function MUST NOT be called directly. It should be implemented by child classes, and
called by the internal Transform class methods only.

All Transform stream implementations must provide a _transform method to accept input and
produce output.

_transform should do whatever has to be done in this specific Transform class, to handle the bytes
being written, and pass them off to the readable portion of the interface. Do asynchronous I/O,
process things, and so on.

Call transform.push(outputChunk) 0 or more times to generate output from this input chunk, depending
on how much data you want to output as a result of this chunk.

Call the callback function only when the current chunk is completely consumed. Note that there may
or may not be output as a result of any particular input chunk. If you supply a data chunk as the second
argument to the callback function, it will be passed to the push method; in other words, the following
are equivalent:

transform.prototype._transform = function (data, encoding, callback) {


this.push(data);
callback();
}

transform.prototype._transform = function (data, encoding, callback) {


callback(null, data);
}
This method is prefixed with an underscore because it is internal to the class that defines it, and should
not be called directly by user programs. However, you are expected to override this method in your
own extension classes.

transform._flush(callback)

callback {Function} Call this function (optionally with an error argument) when you are done flushing any
remaining data.

Note: This function MUST NOT be called directly. It MAY be implemented by child classes, and if so,
will be called by the internal Transform class methods only.

In some cases, your transform operation may need to emit a bit more data at the end of the stream.
For example, a Zlib compression stream will store up some internal state so that it can optimally
compress the output. At the end, however, it needs to do the best it can with what is left, so that the
data will be complete.

In those cases, you can implement a _flush method, which will be called at the very end, after all the
written data is consumed, but before emitting end to signal the end of the readable side. Just like with
_transform , call transform.push(chunk) zero or more times, as appropriate, and call callback when the
flush operation is complete.

This method is prefixed with an underscore because it is internal to the class that defines it, and should
not be called directly by user programs. However, you are expected to override this method in your
own extension classes.

Events: 'finish' and 'end'

The [ finish ][] and [ end ][] events are from the parent Writable and Readable classes respectively.
The finish event is fired after .end() is called and all chunks have been processed by _transform ;
end is fired after all data has been output, which is after the callback in _flush has been called.

Example: SimpleProtocol parser v2

The example above of a simple protocol parser can be implemented simply by using the higher level
Transform stream class, similar to the parseHeader and SimpleProtocol v1 examples above.

In this example, rather than providing the input as an argument, it would be piped into the parser,
which is a more idiomatic Node stream approach.

var util = require('util');
var Transform = require('stream').Transform;
util.inherits(SimpleProtocol, Transform);

function SimpleProtocol(options) {
if (!(this instanceof SimpleProtocol))
return new SimpleProtocol(options);

Transform.call(this, options);
this._inBody = false;
this._sawFirstCr = false;
this._rawHeader = [];
this.header = null;
}

SimpleProtocol.prototype._transform = function(chunk, encoding, done) {


if (!this._inBody) {
// check if the chunk has a \n\n
var split = -1;
for (var i = 0; i < chunk.length; i++) {
if (chunk[i] === 10) { // '\n'
if (this._sawFirstCr) {
split = i;
break;
} else {
this._sawFirstCr = true;
}
} else {
this._sawFirstCr = false;
}
}

if (split === -1) {


// still waiting for the \n\n
// stash the chunk, and try again.
this._rawHeader.push(chunk);
} else {
this._inBody = true;
var h = chunk.slice(0, split);
this._rawHeader.push(h);
var header = Buffer.concat(this._rawHeader).toString();
try {
this.header = JSON.parse(header);
} catch (er) {
this.emit('error', new Error('invalid simple protocol data'));
return;
}
// and let them know that we are done parsing the header.
this.emit('header', this.header);

// now, because we got some extra data, emit this first.


this.push(chunk.slice(split));
}
} else {
// from there on, just provide the data to our consumer as-is.
this.push(chunk);
}
done();
};
// Usage:
// var parser = new SimpleProtocol();
// source.pipe(parser)
// Now parser is a readable stream that will emit 'header'
// with the parsed header data.

Class: stream.PassThrough

This is a trivial implementation of a Transform stream that simply passes the input bytes across to the
output. Its purpose is mainly for examples and testing, but there are occasionally use cases where it
can come in handy as a building block for novel sorts of streams.
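
As a brief illustration (the variable names here are made up for the sketch), a PassThrough can be dropped into a pipeline to observe the data flowing through it without altering it:

var PassThrough = require('stream').PassThrough;

var tap = new PassThrough();
tap.on('data', function(chunk) {
  console.error('saw %d bytes', chunk.length);
});

process.stdin.pipe(tap).pipe(process.stdout);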

Streams: Under the Hood


Buffering

Both Writable and Readable streams will buffer data on an internal object called
_writableState.buffer or _readableState.buffer , respectively.

The amount of data that will potentially be buffered depends on the highWaterMark option which is
passed into the constructor.

Buffering in Readable streams happens when the implementation calls stream.push(chunk). If the
consumer of the Stream does not call stream.read(), then the data will sit in the internal queue until it
is consumed.

Buffering in Writable streams happens when the user calls stream.write(chunk) repeatedly, even
when write() returns false.

The purpose of streams, especially with the pipe() method, is to limit the buffering of data to
acceptable levels, so that sources and destinations of varying speed will not overwhelm the available
memory.
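
To make this concrete, here is a small sketch (the slow destination is simulated with a timer, and the names are illustrative): with a small highWaterMark, write() returns false as soon as the buffered data exceeds it, and 'drain' fires once the buffer has emptied:

var Writable = require('stream').Writable;

var slow = new Writable({ highWaterMark: 4 });
slow._write = function(chunk, encoding, callback) {
  setTimeout(callback, 100); // pretend the destination is slow
};

var ok = slow.write('hello world'); // larger than highWaterMark
console.log(ok); // false: the internal buffer is over the high-water mark
slow.on('drain', function() {
  console.log('buffered data flushed; safe to write again');
});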

stream.read(0)
There are some cases where you want to trigger a refresh of the underlying readable stream
mechanisms, without actually consuming any data. In that case, you can call stream.read(0) , which will
always return null.

If the internal read buffer is below the highWaterMark , and the stream is not currently reading, then
calling read(0) will trigger a low-level _read call.

There is almost never a need to do this. However, you will see some cases in Node's internals where
this is done, particularly in the Readable stream class internals.

stream.push('')

Pushing a zero-byte string or Buffer (when not in Object mode) has an interesting side effect. Because
it is a call to stream.push(), it will end the reading process. However, it does not add any data to the
readable buffer, so there's nothing for a user to consume.

Very rarely, there are cases where you have no data to provide now, but the consumer of your stream
(or, perhaps, another bit of your own code) will know when to check again, by calling stream.read(0) . In
those cases, you may call stream.push('') .

So far, the only use case for this functionality is in the tls.CryptoStream class, which is deprecated in
Node v0.12. If you find that you have to use stream.push(''), please consider another approach,
because it almost certainly indicates that something is horribly wrong.

Compatibility with Older Node Versions

In versions of Node prior to v0.10, the Readable stream interface was simpler, but also less powerful
and less useful.

Rather than waiting for you to call the read() method, 'data' events would start emitting immediately. If
you needed to do some I/O to decide how to handle data, then you had to store the chunks in some kind of
buffer so that they would not be lost.

The pause() method was advisory, rather than guaranteed. This meant that you still had to be prepared to
receive 'data' events even when the stream was in a paused state.

In Node v0.10, the Readable class described below was added. For backwards compatibility with older
Node programs, Readable streams switch into "flowing mode" when a 'data' event handler is added,
or when the resume() method is called. The effect is that, even if you are not using the new read()
method and 'readable' event, you no longer have to worry about losing 'data' chunks.

Most programs will continue to function normally. However, this introduces an edge case in the
following conditions:

No 'data' event handler is added.
The resume() method is never called.
The stream is not piped to any writable destination.

For example, consider the following code:

// WARNING!  BROKEN!
var net = require('net');

net.createServer(function(socket) {

  // we add an 'end' listener, but never consume the data
  socket.on('end', function() {
    // It will never get here.
    socket.end('I got your message (but didnt read it)\n');
  });

}).listen(1337);

In versions of Node prior to v0.10, the incoming message data would simply be discarded. However, in
Node v0.10 and beyond, the socket will remain paused forever.

The workaround in this situation is to call the resume() method to start the flow of data:

// Workaround
var net = require('net');

net.createServer(function(socket) {

  socket.on('end', function() {
    socket.end('I got your message (but didnt read it)\n');
  });

  // start the flow of data, discarding it.
  socket.resume();

}).listen(1337);

In addition to new Readable streams switching into flowing mode, pre-v0.10 style streams can be
wrapped in a Readable class using the wrap() method.

Object Mode

Normally, Streams operate on Strings and Buffers exclusively.

Streams that are in object mode can emit generic JavaScript values other than Buffers and Strings.

A Readable stream in object mode will always return a single item from a call to stream.read(size) ,
regardless of what the size argument is.

A Writable stream in object mode will always ignore the encoding argument to
stream.write(data, encoding) .

The special value null still retains its special value for object mode streams. That is, for object mode
readable streams, null as a return value from stream.read() indicates that there is no more data, and
stream.push(null) will signal the end of stream data (EOF).

No streams in Node core are object mode streams. This pattern is only used by userland streaming
libraries.

You should set objectMode in your stream child class constructor on the options object. Setting
objectMode mid-stream is not safe.
For Duplex streams objectMode can be set exclusively for readable or writable side with
readableObjectMode and writableObjectMode respectively. These options can be used to implement
parsers and serializers with Transform streams.

var util = require('util');
var StringDecoder = require('string_decoder').StringDecoder;
var Transform = require('stream').Transform;
util.inherits(JSONParseStream, Transform);

// Gets \n-delimited JSON string data, and emits the parsed objects
function JSONParseStream() {
  if (!(this instanceof JSONParseStream))
    return new JSONParseStream();

  Transform.call(this, { readableObjectMode : true });

  this._buffer = '';
  this._decoder = new StringDecoder('utf8');
}

JSONParseStream.prototype._transform = function(chunk, encoding, cb) {
  this._buffer += this._decoder.write(chunk);
  // split on newlines
  var lines = this._buffer.split(/\r?\n/);
  // keep the last partial line buffered
  this._buffer = lines.pop();
  for (var l = 0; l < lines.length; l++) {
    var line = lines[l];
    try {
      var obj = JSON.parse(line);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // push the parsed object out to the readable consumer
    this.push(obj);
  }
  cb();
};

JSONParseStream.prototype._flush = function(cb) {
  // Just handle any leftover
  var rem = this._buffer.trim();
  if (rem) {
    try {
      var obj = JSON.parse(rem);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // push the parsed object out to the readable consumer
    this.push(obj);
  }
  cb();
};
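
A possible usage sketch for the parser above (reading from stdin is just one choice of source):

var parser = new JSONParseStream();

parser.on('data', function(obj) {
  // obj is a parsed JavaScript object rather than a Buffer,
  // because the readable side is in object mode.
  console.log(obj);
});

process.stdin.pipe(parser);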

Events
Stability: 4 - API Frozen

Many objects in Node emit events: a net.Server emits an event each time a peer connects to it, a
fs.readStream emits an event when the file is opened. All objects which emit events are instances of
events.EventEmitter. You can access this module by doing: require("events");

Typically, event names are represented by a camel-cased string, however, there aren't any strict
restrictions on that, as any string will be accepted.

Functions can then be attached to objects, to be executed when an event is emitted. These functions
are called listeners. Inside a listener function, this refers to the EventEmitter that the listener was
attached to.

Class: events.EventEmitter
To access the EventEmitter class, require('events').EventEmitter .

When an EventEmitter instance experiences an error, the typical action is to emit an 'error' event.
Error events are treated as a special case in node. If there is no listener for it, then the default action is
to print a stack trace and exit the program.

All EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener'
when a listener is removed.

emitter.addListener(event, listener)
emitter.on(event, listener)

Adds a listener to the end of the listeners array for the specified event. No checks are made to see if
the listener has already been added. Multiple calls passing the same combination of event and
listener will result in the listener being added multiple times.

server.on('connection', function (stream) {
  console.log('someone connected!');
});

Returns emitter, so calls can be chained.

emitter.once(event, listener)

Adds a one time listener for the event. This listener is invoked only the next time the event is fired,
after which it is removed.

server.once('connection', function (stream) {
  console.log('Ah, we have our first user!');
});

Returns emitter, so calls can be chained.

emitter.removeListener(event, listener)

Remove a listener from the listener array for the specified event. Caution: changes array indices in the
listener array behind the listener.

var callback = function(stream) {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener will remove, at most, one instance of a listener from the listener array. If any single
listener has been added multiple times to the listener array for the specified event, then
removeListener must be called multiple times to remove each instance.

Returns emitter, so calls can be chained.

emitter.removeAllListeners([event])

Removes all listeners, or those of the specified event. It's not a good idea to remove listeners that were
added elsewhere in the code, especially when it's on an emitter that you didn't create (e.g. sockets or
file streams).

Returns emitter, so calls can be chained.

emitter.setMaxListeners(n)

By default EventEmitters will print a warning if more than 10 listeners are added for a particular
event. This is a useful default which helps finding memory leaks. Obviously not all Emitters should be
limited to 10. This function allows that to be increased. Set to zero for unlimited.

Returns emitter, so calls can be chained.

EventEmitter.defaultMaxListeners

emitter.setMaxListeners(n) sets the maximum on a per-instance basis. This class property lets you set it
for all EventEmitter instances, current and future, effective immediately. Use with care.

Note that emitter.setMaxListeners(n) still has precedence over EventEmitter.defaultMaxListeners .

emitter.listeners(event)

Returns a copy of the array of listeners for the specified event.

server.on('connection', function (stream) {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection'))); // [ [Function] ]

emitter.emit(event[, arg1][, arg2][, ...])

Execute each of the listeners in order with the supplied arguments.

Returns true if event had listeners, false otherwise.
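
For example (the 'result' event name is arbitrary):

var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

emitter.on('result', function(a, b) {
  console.log('got', a, b);
});

// Arguments after the event name are passed through to each listener.
console.log(emitter.emit('result', 1, 2)); // prints 'got 1 2', then true
console.log(emitter.emit('missing'));      // false: no listeners attached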

Class Method: EventEmitter.listenerCount(emitter, event)

Return the number of listeners for a given event.
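
For example (the 'ping' event name is arbitrary):

var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

emitter.on('ping', function() {});
emitter.on('ping', function() {});

// Note this is a class method, called on EventEmitter itself:
console.log(EventEmitter.listenerCount(emitter, 'ping')); // 2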

Event: 'newListener'

event {String} The event name
listener {Function} The event handler function

This event is emitted any time a listener is added. When this event is triggered, the listener may not
yet have been added to the array of listeners for the event .

Event: 'removeListener'

event {String} The event name
listener {Function} The event handler function

This event is emitted any time someone removes a listener. When this event is triggered, the listener
may not yet have been removed from the array of listeners for the event .

Path
Stability: 3 - Stable

This module contains utilities for handling and transforming file paths. Almost all these methods
perform only string transformations. The file system is not consulted to check whether paths are valid.

Use require('path') to use this module. The following methods are provided:

path.normalize(p)
Normalize a string path, taking care of '..' and '.' parts.

When multiple slashes are found, they're replaced by a single one; when the path contains a trailing
slash, it is preserved. On Windows backslashes are used.

Example:

path.normalize('/foo/bar//baz/asdf/quux/..')
// returns
'/foo/bar/baz/asdf'

path.join([path1][, path2][, ...])


Join all arguments together and normalize the resulting path.

Arguments must be strings. In v0.8, non-string arguments were silently ignored. In v0.10 and up, an
exception is thrown.
Example:

path.join('/foo', 'bar', 'baz/asdf', 'quux', '..')
// returns
'/foo/bar/baz/asdf'

path.join('foo', {}, 'bar')
// throws exception
TypeError: Arguments to path.join must be strings

path.resolve([from ...], to)

Resolves to to an absolute path.

If to isn't already absolute, from arguments are prepended in right-to-left order, until an absolute
path is found. If after using all from paths still no absolute path is found, the current working directory
is used as well. The resulting path is normalized, and trailing slashes are removed unless the path gets
resolved to the root directory. Non-string from arguments are ignored.

Another way to think of it is as a sequence of cd commands in a shell.

path.resolve('foo/bar', '/tmp/file/', '..', 'a/../subfile')

Is similar to:

cd foo/bar
cd /tmp/file/
cd ..
cd a/../subfile
pwd

The difference is that the different paths don't need to exist and may also be files.

Examples:
path.resolve('/foo/bar', './baz')
// returns
'/foo/bar/baz'

path.resolve('/foo/bar', '/tmp/file/')
// returns
'/tmp/file'

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif')
// if currently in /home/myself/node, it returns
'/home/myself/node/wwwroot/static_files/gif/image.gif'

path.isAbsolute(path)
Determines whether path is an absolute path. An absolute path will always resolve to the same
location, regardless of the working directory.

Posix examples:

path.isAbsolute('/foo/bar') // true
path.isAbsolute('/baz/..') // true
path.isAbsolute('qux/') // false
path.isAbsolute('.') // false

Windows examples:

path.isAbsolute('//server') // true
path.isAbsolute('C:/foo/..') // true
path.isAbsolute('bar\\baz') // false
path.isAbsolute('.') // false

path.relative(from, to)
Solve the relative path from from to to .

At times we have two absolute paths, and we need to derive the relative path from one to the other.
This is actually the reverse transform of path.resolve , which means we see that:

path.resolve(from, path.relative(from, to)) == path.resolve(to)


Examples:

path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb')
// returns
'..\\..\\impl\\bbb'

path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb')
// returns
'../../impl/bbb'

path.dirname(p)
Return the directory name of a path. Similar to the Unix dirname command.

Example:

path.dirname('/foo/bar/baz/asdf/quux')
// returns
'/foo/bar/baz/asdf'

path.basename(p[, ext])
Return the last portion of a path. Similar to the Unix basename command.

Example:

path.basename('/foo/bar/baz/asdf/quux.html')
// returns
'quux.html'

path.basename('/foo/bar/baz/asdf/quux.html', '.html')
// returns
'quux'

path.extname(p)
Return the extension of the path, from the last '.' to end of string in the last portion of the path. If there
is no '.' in the last portion of the path or the first character of it is '.', then it returns an empty string.
Examples:
path.extname('index.html')
// returns
'.html'

path.extname('index.coffee.md')
// returns
'.md'

path.extname('index.')
// returns
'.'

path.extname('index')
// returns
''

path.sep
The platform-specific file separator. '\\' or '/'.

An example on *nix:

'foo/bar/baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']

An example on Windows:

'foo\\bar\\baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']

path.delimiter
The platform-specific path delimiter, ';' or ':'.

An example on *nix:
console.log(process.env.PATH)
// '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'

process.env.PATH.split(path.delimiter)
// returns
['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']

An example on Windows:

console.log(process.env.PATH)
// 'C:\Windows\system32;C:\Windows;C:\Program Files\nodejs\'

process.env.PATH.split(path.delimiter)
// returns
['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\nodejs\\']

path.parse(pathString)
Returns an object from a path string.

An example on *nix:

path.parse('/home/user/dir/file.txt')
// returns
{
root : "/",
dir : "/home/user/dir",
base : "file.txt",
ext : ".txt",
name : "file"
}

An example on Windows:

path.parse('C:\\path\\dir\\index.html')
// returns
{
root : "C:\\",
dir : "C:\\path\\dir",
base : "index.html",
ext : ".html",
name : "index"
}

path.format(pathObject)
Returns a path string from an object, the opposite of path.parse above.

path.format({
root : "/",
dir : "/home/user/dir",
base : "file.txt",
ext : ".txt",
name : "file"
})
// returns
'/home/user/dir/file.txt'

path.posix
Provide access to aforementioned path methods but always interact in a posix compatible way.

path.win32
Provide access to aforementioned path methods but always interact in a win32 compatible way.
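
For example, the same calls evaluated under each platform's rules:

var path = require('path');

console.log(path.posix.join('foo', 'bar')); // 'foo/bar'
console.log(path.win32.join('foo', 'bar')); // 'foo\\bar'

console.log(path.win32.basename('C:\\temp\\myfile.html')); // 'myfile.html'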

Buffer
Stability: 3 - Stable

Pure JavaScript is Unicode friendly but not nice to binary data. When dealing with TCP streams or the
file system, it's necessary to handle octet streams. Node has several strategies for manipulating,
creating, and consuming octet streams.

Raw data is stored in instances of the Buffer class. A Buffer is similar to an array of integers but
corresponds to a raw memory allocation outside the V8 heap. A Buffer cannot be resized.

The Buffer class is a global, making it very rare that one would need to ever require('buffer') .

Converting between Buffers and JavaScript string objects requires an explicit encoding method. Here
are the different string encodings.

'ascii' - for 7 bit ASCII data only. This encoding method is very fast, and will strip the high bit if set.

'utf8' - Multibyte encoded Unicode characters. Many web pages and other document formats use UTF-8.

'utf16le' - 2 or 4 bytes, little endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.

'ucs2' - Alias of 'utf16le'.

'base64' - Base64 string encoding.

'binary' - A way of encoding raw binary data into strings by using only the first 8 bits of each character. This encoding method is deprecated and should be avoided in favor of Buffer objects where possible. This encoding will be removed in future versions of Node.

'hex' - Encode each byte as two hexadecimal characters.
Creating a typed array from a Buffer works with the following caveats:

1. The buffer's memory is copied, not shared.

2. The buffer's memory is interpreted as an array, not a byte array. That is,
new Uint32Array(new Buffer([1,2,3,4])) creates a 4-element Uint32Array with elements [1,2,3,4],
not a Uint32Array with a single element [0x1020304] or [0x4030201].

NOTE: Node.js v0.8 simply retained a reference to the buffer in array.buffer instead of cloning it.
While more efficient, it introduces subtle incompatibilities with the typed arrays specification.
ArrayBuffer#slice() makes a copy of the slice while Buffer#slice() creates a view.
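
A small sketch demonstrating both caveats:

var buf = new Buffer([1, 2, 3, 4]);
var arr = new Uint32Array(buf);

console.log(arr.length); // 4: one element per source entry, not one per 4 bytes
console.log(arr[0]);     // 1

buf[0] = 255;            // the memory was copied, so arr is unaffected
console.log(arr[0]);     // still 1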

Class: Buffer
The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety
of ways.

new Buffer(size)

size Number

Allocates a new buffer of size octets. Note, size must be no more than kMaxLength. Otherwise, a
RangeError will be thrown here. Unlike ArrayBuffers , the underlying memory for buffers is not
initialized. So the contents of a newly created Buffer is unknown. Use buf.fill(0) to initialize a buffer
to zeroes.

new Buffer(array)

array Array
Allocates a new buffer using an array of octets.

new Buffer(buffer)

buffer {Buffer}

Copies the passed buffer data onto a new Buffer instance.

new Buffer(str[, encoding])

str String - string to encode.
encoding String - encoding to use, Optional.

Allocates a new buffer containing the given str . encoding defaults to 'utf8' .

Class Method: Buffer.isEncoding(encoding)

encoding {String} The encoding string to test

Returns true if the encoding is a valid encoding argument, or false otherwise.

Class Method: Buffer.isBuffer(obj)

obj Object
Return: Boolean

Tests if obj is a Buffer .

Class Method: Buffer.byteLength(string[, encoding])

string String
encoding String, Optional, Default: 'utf8'
Return: Number

Gives the actual byte length of a string. encoding defaults to 'utf8' . This is not the same as
String.prototype.length since that returns the number of characters in a string.

Example:

str = '\u00bd + \u00bc = \u00be';

console.log(str + ": " + str.length + " characters, " +
            Buffer.byteLength(str, 'utf8') + " bytes");

// ½ + ¼ = ¾: 9 characters, 12 bytes

Class Method: Buffer.concat(list[, totalLength])

list {Array} List of Buffer objects to concat
totalLength {Number} Total length of the buffers when concatenated

Returns a buffer which is the result of concatenating all the buffers in the list together.

If the list has no items, or if the totalLength is 0, then it returns a zero-length buffer.

If the list has exactly one item, then the first item of the list is returned.

If the list has more than one item, then a new Buffer is created.

If totalLength is not provided, it is read from the buffers in the list. However, this adds an additional
loop to the function, so it is faster to provide the length explicitly.
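
For example:

var bufs = [new Buffer('abc'), new Buffer('def')];

// Passing the total length up front avoids the extra loop:
var joined = Buffer.concat(bufs, 6);
console.log(joined.toString()); // 'abcdef'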

Class Method: Buffer.compare(buf1, buf2)

buf1 {Buffer}
buf2 {Buffer}
The same as buf1.compare(buf2) . Useful for sorting an Array of Buffers:

var arr = [Buffer('1234'), Buffer('0123')];
arr.sort(Buffer.compare);

buf.length

Number

The size of the buffer in bytes. Note that this is not necessarily the size of the contents. length refers
to the amount of memory allocated for the buffer object. It does not change when the contents of the
buffer are changed.

buf = new Buffer(1234);

console.log(buf.length);
buf.write("some string", 0, "ascii");
console.log(buf.length);

// 1234
// 1234

While the length property is not immutable, changing the value of length can result in undefined and
inconsistent behavior. Applications that wish to modify the length of a buffer should therefore treat
length as read-only and use buf.slice to create a new buffer.

buf = new Buffer(10);

buf.write("abcdefghj", 0, "ascii");
console.log(buf.length); // 10
buf = buf.slice(0,5);
console.log(buf.length); // 5

buf.write(string[, offset][, length][, encoding])

string String - data to be written to buffer
offset Number, Optional, Default: 0
length Number, Optional, Default: buffer.length - offset
encoding String, Optional, Default: 'utf8'

Writes string to the buffer at offset using the given encoding. offset defaults to 0, encoding
defaults to 'utf8'. length is the number of bytes to write. Returns number of octets written. If
buffer did not contain enough space to fit the entire string, it will write a partial amount of the string.
length defaults to buffer.length - offset. The method will not write partial characters.

buf = new Buffer(256);

len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(len + " bytes: " + buf.toString('utf8', 0, len));

buf.writeUIntLE(value, offset, byteLength[, noAssert])

buf.writeUIntBE(value, offset, byteLength[, noAssert])

buf.writeIntLE(value, offset, byteLength[, noAssert])

buf.writeIntBE(value, offset, byteLength[, noAssert])

value {Number} Bytes to be written to buffer
offset {Number} 0 <= offset <= buf.length
byteLength {Number} 0 < byteLength <= 6
noAssert {Boolean} Default: false
Return: {Number}

Writes value to the buffer at the specified offset and byteLength. Supports up to 48 bits of accuracy.
For example:

var b = new Buffer(6);
b.writeUIntBE(0x1234567890ab, 0, 6);
// <Buffer 12 34 56 78 90 ab>

Set noAssert to true to skip validation of value and offset . Defaults to false .

buf.readUIntLE(offset, byteLength[, noAssert])

buf.readUIntBE(offset, byteLength[, noAssert])

buf.readIntLE(offset, byteLength[, noAssert])

buf.readIntBE(offset, byteLength[, noAssert])

offset {Number} 0 <= offset <= buf.length
byteLength {Number} 0 < byteLength <= 6
noAssert {Boolean} Default: false
Return: {Number}

A generalized version of all numeric read methods. Supports up to 48 bits of accuracy. For example:

var b = new Buffer(6);

b.writeUInt16LE(0x90ab, 0);
b.writeUInt32LE(0x12345678, 2);
b.readUIntLE(0, 6).toString(16); // Specify 6 bytes (48 bits)
// output: '1234567890ab'

Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .

buf.toString([encoding][, start][, end])

encoding String, Optional, Default: 'utf8'
start Number, Optional, Default: 0
end Number, Optional, Default: buffer.length

Decodes and returns a string from buffer data encoded using the specified character set encoding. If
encoding is undefined or null, then encoding defaults to 'utf8'. The start and end parameters
default to 0 and buffer.length when undefined.


buf = new Buffer(26);
for (var i = 0 ; i < 26 ; i++) {
  buf[i] = i + 97; // 97 is ASCII a
}
buf.toString('ascii'); // outputs: abcdefghijklmnopqrstuvwxyz
buf.toString('ascii',0,5); // outputs: abcde
buf.toString('utf8',0,5); // outputs: abcde
buf.toString(undefined,0,5); // encoding defaults to 'utf8', outputs abcde

See buffer.write() example, above.

buf.toJSON()

Returns a JSON-representation of the Buffer instance. JSON.stringify implicitly calls this function
when stringifying a Buffer instance.

Example:

var buf = new Buffer('test');
var json = JSON.stringify(buf);

console.log(json);
// '{"type":"Buffer","data":[116,101,115,116]}'

var copy = JSON.parse(json, function(key, value) {
  return value && value.type === 'Buffer'
    ? new Buffer(value.data)
    : value;
});

console.log(copy);
// <Buffer 74 65 73 74>

buf[index]

Get and set the octet at index . The values refer to individual bytes, so the legal range is between
0x00 and 0xFF hex or 0 and 255 .

Example: copy an ASCII string into a buffer, one byte at a time:

str = "node.js";
buf = new Buffer(str.length);

for (var i = 0; i < str.length ; i++) {
  buf[i] = str.charCodeAt(i);
}

console.log(buf);

// node.js

buf.equals(otherBuffer)

otherBuffer {Buffer}

Returns a boolean of whether this and otherBuffer have the same bytes.

buf.compare(otherBuffer)

otherBuffer {Buffer}

Returns a number indicating whether this comes before or after or is the same as the otherBuffer in
sort order.
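
For example:

var a = new Buffer('abc');
var b = new Buffer('abd');

console.log(a.equals(b));  // false: byte contents differ
console.log(a.compare(b)); // negative: 'abc' sorts before 'abd'
console.log(a.compare(a)); // 0: identical contents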

buf.copy(targetBuffer[, targetStart][, sourceStart][, sourceEnd])

targetBuffer Buffer object - Buffer to copy into
targetStart Number, Optional, Default: 0
sourceStart Number, Optional, Default: 0
sourceEnd Number, Optional, Default: buffer.length

Copies data from a region of this buffer to a region in the target buffer even if the target memory
region overlaps with the source. If undefined the targetStart and sourceStart parameters default to
0 while sourceEnd defaults to buffer.length .
Example: build two Buffers, then copy buf1 from byte 16 through byte 19 into buf2 , starting at the
8th byte in buf2 .

buf1 = new Buffer(26);
buf2 = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 is ASCII a
  buf2[i] = 33; // ASCII !
}

buf1.copy(buf2, 8, 16, 20);

console.log(buf2.toString('ascii', 0, 25));

// !!!!!!!!qrst!!!!!!!!!!!!!

Example: Build a single buffer, then copy data from one region to an overlapping region in the same
buffer

buf = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf[i] = i + 97; // 97 is ASCII a
}

buf.copy(buf, 0, 4, 10);
console.log(buf.toString());

// efghijghijklmnopqrstuvwxyz

buf.slice([start][, end])

start Number, Optional, Default: 0
end Number, Optional, Default: buffer.length

Returns a new buffer which references the same memory as the old, but offset and cropped by the
start (defaults to 0) and end (defaults to buffer.length) indexes. Negative indexes start from the
end of the buffer.

Modifying the new buffer slice will modify memory in the original buffer!
Example: build a Buffer with the ASCII alphabet, take a slice, then modify one byte from the original
Buffer.

var buf1 = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 is ASCII a
}

var buf2 = buf1.slice(0, 3);

console.log(buf2.toString('ascii', 0, buf2.length));
buf1[0] = 33;
console.log(buf2.toString('ascii', 0, buf2.length));

// abc
// !bc

buf.readUInt8(offset[, noAssert])

offset Number
noAssert Boolean, Optional, Default: false
Return: Number

Reads an unsigned 8 bit integer from the buffer at the specified offset.

Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .

Example:
var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

for (var ii = 0; ii < buf.length; ii++) {
  console.log(buf.readUInt8(ii));
}

// 0x3
// 0x4
// 0x23
// 0x42

buf.readUInt16LE(offset[, noAssert])

buf.readUInt16BE(offset[, noAssert])

offset Number
noAssert Boolean, Optional, Default: false
Return: Number

Reads an unsigned 16 bit integer from the buffer at the specified offset with specified endian format.

Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .

Example:
var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

console.log(buf.readUInt16BE(0));
console.log(buf.readUInt16LE(0));
console.log(buf.readUInt16BE(1));
console.log(buf.readUInt16LE(1));
console.log(buf.readUInt16BE(2));
console.log(buf.readUInt16LE(2));

// 0x0304
// 0x0403
// 0x0423
// 0x2304
// 0x2342
// 0x4223

buf.readUInt32LE(offset[, noAssert])

buf.readUInt32BE(offset[, noAssert])

offset Number
noAssert Boolean, Optional, Default: false
Return: Number

Reads an unsigned 32 bit integer from the buffer at the specified offset with specified endian format.

Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .

Example:
var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

console.log(buf.readUInt32BE(0));
console.log(buf.readUInt32LE(0));

// 0x03042342
// 0x42230403

buf.readInt8(offset[, noAssert])

offset Number
noAssert Boolean, Optional, Default: false
Return: Number

Reads a signed 8 bit integer from the buffer at the specified offset.

Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .

Works as buffer.readUInt8 , except buffer contents are treated as two's complement signed values.

buf.readInt16LE(offset[, noAssert])

buf.readInt16BE(offset[, noAssert])

offset Number
noAssert Boolean, Optional, Default: false
Return: Number

Reads a signed 16 bit integer from the buffer at the specified offset with specified endian format.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .

Works as buffer.readUInt16* , except buffer contents are treated as two's complement signed values.

buf.readInt32LE(offset[, noAssert])

buf.readInt32BE(offset[, noAssert])

offset Number
noAssert Boolean, Optional, Default: false
Return: Number

Reads a signed 32 bit integer from the buffer at the specified offset with specified endian format.

Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .

Works as buffer.readUInt32* , except buffer contents are treated as two's complement signed values.

buf.readFloatLE(offset[, noAssert])

buf.readFloatBE(offset[, noAssert])

offset Number
noAssert Boolean, Optional, Default: false
Return: Number

Reads a 32 bit float from the buffer at the specified offset with specified endian format.

Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Example:

var buf = new Buffer(4);

buf[0] = 0x00;
buf[1] = 0x00;
buf[2] = 0x80;
buf[3] = 0x3f;

console.log(buf.readFloatLE(0));

// 0x01

buf.readDoubleLE(offset[, noAssert])

buf.readDoubleBE(offset[, noAssert])

offset Number
noAssert Boolean, Optional, Default: false
Return: Number

Reads a 64 bit double from the buffer at the specified offset with specified endian format.

Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .

Example:

var buf = new Buffer(8);

buf[0] = 0x55;
buf[1] = 0x55;
buf[2] = 0x55;
buf[3] = 0x55;
buf[4] = 0x55;
buf[5] = 0x55;
buf[6] = 0xd5;
buf[7] = 0x3f;

console.log(buf.readDoubleLE(0));

// 0.3333333333333333
buf.writeUInt8(value, offset[, noAssert])

value Number
offset Number
noAssert Boolean, Optional, Default: false

Writes value to the buffer at the specified offset. Note, value must be a valid unsigned 8 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

Example:

var buf = new Buffer(4);

buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);

console.log(buf);

// <Buffer 03 04 23 42>

buf.writeUInt16LE(value, offset[, noAssert])

buf.writeUInt16BE(value, offset[, noAssert])

value Number
offset Number
noAssert Boolean, Optional, Default: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid unsigned 16 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

Example:

var buf = new Buffer(4);

buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);

console.log(buf);

buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);

console.log(buf);

// <Buffer de ad be ef>
// <Buffer ad de ef be>

buf.writeUInt32LE(value, offset[, noAssert])

buf.writeUInt32BE(value, offset[, noAssert])

value Number
offset Number
noAssert Boolean, Optional, Default: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid unsigned 32 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

Example:
var buf = new Buffer(4);
buf.writeUInt32BE(0xfeedface, 0);

console.log(buf);

buf.writeUInt32LE(0xfeedface, 0);

console.log(buf);

// <Buffer fe ed fa ce>
// <Buffer ce fa ed fe>

buf.writeInt8(value, offset[, noAssert])

value Number
offset Number
noAssert Boolean, Optional, Default: false

Writes value to the buffer at the specified offset. Note, value must be a valid signed 8 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

Works as buffer.writeUInt8 , except value is written out as a two's complement signed integer into
buffer .

buf.writeInt16LE(value, offset[, noAssert])

buf.writeInt16BE(value, offset[, noAssert])

value Number
offset Number
noAssert Boolean, Optional, Default: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid signed 16 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

Works as buffer.writeUInt16* , except value is written out as a two's complement signed integer into
buffer .

buf.writeInt32LE(value, offset[, noAssert])

buf.writeInt32BE(value, offset[, noAssert])

value Number
offset Number
noAssert Boolean, Optional, Default: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid signed 32 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

Works as buffer.writeUInt32* , except value is written out as a two's complement signed integer into
buffer .

buf.writeFloatLE(value, offset[, noAssert])

buf.writeFloatBE(value, offset[, noAssert])

value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset with specified endian format. Note, behavior is
unspecified if value is not a 32 bit float.

Set noAssert to true to skip validation of value and offset. This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

Example:

var buf = new Buffer(4);

buf.writeFloatBE(0xcafebabe, 0);

console.log(buf);

buf.writeFloatLE(0xcafebabe, 0);

console.log(buf);

// <Buffer 4f 4a fe bb>
// <Buffer bb fe 4a 4f>

buf.writeDoubleLE(value, offset[, noAssert])

buf.writeDoubleBE(value, offset[, noAssert])

value Number
offset Number
noAssert Boolean, Optional, Default: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid 64 bit double.

Set noAssert to true to skip validation of value and offset. This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false.
Example:

var buf = new Buffer(8);

buf.writeDoubleBE(0xdeadbeefcafebabe, 0);

console.log(buf);

buf.writeDoubleLE(0xdeadbeefcafebabe, 0);

console.log(buf);

// <Buffer 43 eb d5 b7 dd f9 5f d7>
// <Buffer d7 5f f9 dd b7 d5 eb 43>

buf.fill(value[, offset][, end])

value
offset Number, Optional
end Number, Optional

Fills the buffer with the specified value. If the offset (defaults to 0) and end (defaults to
buffer.length) are not given it will fill the entire buffer.

var b = new Buffer(50);
b.fill("h");

buffer.INSPECT_MAX_BYTES
Number, Default: 50

How many bytes will be returned when buffer.inspect() is called. This can be overridden by user
modules.

Note that this is a property on the buffer module returned by require('buffer') , not on the Buffer
global, or a buffer instance.

Class: SlowBuffer
Returns an un-pooled Buffer .

In order to avoid the garbage collection overhead of creating many individually allocated Buffers, by
default allocations under 4KB are sliced from a single larger allocated object. This approach improves
both performance and memory usage since v8 does not need to track and cleanup as many Persistent
objects.

In the case where a developer may need to retain a small chunk of memory from a pool for an
indeterminate amount of time it may be appropriate to create an un-pooled Buffer instance using
SlowBuffer and copy out the relevant bits.

// need to keep around a few small chunks of memory
var store = [];

socket.on('readable', function() {
  var data = socket.read();
  // allocate for retained data
  var sb = new SlowBuffer(10);
  // copy the data into the new allocation
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});

Though this should be used sparingly and only as a last resort after a developer has actively observed
undue memory retention in their applications.

File System
Stability: 3 - Stable

File I/O is provided by simple wrappers around standard POSIX functions. To use this module do
require('fs') . All the methods have asynchronous and synchronous forms.

The asynchronous form always takes a completion callback as its last argument. The arguments passed
to the completion callback depend on the method, but the first argument is always reserved for an
exception. If the operation was completed successfully, then the first argument will be null or
undefined.

When using the synchronous form any exceptions are immediately thrown. You can use try/catch to
handle exceptions or allow them to bubble up.

Here is an example of the asynchronous version:

var fs = require('fs');

fs.unlink('/tmp/hello', function (err) {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});

Here is the synchronous version:

var fs = require('fs');

fs.unlinkSync('/tmp/hello');
console.log('successfully deleted /tmp/hello');

With the asynchronous methods there is no guaranteed ordering. So the following is prone to error:
fs.rename('/tmp/hello', '/tmp/world', function (err) {
if (err) throw err;
console.log('renamed complete');
});
fs.stat('/tmp/world', function (err, stats) {
if (err) throw err;
console.log('stats: ' + JSON.stringify(stats));
});

It could be that fs.stat is executed before fs.rename . The correct way to do this is to chain the
callbacks.

fs.rename('/tmp/hello', '/tmp/world', function (err) {
  if (err) throw err;
  fs.stat('/tmp/world', function (err, stats) {
    if (err) throw err;
    console.log('stats: ' + JSON.stringify(stats));
  });
});

In busy processes, the programmer is strongly encouraged to use the asynchronous versions of these
calls. The synchronous versions will block the entire process until they complete--halting all
connections.

Relative path to filename can be used. Remember, however, that this path will be relative to
process.cwd().

Most fs functions let you omit the callback argument. If you do, a default callback is used that
rethrows errors. To get a trace to the original call site, set the NODE_DEBUG environment variable:
$ cat script.js
function bad() {
require('fs').readFile('/');
}
bad();

$ env NODE_DEBUG=fs node script.js
fs.js:66
throw err;
^
Error: EISDIR, read
at rethrow (fs.js:61:21)
at maybeCallback (fs.js:79:42)
at Object.fs.readFile (fs.js:153:18)
at bad (/path/to/script.js:2:17)
at Object.<anonymous> (/path/to/script.js:5:1)
<etc.>

fs.rename(oldPath, newPath, callback)


Asynchronous rename(2). No arguments other than a possible exception are given to the completion
callback.

fs.renameSync(oldPath, newPath)
Synchronous rename(2). Returns undefined .

fs.ftruncate(fd, len, callback)


Asynchronous ftruncate(2). No arguments other than a possible exception are given to the completion
callback.

fs.ftruncateSync(fd, len)
Synchronous ftruncate(2). Returns undefined .

fs.truncate(path, len, callback)


Asynchronous truncate(2). No arguments other than a possible exception are given to the completion
callback. A file descriptor can also be passed as the first argument. In this case, fs.ftruncate() is
called.

fs.truncateSync(path, len)
Synchronous truncate(2). Returns undefined .

fs.chown(path, uid, gid, callback)


Asynchronous chown(2). No arguments other than a possible exception are given to the completion
callback.

fs.chownSync(path, uid, gid)


Synchronous chown(2). Returns undefined .

fs.fchown(fd, uid, gid, callback)


Asynchronous fchown(2). No arguments other than a possible exception are given to the completion
callback.

fs.fchownSync(fd, uid, gid)


Synchronous fchown(2). Returns undefined .

fs.lchown(path, uid, gid, callback)


Asynchronous lchown(2). No arguments other than a possible exception are given to the completion
callback.

fs.lchownSync(path, uid, gid)


Synchronous lchown(2). Returns undefined .

fs.chmod(path, mode, callback)


Asynchronous chmod(2). No arguments other than a possible exception are given to the completion
callback.

fs.chmodSync(path, mode)
Synchronous chmod(2). Returns undefined .

fs.fchmod(fd, mode, callback)


Asynchronous fchmod(2). No arguments other than a possible exception are given to the completion
callback.

fs.fchmodSync(fd, mode)
Synchronous fchmod(2). Returns undefined .

fs.lchmod(path, mode, callback)


Asynchronous lchmod(2). No arguments other than a possible exception are given to the completion
callback.

Only available on Mac OS X.

fs.lchmodSync(path, mode)
Synchronous lchmod(2). Returns undefined .

fs.stat(path, callback)
Asynchronous stat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object.
See the fs.Stats section below for more information.

fs.lstat(path, callback)
Asynchronous lstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats
object. lstat() is identical to stat(), except that if path is a symbolic link, then the link itself is stat-
ed, not the file that it refers to.

fs.fstat(fd, callback)
Asynchronous fstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats
object. fstat() is identical to stat(), except that the file to be stat-ed is specified by the file
descriptor fd.

fs.statSync(path)
Synchronous stat(2). Returns an instance of fs.Stats .

fs.lstatSync(path)
Synchronous lstat(2). Returns an instance of fs.Stats .

fs.fstatSync(fd)
Synchronous fstat(2). Returns an instance of fs.Stats .

fs.link(srcpath, dstpath, callback)


Asynchronous link(2). No arguments other than a possible exception are given to the completion
callback.
fs.linkSync(srcpath, dstpath)
Synchronous link(2). Returns undefined .

fs.symlink(srcpath, dstpath[, type], callback)


Asynchronous symlink(2). No arguments other than a possible exception are given to the completion
callback. The type argument can be set to 'dir' , 'file' , or 'junction' (default is 'file' ) and is
only available on Windows (ignored on other platforms). Note that Windows junction points require
the destination path to be absolute. When using 'junction' , the destination argument will
automatically be normalized to absolute path.

fs.symlinkSync(srcpath, dstpath[, type])


Synchronous symlink(2). Returns undefined .

fs.readlink(path, callback)
Asynchronous readlink(2). The callback gets two arguments (err, linkString).

fs.readlinkSync(path)
Synchronous readlink(2). Returns the symbolic link's string value.

fs.realpath(path[, cache], callback)


Asynchronous realpath(2). The callback gets two arguments (err, resolvedPath). May use
process.cwd to resolve relative paths. cache is an object literal of mapped paths that can be used to
force a specific path resolution or avoid additional fs.stat calls for known real paths.
Example:

var cache = {'/etc':'/private/etc'};
fs.realpath('/etc/passwd', cache, function (err, resolvedPath) {
  if (err) throw err;
  console.log(resolvedPath);
});

fs.realpathSync(path[, cache])
Synchronous realpath(2). Returns the resolved path.

fs.unlink(path, callback)
Asynchronous unlink(2). No arguments other than a possible exception are given to the completion
callback.

fs.unlinkSync(path)
Synchronous unlink(2). Returns undefined .

fs.rmdir(path, callback)
Asynchronous rmdir(2). No arguments other than a possible exception are given to the completion
callback.

fs.rmdirSync(path)
Synchronous rmdir(2). Returns undefined .

fs.mkdir(path[, mode], callback)


Asynchronous mkdir(2). No arguments other than a possible exception are given to the completion
callback. mode defaults to 0777 .
fs.mkdirSync(path[, mode])
Synchronous mkdir(2). Returns undefined .

fs.readdir(path, callback)
Asynchronous readdir(3). Reads the contents of a directory. The callback gets two arguments
(err, files) where files is an array of the names of the files in the directory excluding '.' and
'..'.

fs.readdirSync(path)
Synchronous readdir(3). Returns an array of filenames excluding '.' and '..'.

fs.close(fd, callback)
Asynchronous close(2). No arguments other than a possible exception are given to the completion
callback.

fs.closeSync(fd)
Synchronous close(2). Returns undefined .

fs.open(path, flags[, mode], callback)

Asynchronous file open. See open(2). flags can be:

'r' - Open file for reading. An exception occurs if the file does not exist.
'r+' - Open file for reading and writing. An exception occurs if the file does not exist.
'rs' - Open file for reading in synchronous mode. Instructs the operating system to bypass the
local file system cache.
This is primarily useful for opening files on NFS mounts as it allows you to skip the potentially stale
local cache. It has a very real impact on I/O performance so don't use this flag unless you need it.

Note that this doesn't turn fs.open() into a synchronous blocking call. If that's what you want then
you should be using fs.openSync()

'rs+' - Open file for reading and writing, telling the OS to open it synchronously. See notes for
'rs' about using this with caution.
'w' - Open file for writing. The file is created (if it does not exist) or truncated (if it exists).
'wx' - Like 'w' but fails if path exists.
'w+' - Open file for reading and writing. The file is created (if it does not exist) or truncated (if it
exists).
'wx+' - Like 'w+' but fails if path exists.
'a' - Open file for appending. The file is created if it does not exist.
'ax' - Like 'a' but fails if path exists.
'a+' - Open file for reading and appending. The file is created if it does not exist.
'ax+' - Like 'a+' but fails if path exists.

mode sets the file mode (permission and sticky bits), but only if the file was created. It defaults to
0666, readable and writeable.

The callback gets two arguments (err, fd).

The exclusive flag 'x' (O_EXCL flag in open(2)) ensures that path is newly created. On POSIX
systems, path is considered to exist even if it is a symlink to a non-existent file. The exclusive flag may
or may not work with network file systems.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the
position argument and always appends the data to the end of the file.
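
As a brief sketch (the path is illustrative), opening a file exclusively for writing with 'wx' and then writing to the returned descriptor:

var fs = require('fs');

// 'wx' fails with EEXIST instead of truncating an existing file.
fs.open('/tmp/example.lock', 'wx', function (err, fd) {
  if (err) throw err; // e.g. EEXIST if the file is already there
  var buf = new Buffer('locked\n');
  fs.write(fd, buf, 0, buf.length, null, function (err) {
    if (err) throw err;
    fs.close(fd, function (err) {
      if (err) throw err;
    });
  });
});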
fs.openSync(path, flags[, mode])
Synchronous version of fs.open(). Returns an integer representing the file descriptor.

fs.utimes(path, atime, mtime, callback)

Change file timestamps of the file referenced by the supplied path.
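
For example (the path is illustrative; atime and mtime may also be given as Date objects):

var fs = require('fs');

// Set both the access and modification times to "now".
var now = new Date();
fs.utimes('/tmp/example.txt', now, now, function (err) {
  if (err) throw err;
});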

fs.utimesSync(path, atime, mtime)


Synchronous version of fs.utimes() . Returns undefined .

fs.futimes(fd, atime, mtime, callback)


Change the file timestamps of a file referenced by the supplied file descriptor.

fs.futimesSync(fd, atime, mtime)


Synchronous version of fs.futimes() . Returns undefined .

fs.fsync(fd, callback)
Asynchronous fsync(2). No arguments other than a possible exception are given to the completion
callback.

fs.fsyncSync(fd)
Synchronous fsync(2). Returns undefined .

fs.write(fd, buffer, offset, length[, position], callback)


Write buffer to the file specified by fd.

offset and length determine the part of the buffer to be written.

position refers to the offset from the beginning of the file where this data should be written. If
typeof position !== 'number', the data will be written at the current position. See pwrite(2).

The callback will be given three arguments (err, written, buffer) where written specifies how many
bytes were written from buffer.

Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback.
For this scenario, fs.createWriteStream is strongly recommended.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the
position argument and always appends the data to the end of the file.

fs.write(fd, data[, position[, encoding]], callback)


Write data to the file specified by fd. If data is not a Buffer instance then the value will be coerced
to a string.

position refers to the offset from the beginning of the file where this data should be written. If
typeof position !== 'number' the data will be written at the current position. See pwrite(2).

encoding is the expected string encoding.

The callback will receive the arguments (err, written, string) where written specifies how many
bytes the passed string required to be written. Note that bytes written is not the same as string
characters. See Buffer.byteLength.

Unlike when writing buffer, the entire string must be written. No substring may be specified. This is
because the byte offset of the resulting data may not be the same as the string offset.

Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback.
For this scenario, fs.createWriteStream is strongly recommended.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the
position argument and always appends the data to the end of the file.

fs.writeSync(fd, buffer, offset, length[, position])

fs.writeSync(fd, data[, position[, encoding]])


Synchronous versions of fs.write() . Returns the number of bytes written.

fs.read(fd, buffer, offset, length, position, callback)


Read data from the file specified by fd.

buffer is the buffer that the data will be written to.

offset is the offset in the buffer to start writing at.

length is an integer specifying the number of bytes to read.

position is an integer specifying where to begin reading from in the file. If position is null, data will
be read from the current file position.

The callback is given the three arguments, (err, bytesRead, buffer) .

fs.readSync(fd, buffer, offset, length, position)


Synchronous version of fs.read . Returns the number of bytesRead .
fs.readFile(filename[, options], callback)

filename {String}
options {Object}
  encoding {String | Null} default = null
  flag {String} default = 'r'
callback {Function}

Asynchronously reads the entire contents of a file. Example:

fs.readFile('/etc/passwd', function (err, data) {
  if (err) throw err;
  console.log(data);
});

The callback is passed two arguments (err, data), where data is the contents of the file.

If no encoding is specified, then the raw buffer is returned.

fs.readFileSync(filename[, options])

Synchronous version of fs.readFile. Returns the contents of the filename.

If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.

fs.writeFile(filename, data[, options], callback)

filename {String}
data {String | Buffer}
options {Object}
  encoding {String | Null} default = 'utf8'
  mode {Number} default = 438 (aka 0666 in Octal)
  flag {String} default = 'w'
callback {Function}

Asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a
buffer.

The encoding option is ignored if data is a buffer. It defaults to 'utf8'.

Example:

fs.writeFile('message.txt', 'Hello Node', function (err) {
  if (err) throw err;
  console.log('It\'s saved!');
});

fs.writeFileSync(filename, data[, options])

The synchronous version of fs.writeFile. Returns undefined.

fs.appendFile(filename, data[, options], callback)

filename {String}
data {String | Buffer}
options {Object}
encoding {String | Null} default = 'utf8'
mode {Number} default = 438 (aka 0666 in Octal)
flag {String} default = 'a'
callback {Function}

Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a
buffer.

Example:

fs.appendFile('message.txt', 'data to append', function (err) {
  if (err) throw err;
  console.log('The "data to append" was appended to file!');
});

fs.appendFileSync(filename, data[, options])

The synchronous version of fs.appendFile . Returns undefined .

fs.watchFile(filename[, options], listener)

Stability: 2 - Unstable. Use fs.watch instead, if possible.

Watch for changes on filename . The callback listener will be called each time the file is accessed.

The second argument is optional. The options , if provided, should be an object containing two
members: a boolean persistent , and interval . persistent indicates whether the process should
continue to run as long as files are being watched. interval indicates how often the target should be
polled, in milliseconds. The default is { persistent: true, interval: 5007 } .

The listener gets two arguments: the current stat object and the previous stat object:

fs.watchFile('message.text', function (curr, prev) {
  console.log('the current mtime is: ' + curr.mtime);
  console.log('the previous mtime was: ' + prev.mtime);
});

These stat objects are instances of fs.Stats .

If you want to be notified when the file was modified, not just accessed, you need to compare
curr.mtime and prev.mtime .

fs.unwatchFile(filename[, listener])

Stability: 2 - Unstable. Use fs.watch instead, if possible.

Stop watching for changes on filename . If listener is specified, only that particular listener is
removed. Otherwise, all listeners are removed and you have effectively stopped watching filename .

Calling fs.unwatchFile() with a filename that is not being watched is a no-op, not an error.
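A minimal sketch of removing a single listener versus all listeners (the file name is illustrative):

var fs = require('fs');

function onChange(curr, prev) { console.log('mtime: ' + curr.mtime); }

fs.watchFile('message.txt', onChange);

fs.unwatchFile('message.txt', onChange); // removes only this listener
fs.unwatchFile('message.txt');           // removes any remaining listeners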

fs.watch(filename[, options][, listener])

Stability: 2 - Unstable.

Watch for changes on filename , where filename is either a file or a directory. The returned object is a
fs.FSWatcher.

The second argument is optional. The options , if provided, should be an object. The supported boolean
members are persistent and recursive . persistent indicates whether the process should continue to
run as long as files are being watched. recursive indicates whether all subdirectories should be
watched, or only the current directory. This applies when a directory is specified, and only on
supported platforms (see Caveats below).

The default is { persistent: true, recursive: false } .

The listener callback gets two arguments (event, filename) . event is either 'rename' or 'change', and
filename is the name of the file which triggered the event.

Caveats

The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.

The recursive option is currently supported on OS X. Only FSEvents supports this type of file watching,
so it is unlikely any additional platforms will be added soon.

Availability

This feature depends on the underlying operating system providing a way to be notified of filesystem
changes.

On Linux systems, this uses inotify .
On BSD systems, this uses kqueue .
On OS X, this uses kqueue for files and 'FSEvents' for directories.
On SunOS systems (including Solaris and SmartOS), this uses event ports .
On Windows systems, this feature depends on ReadDirectoryChangesW .

If the underlying functionality is not available for some reason, then fs.watch will not be able to
function. For example, watching files or directories on network file systems (NFS, SMB, etc.) often
doesn't work reliably or at all.

You can still use fs.watchFile , which uses stat polling, but it is slower and less reliable.

Filename Argument

Providing the filename argument in the callback is not supported on every platform (currently it's only
supported on Linux and Windows). Even on supported platforms filename is not always guaranteed to
be provided. Therefore, don't assume that the filename argument is always provided in the callback, and
have some fallback logic if it is null.

fs.watch('somedir', function (event, filename) {
  console.log('event is: ' + event);
  if (filename) {
    console.log('filename provided: ' + filename);
  } else {
    console.log('filename not provided');
  }
});

fs.exists(path, callback)

Test whether or not the given path exists by checking with the file system. Then call the callback
argument with either true or false. Example:

fs.exists('/etc/passwd', function (exists) {
  util.debug(exists ? "it's there" : "no passwd!");
});

fs.exists() is an anachronism and exists only for historical reasons. There should almost never be a
reason to use it in your own code.

In particular, checking if a file exists before opening it is an anti-pattern that leaves you vulnerable to
race conditions: another process may remove the file between the calls to fs.exists() and fs.open() .
Just open the file and handle the error when it's not there.
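A minimal sketch of that pattern, checking the error code instead of calling fs.exists() first (the path is illustrative):

var fs = require('fs');

fs.open('/etc/passwd', 'r', function (err, fd) {
  if (err) {
    if (err.code === 'ENOENT') return console.log('no passwd!');
    throw err;
  }
  // the file exists and is open; use fd, then close it
  fs.close(fd, function (err) { if (err) throw err; });
});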

fs.exists() will be deprecated.

fs.existsSync(path)
Synchronous version of fs.exists() . Returns true if the file exists, false otherwise.

fs.existsSync() will be deprecated.

fs.access(path[, mode], callback)

Tests a user's permissions for the file specified by path . mode is an optional integer that specifies the
accessibility checks to be performed. The following constants define the possible values of mode . It is
possible to create a mask consisting of the bitwise OR of two or more values.

fs.F_OK - File is visible to the calling process. This is useful for determining if a file exists, but says nothing
about rwx permissions. Default if no mode is specified.
fs.R_OK - File can be read by the calling process.
fs.W_OK - File can be written by the calling process.
fs.X_OK - File can be executed by the calling process. This has no effect on Windows (will behave like
fs.F_OK ).

The final argument, callback , is a callback function that is invoked with a possible error argument. If
any of the accessibility checks fail, the error argument will be populated. The following example
checks if the file /etc/passwd can be read and written by the current process.

fs.access('/etc/passwd', fs.R_OK | fs.W_OK, function(err) {
  util.debug(err ? 'no access!' : 'can read/write');
});

fs.accessSync(path[, mode])
Synchronous version of fs.access . This throws if any accessibility checks fail, and does nothing
otherwise.
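A minimal sketch of the try/catch usage this implies (the path is illustrative):

var fs = require('fs');

try {
  fs.accessSync('/etc/passwd', fs.R_OK);
  console.log('can read');
} catch (err) {
  console.log('no access!');
}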

Class: fs.Stats
Objects returned from fs.stat() , fs.lstat() and fs.fstat() and their synchronous counterparts are
of this type.

stats.isFile()

stats.isDirectory()

stats.isBlockDevice()

stats.isCharacterDevice()

stats.isSymbolicLink() (only valid with fs.lstat() )


stats.isFIFO()

stats.isSocket()

For a regular file, util.inspect(stats) would return a string very similar to this:


{ dev: 2114,
ino: 48064969,
mode: 33188,
nlink: 1,
uid: 85,
gid: 100,
rdev: 0,
size: 527,
blksize: 4096,
blocks: 8,
atime: Mon, 10 Oct 2011 23:24:11 GMT,
mtime: Mon, 10 Oct 2011 23:24:11 GMT,
ctime: Mon, 10 Oct 2011 23:24:11 GMT,
birthtime: Mon, 10 Oct 2011 23:24:11 GMT }

Please note that atime , mtime , birthtime , and ctime are instances of the Date object, and to compare the
values of these objects you should use appropriate methods. For most general uses, getTime() will
return the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC, and this integer should
be sufficient for any comparison; however, there are additional methods which can be used for
displaying fuzzy information. More details can be found in the MDN JavaScript Reference page.
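A minimal sketch of such a comparison (the path is illustrative):

var fs = require('fs');

var stats = fs.statSync('/etc/passwd');
// compare the Date instances via their millisecond timestamps
if (stats.mtime.getTime() > stats.atime.getTime()) {
  console.log('modified after it was last accessed');
}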

Stat Time Values

The times in the stat object have the following semantics:

atime "Access Time" - Time when file data last accessed. Changed by the mknod(2) , utimes(2) , and read(2)
system calls.
mtime "Modified Time" - Time when file data last modified. Changed by the mknod(2) , utimes(2) , and
write(2) system calls.
ctime "Change Time" - Time when file status was last changed (inode data modification). Changed by the
chmod(2) , chown(2) , link(2) , mknod(2) , rename(2) , unlink(2) , utimes(2) , read(2) , and write(2) system
calls.
birthtime "Birth Time" - Time of file creation. Set once when the file is created. On filesystems where
birthtime is not available, this field may instead hold either the ctime or 1970-01-01T00:00Z (ie, unix epoch
timestamp 0 ). On Darwin and other FreeBSD variants, also set if the atime is explicitly set to an earlier
value than the current birthtime using the utimes(2) system call.

Prior to Node v0.12, the ctime held the birthtime on Windows systems. Note that as of v0.12, ctime
is not "creation time", and on Unix systems, it never was.

fs.createReadStream(path[, options])

Returns a new ReadStream object (see Readable Stream).

Be aware that, unlike the default value set for highWaterMark on a readable stream (16kB), the stream
returned by this method has a default value of 64kB for the same parameter.

options is an object with the following defaults:

{ flags: 'r',
  encoding: null,
  fd: null,
  mode: 0666,
  autoClose: true
}

options can include start and end values to read a range of bytes from the file instead of the entire
file. Both start and end are inclusive and start at 0. The encoding can be 'utf8' , 'ascii' , or
'base64' .

If fd is specified, ReadStream will ignore the path argument and will use the specified file descriptor.
This means that no open event will be emitted.

If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is your
responsibility to close it and make sure there's no file descriptor leak. If autoClose is set to true
(default behavior), on error or end the file descriptor will be closed automatically.

mode sets the file mode (permission and sticky bits), but only if the file was created.

An example to read the last 10 bytes of a file which is 100 bytes long:

fs.createReadStream('sample.txt', {start: 90, end: 99});

Class: fs.ReadStream
ReadStream is a Readable Stream.

Event: 'open'

fd {Integer} file descriptor used by the ReadStream.

Emitted when the ReadStream's file is opened.

fs.createWriteStream(path[, options])

Returns a new WriteStream object (see Writable Stream).

options is an object with the following defaults:

{ flags: 'w',
  defaultEncoding: 'utf8',
  fd: null,
  mode: 0666 }

options may also include a start option to allow writing data at some position past the beginning of
the file. Modifying a file rather than replacing it may require a flags mode of r+ rather than the
default mode w .

Like ReadStream above, if fd is specified, WriteStream will ignore the path argument and will use the
specified file descriptor. This means that no open event will be emitted.
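A minimal sketch of patching bytes inside an existing file using start with flags: 'r+' (the file name and offset are illustrative):

var fs = require('fs');

// overwrite 7 bytes at byte offset 100, leaving the rest of the file intact
var ws = fs.createWriteStream('data.bin', { flags: 'r+', start: 100 });
ws.end(new Buffer('patched'));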

Class: fs.WriteStream

WriteStream is a Writable Stream.

Event: 'open'

fd {Integer} file descriptor used by the WriteStream.

Emitted when the WriteStream's file is opened.

file.bytesWritten

The number of bytes written so far. Does not include data that is still queued for writing.

Class: fs.FSWatcher
Objects returned from fs.watch() are of this type.

watcher.close()

Stop watching for changes on the given fs.FSWatcher .
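A minimal sketch: stop watching after the first event (the directory name is illustrative):

var fs = require('fs');

var watcher = fs.watch('somedir', function (event, filename) {
  console.log('saw ' + event + ', closing watcher');
  watcher.close(); // no further events will be delivered
});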

Event: 'change'

event {String} The type of fs change


filename {String} The filename that changed (if relevant/available)

Emitted when something changes in a watched directory or le. See more details in fs.watch.

Event: 'error'

error {Error object}

Emitted when an error occurs.



Path
Stability: 3 - Stable

This module contains utilities for handling and transforming file paths. Almost all these methods
perform only string transformations. The file system is not consulted to check whether paths are valid.

Use require('path') to use this module. The following methods are provided:

path.normalize(p)
Normalize a string path, taking care of '..' and '.' parts.

When multiple slashes are found, they're replaced by a single one; when the path contains a trailing
slash, it is preserved. On Windows backslashes are used.

Example:

path.normalize('/foo/bar//baz/asdf/quux/..')
// returns
'/foo/bar/baz/asdf'

path.join([path1][, path2][, ...])

Join all arguments together and normalize the resulting path.

Arguments must be strings. In v0.8, non-string arguments were silently ignored. In v0.10 and up, an
exception is thrown.

Example:

path.join('/foo', 'bar', 'baz/asdf', 'quux', '..')
// returns
'/foo/bar/baz/asdf'

path.join('foo', {}, 'bar')
// throws exception
TypeError: Arguments to path.join must be strings

path.resolve([from ...], to)

Resolves to to an absolute path.

If to isn't already absolute, from arguments are prepended in right-to-left order, until an absolute
path is found. If after using all from paths still no absolute path is found, the current working directory
is used as well. The resulting path is normalized, and trailing slashes are removed unless the path gets
resolved to the root directory. Non-string from arguments are ignored.

Another way to think of it is as a sequence of cd commands in a shell.

path.resolve('foo/bar', '/tmp/file/', '..', 'a/../subfile')

Is similar to:

cd foo/bar
cd /tmp/file/
cd ..
cd a/../subfile
pwd

The difference is that the different paths don't need to exist and may also be files.

Examples:
path.resolve('/foo/bar', './baz')
// returns
'/foo/bar/baz'

path.resolve('/foo/bar', '/tmp/file/')
// returns
'/tmp/file'

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif')
// if currently in /home/myself/node, it returns
'/home/myself/node/wwwroot/static_files/gif/image.gif'

path.isAbsolute(path)
Determines whether path is an absolute path. An absolute path will always resolve to the same
location, regardless of the working directory.

Posix examples:

path.isAbsolute('/foo/bar') // true
path.isAbsolute('/baz/..') // true
path.isAbsolute('qux/') // false
path.isAbsolute('.') // false

Windows examples:

path.isAbsolute('//server') // true
path.isAbsolute('C:/foo/..') // true
path.isAbsolute('bar\\baz') // false
path.isAbsolute('.') // false

path.relative(from, to)
Solve the relative path from from to to .

At times we have two absolute paths, and we need to derive the relative path from one to the other.
This is actually the reverse transform of path.resolve , which means we see that:

path.resolve(from, path.relative(from, to)) == path.resolve(to)


Examples:

path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb')
// returns
'..\\..\\impl\\bbb'

path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb')
// returns
'../../impl/bbb'

path.dirname(p)
Return the directory name of a path. Similar to the Unix dirname command.

Example:

path.dirname('/foo/bar/baz/asdf/quux')
// returns
'/foo/bar/baz/asdf'

path.basename(p[, ext])
Return the last portion of a path. Similar to the Unix basename command.

Example:

path.basename('/foo/bar/baz/asdf/quux.html')
// returns
'quux.html'

path.basename('/foo/bar/baz/asdf/quux.html', '.html')
// returns
'quux'

path.extname(p)
Return the extension of the path, from the last '.' to end of string in the last portion of the path. If there
is no '.' in the last portion of the path or the first character of it is '.', then it returns an empty string.
Examples:
path.extname('index.html')
// returns
'.html'

path.extname('index.coffee.md')
// returns
'.md'

path.extname('index.')
// returns
'.'

path.extname('index')
// returns
''

path.sep
The platform-specific file separator, '\\' or '/' .

An example on *nix:

'foo/bar/baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']

An example on Windows:

'foo\\bar\\baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']

path.delimiter
The platform-specific path delimiter, ';' or ':' .

An example on *nix:
console.log(process.env.PATH)
// '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'

process.env.PATH.split(path.delimiter)
// returns
['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']

An example on Windows:

console.log(process.env.PATH)
// 'C:\Windows\system32;C:\Windows;C:\Program Files\nodejs\'

process.env.PATH.split(path.delimiter)
// returns
['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\nodejs\\']

path.parse(pathString)
Returns an object from a path string.

An example on *nix:

path.parse('/home/user/dir/file.txt')
// returns
{
root : "/",
dir : "/home/user/dir",
base : "file.txt",
ext : ".txt",
name : "file"
}

An example on Windows:

path.parse('C:\\path\\dir\\index.html')
// returns
{
root : "C:\\",
dir : "C:\\path\\dir",
base : "index.html",
ext : ".html",
name : "index"
}

path.format(pathObject)
Returns a path string from an object, the opposite of path.parse above.

path.format({
root : "/",
dir : "/home/user/dir",
base : "file.txt",
ext : ".txt",
name : "file"
})
// returns
'/home/user/dir/file.txt'

path.posix
Provides access to the aforementioned path methods, but always interacts in a posix-compatible way.

path.win32
Provides access to the aforementioned path methods, but always interacts in a win32-compatible way.
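A minimal sketch of forcing one flavour regardless of the current platform:

var path = require('path');

path.posix.join('foo', 'bar');  // 'foo/bar' on every platform
path.win32.join('foo', 'bar');  // 'foo\\bar' on every platform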
file-uri-to-path
Convert a file: URI to a file path

Accepts a file: URI and returns a regular file path suitable for use with the fs module functions.

Installation
Install with npm :

$ npm install file-uri-to-path

Example

var uri2path = require('file-uri-to-path');

uri2path('file://localhost/c|/WINDOWS/clock.avi');
// "c:\\WINDOWS\\clock.avi"

uri2path('file:///c|/WINDOWS/clock.avi');
// "c:\\WINDOWS\\clock.avi"
uri2path('file://localhost/c:/WINDOWS/clock.avi');
// "c:\\WINDOWS\\clock.avi"

uri2path('file://hostname/path/to/the%20file.txt');
// "\\\\hostname\\path\\to\\the file.txt"

uri2path('file:///c:/path/to/the%20file.txt');
// "c:\\path\\to\\the file.txt"

API
fileUriToPath(String uri) → String

License
(The MIT License)

Copyright (c) 2014 Nathan Rajlich <[email protected]>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation
files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.



branneman / better-nodejs-require-paths.md

Better local require() paths for Node.js

Problem
When the directory structure of your Node.js application (not library!) has some depth, you end up with a lot of annoying
relative paths in your require calls like:

const Article = require('../../../../app/models/article');

Those suck for maintenance and they're ugly.

Possible solutions
Ideally, I'd like to have the same basepath from which I require() all my modules. Like any other language environment out
there. I'd like the require() calls to be first-and-foremost relative to my application entry point file, in my case app.js .

There are only solutions here that work cross-platform, because 42% of Node.js users use Windows as their desktop
environment (source).

0. The Alias
1. Install the module-alias package:

npm i --save module-alias

2. Add paths to your package.json like this:

{
"_moduleAliases": {
"@lib": "app/lib",
"@models": "app/models"
}
}

3. In your entry-point file, before any require() calls:

require('module-alias/register')

4. You can now require files like this:

const Article = require('@models/article');

1. The Container
1. Learn all about Dependency Injection and Inversion of Control containers. Example implementation using Electrolyte
here: github/branneman/nodejs-app-boilerplate

2. Create an entry-point file like this:

const IoC = require('electrolyte');


IoC.use(IoC.dir('app'));
IoC.use(IoC.node_modules());
IoC.create('server').then(app => app());

3. You can now define your modules like this:

module.exports = factory;
module.exports['@require'] = [
'lib/read',
'lib/render-view'
];
function factory(read, render) { /* ... */ }

More detailed example module: app/areas/homepage/index.js

2. The Symlink
Stolen from: focusaurus / express_code_structure # the-app-symlink-trick

1. Create a symlink under node_modules to your app directory:


Linux: ln -nsf ../app node_modules/app
Windows: mklink /D node_modules\app ..\app

2. Now you can require local modules like this from anywhere:

const Article = require('models/article');


Note: you can not have a symlink like this inside a Git repo, since Git does not handle symlinks cross-platform. If you can live
with a post-clone git-hook and/or the instruction for the next developer to create a symlink, then sure.

Alternatively, you can create the symlink on the npm postinstall hook, as described by scharf in this awesome comment.
Put this inside your package.json :

"scripts": {
"postinstall" : "node -e \"var s='../src',d='node_modules/src',fs=require('fs');fs.exists(d,function(e){e||fs.sy
}

3. The Global
1. In your entry-point file, before any require() calls:

global.__base = __dirname + '/';

2. In your very/far/away/module.js:

const Article = require(`${__base}app/models/article`);

4. The Module
1. Install some module:

npm install app-module-path --save

2. In your entry-point file, before any require() calls:


require('app-module-path').addPath(`${__dirname}/app`);

3. In your very/far/away/module.js:

const Article = require('models/article');

Naturally, there are a ton of unmaintained 1-star modules available on npm: 0, 1, 2, 3, 4, 5

5. The Environment
Set the NODE_PATH environment variable to the absolute path of your application, ending with the directory you want your
modules relative to (in my case . ).

There are 2 ways of achieving the following require() statement from anywhere in your application:

const Article = require('app/models/article');

5.1. Up-front

Before running your node app , first run:

Linux: export NODE_PATH=.


Windows: set NODE_PATH=.

Setting a variable like this with export or set will remain in your environment as long as your current shell is open. To have
it globally available in any shell, set it in your userprofile and reload your environment.

5.2. Only while executing node

This solution will not affect your environment other than what node perceives. It does change your application start
command.

Start your application like this from now on:

Linux: NODE_PATH=. node app
Windows: cmd.exe /C "set NODE_PATH=.&& node app"

(On Windows this command will not work if you put a space in between the path and the && . Crazy shit.)

6. The Start-up Script

Effectively, this solution also uses the environment (as in 5.2), it just abstracts it away.

With one of these solutions (6.1 & 6.2) you can start your application like this from now on:

Linux: ./app (also for Windows PowerShell)
Windows: app

An advantage of this solution is that if you want to force your node app to always be started with v8 parameters like
--harmony or --use_strict , you can easily add them in the start-up script as well.

6.1. Node.js

Example implementation: https://gist.github.com/branneman/8775568

6.2. OS-specific start-up scripts

Linux, create app.sh in your project root:

#!/bin/sh
NODE_PATH=. node app.js

Windows, create app.bat in your project root:

@echo off
cmd.exe /C "set NODE_PATH=.&& node app.js"
7. The Hack
Courtesy of @joelabair. Effectively also the same as 5.2, but without the need to specify the NODE_PATH outside your
application, making it more foolproof. However, since this relies on a private Node.js core method, this is also a hack that
might stop working on the previous or next version of node.

In your app.js , before any require() calls:

process.env.NODE_PATH = __dirname;
require('module').Module._initPaths();

8. The Wrapper
Courtesy of @a-ignatov-parc. Another simple solution which increases obviousness: simply wrap the require() function
with one that is relative to the path of the application's entry-point file.

Place this code in your app.js , again before any require() calls:

global.rootRequire = name => require(`${__dirname}/${name}`);

You can then require modules like this:

const Article = rootRequire('app/models/article');

Another option is to always use the initial require() function, basically the same trick without a wrapper. Node.js creates a
new scoped require() function for every new module, but there's always a reference to the initial global one. Unlike most
other solutions this is actually a documented feature. It can be used like this:

const Article = require.main.require('app/models/article');

Since Node.js v10.12.0 there's a module.createRequireFromPath() function available in the standard library:

const { createRequireFromPath } = require('module')

const requireUtil = createRequireFromPath('../src/utils')

requireUtil('./some-tool')

Conclusion
0. The Alias
Great solution, and a well maintained and popular package on npm. The @ -syntax also looks like something special is going
on, which will tip off the next developer to what's going on. You might need extra steps for this solution to work with linting and
unit testing though.

1. The Container
If you're building a slightly bigger application, using an IoC container is a great way to apply DI. I would only advise this for
apps relying heavily on object-oriented design principles and design patterns.

2. The Symlink
If you're using CVS or SVN (but not Git!), this solution is a great one that works; otherwise I don't recommend it to
anyone. You're going to have OS differences one way or another.

3. The Global
You're effectively swapping ../../../ for __base + , which is only slightly better if you ask me. However, it's very obvious to
the next developer what exactly is happening. That's a big plus compared to the other magical solutions around here.

4. The Module
Great and simple solution. Does not touch other require calls to node_modules .

5. The Environment
Setting application-specific settings as environment variables globally or in your current shell is an anti-pattern if you ask me.
E.g. it's not very handy for development machines which need to run multiple applications.
If you're adding it only for the currently executing program, you're going to have to specify it each time you run your app.
Your start-app command is not easy anymore, which also sucks.

6. The Start-up Script


You're simplifying the command to start your app (always simply node app ), and it gives you a nice spot to put your
mandatory v8 parameters! A small disadvantage might be that you need to create a separate start-up script for your unit
tests as well.

7. The Hack
Most simple solution of all. Use at your own risk.

8. The Wrapper
Great and non-hacky solution. Very obvious what it does, especially if you pick the require.main.require() one.

isaacs commented Dec 20, 2013

Just set up your stuff as modules, and put them in node_modules folder, and then they're top-level things. Problem solved.

tj commented Dec 20, 2013

solution we often use:

a single path (usually ./lib) exposed via NODE_PATH


shallow nesting (if ever)

lets you drop in node modules if you need to "fork" them and don't yet have a private registry. Lots of nesting in an app ends up sucking more
often than not, and I'd argue that ../ in any module is usually an anti-pattern, maybe other than var pkg = require('../package') for bin .version
etc
branneman commented Dec 20, 2013 Owner Author

@isaacs; yes I know that's an option, but the node_modules folder currently is a nice clean place for only the external modules we use. All the
application-specific modules are not generic enough to be put inside node_modules. Like all kinds of Controllers, Models and stuff. I don't think the
node_modules folder is intended for that, is it?

mikeal commented Dec 20, 2013

yeah, whenever i see '../../../dir/name' i immediately think that someone has either 1) prematurely broken out their app into a million directories and
files or 2) hasn't modularized these components into modules yet, and they should.

mikeal commented Dec 20, 2013

@branneman we do things in 3 phases.

1. something is a single file library in our app
2. we break it into a proper node module and check it into node_modules
3. we publish it and give it its own repository.

If it has application logic, it's not in node_modules. If a lot of things call it or depend on it, it shouldn't have application logic in it, it should be a
node_module.

This helps us keep things clean and lets us write things for ourselves, make sure they work, then publish them and hopefully see others getting use
from them and contributing.

tj commented Dec 20, 2013

I should note that NODE_PATH can be confusing too if you're not familiar with the app; it's not always clear where a module is coming from unless
it's named in an obvious way. We prefix ours with s- so it's obvious, but they now live in a private registry.

branneman commented Dec 20, 2013 Owner Author

Thanks for all the feedback!

I hear mostly: if you have this problem: you have a bad architecture or bad application design. I also hear: maybe it's time for a private npm
repository?

As an example, most modules in one of my applications depend on a config file. Still, I can't remove application logic from that, and I'm already
using a proper (external) module to handle common config logic. But the data itself needs to be either loaded a lot or passed around a lot.

Would it then be a best practice to save that config object once per request to the req variable in express.js? I doubt that, because I'm touching
objects I don't own. What is the way to do that kind of thing?

One of the other things I tried with an old version is require.paths , but that's removed now. That was actually the most elegant solution in my
opinion. At least everything would stay inside the app; it's the developer's responsibility to use it wisely.

creationix commented Dec 21, 2013

I used to use the symlink method, but it's too much trouble on windows so I don't use it anymore.

In most of my projects nowadays I don't have this problem. I use relative requires for intra-package modules.

I used to mix local deps with npm deps in node_modules, but that made my .gitignore too much trouble to only ignore certain deps.

My current behavior is:

1 - Write a single file


2 - when it gets too big, start moving parts to other files in the same folder with relative requires
3 - When there are too many modules, package some into reusable modules independent of my app or library.

I use symlinks (or nested directories on windows) to link my different packages to each other, but each has its own git repo and, if it's generally
usable, its own npm name.
defunctzombie commented Jan 30, 2014

A while back I proposed the file:/// dependency for private installs.

Essentially the following in your package.json

"dependencies": {
"whatever": "file///relative/path/to/folder"
}

It would only work for private packages but is an easy way to have the package management/install system take care of setting up the symlink for
you at install time. This avoids all of the above described hacks and also has the benefit of letting you reference package.json when you want to
learn about a dependency (which you do already).

dskrepps commented Feb 6, 2014

The start-up script is a good option, though all the solutions have some drawback. At the very least, others looking at your code might not know
where the require is looking for modules. You also want to eliminate the possibility of new dependencies colliding with modules of the same name.

I haven't noticed anyone mention using the relationship between your dependencies and your project root. So I went and built it myself:
requireFrom. This method is intuitive to anyone looking at it, and requires no extra steps outside of adding a dependency. Third-party modules can
use it relative to themselves, as well.

var requireFrom = require('requirefrom');


var models = requireFrom('lib/components/models');

var Article = models('article');

Thanks for writing up this overview.


alexgorbatchev commented Feb 20, 2014

I've been using symlinks with the following structure:

/node_modules
/package.json
/src
/node_modules
/client -> ../client
/server -> ../server
/shared -> ../shared
/client
/apps
/main
/test
main.spec.js
index.js
/modules
/foo
/test
foo.spec.js
index.js
/server
/apps
/modules
/shared

it also solves the problem of not knowing where the modules come from, because all app modules have client/server/shared prefixes in require paths

indirectlylit commented Feb 22, 2014

I ran into the same architectural problem: wanting a way of giving my application more organization and internal namespaces, without:

mixing application modules with external dependencies or bothering with private npm repos for application-specific code
using relative requires, which make refactoring and comprehension harder

using symlinks or environment variables which don't play nicely with source control

The start-up script is a good idea, but I didn't like the extra moving parts.

In the end, I decided to organize my code using file naming conventions rather than directories. A structure would look something like:

node_modules
...
package.json
npm-shrinkwrap.json
src
app.js
app.config.js
app.models.bar.js
app.models.foo.js
app.web.js
app.web.routes.js
...

Then in code:

var app_config = require('./app.config');
var app_models_foo = require('./app.models.foo');

or just:

var config = require('./app.config');
var foo = require('./app.models.foo');

and external dependencies are available from node_modules as usual:

var express = require('express');


In this way, all application code is hierarchically organized into modules and available to all other code relative to the application root.

The main disadvantage is of course that in a file browser, you can't expand/collapse the tree as though it was actually organized into directories. But
I like that it's very explicit about where all code is coming from, and it doesn't use any 'magic'.

flodev commented Mar 5, 2014

Hi,

the start-up script doesn't work very well with nodemon (or node forever).
If something changes, nodemon tries to restart the start-up script, and in my case the child process (express js) is still bound to my IP and I got an
EADDRINUSE error.
I also tried to kill the child process, but this will be executed too late.

var app = spawn(process.execPath, args, opt);

process.on('exit', function() {
console.log("kill child process");
app.kill('SIGINT');
});

edit:
I've switched to the approach used by alexgorbatchev using a server and shared folder and making symlinks to node_modules folder.
Thank you it works great.

gmfx commented Mar 14, 2014

@visionmedia: quite like the idea of the no/low nesting, but how does that work with a larger source base? I have seen a few of your github repos
which manifest what you say. I'm thinking that maybe an application has more sprawling areas of functionality? (I'm a newbie on node so I might
be speculating.)
tuliomonteazul commented Mar 24, 2014

I also found a good way to use the start-up script solution with Grunt and nodemon.

In my Gruntfile.js , I just have set:

grunt.initConfig({
concurrent: {
dev: {
tasks: ['nodemon', 'node-inspector', 'watch', 'mochaTest'],
options: {
logConcurrentOutput: true
}
}
...
},
nodemon: {
dev: {
script: 'index.js',
options: {
nodeArgs: ['--debug'],
env: {
NODE_PATH: './app'
}
}
}
},
...

So just setting the options.env inside nodemon configuration and my application is still starting by just calling $ grunt

patrick-steele-idem commented Apr 4, 2014

Here's another option to consider:


https://github.com/patrick-steele-idem/app-module-path-node

The app-module-path module modifies the internal Module._nodeModulePaths method to change how the search path is calculated for modules at the
application level. Modules under "node_modules" will not be impacted, because modules installed under node_modules will not get a modified
search path.

It of course bothers me that a semi-private method needed to be modified, but it works pretty well. Use at your own risk.

The startup script solution will impact module loading for all installed modules which is not ideal. Plus, that solution requires that you start your
application in a different way which introduces more friction.

a-ignatov-parc commented Apr 28, 2014

You can create helper function in global scope to be able require modules relative to root path.

In app.js :

global.app_require = function(name) {
return require(__dirname + '/' + name);
}

var fs = require('fs'),
config = app_require('config'),
common = app_require('utils/common');

It also will work in other files.

esco commented May 17, 2014

@gumaflux I believe @visionmedia is only talking about modules which usually wouldn't require "sprawling areas of functionality" because a single
module isn't meant to do as much as an application. I think the nesting issue is more of a problem in applications, especially MVC apps.
slorber commented May 19, 2014

I'm using browserify for a browser app.

The problem with using paths, or putting code into node_modules, is that in your app you may have sources to transform, for example CoffeeScript or
JSX files.

When using require("some_private_node_module"), browserify doesn't seem to transform the files and builds a bundle with unprocessed sources.

substack commented May 30, 2014

@slorber Put the transforms in each module's package.json https://github.com/substack/browserify-handbook#browserifytransform-field

Now your code will work and is less vulnerable to system-wide configuration changes and upgrades because each component can have its own local
transforms and dependencies.

See also: avoiding ../../../../../../.. which pretty much echos what @isaacs has said already: just use node_modules/ .

If you're worried about how node_modules might clutter up your app, create a node_modules/app and put all your modules under that package
namespace. You can always require('app/whatever') for some package node_modules/app/whatever .

Not sure how node_modules/ works? It's really nifty!

joelabair commented Jun 20, 2014

So....
This is a small hack. It relies only on node.js continuing to support the NODE_PATH environment variable. The NODE_PATH env setting is a fine
method for defining an application-specific local modules search path. However, I don't like relying on it being properly set external to javascript in
all cases (i.e. export, bash profile, or startup cmd). Node's module.js absorbs process.env's NODE_PATH into a private variable for inclusion into a list
of global search paths used by require. The problem is, node only looks at process.env['NODE_PATH'] once, on main process init, before evaluating
any of the app's code. Including the following 2 lines allows the re-definition of NODE_PATH post process-init, and should be included prior to any
local-module-specific requires. In a top-level file include:

process.env['NODE_PATH'] = __dirname + '/lib';


require('module').Module._initPaths();

Then simply require any modules in ./lib

var myLocalLibModule = require('myLocalLibModule');


...

This does not change the behavior of module.js as documented; node_modules, package.json, and global modules all behave as expected.

kgryte commented Jun 24, 2014

Another option for complex application logic (config files, loggers, database connections, etc) is to use inversion of control (IoC) containers with
dependency injection. See @jaredhanson's Electrolyte for one implementation.

branneman commented Jun 30, 2014 Owner Author

I just updated the article again and added more solutions. Thanks for all the feedback, keep it coming!

@joelabair: Great suggestion, added it as solution 6.

@a-ignatov-parc: Love the simplicity, added it as solution 7. Great and non-hacky.


@dskrepps: I don't like the fact that I would need to call require('requirefrom') in every file, unless you make it global like @a-ignatov-parc's
solution as well. And then it's not that different from solution 7. (Altough I now see that you commented that one first!)

/cc @isaacs, @visionmedia, @mikeal, @creationix, @defunctzombie, @dskrepps, @alexgorbatchev, @indirectlylit, @flodev, @gumaflux,
@tuliomonteazul, @patrick-steele-idem, @a-ignatov-parc, @esco, @slorber, @substack, @joelabair, @kgryte

awei01 commented Jul 26, 2014

FWIW, in case anyone is using Jest for testing, I tried solution 1 referenced above and it broke everything. But after hacking around, I figured out a
way to make symlinks work: facebook/jest#98

valtido commented Aug 12, 2014

This might be the worst IDEA ever, but what do you guys think about this?

# CoffeeScript Example
$require = require
require = (file)->
if /^\/\/.*$/.test file
file = file.slice 1, file.length
$require.resolve process.cwd() + file
else
$require file

//JavaScript Example
var $require, require;
$require = require;
require = function(file) {
  if (/^\/\/.*$/.test(file)) {
    file = file.slice(1, file.length);
    return $require.resolve(process.cwd() + file);
  } else {
    return $require(file);
  }
};

You can add that on the first line to override the require function with a reference to itself...

Now, you can use require("express") as normal, and require("//lib/myLibFile") . The difference is the leading // , inspired by the use in http
requests: //ajax.googleapis.com/ajax/libs/jqueryui/1.11.0/jquery-ui.min.js .

MarkKahn commented Sep 14, 2014

My current solution is to have my script spawn a child-process to itself if NODE_PATH isn't set. This allows me to just run node file.js and not worry
about anything else:

if( !process.env.NODE_PATH ){
  // set NODE_PATH to `pwd`
  process.env.NODE_PATH = __dirname + '/';

  require( 'child_process' ).spawn( 'gulp', [].slice.call( process.argv, 2 ), {
    stdio: 'inherit'
  } );

  // "throw away" logging from this process. The child will still be fine since it has access to stdout and its own console.log
  console.log = function(){};
}else{
  // start app
}

UnquietCode commented Sep 17, 2014

Thank you for this write-up! I went with #7 and have a global method Require which complements require .
cronvel commented Oct 2, 2014

And what about:

var myModule = require.main.require( './path/to/module' ) ;

... seems to work pretty well as long as your main js file is at the root of your project.

azu commented Oct 4, 2014

npm 2.0 supports Local Paths.

viruschidai commented Oct 10, 2014

I wrote a lib when I tried to restructure some source code in a large project: https://github.com/viruschidai/node-mv. It moves a source file and
updates all require paths to the moved file.

stringparser commented Oct 26, 2014

@azu nice! Still...

This feature is helpful for local offline development and creating tests that require npm installing where you don't want to hit an external
server, but should not be used when publishing packages to the public registry.

What I've been doing is to exploit the require.cache . If I have a package, say utils , in node_modules , I'll make a lib/utils and there I'll merge
the cache of utils to have whatever I want. That is:

var util = require('utils');

util.which = require('which');
util.minimist = require('minimist');
module.exports = util;

So I only have to require that package once, and then utils.<some package> will give the necessary package.

gagle commented Nov 9, 2014

This is my contribution to this topic: https://github.com/gagle/node-getmod

It just shortens the relative paths by introducing marks, points from which paths can be relative.

renatoargh commented Nov 17, 2014

My solution is:

var path = require('path');

global._require = function(name) { //I call it 'reversal require'
  // the parameter must not be named `path`, or it would shadow the path module above
  return require(path.join(__dirname, name));
}

//PS.: This code should be in the root-level folder of your project!

You are now basically requiring your .js files from the base instead of the cwd

booleangate commented Dec 2, 2014

A word of caution for people using the symlink approach with Browserify: you are likely to break transforms. This has been my experience with brfs
when trying to include a module through a symlinked path. The transformer seems to ignore symlinked paths (or probably packages that are in the
node_modules directory).
However, it turns out that there's an additional option for strategy #4 if you're using a build tool like gulp (and still works with browserify
transforms). I've simply added process.env.NODE_PATH = "./my/include/path:" + (process.env.NODE_PATH || ""); to my gulpfile.js and everything
works great now.

enricostara commented Dec 13, 2014

I released requirish, a solution that mixes strategy #3 (rekuire) and #7 (require.main.require).

The tool is also a browserify transform that converts back all the require() statements for the browser, adding again the long relative paths only for
the browserify processor.

davidshimjs commented Jan 1, 2015

@azu A local path in npm isn't synchronized with the original source code when I edit it in the original folder. It doesn't make a symbolic link.

gavinengel commented Jan 3, 2015

I just made this module (my first) so I'd love to hear feedback (on my github page, not on this thread): https://www.npmjs.com/package/magic-globals

// require this module without assigning export
require('magic-globals');

// you may now use additional global objects in any module,
// in addition to built-ins: __filename and __dirname
console.log('__line: ' + __line); // ex: 6
console.log('__file: ' + __file); // ex: server
console.log('__ext: ' + __ext); // ex: js
console.log('__base: ' + __base); // ex: /home/node/apps/5pt-app-model-example/api-example
console.log('__filename: ' + __filename); // ex: /home/node/apps/5pt-app-model-example/api-example/server/server.js
console.log('__function: ' + __function); // ex: (anonymous)
console.log('__dirname: ' + __dirname); // ex: /home/node/apps/5pt-app-model-example/api-example/server

andineck commented Jan 16, 2015

For me, the hack presented by @joelabair works really well. I tested it with node v0.8, v0.10, v0.11 and it works well. In order to reuse this solution, I
made a little module where you can just add the folders that should behave like the node_modules folder.
https://www.npmjs.com/package/local-modules

require('local-modules')('lib', 'components');

like @creationix, I didn't want to mess with private dependencies in node_modules folder.

ivan-kleshnin commented Jan 21, 2015

If you put parts of your app into node_modules you can't exclude node_modules from the search scope anymore. So you lose the ability to quickly
search through project files. This kinda sucks.

ivan-kleshnin commented Jan 21, 2015

As for local-modules solution and likes...

When you start to import app modules like require("something") and those modules do not really reside in node_modules , it feels like evil
magic to me. The import semantics were changed under the cover.
I actually think it should be resolved by adding a special PROJECT ROOT symbol and patching native require . Syntax may be like
require("~/dfdfdf") .
But ~ will be confused with the unix home dir, so it's better to choose something else, like require("@/dfdfdf") .
Explicit is better than implicit, as no one may miss the "@" symbol in import statements.
We basically add different syntax for different semantics, which is good imo.

I believe having a special shims.js file for every non-standard installation like this in project folder is sane and safe enough.

https://gist.github.com/ivan-kleshnin/edfa4abefe8ce216b9fa

What do you guys think?

gagle commented Jan 25, 2015

This is my second approach. It just implements the __root solution which, in my opinion, is the best solution to this problem, and nodejs/iojs
should implement it.

https://github.com/gagle/node-groot

I also like the require("@/dfdfdf") approach.

gustavohenke commented Feb 1, 2015

I wrote in my blog about a few solutions presented here versus ES6 problems:
http://injoin.io/2015/01/31/nodejs-require-problem-es6.html

gagle commented Feb 1, 2015

@gustavohenke nice one, very hackish but cleaner and cross-functional among OSs. But the problem with it is the same as with putting the
modules inside node_modules. Having a require call like require('my/package') is very confusing for me, because I associate require paths without a
leading ./ with core or external modules. You could have an external module named my ; collisions may happen.
gustavohenke commented Feb 1, 2015

Yeah @gagle, I understand these problems, but my case is special, I won't be dropping ES6 modules. Fortunately, I have taken care of namespacing
my libs so there's only a single collision point. Also, my app is well documented for developers.

aforty commented Feb 4, 2015

This gist is so incredibly helpful. Kind of embarrassing that Node has an issue with this many hackish solutions.

ColCh commented Feb 5, 2015

It seems that NODE_PATH is the cleanest solution

doron2402 commented Feb 6, 2015

seems like:
if you can turn this into a node module, do it
else just define it in your index.js or app.js :
if (!global.__base) { global.__base = __dirname + '/'; }

ericelliott commented Feb 9, 2015

Holy crap. Lots of hacky solutions here.

Try this instead: rootrequire

The readme:

rootrequire
Require files relative to your project root.

Install

npm install --save rootrequire

Use

var
root = require('rootrequire'),
myLib = require(root + '/path/to/lib.js');

Why?

You can move files around more easily than you can with relative paths like ../../lib/my-lib.js
Every file documents your app's directory structure for you. You'll know exactly where to look for things.
Dazzle your coworkers.

Learn JavaScript with Eric Elliott

This was written for the "Learn JavaScript with Eric Elliott" courses. Don't just learn JavaScript. Learn how to change the world.

koresar commented Feb 11, 2015

To make node.js search for modules in an additional directory you could use the require.main.paths array.
// require('node-dm'); <-- Exception
require.main.paths.push('/home/username/code/projectname/node_modules/'); // <- any path here
console.log(require('node-dm')); // All good

Talento90 commented Feb 15, 2015

I'm using the wrapper solution. No magic just elegance.

Thanks for this post!

ivan-kleshnin commented Feb 24, 2015

@ericelliott, with your solution IDE navigation is lost in the same way as with others...
There is no escape from this problem at app code level. Every "trick" breaks IDE move-to functionality.
From all those "solutions", only symlinks keep IDE working as it should.

sylvainv commented Feb 25, 2015

Thanks for the post, very useful and detailed. I found the wrapper solution to be the most elegant; it works on any recent node instance and does not
require any pre-setup / hacks for it to work.

Besides, it lets me set the path to the library and avoid any potential name conflict issues.

etcinit commented Mar 2, 2015

I'll add my library to the list: https://github.com/etcinit/enclosure (It's very Java-like though)
jondlm commented Mar 3, 2015

Turns out that npm now flattens your dependency tree, which breaks the "rootrequire" method by @ericelliott.

I found a workaround though: http://www.jondelamotte.com/solving-node-project-requires/

rahularyan commented Mar 9, 2015

Thanks for the awesome tutorial

scharf commented Mar 13, 2015

Create symlink using node in npm postinstall

Since symlink is the only solution that does not confuse IDEs (as @ivan-kleshnin noted), here is my solution: add a postinstall script to the
package.json that creates a symlink from the app directory to the node_modules (note the srcpath link is specified relative to the node_modules ):

"scripts": {
  "postinstall" : "node -e \"var srcpath='../app'; var dstpath='node_modules/app'; var fs=require('fs'); fs.exists(dstpath,function(exists){if(!exists){fs.symlinkSync(srcpath,dstpath,'dir');}});\""
},

The script could also be put into a separate file, but I prefer to specify it directly inside the package.json...

For readability, here is the one-liner well formatted:

// the src path relative to node_modules
var srcpath = '../app';
var dstpath = 'node_modules/app';
var fs = require('fs');
fs.exists(dstpath, function (exists) {
  // create the link only if the dest does not exist!
  if (!exists) {
    fs.symlinkSync(srcpath, dstpath, 'dir');
  }
});

I think it should work on windows as well, but I have not tested it.

tomatau commented Apr 1, 2015

Would like to see an updated article for JS module syntax, as it requires imports to be static - many of these solutions won't work

sh-a-v commented Apr 2, 2015

@scharf, on windows it works. You only need to run cmd as admin.

But fs.exists always returns false, so I replaced it with fs.readlink :

fs.readlink(dstpath, function(err, existLink){ if(!existLink){ fs.symlinkSync(srcpath, dstpath, 'dir'); } })

jaubourg commented Apr 3, 2015

I developed wires because we had configuration and routing nightmares at my company. We've been using it for 2 years now and I just released
version 0.3.0 which is world-ready, so have fun using it and don't hesitate with feedback, questions or death-threats :P
Using wires, you would create a wires.json file at the root of your app:

{
":models/": "./lib/models/"
}

And then just require models like this:

require( ":models/article" );
require( ":models/client" );

And call your main script using the wires binary:

wires startServer

There's a lot more to wires but I felt like sharing on this specific topic.

Hope this helps! :)


Even if your Node.js program is a web-server of some sort, working with the local file system is somewhat inevitable. While Node.js
does provide low-level file system access (see the Node.js fs module), abstraction is always helpful, particularly when dealing with
absolute paths.

The filepath Node.js module is a very helpful utility for simple access to file paths. You’ll need only a package.json file with this module
as a dependency, an “npm install” command, and then you are up and running. This article provides a quick introduction to a few of the
most common methods.

Example # 1A

//get a reference to the filepath module
var FP = require('filepath');

//get a reference to the folder structure that leads up to the current file, set it to the path variable
var path = FP.newPath();

//output the path variable
console.log(path);

Example # 1B:

[YOUR LOCAL PATH TO]/JavaScript/node-js/filepath

In Example # 1, we first create the FP variable, which references the filepath module. Then we create the path variable, which holds the
return value of the FP object's newPath method. And finally, we output the path in the console. Example # 1B shows the terminal output
when we use console.log to view the path variable. This path will vary for each user, so I simply put "[YOUR LOCAL PATH TO]" for the
folder structure that leads up to that file in the github repo that you cloned (see "How to Demo" below).

How to Demo:

1. Clone this github repo: https://github.com/kevinchisholm/video-code-examples


2. Navigate to: JavaScript/node-js/filepath
3. Execute the following command in a terminal prompt: node filepath-1.js

Example # 2

//get a reference to the filepath module
var FP = require('filepath');

//get a reference to the folder structure that leads up to the current file, set it to the path variable
var path = FP.newPath();

var files = path.list();

console.dir(files);

Example # 2 demonstrates the list method. The only real difference between this code and Example # 1 is the new variable "files",
which receives the return value of the list method when called on our path variable. The files variable ends up as an array. Each element in
the array is an object whose “path” property is a string that points to a file in the current directory.

How to Demo:

1. Clone this github repo: https://github.com/kevinchisholm/video-code-examples


2. Navigate to: JavaScript/node-js/filepath
3. Execute the following command in a terminal prompt: node filepath-2.js

Example # 3A

//get a reference to the filepath module
var FP = require('filepath');

FP.newPath(__dirname).recurse(function (path) {
  console.dir(path);
})

Example # 3B

[
  { path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-1.js' },
  { path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-2.js' },
  { path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-3.js' },
  { path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules' },
  { path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/package.json' }
]

Example # 3C

//get a reference to the filepath module
var FP = require('filepath');

FP.newPath(__dirname).recurse(function (path) {
  //console.dir(path);
  console.log(path.toString());
})

Example # 3D

[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-1.js
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-2.js
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-3.js
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/.npmignore
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/LICENSE
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/README.md
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/index.js
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/node_modules
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/node_modules/iou
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/node_modules/iou
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/node_modules/iou
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/node_modules/iou
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/node_modules/iou
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/node_modules/iou
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules/filepath/package.json
[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/package.json

In Example # 3A, we see the recurse method in action. Just as the name implies, the recurse method will recursively list all of the files in
the current directory. As a result, if one of those files is a folder, then it will list all of the files in that folder, and so on. This method differs
from the previous two examples in that it takes a callback. The callback is a bit like a forEach call; it iterates over all of the files or folders
in the path, and calls the callback for each one. Inside of the callback, the path variable is the current path being iterated over.

Example # 3B is the output from the code in Example # 3A.

In Example # 3C, we use the toString() method of the path object so that instead of a bunch of objects that we would need to handle, we
just get the values we are after; the string representation of the path to that file or folder.

Example # 3D is the output from the code in Example # 3C.

How to Demo:

1. Clone this github repo: https://github.com/kevinchisholm/video-code-examples


2. Navigate to: JavaScript/node-js/filepath
3. Execute the following command in a terminal prompt: node filepath-3.js

Summary
The filepath Node.js module has much more to offer than was demonstrated here. Hopefully, this article has demonstrated how easy it is
to get started with filepath.

Working with File Paths in Node.js


Jan 1, 2015

Jesse Smith discusses how to work with the file paths often used in Node.js applications.


This article discusses handling file paths from the file system, which is important for loading and parsing file names in your application.

The file system is a big part of any application that has to handle file paths for loading, manipulating, or serving data. Node provides some
helper methods for working with file paths, which are discussed in the sections that follow.

Most of the time, your application has to know where certain files and/or directories are, and it operates on them within the file system based
on certain contexts. Most other languages also have these convenience methods, but Node may have a few you might not have seen in
any other language.

Find Paths
Node can tell you where in the file system it is working by using the __filename and __dirname variables. The __filename variable
provides the absolute path to the file that is currently executing; __dirname provides the absolute path to the working directory where the
file being executed is located. Neither variable has to be imported from any modules because each is provided standard.

A simple example using both variables appears below:

console.log("This file is " + __filename);


console.log("It's located in " + __dirname);

The output from this code from my machine is this:

This file is C:\Users\Jesse Smith\workspacex\IntroToNode\file1.js


It's located in C:\Users\Jesse Smith\workspacex\IntroToNode
You can use the process object’s cwd() method to get the current working directory of the application:

console.log("The current working directory is " + process.cwd());

Many applications might have to switch the current working directory to another directory to fetch or serve different files.
The process object provides the chdir() method to accomplish this. The name of the directory to switch to is passed in as an argument
to this method:

process.chdir("../");
console.log("The new working directory is " + process.cwd());

The code changes to the directory above the current working directory. If the directory change fails, the current working directory
remains unchanged. You can trap this error using a try..catch clause.
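
For example, a minimal sketch of trapping that error (the directory name here is hypothetical):

try {
    process.chdir("no-such-directory");
} catch (err) {
    // chdir failed; the working directory is unchanged
    console.log("Could not change directory: " + err.message);
}
console.log("Still in " + process.cwd());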

You might need the path to the Node executable file. The process object provides the execPath property for this:

console.log(process.execPath);

The output from the code above is the path C:\Program Files (x86)\nodejs\node.exe.
Node.js | path.relative() Method
Last Updated : 28 Jan, 2020

The path.relative() method is used to find the relative path from a given path to another path based on the current working
directory. If both the given paths are the same, it would resolve to a zero-length string.

Syntax:

path.relative( from, to )

Parameters: This method accepts two parameters as mentioned above and described below:

from: It is the file path that would be used as base path.


to: It is the file path that would be used to find the relative path.

Return Value: It returns a string containing the relative path from the first path to the second.

The program below illustrates the path.relative() method in Node.js:


Example:

// Node.js program to demonstrate the
// path.relative() method

// Import the path module
const path = require('path');

const path1 = path.relative("geeks/website", "geeks/index.html");
console.log(path1);

const path2 = path.relative("users/admin", "admin/files/website");
console.log(path2);

// When both the paths are the same,
// it returns a blank string
const path3 = path.relative("users/admin", "users/admin");
console.log(path3);

Output:

..\index.html
..\..\admin\files\website
Reference: https://nodejs.org/api/path.html#path_path_relative_from_to

Requiring modules in Node.js: Everything you need to know
Samer Buna
Update: This article is now part of my book “Node.js Beyond The Basics”.

Read the updated version of this content and more about Node at jscomplete.com/node-

beyond-basics.

Node uses two core modules for managing module dependencies:

The require module, which appears to be available on the global scope — no need
to require('require') .

The module module, which also appears to be available on the global scope — no need
to require('module') .

You can think of the require module as the command and the module module as the organizer of

all required modules.

Requiring a module in Node isn’t that complicated of a concept.


const config = require('/path/to/file');

The main object exported by the require module is a function (as used in the above example).

When Node invokes that require() function with a local file path as the function’s only argument,

Node goes through the following sequence of steps:

Resolving: To find the absolute path of the file.

Loading: To determine the type of the file content.

Wrapping: To give the file its private scope. This is what makes both
the require and module objects local to every file we require.

Evaluating: This is what the VM eventually does with the loaded code.

Caching: So that when we require this file again, we don’t go over all the steps another time.

In this article, I’ll attempt to explain with examples these different stages and how they affect the

way we write modules in Node.

Let me first create a directory to host all the examples using my terminal:
mkdir ~/learn-node && cd ~/learn-node

All the commands in the rest of this article will be run from within ~/learn-node .

Resolving a local path


Let me introduce you to the module object. You can check it out in a simple REPL session:

~/learn-node $ node
> module

Module {

id: '<repl>',

exports: {},

parent: undefined,
filename: null,
loaded: false,
children: [],
paths: [ ... ] }

Every module object gets an id property to identify it. This id is usually the full path to the file,

but in a REPL session it’s simply <repl>.

Node modules have a one-to-one relation with files on the file-system. We require a module by

loading the content of a file into memory.

However, since Node allows many ways to require a file (for example, with a relative path or a pre-

configured path), before we can load the content of a file into the memory we need to find the

absolute location of that file.

When we require a 'find-me' module, without specifying a path:


require('find-me');

Node will look for find-me.js in all the paths specified by module.paths — in order.

~/learn-node $ node
> module.paths

[ '/Users/samer/learn-node/repl/node_modules',

'/Users/samer/learn-node/node_modules',

'/Users/samer/node_modules',

'/Users/node_modules',

'/node_modules',

'/Users/samer/.node_modules',

'/Users/samer/.node_libraries',

'/usr/local/Cellar/node/7.7.1/lib/node' ]

The paths list is basically a list of node_modules directories under every directory from the current

directory to the root directory. It also includes a few legacy directories whose use is not

recommended.

If Node can’t find find-me.js in any of these paths, it will throw a “Cannot find module” error.

~/learn-node $ node
> require('find-me')

Error: Cannot find module 'find-me'

at Function.Module._resolveFilename (module.js:470:15)
at Function.Module._load (module.js:418:25)
at Module.require (module.js:498:17)
at require (internal/module.js:20:19)
at repl:1:1
at ContextifyScript.Script.runInThisContext (vm.js:23:33)
at REPLServer.defaultEval (repl.js:336:29)
at bound (domain.js:280:14)
at REPLServer.runBound [as eval] (domain.js:293:12)
at REPLServer.onLine (repl.js:533:10)
If you now create a local node_modules directory and put a find-me.js in there, the require('find-me') line will find it.

~/learn-node $ mkdir node_modules

~/learn-node $ echo "console.log('I am not lost');" > node_modules/find-me.js

~/learn-node $ node
> require('find-me');

I am not lost
{}

>

If another find-me.js file existed in any of the other paths, for example, if we have

a node_modules directory under the home directory and we have a different find-me.js file in there:

$ mkdir ~/node_modules
$ echo "console.log('I am the root of all problems');" > ~/node_modules/find-me.js

When we require('find-me') from within the learn-node directory — which has its own node_modules/find-me.js — the find-me.js file under the home directory will not be loaded at all:

~/learn-node $ node
> require('find-me')

I am not lost
{}

>

If we remove the local node_modules directory under ~/learn-node and try to require find-me one
more time, the file under the home’s node_modules directory would be used:

~/learn-node $ rm -r node_modules/
~/learn-node $ node
> require('find-me')

I am the root of all problems


{}

>

Requiring a folder
Modules don’t have to be files. We can also create a find-me folder under node_modules and place

an index.js file in there. The same require('find-me') line will use that folder’s index.js file:

~/learn-node $ mkdir -p node_modules/find-me

~/learn-node $ echo "console.log('Found again.');" > node_modules/find-me/index.js

~/learn-node $ node
> require('find-me');

Found again.
{}

>

Note how it ignored the home directory’s node_modules path again since we have a local one now.

An index.js file will be used by default when we require a folder, but we can control what file

name to start with under the folder using the main property in package.json . For example, to make

the require('find-me') line resolve to a different file under the find-me folder, all we need to do is

add a package.json file in there and specify which file should be used to resolve this folder:
~/learn-node $ echo "console.log('I rule');" > node_modules/find-me/start.js

~/learn-node $ echo '{ "name": "find-me-folder", "main": "start.js" }' > node_modules/fi

~/learn-node $ node
> require('find-me');

I rule
{}

>

require.resolve
If you want to only resolve the module and not execute it, you can use

the require.resolve function. This behaves exactly the same as the main require function, but

does not load the file. It will still throw an error if the file does not exist and it will return the full path

to the file when found.


> require.resolve('find-me');

'/Users/samer/learn-node/node_modules/find-me/start.js'

> require.resolve('not-there');
Error: Cannot find module 'not-there'

at Function.Module._resolveFilename (module.js:470:15)
at Function.resolve (internal/module.js:27:19)
at repl:1:9
at ContextifyScript.Script.runInThisContext (vm.js:23:33)
at REPLServer.defaultEval (repl.js:336:29)
at bound (domain.js:280:14)
at REPLServer.runBound [as eval] (domain.js:293:12)
at REPLServer.onLine (repl.js:533:10)
at emitOne (events.js:101:20)
at REPLServer.emit (events.js:191:7)
>

This can be used, for example, to check whether an optional package is installed or not and only
use it when it’s available.
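
For example, a minimal sketch of that optional-dependency check (here, 'optional-pkg' is a hypothetical package name):

let optionalPkg = null;

try {
  // Throws if the package cannot be resolved; nothing gets executed.
  require.resolve('optional-pkg');
  optionalPkg = require('optional-pkg');
} catch (err) {
  // The package is not installed; carry on without it.
}

if (optionalPkg) {
  // ... use the optional package here
}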

Relative and absolute paths


Besides resolving modules from within the node_modules directories, we can also place the module

anywhere we want and require it with either relative paths ( ./ and ../ ) or with absolute paths

starting with / .

If, for example, the find-me.js file was under a lib folder instead of the node_modules folder, we

can require it with:

require('./lib/find-me');

Parent-child relation between files


Create a lib/util.js file and add a console.log line there to identify it. Also, console.log the module object itself:

~/learn-node $ mkdir lib


~/learn-node $ echo "console.log('In util', module);" > lib/util.js

Do the same for an index.js file, which is what we’ll be executing with the node command. Make

this index.js file require lib/util.js :

~/learn-node $ echo "console.log('In index', module); require('./lib/util');" > index.js

Now execute the index.js file with node:


~/learn-node $ node index.js
In index Module {
id: '.',

exports: {},

parent: null,
filename: '/Users/samer/learn-node/index.js',
loaded: false,
children: [],
paths: [ ... ] }

In util Module {

id: '/Users/samer/learn-node/lib/util.js',

exports: {},

parent:
Module {

id: '.',

exports: {},

parent: null,
filename: '/Users/samer/learn-node/index.js',
loaded: false,
children: [ [Circular] ],

paths: [...] },

filename: '/Users/samer/learn-node/lib/util.js',

loaded: false,

children: [],

paths: [...] }

Note how the main index module (id: '.') is now listed as the parent for the lib/util module.

However, the lib/util module was not listed as a child of the index module. Instead, we have

the [Circular] value there because this is a circular reference. If Node prints

the lib/util module object, it will go into an infinite loop. That’s why it simply replaces the lib/util reference with [Circular] .

More importantly now, what happens if the lib/util module required the main index module?

This is where we get into what’s known as the circular modular dependency, which is allowed in

Node.
To understand it better, let’s first understand a few other concepts on the module object.

exports, module.exports, and


synchronous loading of modules
In any module, exports is a special object. If you’ve noticed above, every time we’ve printed a

module object, it had an exports property which has been an empty object so far. We can add any

attribute to this special exports object. For example, let’s export an id attribute for index.js and lib/util.js :

// Add the following line at the top of lib/util.js

exports.id = 'lib/util';

// Add the following line at the top of index.js

exports.id = 'index';
When we now execute index.js , we’ll see these attributes as managed on each

file’s module object:

~/learn-node $ node index.js


In index Module {
id: '.',

exports: { id: 'index' },

loaded: false,
... }

In util Module {

id: '/Users/samer/learn-node/lib/util.js',

exports: { id: 'lib/util' },

parent:
Module {

id: '.',

exports: { id: 'index' },

loaded: false,
... },

loaded: false,
... }

I’ve removed some attributes in the above output to keep it brief, but note how the exports object

now has the attributes we defined in each module. You can put as many attributes as you want on

that exports object, and you can actually change the whole object to be something else. For

example, to change the exports object to be a function instead of an object, we do the following:

// Add the following line in index.js before the console.log

module.exports = function() {};

When you run index.js now, you’ll see how the exports object is a function:
~/learn-node $ node index.js
In index Module {
id: '.',

exports: [Function],

loaded: false,
... }

Note how we did not do exports = function() {} to make the exports object into a function. We

can’t actually do that because the exports variable inside each module is just a reference to module.exports, which manages the exported properties. When we reassign the exports variable, that

reference is lost and we would be introducing a new variable instead of changing the module.exports object.

The module.exports object in every module is what the require function returns when we require

that module. For example, change the require('./lib/util') line in index.js into:

const UTIL = require('./lib/util');

console.log('UTIL:', UTIL);

The above will capture the properties exported in lib/util into the UTIL constant. When we

run index.js now, the very last line will output:

UTIL: { id: 'lib/util' }

Let’s also talk about the loaded attribute on every module. So far, every time we printed a module

object, we saw a loaded attribute on that object with a value of false .

The module module uses the loaded attribute to track which modules have been loaded (true

value) and which modules are still being loaded (false value). We can, for example, see the index.js module fully loaded if we print its module object on the next cycle of the event loop using a setImmediate call:

// In index.js
setImmediate(() => {
console.log('The index.js module object is now loaded!', module)
});

The output of that would be:

The index.js module object is now loaded! Module {

id: '.',

exports: [Function],
parent: null,
filename: '/Users/samer/learn-node/index.js',

loaded: true,
children:
[ Module {

id: '/Users/samer/learn-node/lib/util.js',

exports: [Object],
parent: [Circular],
filename: '/Users/samer/learn-node/lib/util.js',

loaded: true,
children: [],
paths: [Object] } ],

paths:
[ '/Users/samer/learn-node/node_modules',

'/Users/samer/node_modules',

'/Users/node_modules',

'/node_modules' ] }

Note how in this delayed console.log output both lib/util.js and index.js are fully loaded.
The exports object becomes complete when Node finishes loading the module (and labels it so).

The whole process of requiring/loading a module is synchronous. That’s why we were able to see

the modules fully loaded after one cycle of the event loop.

This also means that we cannot change the exports object asynchronously. We can’t, for

example, do the following in any module:

fs.readFile('/etc/passwd', (err, data) => {

if (err) throw err;

exports.data = data; // Will not work.

});

Circular module dependency


Let’s now try to answer the important question about circular dependency in Node: What happens

when module 1 requires module 2, and module 2 requires module 1?

To find out, let’s create the following two files under lib/ , module1.js and module2.js and have

them require each other:

// lib/module1.js

exports.a = 1;

require('./module2');

exports.b = 2;

exports.c = 3;

// lib/module2.js

const Module1 = require('./module1');

console.log('Module1 is partially loaded here', Module1);


When we run module1.js we see the following:

~/learn-node $ node lib/module1.js

Module1 is partially loaded here { a: 1 }

We required module2 before module1 was fully loaded, and since module2 required module1 while

it wasn’t fully loaded, what we get from the exports object at that point are all the properties

exported prior to the circular dependency. Only the a property was reported because

both b and c were exported after module2 required and printed module1 .

Node keeps this really simple. During the loading of a module, it builds the exports object. You

can require the module before it’s done loading and you’ll just get a partial exports object with

whatever was defined so far.


JSON and C/C++ addons
We can natively require JSON files and C++ addon files with the require function. You don’t even

need to specify a file extension to do so.

If a file extension was not specified, the first thing Node will try to resolve is a .js file. If it can’t

find a .js file, it will try a .json file and it will parse the .json file if found as a JSON text file.

After that, it will try to find a binary .node file. However, to remove ambiguity, you should probably

specify a file extension when requiring anything other than .js files.

Requiring JSON files is useful if, for example, everything you need to manage in that file is some

static configuration values, or some values that you periodically read from an external source. For

example, if we had the following config.json file:

{
  "host": "localhost",
  "port": 8080
}

We can require it directly like this:

const { host, port } = require('./config');

console.log(`Server will run at http://${host}:${port}`);

Running the above code will have this output:

Server will run at http://localhost:8080


If Node can’t find a .js or a .json file, it will look for a .node file and it would interpret the file as

a compiled addon module.

The Node documentation site has a sample addon file which is written in C++. It’s a simple

module that exposes a hello() function and the hello function outputs “world.”

You can use the node-gyp package to compile and build the .cc file into a .node file. You just

need to configure a binding.gyp file to tell node-gyp what to do.

Once you have the addon.node file (or whatever name you specify in binding.gyp ) then you can

natively require it just like any other module:

const addon = require('./addon');

console.log(addon.hello());
We can actually see the support of the three extensions by looking at require.extensions .
Looking at the functions for each extension, you can clearly see what Node will do with each. It

uses module._compile for .js files, JSON.parse for .json files, and process.dlopen for .node files.
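
We can verify this in a REPL session; a hedged sketch of what the inspection looks like (the exact output varies by Node version):

~ $ node
> Object.keys(require.extensions)
[ '.js', '.json', '.node' ]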

All code you write in Node will be wrapped in functions
Node’s wrapping of modules is often misunderstood. To understand it, let me remind you about

the exports / module.exports relation.

We can use the exports object to export properties, but we cannot replace the exports object

directly because it’s just a reference to module.exports:

exports.id = 42; // This is ok.


exports = { id: 42 }; // This will not work.

module.exports = { id: 42 }; // This is ok.

How exactly does this exports object, which appears to be global for every module, get defined as

a reference on the module object?

Let me ask one more question before explaining Node’s wrapping process.

In a browser, when we declare a variable in a script like this:

var answer = 42;

That answer variable will be globally available in all scripts after the script that defined it.
This is not the case in Node. When we define a variable in one module, the other modules in the

program will not have access to that variable. So how come variables in Node are magically

scoped?

The answer is simple. Before compiling a module, Node wraps the module code in a function,

which we can inspect using the wrapper property of the module module.

~ $ node
> require('module').wrapper

[ '(function (exports, require, module, __filename, __dirname) { ',

'\n});' ]

>

Node does not execute any code you write in a file directly. It executes this wrapper function which

will have your code in its body. This is what keeps the top-level variables that are defined in any

module scoped to that module.


This wrapper function has 5 arguments: exports , require , module , __filename , and __dirname .

This is what makes them appear to look global when in fact they are specific to each module.

All of these arguments get their values when Node executes the wrapper function. exports is

defined as a reference to module.exports prior to that. require and module are both specific to the

function to be executed, and __filename / __dirname variables will contain the wrapped module’s

absolute filename and directory path.

You can see this wrapping in action if you run a script with a problem on its first line:

~/learn-node $ echo "euaohseu" > bad.js

~/learn-node $ node bad.js


~/bad.js:1
(function (exports, require, module, __filename, __dirname) { euaohseu
^
ReferenceError: euaohseu is not defined

Note how the first line of the script as reported above was the wrapper function, not the bad

reference.

Moreover, since every module gets wrapped in a function, we can actually access that function’s

arguments with the arguments keyword:

~/learn-node $ echo "console.log(arguments)" > index.js

~/learn-node $ node index.js


{ '0': {},

'1':

{ [Function: require]
resolve: [Function: resolve],
main:
Module {

id: '.',

exports: {},

parent: null,
filename: '/Users/samer/index.js',
loaded: false,
children: [],
paths: [Object] },
extensions: { ... },

cache: { '/Users/samer/index.js': [Object] } },

'2':

Module {

id: '.',

exports: {},

parent: null,
filename: '/Users/samer/index.js',

loaded: false,
children: [],
paths: [ ... ] },

'3': '/Users/samer/index.js',
'4': '/Users/samer' }

The first argument is the exports object, which starts empty. Then we have

the require / module objects, both of which are instances that are associated with the index.js file

that we’re executing. They are not global variables. The last 2 arguments are the file’s path and its

directory path.

The wrapping function’s return value is module.exports . Inside the wrapped function, we can use

the exports object to change the properties of module.exports , but we can’t reassign exports itself

because it’s just a reference.

What happens is roughly equivalent to:

function (require, module, __filename, __dirname) {

let exports = module.exports;


// Your Code...

return module.exports;
}

If we change the whole exports object, it would no longer be a reference to module.exports . This

is the way JavaScript reference objects work everywhere, not just in this context.

The require object


There is nothing special about require . It’s an object that acts mainly as a function that takes a

module name or path and returns the module.exports object. We can simply override

the require object with our own logic if we want to.

For example, maybe for testing purposes, we want every require call to be mocked by default

and just return a fake object instead of the required module exports object. This simple

reassignment of require will do the trick:



require = function() {
  return { mocked: true };
};
After doing the above reassignment of require , every require('something') call in the script will

just return the mocked object.

The require object also has properties of its own. We’ve seen the resolve property, which is a

function that performs only the resolving step of the require process. We’ve also seen require.extensions above.

There is also require.main which can be helpful to determine if the script is being required or run
directly.

Say, for example, that we have this simple printInFrame function in print-in-frame.js :

// In print-in-frame.js

const printInFrame = (size, header) => {

console.log('*'.repeat(size));
console.log(header);
console.log('*'.repeat(size));
};

The function takes a numeric argument size and a string argument header and it prints that

header in a frame of stars controlled by the size we specify.

We want to use this file in two ways:


1. From the command line directly like this:

~/learn-node $ node print-in-frame 8 Hello

Passing 8 and Hello as command line arguments to print “Hello” in a frame of 8 stars.

2. With require . Assuming the required module will export the printInFrame function and we can

just call it:

const print = require('./print-in-frame');

print(5, 'Hey');

To print the header “Hey” in a frame of 5 stars.


Those are two different usages. We need a way to determine if the file is being run as a stand-alone script or if it is being required by other scripts.

This is where we can use this simple if statement:

if (require.main === module) {
  // The file is being executed directly (not with require)
}
So we can use this condition to satisfy the usage requirements above by invoking the

printInFrame function differently:

// In print-in-frame.js

const printInFrame = (size, header) => {
  console.log('*'.repeat(size));
  console.log(header);
  console.log('*'.repeat(size));
};

if (require.main === module) {
  printInFrame(process.argv[2], process.argv[3]);
} else {
  module.exports = printInFrame;
}

When the file is not being required, we just call the printInFrame function

with process.argv elements. Otherwise, we just change the module.exports object to be the printInFrame function itself.

All modules will be cached


Caching is important to understand. Let me use a simple example to demonstrate it.

Say that you have the following ascii-art.js file that prints a cool-looking header:
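
As a minimal stand-in, imagine ascii-art.js contains a single log line like this:

// ascii-art.js — stand-in for the header-printing file
console.log('<<< COOL HEADER >>>');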

We want to display this header every time we require the file. So when we require the file twice,

we want the header to show up twice.

require('./ascii-art') // will show the header.

require('./ascii-art') // will not show the header.


The second require will not show the header because of modules’ caching. Node caches the first

call and does not load the file on the second call.

We can see this cache by printing require.cache after the first require. The cache registry is

simply an object that has a property for every required module. Those properties’ values are the module objects used for each module. We can simply delete a property from

that require.cache object to invalidate that cache. If we do that, Node will re-load the module to re-cache it.
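
A minimal sketch of that invalidation idea, assuming the ascii-art.js file above:

require('./ascii-art'); // loads the file and shows the header

// require.resolve gives us the exact key used in the cache registry.
delete require.cache[require.resolve('./ascii-art')];

require('./ascii-art'); // re-loads the file, so the header shows again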

However, this is not the most efficient solution for this case. The simple solution is to wrap the log

line in ascii-art.js with a function and export that function. This way, when we require the ascii-art.js file, we get a function that we can execute to invoke the log line every time:

require('./ascii-art')() // will show the header.


require('./ascii-art')() // will also show the header.

That’s all I have for this topic. Thanks for reading. Until next time!

Learning React or Node? Check out my books:

Learn React.js by Building Games

Node.js Beyond the Basics


Node.js — Check If a Path or File Exists
by Marcus Pöhls on March 11 2021, tagged in Node.js, 4 min read

When interacting with the file system, you may want to check whether a file exists on the hard disk at a given path. Node.js comes
with the fs core module allowing you to interact with the hard disk.

This tutorial shows you how to use Node.js to determine whether a file exists on disk.

Node.js Series Overview

1. Get a File’s Created Date

2. Get a File’s Last Modified/Updated Date

3. How to Create an Empty File

4. Check If a Path or File Exists


5. How to Rename a File

6. Check If a Path Is a Directory (Coming soon)

7. Check If a Path Is a File (Coming soon)

8. Retrieve the Path to the User’s Home Directory (Coming soon)

9. How to Touch a File (Coming soon)

Asynchronously Check if a File Exists in Node.js


The fs module in Node.js comes with a deprecated exists method. It’s recommended not to use this method anymore. Instead,
you should use the Fs#access method to check whether a file exists.

Well, Fs#access doesn’t return the desired boolean value ( true/false ). Instead, it expects a callback with an error as the only
argument. The callback support comes from the early days of Node.js where asynchronous operations used callbacks.
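
For reference, a minimal sketch of that callback style (the file name is just an example):

const Fs = require('fs')

Fs.access('existing-file.txt', (err) => {
  console.log(err ? 'file does not exist or is not accessible' : 'file exists')
})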

Starting in version 10.0, Node.js added support for promises and async/await for the fs module. This tutorial assumes you’re using
async/await for flow control of your code. Then, you can use the require('fs').promises version of Fs#access which is usable with
async/await.

Here’s a helper method returning a boolean value indicating whether a file exists at the given path :
const { promises: Fs } = require('fs')

async function exists (path) {
  try {
    await Fs.access(path)
    return true
  } catch {
    return false
  }
}

// Example:
const Path = require('path')
const path = Path.join(__dirname, "existing-file.txt")

await exists(path)
// true

Synchronously Check if a File Exists


You may also use the synchronous method Fs#existsSync to check whether a file exists on your hard disk. Please notice that this
method blocks the Node.js event loop for other operations while processing the file existence check:
const Fs = require('fs')
const Path = require('path')

const path = Path.join(__dirname, "existing-file.txt")

Fs.existsSync(path)
// true

Use the @supercharge/filesystem Package


I’m the maintainer of the @supercharge/filesystem package providing convenient file system utilities. Methods in
the @supercharge/filesystem package are async by default and don’t block the event loop.

You may use the exists method to check if a file path exists on your disk:

const Path = require('path')


const Fs = require('@supercharge/filesystem')

const path = Path.join(__dirname, "existing-file.txt")

await Fs.exists(path)
// true

Enjoy!
Mentioned Resources

Docs for the Node.js fs module

@supercharge/filesystem repository on GitHub



Node.js path.basename() Method



Example
Extract the filename from a file path:

var path = require('path');

var filename = path.basename('/Users/Refsnes/demo_path.js');


console.log(filename);

Definition and Usage
The path.basename() method returns the filename part of a file path.

Syntax
path.basename(path, extension);

Parameter Values
Parameter Description

path Required. The file path to search in

extension Optional. If the filename ends with the specified string, the specified string is excluded from the result

Technical Details
Return Value: The filename, as a String

Node.js Version: 0.1.25


More Examples

Example
Extract the filename, but not the ".js" at the end:

var path = require('path');

var filename = path.basename('/Users/Refsnes/demo_path.js', '.js');


console.log(filename);



How YOU can learn Node.js I/O, files and paths
Follow me on Twitter , happy to take your suggestions on topics or improvements /Chris
If you are completely new to Node.js, or maybe you've just spun up an Express app in Node.js but barely know anything else
about Node, then this first part in a series is for YOU.

In this part we will look at:

Working with file paths, it's important when working with files and directories that we understand how to
work with paths. There are so many things that can go wrong in terms of locating your files and parsing
expressions but Node.js does a really good job of keeping you on the straight and narrow thanks to built-in
variables and great core libraries
Working with Files and Directories, almost everything in Node.js comes in an async, and sync flavor.
It's important to understand why we should go with one over the other, but also how they differ in how you
invoke them.
Demo, finally we will build some demos demonstrating these functionalities
## The file system

The file system is an important part of many applications. This means working with files, directories but also
dealing with different access levels and paths.

In Node.js, working with files is a synchronous or an asynchronous process. Node.js is single-threaded, which
means that if we need to carry things out in parallel we need an approach that supports it. That approach is the
callback pattern.

## References

Node.js docs - file system This is the official docs page for the file system
Overview of the fs module Good overview that shows what methods are available on the fs module
Reading files Shows all you need to know about reading files
Writing files Docs page showing how to write files
Working with folders Shows how to work with folders
File stats If you need specific information on a file or directory like creation date, size etc, this is the page
to learn more.
Paths Working with paths can be tricky but this module makes that really easy.
Create a Node.js app on Azure Want to know how to take your Node.js app to the Cloud?
Log on to Azure programmatically using Node.js This teaches you how to programmatically connect to
your Azure resources using Node.js
## Paths

A file path represents where a directory or file is located in your file system. It can look like this:

/path/to/file.txt

The path looks different depending on whether we are dealing with a Linux-based or a Windows-based operating
system. On Windows the same path might look like this instead:

C:\path\to\file.txt

We need to take this into account when developing our application.

For this we have the built-in module path that we can use like so:

1 const path = require("path");

The path module can help us with the following operations:

Information, it can extract information from our path on things such as parent directory, filename and file
extension
Join, we can get help joining two paths so we don't have to worry about which OS our code is run on
Absolute path, we can get help calculating an absolute path
Normalization, we can get help cleaning up paths that contain segments like ./ or ../

## Demo - file paths

Pre-steps

1. Create a directory for your app


2. Navigate to your directory cd <name of dir>
3. Create app file, Now create a JavaScript file that will contain your code, the suggestion is app.js
4. Create file we can open, In the same directory create a file info.txt and give it some sample data if you
want

Information

Add the following code to your created app file.

1 const path = require("path");


2
3 const filePath = '/path/to/file.txt';
4 console.log(`Base name ${path.basename(filePath)}`);
5 console.log(`Dir name ${path.dirname(filePath)}`);
6 console.log(`Extension name ${path.extname(filePath)}`);
Now run this code with the following command:

node <name of your app file>.js

This should produce the following output

Base name file.txt
Dir name /path/to
Extension name .txt

Above we can see how the methods basename() , dirname() and extname() help us inspect our path to
give us different pieces of information.

Join paths

Here we will look into different ways of joining paths.

Add the following code to your existing application file:

const join = '/path';
const joinArg = '/to/my/file.txt';

console.log(`Joined ${path.join(join, joinArg)}`);

console.log(`Concat ${path.join(join, 'user','files','file.txt')}`)

Above we are joining the paths contained in variables join and joinArg but we are also in our last
example testing out concatenating using nothing but directory names and file names:

console.log(`Concat ${path.join(join, 'user','files','file.txt')}`)

Now run this using

node <name of your app file>.js

This should give the following output:

Joined /path/to/my/file.txt
Concat /path/user/files/file.txt

The takeaway here is that we can concatenate different paths using the join() method. However, because
we don't know if our app will be run on a Linux or Windows host machine, it's preferred that we construct
paths using nothing but directory and file names, like so:

console.log(`Concat ${path.join(join, 'user','files','file.txt')}`)


Absolute path

Add the following to our application file:

console.log(`Abs path ${path.resolve(joinArg)}`);
console.log(`Abs path ${path.resolve("info.txt")}`);

Now run this using

node <name of your app file>.js

This should give the following output:

Abs path /to/my/file.txt
Abs path <this is specific to your system>/info.txt

Note how, in our second example, we are using the resolve() method on info.txt, a file that exists in the
same directory as the one we run our code from:

1 console.log(`Abs path ${path.resolve("info.txt")}`);


The above will attempt to resolve the absolute path for the file.

Normalize paths

Sometimes we have characters like ./ or ../ in our path. The method normalize() helps us calculate
the resulting path. Add the below code to our application file:

console.log(`Normalize ${path.normalize('/path/to/file/../')}`)

Now run this using

node <name of your app file>.js

This should give the following output:

Normalize /path/to/

## Working with Files and Directories

There are many things you can do when interacting with the file system like:

Read/write files & directories


Read stats on a file
Working with permissions

You interact with the file system using the built-in module fs . To use it, import it like so:

const fs = require('fs')

I/O operations

Here is a selection of operations you can carry out on files/directories that exist on the fs module.

readFile() , reads the file content asynchronously

appendFile() , adds data to a file if it exists; if not, the file is created first

copyFile() , copies the file

readdir() , reads the content of a directory

mkdir() , creates a new directory,


rename() , renames a file or folder,

stat() , returns the stats of the file like when it was created, how big it is in Bytes and other info,

access() , check if file exists and if it can be accessed

All the above methods exist as synchronous versions as well. All you need to do is to append Sync at
the end, for example readFileSync() .
Async/Sync

All operations come in synchronous and asynchronous form. Node.js is single-threaded. The consequence
of running synchronous operations is therefore that we block anything else from happening. This
results in much less throughput than if your app was written in an asynchronous way.

Synchronous operation

In a synchronous operation, you are effectively stopping anything else from happening, which might make your
program less responsive. A synchronous file operation has Sync as part of the operation name, like
so:

const fileContent = fs.readFileSync('/path/to/file/file.txt', 'utf8');
console.log(fileContent);

Asynchronous operation

An Asynchronous operation is non-blocking. The way Node.js deals with asynchronous operations is by
using a callback model. What essentially happens is that Node.js doesn't wait for the operation to finish.
What you can do is to provide a callback, a function, that will be invoked once the operation has finished.
This gives rise to something called a callback pattern.
Below follows an example of opening a file:

const fs = require('fs');

fs.open('/path/to/file/file.txt', 'r', (err, fd) => {
  if (err) throw err;
  fs.close(fd, (err) => {
    if (err) throw err;
  });
});

Above we see how we provide a function as our third argument. The function itself takes an error err as
the first argument. The second argument is usually data resulting from the operation; in this case it is a
file descriptor, fd, which we then pass to fs.close().

## Demo - files and directories

In this exercise, we will learn how to work with the module fs to do things such as

Read/Write files, we will learn how to do so in an asynchronous and synchronous way


List stats, we will learn how to list stat information on a file
Open directory, here we will learn how to open up a directory and list its file content

Pre-steps

1. Create a directory for your app


2. Navigate to your directory cd <name of dir>
3. Create app file, Now create a JavaScript file that will contain your code, a suggestion is app.js
4. Sample file, In the same directory create a file info.txt and give it some sample data if you want
5. Create a sub directory with content, In the same directory create a folder sub and within create the
files a.txt, b.txt and c.txt. Now your directory structure should look like this:

app.js
info.txt
sub -|
---| a.txt
---| b.txt
---| c.txt

## Read/Write files

First, start by giving your app.js file the following content at the top:

const fs = require('fs');
const path = require('path');

Now we will work primarily with the module fs , but we will need the module path for helping us construct
a path later in the exercise.

Now, add the following content to app.js :


try {
  const fileContent = fs.readFileSync('info.txt', {
    encoding: 'utf8'
  });
  console.log(`Sync Content: ${fileContent}`);
} catch (exception) {
  console.error(`Sync Err: ${exception.message}`);
}

console.log('After sync call');

Above we are using the synchronous version of opening a file. We can see that through the use of a method
name ending in Sync.

Follow this up by adding the asynchronous version, like so:

fs.readFile('info.txt', (err, data) => {
  if (err) {
    console.log(`Async Error: ${err.message}`);
  } else {
    console.log(`Async Content: ${data}`);
  }
})

console.log('After async call');

Now run this code with the following command:


node <name of your app file>.js

This should produce the following output

Sync Content: info
After sync call
After async call
Async Content: info

Note above how the text After sync call is printed right after it lists the file content from our synchronous
call. Additionally, note how the text After async call is printed before Async Content: info . This means
anything asynchronous happens last. This is an important realization about asynchronous operations: they
may be non-blocking, but they don't complete right away. So if the order is important you should be looking at
constructs such as Promises and async/await.
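
As a hedged sketch of that idea (assuming Node.js 10 or later for the promise-based fs API):

const { promises: fsp } = require('fs');

async function main() {
  // await pauses this function until the read completes,
  // so the order of the two logs below is guaranteed
  const data = await fsp.readFile('info.txt', 'utf8');
  console.log(`Promise Content: ${data}`);
  console.log('After await');
}

main();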

### List stats

For various reasons, you may want to list detailed information on a specific file/directory. For that we
have the stat() method. This also comes in an asynchronous/synchronous version.

To use it, add the following code:


fs.stat('info.txt', (err, stats) => {
  if (err) {
    console.error(`Err ${err.message} `);
  } else {
    const { size, mode, mtime } = stats;

    console.log(`Size ${size}`);
    console.log(`Mode ${mode}`);
    console.log(`MTime ${mtime}`);
    console.log(`Is directory ${stats.isDirectory()}`);
    console.log(`Is file ${stats.isFile()}`);
  }
})

Now run this code with the following command:

node <name of your app file>.js

This should produce the following output

Size 4
Mode 33188
MTime Mon Mar 16 2020 19:04:31 GMT+0100 (Central European Standard Time)
Is directory false
Is file true
Results above may vary depending on what content you have in your file info.txt and when it was
created.

### Open a directory

Lastly, we will open up a directory using the method readdir() . This will produce an array of files/directories
contained within the specified directory:

fs.readdir(path.join(__dirname, 'sub'), (err, files) => {
  if (err) {
    console.error(`Err: ${err.message}`)
  } else {
    files.forEach(file => {
      console.log(`Open dir, File ${file}`);
    })
  }
})

Above we are constructing a directory path using the method join() from the path module, like so:

path.join(__dirname, 'sub')

__dirname is a built-in variable and simply means the executing directory. The method call means we will

look into a directory sub relative to where we are executing the code.
Now run this code with the following command:

node <name of your app file>.js

This should produce the following output

Open dir, File a.txt
Open dir, File b.txt
Open dir, File c.txt

Summary

In summary, we have covered the following areas:

Paths, we've looked at how we can work with paths using the built-in path module
Files & Directories, we've learned how we can use the fs module to create, update, remove, move etc.
files & directories.

There is lots more to learn in this area and I highly recommend looking at the reference section of this article
to learn more.

JSON Server tutorial


last modified July 7, 2020

JSON Server tutorial introduces the JavaScript json-server library, which can be used to create a fake REST API.


JSON server
The json-server is a JavaScript library for creating a testing REST API.

JSON Server installation


First, we create a project directory and install the json-server module.

$ mkdir json-server-lib
$ cd json-server-lib
$ npm init -y
$ npm i -g json-server

The JSON server module is installed globally with npm.

$ npm install axios

In addition, we install the axios module, which is a promise-based JavaScript HTTP client.
$ cat package.json
{
"name": "json-server-lib",
"version": "1.0.0",
"description": "",
"main": "index.js",
"dependencies": {
"axios": "^0.18.0"
},
"devDependencies": {},
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}

This is our package.json file.


JSON test data


We have some JSON test data:
users.json
{
"users": [
{
"id": 1,
"first_name": "Robert",
"last_name": "Schwartz",
"email": "[email protected]"
},
{
"id": 2,
"first_name": "Lucy",
"last_name": "Ballmer",
"email": "[email protected]"
},
{
"id": 3,
"first_name": "Anna",
"last_name": "Smith",
"email": "[email protected]"
},
{
"id": 4,
"first_name": "Robert",
"last_name": "Brown",
"email": "[email protected]"
},
{
"id": 5,
"first_name": "Roger",
"last_name": "Bacon",
"email": "[email protected]"
}
]
}

Starting JSON server


The JSON server is started with the json-server, which we have installed globally.

$ json-server --watch users.json

The --watch option is used to specify the data file for the server.

$ curl localhost:3000/users/3/
{
"id": 3,
"first_name": "Anna",
"last_name": "Smith",
"email": "[email protected]"
}

With the curl command, we get the user with Id 3.

JSON Server GET request


In the next example we retrieve data with a GET request.

get_request.js
const axios = require('axios');

axios.get('http://localhost:3000/users')
.then(resp => {
data = resp.data;
data.forEach(e => {
console.log(`${e.first_name}, ${e.last_name}, ${e.email}`);
});
})
.catch(error => {
console.log(error);
});

With the axios module, we get all users as a JSON array and loop through it with forEach().
$ node get_request.js
Robert, Schwartz, [email protected]
Lucy, Ballmer, [email protected]
Anna, Smith, [email protected]
Robert, Brown, [email protected]
Roger, Bacon, [email protected]


This is the output of the example. We get all users and print their full names and emails.
JSON Server POST request
With a POST request, we create a new user.

post_request.js
const axios = require('axios');

axios.post('http://localhost:3000/users', {
id: 6,
first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]'
}).then(resp => {
console.log(resp.data);
}).catch(error => {
console.log(error);
});

A new user is created with axios.

$ node post_request.js
{ id: 6,
first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]' }

The server responds with a newly created object.

$ curl localhost:3000/users/6/
{
    "id": 6,
    "first_name": "Fred",
    "last_name": "Blair",
    "email": "[email protected]"
}
We verify the newly created user with the curl command.

JSON Server modify data with PUT request


In the following example we modify data with a PUT request.

put_request.js
const axios = require('axios');

axios.put('http://localhost:3000/users/6/', {
    first_name: 'Fred',
    last_name: 'Blair',
    email: '[email protected]'
}).then(resp => {
    console.log(resp.data);
}).catch(error => {
    console.log(error);
});

In the example, we modify the user's email address.



$ node put_request.js
{ first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]',
id: 6 }

This is the output.
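A PUT request replaces the entire resource with the request body. The json-server also supports partial updates with a PATCH request; here is a minimal sketch that changes only the email field (the new address is a made-up example):

patch_request.js
const axios = require('axios');

axios.patch('http://localhost:3000/users/6/', {
    email: 'fred.blair@example.com'
}).then(resp => {
    console.log(resp.data);
}).catch(error => {
    console.log(error);
});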

JSON Server DELETE request


In the following example, we show how to delete a user with a DELETE request.

delete_request.js
const axios = require('axios');

axios.delete('http://localhost:3000/users/1/')
    .then(resp => {
        console.log(resp.data);
    }).catch(error => {
        console.log(error);
    });
In the example, we delete the user with Id 1.

$ node delete_request.js
{}

The server responds with empty JSON data.

JSON Server sorting data


In the next example, we sort our data.

sort_data.js
const axios = require('axios');

axios.get('http://localhost:3000/users?_sort=last_name&_order=asc')
    .then(resp => {
        const data = resp.data;
        data.forEach(e => {
            console.log(`${e.first_name}, ${e.last_name}, ${e.email}`);
        });
    }).catch(error => {
        console.log(error);
    });

The code example sorts data by the users' last name in ascending order. We use the _sort and _order query parameters.

$ node sort_data.js
Roger, Bacon, [email protected]
Lucy, Ballmer, [email protected]
Fred, Blair, [email protected]
Robert, Brown, [email protected]
Robert, Schwartz, [email protected]
Anna, Smith, [email protected]

This is the output.

JSON Server operators


We can use _gte and _lte for getting a specific range of data.

operators.js
const axios = require('axios');

axios.get('http://localhost:3000/users?id_gte=4')
    .then(resp => {
        console.log(resp.data);
    }).catch(error => {
        console.log(error);
    });

The code example shows users with an id greater than or equal to 4.

$ node operators.js
[ { id: 4,
first_name: 'Robert',
last_name: 'Brown',
email: '[email protected]' },
{ id: '5',
first_name: 'Roger',
last_name: 'Bacon',
email: '[email protected]' },
{ first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]',
id: 6 } ]

This is the output.
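The _gte and _lte operators can also be combined to select a range. A minimal sketch, assuming the same running server:

range.js
const axios = require('axios');

axios.get('http://localhost:3000/users?id_gte=2&id_lte=4')
    .then(resp => {
        console.log(resp.data);
    }).catch(error => {
        console.log(error);
    });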

JSON Server full text search


A full text search can be performed with the q parameter.

full_text_search.js
const axios = require('axios');

axios.get('http://localhost:3000/users?q=yahoo')
    .then(resp => {
        console.log(resp.data);
    }).catch(error => {
        console.log(error);
    });

The code example searches for the yahoo term.


$ node full_text_search.js
[ { id: 4,
first_name: 'Robert',
last_name: 'Brown',
email: '[email protected]' },
{ id: '5',
first_name: 'Roger',
last_name: 'Bacon',
email: '[email protected]' },
{ first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]',
id: 6 } ]

The search query returned these three users.

In this tutorial, we have introduced the JSON Server JavaScript library.

Introduction to the Path Module in Node.js



By Cooper Makhijani
Published on July 10, 2019


Many people forget about one of Node’s most useful built-in modules, the path module. It’s a module with methods that
help you deal with file and directory path names on the machine’s filesystem. In this article, we’re going to look at five of the
tools path provides.

Before we can start using the path module, we have to require it:

const path = require('path');

Something of note: path works a little bit differently depending on your OS, but that’s beyond the scope of this article. To read
more about the differences in the way path works on POSIX systems and Windows, see the path documentation.

Now that that’s out of the way, let’s look at all the things we can use path for.

path.join
One of the most commonly used path methods is path.join . The join method takes two or more parts of a file path and
joins them into one string that can be used anywhere that requires a file path. For this example, let’s say that we need the
file path of an image, and we have the name of the image. For simplicity’s sake, we’ll assume it’s a png.

const path = require('path');

let imageName = 'bob_smith';

let filepath = path.join(__dirname, 'images', 'useravatars', imageName + '.png');


// We'll talk about what __dirname does a little later on.

console.log('the file path of the image is', filepath);


// the filepath of the image is
// C:/Users/.../intro-to-the-path-module/images/useravatars/bob_smith.png
// (actual output shortened for readability)

→ path.join documentation

path.basename
According to the path docs, the path.basename method will give you the trailing part of a path. In layman's terms, it returns
either the name of the file or directory that the file path refers to. For this example, let's say we want to know the name of an
image, but we were passed the whole file path.

const path = require('path');

// Shortened for readability


let filepath = 'C:/Users/.../intro-to-the-path-module/images/useravatars/bob_smith.png';

let imageName = path.basename(filepath);

console.log('name of image:', imageName);


// name of image: bob_smith.png

Now this is cool and all, but what if we want it without the extension? Lucky for us, we just have to tell path.basename to
remove it.

const path = require('path');

// Shortened for readability


let filepath = 'C:/Users/.../intro-to-the-path-module/images/useravatars/bob_smith.png';

let imageName = path.basename(filepath, '.png');

console.log('name of image:', imageName);


// name of image: bob_smith

→ path.basename documentation

path.dirname
Sometimes we need to know the directory that a file is in, but the file path we have leads to a file within that directory. The
path.dirname function is here for us. path.dirname returns the lowest-level directory in a file path.
const path = require('path');

// Shortened for readability


let filepath = 'C:/Users/.../Pictures/Photos/India2019/DSC_0002.jpg';

let directoryOfFile = path.dirname(filepath);

console.log('The parent directory of the file is', directoryOfFile);


// The parent directory of the file is C:/Users/moose/Pictures/Photos/India2019

→ path.dirname documentation

path.extname
Say we need to know what the extension of a file is. For our example we’re going to make a function that tells us if a file is
an image. For simplicity’s sake, we’ll only be checking against the most common image types. We use path.extname to get
the extension of a file.

const path = require('path');

let imageTypes = ['.png', '.jpg', '.jpeg'];

function isImage(filepath) {
  let filetype = path.extname(filepath);

  if (imageTypes.includes(filetype)) {
    return true;
  } else {
    return false;
  }
}

isImage('picture.png'); // true
isImage('myProgram.exe'); // false
isImage('pictures/selfie.jpeg'); // true

→ path.extname documentation

path.normalize
Many file systems allow the use of shortcuts and references to make navigation easier, such as .. and . , meaning up one
directory and the current directory respectively. These are great for quick navigation and testing, but it's a good idea to have our
paths a little more readable. With path.normalize , we can convert a path containing these shortcuts to the actual path it
represents. path.normalize can handle even the most convoluted paths, as our example shows.

const path = require('path');

path.normalize('/hello/world/lets/go/deeper/./wait/this/is/too/deep/lets/go/back/some/../../../../../../../../..');
// returns: /hello/world/lets/go/deeper

→ path.normalize documentation

🎉 We’re done! That’s all we’re going to cover in this article. Keep in mind that there’s way more to path than what’s covered
here, so I encourage you to check out the official path documentation. Hopefully you learned something, and thanks for
reading!


How To Use __dirname in Node.js



By William Le
Last Validated on October 20, 2020 · Originally Published on May 23, 2019

Introduction
__dirname is a Node.js variable, available in every CommonJS module, that tells you the absolute path of the directory containing the currently executing file.

In this article, you will explore how to implement __dirname in your Node.js project.

Prerequisites
To complete this tutorial, you will need:

A general knowledge of Node.js. To learn more about Node.js, check out our How To Code in Node.js series.

Structuring Your Directories


This tutorial will use the following sample directory structure to explore how __dirname works. To begin your Node.js
project, let’s organize your directories and files:

node-app
├──index.js
├──public
├──src
│ ├──helpers.js
│ └──api
│ └──controller.js
├──cronjobs
│ ├──pictures
│ └──hello.js
└──package.json

You can use __dirname to check on which directories your files live:

controller.js

console.log(__dirname) // "/Users/Sam/node-app/src/api"
console.log(process.cwd()) // "/Users/Sam/node-app"

hello.js

console.log(__dirname) // "/Users/Sam/node-app/cronjobs"
console.log(process.cwd()) // "/Users/Sam/node-app"

Notice that __dirname has a different value depending on the file in which you log it. The process.cwd() method also
returns a path, but it is the current working directory of the process (here, the project root) instead. The __dirname variable
always returns the absolute path of where your files live.

Working With Directories


In this section, you will explore how to use __dirname to make new directories, point to them, as well as adding new files.

Making New Directories


To create a new directory in your index.js file, insert __dirname as the first argument to path.join() and the name of the
new directory as the second:

index.js

const fs = require('fs');
const path = require('path');
const dirPath = path.join(__dirname, '/pictures');

fs.mkdirSync(dirPath);

Now you’ve created a new directory, pictures , after calling on the mdirSync() method, which contains __dirname as the
absolute path.

Pointing to Directories
Another unique feature is its ability to point to directories. In your index.js file, declare a variable and pass in the value of
__dirname as the first argument in path.join() , and your directory containing static files as the second:

index.js

express.static(path.join(__dirname, '/public'));
Here, you’re telling Node.js to use __dirname to point to the public directory that contains static files.

Adding Files to a Directory


You may also add files to an existing directory. In your index.js file, declare a variable and include __dirname as the first
argument and the file you want to add as the second:

index.js

const fs = require('fs');
const path = require('path');
const filePath = path.join(__dirname, '/pictures', 'hello.jpeg');

fs.openSync(filePath, 'w');

Using the openSync() method with the 'w' flag will create the file if it does not exist within your directory.

Conclusion
Node.js provides a way for you to make and point to directories, and add files to existing directories, with the __dirname
variable.

For further reading, check out the Node.js documentation for __dirname , and the tutorial on using __dirname in the
Express.js framework.



Node.js File Paths



Every file in the system has a path.

On Linux and macOS, a path might look like:

/users/joe/file.txt

while Windows computers are different, and have a structure such as:

C:\users\joe\file.txt

You need to pay attention when using paths in your applications, as this difference must be taken into
account.

You include the path module in your files using

const path = require('path')

and you can start using its methods.

Getting information out of a path


Given a path, you can extract information out of it using these methods:
dirname : get the parent folder of a file

basename : get the filename part

extname : get the file extension

Example:

const notes = '/users/joe/notes.txt'

path.dirname(notes) // /users/joe
path.basename(notes) // notes.txt
path.extname(notes) // .txt

You can get the file name without the extension by specifying a second argument to basename :

path.basename(notes, path.extname(notes)) //notes

Working with paths


You can join two or more parts of a path by using path.join() :

const name = 'joe'


path.join('/', 'users', name, 'notes.txt') //'/users/joe/notes.txt'

You can get the absolute path calculation of a relative path using path.resolve() :

path.resolve('joe.txt') //'/Users/joe/joe.txt' if run from my home folder


In this case Node.js will simply append /joe.txt to the current working directory. If you specify a second
parameter folder, resolve will use the first as a base for the second:

path.resolve('tmp', 'joe.txt') //'/Users/joe/tmp/joe.txt' if run from my home folder

If the first parameter starts with a slash, that means it's an absolute path:

path.resolve('/etc', 'joe.txt') //'/etc/joe.txt'

path.normalize() is another useful function that will try to calculate the actual path when it contains
relative specifiers like . or .. , or double slashes:

path.normalize('/users/joe/..//test.txt') //'/users/test.txt'

Neither resolve nor normalize will check if the path exists. They just calculate a path based on the
information they got.


Node.js file stats
Every file comes with a set of details that we can inspect using Node.js.

In particular, using the stat() method provided by the fs module.

You call it passing a file path, and once Node.js gets the file details it will call the callback function you
pass, with 2 parameters: an error message, and the file stats:

const fs = require('fs')

fs.stat('/Users/joe/test.txt', (err, stats) => {
  if (err) {
    console.error(err)
    return
  }
  // we have access to the file stats in `stats`
})

Node.js also provides a sync method, which blocks the thread until the file stats are ready:

const fs = require('fs')

try {
  const stats = fs.statSync('/Users/joe/test.txt')
} catch (err) {
  console.error(err)
}
The file information is included in the stats variable. What kind of information can we extract using the
stats?

A lot, including:

if the file is a directory or a file, using stats.isFile() and stats.isDirectory()

if the file is a symbolic link, using stats.isSymbolicLink()

the file size in bytes, using stats.size

There are other advanced methods, but the bulk of what you'll use in your day-to-day programming is this.

const fs = require('fs')

fs.stat('/Users/joe/test.txt', (err, stats) => {
  if (err) {
    console.error(err)
    return
  }

  stats.isFile() // true
  stats.isDirectory() // false
  stats.isSymbolicLink() // false
  stats.size // 1024000 //= 1MB
})
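In recent Node.js versions you can also use the promise-based API exposed as fs.promises. A minimal sketch, assuming a Node.js version where fs.promises is available:

const fs = require('fs')

async function printSize(filePath) {
  try {
    const stats = await fs.promises.stat(filePath)
    console.log(stats.size)
  } catch (err) {
    console.error(err)
  }
}

printSize('/Users/joe/test.txt')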


Working with folders in Node.js



The Node.js fs core module provides many handy methods you can use to work with folders.

Check if a folder exists


Use fs.access() to check if the folder exists and Node.js can access it with its permissions.
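A minimal sketch of such a check (the folder path is just an example):

const fs = require('fs')

const folderName = '/Users/joe/test'

fs.access(folderName, err => {
  if (err) {
    console.error('cannot access the folder (it may not exist)')
    return
  }
  // the folder exists and is accessible
})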

Create a new folder


Use fs.mkdir() or fs.mkdirSync() to create a new folder.

const fs = require('fs')

const folderName = '/Users/joe/test'

try {
  if (!fs.existsSync(folderName)) {
    fs.mkdirSync(folderName)
  }
} catch (err) {
  console.error(err)
}
Read the content of a directory
Use fs.readdir() or fs.readdirSync() to read the contents of a directory.

This piece of code reads the content of a folder, both files and subfolders, and returns their relative paths:

const fs = require('fs')

const folderPath = '/Users/joe'

fs.readdirSync(folderPath)

You can get the full path:

const path = require('path')

fs.readdirSync(folderPath).map(fileName => {
  return path.join(folderPath, fileName)
})

You can also filter the results to only return the files, and exclude the folders:

const isFile = fileName => {
  return fs.lstatSync(fileName).isFile()
}

fs.readdirSync(folderPath).map(fileName => {
  return path.join(folderPath, fileName)
})
.filter(isFile)

Rename a folder

Use fs.rename() or fs.renameSync() to rename a folder. The first parameter is the current path, the
second the new path:

const fs = require('fs')

fs.rename('/Users/joe', '/Users/roger', err => {
  if (err) {
    console.error(err)
    return
  }
  // done
})

fs.renameSync() is the synchronous version:

const fs = require('fs')

try {
  fs.renameSync('/Users/joe', '/Users/roger')
} catch (err) {
  console.error(err)
}

Remove a folder
Use fs.rmdir() or fs.rmdirSync() to remove a folder.
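For an empty folder, a minimal sketch looks like this (the path is just an example; the call fails if the folder has content):

const fs = require('fs')

fs.rmdir('/Users/joe/emptyFolder', err => {
  if (err) {
    console.error(err)
    return
  }
  // done
})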

Removing a folder that has content can be more complicated than you might expect.
In this case it's best to install the fs-extra module, which is very popular and well maintained. It's a
drop-in replacement of the fs module, which provides more features on top of it.
In this case the remove() method is what you want.

Install it using

npm install fs-extra

and use it like this:

const fs = require('fs-extra')

const folder = '/Users/joe'

fs.remove(folder, err => {
  if (err) {
    console.error(err)
  }
  // done
})

It can also be used with promises:

fs.remove(folder)
  .then(() => {
    // done
  })
  .catch(err => {
    console.error(err)
  })

or with async/await:
async function removeFolder(folder) {
  try {
    await fs.remove(folder)
    // done
  } catch (err) {
    console.error(err)
  }
}

const folder = '/Users/joe'

removeFolder(folder)

Writing files with Node.js

The easiest way to write to files in Node.js is to use the fs.writeFile() API.

Example:

const fs = require('fs')
const content = 'Some content!'

fs.writeFile('/Users/joe/test.txt', content, err => {
  if (err) {
    console.error(err)
    return
  }
  // file written successfully
})

Alternatively, you can use the synchronous version fs.writeFileSync() :

const fs = require('fs')

const content = 'Some content!'

try {
  fs.writeFileSync('/Users/joe/test.txt', content)
  // file written successfully
} catch (err) {
  console.error(err)
}

By default, this API will replace the contents of the file if it already exists.

You can modify the default by specifying a flag:

fs.writeFile('/Users/joe/test.txt', content, { flag: 'a+' }, err => {})

The flags you'll likely use are

r+ open the file for reading and writing

w+ open the file for reading and writing, positioning the stream at the beginning of the file. The file
is created if not existing
a open the file for writing, positioning the stream at the end of the file. The file is created if not
existing
a+ open the file for reading and writing, positioning the stream at the end of the file. The file is
created if not existing

(you can find more flags at https://nodejs.org/api/fs.html#fs_file_system_flags)

Append to a file
A handy method to append content to the end of a file is fs.appendFile() (and its
fs.appendFileSync() counterpart):
const fs = require('fs')

const content = 'Some content!'

fs.appendFile('file.log', content, err => {
  if (err) {
    console.error(err)
    return
  }
  // done!
})

Using streams
All those methods write the full content to the file before returning control back to your program (in
the async version, this means executing the callback).

In this case, a better option is to write the file content using streams.
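A minimal sketch of stream-based writing, using fs.createWriteStream() (the file path is just an example):

const fs = require('fs')

const stream = fs.createWriteStream('/Users/joe/test.txt')

stream.write('Some content!')
stream.write('Some more content!')
stream.end() // signal that we're done writing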

The Node.js fs module
The fs module provides a lot of very useful functionality to access and interact with the file system.

There is no need to install it. Being part of the Node.js core, it can be used by simply requiring it:
const fs = require('fs')

Once you do so, you have access to all its methods, which include:

fs.access() : check if the file exists and Node.js can access it with its permissions
fs.appendFile() : append data to a file. If the file does not exist, it's created
fs.chmod() : change the permissions of a file specified by the filename passed. Related:
fs.lchmod() , fs.fchmod()
fs.chown() : change the owner and group of a file specified by the filename passed. Related:
fs.fchown() , fs.lchown()
fs.close() : close a file descriptor
fs.copyFile() : copies a file
fs.createReadStream() : create a readable file stream
fs.createWriteStream() : create a writable file stream
fs.link() : create a new hard link to a file
fs.mkdir() : create a new folder
fs.mkdtemp() : create a temporary directory
fs.open() : open a file, returning a file descriptor
fs.readdir() : read the contents of a directory
fs.readFile() : read the content of a file. Related: fs.read()
fs.readlink() : read the value of a symbolic link
fs.realpath() : resolve relative file path pointers ( . , .. ) to the full path
fs.rename() : rename a file or folder
fs.rmdir() : remove a folder
fs.stat() : returns the status of the file identified by the filename passed. Related: fs.fstat() ,
fs.lstat()
fs.symlink() : create a new symbolic link to a file
fs.truncate() : truncate to the specified length the file identified by the filename passed. Related:
fs.ftruncate()
fs.unlink() : remove a file or a symbolic link
fs.unwatchFile() : stop watching for changes on a file
fs.utimes() : change the timestamp of the file identified by the filename passed. Related:
fs.futimes()
fs.watchFile() : start watching for changes on a file. Related: fs.watch()
fs.writeFile() : write data to a file. Related: fs.write()

One peculiar thing about the fs module is that all the methods are asynchronous by default, but they
can also work synchronously by appending Sync .

For example:

fs.rename()

fs.renameSync()
fs.write()
fs.writeSync()

This makes a huge difference in your application flow.

Node.js 10 includes experimental support for a promise-based API.
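A minimal sketch of that promise-based API, assuming a Node.js version where fs.promises is available:

const fs = require('fs')

async function rename() {
  try {
    await fs.promises.rename('before.json', 'after.json')
    // done
  } catch (err) {
    console.error(err)
  }
}

rename()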

For example, let's examine the fs.rename() method. The asynchronous API is used with a callback:

const fs = require('fs')

fs.rename('before.json', 'after.json', err => {
  if (err) {
    return console.error(err)
  }

  // done
})

A synchronous API can be used like this, with a try/catch block to handle errors:

const fs = require('fs')

try {
  fs.renameSync('before.json', 'after.json')
  // done
} catch (err) {
  console.error(err)
}

The key difference here is that the execution of your script will block in the second example until the file
operation has completed.
Error handling in Node.js

Errors in Node.js are handled through exceptions.

Creating exceptions
An exception is created using the throw keyword:

throw value

As soon as JavaScript executes this line, the normal program flow is halted and control is passed back to
the nearest exception handler.

Usually in client-side code value can be any JavaScript value including a string, a number or an object.

In Node.js, we don't throw strings, we just throw Error objects.

Error objects
An error object is an object that is either an instance of the Error object, or one that extends the built-in
Error class:

throw new Error('Ran out of coffee')

or
class NotEnoughCoffeeError extends Error {
  // ...
}
throw new NotEnoughCoffeeError()

Handling exceptions
An exception handler is a try / catch statement.

Any exception raised in the lines of code included in the try block is handled in the corresponding
catch block:

try {
  // lines of code
} catch (e) {}

e in this example is the exception value.

You can handle different kinds of errors by inspecting the exception value inside the catch block.
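A minimal sketch of that pattern, using instanceof to branch on the error type (the error classes are just examples):

try {
  // lines of code
} catch (e) {
  if (e instanceof RangeError) {
    // handle out-of-range values
  } else if (e instanceof TypeError) {
    // handle wrong types
  } else {
    throw e // rethrow anything we don't know how to handle
  }
}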

Catching uncaught exceptions


If an uncaught exception gets thrown during the execution of your program, your program will crash.

To solve this, you listen for the uncaughtException event on the process object:
process.on('uncaughtException', err => {
  console.error('There was an uncaught error', err)
  process.exit(1) // mandatory (as per the Node.js docs)
})

You don't need to import the process core module for this, as it's automatically injected.

Exceptions with promises


Using promises you can chain different operations, and handle errors at the end:

doSomething1()
  .then(doSomething2)
  .then(doSomething3)
  .catch(err => console.error(err))

How do you know where the error occurred? You don't really know, but you can handle errors in each of
the functions you call ( doSomethingX ), and inside the error handler throw a new error that's going to trigger
the outside catch handler:

const doSomething1 = () => {
  //...
  try {
    //...
  } catch (err) {
    //... handle it locally
    throw new Error(err.message)
  }
  //...
}

To be able to handle errors locally without handling them in the function we call, we can break the chain:
create a function in each then() and process the exception there:

doSomething1()
  .then(() => {
    return doSomething2().catch(err => {
      // handle error
      throw err // break the chain!
    })
  })
  .then(() => {
    return doSomething3().catch(err => {
      // handle error
      throw err // break the chain!
    })
  })
  .catch(err => console.error(err))

Error handling with async/await


Using async/await, you still need to catch errors, and you do it this way:

async function someFunction() {
  try {
    await someOtherFunction()
  } catch (err) {
    console.error(err.message)
  }
}

Node.js Streams

What are streams


Streams are one of the fundamental concepts that power Node.js applications.

They are a way to handle reading/writing files, network communications, or any kind of end-to-end
information exchange in an efficient way.

Streams are not a concept unique to Node.js. They were introduced in the Unix operating system decades
ago, and programs can interact with each other passing streams through the pipe operator ( | ).

For example, in the traditional way, when you tell the program to read a file, the file is read into memory,
from start to finish, and then you process it.

Using streams you read it piece by piece, processing its content without keeping it all in memory.

The Node.js stream module provides the foundation upon which all streaming APIs are built. All streams
are instances of EventEmitter

Why streams
Streams basically provide two major advantages over using other data handling methods:

Memory efficiency: you don't need to load large amounts of data in memory before you are able to
process it
Time efficiency: it takes way less time to start processing data, since you can start processing as
soon as you have it, rather than waiting till the whole data payload is available

An example of a stream
A typical example is reading files from a disk.

Using the Node.js fs module, you can read a file, and serve it over HTTP when a new connection is
established to your HTTP server:

const http = require('http')
const fs = require('fs')

const server = http.createServer(function (req, res) {
  fs.readFile(__dirname + '/data.txt', (err, data) => {
    res.end(data)
  })
})
server.listen(3000)

readFile() reads the full contents of the file, and invokes the callback function when it's done.

res.end(data) in the callback will return the file contents to the HTTP client.

If the file is big, the operation will take quite a bit of time. Here is the same thing written using streams:

const http = require('http')
const fs = require('fs')

const server = http.createServer((req, res) => {
  const stream = fs.createReadStream(__dirname + '/data.txt')
  stream.pipe(res)
})
server.listen(3000)

Instead of waiting until the file is fully read, we start streaming it to the HTTP client as soon as we have a
chunk of data ready to be sent.

pipe()
The above example uses the line stream.pipe(res) : the pipe() method is called on the file stream.

What does this code do? It takes the source, and pipes it into a destination.

You call it on the source stream, so in this case, the file stream is piped to the HTTP response.

The return value of the pipe() method is the destination stream, which is a very convenient thing that
lets us chain multiple pipe() calls, like this:

src.pipe(dest1).pipe(dest2)

This construct is the same as doing

src.pipe(dest1)
dest1.pipe(dest2)

Streams-powered Node.js APIs


Due to their advantages, many Node.js core modules provide native stream handling capabilities, most
notably:

process.stdin returns a stream connected to stdin


process.stdout returns a stream connected to stdout

process.stderr returns a stream connected to stderr

fs.createReadStream() creates a readable stream to a file

fs.createWriteStream() creates a writable stream to a file
net.connect() initiates a stream-based connection
http.request() returns an instance of the http.ClientRequest class, which is a writable stream
zlib.createGzip() compress data using gzip (a compression algorithm) into a stream

zlib.createGunzip() decompress a gzip stream

zlib.createDeflate() compress data using deflate (a compression algorithm) into a stream
zlib.createInflate() decompress a deflate stream

Different types of streams


There are four classes of streams:

Readable : a stream you can pipe from, but not pipe into (you can receive data, but not send data
to it). When you push data into a readable stream, it is buffered, until a consumer starts to read the
data.
Writable : a stream you can pipe into, but not pipe from (you can send data, but not receive from
it)
Duplex : a stream you can both pipe into and pipe from, basically a combination of a Readable and
Writable stream
Transform : a Transform stream is similar to a Duplex, but the output is a transform of its input

How to create a readable stream


We get the Readable stream from the stream module, and we initialize it and implement the
readable._read() method.

First create a stream object:



const Stream = require('stream')


const readableStream = new Stream.Readable()

then implement _read :

readableStream._read = () => {}

You can also implement _read using the read option:

const readableStream = new Stream.Readable({
  read() {}
})

Now that the stream is initialized, we can send data to it:

readableStream.push('hi!')
readableStream.push('ho!')

How to create a writable stream


To create a writable stream we extend the base Writable object, and we implement its _write() method.

First create a stream object:

const Stream = require('stream')


const writableStream = new Stream.Writable()

then implement _write :

writableStream._write = (chunk, encoding, next) => {
  console.log(chunk.toString())
  next()
}

You can now pipe a readable stream in:

process.stdin.pipe(writableStream)

How to get data from a readable stream


How do we read data from a readable stream? Using a writable stream:

const Stream = require('stream')

const readableStream = new Stream.Readable({
  read() {}
})
const writableStream = new Stream.Writable()

writableStream._write = (chunk, encoding, next) => {
  console.log(chunk.toString())
  next()
}

readableStream.pipe(writableStream)
readableStream.push('hi!')
readableStream.push('ho!')

You can also consume a readable stream directly, using the readable event:

readableStream.on('readable', () => {
  console.log(readableStream.read())
})

How to send data to a writable stream


Using the stream write() method:

writableStream.write('hey!\n')

Signaling a writable stream that you ended writing


Use the end() method:

const Stream = require('stream')

const readableStream = new Stream.Readable({
  read() {}
})
const writableStream = new Stream.Writable()

writableStream._write = (chunk, encoding, next) => {
  console.log(chunk.toString())
  next()
}

readableStream.pipe(writableStream)

readableStream.push('hi!')
readableStream.push('ho!')

writableStream.end()

How to create a transform stream


We get the Transform stream from the stream module, and we initialize it and implement the
transform._transform() method.

First create a transform stream object:

const { Transform } = require('stream')

const TransformStream = new Transform()

then implement _transform :

TransformStream._transform = (chunk, encoding, callback) => {
  console.log(chunk.toString().toUpperCase());
  callback();
}

Pipe readable stream:


process.stdin.pipe(TransformStream);


Node.js Buffers

What is a buffer?
A buffer is an area of memory. JavaScript developers are typically far less familiar with this concept than C,
C++ or Go developers (or any programmer using a systems programming language), who interact with
memory every day.

It represents a fixed-size chunk of memory (can't be resized) allocated outside of the V8 JavaScript engine.

You can think of a buffer like an array of integers, each of which represents a byte of data.

It is implemented by the Node.js Buffer class.

Why do we need a buffer?


Buffers were introduced to help developers deal with binary data, in an ecosystem that traditionally only
dealt with strings rather than binaries.

Buffers are deeply linked with streams. When a stream processor receives data faster than it can digest, it
puts the data in a buffer.

A simple visualization of a buffer is when you are watching a YouTube video and the red line goes beyond
your visualization point: you are downloading data faster than you're viewing it, and your browser buffers
it.
How to create a buffer
A buffer is created using the Buffer.from() , Buffer.alloc() , and Buffer.allocUnsafe() methods.

const buf = Buffer.from('Hey!')

Buffer.from(array)
Buffer.from(arrayBuffer[, byteOffset[, length]])
Buffer.from(buffer)
Buffer.from(string[, encoding])

You can also just initialize the buffer passing the size. This creates a 1KB buffer:

const buf = Buffer.alloc(1024)


//or
const buf = Buffer.allocUnsafe(1024)

While both alloc and allocUnsafe allocate a Buffer of the specified size in bytes, the Buffer
created by alloc will be initialized with zeroes and the one created by allocUnsafe will be uninitialized.
This means that while allocUnsafe would be quite fast in comparison to alloc , the allocated segment
of memory may contain old data which could potentially be sensitive.

Older data, if present in the memory, can be accessed or leaked when the Buffer memory is read. This
is what really makes allocUnsafe unsafe and extra care must be taken while using it.

Using a buffer

Access the content of a buffer


A buffer, being an array of bytes, can be accessed like an array:

const buf = Buffer.from('Hey!')


console.log(buf[0]) //72
console.log(buf[1]) //101
console.log(buf[2]) //121

Those numbers are the Unicode codes that identify the characters in the buffer positions (H => 72, e =>
101, y => 121)

You can print the full content of the buffer using the toString() method:

console.log(buf.toString())

Notice that if you initialize a buffer with Buffer.allocUnsafe() and a number that sets its size, the
memory is not zeroed out and may contain old data, not an empty buffer!

Get the length of a buffer

Use the length property:

const buf = Buffer.from('Hey!')


console.log(buf.length)

Iterate over the contents of a buffer


const buf = Buffer.from('Hey!')

for (const item of buf) {
  console.log(item) // 72 101 121 33
}

Changing the content of a buffer

You can write to a buffer a whole string of data by using the write() method:

const buf = Buffer.alloc(4)


buf.write('Hey!')

Just like you can access a buffer with an array syntax, you can also set the contents of the buffer in the
same way:

const buf = Buffer.from('Hey!')


buf[1] = 111 //o
console.log(buf.toString()) //Hoy!

Copy a buffer

Copying a buffer is possible using the copy() method:

const buf = Buffer.from('Hey!')


let bufcopy = Buffer.alloc(4) //allocate 4 bytes
buf.copy(bufcopy)

By default you copy the whole buffer. 3 more parameters let you define the target buffer starting position
to copy to, the source buffer starting position to copy from, and the new buffer length:

const buf = Buffer.from('Hey!')


let bufcopy = Buffer.alloc(2) //allocate 2 bytes
buf.copy(bufcopy, 0, 0, 2)
bufcopy.toString() //'He'

Slice a buffer

If you want to create a partial visualization of a buffer, you can create a slice. A slice is not a copy: the
original buffer is still the source of truth. If that changes, your slice changes.

Use the slice() method to create it. The first parameter is the starting position, and you can specify an
optional second parameter with the end position:

const buf = Buffer.from('Hey!')


buf.slice(0).toString() //Hey!
const slice = buf.slice(0, 2)
console.log(slice.toString()) //He
buf[1] = 111 //o
console.log(slice.toString()) //Ho


The Node.js http module



The HTTP core module is a key module to Node.js networking.

It can be included using

const http = require('http')

The module provides some properties and methods, and some classes.

Properties

http.METHODS

This property lists all the HTTP methods supported:

> require('http').METHODS
[ 'ACL',
'BIND',
'CHECKOUT',
'CONNECT',
'COPY',
'DELETE',
'GET',
'HEAD',
'LINK',
'LOCK',
'M-SEARCH',
'MERGE',
'MKACTIVITY',
'MKCALENDAR',
'MKCOL',
'MOVE',
'NOTIFY',
'OPTIONS',
'PATCH',
'POST',
'PROPFIND',
'PROPPATCH',
'PURGE',
'PUT',
'REBIND',
'REPORT',
'SEARCH',
'SUBSCRIBE',
'TRACE',
'UNBIND',
'UNLINK',
'UNLOCK',
'UNSUBSCRIBE' ]

http.STATUS_CODES

This property lists all the HTTP status codes and their description:

> require('http').STATUS_CODES
{ '100': 'Continue',
'101': 'Switching Protocols',
'102': 'Processing',
'200': 'OK',
'201': 'Created',
'202': 'Accepted',
'203': 'Non-Authoritative Information',
'204': 'No Content',
'205': 'Reset Content',
'206': 'Partial Content',
'207': 'Multi-Status',
'208': 'Already Reported',
'226': 'IM Used',
'300': 'Multiple Choices',
'301': 'Moved Permanently',
'302': 'Found',
'303': 'See Other',
'304': 'Not Modified',
'305': 'Use Proxy',
'307': 'Temporary Redirect',
'308': 'Permanent Redirect',
'400': 'Bad Request',
'401': 'Unauthorized',
'402': 'Payment Required',
'403': 'Forbidden',
'404': 'Not Found',
'405': 'Method Not Allowed',
'406': 'Not Acceptable',
'407': 'Proxy Authentication Required',
'408': 'Request Timeout',
'409': 'Conflict',
'410': 'Gone',
'411': 'Length Required',
'412': 'Precondition Failed',
'413': 'Payload Too Large',
'414': 'URI Too Long',
'415': 'Unsupported Media Type',
'416': 'Range Not Satisfiable',
'417': 'Expectation Failed',
'418': 'I\'m a teapot',
'421': 'Misdirected Request',
'422': 'Unprocessable Entity',
'423': 'Locked',
'424': 'Failed Dependency',
'425': 'Unordered Collection',
'426': 'Upgrade Required',
'428': 'Precondition Required',
'429': 'Too Many Requests',
'431': 'Request Header Fields Too Large',
'451': 'Unavailable For Legal Reasons',
'500': 'Internal Server Error',
'501': 'Not Implemented',
'502': 'Bad Gateway',
'503': 'Service Unavailable',
'504': 'Gateway Timeout',
'505': 'HTTP Version Not Supported',
'506': 'Variant Also Negotiates',
'507': 'Insufficient Storage',
'508': 'Loop Detected',
'509': 'Bandwidth Limit Exceeded',
'510': 'Not Extended',
'511': 'Network Authentication Required' }

http.globalAgent

Points to the global instance of the Agent object, which is an instance of the http.Agent class.

It's used to manage connection persistence and reuse for HTTP clients, and it's a key component of
Node.js HTTP networking.

More in the http.Agent class description later on.


Methods

http.createServer()

Return a new instance of the http.Server class.

Usage:

const server = http.createServer((req, res) => {
  // handle every single request with this callback
})

http.request()

Makes an HTTP request to a server, creating an instance of the http.ClientRequest class.

http.get()

Similar to http.request() , but automatically sets the HTTP method to GET, and calls req.end()
automatically.
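A minimal sketch of http.get() (the URL assumes some server listening on localhost:3000):

const http = require('http')

http.get('http://localhost:3000/', res => {
  let data = ''
  res.on('data', chunk => {
    data += chunk
  })
  res.on('end', () => {
    console.log(data)
  })
})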

Classes
The HTTP module provides 5 classes:

http.Agent

http.ClientRequest
http.Server
http.ServerResponse

http.IncomingMessage

http.Agent

Node.js creates a global instance of the http.Agent class to manage connection persistence and reuse
for HTTP clients, a key component of Node.js HTTP networking.

This object makes sure that every request made to a server is queued and a single socket is reused.

It also maintains a pool of sockets. This is key for performance reasons.

http.ClientRequest

An http.ClientRequest object is created when http.request() or http.get() is called.

When a response is received, the response event is emitted with an http.IncomingMessage instance as
its argument.

The returned data of a response can be read in 2 ways:

you can call the response.read() method


in the response event handler you can set up an event listener for the data event, so you can
listen for the data as it streams in.

http.Server

This class is commonly instantiated and returned when creating a new server using
http.createServer() .

Once you have a server object, you have access to its methods:
close() stops the server from accepting new connections

listen() starts the HTTP server and listens for connections

http.ServerResponse

Created by an http.Server and passed as the second parameter to the request event it fires.

Commonly known and used in code as res :

const server = http.createServer((req, res) => {


//res is an http.ServerResponse object
})

The method you'll always call in the handler is end() , which closes the response; the message is
complete and the server can send it to the client. It must be called on each response.

These methods are used to interact with HTTP headers:

getHeaderNames() get the list of the names of the HTTP headers already set
getHeaders() get a copy of the HTTP headers already set
setHeader('headername', value) sets an HTTP header value
getHeader('headername') gets an HTTP header already set
removeHeader('headername') removes an HTTP header already set
hasHeader('headername') returns true if the response has that header set

headersSent() returns true if the headers have already been sent to the client

After processing the headers you can send them to the client by calling response.writeHead() , which
accepts the statusCode as the first parameter, the optional status message, and the headers object.
To send data to the client in the response body, you use write() . It will send buffered data to the HTTP
response stream.

If the headers were not sent yet using response.writeHead() , it will send the headers first, with the
status code and message that's set in the request, which you can edit by setting the statusCode and
statusMessage properties values:

response.statusCode = 500
response.statusMessage = 'Internal Server Error'

http.IncomingMessage

An http.IncomingMessage object is created by:

http.Server when listening to the request event


http.ClientRequest when listening to the response event

It can be used to access the response:

status using its statusCode and statusMessage properties


headers using its headers or rawHeaders property
HTTP method using its method property

HTTP version using the httpVersion property


URL using the url property
underlying socket using the socket property

The data is accessed using streams, since http.IncomingMessage implements the Readable Stream
interface.


The Node.js events module



The events module provides us the EventEmitter class, which is key to working with events in Node.js.

const EventEmitter = require('events')


const door = new EventEmitter()

The event listener eats its own dog food and uses these events:

newListener when a listener is added


removeListener when a listener is removed

Here's a detailed description of the most useful methods:

emitter.addListener()

Alias for emitter.on() .

emitter.emit()

Emits an event. It synchronously calls every event listener in the order they were registered.
door.emit("slam") // emitting the event "slam"
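Any additional arguments passed to emit() are forwarded to the listeners. A minimal sketch (the event name and arguments are made up for illustration):

door.on('open', (by, at) => {
  console.log(`Door opened by ${by} at ${at}`)
})

door.emit('open', 'joe', '10:00')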

emitter.eventNames()

Return an array of strings that represent the events registered on the current EventEmitter object:

door.eventNames()

emitter.getMaxListeners()

Get the maximum amount of listeners one can add to an EventEmitter object, which defaults to 10 but
can be increased or lowered by using setMaxListeners()

door.getMaxListeners()

emitter.listenerCount()

Get the count of listeners of the event passed as parameter:

door.listenerCount('open')

emitter.listeners()

Gets an array of listeners of the event passed as parameter:

door.listeners('open')

emitter.off()

Alias for emitter.removeListener() added in Node.js 10

emitter.on()

Adds a callback function that's called when an event is emitted.

Usage:

door.on('open', () => {
  console.log('Door was opened')
})

emitter.once()

Adds a callback function that's called when an event is emitted for the first time after registering it. This
callback is only going to be called once, never again.

const EventEmitter = require('events')

const ee = new EventEmitter()

ee.once('my-event', () => {
  // call callback function once
})

emitter.prependListener()

When you add a listener using on or addListener , it's added last in the queue of listeners, and called
last. Using prependListener it's added, and called, before other listeners.

emitter.prependOnceListener()

When you add a listener using once , it's added last in the queue of listeners, and called last. Using
prependOnceListener it's added, and called, before other listeners.

emitter.removeAllListeners()

Removes all listeners of an EventEmitter object listening to a specific event:

door.removeAllListeners('open')

emitter.removeListener()

Remove a specific listener. You can do this by saving the callback function to a variable when added, so
you can reference it later:

const doSomething = () => {}


door.on('open', doSomething)
door.removeListener('open', doSomething)

emitter.setMaxListeners()

Sets the maximum amount of listeners one can add to an EventEmitter object, which defaults to 10 but
can be increased or lowered.

door.setMaxListeners(50)


The Node.js os module



This module provides many functions that you can use to retrieve information from the underlying
operating system and the computer the program runs on, and interact with it.

const os = require('os')

There are a few useful properties that tell us some key things related to handling files:

os.EOL gives the line delimiter sequence. It's \n on Linux and macOS, and \r\n on Windows.

os.constants.signals tells us all the constants related to handling process signals, like SIGHUP,
SIGKILL and so on.

os.constants.errno sets the constants for error reporting, like EADDRINUSE, EOVERFLOW and more.

You can read them all at https://nodejs.org/api/os.html#os_signal_constants.

Let's now see the main methods that os provides:

os.arch()

Return the string that identifies the underlying architecture, like arm , x64 , arm64 .
os.cpus()
Return information on the CPUs available on your system.

Example:

[
{
model: 'Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz',
speed: 2400,
times: {
user: 281685380,
nice: 0,
sys: 187986530,
idle: 685833750,
irq: 0
}
},
{
model: 'Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz',
speed: 2400,
times: {
user: 282348700,
nice: 0,
sys: 161800480,
idle: 703509470,
irq: 0
}
}
]

os.endianness()
Return BE or LE depending on whether Node.js was compiled with Big Endian or Little Endian.

os.freemem()

Return the number of bytes that represent the free memory in the system.

os.homedir()

Return the path to the home directory of the current user.

Example:

'/Users/joe'

os.hostname()

Return the host name.

os.loadavg()

Return the calculation made by the operating system on the load average.

It only returns a meaningful value on Linux and macOS.

Example:

[3.68798828125, 4.00244140625, 11.1181640625]


os.networkInterfaces()
Returns the details of the network interfaces available on your system.

Example:

{ lo0:
[ { address: '127.0.0.1',
netmask: '255.0.0.0',
family: 'IPv4',
mac: 'fe:82:00:00:00:00',
internal: true },
{ address: '::1',
netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
family: 'IPv6',
mac: 'fe:82:00:00:00:00',
scopeid: 0,
internal: true },
{ address: 'fe80::1',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'fe:82:00:00:00:00',
scopeid: 1,
internal: true } ],
en1:
[ { address: 'fe82::9b:8282:d7e6:496e',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: '06:00:00:02:0e:00',
scopeid: 5,
internal: false },
{ address: '192.168.1.38',
netmask: '255.255.255.0',
family: 'IPv4',
mac: '06:00:00:02:0e:00',
internal: false } ],
utun0:
[ { address: 'fe80::2513:72bc:f405:61d0',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'fe:80:00:20:00:00',
scopeid: 8,
internal: false } ] }

os.platform()

Return the platform that Node.js was compiled for:

darwin
freebsd
linux
openbsd
win32
...more

os.release()

Returns a string that identifies the operating system release number

os.tmpdir()

Returns the path to the assigned temp folder.


os.totalmem()

Returns the number of bytes that represent the total memory available in the system.

os.type()

Identifies the operating system:

Linux
Darwin on macOS
Windows_NT on Windows

os.uptime()

Returns the number of seconds the computer has been running since it was last rebooted.

os.userInfo()

Returns an object that contains the current username , uid , gid , shell , and homedir
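A minimal sketch (the values shown are made up for illustration):

const os = require('os')

console.log(os.userInfo())
// e.g. { uid: 501, gid: 20, username: 'joe', homedir: '/Users/joe', shell: '/bin/zsh' }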


The Node.js path module



The path module provides a lot of very useful functionality to access and interact with the file system.

There is no need to install it. Being part of the Node.js core, it can be used by simply requiring it:

const path = require('path')

This module provides path.sep which provides the path segment separator ( \ on Windows, and / on
Linux / macOS), and path.delimiter which provides the path delimiter ( ; on Windows, and : on
Linux / macOS).
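For example, a minimal sketch that splits the PATH environment variable into its entries, whatever the OS:

const path = require('path')

// entries in process.env.PATH are separated by path.delimiter
console.log(process.env.PATH.split(path.delimiter))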

These are the path methods:

path.basename()

Return the last portion of a path. A second parameter can filter out the file extension:

require('path').basename('/test/something') //something
require('path').basename('/test/something.txt') //something.txt
require('path').basename('/test/something.txt', '.txt') //something
path.dirname()

Return the directory part of a path:

require('path').dirname('/test/something') // /test
require('path').dirname('/test/something/file.txt') // /test/something

path.extname()

Return the extension part of a path:

require('path').extname('/test/something') // ''
require('path').extname('/test/something/file.txt') // '.txt'

path.format()

Returns a path string from an object. This is the opposite of path.parse() .


path.format accepts an object as argument with the following keys:

root : the root


dir : the folder path starting from the root
base : the file name + extension
name : the file name
ext : the file extension

root is ignored if dir is provided


ext and name are ignored if base exists
// POSIX
require('path').format({ dir: '/Users/joe', base: 'test.txt' }) // '/Users/joe/test.txt'

require('path').format({ root: '/Users/joe', name: 'test', ext: '.txt' }) // '/Users/joe/test.txt'

// WINDOWS
require('path').format({ dir: 'C:\\Users\\joe', base: 'test.txt' }) // 'C:\\Users\\joe\\test.txt'

path.isAbsolute()

Returns true if it's an absolute path

require('path').isAbsolute('/test/something') // true
require('path').isAbsolute('./test/something') // false

path.join()

Joins two or more parts of a path:

const name = 'joe'


require('path').join('/', 'users', name, 'notes.txt') //'/users/joe/notes.txt'

path.normalize()

Tries to calculate the actual path when it contains relative specifiers like . or .. , or double slashes:

require('path').normalize('/users/joe/..//test.txt') //'/users/test.txt'
path.parse()

Parses a path to an object with the segments that compose it:

root : the root


dir : the folder path starting from the root
base : the file name + extension
name : the file name
ext : the file extension

Example:

require('path').parse('/users/test.txt')

results in

{
root: '/',
dir: '/users',
base: 'test.txt',
ext: '.txt',
name: 'test'
}

path.relative()

Accepts 2 paths as arguments. Returns the relative path from the first path to the second, based on the
current working directory.
Example:

require('path').relative('/Users/joe', '/Users/joe/test.txt') //'test.txt'


require('path').relative('/Users/joe', '/Users/joe/something/test.txt') //'something/test.txt'

path.resolve()

You can get the absolute path calculation of a relative path using path.resolve() :

path.resolve('joe.txt') //'/Users/joe/joe.txt' if run from my home folder

By specifying a second parameter, resolve will use the first as a base for the second:

path.resolve('tmp', 'joe.txt') //'/Users/joe/tmp/joe.txt' if run from my home folder

If the first parameter starts with a slash, that means it's an absolute path:

path.resolve('/etc', 'joe.txt') //'/etc/joe.txt'


The Node.js fs module


The fs module provides a lot of very useful functionality to access and interact with the file system.

There is no need to install it. Being part of the Node.js core, it can be used by simply requiring it:

const fs = require('fs')

Once you do so, you have access to all its methods, which include:

fs.access() : check if the file exists and Node.js can access it with its permissions
fs.appendFile() : append data to a file. If the file does not exist, it's created
fs.chmod() : change the permissions of a file specified by the filename passed. Related: fs.lchmod() , fs.fchmod()
fs.chown() : change the owner and group of a file specified by the filename passed. Related: fs.fchown() , fs.lchown()
fs.close() : close a file descriptor
fs.copyFile() : copies a file
fs.createReadStream() : create a readable file stream
fs.createWriteStream() : create a writable file stream
fs.link() : create a new hard link to a file
fs.mkdir() : create a new folder
fs.mkdtemp() : create a temporary directory
fs.open() : open a file, returning a file descriptor
fs.readdir() : read the contents of a directory
fs.readFile() : read the content of a file. Related: fs.read()
fs.readlink() : read the value of a symbolic link
fs.realpath() : resolve relative file path pointers ( . , .. ) to the full path
fs.rename() : rename a file or folder
fs.rmdir() : remove a folder
fs.stat() : returns the status of the file identified by the filename passed. Related: fs.fstat() , fs.lstat()
fs.symlink() : create a new symbolic link to a file
fs.truncate() : truncate to the specified length the file identified by the filename passed. Related: fs.ftruncate()
fs.unlink() : remove a file or a symbolic link
fs.unwatchFile() : stop watching for changes on a file
fs.utimes() : change the timestamp of the file identified by the filename passed. Related: fs.futimes()
fs.watchFile() : start watching for changes on a file. Related: fs.watch()
fs.writeFile() : write data to a file. Related: fs.write()

One peculiar thing about the fs module is that all the methods are asynchronous by default, but they
can also work synchronously by appending Sync .

For example:

fs.rename()

fs.renameSync()
fs.write()
fs.writeSync()

This makes a huge difference in your application flow.

Node.js 10 includes experimental support for a promise-based API (a sketch is shown after the examples below).

For example, let's examine the fs.rename() method. The asynchronous API is used with a callback:

const fs = require('fs')

fs.rename('before.json', 'after.json', err => {


if (err) {
return console.error(err)
}

//done
})

A synchronous API can be used like this, with a try/catch block to handle errors:

const fs = require('fs')

try {
fs.renameSync('before.json', 'after.json')
//done
} catch (err) {
console.error(err)
}

The key difference here is that the execution of your script will block in the second example, until the file
operation has completed.
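
For completeness, here is a sketch of the promise-based API mentioned above, assuming a Node.js version where fs.promises is available:

const fs = require('fs').promises

async function rename() {
  try {
    await fs.rename('before.json', 'after.json')
    //done
  } catch (err) {
    console.error(err)
  }
}

rename()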

Working with folders in Node.js



The Node.js fs core module provides many handy methods you can use to work with folders.

Check if a folder exists


Use fs.access() to check if the folder exists and Node.js can access it with its permissions.
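
A minimal sketch of that check, using the callback form of fs.access() ( fs.constants.F_OK only tests for existence; the folder path is just an example):

const fs = require('fs')

fs.access('/Users/joe/test', fs.constants.F_OK, err => {
  console.log(err ? 'folder does not exist' : 'folder exists')
})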

Create a new folder


Use fs.mkdir() or fs.mkdirSync() to create a new folder.

const fs = require('fs')

const folderName = '/Users/joe/test'

try {
if (!fs.existsSync(folderName)) {
fs.mkdirSync(folderName)
}
} catch (err) {
console.error(err)
}
Read the content of a directory
Use fs.readdir() or fs.readdirSync() to read the contents of a directory.

This piece of code reads the content of a folder, both files and subfolders, and returns their relative path:

const fs = require('fs')

const folderPath = '/Users/joe'

fs.readdirSync(folderPath)

You can get the full path:

const path = require('path')

fs.readdirSync(folderPath).map(fileName => {
  return path.join(folderPath, fileName)
})

You can also filter the results to only return the files, and exclude the folders:

const isFile = fileName => {
  return fs.lstatSync(fileName).isFile()
}

fs.readdirSync(folderPath)
  .map(fileName => {
    return path.join(folderPath, fileName)
  })
  .filter(isFile)

Rename a folder

Use fs.rename() or fs.renameSync() to rename a folder. The first parameter is the current path, the
second the new path:

const fs = require('fs')

fs.rename('/Users/joe', '/Users/roger', err => {


if (err) {
console.error(err)
return
}
//done
})

fs.renameSync() is the synchronous version:

const fs = require('fs')

try {
fs.renameSync('/Users/joe', '/Users/roger')
} catch (err) {
console.error(err)
}

Remove a folder
Use fs.rmdir() or fs.rmdirSync() to remove a folder.

Removing a folder that has content can be more complicated than you might expect.
In this case it's best to install the fs-extra module, which is very popular and well maintained. It's a
drop-in replacement of the fs module, which provides more features on top of it.
In this case the remove() method is what you want.

Install it using

npm install fs-extra

and use it like this:

const fs = require('fs-extra')

const folder = '/Users/joe'

fs.remove(folder, err => {
  if (err) {
    return console.error(err)
  }
  //done
})

It can also be used with promises:

fs.remove(folder)
.then(() => {
//done
})
.catch(err => {
console.error(err)
})

or with async/await:
async function removeFolder(folder) {
try {
await fs.remove(folder)
//done
} catch (err) {
console.error(err)
}
}

const folder = '/Users/joe'


removeFolder(folder)


Writing files with Node.js



The easiest way to write to files in Node.js is to use the fs.writeFile() API.

Example:

const fs = require('fs')

const content = 'Some content!'

fs.writeFile('/Users/joe/test.txt', content, err => {


if (err) {
console.error(err)
return
}
//file written successfully
})

Alternatively, you can use the synchronous version fs.writeFileSync() :

const fs = require('fs')

const content = 'Some content!'

try {
  fs.writeFileSync('/Users/joe/test.txt', content)
  //file written successfully
} catch (err) {
  console.error(err)
}

By default, this API will replace the contents of the file if it already exists.

You can modify the default by specifying a flag:

fs.writeFile('/Users/joe/test.txt', content, { flag: 'a+' }, err => {})

The flags you'll likely use are:

r+ open the file for reading and writing
w+ open the file for reading and writing, positioning the stream at the beginning of the file. The file is created if not existing
a open the file for writing, positioning the stream at the end of the file. The file is created if not existing
a+ open the file for reading and writing, positioning the stream at the end of the file. The file is created if not existing

(you can find more flags at https://nodejs.org/api/fs.html#fs_file_system_flags)

Append to a file
A handy method to append content to the end of a file is fs.appendFile() (and its
fs.appendFileSync() counterpart):
const fs = require('fs')

const content = 'Some content!'

fs.appendFile('file.log', content, err => {
  if (err) {
    console.error(err)
    return
  }
  //done!
})

Using streams
All those methods write the full content to the file before returning control to your program (in the
async version, this means executing the callback).

In this case, a better option is to write the file content using streams.
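
A minimal sketch of stream-based writing with fs.createWriteStream() (the file path is just an example):

const fs = require('fs')

const stream = fs.createWriteStream('/Users/joe/test.txt')

stream.write('Some content!')
stream.end() // flushes the remaining data and closes the file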


Reading files with Node.js


The simplest way to read a file in Node.js is to use the fs.readFile() method, passing it the file path,
encoding and a callback function that will be called with the file data (and the error):

const fs = require('fs')

fs.readFile('/Users/joe/test.txt', 'utf8' , (err, data) => {


if (err) {
console.error(err)
return
}
console.log(data)
})

Alternatively, you can use the synchronous version fs.readFileSync() :

const fs = require('fs')

try {
const data = fs.readFileSync('/Users/joe/test.txt', 'utf8')
console.log(data)
} catch (err) {
console.error(err)
}
Both fs.readFile() and fs.readFileSync() read the full content of the file in memory before
returning the data.
This means that big files are going to have a major impact on your memory consumption and on the
execution speed of the program.

In this case, a better option is to read the file content using streams.
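
A minimal sketch of stream-based reading with fs.createReadStream() , which hands you the file one chunk at a time instead of loading it all in memory:

const fs = require('fs')

const stream = fs.createReadStream('/Users/joe/test.txt', 'utf8')

stream.on('data', chunk => {
  console.log(chunk) // a piece of the file, not the whole content
})

stream.on('end', () => {
  //done reading
})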

Node.js File Paths

Every file in the system has a path.

On Linux and macOS, a path might look like:

/users/joe/file.txt

while Windows computers are different, and have a structure such as:

C:\users\joe\file.txt

You need to pay attention when using paths in your applications, as this difference must be taken into
account.

You include this module in your files using

const path = require('path')

and you can start using its methods.


Getting information out of a path
Given a path, you can extract information out of it using these methods:
dirname : get the parent folder of a file

basename : get the filename part

extname : get the file extension

Example:

const notes = '/users/joe/notes.txt'

path.dirname(notes) // /users/joe
path.basename(notes) // notes.txt
path.extname(notes) // .txt

You can get the file name without the extension by specifying a second argument to basename :

path.basename(notes, path.extname(notes)) //notes

Working with paths


You can join two or more parts of a path by using path.join() :

const name = 'joe'


path.join('/', 'users', name, 'notes.txt') //'/users/joe/notes.txt'

You can get the absolute path calculation of a relative path using path.resolve() :

path.resolve('joe.txt') //'/Users/joe/joe.txt' if run from my home folder


In this case Node.js will simply append /joe.txt to the current working directory. If you specify a second
parameter folder, resolve will use the first as a base for the second:

path.resolve('tmp', 'joe.txt') //'/Users/joe/tmp/joe.txt' if run from my home folder

If the first parameter starts with a slash, that means it's an absolute path:

path.resolve('/etc', 'joe.txt') //'/etc/joe.txt'

path.normalize() is another useful function: it will try to calculate the actual path when it contains
relative specifiers like . or .. , or double slashes:

path.normalize('/users/joe/..//test.txt') //'/users/test.txt'

Neither resolve nor normalize will check if the path exists. They just calculate a path based on the
information they got.


Node.js file stats
Every le comes with a set of details that we can inspect using Node.js.

In particular, using the stat() method provided by the fs module.

You call it passing a file path, and once Node.js gets the file details it will call the callback function you
pass, with 2 parameters: an error message, and the file stats:

const fs = require('fs')
fs.stat('/Users/joe/test.txt', (err, stats) => {
if (err) {
console.error(err)
return
}
//we have access to the file stats in `stats`
})

Node.js also provides a sync method, which blocks the thread until the file stats are ready:

const fs = require('fs')
try {
const stats = fs.statSync('/Users/joe/test.txt')
} catch (err) {
console.error(err)
}
The file information is included in the stats variable. What kind of information can we extract using the
stats?

A lot, including:

if the file is a directory or a file, using stats.isFile() and stats.isDirectory()
if the file is a symbolic link, using stats.isSymbolicLink()
the file size in bytes, using stats.size .

There are other advanced methods, but the bulk of what you'll use in your day-to-day programming is this.

const fs = require('fs')
fs.stat('/Users/joe/test.txt', (err, stats) => {
if (err) {
console.error(err)
return
}

stats.isFile() //true
stats.isDirectory() //false
stats.isSymbolicLink() //false
  stats.size //1024000 (about 1MB)
})


Working with file descriptors in Node.js


Before you're able to interact with a file that sits in your filesystem, you must get a file descriptor.

A file descriptor is what's returned by opening the file using the open() method offered by the fs
module:

const fs = require('fs')

fs.open('/Users/joe/test.txt', 'r', (err, fd) => {


//fd is our file descriptor
})

Notice the r we used as the second parameter to the fs.open() call.

That flag means we open the file for reading.

Other flags you'll commonly use are:

r+ open the file for reading and writing
w+ open the file for reading and writing, positioning the stream at the beginning of the file. The file is created if not existing
a open the file for writing, positioning the stream at the end of the file. The file is created if not existing
a+ open the file for reading and writing, positioning the stream at the end of the file. The file is created if not existing
You can also open the file by using the fs.openSync method, which returns the file descriptor, instead of
providing it in a callback:

const fs = require('fs')

try {
const fd = fs.openSync('/Users/joe/test.txt', 'r')
} catch (err) {
console.error(err)
}

Once you get the file descriptor, in whatever way you choose, you can perform all the operations that
require it, such as reading with fs.read() , writing with fs.write() , and closing the file with fs.close() .
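
As a sketch, here is a file descriptor used with fs.read() , which reads part of the file into a Buffer (the 64-byte size is arbitrary):

const fs = require('fs')

fs.open('/Users/joe/test.txt', 'r', (err, fd) => {
  if (err) {
    return console.error(err)
  }

  const buffer = Buffer.alloc(64)

  // read up to 64 bytes from the start of the file into the buffer
  fs.read(fd, buffer, 0, buffer.length, 0, (err, bytesRead) => {
    if (err) {
      return console.error(err)
    }
    console.log(buffer.toString('utf8', 0, bytesRead))
    fs.close(fd, () => {})
  })
})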


npm global or local packages


The main difference between local and global packages is this:

local packages are installed in the directory where you run npm install <package-name> , and
they are put in the node_modules folder under this directory
global packages are all put in a single place in your system (exactly where depends on your setup),
regardless of where you run npm install -g <package-name>

In your code you can only require local packages:

require('package-name')

So when should you install in one way or another?

In general, all packages should be installed locally.

This makes sure you can have dozens of applications on your computer, all running a different version of
each package if needed.

Updating a global package would make all your projects use the new release, and as you can imagine this
might cause nightmares in terms of maintenance, as some packages might break compatibility with
further dependencies, and so on.

Every project has its own local version of a package. Even if this might appear like a waste of resources,
the cost is minimal compared to the possible negative consequences.
A package should be installed globally when it provides an executable command that you run from the
shell (CLI), and it's reused across projects.

You can also install executable commands locally and run them using npx, but some packages are just
better installed globally.

Great examples of popular global packages which you might know are

npm
create-react-app
vue-cli
grunt-cli
mocha

react-native-cli
gatsby-cli
forever
nodemon

You probably have some packages installed globally already on your system. You can see them by running

npm list -g --depth 0

on your command line.


Uninstalling npm packages


To uninstall a package you have previously installed locally (using npm install <package-name> , which
put it in the node_modules folder), run

npm uninstall <package-name>

from the project root folder (the folder that contains the node_modules folder).

Using the -S flag, or --save , this operation will also remove the reference in the package.json file.

If the package was a development dependency, listed in the devDependencies of the package.json file,
you must use the -D / --save-dev flag to remove it from the file:

npm uninstall -S <package-name>


npm uninstall -D <package-name>

If the package is installed globally, you need to add the -g / --global flag:

npm uninstall -g <package-name>

for example:
npm uninstall -g webpack

and you can run this command from anywhere you want on your system because the folder where you
currently are does not matter.


The package.json guide



If you work with JavaScript, or you've ever interacted with a JavaScript project, Node.js or a frontend
project, you have surely met the package.json file.

What's that for? What should you know about it, and what are some of the cool things you can do with it?

The package.json file is kind of a manifest for your project. It can do a lot of things, some of them
unrelated to each other. It's a central repository of configuration for tools, for example. It's also where npm
and yarn store the names and versions for all the installed packages.

The file structure
Here's an example package.json le:

{}

It's empty! There are no fixed requirements of what should be in a package.json file, for an application.
The only requirement is that it respects the JSON format, otherwise it cannot be read by programs that try
to access its properties programmatically.

If you're building a Node.js package that you want to distribute over npm things change radically, and you
must have a set of properties that will help other people use it. We'll see more about this later on.
This is another package.json:

{
"name": "test-project"
}

It defines a name property, which tells the name of the app, or package, that's contained in the same
folder where this file lives.

Here's a much more complex example, which was extracted from a sample Vue.js application:

{
"name": "test-project",
"version": "1.0.0",
"description": "A Vue.js project",
"main": "src/main.js",
"private": true,
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"unit": "jest --config test/unit/jest.conf.js --coverage",
"test": "npm run unit",
"lint": "eslint --ext .js,.vue src test/unit",
"build": "node build/build.js"
},
"dependencies": {
"vue": "^2.5.2"
},
"devDependencies": {
"autoprefixer": "^7.1.2",
"babel-core": "^6.22.1",
"babel-eslint": "^8.2.1",
"babel-helper-vue-jsx-merge-props": "^2.0.3",
"babel-jest": "^21.0.2",
"babel-loader": "^7.1.1",
"babel-plugin-dynamic-import-node": "^1.2.0",
"babel-plugin-syntax-jsx": "^6.18.0",
"babel-plugin-transform-es2015-modules-commonjs": "^6.26.0",
"babel-plugin-transform-runtime": "^6.22.0",
"babel-plugin-transform-vue-jsx": "^3.5.0",
"babel-preset-env": "^1.3.2",
"babel-preset-stage-2": "^6.22.0",
"chalk": "^2.0.1",
"copy-webpack-plugin": "^4.0.1",
"css-loader": "^0.28.0",
"eslint": "^4.15.0",
"eslint-config-airbnb-base": "^11.3.0",
"eslint-friendly-formatter": "^3.0.0",
"eslint-import-resolver-webpack": "^0.8.3",
"eslint-loader": "^1.7.1",
"eslint-plugin-import": "^2.7.0",
"eslint-plugin-vue": "^4.0.0",
"extract-text-webpack-plugin": "^3.0.0",
"file-loader": "^1.1.4",
"friendly-errors-webpack-plugin": "^1.6.1",
"html-webpack-plugin": "^2.30.1",
"jest": "^22.0.4",
"jest-serializer-vue": "^0.3.0",
"node-notifier": "^5.1.2",
"optimize-css-assets-webpack-plugin": "^3.2.0",
"ora": "^1.2.0",
"portfinder": "^1.0.13",
"postcss-import": "^11.0.0",
"postcss-loader": "^2.0.8",
"postcss-url": "^7.2.1",
"rimraf": "^2.6.0",
"semver": "^5.3.0",
"shelljs": "^0.7.6",
"uglifyjs-webpack-plugin": "^1.1.1",
"url-loader": "^0.5.8",
"vue-jest": "^1.0.2",
"vue-loader": "^13.3.0",
"vue-style-loader": "^3.0.1",
"vue-template-compiler": "^2.5.2",
"webpack": "^3.6.0",
"webpack-bundle-analyzer": "^2.9.0",
"webpack-dev-server": "^2.9.1",
"webpack-merge": "^4.1.0"
},
"engines": {
"node": ">= 6.0.0",
"npm": ">= 3.0.0"
},
"browserslist": ["> 1%", "last 2 versions", "not ie <= 8"]
}

There are lots of things going on here:

version indicates the current version
name sets the application/package name
description is a brief description of the app/package
main sets the entry point for the application
private if set to true prevents the app/package from being accidentally published on npm
scripts defines a set of node scripts you can run
dependencies sets a list of npm packages installed as dependencies
devDependencies sets a list of npm packages installed as development dependencies
engines sets which versions of Node.js this package/app works on
browserslist is used to tell which browsers (and their versions) you want to support

All those properties are used by either npm or other tools that we can use.

Properties breakdown
This section describes the properties you can use in detail. We refer to "package" but the same thing
applies to local applications which you do not use as packages.

Most of those properties are only used on https://www.npmjs.com/, others by scripts that interact with
your code, like npm .

name

Sets the package name.

Example:

"name": "test-project"

The name must be less than 214 characters, must not have spaces, and can only contain lowercase letters,
hyphens ( - ) or underscores ( _ ).

This is because when a package is published on npm , it gets its own URL based on this property.

If you publish this package publicly on GitHub, a good value for this property is the GitHub repository
name.

author
Lists the package author name

Example:

{
"author": "Joe <[email protected]> (https://whatever.com)"
}

Can also be used with this format:

{
"author": {
"name": "Joe",
"email": "[email protected]",
"url": "https://whatever.com"
}
}

contributors

As well as the author, the project can have one or more contributors. This property is an array that lists
them.

Example:

{
"contributors": ["Joe <[email protected]> (https://whatever.com)"]
}
Can also be used with this format:

{
"contributors": [
{
"name": "Joe",
"email": "[email protected]",
"url": "https://whatever.com"
}
]
}

bugs

Links to the package issue tracker, most likely a GitHub issues page

Example:

{
"bugs": "https://github.com/whatever/package/issues"
}

homepage

Sets the package homepage

Example:

{
"homepage": "https://whatever.com/package"
}

version

Indicates the current version of the package.

Example:

"version": "1.0.0"

This property follows the semantic versioning (semver) notation for versions, which means the version is
always expressed with 3 numbers: x.x.x .

The first number is the major version, the second the minor version and the third is the patch version.

There is a meaning in these numbers: a release that only fixes bugs is a patch release, a release that
introduces backward-compatible changes is a minor release, a major release can have breaking changes.

license

Indicates the license of the package.

Example:

"license": "MIT"

keywords
This property contains an array of keywords that describe what your package does.

Example:

"keywords": [
"email",
"machine learning",
"ai"
]

This helps people find your package when navigating similar packages, or when browsing the
https://www.npmjs.com/ website.

description

This property contains a brief description of the package

Example:

"description": "A package to work with strings"

This is especially useful if you decide to publish your package to npm so that people can find out what the
package is about.

repository

This property specifies where this package repository is located.

Example:
"repository": "github:whatever/testing",

Notice the github prefix. There are other popular services baked in:

"repository": "gitlab:whatever/testing",

"repository": "bitbucket:whatever/testing",

You can explicitly set the version control system:

"repository": {
"type": "git",
"url": "https://github.com/whatever/testing.git"
}

You can use different version control systems:

"repository": {
"type": "svn",
"url": "..."
}

main

Sets the entry point for the package.


When you import this package in an application, that's where the application will search for the module
exports.

Example:

"main": "src/main.js"

private

If set to true , it prevents the app/package from being accidentally published on npm.

Example:

"private": true

scripts

Defines a set of node scripts you can run.

Example:

"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"unit": "jest --config test/unit/jest.conf.js --coverage",
"test": "npm run unit",
"lint": "eslint --ext .js,.vue src test/unit",
"build": "node build/build.js"
}
These scripts are command line applications. You can run them by calling npm run XXXX or yarn XXXX ,
where XXXX is the command name. Example: npm run dev .

You can use any name you want for a command, and scripts can do literally anything you want.

dependencies

Sets a list of npm packages installed as dependencies.

When you install a package using npm or yarn:

npm install <PACKAGENAME>


yarn add <PACKAGENAME>

that package is automatically inserted in this list.

Example:

"dependencies": {
"vue": "^2.5.2"
}

devDependencies

Sets a list of npm packages installed as development dependencies.

They differ from dependencies because they are meant to be installed only on a development machine,
not needed to run the code in production.
When you install a package using npm or yarn:

npm install --save-dev <PACKAGENAME>


yarn add --dev <PACKAGENAME>

that package is automatically inserted in this list.

Example:

"devDependencies": {
"autoprefixer": "^7.1.2",
"babel-core": "^6.22.1"
}

engines

Sets which versions of Node.js and other commands this package/app works on.

Example:

"engines": {
"node": ">= 6.0.0",
"npm": ">= 3.0.0",
"yarn": "^0.13.0"
}

browserslist
Is used to tell which browsers (and their versions) you want to support. It's referenced by Babel,
Autoprefixer, and other tools, to only add the polyfills and fallbacks needed to the browsers you target.

Example:

"browserslist": [
"> 1%",
"last 2 versions",
"not ie <= 8"
]

This configuration means you want to support the last 2 major versions of all browsers with at least 1% of
usage (from the CanIUse.com stats), except IE8 and lower.


Command-specific properties

The package.json file can also host command-specific configuration, for example for Babel, ESLint, and
more.

Each has a specific property, like eslintConfig , babel and others. Those are command-specific, and
you can find how to use those in the respective command/project documentation.

Package versions
You have seen in the description above version numbers like these: ~3.0.0 or ^0.13.0 . What do they
mean, and which other version specifiers can you use?

That symbol specifies which updates your package accepts from that dependency.

Given that using semver (semantic versioning) all versions have 3 digits, the first being the major release,
the second the minor release and the third the patch release, you have these rules:

^ accepts minor and patch updates
~ accepts only patch updates
no symbol accepts only that exact version

You can combine most of the versions in ranges, like this: 1.0.0 || >=1.1.0 <1.2.0 , to either use 1.0.0
or one release from 1.1.0 up, but lower than 1.2.0.


The package-lock.json file

In version 5, npm introduced the package-lock.json file.

What's that? You probably know about the package.json file, which is much more common and has
been around for much longer.

The goal of the package-lock.json file is to keep track of the exact version of every package that is
installed, so that a product is 100% reproducible in the same way even if packages are updated by their
maintainers.

This solves a very specific problem that package.json left unsolved. In package.json you can set which
versions you want to upgrade to (patch or minor), using the semver notation, for example:

if you write ~0.13.0 , you want to only update patch releases: 0.13.1 is ok, but 0.14.0 is not.

if you write ^0.13.0 , you want to update patch and minor releases: 0.13.1 , 0.14.0 and so on.
if you write 0.13.0 , that is the exact version that will be used, always

You don't commit your node_modules folder to Git, as it is generally huge. When you try to replicate
the project on another machine by using the npm install command, if you specified the ~ syntax and
a patch release of a package has been released, that one is going to be installed. The same goes for ^ and
minor releases.

If you specify exact versions, like 0.13.0 in the example, you are not affected by this
problem.
It could be you, or another person trying to initialize the project on the other side of the world by running
npm install .

So your original project and the newly initialized project are actually different. Even if a patch or minor
release should not introduce breaking changes, we all know bugs can (and so, they will) slide in.

The package-lock.json sets your currently installed version of each package in stone, and npm will
use those exact versions when running npm install .

This concept is not new, and other programming languages' package managers (like Composer in PHP)
have used a similar system for years.

The package-lock.json file needs to be committed to your Git repository, so it can be fetched by other
people, if the project is public or you have collaborators, or if you use Git as a source for deployments.

The dependency versions will be updated in the package-lock.json file when you run npm update .

An example
This is an example structure of a package-lock.json file we get when we run npm install cowsay in
an empty folder:

{
"requires": true,
"lockfileVersion": 1,
"dependencies": {
"ansi-regex": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.
0.0.tgz",
"integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg="
},
"cowsay": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/cowsay/-/cowsay-1.3.1.tgz"
,
"integrity": "sha512-3PVFe6FePVtPj1HTeLin9v8WyLl+VmM1l1H/5P+BTTDkM
Ajufp+0F9eLjzRnOHzVAYeIYFF5po5NjRrgefnRMQ==",
"requires": {
"get-stdin": "^5.0.1",
"optimist": "~0.6.1",
"string-width": "~2.1.1",
"strip-eof": "^1.0.0"
}
},
"get-stdin": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-5.0.
1.tgz",
"integrity": "sha1-Ei4WFZHiH/TFJTAwVpPyDmOTo5g="
},
"is-fullwidth-code-point": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/
is-fullwidth-code-point-2.0.0.tgz",
"integrity": "sha1-o7MKXE8ZkYMWeqq5O+764937ZU8="
},
"minimist": {
"version": "0.0.10",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.10
.tgz",
"integrity": "sha1-3j+YVD2/lggr5IrRoMfNqDYwHc8="
},
"optimist": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/optimist/-/optimist-0.6.1.tgz",
"integrity": "sha1-2j6nRob6IaGaERwybpDrFaAZZoY=",

"requires": {
"minimist": "~0.0.1",
"wordwrap": "~0.0.2"
}
},
"string-width": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz",
"integrity": "sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLNIOlqg4F2yrT1MVa
"requires": {
"is-fullwidth-code-point": "^2.0.0",
"strip-ansi": "^4.0.0"
}
},
"strip-ansi": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz",
"integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=",
"requires": {
"ansi-regex": "^3.0.0"
}
},
"strip-eof": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/strip-eof/-/strip-eof-1.0.0.tgz",
"integrity": "sha1-u0P/VZim6wXYm1n80SnJgzE2Br8="
},
"wordwrap": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/wordwrap/-/wordwrap-0.0.3.tgz",
"integrity": "sha1-o9XabNXAvAAI03I0u68b7WMFkQc="
}
}
}

We installed cowsay , which depends on

get-stdin
optimist
string-width
strip-eof

In turn, those packages require other packages, as we can see from the requires property that some
have:

ansi-regex

is-fullwidth-code-point
minimist
wordwrap
strip-eof

They are added in alphabetical order into the file, and each one has a version field, a resolved field
that points to the package location, and an integrity string that we can use to verify the package.


Expose functionality from a Node.js file using exports
Node.js has a built-in module system.

A Node.js file can import functionality exposed by other Node.js files.

When you want to import something you use

const library = require('./library')

to import the functionality exposed in the library.js file that resides in the same folder as the current file.

In this file, functionality must be exposed before it can be imported by other files.

Any other object or variable defined in the file is private by default and not exposed to the outer world.

This is what the module.exports API offered by the module system allows us to do.

When you assign an object or a function as a new exports property, that is the thing that's being
exposed, and as such, it can be imported in other parts of your app, or in other apps as well.

You can do so in 2 ways.

The first is to assign an object to module.exports , which is an object provided out of the box by the
module system, and this will make your file export just that object:
// car.js
const car = {
brand: 'Ford',
model: 'Fiesta'
}

module.exports = car

// index.js
const car = require('./car')

The second way is to add the exported object as a property of exports . This way allows you to export
multiple objects, functions or data:

const car = {
brand: 'Ford',
model: 'Fiesta'
}

exports.car = car

or directly

exports.car = {
brand: 'Ford',
model: 'Fiesta'
}
And in the other file, you'll use it by referencing a property of your import:

const items = require('./items')


items.car

or

const car = require('./items').car

What's the difference between module.exports and exports ?

The first exposes the object it points to. The latter exposes the properties of the object it points to.
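
A sketch of the practical consequence: adding properties to exports works, but reassigning exports itself only rebinds the local variable and exposes nothing:

// library.js
exports.one = 1 // works: adds a property to the exported object
exports = { two: 2 } // does not work: only rebinds the local variable

// index.js
const library = require('./library')
console.log(library.one) // 1
console.log(library.two) // undefined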


Accept input from the command line in Node.js


How to make a Node.js CLI program interactive?

Node.js since version 7 provides the readline module to perform exactly this: get input from a readable
stream such as the process.stdin stream, which during the execution of a Node.js program is the
terminal input, one line at a time.

const readline = require('readline').createInterface({


input: process.stdin,
output: process.stdout
})

readline.question(`What's your name?`, name => {


console.log(`Hi ${name}!`)
readline.close()
})

This piece of code asks for the username, and once the text is entered and the user presses enter, we send a
greeting.

The question() method shows the first parameter (a question) and waits for the user input. It calls the
callback function once enter is pressed.

In this callback function, we close the readline interface.

readline offers several other methods, and I'll let you check them out on the package documentation
linked above.

If you need to require a password, it's best not to echo it back, but instead show a * symbol.

The simplest way is to use the readline-sync package which is very similar in terms of the API and
handles this out of the box.
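
As a sketch, assuming readline-sync has been installed with npm install readline-sync , its hideEchoBack option handles the masking:

const readlineSync = require('readline-sync')

const password = readlineSync.question('Password: ', {
  hideEchoBack: true // echoes '*' instead of the typed characters
})

console.log(`Got ${password.length} characters`)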

A more complete and abstract solution is provided by the Inquirer.js package.

You can install it using npm install inquirer , and then you can replicate the above code like this:

const inquirer = require('inquirer')

var questions = [
{
type: 'input',
name: 'name',
message: "What's your name?"
}
]

inquirer.prompt(questions).then(answers => {
console.log(`Hi ${answers['name']}!`)
})

Inquirer.js lets you do many things like asking multiple choices, having radio buttons, confirmations, and
more.

It's worth knowing all the alternatives, especially the built-in ones provided by Node.js, but if you plan to
take CLI input to the next level, Inquirer.js is an optimal choice.

Output to the command line using Node.js

Basic output using the console module


Node.js provides a console module which provides tons of very useful ways to interact with the
command line.

It is basically the same as the console object you find in the browser.

The most basic and most used method is console.log() , which prints the string you pass to it to the
console.


If you pass an object, it will render it as a string.

You can pass multiple variables to console.log , for example:

const x = 'x'
const y = 'y'
console.log(x, y)

and Node.js will print both.

We can also format pretty phrases by passing variables and a format specifier.

For example:
console.log('My %s has %d years', 'cat', 2)

%s format a variable as a string


%d format a variable as a number
%i format a variable as its integer part only
%o format a variable as an object

Example:

console.log('%o', Number)

Clear the console


console.clear() clears the console (the behavior might depend on the console used)

Counting elements
console.count() is a handy method.

console.count() will count the number of times a string is printed, and print the count next to it.

For example, you can count apples and oranges:

const oranges = ['orange', 'orange']
const apples = ['just one apple']

oranges.forEach(fruit => {
  console.count(fruit)
})
apples.forEach(fruit => {
  console.count(fruit)
})

Print the stack trace


There might be cases where it's useful to print the call stack trace of a function, maybe to answer the
question how did you reach that part of the code?

You can do so using console.trace() :

const function2 = () => console.trace()


const function1 = () => function2()
function1()

This will print the stack trace. This is what's printed if we try this in the Node.js REPL:

Trace
at function2 (repl:1:33)
at function1 (repl:1:25)
at repl:1:1
at ContextifyScript.Script.runInThisContext (vm.js:44:33)
at REPLServer.defaultEval (repl.js:239:29)
at bound (domain.js:301:14)
at REPLServer.runBound [as eval] (domain.js:314:12)
at REPLServer.onLine (repl.js:440:10)
at emitOne (events.js:120:20)
at REPLServer.emit (events.js:210:7)

Calculate the time spent


You can easily calculate how much time a function takes to run, using time() and timeEnd() :

const doSomething = () => console.log('test')


const measureDoingSomething = () => {
console.time('doSomething()')
//do something, and measure the time it takes
doSomething()
console.timeEnd('doSomething()')
}
measureDoingSomething()

stdout and stderr


As we saw, console.log is great for printing messages in the Console. This is what's called the standard
output, or stdout .

console.error prints to the stderr stream.

It will not appear in the console, but it will appear in the error log.
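
A quick sketch of the two streams side by side; if you run this as node app.js 2> error.log , only the console.error line ends up in error.log :

console.log('this goes to stdout')
console.error('this goes to stderr')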

Color the output


You can color the output of your text in the console by using escape sequences. An escape sequence is a
set of characters that identifies a color.

Example:

console.log('\x1b[33m%s\x1b[0m', 'hi!')

You can try that in the Node.js REPL, and it will print hi! in yellow.
However, this is the low-level way to do this. The simplest way to go about coloring the console output is
by using a library. Chalk is such a library, and in addition to coloring it also helps with other styling
facilities, like making text bold, italic or underlined.

You install it with npm install chalk , then you can use it:

const chalk = require('chalk')


console.log(chalk.yellow('hi!'))

Using chalk.yellow is much more convenient than trying to remember the escape codes, and the code
is much more readable.

Check the project link posted above for more usage examples.

Create a progress bar


Progress is an awesome package to create a progress bar in the console. Install it using npm install
progress

This snippet creates a 10-step progress bar, and every 100ms one step is completed. When the bar
completes we clear the interval:

const ProgressBar = require('progress')

const bar = new ProgressBar(':bar', { total: 10 })


const timer = setInterval(() => {
bar.tick()
if (bar.complete) {
clearInterval(timer)
}
}, 100)


How to read environment variables from Node.js


The process core module of Node.js provides the env property which hosts all the environment
variables that were set at the moment the process was started.

Here is an example that accesses the NODE_ENV environment variable, which is set to development by
default.

Note: process does not require a "require", it's automatically available.

process.env.NODE_ENV // "development"

Setting it to "production" before the script runs will tell Node.js that this is a production environment.

In the same way you can access any custom environment variable you set.
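
A sketch with a made-up variable name ( API_KEY is only an example, set when launching the process):

// launched as: API_KEY=secret123 node app.js
console.log(process.env.API_KEY) // 'secret123'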

Get HTTP request body data using Node.js
Here is how you can extract the data that was sent as JSON in the request body.

If you are using Express, that's quite simple: use the express.json() middleware (built on the body-parser module).

For example, to get the body of this request:

const axios = require('axios')

axios.post('https://whatever.com/todos', {
todo: 'Buy the milk'
})

This is the matching server-side code:

const express = require('express')


const app = express()

app.use(
express.urlencoded({
extended: true
})
)

app.use(express.json())
app.post('/todos', (req, res) => {

console.log(req.body.todo)
})

If you're not using Express and you want to do this in vanilla Node.js, you need to do a bit more work, of
course, as Express abstracts a lot of this for you.

The key thing to understand is that when you initialize the HTTP server using http.createServer() , the
callback is called when the server got all the HTTP headers, but not the request body.

The request object passed in the connection callback is a stream.

So, we must listen for the body content to be processed, and it's processed in chunks.

We first get the data by listening to the stream data events, and when the data ends, the stream end
event is called, once:

const http = require('http')

const server = http.createServer((req, res) => {
  // we can access HTTP headers
  req.on('data', chunk => {
    console.log(`Data chunk available: ${chunk}`)
  })
  req.on('end', () => {
    //end of data
  })
})

So to access the data, assuming we expect to receive a string, we must concatenate the chunks into a
string when listening to the stream data event, and when the stream end event fires, we parse the string to JSON:
const server = http.createServer((req, res) => {
let data = '';

req.on('data', chunk => {


data += chunk;
})
req.on('end', () => {
console.log(JSON.parse(data).todo); // 'Buy the milk'
res.end();
})
})

Node.js v15.13.0 documentation

Path

Stability: 2 - Stable

Source Code: lib/path.js

The path module provides utilities for working with file and directory paths. It can be accessed using:

const path = require('path');

Windows vs. POSIX


The default operation of the path module varies based on the operating system on which a Node.js application is running. Specifically, when running on a Windows operating system, the
path module will assume that Windows-style paths are being used.

So using path.basename() might yield different results on POSIX and Windows:

On POSIX:

path.basename('C:\\temp\\myfile.html');
// Returns: 'C:\\temp\\myfile.html'

On Windows:

path.basename('C:\\temp\\myfile.html');
// Returns: 'myfile.html'

To achieve consistent results when working with Windows file paths on any operating system, use path.win32 :

On POSIX and Windows:


path.win32.basename('C:\\temp\\myfile.html');
// Returns: 'myfile.html'

To achieve consistent results when working with POSIX file paths on any operating system, use path.posix :

On POSIX and Windows:

path.posix.basename('/tmp/myfile.html');
// Returns: 'myfile.html'

On Windows Node.js follows the concept of per-drive working directory. This behavior can be observed when using a drive path without a backslash. For example, path.resolve('C:\\')
can potentially return a different result than path.resolve('C:') . For more information, see this MSDN page .

path.basename(path[, ext])
path <string>

ext <string> An optional file extension

Returns: <string>

The path.basename() method returns the last portion of a path , similar to the Unix basename command. Trailing directory separators are ignored, see path.sep .

path.basename('/foo/bar/baz/asdf/quux.html');
// Returns: 'quux.html'

path.basename('/foo/bar/baz/asdf/quux.html', '.html');
// Returns: 'quux'

Although Windows usually treats file names, including file extensions, in a case-insensitive manner, this function does not. For example, C:\\foo.html and C:\\foo.HTML refer to the same
file, but basename treats the extension as a case-sensitive string:

path.win32.basename('C:\\foo.html', '.html');
// Returns: 'foo'

path.win32.basename('C:\\foo.HTML', '.html');
// Returns: 'foo.HTML'

A TypeError is thrown if path is not a string or if ext is given and is not a string.
path.delimiter
<string>

Provides the platform-specific path delimiter:

; for Windows

: for POSIX

For example, on POSIX:

console.log(process.env.PATH);
// Prints: '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'

process.env.PATH.split(path.delimiter);
// Returns: ['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']

On Windows:

console.log(process.env.PATH);
// Prints: 'C:\Windows\system32;C:\Windows;C:\Program Files\node\'

process.env.PATH.split(path.delimiter);
// Returns ['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\node\\']

path.dirname(path)
path <string>

Returns: <string>

The path.dirname() method returns the directory name of a path , similar to the Unix dirname command. Trailing directory separators are ignored, see path.sep .

path.dirname('/foo/bar/baz/asdf/quux');
// Returns: '/foo/bar/baz/asdf'

A TypeError is thrown if path is not a string.


path.extname(path)
path <string>

Returns: <string>

The path.extname() method returns the extension of the path , from the last occurrence of the . (period) character to end of string in the last portion of the path . If there is no . in the
last portion of the path , or if there are no . characters other than the first character of the basename of path (see path.basename() ) , an empty string is returned.

path.extname('index.html');
// Returns: '.html'

path.extname('index.coffee.md');
// Returns: '.md'

path.extname('index.');
// Returns: '.'

path.extname('index');
// Returns: ''

path.extname('.index');
// Returns: ''

path.extname('.index.md');
// Returns: '.md'

A TypeError is thrown if path is not a string.

path.format(pathObject)
pathObject <Object>
dir <string>

root <string>

base <string>

name <string>

ext <string>

Returns: <string>
The path.format() method returns a path string from an object. This is the opposite of path.parse() .

When providing properties to the pathObject remember that there are combinations where one property has priority over another:

pathObject.root is ignored if pathObject.dir is provided

pathObject.ext and pathObject.name are ignored if pathObject.base exists

For example, on POSIX:

// If `dir`, `root` and `base` are provided,


// `${dir}${path.sep}${base}`
// will be returned. `root` is ignored.
path.format({
root: '/ignored',
dir: '/home/user/dir',
base: 'file.txt'
});
// Returns: '/home/user/dir/file.txt'

// `root` will be used if `dir` is not specified.


// If only `root` is provided or `dir` is equal to `root` then the
// platform separator will not be included. `ext` will be ignored.
path.format({
root: '/',
base: 'file.txt',
ext: 'ignored'
});
// Returns: '/file.txt'

// `name` + `ext` will be used if `base` is not specified.


path.format({
root: '/',
name: 'file',
ext: '.txt'
});
// Returns: '/file.txt'

On Windows:
path.format({
dir: 'C:\\path\\dir',
base: 'file.txt'
});
// Returns: 'C:\\path\\dir\\file.txt'

path.isAbsolute(path)
path <string>

Returns: <boolean>

The path.isAbsolute() method determines if path is an absolute path.

If the given path is a zero-length string, false will be returned.

For example, on POSIX:

path.isAbsolute('/foo/bar'); // true
path.isAbsolute('/baz/..'); // true
path.isAbsolute('qux/'); // false
path.isAbsolute('.'); // false

On Windows:

path.isAbsolute('//server'); // true
path.isAbsolute('\\\\server'); // true
path.isAbsolute('C:/foo/..'); // true
path.isAbsolute('C:\\foo\\..'); // true
path.isAbsolute('bar\\baz'); // false
path.isAbsolute('bar/baz'); // false
path.isAbsolute('.'); // false

A TypeError is thrown if path is not a string.

path.join([...paths])
...paths <string> A sequence of path segments
Returns: <string>

The path.join() method joins all given path segments together using the platform-specific separator as a delimiter, then normalizes the resulting path.

Zero-length path segments are ignored. If the joined path string is a zero-length string then '.' will be returned, representing the current working directory.

path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');


// Returns: '/foo/bar/baz/asdf'

path.join('foo', {}, 'bar');


// Throws 'TypeError: Path must be a string. Received {}'

A TypeError is thrown if any of the path segments is not a string.

path.normalize(path)
path <string>

Returns: <string>

The path.normalize() method normalizes the given path , resolving '..' and '.' segments.

When multiple, sequential path segment separation characters are found (e.g. / on POSIX and either \ or / on Windows), they are replaced by a single instance of the platform-specific
path segment separator ( / on POSIX and \ on Windows). Trailing separators are preserved.

If the path is a zero-length string, '.' is returned, representing the current working directory.

For example, on POSIX:

path.normalize('/foo/bar//baz/asdf/quux/..');
// Returns: '/foo/bar/baz/asdf'

On Windows:

path.normalize('C:\\temp\\\\foo\\bar\\..\\');
// Returns: 'C:\\temp\\foo\\'

Since Windows recognizes multiple path separators, both separators will be replaced by instances of the Windows preferred separator ( \ ):
path.win32.normalize('C:////temp\\\\/\\/\\/foo/bar');
// Returns: 'C:\\temp\\foo\\bar'

A TypeError is thrown if path is not a string.

path.parse(path)
path <string>

Returns: <Object>

The path.parse() method returns an object whose properties represent significant elements of the path . Trailing directory separators are ignored, see path.sep .

The returned object will have the following properties:

dir <string>

root <string>

base <string>

name <string>

ext <string>

For example, on POSIX:

path.parse('/home/user/dir/file.txt');
// Returns:
// { root: '/',
// dir: '/home/user/dir',
// base: 'file.txt',
// ext: '.txt',
// name: 'file' }

┌─────────────────────┬────────────┐
│ dir │ base │
├──────┬ ├──────┬─────┤
│ root │ │ name │ ext │
" / home/user/dir / file .txt "
└──────┴──────────────┴──────┴─────┘
(All spaces in the "" line should be ignored. They are purely for formatting.)
On Windows:

path.parse('C:\\path\\dir\\file.txt');
// Returns:
// { root: 'C:\\',
// dir: 'C:\\path\\dir',
// base: 'file.txt',
// ext: '.txt',
// name: 'file' }

┌─────────────────────┬────────────┐
│ dir │ base │
├──────┬ ├──────┬─────┤
│ root │ │ name │ ext │
" C:\ path\dir \ file .txt "
└──────┴──────────────┴──────┴─────┘
(All spaces in the "" line should be ignored. They are purely for formatting.)

A TypeError is thrown if path is not a string.

path.posix
<Object>

The path.posix property provides access to POSIX specific implementations of the path methods.

The API is accessible via require('path').posix or require('path/posix') .
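
For example, path.posix.join() always produces forward-slash paths, even when the code runs on Windows (a small illustrative sketch):

const path = require('path');

path.posix.join('foo', 'bar', 'baz');
// Returns: 'foo/bar/baz' on every platform, including Windows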

path.relative(from, to)
from <string>

to <string>

Returns: <string>

The path.relative() method returns the relative path from from to to based on the current working directory. If from and to each resolve to the same path (after calling
path.resolve() on each), a zero-length string is returned.

If a zero-length string is passed as from or to , the current working directory will be used instead of the zero-length strings.
For example, on POSIX:

path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb');
// Returns: '../../impl/bbb'

On Windows:

path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb');
// Returns: '..\\..\\impl\\bbb'

A TypeError is thrown if either from or to is not a string.

path.resolve([...paths])
...paths <string> A sequence of paths or path segments

Returns: <string>

The path.resolve() method resolves a sequence of paths or path segments into an absolute path.

The given sequence of paths is processed from right to left, with each subsequent path prepended until an absolute path is constructed. For instance, given the sequence of path segments:
/foo , /bar , baz , calling path.resolve('/foo', '/bar', 'baz') would return /bar/baz because 'baz' is not an absolute path but '/bar' + '/' + 'baz' is.

If, after processing all given path segments, an absolute path has not yet been generated, the current working directory is used.

The resulting path is normalized and trailing slashes are removed unless the path is resolved to the root directory.

Zero-length path segments are ignored.

If no path segments are passed, path.resolve() will return the absolute path of the current working directory.

path.resolve('/foo/bar', './baz');
// Returns: '/foo/bar/baz'

path.resolve('/foo/bar', '/tmp/file/');
// Returns: '/tmp/file'

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif');


// If the current working directory is /home/myself/node,
// this returns '/home/myself/node/wwwroot/static_files/gif/image.gif'
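
The no-argument case described above can be shown the same way (a small sketch; the directory shown just continues the previous example):

path.resolve();
// If the current working directory is /home/myself/node,
// this returns '/home/myself/node'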
A TypeError is thrown if any of the arguments is not a string.

path.sep
<string>

Provides the platform-specific path segment separator:

\ on Windows

/ on POSIX

For example, on POSIX:

'foo/bar/baz'.split(path.sep);
// Returns: ['foo', 'bar', 'baz']

On Windows:

'foo\\bar\\baz'.split(path.sep);
// Returns: ['foo', 'bar', 'baz']

On Windows, both the forward slash ( / ) and backward slash ( \ ) are accepted as path segment separators; however, the path methods only add backward slashes ( \ ).

path.toNamespacedPath(path)
path <string>

Returns: <string>

On Windows systems only, returns an equivalent namespace-prefixed path for the given path . If path is not a string, path will be returned without modifications.

This method is meaningful only on Windows systems. On POSIX systems, the method is non-operational and always returns path without modifications.
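
For illustration, a small sketch using path.win32 explicitly so it behaves the same from any platform (the output follows the long-path prefixing behavior described above):

path.win32.toNamespacedPath('C:\\foo\\bar');
// Returns: '\\\\?\\C:\\foo\\bar'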

path.win32
<Object>

The path.win32 property provides access to Windows-specific implementations of the path methods.

The API is accessible via require('path').win32 or require('path/win32') .
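
For example, this allows Windows-style paths to be handled correctly from any platform (a small sketch):

path.win32.parse('C:\\path\\dir\\file.txt').base;
// Returns: 'file.txt'

path.posix.parse('C:\\path\\dir\\file.txt').base;
// Returns: 'C:\\path\\dir\\file.txt' (backslashes are not separators in POSIX)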



How do I get the path to the current script with Node.js?



How would I get the path to the script in Node.js?

I know there's process.cwd , but that only refers to the directory where the script was called, not the directory of the script itself. For instance, say I'm in /home/kyle/ and I run the following command:

node /home/kyle/some/dir/file.js

If I call process.cwd() , I get /home/kyle/ , not /home/kyle/some/dir/ . Is there a way to get that directory?



nodejs.org/docs/latest/api/globals.html the documentation link of the accepted answer. – allenhwkim Apr 12 '13 at 15:41

14 Answers

I found it after looking through the documentation again. What I was looking for were the __filename and __dirname module-level
variables.
__filename is the file name of the current module. This is the resolved absolute path of the current module file.
(ex: /home/kyle/some/dir/file.js )
__dirname is the directory name of the current module. (ex: /home/kyle/some/dir )


If you want only the directory name and not the full path, you might do something like this: function getCurrentDirectoryName() { var fullPath = __dirname; var path = fullPath.split('/'); var cwd = path[path.length-1]; return cwd; } – Anthony Martin Oct 30 '13 at 20:34

@AnthonyMartin __dirname.split("/").pop() – 19h Mar 30 '14 at 20:13

For those trying @apx solution (like I did:), this solution does not work on Windows. – Laoujin May 7 '15 at 19:33

Or simply __dirname.split(path.sep).pop() – Burgi Jun 11 '15 at 10:53

Or require('path').basename(__dirname); – Vyacheslav Cotruta Oct 5 '15 at 9:03

So basically you can do this:

fs.readFile(path.resolve(__dirname, 'settings.json'), 'UTF-8', callback);

Use resolve() instead of concatenating with '/' or '\' else you will run into cross-platform issues.

Note: __dirname is the local path of the module or included script. If you are writing a plugin which needs to know the path of the main
script it is:

require.main.filename

or, to just get the folder name:

require('path').dirname(require.main.filename)


If your goal is just to parse and interact with the json file, you can often do this more easily via var settings = require('./settings.json') . Of course, it's synchronous fs IO, so don't do it at run-time, but at startup time it's fine, and once it's loaded, it'll be cached. – isaacs May 9 '12 at 18:26

@Marc Thanks! For a while now I was hacking my way around the fact that __dirname is local to each module. I have a nested structure in my library and need to know in several places the root of my app. Glad I know how to do this now :D – Thijs Koerselman Feb 28 '13 at 14:34

Node V8: path.dirname(process.mainModule.filename) – wayofthefuture Aug 26 '17 at 11:47

If you don't consider windows to be a real platform, can we skip resolve? BSD, Macos, linux, tizen, symbian, Solaris, android, flutter, webos all use / right? – Ray Foss Feb 27 '19 at 18:18

This no longer works with ES modules. – Dan Dascalescu Apr 25 '19 at 0:38

This command returns the current directory:

var currentPath = process.cwd();

For example, to use the path to read the file:

var fs = require('fs');
fs.readFile(process.cwd() + "\\text.txt", function (err, data) {
  if (err)
    console.log(err);
  else
    console.log(data.toString());
});


For those who didn't understand Asynchronous and Synchronous, see this link... stackoverflow.com/a/748235/5287072 – DarckBlezzer Feb 3 '17 at 17:33

this is exactly what the OP doesn't want... the request is for the path of the executable script! – caesarsol Mar 29 '18 at 9:10

Current directory is a very different thing. If you run something like cd /foo; node bar/test.js , current directory would be /foo , but the script is located in /foo/bar/test.js . – rjmunro Jul 5 '18 at 11:20

It's not a good answer. It muddles the logic, because this can be a much shorter path than you expect. – kris_IV Apr 9 '19 at 11:31

Why would you ever do this; if the file were relative to the current directory you could just read text.txt and it would work, you don't need to construct the absolute path – Michael Mrozek Oct 3 '19 at 3:40

Use __dirname!!

__dirname

The directory name of the current module. This is the same as the path.dirname() of the __filename .

Example: running node example.js from /Users/mjr

console.log(__dirname);
// Prints: /Users/mjr
console.log(path.dirname(__filename));
// Prints: /Users/mjr

https://nodejs.org/api/modules.html#modules_dirname

For ESModules you would want to use: import.meta.url


This survives symlinks too. So if you create a bin and need to find a file, eg path.join(__dirname, "../example.json"); it will still work when your binary is linked in node_modules/.bin – Jason Apr 17 '18 at 17:12

Not only was this answer given years earlier, it also no longer works with ES modules. – Dan Dascalescu Apr 25 '19 at 0:39

When it comes to the main script it's as simple as:


process.argv[1]

From the Node.js documentation:

process.argv
An array containing the command line arguments. The first element will be 'node', the second element will be the path to
the JavaScript file. The next elements will be any additional command line arguments.

If you need to know the path of a module file then use __filename.
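
To make the quoted documentation concrete, a small sketch (the script path and flag are made up for illustration):

// Running: node /home/kyle/some/dir/file.js --verbose
console.log(process.argv[0]); // path to the node executable
console.log(process.argv[1]); // '/home/kyle/some/dir/file.js'
console.log(process.argv[2]); // '--verbose'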


Could the downvoter please explain why this is not recommended? – Tamlyn Jan 15 '16 at 16:57

@Tamlyn Maybe because process.argv[1] applies only to the main script while __filename points to the module file being executed. I updated my answer to emphasize the difference. Still, I see nothing wrong in using process.argv[1] . Depends on one's requirements. – Lukasz Wiktor Jan 16 '16 at 6:40

If the main script was launched with a node process manager like pm2, process.argv[1] will point to the executable of the process manager /usr/local/lib/node_modules/pm2/lib/ProcessContainerFork.js – user3002996 Mar 1 '17 at 11:28

@LukaszWiktor Thanks a lot! Works perfectly with a custom Node.js CLI :-) – bgrand-ch Mar 31 at 16:06

Node.js 10 supports ECMAScript modules, where __dirname and __filename are no longer available.

Then to get the path to the current ES module one has to use:

import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);

And for the directory containing the current module:



import { dirname } from 'path';


import { fileURLToPath } from 'url';

const __dirname = dirname(fileURLToPath(import.meta.url));


How would I know if I'm writing an ES module or not? Is it just a matter of which Node version I'm running, or if I'm using import/export keywords? – Ed Brannin Apr 18 '19 at 19:42

ES modules are available only with the --experimental-modules flag. – Nickensoul May 7 '19 at 16:01

--experimental-modules is only required if you are running node version < 13.2. Just name the file .mjs rather than .js – Brent Apr 12 '20 at 19:56

Thanks, that solved it for me! It looks to me that it'd be great for back-compatibility support. – Gal Grünfeld Dec 1 '20 at 10:27

var settings = JSON.parse(
  require('fs').readFileSync(
    require('path').resolve(__dirname, 'settings.json'),
    'utf8'));


Just a note, as of node 0.5 you can just require a JSON file. Of course that wouldn't answer the question. – Kevin Cox Apr 9 '13 at 21:18

__dirname no longer works with ES modules. – Dan Dascalescu Apr 25 '19 at 0:40
Every Node.js program has some global variables in its environment, which represent some information about your process; one of them is __dirname .


Not only was this answer given years earlier, __dirname no longer works with ES modules. – Dan Dascalescu Apr 25 '19 at 0:40

It's about NodeJs 10, but this answer was published in 2016. – Hazarapet Tunanyan May 3 '19 at 7:59

I know this is pretty old, and the original question I was responding to is marked as duplicate and directed here, but I ran into an issue trying to get jasmine-reporters to work and didn't like the idea that I had to downgrade in order for it to work. I found out that jasmine-reporters wasn't resolving the savePath correctly and was actually putting the reports folder output in the jasmine-reporters directory instead of the root directory of where I ran gulp. In order to make this work correctly I ended up using process.env.INIT_CWD to get the initial Current Working Directory, which should be the directory where you ran gulp. Hope this helps someone.

var reporters = require('jasmine-reporters');


var junitReporter = new reporters.JUnitXmlReporter({
savePath: process.env.INIT_CWD + '/report/e2e/',
consolidateAll: true,
captureStdout: true
});


You can use process.env.PWD to get the current app folder path.

OP asks for the requested "path to the script". PWD, which stands for something like Process Working Directory, is not that. Also, the "current app" phrasing is misleading. – dmcontador Sep 8 '17 at 6:48

If you are using pkg to package your app, you'll find this expression useful:

appDirectory = require('path').dirname(process.pkg ? process.execPath : (require.main ? require.main.filename : process.argv[0]));

process.pkg tells if the app has been packaged by pkg .


process.execPath holds the full path of the executable, which is /usr/bin/node or similar for direct invocations of scripts ( node
test.js ), or the packaged app.

require.main.filename holds the full path of the main script, but it's empty when Node runs in interactive mode.
__dirname holds the full path of the current script, so I'm not using it (although it may be what OP asks; then better use
appDirectory = process.pkg ? require('path').dirname(process.execPath) : (__dirname || require('path').dirname(process.argv[0])); noting that in interactive mode __dirname is empty).

For interactive mode, use either process.argv[0] to get the path to the Node executable or process.cwd() to get the current
directory.


Use the basename method of the path module:

var path = require('path');


var filename = path.basename(__filename);
console.log(filename);

Here is the documentation the above example is taken from.


As Dan pointed out, Node is working on ECMAScript modules with the "--experimental-modules" flag. Node 12 still supports
__dirname and __filename as above.

If you are using the --experimental-modules flag, there is an alternative approach.

The alternative is to get the path to the current ES module:

const __filename = new URL(import.meta.url).pathname;

And for the directory containing the current module:

import path from 'path';

const __dirname = path.dirname(new URL(import.meta.url).pathname);


index.js within any folder containing modules to export

const entries = {};

for (const aFile of require('fs').readdirSync(__dirname, { withFileTypes: true })
    .filter(ent => ent.isFile() && ent.name !== 'index.js')) {
  const [ name, suffix ] = aFile.name.split('.');
  entries[name] = require(`./${aFile.name}`);
}

module.exports = entries;

This will find all files in the root of the current directory, require and export every file present with the same export name as the filename stem.
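
A hypothetical usage sketch (the folder and file names are made up for illustration): if such an index.js sits in ./models alongside user.js and post.js, then:

// Hypothetical usage, assuming ./models contains user.js, post.js and the index.js above:
const models = require('./models');
// models.user and models.post now hold the exports of user.js and post.js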

If you want something more like $0 in a shell script, try this:

var path = require('path');

var command = getCurrentScriptPath();

console.log(`Usage: ${command} <foo> <bar>`);

function getCurrentScriptPath () {
  // Relative path from current working directory to the location of this script
  var pathToScript = path.relative(process.cwd(), __filename);

  // Check if current working dir is the same as the script
  if (process.cwd() === __dirname) {
    // E.g. "./foobar.js"
    return '.' + path.sep + pathToScript;
  } else {
    // E.g. "foo/bar/baz.js"
    return pathToScript;
  }
}


__dirname and __filename are no longer available with ES modules. – Dan Dascalescu Apr 25 '19 at 0:41

Create A REST API With JSON Server

Sebastian Eschweiler · Feb 26, 2017 · 6 min read

This post has been published first on CodingTheSmartWay.com.

A common task for front-end developers is to simulate a backend REST service to deliver some data in JSON format to the front-end application and make sure everything is working as expected.

Of course you can set up a full backend server, e.g. by using Node.js, Express and MongoDB. However, this takes some time, and a much simpler approach can help to speed up front-end development.

JSON Server is a simple project that helps you to set up a REST API with CRUD operations very fast. The project website can be found at https://github.com/typicode/json-server.

In the following you’ll learn how to set up JSON Server and publish a sample REST API. Furthermore you’ll see how to use another library, Faker.js, to generate fake data for the REST API which is exposed by JSON Server.

Installing JSON Server


JSON Server is available as an NPM package. The installation can be done by
using the Node.js package manager:
$ npm install -g json-server

By adding the -g option we make sure that the package is installed globally
on your system.

JSON File
Now let’s create a new JSON file with name db.json. This file contains the
data which should be exposed by the REST API. For objects contained in the
JSON structure, CRUD endpoints are created automatically. Take a look at
the following sample db.json file:

{
"employees": [
{
"id": 1,
"first_name": "Sebastian",
"last_name": "Eschweiler",
"email": "[email protected]"
},
{
"id": 2,
"first_name": "Steve",
"last_name": "Palmer",
"email": "[email protected]"
},
{
"id": 3,
"first_name": "Ann",
"last_name": "Smith",
"email": "[email protected]"
}
]
}

The JSON structure consists of one employees collection which has three data sets assigned. Each employee object consists of four properties: id, first_name, last_name and email.

Running The Server


Let’s start JSON server by executing the following command:

$ json-server --watch db.json

As a parameter we need to pass over the file containing our JSON structure (db.json). Furthermore we're using the --watch parameter. By using this parameter we're making sure that the server is started in watch mode, which means that it watches for file changes and updates the exposed API accordingly.
Now we can open the URL http://localhost:3000/employees in the browser and we'll get the following result. From the output you can see that the employees resource has been recognized correctly. Now you can click on the employees link, and an HTTP GET request to http://localhost:3000/employees shows the following result.
The following HTTP endpoints are created automatically by JSON server:

GET /employees
GET /employees/{id}
POST /employees
PUT /employees/{id}
PATCH /employees/{id}
DELETE /employees/{id}

If you make POST, PUT, PATCH or DELETE requests, changes will be


automatically saved to db.json. A POST, PUT or PATCH request should
include a Content-Type: application/json header to use the JSON in the
request body. Otherwise it will result in a 200 OK but without changes
being made to the data.

It's possible to extend URLs with further parameters. E.g. you can apply filtering by using URL parameters, as you can see in the following:

http://localhost:3000/employees?first_name=Sebastian

This returns just one employee object as a result. Or just perform a full-text search over all properties:

http://localhost:3000/employees?q=codingthesmartway

For a full list of available URL parameters take a look at the JSON server
documentation: https://github.com/typicode/json-server

Testing API Endpoints With POSTman


Initiating a GET request is easy by simply using the browser. For initiating
other types of HTTP requests you can make use of an HTTP client tool like
Postman (https://www.getpostman.com). Postman is available for MacOS,
Windows and Linux. Furthermore Postman is available as a Chrome App.
Get Request
The Postman user interface is easy to use. To initiate a GET request fill out
the form as you can see in the following screenshot. Click the Send button
and you’ll receive the response in JSON format:
DELETE REQUEST
A corresponding delete request can be seen in the following screenshot:

POST REQUEST
To create a new employee we need to perform a post request and set the
body content type to JSON (application/json). The new employee object is
entered in JSON format in the body data section:
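
If you'd rather script the same request instead of using Postman, here is a minimal sketch using Node's built-in http module (the employee data is made up for illustration, and json-server is assumed to be running on port 3000):

// post-employee.js
var http = require('http');

var employee = JSON.stringify({
  first_name: 'Jane',
  last_name: 'Doe',
  email: 'jane.doe@example.com'
});

var req = http.request({
  hostname: 'localhost',
  port: 3000,
  path: '/employees',
  method: 'POST',
  headers: {
    // Content-Type is required, otherwise the body is ignored (see above)
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(employee)
  }
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () { console.log(res.statusCode, body); });
});

req.write(employee);
req.end();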
PUT REQUEST
If you want to update or change an existing employee record you can use an HTTP PUT request:
Mocking Data with Faker.js
So far we’ve entered data exposed by the API manually in a JSON file.
However, if you need a larger amount of data the manual way can be
cumbersome. An easy solution to this problem is to use the Faker.js
(https://github.com/marak/Faker.js/) library to generate fake data.
Integration of Faker.js into JSON server is easy. Just follow the steps below:

First, let’s initialize a new NPM project in the current repository:

$ npm init

Next, install Faker.js by using the NPM package faker:

$ npm install faker

Faker.js is installed to the node_modules folder. Create another file employees.js and insert the following JavaScript code:

// employees.js
var faker = require('faker')

function generateEmployees () {
var employees = []
for (var id = 0; id < 50; id++) {
var firstName = faker.name.firstName()
var lastName = faker.name.lastName()
var email = faker.internet.email()

employees.push({
"id": id,
"first_name": firstName,
"last_name": lastName,
"email": email
})
}

return { "employees": employees }


}

module.exports = generateEmployees

We’re implementing the function generateEmployees() to generate a JSON object containing 50 employees. To obtain the fake data for first name, last name and email we’re using the following methods from the Faker.js library:

faker.name.firstName()

faker.name.lastName()

faker.internet.email()
JSON server requires that we finally export the generateEmployees() function
which is responsible for fake data generation. This is done by using the
following line of code:

module.exports = generateEmployees

Having added that export, we're able to pass the file employees.js directly to the
json-server command:

$ json-server employees.js

Now the exposed REST API gives you access to all 50 employee data sets
created with Faker.js.

Video Tutorial
This video tutorial contains the steps described in the text above:

Create A REST API With JSON Server





dfsq / json-server-init

Generator of JSON files to work with json-server.

MIT License


JSON Server Init

Generate JSON database for JSON server using Filltext.com as random JSON data source.

Install

$ npm install -g json-server-init


Commands
create - Create new JSON database.
collection - Add new collection to an existing database file (todo).

Options
Possible options are:

--name, -n - Specify name of the database JSON file to create (in case of create command) or use (collection command).
Default name if not provided is "db.json".
--help, -h - Show help.
--version, -v - Show version number.

For example, to create "dev.json" schema file:

$ json-server-init create -n dev.json

Commands overview

create
Command produces several prompts.

Collection prompt

Prompt for collection name and number of rows renders something like this:

> Collection name and number of rows, 5 if omitted (ex: posts 10):
Valid input would be a new collection name with an optional number separated by a space, indicating how many rows should be generated for this collection. For example, users 10 will generate collection "users" with 10 records in it, sessions will result in collection "sessions" with the default 5 records, etc.

Fields prompt

After the collection name is entered, one would need to configure what fields the collection should have:

>> What fields should "users" have?


Comma-separated fieldname:fieldtype pairs (ex: id:index, username:username)

Each entry must have a specific format: fieldname:fieldtype .

fieldname - name of the field, only alpha-numeric characters.


fieldtype - type of the data. Corresponds to the types the filltext generator uses for fields; refer to the entire list for possible values.
Concatenation of multiple fields is possible with the + operator.

For example, to generate users collection with four fields: id, username, name and age, one could enter this command:

>> What fields should "users" have?


Comma-separated fieldname:fieldtype pairs (ex: id:index, username:username)
id:index, username:username, name:firstName+lastName, age:numberRange|18,80

Add another

You can add as many collections as necessary: after the fields prompt there is a confirmation asking whether more collections need to be created:

> Add another collection? (y/n) n

If "y" is entered flow repeats "Collection prompt" step, otherwise it fetches JSON data and saves it to the file.
collection
TODO...

Example
Here is how a typical workflow looks with the create command:

$ json-server-init create
> Collection name and number of rows, 5 if omitted (ex: posts 10): users 2
>> What fields should "users" have?
Comma-separated fieldname:fieldtype pairs (ex: id:index, username:username)
id:index, username:username, motto:lorem|5
> Add another collection? (y/n) n
db.json saved.

The above will produce a db.json file with content similar to this:

{
"users": [
{
"id": 1,
"username": "RGershowitz",
"motto": "curabitur et magna placerat tellus"
},
{
"id": 2,
"username": "NMuroski",
"motto": "ante nullam dolor sit placerat"
}
]
}
Now you can start json-server:

$ json-server --watch db.json

License
MIT License © Aliaksandr Astashenkau

json-server-extension

json-server is great for stub-server usage, but in my opinion there were some caveats that I tried to solve in this package.

What this package gives you:

- splitting to static files - json-server can serve only a single file, but in medium/large applications that is not ideal; by using this package you can split your JSON object across multiple files
- dynamic generation - with json-server you can generate the whole file; now you can create multiple generated objects decoupled from each other, and even combine static and generated files

Example
A full example can be found here: https://github.com/maty21/json-server-extension-example

Install
npm i json-server-extension

init example

const jsonServer = require('json-server');
const _jsonExtender = require('./jsonExtender');

// options:
// filePath: full path for the combined object
// generatedPath: the path where the generated files will be found
// staticPath: the path where the static files will be found
const jsonExtender = new _jsonExtender({
  filePath: './db_extends.json',
  generatedPath: './generated',
  staticPath: './static'
});

// register accepts an array of generators or a path to the generator scripts
// const funcs = Object.keys(generators).map(key => generators[key])
jsonExtender.register('../../../generators');

jsonExtender.generate().then((data) => {
  console.log(`wow ${data}`);
  var server = jsonServer.create();
  var router = jsonServer.router('./db_extends.json');
  var middlewares = jsonServer.defaults();

  server.use(middlewares);
  server.use(router);
  server.listen(4000, function () {
    console.log('JSON Server is running');
  });
}).catch((err) => { console.log(err); });

generator Example

const amount = 100;

const func = next => create => {
  const path = `feed/feedList.json`;
  const data = (amount) => {
    let temp = [];
    for (let i = 0; i < amount; i++) {
      temp.push({
        id: `${i}N12134`,
        newNotificationCount: i * 3,
        isRead: (i % 2 == 0),
        isStarMark: (i % 4 == 0),
        iconType: "SocialNotifications",
        description: i + ": this is a new feed ",
        date: new Date(Date.now()).toLocaleString()
      });
    }
    return temp;
  };
  create({ data: { feed: data(amount) }, path: path });
  next(create);
};

module.exports = func;

api

constructor

constructor({ filePath: 'string', generatedPath: 'string', staticPath: 'string' })

filePath - full path for the combined object
generatedPath - the path where the generated files will be found (default: './generated')
staticPath - the path where the static files will be found (default: './static')

register

register('path name') / register([...generator scripts])

register('path name') - a path where the generator scripts will be found; the package will instantiate the scripts automatically
register([...generator scripts]) - an array of your generators, after requiring them manually

generate

generate(isRun[default:true]) returns a promise

isRun - gives the ability to not regenerate the db.json each time, which is good when you want to keep the state after you close the process; the promise will receive the same data, so you will not have to change the code

promise
resolve - { files: array of combined files, filePath: the combined file path }
reject - error

generator

const func = next => create => {} - the generator should be initiated as follows: first you will have to call create (this is a sync function) and then next

create({ data: { feed: generatedObject }, path: path })

data - the generated data, where the name of the property will be the routing name (in this case feed)
path - a relative path under the generated path that you set in the constructor, where you wish to put the output
next(create) - just pass the create function there so its reference will be passed on in the pipeline

json-server
0.16.3 • Public • Published 5 months ago


JSON Server

Get a full fake REST API with zero coding in less than 30 seconds (seriously)

Created with <3 for front-end developers who need a quick back-end for prototyping and mocking.

Egghead.io free video tutorial - Creating demo APIs with json-server


JSONPlaceholder - Live running version
My JSON Server - no installation required, use your own data

See also:

🐶 husky - Git hooks made easy


🏨 hotel - developer tool with local .localhost domain and https out of the box


Table of contents
Getting started
Routes
Plural routes
Singular routes
Filter
Paginate
Sort
Slice
Operators
Full-text search
Relationships
Database
Homepage
Extras
Static file server
Alternative port

Access from anywhere


Remote schema
Generate random data
HTTPS
Add custom routes
Add middlewares
CLI usage
Module
Simple example
Custom routes example
Access control example
Custom output example
Rewriter example
Mounting JSON Server on another endpoint example
API
Deployment
Links
Video
Articles
Third-party tools
License

Getting started

Install JSON Server

npm install -g json-server

Create a db.json file with some data

{
"posts": [
{ "id": 1, "title": "json-server", "author": "typicode" }
],
"comments": [
{ "id": 1, "body": "some comment", "postId": 1 }
],
"profile": { "name": "typicode" }
}

Start JSON Server

json-server --watch db.json

Now if you go to http://localhost:3000/posts/1, you'll get

{ "id": 1, "title": "json-server", "author": "typicode" }



Also when doing requests, it's good to know that:


If you make POST, PUT, PATCH or DELETE requests, changes will be automatically and safely saved to db.json using
lowdb.
Your request body JSON should be object enclosed, just like the GET output. (for example {"name": "Foobar"} )
Id values are not mutable. Any id value in the body of your PUT or PATCH request will be ignored. Only a value set in a
POST request will be respected, but only if not already taken.
A POST, PUT or PATCH request should include a Content-Type: application/json header to use the JSON in the
request body. Otherwise it will return a 2XX status code, but without changes being made to the data.

Routes
Based on the previous db.json file, here are all the default routes. You can also add other routes using --routes .

Plural routes

GET /posts
GET /posts/1
POST /posts
PUT /posts/1
PATCH /posts/1
DELETE /posts/1

Singular routes

GET /profile
POST /profile
PUT /profile
PATCH /profile

Filter
Use . to access deep properties

GET /posts?title=json-server&author=typicode
GET /posts?id=1&id=2
GET /comments?author.name=typicode

Paginate
Use _page and optionally _limit to paginate returned data.

In the Link header you'll get first , prev , next and last links.

GET /posts?_page=7
GET /posts?_page=7&_limit=20

10 items are returned by default

Sort
Add _sort and _order (ascending order by default)

GET /posts?_sort=views&_order=asc
GET /posts/1/comments?_sort=votes&_order=asc

For multiple fields, use the following format:

GET /posts?_sort=user,views&_order=desc,asc

Slice
Add _start and _end or _limit (an X-Total-Count header is included in the response)

GET /posts?_start=20&_end=30
GET /posts/1/comments?_start=20&_end=30
GET /posts/1/comments?_start=20&_limit=10

Works exactly as Array.slice (i.e. _start is inclusive and _end exclusive)

Operators
Add _gte or _lte for getting a range

GET /posts?views_gte=10&views_lte=20

Add _ne to exclude a value

GET /posts?id_ne=1

Add _like to filter (RegExp supported)

GET /posts?title_like=server
Full-text search
Add q

GET /posts?q=internet

Relationships
To include children resources, add _embed

GET /posts?_embed=comments
GET /posts/1?_embed=comments

To include parent resource, add _expand

GET /comments?_expand=post
GET /comments/1?_expand=post

To get or create nested resources (by default one level, add custom routes for more)

GET /posts/1/comments
POST /posts/1/comments

Database

GET /db

Homepage
Returns default index file or serves ./public directory

GET /

Extras
Static file server
You can use JSON Server to serve your HTML, JS and CSS, simply create a ./public directory or use --static to set a
different static files directory.

mkdir public
echo 'hello world' > public/index.html
json-server db.json

json-server db.json --static ./some-other-dir

Alternative port
You can start JSON Server on other ports with the --port flag:

$ json-server --watch db.json --port 3004

Access from anywhere


You can access your fake API from anywhere using CORS and JSONP.
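
For example, a page served from any other origin can read the fake API directly, because the default middlewares enable CORS (a minimal browser-side sketch; json-server is assumed to be listening on localhost:3000):

fetch('http://localhost:3000/posts')
  .then(function (res) { return res.json(); })
  .then(function (posts) { console.log(posts); });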
Remote schema
You can load remote schemas.

$ json-server http://example.com/file.json
$ json-server http://jsonplaceholder.typicode.com/db

Generate random data


Using JS instead of a JSON file, you can create data programmatically.

// index.js
module.exports = () => {
const data = { users: [] }
// Create 1000 users
for (let i = 0; i < 1000; i++) {
data.users.push({ id: i, name: `user${i}` })
}
return data
}

$ json-server index.js

Tip use modules like Faker, Casual, Chance or JSON Schema Faker.

HTTPS
There are many ways to set up SSL in development. One simple way is to use hotel.
Add custom routes
Create a routes.json file. Pay attention to start every route with / .

{
"/api/*": "/$1",
"/:resource/:id/show": "/:resource/:id",
"/posts/:category": "/posts?category=:category",
"/articles\\?id=:id": "/posts/:id"
}

Start JSON Server with --routes option.

json-server db.json --routes routes.json

Now you can access resources using additional routes.

/api/posts # → /posts
/api/posts/1 # → /posts/1
/posts/1/show # → /posts/1
/posts/javascript # → /posts?category=javascript
/articles?id=1 # → /posts/1

Add middlewares
You can add your middlewares from the CLI using --middlewares option:

// hello.js
module.exports = (req, res, next) => {
res.header('X-Hello', 'World')
next()
}

json-server db.json --middlewares ./hello.js


json-server db.json --middlewares ./first.js ./second.js

CLI usage

json-server [options] <source>

Options:
--config, -c Path to config file [default: "json-server.json"]
--port, -p Set port [default: 3000]
--host, -H Set host [default: "localhost"]
--watch, -w Watch file(s) [boolean]
--routes, -r Path to routes file
--middlewares, -m Paths to middleware files [array]
--static, -s Set static files directory
--read-only, --ro Allow only GET requests [boolean]
--no-cors, --nc Disable Cross-Origin Resource Sharing [boolean]
--no-gzip, --ng Disable GZIP Content-Encoding [boolean]
--snapshots, -S Set snapshots directory [default: "."]
--delay, -d Add delay to responses (ms)
--id, -i Set database id property (e.g. _id) [default: "id"]
--foreignKeySuffix, --fks Set foreign key suffix, (e.g. _id as in post_id)

[default: "Id"]
--quiet, -q Suppress log messages from output [boolean]
--help, -h Show help [boolean]
--version, -v Show version number [boolean]

Examples:
json-server db.json
json-server file.js
json-server http://example.com/db.json

https://github.com/typicode/json-server

You can also set options in a json-server.json configuration file.

{
"port": 3000
}

Module
If you need to add authentication, validation, or any behavior, you can use the project as a module in combination with
other Express middlewares.

Simple example
$ npm install json-server --save-dev

// server.js
const jsonServer = require('json-server')
const server = jsonServer.create()
const router = jsonServer.router('db.json')
const middlewares = jsonServer.defaults()

server.use(middlewares)
server.use(router)
server.listen(3000, () => {
console.log('JSON Server is running')
})

$ node server.js

The path you provide to the jsonServer.router function is relative to the directory from where you launch your node
process. If you run the above code from another directory, it’s better to use an absolute path:

const path = require('path')


const router = jsonServer.router(path.join(__dirname, 'db.json'))

For an in-memory database, simply pass an object to jsonServer.router() .

Please note also that jsonServer.router() can be used in existing Express projects.
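
For example, a minimal in-memory setup might look like this (a sketch; the seed data is made up, and changes are not persisted to disk):

const jsonServer = require('json-server')
const server = jsonServer.create()
// Passing an object instead of a file path keeps the database in memory
const router = jsonServer.router({ posts: [{ id: 1, title: 'in-memory' }] })

server.use(jsonServer.defaults())
server.use(router)
server.listen(3000, () => {
  console.log('JSON Server (in-memory) is running')
})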

Custom routes example

Let's say you want a route that echoes query parameters and another one that sets a timestamp on every resource created.

const jsonServer = require('json-server')


const server = jsonServer.create()
const router = jsonServer.router('db.json')
const middlewares = jsonServer.defaults()

// Set default middlewares (logger, static, cors and no-cache)


server.use(middlewares)

// Add custom routes before JSON Server router


server.get('/echo', (req, res) => {
res.jsonp(req.query)
})

// To handle POST, PUT and PATCH you need to use a body-parser


// You can use the one used by JSON Server
server.use(jsonServer.bodyParser)
server.use((req, res, next) => {
if (req.method === 'POST') {
req.body.createdAt = Date.now()
}
// Continue to JSON Server router
next()
})

// Use default router


server.use(router)
server.listen(3000, () => {
console.log('JSON Server is running')
})

Access control example

const jsonServer = require('json-server')


const server = jsonServer.create()
const router = jsonServer.router('db.json')
const middlewares = jsonServer.defaults()

server.use(middlewares)
server.use((req, res, next) => {
if (isAuthorized(req)) { // add your authorization logic here
next() // continue to JSON Server router
} else {
res.sendStatus(401)
}
})
server.use(router)
server.listen(3000, () => {
console.log('JSON Server is running')
})

Custom output example

To modify responses, overwrite router.render method:

// In this example, returned resources will be wrapped in a body property


router.render = (req, res) => {
res.jsonp({
body: res.locals.data
})
}

You can set your own status code for the response:

// In this example we simulate a server side error response


router.render = (req, res) => {
res.status(500).jsonp({
error: "error message here"
})
}

Rewriter example

To add rewrite rules, use jsonServer.rewriter() :

// Add this before server.use(router)
server.use(jsonServer.rewriter({
'/api/*': '/$1',
'/blog/:resource/:id/show': '/:resource/:id'
}))

Mounting JSON Server on another endpoint example

Alternatively, you can also mount the router on /api .

server.use('/api', router)

API

jsonServer.create()

Returns an Express server.

jsonServer.defaults([options])

Returns middlewares used by JSON Server.

options
static path to static files
logger enable logger middleware (default: true)
bodyParser enable body-parser middleware (default: true)
noCors disable CORS (default: false)
readOnly accept only GET requests (default: false)

jsonServer.router([path|object])

Returns JSON Server router.

Deployment
You can deploy JSON Server. For example, JSONPlaceholder is an online fake API powered by JSON Server and running on
Heroku.

Links
Video

Creating Demo APIs with json-server on egghead.io

Articles

Node Module Of The Week - json-server


ng-admin: Add an AngularJS admin GUI to any RESTful API
Fast prototyping using Restangular and Json-server
Create a Mock REST API in Seconds for Prototyping your Frontend
No API? No Problem! Rapid Development via Mock APIs
Zero Code REST With json-server

Third-party tools

Grunt JSON Server


Docker JSON Server
JSON Server GUI
JSON file generator
JSON Server extension

License
MIT


Modes and Environment Variables

Modes
Mode is an important concept in Vue CLI projects. By default, there are three modes:

development  is used by  vue-cli-service serve


test  is used by  vue-cli-service test:unit
production  is used by  vue-cli-service build  and  vue-cli-service test:e2e

You can overwrite the default mode used for a command by passing the  --mode  option flag. For example, if you want to use
development variables in the build command:

vue-cli-service build --mode development

When running  vue-cli-service , environment variables are loaded from all corresponding files. If they don't contain
a  NODE_ENV  variable, it will be set accordingly. For example,  NODE_ENV  will be set to  "production"  in production mode,  "test"  in
test mode, and defaults to  "development"  otherwise.

Then  NODE_ENV  will determine the primary mode your app is running in - development, production or test - and consequently, what
kind of webpack config will be created.

With  NODE_ENV  set to "test" for example, Vue CLI creates a webpack config that is intended to be used and optimized for unit tests. It
doesn't process images and other assets that are unnecessary for unit tests.
Similarly,  NODE_ENV=development  creates a webpack configuration which enables HMR, doesn't hash assets or create vendor bundles in
order to allow for fast re-builds when running a dev server.

When you are running  vue-cli-service build , your  NODE_ENV  should always be set to "production" to obtain an app ready for
deployment, regardless of the environment you're deploying to.

NODE_ENV

If you have a default  NODE_ENV  in your environment, you should either remove it or explicitly set  NODE_ENV  when running  vue-
cli-service  commands.

Environment Variables
You can specify env variables by placing the following files in your project root:

.env # loaded in all cases


.env.local # loaded in all cases, ignored by git
.env.[mode] # only loaded in specified mode
.env.[mode].local # only loaded in specified mode, ignored by git

An env file simply contains key=value pairs of environment variables:

FOO=bar
VUE_APP_NOT_SECRET_CODE=some_value

WARNING
Do not store any secrets (such as private API keys) in your app!

Environment variables are embedded into the build, meaning anyone can view them by inspecting your app's files.

Note that only  NODE_ENV ,  BASE_URL , and variables that start with  VUE_APP_  will be statically embedded into the client
bundle with  webpack.DefinePlugin . This is to avoid accidentally exposing a private key on the machine that could have the same name.

For more detailed env parsing rules, please refer to the documentation of  dotenv . We also use dotenv-expand  for variable
expansion (available in Vue CLI 3.5+). For example:

FOO=foo
BAR=bar

CONCAT=$FOO$BAR # CONCAT=foobar

Loaded variables will become available to all  vue-cli-service  commands, plugins and dependencies.

Env Loading Priorities

An env file for a specific mode (e.g.  .env.production ) will take higher priority than a generic one (e.g.  .env ).

In addition, environment variables that already exist when Vue CLI is executed have the highest priority and will not be
overwritten by  .env  files.

.env  files are loaded at the start of  vue-cli-service . Restart the service after making changes.

Example: Staging Mode


Assuming we have an app with the following  .env  file:
VUE_APP_TITLE=My App

And the following  .env.staging  file:

NODE_ENV=production
VUE_APP_TITLE=My App (staging)

vue-cli-service build  builds a production app, loading  .env ,  .env.production  and  .env.production.local  if they are present;

vue-cli-service build --mode staging  builds a production app in staging mode,


using  .env ,  .env.staging  and  .env.staging.local  if they are present.

In both cases, the app is built as a production app because of the  NODE_ENV , but in the staging version,  process.env.VUE_APP_TITLE  is
overwritten with a different value.

Using Env Variables in Client-side Code


You can access env variables in your application code:

console.log(process.env.VUE_APP_NOT_SECRET_CODE)

During build,  process.env.VUE_APP_NOT_SECRET_CODE  will be replaced by the corresponding value. In the case
of  VUE_APP_NOT_SECRET_CODE=some_value , it will be replaced by  "some_value" .

In addition to  VUE_APP_*  variables, there are also two special variables that will always be available in your app code:

NODE_ENV  - this will be one of  "development" ,  "production"  or  "test"  depending on the mode the app is running in.
BASE_URL  - this corresponds to the  publicPath  option in  vue.config.js  and is the base path your app is deployed at.

All resolved env variables will be available inside  public/index.html  as discussed in HTML - Interpolation.

TIP

You can have computed env vars in your  vue.config.js  file. They still need to be prefixed with  VUE_APP_ . This is useful for
version info

process.env.VUE_APP_VERSION = require('./package.json').version

module.exports = {
// config
}

Local Only Variables


Sometimes you might have env variables that should not be committed into the codebase, especially if your project is hosted in a public
repository. In that case you should use an  .env.local  file instead. Local env files are ignored in  .gitignore  by default.

.local  can also be appended to mode-specific env files, for example  .env.development.local  will be loaded during development,
and is ignored by git.
Simple Configuration
The easiest way to tweak the webpack config is providing an object to the  configureWebpack  option in  vue.config.js :

// vue.config.js
module.exports = {
configureWebpack: {
plugins: [
new MyAwesomeWebpackPlugin()
]
}
}

The object will be merged into the final webpack config using webpack-merge .

WARNING

Some webpack options are set based on values in  vue.config.js  and should not be mutated directly. For example, instead of
modifying  output.path , you should use the  outputDir  option in  vue.config.js ; instead of modifying  output.publicPath ,
you should use the  publicPath  option in  vue.config.js . This is because the values in  vue.config.js  will be used in multiple
places inside the config to ensure everything works properly together.

If you need conditional behavior based on the environment, or want to directly mutate the config, use a function (which will be lazy
evaluated after the env variables are set). The function receives the resolved config as the argument. Inside the function, you can either
mutate the config directly, OR return an object which will be merged:

// vue.config.js
module.exports = {
configureWebpack: config => {
if (process.env.NODE_ENV === 'production') {
// mutate config for production...
} else {
// mutate for development...
}
}
}

Chaining (Advanced)
The internal webpack config is maintained using webpack-chain . The library provides an abstraction over the raw webpack config, with
the ability to define named loader rules and named plugins, and later "tap" into those rules and modify their options.

This allows us finer-grained control over the internal config. Below you will see some examples of common modifications done via
the  chainWebpack  option in  vue.config.js .

TIP

vue inspect will be extremely helpful when you are trying to access specific loaders via chaining.

Modifying Options of a Loader

// vue.config.js
module.exports = {
chainWebpack: config => {
config.module
.rule('vue')
.use('vue-loader')
.tap(options => {
// modify the options...
return options
})
}
}

TIP

For CSS related loaders, it's recommended to use css.loaderOptions instead of directly targeting loaders via chaining. This is
because there are multiple rules for each CSS file type and  css.loaderOptions  ensures you can affect all rules in one single
place.

Adding a New Loader

// vue.config.js
module.exports = {
chainWebpack: config => {
// GraphQL Loader
config.module
.rule('graphql')
.test(/\.graphql$/)
.use('graphql-tag/loader')
.loader('graphql-tag/loader')
.end()
// Add another loader
.use('other-loader')
.loader('other-loader')
.end()
}
}
Replacing Loaders of a Rule
If you want to replace an existing Base Loader , for example using  vue-svg-loader  to inline SVG files instead of loading the file:

// vue.config.js
module.exports = {
chainWebpack: config => {
const svgRule = config.module.rule('svg')

// clear all existing loaders.


// if you don't do this, the loader below will be appended to
// existing loaders of the rule.
svgRule.uses.clear()

// add replacement loader(s)


svgRule
.use('vue-svg-loader')
.loader('vue-svg-loader')
}
}

Modifying Options of a Plugin

// vue.config.js
module.exports = {
chainWebpack: config => {
config
.plugin('html')
.tap(args => {
return [/* new args to pass to html-webpack-plugin's constructor */]
})
}
}

You will need to familiarize yourself with webpack-chain's API and read some source code in order to understand how to leverage the full power of this option, but it gives you a more expressive and safer way to modify the webpack config than directly mutating values.

For example, say you want to change the default location


of  index.html  from  /Users/username/proj/public/index.html  to  /Users/username/proj/app/templates/index.html . By
referencing html-webpack-plugin you can see a list of options you can pass in. To change our template path we can pass in a new
template path with the following config:

// vue.config.js
module.exports = {
chainWebpack: config => {
config
.plugin('html')
.tap(args => {
args[0].template = '/Users/username/proj/app/templates/index.html'
return args
})
}
}

You can confirm that this change has taken place by examining the vue webpack config with the  vue inspect  utility, which we will
discuss next.

Inspecting the Project's Webpack Config


Since  @vue/cli-service  abstracts away the webpack config, it may be more difficult to understand what is included in the config,
especially when you are trying to make tweaks yourself.

vue-cli-service  exposes the  inspect  command for inspecting the resolved webpack config. The global  vue  binary also provides
the  inspect  command, and it simply proxies to  vue-cli-service inspect  in your project.

The command will print the resolved webpack config to stdout, which also contains hints on how to access rules and plugins via
chaining.

You can redirect the output into a file for easier inspection:

vue inspect > output.js

By default, the  inspect  command will show the output for the development config. To see the production configuration, you need to run

vue inspect --mode production > output.prod.js

Note the output is not a valid webpack config file, it's a serialized format only meant for inspection.

You can also inspect a subset of the config by specifying a path:

# only inspect the first rule


vue inspect module.rules.0

Or, target a named rule or plugin:

vue inspect --rule vue


vue inspect --plugin html
Finally, you can list all named rules and plugins:

vue inspect --rules


vue inspect --plugins

Using Resolved Config as a File


Some external tools may need access to the resolved webpack config as a file, for example IDEs or command line tools that expect a
webpack config path. In that case you can use the following path:

<projectRoot>/node_modules/@vue/cli-service/webpack.config.js

This file dynamically resolves and exports the exact same webpack config used in  vue-cli-service  commands, including those from
plugins and even your custom configurations.
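
Since the file is a regular CommonJS module, you can also load it from a quick Node script to poke at the resolved config programmatically (a sketch, run from the project root; the properties accessed are standard webpack config fields):

// inspect-config.js
const config = require('@vue/cli-service/webpack.config.js')

console.log(config.output.path)                // resolved output directory
console.log(Object.keys(config.resolve.alias)) // configured aliases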
Modes and Environment Variables

Modes
Mode is an important concept in Vue CLI projects. By default, there are three modes:

development  is used by  vue-cli-service serve


test  is used by  vue-cli-service test:unit
production  is used by  vue-cli-service build  and  vue-cli-service test:e2e

You can overwrite the default mode used for a command by passing the  --mode  option flag. For example, if you want to use
development variables in the build command:

vue-cli-service build --mode development

When running  vue-cli-service , environment variables are loaded from all corresponding files. If they don't contain
a  NODE_ENV  variable, it will be set accordingly. For example,  NODE_ENV  will be set to  "production"  in production mode,  "test"  in
test mode, and defaults to  "development"  otherwise.

Then  NODE_ENV  will determine the primary mode your app is running in - development, production or test - and consequently, what
kind of webpack config will be created.
With  NODE_ENV  set to "test" for example, Vue CLI creates a webpack config that is intended to be used and optimized for unit tests. It
doesn't process images and other assets that are unnecessary for unit tests.

Similarly,  NODE_ENV=development  creates a webpack configuration which enables HMR, doesn't hash assets or create vendor bundles in
order to allow for fast re-builds when running a dev server.

When you are running  vue-cli-service build , your  NODE_ENV  should always be set to "production" to obtain an app ready for
deployment, regardless of the environment you're deploying to.

NODE_ENV

If you have a default  NODE_ENV  in your environment, you should either remove it or explicitly set  NODE_ENV  when running  vue-cli-service  commands.

Environment Variables
You can specify env variables by placing the following files in your project root:

.env # loaded in all cases
.env.local # loaded in all cases, ignored by git
.env.[mode] # only loaded in specified mode
.env.[mode].local # only loaded in specified mode, ignored by git

An env file simply contains key=value pairs of environment variables:

FOO=bar
VUE_APP_NOT_SECRET_CODE=some_value
WARNING

Do not store any secrets (such as private API keys) in your app!

Environment variables are embedded into the build, meaning anyone can view them by inspecting your app's files.

Note that only  NODE_ENV ,  BASE_URL , and variables that start with  VUE_APP_  will be statically embedded into the client
bundle with  webpack.DefinePlugin . This is to avoid accidentally exposing a private key on the machine that could happen to have the same name.

For more detailed env parsing rules, please refer to the documentation of  dotenv . We also use dotenv-expand  for variable
expansion (available in Vue CLI 3.5+). For example:

FOO=foo
BAR=bar

CONCAT=$FOO$BAR # CONCAT=foobar

Loaded variables will become available to all  vue-cli-service  commands, plugins and dependencies.

Env Loading Priorities

An env file for a specific mode (e.g.  .env.production ) will take higher priority than a generic one (e.g.  .env ).

In addition, environment variables that already exist when Vue CLI is executed have the highest priority and will not be
overwritten by  .env  files.

.env  files are loaded at the start of  vue-cli-service . Restart the service after making changes.
Example: Staging Mode
Assuming we have an app with the following  .env  file:

VUE_APP_TITLE=My App

And the following  .env.staging  file:

NODE_ENV=production
VUE_APP_TITLE=My App (staging)

vue-cli-service build  builds a production app, loading  .env ,  .env.production  and  .env.production.local  if they are present;
vue-cli-service build --mode staging  builds a production app in staging mode, using  .env ,  .env.staging  and  .env.staging.local  if they are present.

In both cases, the app is built as a production app because of the  NODE_ENV , but in the staging version,  process.env.VUE_APP_TITLE  is
overwritten with a different value.

Using Env Variables in Client-side Code


You can access env variables in your application code:

console.log(process.env.VUE_APP_NOT_SECRET_CODE)
During build,  process.env.VUE_APP_NOT_SECRET_CODE  will be replaced by the corresponding value. In the case
of  VUE_APP_NOT_SECRET_CODE=some_value , it will be replaced by  "some_value" .

In addition to  VUE_APP_*  variables, there are also two special variables that will always be available in your app code:

NODE_ENV  - this will be one of  "development" ,  "production"  or  "test"  depending on the mode the app is running in.
BASE_URL  - this corresponds to the  publicPath  option in  vue.config.js  and is the base path your app is deployed at.

All resolved env variables will be available inside  public/index.html  as discussed in HTML - Interpolation.

TIP

You can have computed env vars in your  vue.config.js  file. They still need to be prefixed with  VUE_APP_ . This is useful for
version information:

process.env.VUE_APP_VERSION = require('./package.json').version

module.exports = {
// config
}

Local Only Variables


Sometimes you might have env variables that should not be committed into the codebase, especially if your project is hosted in a public
repository. In that case you should use an  .env.local  file instead. Local env files are ignored in  .gitignore  by default.

.local  can also be appended to mode-specific env files, for example  .env.development.local  will be loaded during development,
and is ignored by git.
Cross-Origin Resource Sharing (CORS)
Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to
indicate any other origins (domain, scheme, or port) than its own from which a browser should permit
loading of resources. CORS also relies on a mechanism by which browsers make a “preflight” request to
the server hosting the cross-origin resource, in order to check that the server will permit the actual request.
In that preflight, the browser sends headers that indicate the HTTP method and headers that will be used in
the actual request.

An example of a cross-origin request: the front-end JavaScript code served from https://domain-a.com
uses XMLHttpRequest to make a request for https://domain-b.com/data.json .

For security reasons, browsers restrict cross-origin HTTP requests initiated from scripts. For example,
XMLHttpRequest and the Fetch API follow the same-origin policy. This means that a web application using
those APIs can only request resources from the same origin the application was loaded from unless the
response from other origins includes the right CORS headers.
The CORS mechanism supports secure cross-origin requests and data transfers between browsers and
servers. Modern browsers use CORS in APIs such as XMLHttpRequest or Fetch to mitigate the risks of
cross-origin HTTP requests.

Who should read this article?


Everyone, really.

More specifically, this article is for web administrators, server developers, and front-end developers.
Modern browsers handle the client side of cross-origin sharing, including headers and policy enforcement.
But the CORS standard means servers have to handle new request and response headers.

What requests use CORS?


This cross-origin sharing standard can enable cross-site HTTP requests for:

Invocations of the XMLHttpRequest or Fetch APIs, as discussed above.


Web Fonts (for cross-domain font usage in @font-face within CSS), so that servers can deploy
TrueType fonts that can only be cross-site loaded and used by web sites that are permitted to do so.

WebGL textures.
Images/video frames drawn to a canvas using drawImage() .
CSS Shapes from images.

This article is a general discussion of Cross-Origin Resource Sharing and includes a discussion of the
necessary HTTP headers.

Functional overview
The Cross-Origin Resource Sharing standard works by adding new HTTP headers that let servers describe
which origins are permitted to read that information from a web browser. Additionally, for HTTP request
methods that can cause side-effects on server data (in particular, HTTP methods other than GET , or POST
with certain MIME types), the specification mandates that browsers "preflight" the request, soliciting
supported methods from the server with the HTTP OPTIONS request method, and then, upon "approval"
from the server, sending the actual request. Servers can also inform clients whether "credentials" (such as
Cookies and HTTP Authentication) should be sent with requests.
CORS failures result in errors, but for security reasons, specifics about the error are not available to
JavaScript. All the code knows is that an error occurred. The only way to determine what specifically went
wrong is to look at the browser's console for details.

Subsequent sections discuss scenarios, as well as provide a breakdown of the HTTP headers used.

Examples of access control scenarios


We present three scenarios that demonstrate how Cross-Origin Resource Sharing works. All these
examples use XMLHttpRequest , which can make cross-site requests in any supporting browser.

Simple requests

Some requests don’t trigger a CORS preflight. Those are called “simple requests” in this article, though the
Fetch spec (which defines CORS) doesn’t use that term. A “simple request” is one that meets all the
following conditions:

One of the allowed methods:
GET
HEAD
POST
Apart from the headers automatically set by the user agent (for example, Connection , User-Agent ,
or the other headers defined in the Fetch spec as a “forbidden header name” ), the only headers
which are allowed to be manually set are those which the Fetch spec defines as a “CORS-safelisted
request-header” , which are:
Accept
Accept-Language

Content-Language
Content-Type (but note the additional requirements below)
The only allowed values for the Content-Type header are:
application/x-www-form-urlencoded
multipart/form-data
text/plain
If the request is made using an XMLHttpRequest object, no event listeners are registered on the
object returned by the XMLHttpRequest.upload property used in the request; that is, given an
XMLHttpRequest instance xhr , no code has called xhr.upload.addEventListener() to add an
event listener to monitor the upload.
No ReadableStream object is used in the request.

Note
These are the same kinds of cross-site requests that web content can already issue, and no response data
is released to the requester unless the server sends an appropriate header. Therefore, sites that prevent
cross-site request forgery have nothing new to fear from HTTP access control.

Note
WebKit Nightly and Safari Technology Preview place additional restrictions on the values allowed in the
Accept , Accept-Language , and Content-Language headers. If any of those headers have
“nonstandard” values, WebKit/Safari does not consider the request to be a “simple request”. What values
WebKit/Safari consider “nonstandard” is not documented, except in the following WebKit bugs:

Require preflight for non-standard CORS-safelisted request headers Accept, Accept-Language, and
Content-Language

Allow commas in Accept, Accept-Language, and Content-Language request headers for simple
CORS
Switch to a blacklist model for restricted Accept headers in simple CORS requests

No other browsers implement these extra restrictions, because they’re not part of the spec.

For example, suppose web content at https://foo.example wishes to invoke content on domain
https://bar.other . Code of this sort might be used in JavaScript deployed on foo.example :

const xhr = new XMLHttpRequest();


const url = 'https://bar.other/resources/public-data/';

xhr.open('GET', url);
xhr.onreadystatechange = someHandler;
xhr.send();

This performs a simple exchange between the client and the server, using CORS headers to handle the
privileges:
Let's look at what the browser will send to the server in this case, and let's see how the server responds:

GET /resources/public-data/ HTTP/1.1


Host: bar.other
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:71.0) Gecko/20100101 Firefox/71.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Connection: keep-alive
Origin: https://foo.example
The request header of note is Origin , which shows that the invocation is coming from
https://foo.example .

HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 00:23:53 GMT
Server: Apache/2
Access-Control-Allow-Origin: *
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/xml

[…XML Data…]

In response, the server sends back an Access-Control-Allow-Origin header with Access-Control-Allow-Origin: * , which means that the resource can be accessed by any origin.

Access-Control-Allow-Origin: *

This pattern of the Origin and Access-Control-Allow-Origin headers is the simplest use of the
access control protocol. If the resource owners at https://bar.other wished to restrict access to the
resource to requests only from https://foo.example (i.e., no domain other than https://foo.example
can access the resource in a cross-site manner), they would send:

Access-Control-Allow-Origin: https://foo.example

Note
When responding to a credentialed request, the server must specify an origin in the value of the
Access-Control-Allow-Origin header, instead of specifying the " * " wildcard.

Preflighted requests

Unlike “simple requests” (discussed above), for "preflighted" requests the browser first sends an HTTP
request using the OPTIONS method to the resource on the other origin, in order to determine if the actual
request is safe to send. Cross-site requests are preflighted like this since they may have implications for
user data.

The following is an example of a request that will be preflighted:

const xhr = new XMLHttpRequest();


xhr.open('POST', 'https://bar.other/resources/post-here/');
xhr.setRequestHeader('X-PINGOTHER', 'pingpong');
xhr.setRequestHeader('Content-Type', 'application/xml');
xhr.onreadystatechange = handler;
xhr.send('<person><name>Arun</name></person>');

The example above creates an XML body to send with the POST request. Also, a non-standard HTTP X-
PINGOTHER request header is set. Such headers are not part of HTTP/1.1, but are generally useful to web
applications. Since the request uses a Content-Type of application/xml , and since a custom header
is set, this request is preflighted.
Note
As described below, the actual POST request does not include the Access-Control-Request-* headers;
they are needed only for the OPTIONS request.

Let's look at the full exchange between client and server. The first exchange is the preflight
request/response:

OPTIONS /doc HTTP/1.1


Host: bar.other
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:71.0) Gecko/20100101 Firefox/71.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Connection: keep-alive
Origin: http://foo.example
Access-Control-Request-Method: POST
Access-Control-Request-Headers: X-PINGOTHER, Content-Type

HTTP/1.1 204 No Content


Date: Mon, 01 Dec 2008 01:15:39 GMT
Server: Apache/2
Access-Control-Allow-Origin: https://foo.example
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: X-PINGOTHER, Content-Type
Access-Control-Max-Age: 86400
Vary: Accept-Encoding, Origin

Keep-Alive: timeout=2, max=100


Connection: Keep-Alive

Lines 1 - 10 above represent the preflight request with the OPTIONS method. The browser determines that
it needs to send this based on the request parameters that the JavaScript code snippet above was using,
so that the server can respond whether it is acceptable to send the request with the actual request
parameters. OPTIONS is an HTTP/1.1 method that is used to determine further information from servers,
and is a safe method, meaning that it can't be used to change the resource. Note that along with the
OPTIONS request, two other request headers are sent (lines 9 and 10 respectively):

Access-Control-Request-Method: POST
Access-Control-Request-Headers: X-PINGOTHER, Content-Type

The Access-Control-Request-Method header notifies the server as part of a preflight request that when
the actual request is sent, it will be sent with a POST request method. The Access-Control-Request-
Headers header notifies the server that when the actual request is sent, it will be sent with the X-PINGOTHER
and Content-Type custom headers. The server now has an opportunity to determine whether it wishes to
accept a request under these circumstances.

Lines 13 - 22 above are the response that the server sends back, which indicate that the request method
( POST ) and request headers ( X-PINGOTHER ) are acceptable. In particular, let's look at lines 16-19:

Access-Control-Allow-Origin: http://foo.example
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: X-PINGOTHER, Content-Type
Access-Control-Max-Age: 86400

The server responds with Access-Control-Allow-Origin: http://foo.example , restricting access to
just the requesting origin domain. It also responds with Access-Control-Allow-Methods , which says
that POST and GET are viable methods to query the resource in question (this header is similar to the
Allow response header, but used strictly within the context of access control).

The server also sends Access-Control-Allow-Headers with a value of " X-PINGOTHER, Content-
Type ", confirming that these are permitted headers to be used with the actual request. Like Access-
Control-Allow-Methods , Access-Control-Allow-Headers is a comma separated list of acceptable
headers.

Finally, Access-Control-Max-Age gives the value in seconds for how long the response to the preflight
request can be cached for without sending another preflight request. In this case, 86400 seconds is 24
hours. Note that each browser has a maximum internal value that takes precedence when the Access-
Control-Max-Age is greater.

Once the preflight request is complete, the real request is sent:

POST /doc HTTP/1.1


Host: bar.other
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:71.0) Gecko/20100101 Firefox/71.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Connection: keep-alive
X-PINGOTHER: pingpong
Content-Type: text/xml; charset=UTF-8
Referer: https://foo.example/examples/preflightInvocation.html
Content-Length: 55
Origin: https://foo.example
Pragma: no-cache
Cache-Control: no-cache

<person><name>Arun</name></person>

HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 01:15:40 GMT
Server: Apache/2
Access-Control-Allow-Origin: https://foo.example
Vary: Accept-Encoding, Origin
Content-Encoding: gzip
Content-Length: 235
Keep-Alive: timeout=2, max=99
Connection: Keep-Alive
Content-Type: text/plain

[Some XML payload]

Preflighted requests and redirects


Not all browsers currently support following redirects after a preflighted request. If a redirect occurs after a
preflighted request, some browsers currently will report an error message such as the following.

The request was redirected to 'https://example.com/foo', which is disallowed for cross-origin requests that require
preflight

Request requires preflight, which is disallowed to follow cross-origin redirect


The CORS protocol originally required that behavior but was subsequently changed to no longer require it.
However, not all browsers have implemented the change, and so still exhibit the behavior that was
originally required.

Until browsers catch up with the spec, you may be able to work around this limitation by doing one or both
of the following:

Change the server-side behavior to avoid the preflight and/or to avoid the redirect
Change the request such that it is a simple request that doesn’t cause a preflight

If that's not possible, then another way is to:

1. Make a simple request (using Response.url for the Fetch API, or XMLHttpRequest.responseURL )
to determine what URL the real preflighted request would end up at.
2. Make another request (the “real” request) using the URL you obtained from Response.url or
XMLHttpRequest.responseURL in the first step.
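
A minimal sketch of that two-step workaround using the Fetch API (the URL and request body here are placeholders, not from the spec):

(async () => {
  // Step 1: a simple GET with no custom headers, so no preflight is triggered;
  // response.url reflects the final URL after any redirects
  const probe = await fetch('https://example.com/api/resource');
  const finalUrl = probe.url;

  // Step 2: the "real" request, sent directly to the post-redirect URL,
  // so the preflight no longer has to follow a redirect
  const response = await fetch(finalUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }, // would normally trigger a preflight
    body: JSON.stringify({ hello: 'world' })
  });
  console.log(response.status);
})();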

However, if the request is one that triggers a preflight due to the presence of the Authorization header in
the request, you won’t be able to work around the limitation using the steps above. And you won’t be able
to work around it at all unless you have control over the server the request is being made to.

Requests with credentials

Note
When making credentialed requests to a different domain, third-party cookie policies will still apply. The
policy is always enforced independent of any setup on the server and the client, as described in this
chapter.

The most interesting capability exposed by both XMLHttpRequest or Fetch and CORS is the ability to
make "credentialed" requests that are aware of HTTP cookies and HTTP Authentication information. By
default, in cross-site XMLHttpRequest or Fetch invocations, browsers will not send credentials. A specific
flag has to be set on the XMLHttpRequest object or the Request constructor when it is invoked.

In this example, content originally loaded from http://foo.example makes a simple GET request to a
resource on http://bar.other which sets Cookies. Content on foo.example might contain JavaScript
like this:

const invocation = new XMLHttpRequest();


const url = 'http://bar.other/resources/credentialed-content/';

function callOtherDomain() {
if (invocation) {
invocation.open('GET', url, true);
invocation.withCredentials = true;
invocation.onreadystatechange = handler;
invocation.send();
}
}

Line 7 shows the flag on XMLHttpRequest that has to be set in order to make the invocation with Cookies,
namely the withCredentials boolean value. By default, the invocation is made without Cookies. Since
this is a simple GET request, it is not preflighted, but the browser will reject any response that does not
have the Access-Control-Allow-Credentials: true header, and not make the response available to
the invoking web content.

Here is a sample exchange between client and server:

GET /resources/credentialed-content/ HTTP/1.1


Host: bar.other
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:71.0) Gecko/20100101 Firefox/71.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Connection: keep-alive
Referer: http://foo.example/examples/credential.html
Origin: http://foo.example
Cookie: pageAccess=2

HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 01:34:52 GMT

Server: Apache/2
Access-Control-Allow-Origin: https://foo.example
Access-Control-Allow-Credentials: true
Cache-Control: no-cache
Pragma: no-cache
Set-Cookie: pageAccess=3; expires=Wed, 31-Dec-2008 01:34:53 GMT
Vary: Accept-Encoding, Origin
Content-Encoding: gzip
Content-Length: 106
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Content-Type: text/plain

[text/plain payload]

Although line 10 contains the Cookie destined for the content on http://bar.other , if bar.other did not
respond with Access-Control-Allow-Credentials: true (line 17), the response would be ignored
and not made available to web content.

Preflight requests and credentials


CORS-preflight requests must never include credentials. The response to a preflight request must specify
Access-Control-Allow-Credentials: true to indicate that the actual request can be made with
credentials.

Note
Some enterprise authentication services require TLS client certificates be sent in preflight requests, in
contravention of the Fetch specification.

Firefox 87 allows this non-compliant behavior to be enabled by setting the preference:


network.cors_preflight.allow_client_cert to true (bug 1511151 ). Chromium-based browsers
currently always send TLS client certificates in CORS preflight requests (Chrome bug 775438 ).

Credentialed requests and wildcards


When responding to a credentialed request, the server must specify an origin in the value of the Access-
Control-Allow-Origin header, instead of specifying the " * " wildcard.

Because the request headers in the above example include a Cookie header, the request would fail if the
value of the Access-Control-Allow-Origin header was "*". But it does not fail: Because the value of
the Access-Control-Allow-Origin header is " http://foo.example " (an actual origin) rather than the
" * " wildcard, the credential-cognizant content is returned to the invoking web content.

Note that the Set-Cookie response header in the example above also sets a further cookie. In case of
failure, an exception—depending on the API used—is raised.

Third-party cookies
Note that cookies set in CORS responses are subject to normal third-party cookie policies. In the example
above, the page is loaded from foo.example , but the cookie on line 20 is sent by bar.other , and would
thus not be saved if the user has configured their browser to reject all third-party cookies.

The Cookie header in the request (line 10) may also be suppressed by normal third-party cookie policies. The enforced
cookie policy may therefore nullify the capability described in this chapter, effectively preventing you from
making credentialed requests at all.

Cookie policies around the SameSite attribute also apply.

The HTTP response headers


This section lists the HTTP response headers that servers send back for access control requests as
defined by the Cross-Origin Resource Sharing specification. The previous section gives an overview of
these in action.

Access-Control-Allow-Origin

A returned resource may have one Access-Control-Allow-Origin header, with the following syntax:

Access-Control-Allow-Origin: <origin> | *

Access-Control-Allow-Origin specifies either a single origin, which tells browsers to allow that origin
to access the resource; or else — for requests without credentials — the " * " wildcard, to tell browsers to
allow any origin to access the resource.

For example, to allow code from the origin https://mozilla.org to access the resource, you can
specify:

Access-Control-Allow-Origin: https://mozilla.org
Vary: Origin

If the server specifies a single origin (that may dynamically change based on the requesting origin as part
of a white-list) rather than the " * " wildcard, then the server should also include Origin in the Vary
response header — to indicate to clients that server responses will differ based on the value of the Origin
request header.

Access-Control-Expose-Headers

The Access-Control-Expose-Headers header lets a server whitelist headers that JavaScript in
browsers is allowed to access (for example via getResponseHeader() ).

Access-Control-Expose-Headers: <header-name>[, <header-name>]*

For example, the following:

Access-Control-Expose-Headers: X-My-Custom-Header, X-Another-Custom-Header

…would allow the X-My-Custom-Header and X-Another-Custom-Header headers to be exposed to the
browser.
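
On the client side, the effect of this header is visible when reading response headers. A small sketch, reusing the header name and URL from the examples above:

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://bar.other/resources/public-data/');
xhr.onload = () => {
  // Works only because the server exposed the header; a non-safelisted
  // header that isn't exposed would come back as null instead
  console.log(xhr.getResponseHeader('X-My-Custom-Header'));
};
xhr.send();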

Access-Control-Max-Age

The Access-Control-Max-Age header indicates how long the results of a preflight request can be
cached. For an example of a preflight request, see the above examples.

Access-Control-Max-Age: <delta-seconds>

The delta-seconds parameter indicates the number of seconds the results can be cached.

Access-Control-Allow-Credentials

The Access-Control-Allow-Credentials header indicates whether or not the response to the request
can be exposed when the credentials flag is true. When used as part of a response to a preflight
request, this indicates whether or not the actual request can be made using credentials. Note that simple
GET requests are not preflighted, and so if a request is made for a resource with credentials, if this header
is not returned with the resource, the response is ignored by the browser and not returned to web content.

Access-Control-Allow-Credentials: true

Credentialed requests are discussed above.

Access-Control-Allow-Methods

The Access-Control-Allow-Methods header specifies the method or methods allowed when accessing
the resource. This is used in response to a preflight request. The conditions under which a request is
preflighted are discussed above.

Access-Control-Allow-Methods: <method>[, <method>]*

An example of a preflight request is given above, including an example which sends this header to the
browser.

Access-Control-Allow-Headers

The Access-Control-Allow-Headers header is used in response to a preflight request to indicate which
HTTP headers can be used when making the actual request. This header is the server-side response to the
browser's Access-Control-Request-Headers header.

Access-Control-Allow-Headers: <header-name>[, <header-name>]*
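
To tie the response headers together, here is a minimal Node.js server sketch that answers both preflight and actual requests. The origin, methods and header names are illustrative only, taken from the examples above:

const http = require('http');

http.createServer((req, res) => {
  // Sent on every response; Vary signals that the response depends on Origin
  res.setHeader('Access-Control-Allow-Origin', 'https://foo.example');
  res.setHeader('Vary', 'Origin');

  if (req.method === 'OPTIONS') {
    // Preflight: describe what the actual request is allowed to use
    res.setHeader('Access-Control-Allow-Methods', 'POST, GET, OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'X-PINGOTHER, Content-Type');
    res.setHeader('Access-Control-Max-Age', '86400');
    res.writeHead(204);
    res.end();
    return;
  }

  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from bar.other');
}).listen(8080);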

The HTTP request headers


This section lists headers that clients may use when issuing HTTP requests in order to make use of the
cross-origin sharing feature. Note that these headers are set for you when making invocations to servers.

Developers using cross-site XMLHttpRequest capability do not have to set any cross-origin sharing
request headers programmatically.

Origin

The Origin header indicates the origin of the cross-site access request or preflight request.

Origin: <origin>

The origin is a URL indicating the server from which the request initiated. It does not include any path
information, but only the server name.

Note
The origin value can be null .

Note that in any access control request, the Origin header is always sent.

Access-Control-Request-Method

The Access-Control-Request-Method is used when issuing a preflight request to let the server know
what HTTP method will be used when the actual request is made.

Access-Control-Request-Method: <method>

Examples of this usage can be found above.

Access-Control-Request-Headers
The Access-Control-Request-Headers header is used when issuing a preflight request to let the server
know what HTTP headers will be used when the actual request is made (such as
with setRequestHeader() ). This browser side header will be answered by the complementary server
side header of Access-Control-Allow-Headers .

Access-Control-Request-Headers: <field-name>[, <field-name>]*

Examples of this usage can be found above.

Specifications

Specification: Fetch (the definition of 'CORS' in that specification)
Status: Living Standard
Comment: New definition; supplants the W3C CORS specification.

Browser compatibility

Access-Control-Allow-Origin is supported in: Chrome 4, Edge 12, Firefox 3.5, Internet Explorer 10, Opera 12, Safari 4,
WebView Android 2, Chrome Android, Firefox for Android 4, Opera Android 12, Safari on iOS 3.2, and Samsung Internet.
Compatibility notes

Internet Explorer 8 and 9 expose CORS via the XDomainRequest object; a full implementation arrived in IE 10.

See also
CORS
CORS errors
Enable CORS: I want to add CORS support to my server
XMLHttpRequest
Fetch API

Will it CORS? - an interactive CORS explainer & generator


Using CORS with All (Modern) Browsers
How to run Chrome browser without CORS
Stack Overflow answer with “how to” info for dealing with common problems:
How to avoid the CORS preflight
How to use a CORS proxy to get around “No Access-Control-Allow-Origin header”
How to fix “Access-Control-Allow-Origin header must not be the wildcard”

Last modified: Mar 16, 2021, by MDN contributors


How to Use the Twitter API to Create a Hashtag Search Bot

Tags: Collections, Postman Galaxy, Slack, Tutorials, Twitter, Webhooks

By Sean Keegan March 26, 2021


Reading Time: 6 minutes

When Postman recently hosted the Postman Galaxy virtual conference with attendees from
around the world, I needed to find out what people were saying about the event. And more
importantly, who was saying these things. As a new Postman developer advocate, I was assigned
to “figure out a way to have all the tweets with the #PostmanGalaxy hashtag automatically sent
into a Slack channel.”

After brainstorming some different approaches, I was able to use a combination of the Twitter
API, the Postman API, and Slack Incoming Webhooks to achieve exactly that. Below is a result of
the final integration in action. Look at all those kind tweets about Postman Galaxy!
Twitter hashtag search bot for Postman Galaxy in action showing results in Slack

The start of a new search bot


Twitter has a vast, well-documented, and (dare I say) fun-to-use API. Despite that, I found that I
couldn’t do exactly what I wanted right out of the gate—which was to use the API to find all
tweets containing a specific hashtag and get the user information for those tweets. It was that
second part that was the challenge.

While the Twitter API can obtain a large amount of user data for a specific tweet, depending on
which product track you are using, you may have some limited access to certain endpoints. Since
I used the basic “Standard” track, I was restricted from getting everything I needed in one shot.

Instead of upgrading my product track, I thought to myself: “I’m a developer for the people! I’m
going to find a way for even Standard track devs to do this.” I was also on a deadline and didn’t
have time to apply and wait for my access to be upgraded.

This limited access meant I had to get creative with gathering all the necessary data. Instead of
doing a single search and getting everything I desired (username, user handle, date, source,
body of tweet, etc.), I had to chain requests together using data from my initial search. Because
of the limited access inherent to the Standard product track, when I sent a search query for the
string %23PostmanGalaxy to the Twitter API, it would return only the author_id, but not the
user’s Twitter handle or Twitter name.

Now I don’t know about you, but I don’t have other people’s Twitter ID numbers memorized. In
fact, I couldn’t even tell you what mine is. And since we wanted to know who was using the
#PostmanGalaxy hashtag, I needed to find a way to tie these author_ids to an actual Twitter
handle that humans would understand. So I captured the IDs of the users in a comma separated
string and saved it as an environment variable called allUserIdsString.

Thankfully, the Twitter API has a “Users by ID” request that takes the string of IDs as a parameter.
I copied this request from the Twitter API v2 Postman Collection into my own folder and entered
the environment variable as the query parameter value as shown below. Upon a successful
request, I used some JavaScript code in the Tests tab to match the newly acquired usernames
with the corresponding author_ids to our saved data.
Sending a second request to Twitter to obtain more user information
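
The article doesn’t reproduce that Tests-tab script, but a sketch of what it might look like is below. The allUserIdsString workflow is from the article; the response shape follows the Twitter API v2 “Users by ID” endpoint, and the variable and property names here are assumptions:

// Tests tab of the "Users by ID" request (a sketch, not the article's exact code)
const users = pm.response.json().data || [];

// Build a lookup from author_id to a human-readable handle
const handlesById = {};
users.forEach((user) => {
  handlesById[user.id] = `@${user.username} (${user.name})`;
});

// Save it for later requests in the collection run
pm.environment.set('handlesById', JSON.stringify(handlesById));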

With all of our tweets and necessary information in place, it’s time to send these tweets
somewhere the rest of our team can see them: a Slack channel.

Setting up Slack webhooks for notifications


Slack makes it pretty easy to send things to a certain Slack channel using webhooks. After
generating a webhook URL, I set up a POST request in Postman to send all tweets with the
#PostmanGalaxy hashtag to a designated Slack channel that my team members were following.
In the Body tab of the request, I formatted the content (the tweets) that I wanted sent over to
Slack as JSON.

Using Slack’s nifty Block Kit Builder tool, you can get quite a bit of flexibility and customization in
how you want the message to go through. Here’s a picture of that JSON body below. You can see
that it sends only one tweet at a time, called current_tweet, which is in the double curly braces to
reference the variable named “current_tweet.”

This is what the JSON body looks like for a POST request to a Slack webhook URL
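
Since the screenshot isn’t reproduced here, a rough sketch of such a JSON body follows. The current_tweet variable is from the article; the exact block layout is an assumption:

{
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "{{current_tweet}}"
      }
    }
  ]
}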

Solving a character-limit conundrum

I learned the hard way that Slack has a character limit on the body of a request. When I tried to
send the content for 20 tweets to post to Slack in one shot, I continuously got an error message.
After banging my head against a wall for a while, an experienced co-worker suggested that the
character limit may be the reason (thank you, Arlemi Turpault, for that key insight!).

Because of the character limit, I reused this request multiple times. If there are 20 tweets we
want to send to Slack, we end up looping through the array of 20 tweets and calling this request
20 times. Thankfully, Postman makes it really easy to specify the order of your workflow based
on certain conditions, as sketched below.
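
Conditional ordering like that is typically done with postman.setNextRequest in a Tests script. A sketch, where the request name and the tweetsToSend variable are assumptions (current_tweet is from the article):

// Tests tab of the request that posts a tweet to Slack (a sketch)
const remaining = JSON.parse(pm.environment.get('tweetsToSend') || '[]');

if (remaining.length > 0) {
  pm.environment.set('current_tweet', remaining.shift());
  pm.environment.set('tweetsToSend', JSON.stringify(remaining));
  // Loop back to this same request until the queue is empty
  postman.setNextRequest('Post Tweet to Slack');
}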
Automating the process with Postman monitors
If you planned on running this collection manually in Postman every time you wanted to see the
new Tweets, we’d be done. But then you’d have to come into Postman every so often and use
the Collection Runner to run through the entire collection. Not too bad if you only need to do it
once or twice, but for a multi-day event like Postman Galaxy you’ll want a more efficient solution.

Since I didn’t want to wake up every 10 minutes in the middle of the night to update my team
with the new #PostmanGalaxy tweets, I found a way to automate this process. By taking
advantage of Postman’s monitors, we can run this collection automatically at set intervals.

The last step of this collection involves a pair of requests that work together to track the most
recent tweet’s ID number, which is saved as an environment variable highest_tweet_id.
Because we’re automating this process with monitors, it’s important to note that
global/environment variables are not persisted across collection runs using a monitor.

To get around this, we can actually use Postman to help us use Postman. While Postman is an
API collaboration platform, we also have the Postman API, which you can use to ensure only the
newest, unseen tweets get pushed to Slack each time the monitor is run. All we’re really doing in
the final two requests is making sure that the environment variable for the highest tweet ID gets
updated and safely tracked for future monitor runs.
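
The Postman API exposes an environments endpoint for exactly this kind of update. A hedged sketch of what that final request might look like (the environment ID and API key placeholders are assumptions; highest_tweet_id is the variable named in the article):

PUT https://api.getpostman.com/environments/{{environment_id}}
X-Api-Key: {{postman_api_key}}
Content-Type: application/json

{
  "environment": {
    "name": "Twitter Hashtag Search",
    "values": [
      { "key": "highest_tweet_id", "value": "{{highest_tweet_id}}", "enabled": true }
    ]
  }
}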

Here’s a visual overview of everything in this workflow. You can see that POST request to Slack
getting called again so long as there are tweets to send to Slack:

Create your own Twitter hashtag watch


Now that you’ve learned how I made an automated Twitter hashtag search bot that posts
updates to Slack, you can try creating your own. Feel free to fork the collection and play around
with it. With complete instructions in the collection documentation, here’s a quick summary of
the steps:

1. Fork the Twitter Hashtag Search collection into your own workspace.

2. Fork the Twitter Hashtag environment into your own workspace.

3. Enter the missing auth credentials in the environment (Twitter bearer token, Postman API
key, and Slack webhook URL).

4. Change the query param in the “Search for Hashtag” request.

5. Create and run a monitor for the collection.

6. Check out the Slack messages for search results.

Stay up to date with your favorite hashtagged tweets about anything you find interesting (weird
animals, anyone?). Seriously, there are some adorable and strange hashtags worth following—
the Twitter world of helpful and entertaining hashtags is endless.

Technical review by Meenakshi Dhanani.





In this article we’re going to try out Puppeteer and demonstrate a variety of the available capabilities, through concrete examples.

Disclaimer: This article doesn’t claim to replace the official documentation but rather to elaborate on it - you should definitely go over it in
order to stay aligned with the most up-to-date API specification.


How to Install
To begin with, we’ll have to install one of Puppeteer’s packages.

Library Package
A lightweight package, called puppeteer-core , which is a library that interacts with any browser that’s based on DevTools protocol -
without actually installing Chromium. It comes in handy mainly when we don’t need a downloaded version of Chromium, for instance,
bundling this library within a project that interacts with a browser remotely.

In order to install, just run:

npm install puppeteer-core

Product Package
The main package, called  puppeteer , which is actually a full product for browser automation on top of  puppeteer-core . Once it’s
installed, the most recent version of Chromium is placed inside  node_modules , which guarantees that the downloaded version is
compatible with the host operating system.

Simply run the following to install:


npm install puppeteer

Now, we’re absolutely ready to go! 🤓

Interacting with the Browser
As mentioned before, Puppeteer is just an API over the Chrome DevTools Protocol. Naturally, it needs a Chromium instance to
interact with. This is why Puppeteer’s ecosystem provides methods both to launch a new Chromium instance and to connect to an
existing one.

Let’s examine a few cases.

Launching Chromium
The easiest way to interact with the browser is by launching a Chromium instance using Puppeteer:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  console.info(browser);
  await browser.close();
})();


The  launch  method initializes the instance first, and then attaches Puppeteer to it. Notice this method is asynchronous (like
most of Puppeteer's methods) which, as we know, returns a  Promise . Once it’s resolved, we get a browser instance that represents our
initialized instance.

Connecting Chromium
Sometimes we want to interact with an existing Chromium instance - whether using puppeteer-core or just attaching a remote
instance:

const chromeLauncher = require('chrome-launcher');
const axios = require('axios');
const puppeteer = require('puppeteer');

(async () => {
  // Initializing a Chrome instance manually
  const chrome = await chromeLauncher.launch({
    chromeFlags: ['--headless']
  });
  const response = await axios.get(`http://localhost:${chrome.port}/json/version`);
  const { webSocketDebuggerUrl } = response.data;

  // Connecting the instance using `browserWSEndpoint`
  const browser = await puppeteer.connect({ browserWSEndpoint: webSocketDebuggerUrl });
  console.info(browser);

  await browser.close();
  await chrome.kill();
})();


Well, it’s easy to see that we use  chrome-launcher  in order to launch a Chrome instance manually. Then, we simply fetch
the  webSocketDebuggerUrl  value of the created instance.

The  connect  method attaches the instance we just created to Puppeteer. All we have to do is supply the WebSocket endpoint of our
instance.

Note: Of course, chrome-launcher is only to demonstrate an instance creation. We absolutely could connect an instance in other ways,
as long as we have the appropriate WebSocket endpoint.

Launching Firefox
Some of you might wonder - could Puppeteer interact with other browsers besides Chromium? 🤔

Although there are projects that claim to support a variety of browsers, the official team has started to maintain an experimental
project that interacts with Firefox, specifically:

npm install puppeteer-firefox

Update: puppeteer-firefox  was an experimental package to examine communication with an outdated Firefox fork; however, this
project is no longer maintained. Presently, the way to go is by setting the  PUPPETEER_PRODUCT  environment variable to  firefox , thereby
fetching the binary of Firefox Nightly.

We can easily do that as part of the installation:

PUPPETEER_PRODUCT=firefox npm install puppeteer

Alternatively, we can use the BrowserFetcher to fetch the binary.

Once we have the binary, we merely need to change the product to “firefox”, whereas the rest of the lines remain the same - which means
we’re already familiar with how to launch the browser:
// Deprecated package
// const puppeteer = require('puppeteer-firefox');
const puppeteer = require('puppeteer');

(async () => {
  // Firefox's binary needs to be fetched beforehand
  const browser = await puppeteer.launch({ product: 'firefox' });
  console.info(browser);
  await browser.close();
})();

⚠ Pay attention - the API integration isn’t totally ready yet and is implemented progressively. It’s also better to check out the
implementation status here.

Browser Context
Imagine that instead of recreating a browser instance each time, which is a pretty expensive operation, we could use the same instance
but separate it into different individual sessions which belong to this shared browser.

It’s actually possible, and these sessions are known as Browser Contexts.

A default browser context is created as soon as creating a browser instance, but we can create additional browser contexts as
necessary:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();

  // A reference for the default browser context
  const defaultContext = browser.defaultBrowserContext();
  console.info(defaultContext.isIncognito()); // False

  // Creates a new browser context
  const newContext = await browser.createIncognitoBrowserContext();
  console.info(newContext.isIncognito()); // True

  // Closes the created browser context
  await newContext.close();

  // Closes the browser with the default context
  await browser.close();
})();

Apart from demonstrating how to access each context, we need to know that the only way to terminate the default context
is by closing the browser instance - which, in fact, terminates all the contexts that belong to the browser.

Better yet, a browser context also comes in handy when we want to apply a specific configuration to a session in isolation - for
instance, granting additional permissions.
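
A quick sketch of granting a permission on one context only (the origin and permission here are illustrative):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const context = await browser.createIncognitoBrowserContext();

  // Grants geolocation to this context only - other contexts are unaffected
  await context.overridePermissions('https://pptr.dev', ['geolocation']);

  await context.close();
  await browser.close();
})();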

Headful Mode
As opposed to headless mode, which merely uses the command line, headful mode opens the browser with a graphical user
interface while running the instructions:

const puppeteer = require('puppeteer');

(async () => {
  // Launches the browser in a headful way
  const browser = await puppeteer.launch({ headless: false });
  console.info(browser);
  await browser.close();
})();

Because the browser is launched in headless mode by default, we demonstrate how to launch it in a headful way.

In case you wonder - headless mode is mostly useful for environments that don’t really need a UI or don’t support such an interface.
The cool thing is that we can do almost everything in Puppeteer headlessly. 💪

Note: We’re going to launch the browser in a headful mode for most of the upcoming examples, which will allow us to notice the result
clearly.

Debugging
When writing code, we should be aware of what kinds of ways are available to debug our program. The documentation lists
several tips about debugging Puppeteer.

Let’s cover the core principles:

- Checking how the browser is operated

It’s fairly probable that we’d like to see how our script instructs the browser and what’s actually displayed, at some point.

The headful mode, which we’re already familiar with, helps us to practically do that:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false, slowMo: 200 });

  // Browser operations

  await browser.close();
})();

Beyond the fact that the browser is truly opened, we can now clearly notice the operated instructions - thanks to  slowMo , which slows
Puppeteer down when performing each operation.

- Debugging our application code in the browser

In case we want to debug the application itself in the opened browser - it basically means to open the DevTools and start debugging as
usual:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ devtools: true });

  // Browser operations

  // Holds the browser until we terminate the process explicitly
  await browser.waitForTarget(() => false);

  await browser.close();
})();

Notice that we use  devtools , which launches the browser in a headful mode by default and opens the DevTools automatically. On top
of that, we utilize  waitForTarget  in order to hold the browser process until we terminate it explicitly.
Some of you may wonder whether it’s possible to pause the browser for a specified time period, so:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ devtools: true });

  // Browser operations

  // Option 1 - resolving a promise when `setTimeout` finishes
  const sleep = duration => new Promise(resolve => setTimeout(resolve, duration));
  await sleep(3000);

  // Option 2 - if we have a page instance, just using `waitFor`
  await page.waitFor(3000);

  await browser.close();
})();

The first approach is merely a function that resolves a promise when setTimeout finishes. The second approach, however, is much
simpler but demands having a page instance (we’ll get to that later).

- Debugging the process that uses Puppeteer

As we know, Puppeteer is executed in a Node.js process, which is absolutely separate from the browser process. Hence, in this case,
we should treat it much as we’d debug a regular Node.js application.

Whether we connect to an inspector client or prefer using ndb - it’s all about placing the breakpoints right before Puppeteer’s operation.
Adding them programmatically is also possible, simply by inserting a  debugger;  statement.
Interacting with a Page
Now that Puppeteer is attached to a browser instance - which, as we already mentioned, represents our browser instance (Chromium,
Firefox, whatever) - we can easily create a page (or multiple pages):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();

  // Creates a new page on the default browser context
  const page = await browser.newPage();
  console.info(page);

  await browser.close();
})();

In the code example above we plainly create a new page by invoking the newPage method. Notice it’s created on the default browser
context.
Basically, Page is a class that represents a single tab in the browser (or an extension background page). As you’d guess, this class provides
handy methods and events in order to interact with the page (such as selecting elements, retrieving information, waiting for elements,
etc.).

Well, it’s about time to present a list of practical examples, as promised. To do this, we’re going to scrape data from the official
Puppeteer website and operate on it. 🕵

Navigating by URL
One of the earliest things is, intuitively, instructing the blank page to navigate to a specified URL:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();

  // Instructs the blank page to navigate to a URL
  await page.goto('https://pptr.dev');

  // Fetches the page's title
  const title = await page.title();
  console.info(`The title is: ${title}`);

  await browser.close();
})();

We use  goto  to drive the created page to navigate to Puppeteer’s website. Afterward, we just take the title of the page’s main frame, print it,
and expect to get that as an output:
Navigating by a URL and scraping the title

As we notice, the title is unexpectedly missing. 🧐

This example shows us that there’s no guarantee our page will have rendered the selected element at the right moment, if
at all. To clarify - possible reasons could be that the page loads slowly, part of the page is lazy-loaded, or perhaps it navigates
immediately to another page.

That’s exactly why Puppeteer provides methods to wait for stuff like elements, navigation, functions, requests, responses or simply a
certain predicate - mainly to deal with an asynchronous flow.

Anyway, it turns out that Puppeteer’s website has an entry page, which immediately redirects us to the well-known website’s index page.
The thing is that the entry page in question doesn’t render a title meta element:
Evaluating the title meta element

When navigating to Puppeteer’s website, the title element is evaluated as an empty string. However, a few moments later, the page
is really navigated to the website’s index page and rendered with a title.

This means that the invoked  title  method is actually applied too early, on the entry page, instead of on the website’s index page. Thus,
the entry page is considered the first main frame, and eventually its title, which is an empty string, is returned.

Let’s solve that case in a simple way:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

await page.goto('https://pptr.dev');

// Waits until the `title` meta element is rendered


await page.waitForSelector('title');

const title = await page.title();


console.info(`The title is: ${title}`);

await browser.close();
})();


All we do is instruct Puppeteer to wait until the page renders a title element, which is achieved by invoking waitForSelector . This method basically waits until the selected element is rendered within the page.

In that way - we can easily deal with asynchronous rendering and ensure that elements are visible on the page.
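By the way, goto itself accepts a waitUntil option that controls when the navigation is considered finished. An alternative sketch, if we're fine with waiting for the network to go idle:

// Considers the navigation finished once there are no network connections for at least 500ms
await page.goto('https://pptr.dev', { waitUntil: 'networkidle0' });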

Emulating Devices
Puppeteer’s library provides tools for approximating how the page looks and behaves on various devices, which are pretty useful when
testing a website’s responsiveness.

Let’s emulate a mobile device and navigate to the official website:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

// Emulates an iPhone X
await page.setUserAgent('Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1');
await page.setViewport({ width: 375, height: 812 });

await page.goto('https://pptr.dev');

await browser.close();
})();


We choose to emulate an iPhone X - which means changing the user agent appropriately. Furthermore, we adjust the viewport size
according to the display points that appear here.

It’s easy to understand that setUserAgent defines a specific user agent for the page, whereas setViewport modifies the viewport
definition of the page. In case of multiple pages, each one has its own user agent and viewport definition.

Here’s the result of the code example above:


Emulating an iPhone X

Indeed, the console panel shows us that the page is opened with the right user agent and viewport size.

The truth is that we don't have to specify the iPhone X's description explicitly, because the library ships with a built-in list of device descriptors. On top of that, it provides a method called emulate , which is practically a shortcut for invoking setUserAgent and setViewport , one after the other.
Let’s use that:

const puppeteer = require('puppeteer');


const devices = require('puppeteer/DeviceDescriptors');

(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

await page.emulate(devices['iPhone X']);


await page.goto('https://pptr.dev');

await browser.close();
})();


It’s merely changed to pass the boilerplate descriptor to emulate (instead of declaring that explicitly). Notice we import the descriptors
out of puppeteer/DeviceDescriptors .

Handling Events
The Page class supports emitting various events by extending Node.js's EventEmitter . This means we can use the natively supported methods to handle these events - such as: on , once , removeListener and so on.

Here’s the list of the supported events:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();

// Emitted when the DOM is parsed and ready (without waiting for resources)
page.once('domcontentloaded', () => console.info('✅ DOM is ready'));

// Emitted when the page is fully loaded


page.once('load', () => console.info('✅ Page is loaded'));

// Emitted when the page attaches a frame


page.on('frameattached', () => console.info('✅ Frame is attached'));

// Emitted when a frame within the page is navigated to a new URL


page.on('framenavigated', () => console.info('👉 Frame is navigated'));

// Emitted when a script within the page uses `console.timeStamp`


page.on('metrics', data => console.info(`👉 Timestamp added at ${data.metrics.Timestamp}`));

// Emitted when a script within the page uses `console`


page.on('console', message => console[message.type()](`👉 ${message.text()}`));

// Emitted when the page emits an error event (for example, the page crashes)
page.on('error', error => console.error(`❌ ${error}`));

// Emitted when a script within the page has uncaught exception


page.on('pageerror', error => console.error(`❌ ${error}`));

// Emitted when a script within the page uses `alert`, `prompt`, `confirm` or `beforeunload`
page.on('dialog', async dialog => {
console.info(`👉 ${dialog.message()}`);
await dialog.dismiss();
});

// Emitted when a new page, that belongs to the browser context, is opened
page.on('popup', () => console.info('👉 New page is opened'));

// Emitted when the page produces a request


page.on('request', request => console.info(`👉 Request: ${request.url()}`));

// Emitted when a request, which is produced by the page, fails


page.on('requestfailed', request => console.info(`❌ Failed request: ${request.url()}`));

// Emitted when a request, which is produced by the page, finishes successfully


page.on('requestfinished', request => console.info(`👉 Finished request: ${request.url()}`));

// Emitted when a response is received


page.on('response', response => console.info(`👉 Response: ${response.url()}`));

// Emitted when the page creates a dedicated WebWorker


page.on('workercreated', worker => console.info(`👉 Worker: ${worker.url()}`));

// Emitted when the page destroys a dedicated WebWorker


page.on('workerdestroyed', worker => console.info(`👉 Destroyed worker: ${worker.url()}`));

// Emitted when the page detaches a frame


page.on('framedetached', () => console.info('✅ Frame is detached'));

// Emitted after the page is closed


page.once('close', () => console.info('✅ Page is closed'));

await page.goto('https://pptr.dev');

await browser.close();
})();



From looking at the list above - we clearly understand that the supported events include aspects of loading, frames, metrics, console,
errors, requests, responses and even more!

Let’s simulate and trigger part of the events by adding this script:

// Triggers `metrics` event


await page.evaluate(() => console.timeStamp());

// Triggers `console` event


await page.evaluate(() => console.info('A console message within the page'));

// Triggers `dialog` event


await page.evaluate(() => alert('An alert within the page'));

// Triggers `error` event


page.emit('error', new Error('An error within the page'));

// Triggers `close` event


await page.close();


As we probably know, evaluate just executes the supplied script within the page context.
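Note that evaluate can also receive arguments and return a serializable result back to Node.js. A minimal sketch (the h1 selector here is just an assumption for illustration):

// Passes a selector into the page context and returns the element's text to Node.js
const headingText = await page.evaluate(
  selector => document.querySelector(selector).textContent,
  'h1'
);
console.info(headingText);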

Either way, the output is going to reflect the events we listen for:


Listening the page events

In case you wonder - it's possible to listen for custom events that are triggered in the page. Basically, it means defining the event handler on the page's window using the exposeFunction method. Check out this example to understand exactly how to implement it.
Operating Mouse
In general, the mouse controls the motion of a pointer in two dimensions within a viewport. Unsurprisingly, Puppeteer represents the
mouse by a class called Mouse .

Moreover, every Page instance has a Mouse - which allows performing operations such as changing its position and clicking within the
viewport.

Let’s start with changing the mouse position:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

await page.setViewport({ width: 1920, height: 1080 });


await page.goto('https://pptr.dev');

// Waits until the API sidebar is rendered


await page.waitForSelector('sidebar-component');
// Hovers the second link inside the API sidebar
await page.mouse.move(40, 150);

await browser.close();
})();


The scenario we simulate is moving the mouse over the second link of the left API sidebar. We set a viewport size and wait explicitly for
the sidebar component to ensure it’s really rendered.

Then, we invoke move in order to position the mouse with appropriate coordinates, that actually represent the center of the second link.

This is the expected result:


Hovering the second link

Although it’s hard to see, the second link is hovered as we planned.

The next step is simply clicking on the link by the respective coordinates:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
await page.setViewport({ width: 1920, height: 1080 });
await page.goto('https://pptr.dev');
await page.waitForSelector('sidebar-component');

// Clicks the second link and triggers `mouseup` event after 1000ms
await page.mouse.click(40, 150, { delay: 1000 });

await browser.close();
})();


Instead of changing the position explicitly, we just use click - which basically triggers mousemove , mousedown and mouseup events,
one after another.

Note: We delay the pressing in order to demonstrate how to modify the click behavior, nothing more. It’s worth pointing out that we can
also control the mouse buttons (left, center, right) and the number of clicks.
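For instance, a minimal sketch of those options (assuming the same page setup as above):

// Simulates a right-click at the same coordinates
await page.mouse.click(40, 150, { button: 'right' });

// Simulates a double-click with the left button
await page.mouse.click(40, 150, { clickCount: 2 });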

Another nice thing is the ability to simulate a drag and drop behavior easily:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

await page.setViewport({ width: 1920, height: 1080 });


await page.goto('https://pptr.dev');
await page.waitForSelector('sidebar-component');

// Drags the mouse from a point


await page.mouse.move(0, 0);
await page.mouse.down();
// Drops the mouse to another point
await page.mouse.move(100, 100);
await page.mouse.up();

await browser.close();
})();


All we do is use the Mouse methods: pressing the mouse at one position, dragging it to another, and afterward releasing it.

Operating Keyboard
The keyboard is another way to interact with the page, mostly for input purposes.

Similar to the mouse, Puppeteer represents the keyboard by a class called Keyboard - and every Page instance holds such an
instance.

Let’s type some text within the search input:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

await page.setViewport({ width: 1920, height: 1080 });


await page.goto('https://pptr.dev');

// Waits until the toolbar is rendered


await page.waitForSelector('toolbar-component');

// Focuses the search input


await page.focus('[type="search"]');

// Types the text into the focused element


await page.keyboard.type('Keyboard', { delay: 100 });

await browser.close();
})();


Notice that we wait for the toolbar (instead of the API sidebar). Then, we focus the search input element and simply type text into it.

On top of typing text, it’s obviously possible to trigger keyboard events:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

await page.setViewport({ width: 1920, height: 1080 });


await page.goto('https://pptr.dev');
await page.waitForSelector('toolbar-component');

await page.focus('[type="search"]');
await page.keyboard.type('Keyboard', { delay: 100 });

// Choosing the third result


await page.keyboard.press('ArrowDown', { delay: 200 });
await page.keyboard.press('ArrowDown', { delay: 200 });
await page.keyboard.press('Enter');
await browser.close();
})();


Basically, we press ArrowDown twice and Enter in order to choose the third search result.

See that in action:

Choosing a search result using the keyboard

By the way, it’s nice to know that there is a list of the key codes.
Taking Screenshots
Taking screenshots through Puppeteer is quite an easy mission.

The API provides us a dedicated method for that:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.setViewport({ width: 1920, height: 1080 });


await page.goto('https://pptr.dev');
await page.waitForSelector('title');

// Takes a screenshot of the whole viewport


await page.screenshot({ path: 'screenshot.png' });

await browser.close();
})();


As we see, the screenshot method does all the charm - we just have to supply a path for the output.

Moreover, it’s also possible to control the type, quality and even clipping the image:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.setViewport({ width: 1920, height: 1080 });
await page.goto('https://pptr.dev');
await page.waitForSelector('title');

// Takes a screenshot of an area within the page


await page.screenshot({
path: 'screenshot.jpg',
type: 'jpeg',
quality: 80,
clip: { x: 220, y: 0, width: 630, height: 360 }
});

await browser.close();
})();


Here’s the output:


Capturing an area within the page

Generating PDF
Puppeteer is also useful for generating a PDF file from the page content.

Let’s demonstrate that:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Navigates to the project README file
await page.goto('https://github.com/GoogleChrome/puppeteer/blob/master/README.md');

// Generates a PDF from the page content


await page.pdf({ path: 'overview.pdf' });

await browser.close();
})();


Running the pdf method simply generates the following file:

Generating a PDF file from the content


Faking Geolocation
Many websites customize their content based on the user’s geolocation.

Modifying the geolocation of a page is pretty straightforward:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch({ devtools: true });
const page = await browser.newPage();

// Grants permission for changing geolocation


const context = browser.defaultBrowserContext();
await context.overridePermissions('https://pptr.dev', ['geolocation']);

await page.goto('https://pptr.dev');
await page.waitForSelector('title');

// Changes to the north pole's location


await page.setGeolocation({ latitude: 90, longitude: 0 });

await browser.close();
})();


First, we grant the browser context the appropriate permissions. Then, we use setGeolocation to override the current geolocation with the coordinates of the North Pole.

Here’s what we get when printing the location through navigator :


Changing the geolocation of the page

Accessibility
The accessibility tree is a subset of the DOM that includes only elements with relevant information for assistive technologies such as
screen readers, voice controls and so on. Having the accessibility tree means we can analyze and test the accessibility support in the
page.

When it comes to Puppeteer, it enables us to capture the current state of the tree:
const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.goto('https://pptr.dev');
await page.waitForSelector('title');

// Captures the current state of the accessibility tree


const snapshot = await page.accessibility.snapshot();
console.info(snapshot);

await browser.close();
})();


The snapshot doesn’t pretend to be the full tree, but rather including just the interesting nodes (those which are acceptable by most of
the assistive technologies).

Note: We can obtain the full tree by setting interestingOnly to false.
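For instance:

// Captures the full accessibility tree, including nodes that aren't "interesting"
const fullSnapshot = await page.accessibility.snapshot({ interestingOnly: false });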

Code Coverage
The code coverage feature was introduced officially as part of Chrome v59 - and provides the ability to measure how much code is
being used, compared to the code that is actually loaded. In this manner, we can reduce the dead code and eventually speed up the
loading time of the pages.

With Puppeteer, we can manipulate the same feature programmatically:


const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();

// Starts to gather coverage information for JS and CSS files


await Promise.all([page.coverage.startJSCoverage(), page.coverage.startCSSCoverage()]);

await page.goto('https://pptr.dev');
await page.waitForSelector('title');

// Stops the coverage gathering


const [jsCoverage, cssCoverage] = await Promise.all([
page.coverage.stopJSCoverage(),
page.coverage.stopCSSCoverage()
]);

// Calculates how many bytes are being used based on the coverage
const calculateUsedBytes = (type, coverage) =>
coverage.map(({ url, ranges, text }) => {
let usedBytes = 0;

ranges.forEach(range => (usedBytes += range.end - range.start - 1));

return {
url,
type,
usedBytes,
totalBytes: text.length
};
});
console.info([
...calculateUsedBytes('js', jsCoverage),
...calculateUsedBytes('css', cssCoverage)
]);

await browser.close();
})();


We instruct Puppeteer to gather coverage information for JavaScript and CSS files until the page is loaded. Thereafter, we define calculateUsedBytes , which goes through the collected coverage data and calculates how many bytes are being used (based on the coverage). Finally, we invoke the created function on both coverages.

Let’s look at the output:

[
{
url: 'https://pptr.dev/',
type: 'js',
usedBytes: 149,
totalBytes: 150
},
{
url: 'https://www.googletagmanager.com/gtag/js?id=UA-106086244-2',
type: 'js',
usedBytes: 21018,
totalBytes: 66959
},
{
url: 'https://pptr.dev/index.js',
type: 'js',
usedBytes: 108922,
totalBytes: 141703
},
{
url: 'https://www.google-analytics.com/analytics.js',
type: 'js',
usedBytes: 19665,
totalBytes: 44287
},
{
url: 'https://pptr.dev/style.css',
type: 'css',
usedBytes: 5135,
totalBytes: 14326
}
]


As expected, the output contains usedBytes and totalBytes for each file.
Measuring Performance
One objective of measuring website performance is to analyze how a page performs during load and runtime - with the intention of making it faster.

Let’s see how we use Puppeteer to measure our page performance:

- Analyzing load time through metrics

Navigation Timing is a Web API that provides information and metrics relating to page navigation and load events, and is accessible via window.performance .

In order to benefit from it, we should evaluate this API within the page context:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.goto('https://pptr.dev');
await page.waitForSelector('title');

// Executes Navigation API within the page context


const metrics = await page.evaluate(() => JSON.stringify(window.performance));

// Parses the result to JSON


console.info(JSON.parse(metrics));

await browser.close();
})();



Notice that if evaluate receives a function which returns a non-serializable value - then evaluate eventually returns undefined . That's exactly why we stringify window.performance when evaluating within the page context.

The result is transformed into a comfy object, which looks like the following:

{
timeOrigin: 1562785571340.2559,
timing: {
navigationStart: 1562785571340,
unloadEventStart: 0,
unloadEventEnd: 0,
redirectStart: 0,
redirectEnd: 0,
fetchStart: 1562785571340,
domainLookupStart: 1562785571347,
domainLookupEnd: 1562785571348,
connectStart: 1562785571348,
connectEnd: 1562785571528,
secureConnectionStart: 1562785571425,
requestStart: 1562785571529,
responseStart: 1562785571607,
responseEnd: 1562785571608,
domLoading: 1562785571615,
domInteractive: 1562785571621,
domContentLoadedEventStart: 1562785571918,
domContentLoadedEventEnd: 1562785571926,
domComplete: 1562785572538,
loadEventStart: 1562785572538,
loadEventEnd: 1562785572538
},
navigation: {
type: 0,
redirectCount: 0
}
}


Now we can simply combine these metrics and calculate different load times over the loading timeline. For instance, loadEventEnd -
navigationStart represents the time since the navigation started until the page is loaded.

Note: All explanations about the different timings above are available here.
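For instance, continuing the example above, we could compute a couple of common load times like this:

const timing = JSON.parse(metrics).timing;

// Time from the start of the navigation until the page is fully loaded
const loadTime = timing.loadEventEnd - timing.navigationStart;

// Time until the DOM is parsed and ready (without waiting for resources)
const domReadyTime = timing.domContentLoadedEventEnd - timing.navigationStart;

console.info(`Load: ${loadTime}ms, DOM ready: ${domReadyTime}ms`);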

- Analyzing runtime through metrics

As for runtime metrics, unlike load time, Puppeteer provides a neat API:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.goto('https://pptr.dev');
await page.waitForSelector('title');

// Returns runtime metrics of the page


const metrics = await page.metrics();
console.info(metrics);

await browser.close();
})();


We invoke the metrics method and get the following result:


{
Timestamp: 6400.768827, // When the metrics were taken
Documents: 13, // Number of documents
Frames: 7, // Number of frames
JSEventListeners: 33, // Number of events
Nodes: 51926, // Number of DOM elements
LayoutCount: 6, // Number of page layouts
RecalcStyleCount: 13, // Number of page style recalculations
LayoutDuration: 0.545877, // Total duration of all page layouts
RecalcStyleDuration: 0.011856, // Total duration of all page style recalculations
ScriptDuration: 0.064591, // Total duration of JavaScript executions
TaskDuration: 1.244381, // Total duration of all performed tasks by the browser
JSHeapUsedSize: 17158776, // Actual memory usage by JavaScript
JSHeapTotalSize: 33492992 // Total memory usage, including free allocated space, by JavaScript
}


The most interesting metric above is probably JSHeapUsedSize , which represents the actual memory usage of the page. Notice that the result is actually the output of Performance.getMetrics , which is part of the Chrome DevTools Protocol.
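For instance, converting it into megabytes makes the number easier to digest:

// Converts the used heap size from bytes to megabytes
const usedMB = (metrics.JSHeapUsedSize / 1024 / 1024).toFixed(2);
console.info(`The page uses ~${usedMB} MB of JavaScript heap`);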

- Analyzing browser activities through tracing

Chromium Tracing is a profiling tool that allows recording what the browser is really doing under the hood - with an emphasis on every thread, tab, and process. It's also reflected in Chrome DevTools as part of the Timeline panel.

Furthermore, this tracing ability is available through Puppeteer as well - which, as we might guess, practically uses the Chrome DevTools Protocol.

For example, let’s record the browser activities during navigation:

const puppeteer = require('puppeteer');

(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();

// Starts to record a trace of the operations


await page.tracing.start({ path: 'trace.json' });

await page.goto('https://pptr.dev');
await page.waitForSelector('title');

// Stops the recording


await page.tracing.stop();

await browser.close();
})();


When the recording is stopped, a file called trace.json is created and contains the output that looks like:

{
"traceEvents":[
{
"pid": 21975,
"tid": 38147,
"ts": 17376402124,
"ph": "X",
"cat": "toplevel",
"name": "MessageLoop::RunTask",
"args": {
"src_file": "../../mojo/public/cpp/system/simple_watcher.cc",
"src_func": "Notify"
},
"dur": 68,
"tdur": 56,
"tts": 26330
},
// More trace events
]
}


Now that we’ve the trace file, we can open it using Chrome DevTools, chrome://tracing or Timeline Viewer.

Here’s the Performance panel after importing the trace file into the DevTools:

Importing a trace file


Summary
Today we introduced Puppeteer's API through concrete examples.

Let’s recap the main points:

Puppeteer is a Node.js library for automating, testing and scraping web pages on top of the Chrome DevTools Protocol.
Puppeteer's ecosystem provides a lightweight package, puppeteer-core , which is a browser automation library that interacts with any DevTools-protocol-based browser, without installing Chromium.
Puppeteer's ecosystem also provides the full product package, which installs Chromium in addition to the browser automation library.
Puppeteer provides the ability to launch a Chromium browser instance or connect to an existing one.
Puppeteer's ecosystem provides an experimental package, puppeteer-firefox , that interacts with Firefox.
The browser context allows separating different sessions within a single browser instance.
Puppeteer launches the browser in headless mode by default, which merely uses the command line. A headful mode, for opening the browser with a GUI, is supported as well.
Puppeteer provides several ways to debug our application in the browser, whereas debugging the process that executes Puppeteer is the same as debugging a regular Node.js process.
Puppeteer allows navigating to a page by URL and operating the page through the mouse and keyboard.
Puppeteer allows examining a page's visibility, behavior and responsiveness on various devices.
Puppeteer allows taking screenshots of the page and generating PDFs from the content, easily.
Puppeteer allows analyzing and testing a page's accessibility support.
Puppeteer allows speeding up page performance by providing information about dead code, handy metrics and a manual tracing ability.

And finally, Puppeteer is a powerful browser automation tool with a pretty simple API. A decent number of capabilities are supported, including some we haven't covered at all - and that's why your next step could definitely be the official documentation. 😉

Here’s attached the final project:


VS Code Snippets
Well, if you wish to get some useful code snippets of the Puppeteer API for Visual Studio Code - then the following extension might interest you:

Using the snippets to generate a basic Puppeteer script

You’re welcome to take a look at the extension page.


Running crawls with Node.js

For this guide, we're going to assume you're interested in scraping keywords from a specific list of websites you're interested in.

1. Install the request module for Node

In your terminal, run the following to install the request module for Node:

Shell

npm install request

2. Get your API token


The next thing you'll need is your API token. The API token lets you authenticate with the 80legs API and tells it who you are, what you have access to, and so on. Without it, you can't use the API.

To get your API token, go to the 80legs Web Portal (https://portal.80legs.com), log in, and click on your account name at the top-right. From there, you'll see a link to the "My Account" page, which will take you to a page showing your token. Your API token will be a long string of letters and numbers. Copy the API token or store it somewhere you can easily reference.

📘 For the rest of this document, we'll use AAAXXXXXXXXXXXX as a substitute example for your actual API token when showing
example API calls.

3. Upload your URL list


Before we can create our web crawl, we need to create a URL list. A URL list is one or more URLs from which your crawl will start. Without the
URL list, a crawl won't know where to start.
Write the following code in your code editor (replace the dummy API token with your real API token):
JavaScript
var request = require('request');

// Set your API parameters here.


var API_token = 'AAAXXXXXXXXXXXX';
var urllist_name = 'urlList1';

var request_options = {
url: 'https://' + API_token + ':@api.80legs.com/v2/urllists/' + urllist_name,
method: 'PUT',
json: [
'https://www.80legs.com',
'https://www.datafiniti.co'
],
headers: {
'Content-Type': 'application/json'
}
}

// Make the API call.


request(request_options, function(error, response, body) {
if (error) {
console.log(response);
} else {
console.log(body);
}
});

In this example, we're creating a URL list with just https://www.80legs.com and https://www.datafiniti.co . Any crawl using this URL list will start crawling from these two URLs.

You should get a response similar to this (although it may not look as pretty in your terminal):

JSON
{
location: 'urllists/AAAXXXXXXXXXXXX/urlList1',
name: 'urlList1',
user: 'AAAXXXXXXXXXXXX',
date_created: '2018-07-24T00:30:43.991Z',
date_updated: '2018-07-24T00:30:43.991Z',
id: '5b5673331141d3e8f728dde6'
}

4. Upload your 80app


The next thing we'll need to do is upload an 80app. An 80app is a small piece of code that runs every time your crawler requests a URL and
does the work of generating links to crawl and scraping data from the web page.

You can read more about 80apps here. You can also view sample 80app code here. For now, we'll just use the code from the KeywordCollector 80app, since we're interested in scraping keywords for this example. Copy the code and save it on your local system as keywordCollector.js .

Write the following code in your code editor (replace the dummy API token with your real API token and /path/to/keywordCollector.js with the actual path to this file on your local system):

JavaScript

var request = require('request');


var fs = require('fs');

// Set your API parameters here.


var API_token = 'AAAXXXXXXXXXXXX';
var app_name = 'keywordCollector.js';
var app_content = fs.readFileSync('keywordCollector.js','utf8');

var request_options = {
url: 'https://' + API_token + ':@api.80legs.com/v2/apps/' + app_name,
method: 'PUT',
body: app_content,
headers: {
'Content-Type': 'application/octet-stream'
}
}

// Make the API call.


request(request_options, function(error, response, body) {
if (error) {
console.log(response);

} else {
console.log(body);
}
});

You should get a response similar to this (although it may not look as pretty in your terminal):

JSON

{
"location":"80apps/AAAXXXXXXXXXXXX/keywordCollector.js",
"name":"app1",
"user":"AAAXXXXXXXXXXXX",
"date_created":"2018-07-24T00:41:29.598Z",
"date_updated":"2018-07-24T00:41:29.598Z",
"id":"5b5675b91141d3e8f76d4fc7"
}

5. Configure and run your crawl


Now that we've created a URL list and an 80app, we're ready to run our web crawl!

Write the following code in your code editor (replace the dummy API token with your real API token):

JavaScript
var request = require('request');

// Set your API parameters here.


var API_token = 'AAAXXXXXXXXXXXX';
var crawl_name = 'crawl1';
var url_list = 'urlList1';
var app = 'keywordCollector.js';
var max_depth = 10;
var max_urls = 1000;

var request_options = {
url: 'https://' + API_token + ':@api.80legs.com/v2/crawls/' + crawl_name,
method: 'PUT',
json: {
"urllist": url_list,
"app": app,
"max_depth": max_depth,
"max_urls": max_urls
},
headers: {
'Content-Type': 'application/json'
}
}

// Make the API call.


request(request_options, function(error, response, body) {
if (error) {
console.log(response);
} else {
console.log(body);
}
});

You should get a response similar to this (although it may not look as pretty in your terminal):

JSON

{
date_updated: '2018-07-24T00:57:47.445Z',
date_created: '2018-07-24T00:57:47.245Z',
user: 'AAAXXXXXXXXXXXX',
name: 'crawl1',
urllist: 'urlList1',
max_urls: 1000,
date_started: '2018-07-24T00:57:47.444Z',
format: 'json',
urls_crawled: 0,
max_depth: 10,
depth: 0,
status: 'STARTED',
app: 'keywordCollector.js',
id: 1568124
}

Let's break down each of the parameters we sent in our request:

Request Body Parameter | Description
app | The name of the 80app we're going to use.
urllist | The name of the URL list we're going to use.
max_depth | The maximum depth level for this crawl. Learn more about crawl depth here.
max_urls | The maximum number of URLs this crawl will request.

Now let's go through the response the API returned:

Response Field | Description
id | The ID of the crawl. This is a globally unique identifier.
name | The name you gave the crawl.
user | Your API token.
app | The name of the 80app this crawl is using.
urllist | The URL list this crawl is using.
max_depth | The maximum depth level for this crawl.
max_urls | The maximum number of URLs this crawl will request.
status | The current status of the crawl. Check the possible values here.
depth | The current depth level of the crawl.
urls_crawled | The number of URLs crawled so far.
date_created | The date you created this crawl.
date_completed | The date the crawl completed. This will be empty until the crawl completes or is canceled.
date_started | The date the crawl started running. This can be different than date_created when the crawl starts off as queued.
6. Check on crawl status

As mentioned, there is a status field in the response body above. This field shows us the crawl has started, which means it's running. Web crawls typically do not complete instantaneously, since they need to spend time requesting URLs and crawling links. In order to tell if the crawl has finished, we can check on its status on a periodic basis.

Write the following code in your code editor (replace the dummy API token with your real API token):

JavaScript

var request = require('request');

// Set your API parameters here.


var API_token = 'AAAXXXXXXXXXXXX';
var crawl_name = 'crawl1';

var request_options = {
url: 'https://' + API_token + ':@api.80legs.com/v2/crawls/' + crawl_name,
method: 'GET'
}

// Make the API call.


request(request_options, function(error, response, body) {
if (error) {
console.log(response);
} else {
console.log(body);
}
});

You'll get another crawl object as your response like this:

JSON
{
date_updated: '2018-07-24T00:57:47.445Z',
date_created: '2018-07-24T00:57:47.245Z',
user: 'AAAXXXXXXXXXXXX',
name: 'crawl1',
urllist: 'urlList1',
max_urls: 1000,
date_started: '2018-07-24T00:57:47.444Z',
format: 'json',
urls_crawled: 1,
max_depth: 10,
depth: 0,
status: 'STARTED',
app: 'keywordCollector.js',
id: 1568124
}

If you keep sending this request, you should notice depth and urls_crawled gradually increasing. At some point, status will change to COMPLETED . That's how you know the crawl has finished running.
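One simple way to automate that check is polling the status endpoint on an interval - a sketch, assuming the same request_options as above and an arbitrary 30-second interval:

// Polls the crawl status every 30 seconds until it completes.
var poller = setInterval(function() {
  request(request_options, function(error, response, body) {
    if (error) return console.log(error);
    var crawl = JSON.parse(body);
    console.log('Status: ' + crawl.status + ', URLs crawled: ' + crawl.urls_crawled);
    if (crawl.status === 'COMPLETED') clearInterval(poller);
  });
}, 30000);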

7. Download results

After the crawl finishes, you'll want to download the result files. Result files are logs of all the data scraped during the crawl.

Once you see a status of COMPLETED for your crawl, use the following code to get the results (replace the dummy API token with your real API token):

JavaScript
var request = require('request');

// Set your API parameters here.


var API_token = 'AAAXXXXXXXXXXXX';
var crawl_name = 'crawl1';

var request_options = {
url: 'https://' + API_token + ':@api.80legs.com/v2/results/' + crawl_name,
method: 'GET'
}

// Make the API call.


request(request_options, function(error, response, body) {
if (error) {
console.log(response);
} else {
console.log(body);
}
});

You should get a response similar to this (although it may not look as pretty in your terminal):

JSON

[
"http://datafiniti-voltron-results.s3.amazonaws.com/abcdefghijklmnopqrstuvwxyz012345/123456_1.txt?AWSAccessKeyId=AKIAIELL2XADVPVJZ4
]

Depending on how many URLs you crawl, and how much data you scrape from each URL, you'll see one or more links to result files in your results response. 80legs will create a results file for every 100 MB of data you scrape, which means result files can be posted while your crawl is still running.

For very large crawls that take more than 7 days to run, we recommend checking your available results on a weekly basis. Result files will expire
7 days after they are created.

To download the result files, you can run code like this:

JavaScript
var request = require('request');
var fs = require('fs');

// Set your API parameters here.


var API_token = 'AAAXXXXXXXXXXXX';
var crawl_name = 'crawl1';

var request_options = {
url: 'https://' + API_token + ':@api.80legs.com/v2/results/' + crawl_name,
method: 'GET'
}

request(request_options, function (error, response, body) {
  var results = JSON.parse(body);
  console.log(results);

  // Tracks how many result files have finished downloading.
  var num_files_downloaded = 0;

  for (let i = 0; i < results.length; i++) {
    const filename = crawl_name + '_' + i + '.txt';
    const file = fs.createWriteStream(filename);

    // Streams each result file to disk; 'finish' fires once the file is fully written.
    request(results[i]).pipe(file).on('finish', function() {
      console.log('File ' + (i+1) + ' out of ' + results.length + ' saved: ' + filename);
      num_files_downloaded++;
      if (num_files_downloaded === results.length) process.exit();
    });
  }
});

8. Process the results

After you've downloaded the result files, you'll want to process them so you can make use of the data. A result file will have a structure similar to this:

JSON
[
{
"url": "https://www.80legs.com",
"result": "...."
},
{
"url": "https://www.datafiniti.co",
"result": "...."
},
...
]

Note that the file is a large JSON object. Specifically, it's an array of objects, where each object consists of a url field and a result field. The result field will contain a string related to the data you've scraped, which, if you remember, is determined by your 80app.

In order to process these results files, you can use code similar to this:

JavaScript
var fs = require('fs');

// Set the location of your file here.


var file = 'xxxx_x.txt';

function processData(result) {
// Edit these lines to do more with the data.
console.log(result);
}

var input = fs.readFileSync(file, 'utf8');


var resultRegex = /\{\"url\":\".*?\",\"result\":\"\{.*?\}\"\}/g;
var match;
do {
match = resultRegex.exec(input);
if (match) {
let result = JSON.parse(match[0]);
processData(result);
}
} while (match);

You can edit the code in the processData function above to do whatever you'd like with the data, such as store the data in a database, write it out to your console, etc.
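For example, here's a hypothetical processData that appends each result to a newline-delimited JSON file (the output filename is just an assumption):

// Appends each scraped result as a single JSON line for later analysis
var output = fs.createWriteStream('processed_results.ndjson', { flags: 'a' });

function processData(result) {
  output.write(JSON.stringify(result) + '\n');
}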

📘 For this guide, we have created separate code files or blocks for each step of the crawl creation process. We've done this so you can
understand the process better. In practice, it's probably best to combine the code into a single application to improve
maintainability and usability.
Creating Sequelize Associations with the Sequelize CLI
Bruno Galvao · Apr 16, 2020 · 7 min read
Sequelize is a popular, easy-to-use JavaScript object relational mapping
(ORM) tool that works with SQL databases. It’s fairly straightforward to
start a new project using the Sequelize CLI, but to truly take advantage of
Sequelize’s capabilities, you’ll want to define relationships between your
models.

In this walkthrough, we’ll set up a Sequelize project to assign tasks to


particular users. We’ll use associations to define that relationship, then
explore ways to query the database based on those associations.

Let’s start by installing Postgres, Sequelize, and the Sequelize CLI in a new
project folder:

mkdir sequelize-associations
cd sequelize-associations
npm init -y
npm install sequelize pg
npm install --save-dev sequelize-cli

Next, let’s initialize a Sequelize project, then open the whole directory in
our code editor:
npx sequelize-cli init
code .

To learn more about any of the Sequelize CLI commands below, see:
Getting Started with Sequelize CLI

Let’s configure our Sequelize project to work with Postgres. Find


config.json in the /config directory and replace what’s there with this
code:

{
"development": {
"database": "sequelize_associations_development",
"host": "127.0.0.1",
"dialect": "postgres"
},
"test": {
"database": "sequelize_associations_test",
"host": "127.0.0.1",
"dialect": "postgres"
},
"production": {
"database": "sequelize_associations_production",
"host": "127.0.0.1",
"dialect": "postgres"
}
}
Cool, now we can tell Sequelize to create the database:

npx sequelize-cli db:create

Next we will create a User model from the command line:

npx sequelize-cli model:generate --name User --attributes firstName:string,lastName:string,email:string,password:string

Running model:generate automatically creates both a model file and a migration with the attributes we've specified. You can find these files within your project directory, but there's no need to change them right now. (Later, we'll edit the model file to define our associations.)

Now we’ll execute our migration to create the Users table in our database:

npx sequelize-cli db:migrate


Now let’s create a seed file:

npx sequelize-cli seed:generate --name user

You will see a new file in /seeders . In that file, paste the following code to create a "John Doe" demo user:

module.exports = {
up: (queryInterface, Sequelize) => {
return queryInterface.bulkInsert('Users', [{
firstName: 'John',
lastName: 'Doe',
email: '[email protected]',
password: '$321!pass!123$',
createdAt: new Date(),
updatedAt: new Date()
}], {});
},

down: (queryInterface, Sequelize) => {


return queryInterface.bulkDelete('Users', null, {});
}
};

Once we’ve saved our seed file, let’s execute it:


npx sequelize-cli db:seed:all

Drop into psql and query the database to see the Users table:

psql sequelize_associations_development
SELECT * FROM "Users";

Defining associations
Great! We’ve got a working User model, but our John Doe seems a little
bored. Let’s give John something to do by creating a Task model:

npx sequelize-cli model:generate --name Task --attributes title:string,userId:integer

Just as with the User model above, this Sequelize CLI command will create
both a model file and a migration based on the attributes we specified. But
this time, we’ll need to edit both in order to tie our models together.
First, find task.js in the /models subdirectory within your project
directory. This is the Sequelize model for tasks, and you’ll see that the
sequelize.define() method sets up title and userId as attributes, just as
we specified above.

Below that, you’ll see Task.associate . It’s currently empty, but this is where

we’ll actually tie each task to a userId . Edit your file to look like this:

module.exports = (sequelize, DataTypes) => {


const Task = sequelize.define('Task', {
title: DataTypes.STRING,
userId: DataTypes.INTEGER
}, {});
Task.associate = function(models) {
// associations can be defined here
Task.belongsTo(models.User, {
foreignKey: 'userId',
onDelete: 'CASCADE'
})
};
return Task;
};

What do those changes do? Task.belongsTo() sets up a "belongs to" relationship with the User model, meaning that each task will be associated with a specific user.
We do this by setting userId as a "foreign key," which means it refers to a key in another model. In our model, tasks must belong to a user, so userId will correspond to the id in a particular User entry. (The onDelete: 'CASCADE' configures our model so that if a user is deleted, the user's tasks will be deleted too.)

We also need to change our User model to reflect the other side of this
relationship. Find user.js and change the section under User.associate so
that your file looks like this:

module.exports = (sequelize, DataTypes) => {


const User = sequelize.define('User', {
firstName: DataTypes.STRING,
lastName: DataTypes.STRING,
password: DataTypes.STRING,
email: DataTypes.STRING
}, {});
User.associate = function(models) {
// associations can be defined here
User.hasMany(models.Task, {
foreignKey: 'userId',
})
};
return User;
};
For this model, we’ve set up a “has many” relationship, meaning a user can
have multiple tasks. In the .hasMany() method, the foreignKey option is set
to the name of the key on the other table. In other words, when the userId
on a task is the same as the id of a user, we have a match.

We still have to make one more change to set up our relationship in the
database. In your project’s /migrations folder, you should see a file whose
name ends with create-task.js . Change the object labeled userId so that
your file looks like the code below:

module.exports = {
up: (queryInterface, Sequelize) => {
return queryInterface.createTable('Tasks', {
id: {
allowNull: false,
autoIncrement: true,
primaryKey: true,
type: Sequelize.INTEGER
},
title: {
type: Sequelize.STRING
},
userId: {
type: Sequelize.INTEGER,
onDelete: 'CASCADE',
references: {
model: 'Users',
key: 'id',
as: 'userId',
}
},
createdAt: {
allowNull: false,
type: Sequelize.DATE
},
updatedAt: {
allowNull: false,
type: Sequelize.DATE
}
});
},
down: (queryInterface, Sequelize) => {
return queryInterface.dropTable('Tasks');
}
};

The references section will set up the Tasks table in our database to reflect
the same relationships we described above. Now we can run our migration:

npx sequelize-cli db:migrate

Now our John Doe is ready to take on tasks — but John still doesn’t have
any actual tasks assigned. Let’s create a task seed file:

npx sequelize-cli seed:generate --name task


Find the newly generated seed file and paste in the following to create a
task:

module.exports = {
up: (queryInterface, Sequelize) => {
return queryInterface.bulkInsert('Tasks', [{
title: 'Build an app',
userId: 1,
createdAt: new Date(),
updatedAt: new Date()
}], {});
},

down: (queryInterface, Sequelize) => {


return queryInterface.bulkDelete('Tasks', null, {});
}
};

We’ll set userId to 1 so that the task will belong to the user we created
earlier. Now we can populate the database.

npx sequelize-cli db:seed:all

Test the database:


psql sequelize_associations_development
SELECT * FROM "Users" JOIN "Tasks" ON "Tasks"."userId" = "Users".id;

Querying via Sequelize


Now we can query our database for information based on these associations
— and through Sequelize, we can do it with JavaScript, which makes it easy
to incorporate with a Node.js application. Let’s create a file to hold our
queries:

touch query.js

Paste the code below into your new file:

const { User, Task } = require('./models')


const Sequelize = require('sequelize');
const Op = Sequelize.Op

// Find all users with their associated tasks


// Raw SQL: SELECT * FROM "Users" JOIN "Tasks" ON "Tasks"."userId" =
"Users".id;

const findAllWithTasks = async () => {


const users = await User.findAll({
include: [{
model: Task
}]
});
console.log("All users with their associated tasks:",
JSON.stringify(users, null, 4));
}

const run = async () => {


await findAllWithTasks()
await process.exit()
}

run()

The first three lines above import our User and Task models, along with
Sequelize. After that, we include a query function that returns every User

along with that user’s associated tasks.

Sequelize’s .findAll() method accepts options as a JavaScript object.


Above, we used the include option to take advantage of “eager loading” —
querying data from multiple models at the same time. With this option,
Sequelize will return a JavaScript object that includes each User with all
associated Task instances as nested objects.

Let’s run our query file to see this in action:


node query.js

Now it’s clear that our John Doe has a project to work on! We can use the
same method to include the User when our query finds a Task . Paste the

following code into query.js :

// Find a task with its associated user


// Raw SQL: SELECT * FROM "Tasks" JOIN "Users" ON "Users"."id" =
"Tasks"."userId";

const findTasksWithUser = async () => {


const tasks = await Task.findAll({
include: [{
model: User
}]
});
console.log("All tasks with their associated user:",
JSON.stringify(tasks, null, 4));
}

Modify const run at the bottom of query.js by adding a line to call findTasksWithUser() . Now run your file again in Node — each Task should include info for the User it belongs to.
The queries in this walkthrough make use of the .findAll() method. To learn
more about other Sequelize queries, see: Using the Sequelize CLI and
Querying

You can also include other options alongside include to make more specific
queries. For example, below we’ll use the where option to find only the
users named John while still returning the associated tasks for each:

// Find all users named John with their associated tasks


// Raw SQL: SELECT * FROM "Users" WHERE firstName = "John" JOIN tasks
ON "Tasks"."userId" = "Users".id;

const findAllJohnsWithTasks = async () => {


const users = await User.findAll({
where: { firstName: "John" },
include: [{
model: Task
}]
});
console.log("All users named John with their associated tasks:",
JSON.stringify(users, null, 4));
}

Paste the above into your query.js and change const run to call
findAllJohnsWithTasks() to try it out.
Now that you know how to use model associations in Sequelize, you can
design your application to deliver the nested data you need. For your next
step, you might decide to include more robust seed data using Faker or
integrate your Sequelize application with Express to create a Node.js
server!

This article was co-authored with Jeremy Rose, a software engineer, editor,
and writer based in New York City.

More info on Sequelize CLI:


Getting Started with Sequelize CLI

Using Sequelize CLI and Querying

Sequelize CLI and Express

Getting Started with Sequelize CLI using Faker

Build an Express API with Sequelize CLI and Express Router

Building an Express API with Sequelize CLI and Unit Testing!

Resources
https://sequelize.org/master/manual/associations.html
https://sequelize.org/master/manual/querying.html




Console
Stability: 2  - Stable

Source Code:  lib/console.js

The  console  module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers.

The module exports two specific components:

A  Console  class with methods such as  console.log() ,  console.error()  and  console.warn()  that can be used to write to any Node.js stream.

A global  console  instance configured to write to  process.stdout  and  process.stderr . The global  console  can be used without calling  require('console') .

Warning: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like
all other Node.js streams. See the  note on process I/O  for more information.

Example using the global  console :

console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(new Error('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
// Error: Whoops, something bad happened
// at [eval]:5:15
// at Script.runInThisContext (node:vm:132:18)
// at Object.runInThisContext (node:vm:309:38)
// at node:internal/process/execution:77:19
// at [eval]-wrapper:6:22
// at evalScript (node:internal/process/execution:76:60)
// at node:internal/main/eval_string:23:3

const name = 'Will Robinson';


console.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to stderr

Example using the  Console  class:

const out = getStreamSomehow();


const err = getStreamSomehow();
const myConsole = new console.Console(out, err);

myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(new Error('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err

const name = 'Will Robinson';


myConsole.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to err

Class:  Console
The  Console  class can be used to create a simple logger with configurable output streams and can be accessed using

either  require('console').Console  or  console.Console  (or their destructured counterparts):

const { Console } = require('console');

const { Console } = console;

new Console(stdout[, stderr][, ignoreErrors])


new Console(options)
options   <Object>

stdout   <stream.Writable>

stderr   <stream.Writable>

ignoreErrors   <boolean>  Ignore errors when writing to the underlying streams. Default:  true .

colorMode   <boolean>  |  <string>  Set color support for this  Console  instance. Setting to  true  enables coloring while inspecting values. Setting

to  false  disables coloring while inspecting values. Setting to  'auto'  makes color support depend on the value of the  isTTY  property and the value

returned by  getColorDepth()  on the respective stream. This option can not be used, if  inspectOptions.colors  is set as well. Default:  'auto' .

inspectOptions   <Object>  Specifies options that are passed along to  util.inspect() .

groupIndentation   <number>  Set group indentation. Default:  2 .

Creates a new  Console  with one or two writable stream instances.  stdout  is a writable stream to print log or info output.  stderr  is used for warning or error output.

If  stderr  is not provided,  stdout  is used for  stderr .

const fs = require('fs');

const output = fs.createWriteStream('./stdout.log');

const errorOutput = fs.createWriteStream('./stderr.log');

// Custom simple logger


const logger = new Console({ stdout: output, stderr: errorOutput });

// use it like console

const count = 5;

logger.log('count: %d', count);

// In stdout.log: count 5

The global  console  is a special  Console  whose output is sent to  process.stdout  and  process.stderr . It is equivalent to calling:

new Console({ stdout: process.stdout, stderr: process.stderr });

console.assert(value[, ...message])
value   <any>  The value tested for being truthy.

...message   <any>  All arguments besides  value  are used as error message.

console.assert()  writes a message if  value  is  falsy  or omitted. It only writes a message and does not otherwise affect execution. The output always starts

with  "Assertion failed" . If provided,  message  is formatted using  util.format() .

If  value  is  truthy , nothing happens.

console.assert(true, 'does nothing');

console.assert(false, 'Whoops %s work', 'didn\'t');

// Assertion failed: Whoops didn't work

console.assert();

// Assertion failed

console.clear()
When  stdout  is a TTY, calling  console.clear()  will attempt to clear the TTY. When  stdout  is not a TTY, this method does nothing.

The specific operation of  console.clear()  can vary across operating systems and terminal types. For most Linux operating systems,  console.clear()  operates

similarly to the  clear  shell command. On Windows,  console.clear()  will clear only the output in the current terminal viewport for the Node.js binary.

console.count([label])
label   <string>  The display label for the counter. Default:  'default' .

Maintains an internal counter specific to  label  and outputs to  stdout  the number of times  console.count()  has been called with the given  label .

> console.count()
default: 1
undefined
> console.count('default')
default: 2
undefined
> console.count('abc')
abc: 1
undefined
> console.count('xyz')
xyz: 1
undefined
> console.count('abc')
abc: 2
undefined
> console.count()
default: 3
undefined
>

console.countReset([label])
label   <string>  The display label for the counter. Default:  'default' .

Resets the internal counter specific to  label .

> console.count('abc');
abc: 1
undefined
> console.countReset('abc');
undefined
> console.count('abc');
abc: 1
undefined
>

console.debug(data[, ...args])
data   <any>

...args   <any>

The  console.debug()  function is an alias for  console.log() .

console.dir(obj[, options])
obj   <any>
options   <Object>

showHidden   <boolean>  If  true  then the object's non-enumerable and symbol properties will be shown too. Default:  false .

depth   <number>  Tells  util.inspect()  how many times to recurse while formatting the object. This is useful for inspecting large complicated objects. To make it recurse indefinitely, pass  null . Default:  2 .

colors   <boolean>  If  true , then the output will be styled with ANSI color codes. Colors are customizable; see  customizing util.inspect() colors . Default:  false .

Uses  util.inspect()  on  obj  and prints the resulting string to  stdout . This function bypasses any custom  inspect()  function defined on  obj .
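For example, a quick sketch of how the options affect output (the object and option values here are illustrative):

const nested = { a: { b: { c: 1 } } };

console.dir(nested, { depth: 1 });
// Prints: { a: { b: [Object] } } — recursion stops after one level

console.dir(nested, { depth: null });
// Prints the full structure: { a: { b: { c: 1 } } }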

console.dirxml(...data)
...data   <any>

This method calls  console.log()  passing it the arguments received. This method does not produce any XML formatting.

console.error([data][, ...args])
data   <any>

...args   <any>

Prints to  stderr  with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar
to  printf(3)  (the arguments are all passed to  util.format() ).

const code = 5;

console.error('error #%d', code);

// Prints: error #5, to stderr

console.error('error', code);

// Prints: error 5, to stderr

If formatting elements (e.g.  %d ) are not found in the first string then  util.inspect()  is called on each argument and the resulting string values are concatenated.
See  util.format()  for more information.

console.group([...label])
...label   <any>

Increases indentation of subsequent lines by  groupIndentation  spaces.

If one or more  label s are provided, those are printed first without the additional indentation.

console.groupCollapsed()
An alias for  console.group() .

console.groupEnd()
Decreases indentation of subsequent lines by  groupIndentation  spaces.
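
A short sketch of how grouping nests output (assuming the default  groupIndentation  of  2 ):

console.log('Outer');
console.group('details');
console.log('indented by two spaces');
console.group();
console.log('indented by four spaces');
console.groupEnd();
console.groupEnd();
console.log('back at the left margin');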

console.info([data][, ...args])
data   <any>

...args   <any>

The  console.info()  function is an alias for  console.log() .

console.log([data][, ...args])
data   <any>

...args   <any>

Prints to  stdout  with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar
to  printf(3)  (the arguments are all passed to  util.format() ).

const count = 5;

console.log('count: %d', count);


// Prints: count: 5, to stdout

console.log('count:', count);

// Prints: count: 5, to stdout

See  util.format()  for more information.

console.table(tabularData[, properties])
tabularData   <any>

properties   <string[]>  Alternate properties for constructing the table.

Try to construct a table with the columns of the properties of  tabularData  (or use  properties ) and rows of  tabularData  and log it. Falls back to just logging the
argument if it can’t be parsed as tabular.

// These can't be parsed as tabular data

console.table(Symbol());

// Symbol()

console.table(undefined);

// undefined

console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }]);

// ┌─────────┬─────┬─────┐
// │ (index) │  a  │  b  │
// ├─────────┼─────┼─────┤
// │    0    │  1  │ 'Y' │
// │    1    │ 'Z' │  2  │
// └─────────┴─────┴─────┘

console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }], ['a']);


// ┌─────────┬─────┐
// │ (index) │  a  │
// ├─────────┼─────┤
// │    0    │  1  │
// │    1    │ 'Z' │
// └─────────┴─────┘

console.time([label])
label   <string>  Default:  'default'

Starts a timer that can be used to compute the duration of an operation. Timers are identified by a unique  label . Use the same  label  when calling  console.timeEnd()  to stop the timer and output the elapsed time in suitable time units to  stdout . For example, if the elapsed time is 3869ms,  console.timeEnd()  displays "3.869s".

console.timeEnd([label])
label   <string>  Default:  'default'

Stops a timer that was previously started by calling  console.time()  and prints the result to  stdout :

console.time('100-elements');

for (let i = 0; i < 100; i++) {}

console.timeEnd('100-elements');

// prints 100-elements: 225.438ms

console.timeLog([label][, ...data])
label   <string>  Default:  'default'
...data   <any>

For a timer that was previously started by calling  console.time() , prints the elapsed time and other  data  arguments to  stdout :

console.time('process');

const value = expensiveProcess1(); // Returns 42

console.timeLog('process', value);

// Prints "process: 365.227ms 42".

doExpensiveProcess2(value);

console.timeEnd('process');

console.trace([message][, ...args])
message   <any>

...args   <any>

Prints to  stderr  the string  'Trace: ' , followed by the  util.format()  formatted message and stack trace to the current position in the code.

console.trace('Show me');
// Prints: (stack trace will vary based on where trace is called)
//  Trace: Show me
//    at repl:2:9
//    at REPLServer.defaultEval (repl.js:248:27)
//    at bound (domain.js:287:14)
//    at REPLServer.runBound [as eval] (domain.js:300:12)
//    at REPLServer.<anonymous> (repl.js:412:12)
//    at emitOne (events.js:82:20)
//    at REPLServer.emit (events.js:169:7)
//    at REPLServer.Interface._onLine (readline.js:210:10)
//    at REPLServer.Interface._line (readline.js:549:8)
//    at REPLServer.Interface._ttyWrite (readline.js:826:14)

console.warn([data][, ...args])
data   <any>

...args   <any>

The  console.warn()  function is an alias for  console.error() .

Inspector only methods


The following methods are exposed by the V8 engine in the general API but do not display anything unless used in conjunction with the  inspector  ( --inspect  flag).

console.profile([label])
label   <string>

This method does not display anything unless used in the inspector. The  console.profile()  method starts a JavaScript CPU profile with an optional label until  console.profileEnd()  is called. The profile is then added to the Profile panel of the inspector.

console.profile('MyLabel');

// Some code

console.profileEnd('MyLabel');

// Adds the profile 'MyLabel' to the Profiles panel of the inspector.

console.profileEnd([label])
label   <string>

This method does not display anything unless used in the inspector. Stops the current JavaScript CPU profiling session if one has been started and prints the report
to the Profiles panel of the inspector. See  console.profile()  for an example.

If this method is called without a label, the most recently started profile is stopped.

console.timeStamp([label])
label   <string>

This method does not display anything unless used in the inspector. The  console.timeStamp()  method adds an event with the label  'label'  to the Timeline panel of
the inspector.
Path
Stability: 2  - Stable

Source Code:  lib/path.js

The  path  module provides utilities for working with file and directory paths. It can be accessed using:

const path = require('path');

Windows vs. POSIX


The default operation of the  path  module varies based on the operating system on which a Node.js application is running. Specifically, when running on a Windows operating system, the  path  module will assume that Windows-style paths are being used.

So using  path.basename()  might yield different results on POSIX and Windows:

On POSIX:

path.basename('C:\\temp\\myfile.html');

// Returns: 'C:\\temp\\myfile.html'

On Windows:
path.basename('C:\\temp\\myfile.html');

// Returns: 'myfile.html'

To achieve consistent results when working with Windows file paths on any operating system, use  path.win32 :

On POSIX and Windows:

path.win32.basename('C:\\temp\\myfile.html');

// Returns: 'myfile.html'

To achieve consistent results when working with POSIX file paths on any operating system, use  path.posix :

On POSIX and Windows:

path.posix.basename('/tmp/myfile.html');

// Returns: 'myfile.html'

On Windows, Node.js follows the concept of a per-drive working directory. This behavior can be observed when using a drive path without a backslash. For example,  path.resolve('C:\\')  can potentially return a different result than  path.resolve('C:') . For more information, see  this MSDN page .

path.basename(path[, ext])
path   <string>

ext   <string>  An optional file extension

Returns:  <string>

The  path.basename()  method returns the last portion of a  path , similar to the Unix  basename  command. Trailing directory separators are ignored, see  path.sep .
path.basename('/foo/bar/baz/asdf/quux.html');

// Returns: 'quux.html'

path.basename('/foo/bar/baz/asdf/quux.html', '.html');

// Returns: 'quux'

Although Windows usually treats file names, including file extensions, in a case-insensitive manner, this function does not. For
example,  C:\\foo.html  and  C:\\foo.HTML  refer to the same file, but  basename  treats the extension as a case-sensitive string:

path.win32.basename('C:\\foo.html', '.html');

// Returns: 'foo'

path.win32.basename('C:\\foo.HTML', '.html');

// Returns: 'foo.HTML'

A  TypeError  is thrown if  path  is not a string or if  ext  is given and is not a string.

path.delimiter
<string>

Provides the platform-specific path delimiter:

;  for Windows

:  for POSIX

For example, on POSIX:

console.log(process.env.PATH);

// Prints: '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'
process.env.PATH.split(path.delimiter);

// Returns: ['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']

On Windows:

console.log(process.env.PATH);

// Prints: 'C:\Windows\system32;C:\Windows;C:\Program Files\node\'

process.env.PATH.split(path.delimiter);

// Returns ['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\node\\']

path.dirname(path)
path   <string>

Returns:  <string>

The  path.dirname()  method returns the directory name of a  path , similar to the Unix  dirname  command. Trailing directory separators are ignored, see  path.sep .

path.dirname('/foo/bar/baz/asdf/quux');

// Returns: '/foo/bar/baz/asdf'

A  TypeError  is thrown if  path  is not a string.

path.extname(path)
path   <string>

Returns:  <string>
The  path.extname()  method returns the extension of the  path , from the last occurrence of the  .  (period) character to the end of the string in the last portion of the  path .

If there is no  .  in the last portion of the  path , or if there are no  .  characters other than the first character of the basename of  path  (see  path.basename() ), an empty string is returned.

path.extname('index.html');

// Returns: '.html'

path.extname('index.coffee.md');

// Returns: '.md'

path.extname('index.');

// Returns: '.'

path.extname('index');

// Returns: ''

path.extname('.index');

// Returns: ''

path.extname('.index.md');

// Returns: '.md'

A  TypeError  is thrown if  path  is not a string.

path.format(pathObject)
pathObject   <Object>

dir   <string>

root   <string>
base   <string>

name   <string>

ext   <string>

Returns:  <string>

The  path.format()  method returns a path string from an object. This is the opposite of  path.parse() .

When providing properties to the  pathObject  remember that there are combinations where one property has priority over another:

pathObject.root  is ignored if  pathObject.dir  is provided

pathObject.ext  and  pathObject.name  are ignored if  pathObject.base  exists

For example, on POSIX:

// If `dir`, `root` and `base` are provided,
// `${dir}${path.sep}${base}`
// will be returned. `root` is ignored.
path.format({
  root: '/ignored',
  dir: '/home/user/dir',
  base: 'file.txt'
});
// Returns: '/home/user/dir/file.txt'

// `root` will be used if `dir` is not specified.
// If only `root` is provided or `dir` is equal to `root` then the
// platform separator will not be included. `ext` will be ignored.
path.format({
  root: '/',
  base: 'file.txt',
  ext: 'ignored'
});
// Returns: '/file.txt'

// `name` + `ext` will be used if `base` is not specified.
path.format({
  root: '/',
  name: 'file',
  ext: '.txt'
});
// Returns: '/file.txt'

On Windows:

path.format({
  dir: 'C:\\path\\dir',
  base: 'file.txt'
});
// Returns: 'C:\\path\\dir\\file.txt'

path.isAbsolute(path)
path   <string>

Returns:  <boolean>

The  path.isAbsolute()  method determines if  path  is an absolute path.

If the given  path  is a zero-length string,  false  will be returned.

For example, on POSIX:


path.isAbsolute('/foo/bar'); // true

path.isAbsolute('/baz/..'); // true

path.isAbsolute('qux/'); // false

path.isAbsolute('.'); // false

On Windows:

path.isAbsolute('//server'); // true

path.isAbsolute('\\\\server'); // true

path.isAbsolute('C:/foo/..'); // true

path.isAbsolute('C:\\foo\\..'); // true

path.isAbsolute('bar\\baz'); // false

path.isAbsolute('bar/baz'); // false

path.isAbsolute('.'); // false

A  TypeError  is thrown if  path  is not a string.

path.join([...paths])
...paths   <string>  A sequence of path segments

Returns:  <string>

The  path.join()  method joins all given  path  segments together using the platform-specific separator as a delimiter, then normalizes the resulting path.

Zero-length  path  segments are ignored. If the joined path string is a zero-length string then  '.'  will be returned, representing the current working directory.

path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');

// Returns: '/foo/bar/baz/asdf'
path.join('foo', {}, 'bar');

// Throws 'TypeError: Path must be a string. Received {}'

A  TypeError  is thrown if any of the path segments is not a string.

path.normalize(path)
path   <string>

Returns:  <string>

The  path.normalize()  method normalizes the given  path , resolving  '..'  and  '.'  segments.

When multiple, sequential path segment separation characters are found (e.g.  /  on POSIX and either  \  or  /  on Windows), they are replaced by a single instance of the platform-specific path segment separator ( /  on POSIX and  \  on Windows). Trailing separators are preserved.

If the  path  is a zero-length string,  '.'  is returned, representing the current working directory.

For example, on POSIX:

path.normalize('/foo/bar//baz/asdf/quux/..');

// Returns: '/foo/bar/baz/asdf'

On Windows:

path.normalize('C:\\temp\\\\foo\\bar\\..\\');

// Returns: 'C:\\temp\\foo\\'

Since Windows recognizes multiple path separators, both separators will be replaced by instances of the Windows preferred separator ( \ ):
path.win32.normalize('C:////temp\\\\/\\/\\/foo/bar');

// Returns: 'C:\\temp\\foo\\bar'

A  TypeError  is thrown if  path  is not a string.

path.parse(path)
path   <string>

Returns:  <Object>

The  path.parse()  method returns an object whose properties represent significant elements of the  path . Trailing directory separators are ignored, see  path.sep .

The returned object will have the following properties:

dir   <string>

root   <string>

base   <string>

name   <string>

ext   <string>

For example, on POSIX:

path.parse('/home/user/dir/file.txt');
// Returns:
// { root: '/',
//   dir: '/home/user/dir',
//   base: 'file.txt',
//   ext: '.txt',
//   name: 'file' }

┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
"  /    home/user/dir / file  .txt "
└──────┴──────────────┴──────┴─────┘
(All spaces in the "" line should be ignored. They are purely for formatting.)

On Windows:

path.parse('C:\\path\\dir\\file.txt');
// Returns:
// { root: 'C:\\',
//   dir: 'C:\\path\\dir',
//   base: 'file.txt',
//   ext: '.txt',
//   name: 'file' }

┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
" C:\      path\dir   \ file  .txt "
└──────┴──────────────┴──────┴─────┘
(All spaces in the "" line should be ignored. They are purely for formatting.)

A  TypeError  is thrown if  path  is not a string.


path.posix
<Object>

The  path.posix  property provides access to POSIX specific implementations of the  path  methods.

The API is accessible via  require('path').posix  or  require('path/posix') .
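For example, to get POSIX-style joins regardless of the host platform (a minimal sketch):

path.posix.join('foo', 'bar', 'baz');
// Returns: 'foo/bar/baz', even when run on Windows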

path.relative(from, to)
from   <string>

to   <string>

Returns:  <string>

The  path.relative()  method returns the relative path from  from  to  to  based on the current working directory. If  from  and  to  each resolve to the same path (after calling  path.resolve()  on each), a zero-length string is returned.

If a zero-length string is passed as  from  or  to , the current working directory will be used instead of the zero-length strings.

For example, on POSIX:

path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb');

// Returns: '../../impl/bbb'

On Windows:

path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb');

// Returns: '..\\..\\impl\\bbb'
A  TypeError  is thrown if either  from  or  to  is not a string.

path.resolve([...paths])
...paths   <string>  A sequence of paths or path segments

Returns:  <string>

The  path.resolve()  method resolves a sequence of paths or path segments into an absolute path.

The given sequence of paths is processed from right to left, with each subsequent  path  prepended until an absolute path is constructed. For instance, given the sequence of path segments:  /foo ,  /bar ,  baz , calling  path.resolve('/foo', '/bar', 'baz')  would return  /bar/baz  because  'baz'  is not an absolute path but  '/bar' + '/' + 'baz'  is.

If, after processing all given  path  segments, an absolute path has not yet been generated, the current working directory is used.

The resulting path is normalized and trailing slashes are removed unless the path is resolved to the root directory.

Zero-length  path  segments are ignored.

If no  path  segments are passed,  path.resolve()  will return the absolute path of the current working directory.

path.resolve('/foo/bar', './baz');

// Returns: '/foo/bar/baz'

path.resolve('/foo/bar', '/tmp/file/');

// Returns: '/tmp/file'

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif');

// If the current working directory is /home/myself/node,

// this returns '/home/myself/node/wwwroot/static_files/gif/image.gif'


A  TypeError  is thrown if any of the arguments is not a string.

path.sep
<string>

Provides the platform-specific path segment separator:

\  on Windows

/  on POSIX

For example, on POSIX:

'foo/bar/baz'.split(path.sep);

// Returns: ['foo', 'bar', 'baz']

On Windows:

'foo\\bar\\baz'.split(path.sep);

// Returns: ['foo', 'bar', 'baz']

On Windows, both the forward slash ( / ) and backward slash ( \ ) are accepted as path segment separators; however, the  path  methods only add backward slashes ( \ ).

path.toNamespacedPath(path)
path   <string>

Returns:  <string>

On Windows systems only, returns an equivalent  namespace-prefixed path  for the given  path . If  path  is not a string,  path  will be returned without modifications.
This method is meaningful only on Windows systems. On POSIX systems, the method is non-operational and always returns  path  without modifications.
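A brief sketch of the expected behavior (the sample path is illustrative):

path.toNamespacedPath('C:\\foo\\bar');
// Returns: '\\\\?\\C:\\foo\\bar' on Windows
// Returns: 'C:\\foo\\bar' unchanged on POSIX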

path.win32
<Object>

The  path.win32  property provides access to Windows-specific implementations of the  path  methods.

The API is accessible via  require('path').win32  or  require('path/win32') .



Timers
Stability: 2 - Stable

Source Code: lib/timers.js

The timer module exposes a global API for scheduling functions to be called at some future period of time. Because the timer functions are globals, there is no need to call
require('timers') to use the API.

The timer functions within Node.js implement a similar API as the timers API provided by Web Browsers but use a different internal implementation that is built around the Node.js Event
Loop .

Class: Immediate
This object is created internally and is returned from setImmediate() . It can be passed to clearImmediate() in order to cancel the scheduled actions.

By default, when an immediate is scheduled, the Node.js event loop will continue running as long as the immediate is active. The Immediate object returned by setImmediate() exports
both immediate.ref() and immediate.unref() functions that can be used to control this default behavior.

immediate.hasRef()
Returns: <boolean>

If true, the Immediate object will keep the Node.js event loop active.

immediate.ref()
Returns: <Immediate> a reference to immediate

When called, requests that the Node.js event loop not exit so long as the Immediate is active. Calling immediate.ref() multiple times will have no effect.

By default, all Immediate objects are "ref'ed", making it normally unnecessary to call immediate.ref() unless immediate.unref() had been called previously.

immediate.unref()
Returns: <Immediate> a reference to immediate

When called, the active Immediate object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit before the
Immediate object's callback is invoked. Calling immediate.unref() multiple times will have no effect.

Class: Timeout
This object is created internally and is returned from setTimeout() and setInterval() . It can be passed to either clearTimeout() or clearInterval() in order to cancel the scheduled
actions.

By default, when a timer is scheduled using either setTimeout() or setInterval() , the Node.js event loop will continue running as long as the timer is active. Each of the Timeout objects
returned by these functions export both timeout.ref() and timeout.unref() functions that can be used to control this default behavior.

timeout.hasRef()
Returns: <boolean>

If true, the Timeout object will keep the Node.js event loop active.

timeout.ref()
Returns: <Timeout> a reference to timeout

When called, requests that the Node.js event loop not exit so long as the Timeout is active. Calling timeout.ref() multiple times will have no effect.

By default, all Timeout objects are "ref'ed", making it normally unnecessary to call timeout.ref() unless timeout.unref() had been called previously.

timeout.refresh()
Returns: <Timeout> a reference to timeout

Sets the timer's start time to the current time, and reschedules the timer to call its callback at the previously specified duration adjusted to the current time. This is useful for refreshing a
timer without allocating a new JavaScript object.

Using this on a timer that has already called its callback will reactivate the timer.
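A minimal sketch: restart a pending timeout so its full delay begins again from the moment refresh() is called (the delays here are arbitrary):

const timer = setTimeout(() => console.log('fired after refresh'), 1000);

// 500 ms in, restart the countdown so the callback fires roughly
// 1500 ms after the script started instead of roughly 1000 ms:
setTimeout(() => timer.refresh(), 500);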

timeout.unref()
Returns: <Timeout> a reference to timeout

When called, the active Timeout object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit before the
Timeout object's callback is invoked. Calling timeout.unref() multiple times will have no effect.

Calling timeout.unref() creates an internal timer that will wake the Node.js event loop. Creating too many of these can adversely impact performance of the Node.js application.
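For example, a background heartbeat that should never keep the process alive on its own (a sketch; the interval length is arbitrary):

const heartbeat = setInterval(() => console.log('still running'), 60000);
heartbeat.unref();
// If nothing else keeps the event loop busy, the process may now exit
// even though the interval is still scheduled.
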
timeout[Symbol.toPrimitive]()
Returns: <integer> a number that can be used to reference this timeout

Coerce a Timeout to a primitive. The primitive can be used to clear the Timeout . The primitive can only be used in the same thread where the timeout was created. Therefore, to use it
across worker_threads it must first be passed to the correct thread. This allows enhanced compatibility with browser setTimeout() and setInterval() implementations.

Scheduling timers
A timer in Node.js is an internal construct that calls a given function after a certain period of time. When a timer's function is called varies depending on which method was used to create the
timer and what other work the Node.js event loop is doing.

setImmediate(callback[, ...args])
callback <Function> The function to call at the end of this turn of the Node.js Event Loop

...args <any> Optional arguments to pass when the callback is called.

Returns: <Immediate> for use with clearImmediate()

Schedules the "immediate" execution of the callback after I/O events' callbacks.

When multiple calls to setImmediate() are made, the callback functions are queued for execution in the order in which they are created. The entire callback queue is processed every
event loop iteration. If an immediate timer is queued from inside an executing callback, that timer will not be triggered until the next event loop iteration.

If callback is not a function, a TypeError will be thrown.

This method has a custom variant for promises that is available using util.promisify() :

const util = require('util');

const setImmediatePromise = util.promisify(setImmediate);

setImmediatePromise('foobar').then((value) => {
  // value === 'foobar' (passing values is optional)
  // This is executed after all I/O callbacks.
});

// Or with async function
async function timerExample() {
  console.log('Before I/O callbacks');
  await setImmediatePromise();
  console.log('After I/O callbacks');
}
timerExample();

setInterval(callback[, delay[, ...args]])


callback <Function> The function to call when the timer elapses.

delay <number> The number of milliseconds to wait before calling the callback . Default: 1 .

...args <any> Optional arguments to pass when the callback is called.

Returns: <Timeout> for use with clearInterval()

Schedules repeated execution of callback every delay milliseconds.

When delay is larger than 2147483647 or less than 1 , the delay will be set to 1 . Non-integer delays are truncated to an integer.

If callback is not a function, a TypeError will be thrown.
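
A minimal sketch: repeat a callback once per second and cancel it after five runs (the count and delay are arbitrary):

let ticks = 0;
const timer = setInterval(() => {
  ticks++;
  console.log('tick %d', ticks);
  if (ticks === 5)
    clearInterval(timer);
}, 1000);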

setTimeout(callback[, delay[, ...args]])


callback <Function> The function to call when the timer elapses.

delay <number> The number of milliseconds to wait before calling the callback . Default: 1 .

...args <any> Optional arguments to pass when the callback is called.

Returns: <Timeout> for use with clearTimeout()

Schedules execution of a one-time callback after delay milliseconds.

The callback will likely not be invoked in precisely delay milliseconds. Node.js makes no guarantees about the exact timing of when callbacks will fire, nor of their ordering. The callback
will be called as close as possible to the time specified.

When delay is larger than 2147483647 or less than 1 , the delay will be set to 1 . Non-integer delays are truncated to an integer.

If callback is not a function, a TypeError will be thrown.

This method has a custom variant for promises that is available using util.promisify() :

const util = require('util');

const setTimeoutPromise = util.promisify(setTimeout);

setTimeoutPromise(40, 'foobar').then((value) => {
  // value === 'foobar' (passing values is optional)
  // This is executed after about 40 milliseconds.
});

Cancelling timers
The setImmediate() , setInterval() , and setTimeout() methods each return objects that represent the scheduled timers. These can be used to cancel the timer and prevent it from
triggering.

For the promisified variants of setImmediate() and setTimeout() , an AbortController may be used to cancel the timer. When canceled, the returned Promises will be rejected with an
'AbortError' .

For setImmediate() :

const util = require('util');

const setImmediatePromise = util.promisify(setImmediate);

const ac = new AbortController();
const signal = ac.signal;

setImmediatePromise('foobar', { signal })
  .then(console.log)
  .catch((err) => {
    if (err.message === 'AbortError')
      console.log('The immediate was aborted');
  });

ac.abort();

For setTimeout() :

const util = require('util');

const setTimeoutPromise = util.promisify(setTimeout);

const ac = new AbortController();
const signal = ac.signal;

setTimeoutPromise(1000, 'foobar', { signal })
  .then(console.log)
  .catch((err) => {
    if (err.message === 'AbortError')
      console.log('The timeout was aborted');
  });

ac.abort();

clearImmediate(immediate)
immediate <Immediate> An Immediate object as returned by setImmediate() .

Cancels an Immediate object created by setImmediate() .

clearInterval(timeout)
timeout <Timeout> A Timeout object as returned by setInterval() .

Cancels a Timeout object created by setInterval() .

clearTimeout(timeout)
timeout <Timeout> A Timeout object as returned by setTimeout() .

Cancels a Timeout object created by setTimeout() .

Timers Promises API

Stability: 1 - Experimental

The timers/promises API provides an alternative set of timer functions that return Promise objects. The API is accessible via require('timers/promises') .

const timersPromises = require('timers/promises');

timersPromises.setTimeout([delay[, value[, options]]])


delay <number> The number of milliseconds to wait before resolving the Promise . Default: 1 .

value <any> A value with which the Promise is resolved.


options <Object>
ref <boolean> Set to false to indicate that the scheduled Timeout should not require the Node.js event loop to remain active. Default: true .

signal <AbortSignal> An optional AbortSignal that can be used to cancel the scheduled Timeout .
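
A minimal sketch of the promisified form (assumes a Node.js version where timers/promises is available):

const { setTimeout: setTimeoutPromise } = require('timers/promises');

(async () => {
  const value = await setTimeoutPromise(100, 'done');
  console.log(value); // Prints: 'done', after roughly 100 ms
})();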

timersPromises.setImmediate([value[, options]])
value <any> A value with which the Promise is resolved.

options <Object>
ref <boolean> Set to false to indicate that the scheduled Immediate should not require the Node.js event loop to remain active. Default: true .

signal <AbortSignal> An optional AbortSignal that can be used to cancel the scheduled Immediate .

timersPromises.setInterval([delay[, value[, options]]])


Returns an async iterator that generates values in an interval of delay ms.

delay <number> The number of milliseconds to wait between iterations. Default: 1 .

value <any> The value that the iterator yields on each iteration.

options <Object>
ref <boolean> Set to false to indicate that the scheduled Timeout between iterations should not require the Node.js event loop to remain active. Default: true .

signal <AbortSignal> An optional AbortSignal that can be used to cancel the scheduled Timeout between operations.

(async function() {
  const { setInterval } = require('timers/promises');
  const interval = 100;
  for await (const startTime of setInterval(interval, Date.now())) {
    const now = Date.now();
    console.log(now);
    if ((now - startTime) > 1000)
      break;
  }
  console.log(Date.now());
})();

Stream
Stability: 2 - Stable

Source Code: lib/stream.js

A stream is an abstract interface for working with streaming data in Node.js. The stream module provides an API for implementing the stream interface.

There are many stream objects provided by Node.js. For instance, a request to an HTTP server and process.stdout are both stream instances.

Streams can be readable, writable, or both. All streams are instances of EventEmitter .

To access the stream module:

const stream = require('stream');

The stream module is useful for creating new types of stream instances. It is usually not necessary to use the stream module to consume streams.

Organization of this document


This document contains two primary sections and a third section for notes. The first section explains how to use existing streams within an application. The second section explains how to
create new types of streams.

Types of streams
There are four fundamental stream types within Node.js:

Writable : streams to which data can be written (for example, fs.createWriteStream() ).

Readable : streams from which data can be read (for example, fs.createReadStream() ).

Duplex : streams that are both Readable and Writable (for example, net.Socket ).
Transform : Duplex streams that can modify or transform the data as it is written and read (for example, zlib.createDeflate() ).

Additionally, this module includes the utility functions stream.pipeline() , stream.finished() , stream.Readable.from() and stream.addAbortSignal() .

Streams Promises API


The stream/promises API provides an alternative set of asynchronous utility functions for streams that return Promise objects rather than using callbacks. The API is accessible via
require('stream/promises') or require('stream').promises .

Object mode
All streams created by Node.js APIs operate exclusively on strings and Buffer (or Uint8Array ) objects. It is possible, however, for stream implementations to work with other types of
JavaScript values (with the exception of null , which serves a special purpose within streams). Such streams are considered to operate in "object mode".

Stream instances are switched into object mode using the objectMode option when the stream is created. Attempting to switch an existing stream into object mode is not safe.
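
A small sketch of a stream operating in object mode ( Readable.from()  defaults to  objectMode: true ):

const { Readable } = require('stream');

const objects = Readable.from([{ id: 1 }, { id: 2 }]);
objects.on('data', (obj) => {
  console.log(obj.id); // Each chunk is a plain JavaScript object, not a Buffer
});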

Buffering
Both Writable and Readable streams will store data in an internal buffer.

The amount of data potentially buffered depends on the highWaterMark option passed into the stream's constructor. For normal streams, the highWaterMark option specifies a total
number of bytes . For streams operating in object mode, the highWaterMark specifies a total number of objects.

Data is buffered in Readable streams when the implementation calls stream.push(chunk) . If the consumer of the Stream does not call stream.read() , the data will sit in the internal queue
until it is consumed.

Once the total size of the internal read buffer reaches the threshold specified by highWaterMark , the stream will temporarily stop reading data from the underlying resource until the data
currently buffered can be consumed (that is, the stream will stop calling the internal readable._read() method that is used to fill the read buffer).

Data is buffered in Writable streams when the writable.write(chunk) method is called repeatedly. While the total size of the internal write buffer is below the threshold set by
highWaterMark , calls to writable.write() will return true . Once the size of the internal buffer reaches or exceeds the highWaterMark , false will be returned.

A key goal of the stream API, particularly the stream.pipe() method, is to limit the buffering of data to acceptable levels such that sources and destinations of differing speeds will not
overwhelm the available memory.

The highWaterMark option is a threshold, not a limit: it dictates the amount of data that a stream buffers before it stops asking for more data. It does not enforce a strict memory limitation in
general. Specific stream implementations may choose to enforce stricter limits but doing so is optional.

Because Duplex and Transform streams are both Readable and Writable , each maintains two separate internal buffers used for reading and writing, allowing each side to operate
independently of the other while maintaining an appropriate and efficient flow of data. For example, net.Socket instances are Duplex streams whose Readable side allows consumption of
data received from the socket and whose Writable side allows writing data to the socket. Because data may be written to the socket at a faster or slower rate than data is received, each side
should operate (and buffer) independently of the other.
The mechanics of the internal buffering are an internal implementation detail and may be changed at any time. However, for certain advanced implementations, the internal buffers can be
retrieved using writable.writableBuffer or readable.readableBuffer . Use of these undocumented properties is discouraged.
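
To see the  highWaterMark  threshold in action, here is a sketch with a deliberately tiny buffer (the 4-byte limit and 10 ms delay are arbitrary):

const { Writable } = require('stream');

const slow = new Writable({
  highWaterMark: 4, // bytes; far below the default of 16384
  write(chunk, encoding, callback) {
    setTimeout(callback, 10); // simulate a slow destination
  }
});

console.log(slow.write('abc'));   // true: internal buffer still below highWaterMark
console.log(slow.write('defgh')); // false: buffer now at or above highWaterMark
slow.once('drain', () => console.log('Buffer drained; safe to write again.'));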

API for stream consumers


Almost all Node.js applications, no matter how simple, use streams in some manner. The following is an example of using streams in a Node.js application that implements an HTTP server:

const http = require('http');

const server = http.createServer((req, res) => {
  // `req` is an http.IncomingMessage, which is a readable stream.
  // `res` is an http.ServerResponse, which is a writable stream.

  let body = '';
  // Get the data as utf8 strings.
  // If an encoding is not set, Buffer objects will be received.
  req.setEncoding('utf8');

  // Readable streams emit 'data' events once a listener is added.
  req.on('data', (chunk) => {
    body += chunk;
  });

  // The 'end' event indicates that the entire body has been received.
  req.on('end', () => {
    try {
      const data = JSON.parse(body);
      // Write back something interesting to the user:
      res.write(typeof data);
      res.end();
    } catch (er) {
      // uh oh! bad json!
      res.statusCode = 400;
      return res.end(`error: ${er.message}`);
    }
  });
});

server.listen(1337);

// $ curl localhost:1337 -d "{}"
// object
// $ curl localhost:1337 -d "\"foo\""
// string
// $ curl localhost:1337 -d "not json"
// error: Unexpected token o in JSON at position 1

Writable streams (such as res in the example) expose methods such as write() and end() that are used to write data onto the stream.

Readable streams use the EventEmitter API for notifying application code when data is available to be read off the stream. That available data can be read from the stream in multiple
ways.

Both Writable and Readable streams use the EventEmitter API in various ways to communicate the current state of the stream.

Duplex and Transform streams are both Writable and Readable .

Applications that are either writing data to or consuming data from a stream are not required to implement the stream interfaces directly and will generally have no reason to call
require('stream') .

Developers wishing to implement new types of streams should refer to the section API for stream implementers .

Writable streams
Writable streams are an abstraction for a destination to which data is written.

Examples of Writable streams include:

HTTP requests, on the client


HTTP responses, on the server
fs write streams
zlib streams
crypto streams
TCP sockets
child process stdin
process.stdout , process.stderr

Some of these examples are actually Duplex streams that implement the Writable interface.

All Writable streams implement the interface defined by the stream.Writable class.

While specific instances of Writable streams may differ in various ways, all Writable streams follow the same fundamental usage pattern as illustrated in the example below:
const myStream = getWritableStreamSomehow();
myStream.write('some data');
myStream.write('some more data');
myStream.end('done writing data');

Class: stream.Writable

Event: 'close'
The 'close' event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted,
and no further computation will occur.

A Writable stream will always emit the 'close' event if it is created with the emitClose option.

Event: 'drain'
If a call to stream.write(chunk) returns false , the 'drain' event will be emitted when it is appropriate to resume writing data to the stream.

// Write the data to the supplied writable stream one million times.
// Be attentive to back-pressure.
function writeOneMillionTimes(writer, data, encoding, callback) {
  let i = 1000000;
  write();
  function write() {
    let ok = true;
    do {
      i--;
      if (i === 0) {
        // Last time!
        writer.write(data, encoding, callback);
      } else {
        // See if we should continue, or wait.
        // Don't pass the callback, because we're not done yet.
        ok = writer.write(data, encoding);
      }
    } while (i > 0 && ok);
    if (i > 0) {
      // Had to stop early!
      // Write some more once it drains.
      writer.once('drain', write);
    }
  }
}

Event: 'error'
<Error>

The 'error' event is emitted if an error occurred while writing or piping data. The listener callback is passed a single Error argument when called.

The stream is closed when the 'error' event is emitted unless the autoDestroy option was set to false when creating the stream.

After 'error' , no further events other than 'close' should be emitted (including 'error' events).

Event: 'finish'
The 'finish' event is emitted after the stream.end() method has been called, and all data has been flushed to the underlying system.

const writer = getWritableStreamSomehow();

for (let i = 0; i < 100; i++) {
  writer.write(`hello, #${i}!\n`);
}
writer.on('finish', () => {
  console.log('All writes are now complete.');
});
writer.end('This is the end\n');

Event: 'pipe'
src <stream.Readable> source stream that is piping to this writable

The 'pipe' event is emitted when the stream.pipe() method is called on a readable stream, adding this writable to its set of destinations.

const writer = getWritableStreamSomehow();
const reader = getReadableStreamSomehow();
writer.on('pipe', (src) => {
  console.log('Something is piping into the writer.');
  assert.equal(src, reader);
});
reader.pipe(writer);
Event: 'unpipe'
src <stream.Readable> The source stream that unpiped this writable

The 'unpipe' event is emitted when the stream.unpipe() method is called on a Readable stream, removing this Writable from its set of destinations.

This is also emitted in case this Writable stream emits an error when a Readable stream pipes into it.

const writer = getWritableStreamSomehow();
const reader = getReadableStreamSomehow();
writer.on('unpipe', (src) => {
  console.log('Something has stopped piping into the writer.');
  assert.equal(src, reader);
});
reader.pipe(writer);
reader.unpipe(writer);

writable.cork()
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the stream.uncork() or stream.end() methods are called.

The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them
to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev() , if present. This prevents a head-
of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing
writable._writev() may have an adverse effect on throughput.

See also: writable.uncork() , writable._writev() .

writable.destroy([error])
error <Error> Optional, an error to emit with 'error' event.

Returns: <this>

Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false ). After this call, the writable stream has ended and subsequent calls to
write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may
trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error' .

Implementors should not override this method, but instead implement writable._destroy() .

writable.destroyed
<boolean>

Is true after writable.destroy() has been called.

writable.end([chunk[, encoding]][, callback])


chunk <string> | <Buffer> | <Uint8Array> | <any> Optional data to write. For streams not operating in object mode, chunk must be a string, Buffer or Uint8Array . For object
mode streams, chunk may be any JavaScript value other than null .

encoding <string> The encoding if chunk is a string

callback <Function> Callback for when the stream is finished.

Returns: <this>

Calling the writable.end() method signals that no more data will be written to the Writable . The optional chunk and encoding arguments allow one final additional chunk of data to be
written immediately before closing the stream.

Calling the stream.write() method after calling stream.end() will raise an error.

// Write 'hello, ' and then end with 'world!'.


const fs = require('fs');
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!

writable.setDefaultEncoding(encoding)
encoding <string> The new default encoding

Returns: <this>

The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

writable.uncork()
The writable.uncork() method flushes all data buffered since stream.cork() was called.

When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, it is recommended that calls to writable.uncork() be deferred using
process.nextTick() . Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());

If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});

See also: writable.cork() .

writable.writable
<boolean>

Is true if it is safe to call writable.write() , which means the stream has not been destroyed, errored or ended.

writable.writableEnded
<boolean>

Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.

writable.writableCorked
<integer>

Number of times writable.uncork() needs to be called in order to fully uncork the stream.

writable.writableFinished
<boolean>

Is set to true immediately before the 'finish' event is emitted.

writable.writableHighWaterMark
<number>

Return the value of highWaterMark passed when creating this Writable .

writable.writableLength
<number>

This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark .

writable.writableNeedDrain
<boolean>

Is true if the stream's buffer has been full and the stream will emit 'drain' .

writable.writableObjectMode
<boolean>

Getter for the property objectMode of a given Writable stream.

writable.write(chunk[, encoding][, callback])


chunk <string> | <Buffer> | <Uint8Array> | <any> Optional data to write. For streams not operating in object mode, chunk must be a string, Buffer or Uint8Array . For object
mode streams, chunk may be any JavaScript value other than null .
encoding <string> | <null> The encoding, if chunk is a string. Default: 'utf8'

callback <Function> Callback for when this chunk of data is flushed.

Returns: <boolean> false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true .

The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback may or may not be
called with the error as its first argument. To reliably detect write errors, add a listener for the 'error' event. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk . If false is returned, further attempts to
write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk , and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the
'drain' event will be emitted. It is recommended that once write() returns false, no more chunks be written until the 'drain' event is emitted. While calling write() on a stream that is
not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will
cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain
if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform , because the Transform streams are paused by default until they are piped or a 'data' or
'readable' event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use stream.pipe() . However, if calling write() is
preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});

A Writable stream in object mode will always ignore the encoding argument.

Readable streams
Readable streams are an abstraction for a source from which data is consumed.

Examples of Readable streams include:

HTTP responses, on the client


HTTP requests, on the server
fs read streams
zlib streams
crypto streams
TCP sockets
child process stdout and stderr
process.stdin

All Readable streams implement the interface defined by the stream.Readable class.

Two reading modes


Readable streams effectively operate in one of two modes: flowing and paused. These modes are separate from object mode . A Readable stream can be in object mode or not, regardless
of whether it is in flowing mode or paused mode.
In flowing mode, data is read from the underlying system automatically and provided to an application as quickly as possible using events via the EventEmitter interface.

In paused mode, the stream.read() method must be called explicitly to read chunks of data from the stream.

All Readable streams begin in paused mode but can be switched to flowing mode in one of the following ways:

Adding a 'data' event handler.

Calling the stream.resume() method.

Calling the stream.pipe() method to send the data to a Writable .

The Readable can switch back to paused mode using one of the following:

If there are no pipe destinations, by calling the stream.pause() method.

If there are pipe destinations, by removing all pipe destinations. Multiple pipe destinations may be removed by calling the stream.unpipe() method.

The important concept to remember is that a Readable will not generate data until a mechanism for either consuming or ignoring that data is provided. If the consuming mechanism is
disabled or taken away, the Readable will attempt to stop generating the data.

For backward compatibility reasons, removing 'data' event handlers will not automatically pause the stream. Also, if there are piped destinations, then calling stream.pause() will not
guarantee that the stream will remain paused once those destinations drain and ask for more data.

If a Readable is switched into flowing mode and there are no consumers available to handle the data, that data will be lost. This can occur, for instance, when the readable.resume()
method is called without a listener attached to the 'data' event, or when a 'data' event handler is removed from the stream.

Adding a 'readable' event handler automatically makes the stream stop flowing, and the data has to be consumed via readable.read() . If the 'readable' event handler is removed, then
the stream will start flowing again if there is a 'data' event handler.

Three states
The "two modes" of operation for a Readable stream are a simplified abstraction for the more complicated internal state management that is happening within the Readable stream
implementation.

Specifically, at any given point in time, every Readable is in one of three possible states:

readable.readableFlowing === null

readable.readableFlowing === false

readable.readableFlowing === true

When readable.readableFlowing is null , no mechanism for consuming the stream's data is provided. Therefore, the stream will not generate data. While in this state, attaching a listener
for the 'data' event, calling the readable.pipe() method, or calling the readable.resume() method will switch readable.readableFlowing to true , causing the Readable to begin
actively emitting events as data is generated.
Calling readable.pause() , readable.unpipe() , or receiving backpressure will cause the readable.readableFlowing to be set as false , temporarily halting the flowing of events but not
halting the generation of data. While in this state, attaching a listener for the 'data' event will not switch readable.readableFlowing to true .

const { PassThrough, Writable } = require('stream');


const pass = new PassThrough();
const writable = new Writable();

pass.pipe(writable);
pass.unpipe(writable);
// readableFlowing is now false.

pass.on('data', (chunk) => { console.log(chunk.toString()); });


pass.write('ok'); // Will not emit 'data'.
pass.resume(); // Must be called to make stream emit 'data'.

While readable.readableFlowing is false , data may be accumulating within the stream's internal buffer.

Choose one API style


The Readable stream API evolved across multiple Node.js versions and provides multiple methods of consuming stream data. In general, developers should choose one of the methods of
consuming data and should never use multiple methods to consume data from a single stream. Specifically, using a combination of on('data') , on('readable') , pipe() , or async iterators
could lead to unintuitive behavior.

Use of the readable.pipe() method is recommended for most users as it has been implemented to provide the easiest way of consuming stream data. Developers that require more fine-
grained control over the transfer and generation of data can use the EventEmitter and readable.on('readable') / readable.read() or the readable.pause() / readable.resume() APIs.

Class: stream.Readable

Event: 'close'
The 'close' event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted,
and no further computation will occur.

A Readable stream will always emit the 'close' event if it is created with the emitClose option.

Event: 'data'
chunk <Buffer> | <string> | <any> The chunk of data. For streams that are not operating in object mode, the chunk will be either a string or Buffer . For streams that are in object
mode, the chunk can be any JavaScript value other than null .
The 'data' event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer. This may occur whenever the stream is switched into flowing mode by calling readable.pipe() , readable.resume() , or by attaching a listener callback to the 'data' event. The 'data' event will also be emitted whenever the readable.read() method is called and a chunk of data is available to be returned.

Attaching a 'data' event listener to a stream that has not been explicitly paused will switch the stream into flowing mode. Data will then be passed as soon as it is available.

The listener callback will be passed the chunk of data as a string if a default encoding has been specified for the stream using the readable.setEncoding() method; otherwise the data will
be passed as a Buffer .

const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
});

Event: 'end'
The 'end' event is emitted when there is no more data to be consumed from the stream.

The 'end' event will not be emitted unless the data is completely consumed. This can be accomplished by switching the stream into flowing mode, or by calling stream.read() repeatedly
until all data has been consumed.

const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
});
readable.on('end', () => {
  console.log('There will be no more data.');
});

Event: 'error'
<Error>

The 'error' event may be emitted by a Readable implementation at any time. Typically, this may occur if the underlying stream is unable to generate data due to an underlying internal
failure, or when a stream implementation attempts to push an invalid chunk of data.

The listener callback will be passed a single Error object.

Event: 'pause'
The 'pause' event is emitted when stream.pause() is called and readableFlowing is not false .

Event: 'readable'
The 'readable' event is emitted when there is data available to be read from the stream. In some cases, attaching a listener for the 'readable' event will cause some amount of data to be
read into an internal buffer.

const readable = getReadableStreamSomehow();

readable.on('readable', function() {
  // There is some data to read now.
  let data;

  while (data = this.read()) {
    console.log(data);
  }
});

The 'readable' event will also be emitted once the end of the stream data has been reached but before the 'end' event is emitted.

Effectively, the 'readable' event indicates that the stream has new information: either new data is available or the end of the stream has been reached. In the former case, stream.read()
will return the available data. In the latter case, stream.read() will return null . For instance, in the following example, foo.txt is an empty file:

const fs = require('fs');
const rr = fs.createReadStream('foo.txt');
rr.on('readable', () => {
  console.log(`readable: ${rr.read()}`);
});
rr.on('end', () => {
  console.log('end');
});

The output of running this script is:

$ node test.js
readable: null
end

In general, the readable.pipe() and 'data' event mechanisms are easier to understand than the 'readable' event. However, handling 'readable' might result in increased throughput.
If both 'readable' and 'data' are used at the same time, 'readable' takes precedence in controlling the flow, i.e. 'data' will be emitted only when stream.read() is called. The
readableFlowing property would become false . If there are 'data' listeners when 'readable' is removed, the stream will start flowing, i.e. 'data'  events will be emitted without
calling .resume() .

Event: 'resume'
The 'resume' event is emitted when stream.resume() is called and readableFlowing is not true .
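
A small sketch showing both lifecycle events, again using the hypothetical getReadableStreamSomehow() helper:

const readable = getReadableStreamSomehow();

readable.on('pause', () => console.log("'pause' emitted"));
readable.on('resume', () => console.log("'resume' emitted"));

readable.resume(); // readableFlowing was not true, so 'resume' is emitted.
readable.pause();  // The stream is now flowing, so 'pause' is emitted.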

readable.destroy([error])
error <Error> Error which will be passed as payload in 'error' event

Returns: <this>

Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false ). After this call, the readable stream will release any internal resources
and subsequent calls to push() will be ignored.

Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error' .

Implementors should not override this method, but instead implement readable._destroy() .
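
A minimal sketch of the resulting event sequence, using the hypothetical getReadableStreamSomehow() helper:

const readable = getReadableStreamSomehow();

readable.on('error', (err) => console.error('Destroyed with:', err.message));
readable.on('close', () => console.log('Resources released.'));

// Emits 'error' with the supplied error, then 'close' (emitClose defaults to true).
readable.destroy(new Error('no longer needed'));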

readable.destroyed
<boolean>

Is true after readable.destroy() has been called.

readable.isPaused()
Returns: <boolean>

The readable.isPaused() method returns the current operating state of the Readable . This is used primarily by the mechanism that underlies the readable.pipe() method. In most
typical cases, there will be no reason to use this method directly.

const stream = require('stream');
const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false

readable.pause()
Returns: <this>

The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the
internal buffer.

const readable = getReadableStreamSomehow();

readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});

The readable.pause() method has no effect if there is a 'readable' event listener.

readable.pipe(destination[, options])
destination <stream.Writable> The destination for writing data

options <Object> Pipe options


end <boolean> End the writer when the reader ends. Default: true .

Returns: <stream.Writable> The destination, allowing for a chain of pipes if it is a Duplex or a Transform stream

The readable.pipe() method attaches a Writable stream to the readable , causing it to switch automatically into flowing mode and push all of its data to the attached Writable . The flow
of data will be automatically managed so that the destination Writable stream is not overwhelmed by a faster Readable stream.

The following example pipes all of the data from the readable into a file named file.txt :

const fs = require('fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt'.
readable.pipe(writable);

It is possible to attach multiple Writable streams to a single Readable stream.

The readable.pipe() method returns a reference to the destination stream making it possible to set up chains of piped streams:

const fs = require('fs');
const zlib = require('zlib');
const r = fs.createReadStream('file.txt');
const z = zlib.createGzip();
const w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);

By default, stream.end() is called on the destination Writable stream when the source Readable stream emits 'end' , so that the destination is no longer writable. To disable this default
behavior, the end option can be passed as false , causing the destination stream to remain open:

reader.pipe(writer, { end: false });

reader.on('end', () => {
  writer.end('Goodbye\n');
});

One important caveat is that if the Readable stream emits an error during processing, the Writable destination is not closed automatically. If an error occurs, it will be necessary to manually
close each stream in order to prevent memory leaks.

The process.stderr and process.stdout Writable streams are never closed until the Node.js process exits, regardless of the specified options.
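
One possible shape of that manual cleanup (a sketch only; in practice the stream.pipeline() utility described below handles this automatically):

const fs = require('fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');

readable.pipe(writable);
readable.on('error', (err) => {
  // The destination is not closed automatically; tear it down by hand.
  writable.destroy(err);
});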

readable.read([size])
size <number> Optional argument to specify how much data to read.

Returns: <string> | <Buffer> | <null> | <any>

The readable.read() method pulls some data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data will be returned as a Buffer
object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of
the data remaining in the internal buffer will be returned.

If the size argument is not specified, all of the data contained in the internal buffer will be returned.

The size argument must be less than or equal to 1 GiB.

The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is
fully drained.

const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});

Each call to readable.read() returns a chunk of data, or null . The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a
large file, .read() may return null , having consumed all buffered content so far, while more data that has not yet been buffered is still to come. In this case a new 'readable' event will be
emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.

Therefore to read a file's whole contents from a readable , it is necessary to collect chunks across multiple 'readable' events:

const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});

A Readable stream in object mode will always return a single item from a call to readable.read(size) , regardless of the value of the size argument.

If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

Calling stream.read([size]) after the 'end' event has been emitted will return null . No runtime error will be raised.

readable.readable
<boolean>

Is true if it is safe to call readable.read() , which means the stream has not been destroyed or emitted 'error' or 'end' .

readable.readableEncoding
<null> | <string>

Getter for the property encoding of a given Readable stream. The encoding property can be set using the readable.setEncoding() method.

readable.readableEnded
<boolean>

Becomes true when the 'end' event is emitted.

readable.readableFlowing
<boolean>

This property reflects the current state of a Readable stream as described in the Three states section.

readable.readableHighWaterMark
<number>

Returns the value of highWaterMark passed when creating this Readable .

readable.readableLength
<number>

This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark .

readable.readableObjectMode
<boolean>

Getter for the property objectMode of a given Readable stream.
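
A short introspection sketch tying these getters together; the values shown assume a fresh fs read stream created with default options:

const fs = require('fs');
const rs = fs.createReadStream('file.txt');

console.log(rs.readableEncoding);      // null (no encoding assigned yet)
console.log(rs.readableHighWaterMark); // 65536 for fs streams by default
console.log(rs.readableObjectMode);    // false
console.log(rs.readableLength);        // 0 until data has been buffered
console.log(rs.readableEnded);         // false until 'end' has been emitted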

readable.resume()
Returns: <this>

The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });

The readable.resume() method has no effect if there is a 'readable' event listener.

readable.setEncoding(encoding)
encoding <string> The encoding to use.

Returns: <this>

The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather
than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling
readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as
Buffer objects.

const assert = require('assert');
const readable = getReadableStreamSomehow();

readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});

readable.unpipe([destination])
destination <stream.Writable> Optional specific stream to unpipe

Returns: <this>

The readable.unpipe() method detaches a Writable stream previously attached using the stream.pipe() method.

If the destination is not specified, then all pipes are detached.

If the destination is specified, but no pipe is set up for it, then the method does nothing.

const fs = require('fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);

readable.unshift(chunk[, encoding])
chunk <Buffer> | <Uint8Array> | <string> | <null> | <any> Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a string,
Buffer , Uint8Array or null . For object mode streams, chunk may be any JavaScript value.

encoding <string> Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii' .

Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null) , after which no more data can be written. The EOF signal is put at the end of the
buffer and any buffered data will still be flushed.

The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-
consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
const { StringDecoder } = require('string_decoder');
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.match(/\n\n/)) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
      } else {
        // Still reading the header.
        header += str;
      }
    }
  }
}

Unlike stream.push(chunk) , stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if
readable.unshift() is called during a read (i.e. from within a stream._read() implementation on a custom stream). Following the call to readable.unshift() with an immediate
stream.push('') will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

readable.wrap(stream)
stream <Stream> An "old style" readable stream

Returns: <this>

Prior to Node.js 0.10, streams did not implement the entire stream module API as it is currently defined. (See Compatibility for more information.)

When using an older Node.js library that emits 'data' events and has a stream.pause() method that is advisory only, the readable.wrap() method can be used to create a Readable
stream that uses the old stream as its data source.

It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

const { OldReader } = require('./old-api-module.js');
const { Readable } = require('stream');

const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});

readable[Symbol.asyncIterator]()
Returns: <AsyncIterator> to fully consume the stream.

const fs = require('fs');

async function print(readable) {
  readable.setEncoding('utf8');
  let data = '';
  for await (const chunk of readable) {
    data += chunk;
  }
  console.log(data);
}

print(fs.createReadStream('file')).catch(console.error);

If the loop terminates with a break or a throw , the stream will be destroyed. In other words, iterating over a stream will consume the stream fully. The stream will be read in chunks of size
equal to the highWaterMark option. In the code example above, data will be in a single chunk if the file has less than 64 KB of data because no highWaterMark option is provided to
fs.createReadStream() .

Duplex and transform streams

Class: stream.Duplex
Duplex streams are streams that implement both the Readable and Writable interfaces.

Examples of Duplex streams include:

TCP sockets
zlib streams
crypto streams
Class: stream.Transform
Transform streams are Duplex streams where the output is in some way related to the input. Like all Duplex streams, Transform streams implement both the Readable and Writable
interfaces.

Examples of Transform streams include:

zlib streams
crypto streams

transform.destroy([error])
error <Error>

Returns: <this>

Destroy the stream, and optionally emit an 'error' event. After this call, the transform stream will release any internal resources. Implementors should not override this method, but
instead implement readable._destroy() . The default implementation of _destroy() for Transform also emits 'close' unless emitClose is set to false .

Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error' .

stream.finished(stream[, options], callback)


stream <Stream> A readable and/or writable stream.

options <Object>
error <boolean> If set to false , then a call to emit('error', err) is not treated as finished. Default: true .

readable <boolean> When set to false , the callback will be called when the stream ends even though the stream might still be readable. Default: true .

writable <boolean> When set to false , the callback will be called when the stream ends even though the stream might still be writable. Default: true .

signal <AbortSignal> allows aborting the wait for the stream finish. The underlying stream will not be aborted if the signal is aborted. The callback will get called with an
AbortError . All registered listeners added by this function will also be removed.

callback <Function> A callback function that takes an optional error argument.

Returns: <Function> A cleanup function which removes all registered listeners.

A function to get notified when a stream is no longer readable, writable or has experienced an error or a premature close event.

const { finished } = require('stream');
const fs = require('fs');

const rs = fs.createReadStream('archive.tar');

finished(rs, (err) => {
  if (err) {
    console.error('Stream failed.', err);
  } else {
    console.log('Stream is done reading.');
  }
});

rs.resume(); // Drain the stream.

Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit 'end' or 'finish' .

The finished API provides a promise version:

const { finished } = require('stream/promises');
const fs = require('fs');

const rs = fs.createReadStream('archive.tar');

async function run() {
  await finished(rs);
  console.log('Stream is done reading.');
}

run().catch(console.error);
rs.resume(); // Drain the stream.

stream.finished() leaves dangling event listeners (in particular 'error' , 'end' , 'finish' and 'close' ) after callback has been invoked. The reason for this is so that unexpected
'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then the returned cleanup function needs to be invoked in the
callback:

const cleanup = finished(rs, (err) => {
  cleanup();
  // ...
});

stream.pipeline(source[, ...transforms], destination, callback)
stream.pipeline(streams, callback)

streams <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]>

source <Stream> | <Iterable> | <AsyncIterable> | <Function>
  Returns: <Iterable> | <AsyncIterable>

...transforms <Stream> | <Function>
  source <AsyncIterable>
  Returns: <AsyncIterable>

destination <Stream> | <Function>
  source <AsyncIterable>
  Returns: <AsyncIterable> | <Promise>

callback <Function> Called when the pipeline is fully done.
  err <Error>
  val Resolved value of Promise returned by destination .

Returns: <Stream>

A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

const { pipeline } = require('stream');
const fs = require('fs');
const zlib = require('zlib');

// Use the pipeline API to easily pipe a series of streams together
// and get notified when the pipeline is fully done.

// A pipeline to gzip a potentially huge tar file efficiently:

pipeline(
  fs.createReadStream('archive.tar'),
  zlib.createGzip(),
  fs.createWriteStream('archive.tar.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed.', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  }
);

The pipeline API provides a promise version, which can also receive an options argument as the last parameter with a signal <AbortSignal> property. When the signal is aborted,
destroy will be called on the underlying pipeline, with an AbortError .

const { pipeline } = require('stream/promises');
const fs = require('fs');
const zlib = require('zlib');

async function run() {
  await pipeline(
    fs.createReadStream('archive.tar'),
    zlib.createGzip(),
    fs.createWriteStream('archive.tar.gz')
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);

To use an AbortSignal , pass it inside an options object, as the last argument:

const { pipeline } = require('stream/promises');
const fs = require('fs');
const zlib = require('zlib');

async function run() {
  const ac = new AbortController();
  const options = {
    signal: ac.signal,
  };

  setTimeout(() => ac.abort(), 1);

  await pipeline(
    fs.createReadStream('archive.tar'),
    zlib.createGzip(),
    fs.createWriteStream('archive.tar.gz'),
    options,
  );
}

run().catch(console.error); // AbortError

The pipeline API also supports async generators:


const { pipeline } = require('stream/promises');
const fs = require('fs');

async function run() {
  await pipeline(
    fs.createReadStream('lowercase.txt'),
    async function* (source) {
      source.setEncoding('utf8'); // Work with strings rather than `Buffer`s.
      for await (const chunk of source) {
        yield chunk.toUpperCase();
      }
    },
    fs.createWriteStream('uppercase.txt')
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);

stream.pipeline() will call stream.destroy(err) on all streams except:

Readable streams which have emitted 'end' or 'close' .

Writable streams which have emitted 'finish' or 'close' .

stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and
swallowed errors.

stream.Readable.from(iterable, [options])
iterable <Iterable> Object implementing the Symbol.asyncIterator or Symbol.iterator iterable protocol. Emits an 'error' event if a null value is passed.

options <Object> Options provided to new stream.Readable([options]) . By default, Readable.from() will set options.objectMode to true , unless this is explicitly opted out by
setting options.objectMode to false .

Returns: <stream.Readable>

A utility method for creating readable streams out of iterators.

const { Readable } = require('stream');

async function * generate() {
  yield 'hello';
  yield 'streams';
}

const readable = Readable.from(generate());

readable.on('data', (chunk) => {
  console.log(chunk);
});

For performance reasons, calling Readable.from(string) or Readable.from(buffer) will not iterate the string or buffer to match the semantics of other streams; the value is emitted as a single chunk.
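
A quick sketch of that note: the string below arrives as one chunk rather than five single-character chunks:

const { Readable } = require('stream');

Readable.from('hello').on('data', (chunk) => {
  console.log(chunk); // 'hello' (a single chunk)
});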

stream.addAbortSignal(signal, stream)
signal <AbortSignal> A signal representing possible cancellation

stream <Stream> a stream to attach a signal to

Attaches an AbortSignal to a readable or writable stream. This lets code control stream destruction using an AbortController .

Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the stream.

const fs = require('fs');
const { addAbortSignal } = require('stream');

const controller = new AbortController();

const read = addAbortSignal(
  controller.signal,
  fs.createReadStream('object.json')
);
// Later, abort the operation closing the stream
controller.abort();

Or using an AbortSignal with a readable stream as an async iterable:

const fs = require('fs');
const { addAbortSignal } = require('stream');

const controller = new AbortController();

setTimeout(() => controller.abort(), 10_000); // set a timeout
const stream = addAbortSignal(
  controller.signal,
  fs.createReadStream('object.json')
);
(async () => {
  try {
    // process() is a hypothetical async handler for each chunk.
    for await (const chunk of stream) {
      await process(chunk);
    }
  } catch (e) {
    if (e.name === 'AbortError') {
      // The operation was cancelled
    } else {
      throw e;
    }
  }
})();

API for stream implementers


The stream module API has been designed to make it possible to easily implement streams using JavaScript's prototypal inheritance model.

First, a stream developer would declare a new JavaScript class that extends one of the four basic stream classes ( stream.Writable , stream.Readable , stream.Duplex , or
stream.Transform ), making sure they call the appropriate parent class constructor:

const { Writable } = require('stream');

class MyWritable extends Writable {
  constructor({ highWaterMark, ...options }) {
    super({ highWaterMark });
    // ...
  }
}

When extending streams, keep in mind what options the user can and should provide before forwarding these to the base constructor. For example, if the implementation makes
assumptions in regard to the autoDestroy and emitClose options, do not allow the user to override these. Be explicit about what options are forwarded instead of implicitly forwarding all
options.

The new stream class must then implement one or more specific methods, depending on the type of stream being created, as detailed in the chart below:

Use-case                                        Class       Method(s) to implement

Reading only                                    Readable    _read()

Writing only                                    Writable    _write() , _writev() , _final()

Reading and writing                             Duplex      _read() , _write() , _writev() , _final()

Operate on written data, then read the result   Transform   _transform() , _flush() , _final()

The implementation code for a stream should never call the "public" methods of a stream that are intended for use by consumers (as described in the API for stream consumers section).
Doing so may lead to adverse side effects in application code consuming the stream.

Avoid overriding public methods such as write() , end() , cork() , uncork() , read() and destroy() , or emitting internal events such as 'error' , 'data' , 'end' , 'finish' and
'close' through .emit() . Doing so can break current and future stream invariants leading to behavior and/or compatibility issues with other streams, stream utilities, and user
expectations.

Simplified construction
For many simple cases, it is possible to create a stream without relying on inheritance. This can be accomplished by directly creating instances of the stream.Writable , stream.Readable ,
stream.Duplex or stream.Transform objects and passing appropriate methods as constructor options.

const { Writable } = require('stream');

const myWritable = new Writable({
  construct(callback) {
    // Initialize state and load resources...
  },
  write(chunk, encoding, callback) {
    // ...
  },
  destroy() {
    // Free resources...
  }
});

Implementing a writable stream


The stream.Writable class is extended to implement a Writable stream.

Custom Writable streams must call the new stream.Writable([options]) constructor and implement the writable._write() and/or writable._writev() method.

new stream.Writable([options])
options <Object>
highWaterMark <number> Buffer level when stream.write() starts returning false . Default: 16384 (16KB), or 16 for objectMode streams.

decodeStrings <boolean> Whether to encode string s passed to stream.write() to Buffer s (with the encoding specified in the stream.write() call) before passing them to
stream._write() . Other types of data are not converted (i.e. Buffer s are not decoded into string s). Setting to false will prevent string s from being converted. Default: true .

defaultEncoding <string> The default encoding that is used when no encoding is specified as an argument to stream.write() . Default: 'utf8' .

objectMode <boolean> Whether or not the stream.write(anyObj) is a valid operation. When set, it becomes possible to write JavaScript values other than string, Buffer or
Uint8Array if supported by the stream implementation. Default: false .

emitClose <boolean> Whether or not the stream should emit 'close' after it has been destroyed. Default: true .

write <Function> Implementation for the stream._write() method.

writev <Function> Implementation for the stream._writev() method.

destroy <Function> Implementation for the stream._destroy() method.

final <Function> Implementation for the stream._final() method.

construct <Function> Implementation for the stream._construct() method.

autoDestroy <boolean> Whether this stream should automatically call .destroy() on itself after ending. Default: true .

signal <AbortSignal> A signal representing possible cancellation.

const { Writable } = require('stream');

class MyWritable extends Writable {
  constructor(options) {
    // Calls the stream.Writable() constructor.
    super(options);
    // ...
  }
}

Or, when using pre-ES6 style constructors:

const { Writable } = require('stream');
const util = require('util');

function MyWritable(options) {
  if (!(this instanceof MyWritable))
    return new MyWritable(options);
  Writable.call(this, options);
}
util.inherits(MyWritable, Writable);

Or, using the simplified constructor approach:

const { Writable } = require('stream');

const myWritable = new Writable({
  write(chunk, encoding, callback) {
    // ...
  },
  writev(chunks, callback) {
    // ...
  }
});

Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the writable stream.

const { Writable } = require('stream');

const controller = new AbortController();
const myWritable = new Writable({
  write(chunk, encoding, callback) {
    // ...
  },
  writev(chunks, callback) {
    // ...
  },
  signal: controller.signal
});
// Later, abort the operation closing the stream
controller.abort();

writable._construct(callback)
callback <Function> Call this function (optionally with an error argument) when the stream has finished initializing.

The _construct() method MUST NOT be called directly. It may be implemented by child classes, and if so, will be called by the internal Writable class methods only.

This optional function will be called in a tick after the stream constructor has returned, delaying any _write() , _final() and _destroy() calls until callback is called. This is useful to
initialize state or asynchronously initialize resources before the stream can be used.

const { Writable } = require('stream');
const fs = require('fs');

class WriteStream extends Writable {
  constructor(filename) {
    super();
    this.filename = filename;
  }
  _construct(callback) {
    fs.open(this.filename, (err, fd) => {
      if (err) {
        callback(err);
      } else {
        this.fd = fd;
        callback();
      }
    });
  }
  _write(chunk, encoding, callback) {
    fs.write(this.fd, chunk, callback);
  }
  _destroy(err, callback) {
    if (this.fd) {
      fs.close(this.fd, (er) => callback(er || err));
    } else {
      callback(err);
    }
  }
}

writable._write(chunk, encoding, callback)


chunk <Buffer> | <string> | <any> The Buffer to be written, converted from the string passed to stream.write() . If the stream's decodeStrings option is false or the stream
is operating in object mode, the chunk will not be converted & will be whatever was passed to stream.write() .
encoding <string> If the chunk is a string, then encoding is the character encoding of that string. If chunk is a Buffer , or if the stream is operating in object mode, encoding may be
ignored.
callback <Function> Call this function (optionally with an error argument) when processing is complete for the supplied chunk.

All Writable stream implementations must provide a writable._write() and/or writable._writev() method to send data to the underlying resource.

Transform streams provide their own implementation of the writable._write() .

This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Writable class methods only.

The callback function must be called synchronously inside of writable._write() or asynchronously (i.e. different tick) to signal either that the write completed successfully or failed with
an error. The first argument passed to the callback must be the Error object if the call failed or null if the write succeeded.

All calls to writable.write() that occur between the time writable._write() is called and the callback is called will cause the written data to be buffered. When the callback is
invoked, the stream might emit a 'drain' event. If a stream implementation is capable of processing multiple chunks of data at once, the writable._writev() method should be
implemented.

If the decodeStrings property is explicitly set to false in the constructor options, then chunk will remain the same object that is passed to .write() , and may be a string rather than a
Buffer . This is to support implementations that have an optimized handling for certain string data encodings. In that case, the encoding argument will indicate the character encoding of
the string. Otherwise, the encoding argument can be safely ignored.

The writable._write() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.

writable._writev(chunks, callback)
chunks <Object[]> The data to be written. The value is an array of <Object> that each represent a discrete chunk of data to write. The properties of these objects are:
chunk <Buffer> | <string> A buffer instance or string containing the data to be written. The chunk will be a string if the Writable was created with the decodeStrings option
set to false and a string was passed to write() .
encoding <string> The character encoding of the chunk . If chunk is a Buffer , the encoding will be 'buffer' .

callback <Function> A callback function (optionally with an error argument) to be invoked when processing is complete for the supplied chunks.

This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Writable class methods only.

The writable._writev() method may be implemented in addition or alternatively to writable._write() in stream implementations that are capable of processing multiple chunks of data
at once. If implemented and if there is buffered data from previous writes, _writev() will be called instead of _write() .

The writable._writev() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
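
A hedged sketch of a _writev() implementation that coalesces the buffered chunks into one write; writeToDestinationSomehow() is a hypothetical helper standing in for the real resource:

const { Writable } = require('stream');

const batching = new Writable({
  writev(chunks, callback) {
    // `chunks` is an array of { chunk, encoding } objects; concatenate
    // them so the destination receives a single buffer.
    const joined = Buffer.concat(chunks.map(({ chunk }) => chunk));
    writeToDestinationSomehow(joined, callback); // hypothetical helper
  }
});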

writable._destroy(err, callback)
err <Error> A possible error.

callback <Function> A callback function that takes an optional error argument.

The _destroy() method is called by writable.destroy() . It can be overridden by child classes but it must not be called directly.
writable._final(callback)
callback <Function> Call this function (optionally with an error argument) when finished writing any remaining data.

The _final() method must not be called directly. It may be implemented by child classes, and if so, will be called by the internal Writable class methods only.

This optional function will be called before the stream closes, delaying the 'finish' event until callback is called. This is useful to close resources or write buffered data before a stream
ends.
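
A minimal sketch: writes are accumulated in memory and flushed in one go before 'finish' ; flushSomewhere() is a hypothetical helper standing in for the real destination:

const { Writable } = require('stream');

const w = new Writable({
  write(chunk, encoding, callback) {
    this.pending = this.pending || [];
    this.pending.push(chunk); // Accumulate writes in memory.
    callback();
  },
  final(callback) {
    // Called before 'finish'; hand everything to the (hypothetical)
    // destination, then signal completion via callback.
    flushSomewhere(Buffer.concat(this.pending || []), callback);
  }
});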

Errors while writing


Errors occurring during the processing of the writable._write() , writable._writev() and writable._final() methods must be propagated by invoking the callback and passing the
error as the first argument. Throwing an Error from within these methods or manually emitting an 'error' event results in undefined behavior.

If a Readable stream pipes into a Writable stream when Writable emits an error, the Readable stream will be unpiped.

const { Writable } = require('stream');

const myWritable = new Writable({
  write(chunk, encoding, callback) {
    if (chunk.toString().indexOf('a') >= 0) {
      callback(new Error('chunk is invalid'));
    } else {
      callback();
    }
  }
});

An example writable stream


The following illustrates a rather simplistic (and somewhat pointless) custom Writable stream implementation. While this specific Writable stream instance is not of any real particular
usefulness, the example illustrates each of the required elements of a custom Writable stream instance:

const { Writable } = require('stream');

class MyWritable extends Writable {
  _write(chunk, encoding, callback) {
    if (chunk.toString().indexOf('a') >= 0) {
      callback(new Error('chunk is invalid'));
    } else {
      callback();
    }
  }
}

Decoding buffers in a writable stream


Decoding buffers is a common task, for instance, when using transformers whose input is a string. This is not a trivial process when using multi-byte characters encoding, such as UTF-8. The
following example shows how to decode multi-byte strings using StringDecoder and Writable .

const { Writable } = require('stream');
const { StringDecoder } = require('string_decoder');

class StringWritable extends Writable {
  constructor(options) {
    super(options);
    this._decoder = new StringDecoder(options && options.defaultEncoding);
    this.data = '';
  }
  _write(chunk, encoding, callback) {
    if (encoding === 'buffer') {
      chunk = this._decoder.write(chunk);
    }
    this.data += chunk;
    callback();
  }
  _final(callback) {
    this.data += this._decoder.end();
    callback();
  }
}

const euro = [[0xE2, 0x82], [0xAC]].map(Buffer.from);
const w = new StringWritable();

w.write('currency: ');
w.write(euro[0]);
w.end(euro[1]);

console.log(w.data); // currency: €

Implementing a readable stream
The stream.Readable class is extended to implement a Readable stream.

Custom Readable streams must call the new stream.Readable([options]) constructor and implement the readable._read() method.

new stream.Readable([options])
options <Object>
highWaterMark <number> The maximum number of bytes to store in the internal buffer before ceasing to read from the underlying resource. Default: 16384 (16KB), or 16 for
objectMode streams.

encoding <string> If specified, then buffers will be decoded to strings using the specified encoding. Default: null .

objectMode <boolean> Whether this stream should behave as a stream of objects. Meaning that stream.read(n) returns a single value instead of a Buffer of size n . Default:
false .

emitClose <boolean> Whether or not the stream should emit 'close' after it has been destroyed. Default: true .

read <Function> Implementation for the stream._read() method.

destroy <Function> Implementation for the stream._destroy() method.

construct <Function> Implementation for the stream._construct() method.

autoDestroy <boolean> Whether this stream should automatically call .destroy() on itself after ending. Default: true .

signal <AbortSignal> A signal representing possible cancellation.

const { Readable } = require('stream');

class MyReadable extends Readable {
  constructor(options) {
    // Calls the stream.Readable(options) constructor.
    super(options);
    // ...
  }
}

Or, when using pre-ES6 style constructors:

const { Readable } = require('stream');
const util = require('util');

function MyReadable(options) {
  if (!(this instanceof MyReadable))
    return new MyReadable(options);
  Readable.call(this, options);
}
util.inherits(MyReadable, Readable);

Or, using the simplified constructor approach:

const { Readable } = require('stream');

const myReadable = new Readable({
  read(size) {
    // ...
  }
});

Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the readable created.

const { Readable } = require('stream');

const controller = new AbortController();
const read = new Readable({
  read(size) {
    // ...
  },
  signal: controller.signal
});
// Later, abort the operation closing the stream
controller.abort();

readable._construct(callback)
callback <Function> Call this function (optionally with an error argument) when the stream has finished initializing.

The _construct() method MUST NOT be called directly. It may be implemented by child classes, and if so, will be called by the internal Readable class methods only.

This optional function will be scheduled in the next tick by the stream constructor, delaying any _read() and _destroy() calls until callback is called. This is useful to initialize state or
asynchronously initialize resources before the stream can be used.

const { Readable } = require('stream');
const fs = require('fs');

class ReadStream extends Readable {
  constructor(filename) {
    super();
    this.filename = filename;
    this.fd = null;
  }
  _construct(callback) {
    fs.open(this.filename, (err, fd) => {
      if (err) {
        callback(err);
      } else {
        this.fd = fd;
        callback();
      }
    });
  }
  _read(n) {
    const buf = Buffer.alloc(n);
    fs.read(this.fd, buf, 0, n, null, (err, bytesRead) => {
      if (err) {
        this.destroy(err);
      } else {
        this.push(bytesRead > 0 ? buf.slice(0, bytesRead) : null);
      }
    });
  }
  _destroy(err, callback) {
    if (this.fd) {
      fs.close(this.fd, (er) => callback(er || err));
    } else {
      callback(err);
    }
  }
}

readable._read(size)
size <number> Number of bytes to read asynchronously

This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.

All Readable stream implementations must provide an implementation of the readable._read() method to fetch data from the underlying resource.

When readable._read() is called, if data is available from the resource, the implementation should begin pushing that data into the read queue using the this.push(dataChunk) method.
_read() should continue reading from the resource and pushing data until readable.push() returns false . Only when _read() is called again after it has stopped should it resume
pushing additional data onto the queue.

Once the readable._read() method has been called, it will not be called again until more data is pushed through the readable.push() method. Empty data such as empty buffers and
strings will not cause readable._read() to be called.

The size argument is advisory. Implementations for which a "read" is a single operation that returns data can use the size argument to determine how much data to fetch. Other
implementations may ignore this argument and simply provide data whenever it becomes available. There is no need to "wait" until size bytes are available before calling
stream.push(chunk) .

The readable._read() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.

readable._destroy(err, callback)
err <Error> A possible error.

callback <Function> A callback function that takes an optional error argument.

The _destroy() method is called by readable.destroy() . It can be overridden by child classes but it must not be called directly.

readable.push(chunk[, encoding])
chunk <Buffer> | <Uint8Array> | <string> | <null> | <any> Chunk of data to push into the read queue. For streams not operating in object mode, chunk must be a string, Buffer
or Uint8Array . For object mode streams, chunk may be any JavaScript value.

encoding <string> Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii' .

Returns: <boolean> true if additional chunks of data may continue to be pushed; false otherwise.

When chunk is a Buffer , Uint8Array or string , the chunk of data will be added to the internal queue for users of the stream to consume. Passing chunk as null signals the end of the
stream (EOF), after which no more data can be written.

When the Readable is operating in paused mode, the data added with readable.push() can be read out by calling the readable.read() method when the 'readable' event is emitted.

When the Readable is operating in flowing mode, the data added with readable.push() will be delivered by emitting a 'data' event.

The readable.push() method is designed to be as flexible as possible. For example, when wrapping a lower-level source that provides some form of pause/resume mechanism, and a data
callback, the low-level source can be wrapped by the custom Readable instance:

// `_source` is an object with readStop() and readStart() methods,
// and an `ondata` member that gets called when it has data, and
// an `onend` member that gets called when the data is over.

class SourceWrapper extends Readable {
  constructor(options) {
    super(options);

    this._source = getLowLevelSourceObject();

    // Every time there's data, push it into the internal buffer.
    this._source.ondata = (chunk) => {
      // If push() returns false, then stop reading from source.
      if (!this.push(chunk))
        this._source.readStop();
    };

    // When the source ends, push the EOF-signaling `null` chunk.
    this._source.onend = () => {
      this.push(null);
    };
  }
  // _read() will be called when the stream wants to pull more data in.
  // The advisory size argument is ignored in this case.
  _read(size) {
    this._source.readStart();
  }
}

The readable.push() method is used to push the content into the internal buffer. It can be driven by the readable._read() method.

For streams not operating in object mode, if the chunk parameter of readable.push() is undefined , it will be treated as an empty string or buffer. See readable.push('') for more
information.

Errors while reading


Errors occurring during processing of the readable._read() must be propagated through the readable.destroy(err) method. Throwing an Error from within readable._read() or
manually emitting an 'error' event results in undefined behavior.

const { Readable } = require('stream');

const myReadable = new Readable({
  read(size) {
    const err = checkSomeErrorCondition();
    if (err) {
      this.destroy(err);
    } else {
      // Do some work.
    }
  }
});

An example counting stream


The following is a basic example of a Readable stream that emits the numerals from 1 to 1,000,000 in ascending order, and then ends.

const { Readable } = require('stream');

class Counter extends Readable {
  constructor(opt) {
    super(opt);
    this._max = 1000000;
    this._index = 1;
  }

  _read() {
    const i = this._index++;
    if (i > this._max)
      this.push(null);
    else {
      const str = String(i);
      const buf = Buffer.from(str, 'ascii');
      this.push(buf);
    }
  }
}

Implementing a duplex stream
A Duplex stream is one that implements both Readable and Writable , such as a TCP socket connection.

Because JavaScript does not have support for multiple inheritance, the stream.Duplex class is extended to implement a Duplex stream (as opposed to extending the stream.Readable and
stream.Writable classes).

The stream.Duplex class prototypically inherits from stream.Readable and parasitically from stream.Writable , but instanceof will work properly for both base classes due to overriding
Symbol.hasInstance on stream.Writable .

Custom Duplex streams must call the new stream.Duplex([options]) constructor and implement both the readable._read() and writable._write() methods.

new stream.Duplex(options)
options <Object> Passed to both Writable and Readable constructors. Also has the following fields:
allowHalfOpen <boolean> If set to false , then the stream will automatically end the writable side when the readable side ends. Default: true .

readable <boolean> Sets whether the Duplex should be readable. Default: true .

writable <boolean> Sets whether the Duplex should be writable. Default: true .

readableObjectMode <boolean> Sets objectMode for readable side of the stream. Has no effect if objectMode is true . Default: false .

writableObjectMode <boolean> Sets objectMode for writable side of the stream. Has no effect if objectMode is true . Default: false .

readableHighWaterMark <number> Sets highWaterMark for the readable side of the stream. Has no effect if highWaterMark is provided.

writableHighWaterMark <number> Sets highWaterMark for the writable side of the stream. Has no effect if highWaterMark is provided.

const { Duplex } = require('stream');

class MyDuplex extends Duplex {
  constructor(options) {
    super(options);
    // ...
  }
}

Or, when using pre-ES6 style constructors:

const { Duplex } = require('stream');
const util = require('util');

function MyDuplex(options) {
  if (!(this instanceof MyDuplex))
    return new MyDuplex(options);
  Duplex.call(this, options);
}
util.inherits(MyDuplex, Duplex);

Or, using the simplified constructor approach:

const { Duplex } = require('stream');

const myDuplex = new Duplex({
  read(size) {
    // ...
  },
  write(chunk, encoding, callback) {
    // ...
  }
});

When using pipeline:

const { Transform, pipeline } = require('stream');
const fs = require('fs');

pipeline(
  fs.createReadStream('object.json')
    .setEncoding('utf-8'),
  new Transform({
    decodeStrings: false, // Accept string input rather than Buffers
    construct(callback) {
      this.data = '';
      callback();
    },
    transform(chunk, encoding, callback) {
      this.data += chunk;
      callback();
    },
    flush(callback) {
      try {
        // Make sure it is valid JSON.
        JSON.parse(this.data);
        this.push(this.data);
        callback(); // Signal that flushing completed successfully.
      } catch (err) {
        callback(err);
      }
    }
  }),
  fs.createWriteStream('valid-object.json'),
  (err) => {
    if (err) {
      console.error('failed', err);
    } else {
      console.log('completed');
    }
  }
);

An example duplex stream


The following illustrates a simple example of a Duplex stream that wraps a hypothetical lower-level source object to which data can be written, and from which data can be read, albeit using
an API that is not compatible with Node.js streams.

const { Duplex } = require('stream');

const kSource = Symbol('source');

class MyDuplex extends Duplex {
  constructor(source, options) {
    super(options);
    this[kSource] = source;
  }

  _write(chunk, encoding, callback) {
    // The underlying source only deals with strings.
    if (Buffer.isBuffer(chunk))
      chunk = chunk.toString();
    this[kSource].writeSomeData(chunk);
    callback();
  }

  _read(size) {
    this[kSource].fetchSomeData(size, (data, encoding) => {
      this.push(Buffer.from(data, encoding));
    });
  }
}

The most important aspect of a Duplex stream is that the Readable and Writable sides operate independently of one another despite co-existing within a single object instance.

Object mode duplex streams


For Duplex streams, objectMode can be set exclusively for either the Readable or Writable side using the readableObjectMode and writableObjectMode options respectively.

In the following example, for instance, a new Transform stream (which is a type of Duplex stream) is created that has an object mode Writable side that accepts JavaScript numbers that
are converted to hexadecimal strings on the Readable side.

const { Transform } = require('stream');

// All Transform streams are also Duplex Streams.
const myTransform = new Transform({
  writableObjectMode: true,

  transform(chunk, encoding, callback) {
    // Coerce the chunk to a number if necessary.
    chunk |= 0;

    // Transform the chunk into something else.
    const data = chunk.toString(16);

    // Push the data onto the readable queue.
    callback(null, '0'.repeat(data.length % 2) + data);
  }
});

myTransform.setEncoding('ascii');
myTransform.on('data', (chunk) => console.log(chunk));

myTransform.write(1);
// Prints: 01
myTransform.write(10);
// Prints: 0a
myTransform.write(100);
// Prints: 64

Implementing a transform stream


A Transform stream is a Duplex stream where the output is computed in some way from the input. Examples include zlib streams or crypto streams that compress, encrypt, or decrypt
data.

There is no requirement that the output be the same size as the input, the same number of chunks, or arrive at the same time. For example, a Hash stream will only ever have a single chunk
of output which is provided when the input is ended. A zlib stream will produce output that is either much smaller or much larger than its input.

The stream.Transform class is extended to implement a Transform stream.

The stream.Transform class prototypically inherits from stream.Duplex and implements its own versions of the writable._write() and readable._read() methods. Custom Transform
implementations must implement the transform._transform() method and may also implement the transform._flush() method.

Care must be taken when using Transform streams in that data written to the stream can cause the Writable side of the stream to become paused if the output on the Readable side is not
consumed.

new stream.Transform([options])
options <Object> Passed to both Writable and Readable constructors. Also has the following fields:
transform <Function> Implementation for the stream._transform() method.

flush <Function> Implementation for the stream._flush() method.

const { Transform } = require('stream');

class MyTransform extends Transform {
  constructor(options) {
    super(options);
    // ...
  }
}

Or, when using pre-ES6 style constructors:


const { Transform } = require('stream');
const util = require('util');

function MyTransform(options) {
  if (!(this instanceof MyTransform))
    return new MyTransform(options);
  Transform.call(this, options);
}
util.inherits(MyTransform, Transform);

Or, using the simplified constructor approach:

const { Transform } = require('stream');

const myTransform = new Transform({
  transform(chunk, encoding, callback) {
    // ...
  }
});

Event: 'end'
The 'end' event is from the stream.Readable class. The 'end' event is emitted after all data has been output, which occurs after the callback in transform._flush() has been called. In
the case of an error, 'end' should not be emitted.

Event: 'finish'
The 'finish' event is from the stream.Writable class. The 'finish' event is emitted after stream.end() is called and all chunks have been processed by stream._transform() . In the
case of an error, 'finish' should not be emitted.
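
A sketch of the two events on a pass-through style transform; 'finish' reflects the writable side completing, while 'end' reflects the readable side being drained:

const { Transform } = require('stream');

const t = new Transform({
  transform(chunk, encoding, callback) { callback(null, chunk); }
});

t.on('finish', () => console.log('finish: all input has been processed'));
t.on('end', () => console.log('end: all output has been consumed'));

t.write('some data');
t.end();
t.resume(); // Drain the readable side so 'end' can fire.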

transform._flush(callback)
callback <Function> A callback function (optionally with an error argument and data) to be called when remaining data has been flushed.

This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.

In some cases, a transform operation may need to emit an additional bit of data at the end of the stream. For example, a zlib compression stream will store an amount of internal state used
to optimally compress the output. When the stream ends, however, that additional data needs to be flushed so that the compressed data will be complete.

Custom Transform implementations may implement the transform._flush() method. This will be called when there is no more written data to be consumed, but before the 'end' event is
emitted signaling the end of the Readable stream.

Within the transform._flush() implementation, the transform.push() method may be called zero or more times, as appropriate. The callback function must be called when the flush
operation is complete.

The transform._flush() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
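
A hedged sketch of a transform that forwards data unchanged and appends a one-line trailer from _flush() once the input has ended:

const { Transform } = require('stream');

const counter = new Transform({
  transform(chunk, encoding, callback) {
    this.bytes = (this.bytes || 0) + chunk.length;
    callback(null, chunk); // Forward the chunk unchanged.
  },
  flush(callback) {
    // Emit a trailer before 'end' is signaled.
    this.push(`\n[${this.bytes || 0} bytes total]\n`);
    callback();
  }
});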

transform._transform(chunk, encoding, callback)


chunk <Buffer> | <string> | <any> The Buffer to be transformed, converted from the string passed to stream.write() . If the stream's decodeStrings option is false or the
stream is operating in object mode, the chunk will not be converted & will be whatever was passed to stream.write() .

encoding <string> If the chunk is a string, then this is the encoding type. If chunk is a buffer, then this is the special value 'buffer' . Ignore it in that case.

callback <Function> A callback function (optionally with an error argument and data) to be called after the supplied chunk has been processed.

This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.

All Transform stream implementations must provide a _transform() method to accept input and produce output. The transform._transform() implementation handles the bytes being
written, computes an output, then passes that output off to the readable portion using the transform.push() method.

The transform.push() method may be called zero or more times to generate output from a single input chunk, depending on how much is to be output as a result of the chunk.

It is possible that no output is generated from any given chunk of input data.

The callback function must be called only when the current chunk is completely consumed. The first argument passed to the callback must be an Error object if an error occurred while
processing the input or null otherwise. If a second argument is passed to the callback , it will be forwarded on to the transform.push() method. In other words, the following are
equivalent:

transform.prototype._transform = function(data, encoding, callback) {


this.push(data);
callback();
};

transform.prototype._transform = function(data, encoding, callback) {


callback(null, data);
};

The transform._transform() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.

transform._transform() is never called in parallel; streams implement a queue mechanism, and to receive the next chunk, callback must be called, either synchronously or
asynchronously.
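As a hedged sketch of these callback semantics (the names here are illustrative, not from the docs), the following transform completes its work asynchronously; the next chunk is delivered only after callback() runs:

const { Transform } = require('stream');

const slowUpper = new Transform({
  transform(chunk, encoding, callback) {
    // Simulate asynchronous work. Chunks remain queued until callback()
    // is invoked, so _transform() is never running for two chunks at once.
    setTimeout(() => {
      callback(null, chunk.toString().toUpperCase());
    }, 10);
  }
});

process.stdin.pipe(slowUpper).pipe(process.stdout);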
Class: stream.PassThrough
The stream.PassThrough class is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is primarily for examples and testing,
but there are some use cases where stream.PassThrough is useful as a building block for novel sorts of streams.
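For example, a minimal sketch (illustrative, not from the docs) uses a PassThrough as a tap to observe data flowing through a pipeline without altering it:

const { PassThrough } = require('stream');

const tap = new PassThrough();
tap.on('data', (chunk) => {
  // Observe the data without modifying what flows downstream.
  console.error(`observed ${chunk.length} bytes`);
});

process.stdin.pipe(tap).pipe(process.stdout);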

Additional notes
Streams compatibility with async generators and async iterators
With the support of async generators and iterators in JavaScript, async generators are effectively a first-class language-level stream construct at this point.

Some common interop cases of using Node.js streams with async generators and async iterators are provided below.

Consuming readable streams with async iterators

(async function() {
  for await (const chunk of readable) {
    console.log(chunk);
  }
})();

Async iterators register a permanent error handler on the stream to prevent any unhandled post-destroy errors.

Creating readable streams with async generators


A Node.js readable stream can be created from an asynchronous generator using the Readable.from() utility method:

const { Readable } = require('stream');

async function * generate() {
  yield 'a';
  yield 'b';
  yield 'c';
}

const readable = Readable.from(generate());

readable.on('data', (chunk) => {
  console.log(chunk);
});

Piping to writable streams from async iterators


When writing to a writable stream from an async iterator, ensure correct handling of backpressure and errors. stream.pipeline() abstracts away the handling of backpressure and
backpressure-related errors:

const fs = require('fs');
const { pipeline } = require('stream');
const { pipeline: pipelinePromise } = require('stream/promises');

const writable = fs.createWriteStream('./file');

// Callback Pattern
pipeline(iterator, writable, (err, value) => {
if (err) {
console.error(err);
} else {
console.log(value, 'value returned');
}
});

// Promise Pattern
pipelinePromise(iterator, writable)
.then((value) => {
console.log(value, 'value returned');
})
.catch(console.error);

Compatibility with older Node.js versions


Prior to Node.js 0.10, the Readable stream interface was simpler, but also less powerful and less useful.

Rather than waiting for calls to the stream.read() method, 'data' events would begin emitting immediately. Applications that would need to perform some amount of work to decide
how to handle data were required to store read data into buffers so the data would not be lost.
The stream.pause() method was advisory, rather than guaranteed. This meant that it was still necessary to be prepared to receive 'data' events even when the stream was in a paused
state.
In Node.js 0.10, the Readable class was added. For backward compatibility with older Node.js programs, Readable streams switch into "flowing mode" when a 'data' event handler is
added, or when the stream.resume() method is called. The effect is that, even when not using the new stream.read() method and 'readable' event, it is no longer necessary to worry
about losing 'data' chunks.

While most applications will continue to function normally, this introduces an edge case in the following conditions:

No 'data' event listener is added.

The stream.resume() method is never called.

The stream is not piped to any writable destination.

For example, consider the following code:

// WARNING! BROKEN!
net.createServer((socket) => {
  // We add an 'end' listener, but never consume the data.
  socket.on('end', () => {
    // It will never get here.
    socket.end('The message was received but was not processed.\n');
  });
}).listen(1337);

Prior to Node.js 0.10, the incoming message data would be simply discarded. However, in Node.js 0.10 and beyond, the socket remains paused forever.

The workaround in this situation is to call the stream.resume() method to begin the flow of data:

// Workaround.
net.createServer((socket) => {
  socket.on('end', () => {
    socket.end('The message was received but was not processed.\n');
  });

  // Start the flow of data, discarding it.
  socket.resume();
}).listen(1337);

In addition to new Readable streams switching into flowing mode, pre-0.10 style streams can be wrapped in a Readable class using the readable.wrap() method.
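A minimal sketch, assuming a hypothetical old-style stream exported by './old-api-module.js':

const { Readable } = require('stream');
const { OldReader } = require('./old-api-module.js'); // hypothetical pre-0.10 style stream

const oldReader = new OldReader();
const myReader = new Readable().wrap(oldReader);

myReader.on('readable', () => {
  myReader.read(); // consume data through the modern interface
});
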
readable.read(0)
There are some cases where it is necessary to trigger a refresh of the underlying readable stream mechanisms, without actually consuming any data. In such cases, it is possible to call
readable.read(0) , which will always return null .

If the internal read buffer is below the highWaterMark , and the stream is not currently reading, then calling stream.read(0) will trigger a low-level stream._read() call.

While most applications will almost never need to do this, there are situations within Node.js where this is done, particularly in the Readable stream class internals.

readable.push('')
Use of readable.push('') is not recommended.

Pushing a zero-byte string, Buffer or Uint8Array to a stream that is not in object mode has an interesting side effect. Because it is a call to readable.push() , the call will end the reading
process. However, because the argument is an empty string, no data is added to the readable buffer so there is nothing for a user to consume.

highWaterMark discrepancy after calling readable.setEncoding()


The use of readable.setEncoding() will change the behavior of how the highWaterMark operates in non-object mode.

Typically, the size of the current buffer is measured against the highWaterMark in bytes. However, after setEncoding() is called, the comparison function will begin to measure the buffer's
size in characters.

This is not a problem in common cases with latin1 or ascii . But it is advised to be mindful about this behavior when working with strings that could contain multi-byte characters.
Node.js v15.12.0 Documentation

Events
Stability: 2 - Stable

Source Code: lib/events.js

Much of the Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain kinds of objects (called "emitters") emit named events that cause
Function objects ("listeners") to be called.

For instance: a net.Server object emits an event each time a peer connects to it; a fs.ReadStream emits an event when the file is opened; a stream emits an event whenever data is
available to be read.

All objects that emit events are instances of the EventEmitter class. These objects expose an eventEmitter.on() function that allows one or more functions to be attached to named
events emitted by the object. Typically, event names are camel-cased strings but any valid JavaScript property key can be used.

When the EventEmitter object emits an event, all of the functions attached to that specific event are called synchronously. Any values returned by the called listeners are ignored and
discarded.

The following example shows a simple EventEmitter instance with a single listener. The eventEmitter.on() method is used to register listeners, while the eventEmitter.emit() method is
used to trigger the event.

const EventEmitter = require('events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
  console.log('an event occurred!');
});
myEmitter.emit('event');

Passing arguments and this to listeners


The eventEmitter.emit() method allows an arbitrary set of arguments to be passed to the listener functions. Keep in mind that when an ordinary listener function is called, the standard
this keyword is intentionally set to reference the EventEmitter instance to which the listener is attached.

const myEmitter = new MyEmitter();
myEmitter.on('event', function(a, b) {
  console.log(a, b, this, this === myEmitter);
  // Prints:
  //   a b MyEmitter {
  //     domain: null,
  //     _events: { event: [Function] },
  //     _eventsCount: 1,
  //     _maxListeners: undefined } true
});
myEmitter.emit('event', 'a', 'b');

It is possible to use ES6 Arrow Functions as listeners, however, when doing so, the this keyword will no longer reference the EventEmitter instance:

const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
  console.log(a, b, this);
  // Prints: a b {}
});
myEmitter.emit('event', 'a', 'b');

Asynchronous vs. synchronous


The EventEmitter calls all listeners synchronously in the order in which they were registered. This ensures the proper sequencing of events and helps avoid race conditions and logic errors.
When appropriate, listener functions can switch to an asynchronous mode of operation using the setImmediate() or process.nextTick() methods:

const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
  setImmediate(() => {
    console.log('this happens asynchronously');
  });
});
myEmitter.emit('event', 'a', 'b');

Handling events only once
When a listener is registered using the eventEmitter.on() method, that listener is invoked every time the named event is emitted.

const myEmitter = new MyEmitter();

let m = 0;
myEmitter.on('event', () => {
  console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Prints: 2

Using the eventEmitter.once() method, it is possible to register a listener that is called at most once for a particular event. Once the event is emitted, the listener is unregistered and then
called.

const myEmitter = new MyEmitter();

let m = 0;
myEmitter.once('event', () => {
  console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Ignored

Error events
When an error occurs within an EventEmitter instance, the typical action is for an 'error' event to be emitted. These are treated as special cases within Node.js.

If an EventEmitter does not have at least one listener registered for the 'error' event, and an 'error' event is emitted, the error is thrown, a stack trace is printed, and the Node.js
process exits.

const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
// Throws and crashes Node.js

To guard against crashing the Node.js process, the domain module can be used. (Note, however, that the domain module is deprecated.)

As a best practice, listeners should always be added for the 'error' events.

const myEmitter = new MyEmitter();
myEmitter.on('error', (err) => {
  console.error('whoops! there was an error');
});
myEmitter.emit('error', new Error('whoops!'));
// Prints: whoops! there was an error

It is possible to monitor 'error' events without consuming the emitted error by installing a listener using the symbol events.errorMonitor .

const { EventEmitter, errorMonitor } = require('events');

const myEmitter = new EventEmitter();
myEmitter.on(errorMonitor, (err) => {
  MyMonitoringTool.log(err);
});
myEmitter.emit('error', new Error('whoops!'));
// Still throws and crashes Node.js

Capture rejections of promises

Stability: 1 - captureRejections is experimental.

Using async functions with event handlers is problematic, because it can lead to an unhandled rejection in case of a thrown exception:

const ee = new EventEmitter();
ee.on('something', async (value) => {
  throw new Error('kaboom');
});

The captureRejections option in the EventEmitter constructor or the global setting changes this behavior, installing a .then(undefined, handler) handler on the Promise. This handler routes the exception asynchronously to the Symbol.for('nodejs.rejection') method if there is one, or to the 'error' event handler if there is none.

const ee1 = new EventEmitter({ captureRejections: true });
ee1.on('something', async (value) => {
  throw new Error('kaboom');
});

ee1.on('error', console.log);

const ee2 = new EventEmitter({ captureRejections: true });
ee2.on('something', async (value) => {
  throw new Error('kaboom');
});

ee2[Symbol.for('nodejs.rejection')] = console.log;

Setting events.captureRejections = true will change the default for all new instances of EventEmitter .

const events = require('events');

events.captureRejections = true;
const ee1 = new events.EventEmitter();
ee1.on('something', async (value) => {
  throw new Error('kaboom');
});

ee1.on('error', console.log);

The 'error' events that are generated by the captureRejections behavior do not have a catch handler to avoid infinite error loops: the recommendation is to not use async functions as
'error' event handlers.

Class: EventEmitter
The EventEmitter class is defined and exposed by the events module:

const EventEmitter = require('events');

All EventEmitter s emit the event 'newListener' when new listeners are added and 'removeListener' when existing listeners are removed.

It supports the following option:


captureRejections <boolean> Enables automatic capturing of promise rejections. Default: false.

Event: 'newListener'
eventName <string> | <symbol> The name of the event being listened for

listener <Function> The event handler function

The EventEmitter instance will emit its own 'newListener' event before a listener is added to its internal array of listeners.

Listeners registered for the 'newListener' event are passed the event name and a reference to the listener being added.

The fact that the event is triggered before adding the listener has a subtle but important side effect: any additional listeners registered to the same name within the 'newListener' callback
are inserted before the listener that is in the process of being added.

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
// Only do this once so we don't loop forever
myEmitter.once('newListener', (event, listener) => {
  if (event === 'event') {
    // Insert a new listener in front
    myEmitter.on('event', () => {
      console.log('B');
    });
  }
});
myEmitter.on('event', () => {
  console.log('A');
});
myEmitter.emit('event');
// Prints:
//   B
//   A

Event: 'removeListener'
eventName <string> | <symbol> The event name

listener <Function> The event handler function

The 'removeListener' event is emitted after the listener is removed.
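A minimal sketch (illustrative names, not from the docs):

const EventEmitter = require('events');

const ee = new EventEmitter();
const onPing = () => {};

ee.on('removeListener', (eventName, listener) => {
  console.log(`removed a listener for '${eventName}'`);
});

ee.on('ping', onPing);
ee.removeListener('ping', onPing);
// Prints: removed a listener for 'ping'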


emitter.addListener(eventName, listener)
eventName <string> | <symbol>

listener <Function>

Alias for emitter.on(eventName, listener) .

emitter.emit(eventName[, ...args])
eventName <string> | <symbol>

...args <any>

Returns: <boolean>

Synchronously calls each of the listeners registered for the event named eventName , in the order they were registered, passing the supplied arguments to each.

Returns true if the event had listeners, false otherwise.

const EventEmitter = require('events');
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
// [Function: firstListener],
// [Function: secondListener],
// [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener

emitter.eventNames()
Returns: <Array>

Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbol s.

const EventEmitter = require('events');

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]

emitter.getMaxListeners()
Returns: <integer>

Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to events.defaultMaxListeners .

emitter.listenerCount(eventName)
eventName <string> | <symbol> The name of the event being listened for

Returns: <integer>

Returns the number of listeners listening to the event named eventName .
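A minimal sketch (illustrative, not from the docs):

const EventEmitter = require('events');

const ee = new EventEmitter();
ee.on('tick', () => {});
ee.on('tick', () => {});
console.log(ee.listenerCount('tick'));
// Prints: 2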

emitter.listeners(eventName)
eventName <string> | <symbol>
Returns: <Function[]>

Returns a copy of the array of listeners for the event named eventName .

server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]

emitter.off(eventName, listener)
eventName <string> | <symbol>

listener <Function>

Returns: <EventEmitter>

Alias for emitter.removeListener() .

emitter.on(eventName, listener)
eventName <string> | <symbol> The name of the event.

listener <Function> The callback function

Returns: <EventEmitter>

Adds the listener function to the end of the listeners array for the event named eventName . No checks are made to see if the listener has already been added. Multiple calls passing the
same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.on('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter , so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the
listeners array.

const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

emitter.once(eventName, listener)
eventName <string> | <symbol> The name of the event.

listener <Function> The callback function

Returns: <EventEmitter>

Adds a one-time listener function for the event named eventName . The next time eventName is triggered, this listener is removed and then invoked.

server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter , so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of
the listeners array.

const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

emitter.prependListener(eventName, listener)
eventName <string> | <symbol> The name of the event.

listener <Function> The callback function

Returns: <EventEmitter>
Adds the listener function to the beginning of the listeners array for the event named eventName . No checks are made to see if the listener has already been added. Multiple calls passing
the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter , so that calls can be chained.

emitter.prependOnceListener(eventName, listener)
eventName <string> | <symbol> The name of the event.

listener <Function> The callback function

Returns: <EventEmitter>

Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter , so that calls can be chained.

emitter.removeAllListeners([eventName])
eventName <string> | <symbol>

Returns: <EventEmitter>

Removes all listeners, or those of the specified eventName .

It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file
streams).

Returns a reference to the EventEmitter , so that calls can be chained.

emitter.removeListener(eventName, listener)
eventName <string> | <symbol>

listener <Function>
Returns: <EventEmitter>

Removes the specified listener from the listener array for the event named eventName .

const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);

removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified
eventName , then removeListener() must be called multiple times to remove each instance.

Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and
before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);

myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in
which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the
example the once('ping') listener is removed:

const ee = new EventEmitter();

function pong() {
console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');

Returns a reference to the EventEmitter , so that calls can be chained.

emitter.setMaxListeners(n)
n <integer>

Returns: <EventEmitter>

By default EventEmitter s will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The
emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0 ) to indicate an unlimited number of
listeners.

Returns a reference to the EventEmitter , so that calls can be chained.

emitter.rawListeners(eventName)
eventName <string> | <symbol>
Returns: <Function[]>

Returns a copy of the array of listeners for the event named eventName , including any wrappers (such as those created by .once() ).

const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');

emitter[Symbol.for('nodejs.rejection')](err, eventName[, ...args])

Stability: 1 - captureRejections is experimental.

err <Error>

eventName <string> | <symbol>

...args <any>

The Symbol.for('nodejs.rejection') method is called in case a promise rejection happens when emitting an event and captureRejections is enabled on the emitter. It is possible to use
events.captureRejectionSymbol in place of Symbol.for('nodejs.rejection') .

const { EventEmitter, captureRejectionSymbol } = require('events');

class MyClass extends EventEmitter {
  constructor() {
    super({ captureRejections: true });
  }

  [captureRejectionSymbol](err, event, ...args) {
    console.log('rejection happened for', event, 'with', err, ...args);
    this.destroy(err);
  }

  destroy(err) {
    // Tear the resource down here.
  }
}

events.defaultMaxListeners
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n)
method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling
emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners .

This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has
been detected. For any single EventEmitter , the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
// do stuff
emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});

The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

The emitted warning can be inspected with process.on('warning') and will have the additional emitter , type and count properties, referring to the event emitter instance, the event’s
name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning' .
events.errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

Installing a listener using this symbol does not change the behavior once an 'error' event is emitted, therefore the process will still crash if no regular 'error' listener is installed.

events.getEventListeners(emitterOrTarget, eventName)
emitterOrTarget <EventEmitter> | <EventTarget>

eventName <string> | <symbol>

Returns: <Function[]>

Returns a copy of the array of listeners for the event named eventName .

For EventEmitter s this behaves exactly the same as calling .listeners on the emitter.

For EventTarget s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

const { getEventListeners, EventEmitter } = require('events');

{
const ee = new EventEmitter();
const listener = () => console.log('Events are fun');
ee.on('foo', listener);
getEventListeners(ee, 'foo'); // [listener]
}
{
const et = new EventTarget();
const listener = () => console.log('Events are fun');
et.addEventListener('foo', listener);
getEventListeners(et, 'foo'); // [listener]
}

events.once(emitter, name[, options])


emitter <EventEmitter>

name <string>
options <Object>
signal <AbortSignal> Can be used to cancel waiting for the event.

Returns: <Promise>

Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an
array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

const { once, EventEmitter } = require('events');

async function run() {
  const ee = new EventEmitter();

  process.nextTick(() => {
    ee.emit('myevent', 42);
  });

  const [value] = await once(ee, 'myevent');
  console.log(value);

  const err = new Error('kaboom');
  process.nextTick(() => {
    ee.emit('error', err);
  });

  try {
    await once(ee, 'myevent');
  } catch (err) {
    console.log('error happened', err);
  }
}

run();

The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

const { EventEmitter, once } = require('events');

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.log('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom

An <AbortSignal> can be used to cancel waiting for the event:

const { EventEmitter, once } = require('events');

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);

ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!

Awaiting multiple events emitted on process.nextTick()


There is an edge case worth noting when using the events.once() function to await multiple events emitted in the same batch of process.nextTick() operations, or whenever multiple events are emitted synchronously. Specifically, because the process.nextTick() queue is drained before the Promise microtask queue, and because EventEmitter emits all events synchronously, it is possible for events.once() to miss an event.

const { EventEmitter, once } = require('events');

const myEE = new EventEmitter();

async function foo() {
  await once(myEE, 'bar');
  console.log('bar');

  // This Promise will never resolve because the 'foo' event will
  // have already been emitted before the Promise is created.
  await once(myEE, 'foo');
  console.log('foo');
}

process.nextTick(() => {
  myEE.emit('bar');
  myEE.emit('foo');
});

foo().then(() => console.log('done'));

To catch both events, create each of the Promises before awaiting either of them, then it becomes possible to use Promise.all() , Promise.race() , or Promise.allSettled() :

const { EventEmitter, once } = require('events');

const myEE = new EventEmitter();

async function foo() {
  await Promise.all([once(myEE, 'bar'), once(myEE, 'foo')]);
  console.log('foo', 'bar');
}

process.nextTick(() => {
  myEE.emit('bar');
  myEE.emit('foo');
});

foo().then(() => console.log('done'));


events.captureRejections
Stability: 1 - captureRejections is experimental.

Value: <boolean>

Change the default captureRejections option on all new EventEmitter objects.

events.captureRejectionSymbol
Stability: 1 - captureRejections is experimental.

Value: Symbol.for('nodejs.rejection')

See how to write a custom rejection handler .

events.listenerCount(emitter, eventName)
Stability: 0 - Deprecated: Use emitter.listenerCount() instead.

emitter <EventEmitter> The emitter to query

eventName <string> | <symbol> The event name

A class method that returns the number of listeners for the given eventName registered on the given emitter .

const { EventEmitter, listenerCount } = require('events');

const myEmitter = new EventEmitter();
myEmitter.on('event', () => {});
myEmitter.on('event', () => {});
console.log(listenerCount(myEmitter, 'event'));
// Prints: 2

events.on(emitter, eventName[, options])
emitter <EventEmitter>

eventName <string> | <symbol> The name of the event being listened for

options <Object>
signal <AbortSignal> Can be used to cancel awaiting events.

Returns: <AsyncIterator> that iterates eventName events emitted by the emitter

const { on, EventEmitter } = require('events');

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo')) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error' . It removes all listeners when exiting the loop. The value returned by each
iteration is an array composed of the emitted event arguments.

An <AbortSignal> can be used to cancel waiting on events:

const { on, EventEmitter } = require('events');

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();
  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());

events.setMaxListeners(n[, ...eventTargets])
n <number> A non-negative number. The maximum number of listeners per EventTarget event.

...eventTargets <EventTarget[]> | <EventEmitter[]> Zero or more <EventTarget> or <EventEmitter> instances. If none are specified, n is set as the default max for all newly created <EventTarget> and <EventEmitter> objects.

const {
  setMaxListeners,
  EventEmitter
} = require('events');

const target = new EventTarget();
const emitter = new EventEmitter();

setMaxListeners(5, target, emitter);

EventTarget and Event API


The EventTarget and Event objects are a Node.js-specific implementation of the EventTarget Web API that are exposed by some Node.js core APIs.

const target = new EventTarget();

target.addEventListener('foo', (event) => {
  console.log('foo event happened!');
});

Node.js EventTarget vs. DOM EventTarget


There are two key differences between the Node.js EventTarget and the EventTarget Web API :

1. Whereas DOM EventTarget instances may be hierarchical, there is no concept of hierarchy and event propagation in Node.js. That is, an event dispatched to an EventTarget does not
propagate through a hierarchy of nested target objects that may each have their own set of handlers for the event.
2. In the Node.js EventTarget , if an event listener is an async function or returns a Promise , and the returned Promise rejects, the rejection is automatically captured and handled the
same way as a listener that throws synchronously (see EventTarget error handling for details).

NodeEventTarget vs. EventEmitter


The NodeEventTarget object implements a modified subset of the EventEmitter API that allows it to closely emulate an EventEmitter in certain situations. A NodeEventTarget is not an
instance of EventEmitter and cannot be used in place of an EventEmitter in most cases.

1. Unlike EventEmitter , any given listener can be registered at most once per event type . Attempts to register a listener multiple times are ignored.

2. The NodeEventTarget does not emulate the full EventEmitter API. Specifically the prependListener() , prependOnceListener() , rawListeners() , setMaxListeners() ,
getMaxListeners() , and errorMonitor APIs are not emulated. The 'newListener' and 'removeListener' events will also not be emitted.

3. The NodeEventTarget does not implement any special default behavior for events with type 'error' .

4. The NodeEventTarget supports EventListener objects as well as functions as handlers for all event types.

Event listener
Event listeners registered for an event type may either be JavaScript functions or objects with a handleEvent property whose value is a function.

In either case, the handler function is invoked with the event argument passed to the eventTarget.dispatchEvent() function.

Async functions may be used as event listeners. If an async handler function rejects, the rejection is captured and handled as described in EventTarget error handling .

An error thrown by one handler function does not prevent the other handlers from being invoked.

The return value of a handler function is ignored.

Handlers are always invoked in the order they were added.


Handler functions may mutate the event object.

function handler1(event) {
console.log(event.type); // Prints 'foo'
event.a = 1;
}

async function handler2(event) {
  console.log(event.type); // Prints 'foo'
  console.log(event.a); // Prints 1
}

const handler3 = {
handleEvent(event) {
console.log(event.type); // Prints 'foo'
}
};

const handler4 = {
async handleEvent(event) {
console.log(event.type); // Prints 'foo'
}
};

const target = new EventTarget();

target.addEventListener('foo', handler1);
target.addEventListener('foo', handler2);
target.addEventListener('foo', handler3);
target.addEventListener('foo', handler4, { once: true });

EventTarget error handling


When a registered event listener throws (or returns a Promise that rejects), by default the error is treated as an uncaught exception on process.nextTick() . This means uncaught
exceptions in EventTarget s will terminate the Node.js process by default.

Throwing within an event listener will not stop the other registered handlers from being invoked.

The EventTarget does not implement any special default handling for 'error' type events like EventEmitter .
Currently errors are first forwarded to the process.on('error') event before reaching process.on('uncaughtException') . This behavior is deprecated and will change in a future release
to align EventTarget with other Node.js APIs. Any code relying on the process.on('error') event should be aligned with the new behavior.

Class: Event
The Event object is an adaptation of the Event Web API . Instances are created internally by Node.js.

event.bubbles
Type: <boolean> Always returns false .

This is not used in Node.js and is provided purely for completeness.

event.cancelBubble()
Alias for event.stopPropagation() . This is not used in Node.js and is provided purely for completeness.

event.cancelable
Type: <boolean> True if the event was created with the cancelable option.

event.composed
Type: <boolean> Always returns false .

This is not used in Node.js and is provided purely for completeness.

event.composedPath()
Returns an array containing the current EventTarget as the only entry or empty if the event is not being dispatched. This is not used in Node.js and is provided purely for completeness.

event.currentTarget
Type: <EventTarget> The EventTarget dispatching the event.

Alias for event.target .

event.defaultPrevented
Type: <boolean>

Is true if cancelable is true and event.preventDefault() has been called.

event.eventPhase
Type: <number> Returns 0 while an event is not being dispatched, 2 while it is being dispatched.

This is not used in Node.js and is provided purely for completeness.

event.isTrusted
Type: <boolean>

The <AbortSignal> "abort" event is emitted with isTrusted set to true . The value is false in all other cases.

event.preventDefault()
Sets the defaultPrevented property to true if cancelable is true .

event.returnValue
Type: <boolean> True if the event has not been canceled.

This is not used in Node.js and is provided purely for completeness.

event.srcElement
Type: <EventTarget> The EventTarget dispatching the event.

Alias for event.target .

event.stopImmediatePropagation()
Stops the invocation of event listeners after the current one completes.

event.stopPropagation()
This is not used in Node.js and is provided purely for completeness.

event.target
Type: <EventTarget> The EventTarget dispatching the event.

event.timeStamp
Type: <number>

The millisecond timestamp when the Event object was created.

event.type
Type: <string>

The event type identifier.

Class: EventTarget

eventTarget.addEventListener(type, listener[, options])


type <string>

listener <Function> | <EventListener>

options <Object>
once <boolean> When true , the listener is automatically removed when it is first invoked. Default: false .

passive <boolean> When true , serves as a hint that the listener will not call the Event object's preventDefault() method. Default: false .

capture <boolean> Not directly used by Node.js. Added for API completeness. Default: false .

Adds a new handler for the type event. Any given listener is added only once per type and per capture option value.

If the once option is true , the listener is removed after the next time a type event is dispatched.

The capture option is not used by Node.js in any functional way other than tracking registered event listeners per the EventTarget specification. Specifically, the capture option is used as
part of the key when registering a listener . Any individual listener may be added once with capture = false , and once with capture = true .

function handler(event) {}

const target = new EventTarget();
target.addEventListener('foo', handler, { capture: true }); // first
target.addEventListener('foo', handler, { capture: false }); // second

// Removes the second instance of handler
target.removeEventListener('foo', handler);

// Removes the first instance of handler
target.removeEventListener('foo', handler, { capture: true });

eventTarget.dispatchEvent(event)
event <Object> | <Event>

Dispatches the event to the list of handlers for event.type . The event may be an Event object or any object with a type property whose value is a string .

The registered event listeners are invoked synchronously, in the order they were registered.
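A minimal sketch (illustrative, not from the docs), using Event and EventTarget as in the earlier examples in this section:

const target = new EventTarget();

target.addEventListener('foo', (event) => {
  console.log(`handled '${event.type}'`);
});

target.dispatchEvent(new Event('foo'));
// Prints: handled 'foo'
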
eventTarget.removeEventListener(type, listener[, options])
type <string>

listener <Function> | <EventListener>

options <Object>
capture <boolean>

Removes the listener from the list of handlers for event type .

Class: NodeEventTarget
Extends: <EventTarget>

The NodeEventTarget is a Node.js-specific extension to EventTarget that emulates a subset of the EventEmitter API.

nodeEventTarget.addListener(type, listener[, options])


type <string>

listener <Function> | <EventListener>

options <Object>

once <boolean>

Returns: <EventTarget> this

Node.js-specific extension to the EventTarget class that emulates the equivalent EventEmitter API. The only difference between addListener() and addEventListener() is that
addListener() will return a reference to the EventTarget .

nodeEventTarget.eventNames()
Returns: <string[]>

Node.js-specific extension to the EventTarget class that returns an array of event type names for which event listeners are registered.

nodeEventTarget.listenerCount(type)
type <string>

Returns: <number>

Node.js-specific extension to the EventTarget class that returns the number of event listeners registered for the type .
nodeEventTarget.off(type, listener)
type <string>

listener <Function> | <EventListener>

Returns: <EventTarget> this

Node.js-specific alias for eventTarget.removeListener() .

nodeEventTarget.on(type, listener[, options])


type <string>

listener <Function> | <EventListener>

options <Object>

once <boolean>

Returns: <EventTarget> this

Node.js-specific alias for eventTarget.addListener() .

nodeEventTarget.once(type, listener[, options])


type <string>

listener <Function> | <EventListener>

options <Object>

Returns: <EventTarget> this

Node.js-specific extension to the EventTarget class that adds a once listener for the given event type . This is equivalent to calling on with the once option set to true .

nodeEventTarget.removeAllListeners([type])
type <string>

Returns: <EventTarget> this

Node.js-specific extension to the EventTarget class. If type is specified, removes all registered listeners for type , otherwise removes all registered listeners.

nodeEventTarget.removeListener(type, listener)
type <string>

listener <Function> | <EventListener>

Returns: <EventTarget> this

Node.js-specific extension to the EventTarget class that removes the listener for the given type . The only difference between removeListener() and removeEventListener() is that
removeListener() will return a reference to the EventTarget .
Node.js v15.12.0 Documentation

File system
Stability: 2 - Stable

Source Code: lib/fs.js

The fs module enables interacting with the file system in a way modeled on standard POSIX functions.

To use the promise-based APIs:

// Using ESM syntax:
import * as fs from 'fs/promises';

// Using CommonJS syntax:
const fs = require('fs/promises');

To use the callback and sync APIs:

// Using ESM syntax:
import * as fs from 'fs';

// Using CommonJS syntax:
const fs = require('fs');

All file system operations have synchronous, callback, and promise-based forms, and are accessible using both CommonJS syntax and ES6 Modules (ESM).

Promise example
Promise-based operations return a promise that is fulfilled when the asynchronous operation is complete.

// Using ESM syntax:
import { unlink } from 'fs/promises';

try {
  await unlink('/tmp/hello');
  console.log('successfully deleted /tmp/hello');
} catch (error) {
  console.error('there was an error:', error.message);
}

// Using CommonJS syntax:
const { unlink } = require('fs/promises');

(async function(path) {
  try {
    await unlink(path);
    console.log(`successfully deleted ${path}`);
  } catch (error) {
    console.error('there was an error:', error.message);
  }
})('/tmp/hello');

Callback example
The callback form takes a completion callback function as its last argument and invokes the operation asynchronously. The arguments passed to the completion callback depend on the
method, but the first argument is always reserved for an exception. If the operation is completed successfully, then the first argument is null or undefined .

// Using ESM syntax:
import { unlink } from 'fs';

unlink('/tmp/hello', (err) => {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});

// Using CommonJS syntax:
const { unlink } = require('fs');

unlink('/tmp/hello', (err) => {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});

The callback-based versions of the fs module APIs are preferable over the promise APIs when maximal performance (both in terms of execution time and memory allocation) is required.

Synchronous example
The synchronous APIs block the Node.js event loop and further JavaScript execution until the operation is complete. Exceptions are thrown immediately and can be handled using try…
catch , or can be allowed to bubble up.

// Using ESM syntax:
import { unlinkSync } from 'fs';

try {
  unlinkSync('/tmp/hello');
  console.log('successfully deleted /tmp/hello');
} catch (err) {
  // handle the error
}

// Using CommonJS syntax:
const { unlinkSync } = require('fs');

try {
  unlinkSync('/tmp/hello');
  console.log('successfully deleted /tmp/hello');
} catch (err) {
  // handle the error
}

Promises API
The fs/promises API provides asynchronous file system methods that return promises.

The promise APIs use the underlying Node.js threadpool to perform file system operations off the event loop thread. These operations are not synchronized or threadsafe. Care must be
taken when performing multiple concurrent modifications on the same file or data corruption may occur.
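For instance, a minimal sketch (illustrative, not from the docs) that serializes writes to the same file by awaiting each operation before starting the next:

import { open } from 'fs/promises';

const filehandle = await open('data.txt', 'w');
try {
  await filehandle.write('first\n');
  await filehandle.write('second\n'); // starts only after the first write completes
} finally {
  await filehandle.close();
}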

Class: FileHandle
A <FileHandle> object is an object wrapper for a numeric file descriptor.

Instances of the <FileHandle> object are created by the fsPromises.open() method.

All <FileHandle> objects are <EventEmitter> s.

If a <FileHandle> is not closed using the filehandle.close() method, it will try to automatically close the file descriptor and emit a process warning, helping to prevent memory leaks.
Please do not rely on this behavior because it can be unreliable and the file may not be closed. Instead, always explicitly close <FileHandle> s. Node.js may change this behavior in the future.

Event: 'close'
The 'close' event is emitted when the <FileHandle> has been closed and can no longer be used.

filehandle.appendFile(data[, options])
data <string> | <Buffer> | <TypedArray> | <DataView>

options <Object> | <string>


encoding <string> | <null> Default: 'utf8'

Returns: <Promise> Fulfills with undefined upon success.

Alias of filehandle.writeFile() .

When operating on file handles, the mode cannot be changed from what it was set to with fsPromises.open() . Therefore, this is equivalent to filehandle.writeFile() .

filehandle.chmod(mode)
mode <integer> the file mode bit mask.

Returns: <Promise> Fulfills with undefined upon success.

Modifies the permissions on the file. See chmod(2) .

filehandle.chown(uid, gid)
uid <integer> The file's new owner's user id.

gid <integer> The file's new group's group id.

Returns: <Promise> Fulfills with undefined upon success.

Changes the ownership of the file. A wrapper for chown(2) .

filehandle.close()
Returns: <Promise> Fulfills with undefined upon success.

Closes the file handle after waiting for any pending operation on the handle to complete.

import { open } from 'fs/promises';

let filehandle;
try {
filehandle = await open('thefile.txt', 'r');
} finally {
await filehandle?.close();
}

filehandle.datasync()
Returns: <Promise> Fulfills with undefined upon success.

Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for
details.

Unlike filehandle.sync this method does not flush modified metadata.

filehandle.fd
<number> The numeric file descriptor managed by the <FileHandle> object.

filehandle.read(buffer, offset, length, position)


buffer <Buffer> | <Uint8Array> A buffer that will be filled with the file data read.

offset <integer> The location in the buffer at which to start filling. Default: 0

length <integer> The number of bytes to read. Default: buffer.length

position <integer> The location where to begin reading data from the file. If null , data will be read from the current file position, and the position will be updated. If position is an
integer, the current file position will remain unchanged.
Returns: <Promise> Fulfills upon success with an object with two properties:
bytesRead <integer> The number of bytes read

buffer <Buffer> | <Uint8Array> A reference to the passed in buffer argument.

Reads data from the file and stores that in the given buffer.

If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.
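A minimal sketch (illustrative, not from the docs), reading the first 64 bytes of a file:

import { open } from 'fs/promises';

const filehandle = await open('thefile.txt', 'r');
try {
  const buffer = Buffer.alloc(64);
  const { bytesRead } = await filehandle.read(buffer, 0, buffer.length, 0);
  console.log(`read ${bytesRead} bytes`);
} finally {
  await filehandle.close();
}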

filehandle.read(options)
options <Object>
buffer <Buffer> | <Uint8Array> A buffer that will be filled with the file data read. Default: Buffer.alloc(16384)

offset <integer> The location in the buffer at which to start filling. Default: 0

length <integer> The number of bytes to read. Default: buffer.length

position <integer> The location where to begin reading data from the file. If null, data will be read from the current file position, and the position will be updated. If position is an integer, the current file position will remain unchanged. Default: null

Returns: <Promise> Fulfills upon success with an object with two properties:
bytesRead <integer> The number of bytes read

buffer <Buffer> | <Uint8Array> A reference to the passed in buffer argument.

Reads data from the file and stores that in the given buffer.

If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.

filehandle.readFile(options)
options <Object> | <string>
encoding <string> | <null> Default: null

signal <AbortSignal> allows aborting an in-progress readFile

Returns: <Promise> Fulfills upon a successful read with the contents of the file. If no encoding is specified (using options.encoding ), the data is returned as a <Buffer> object.
Otherwise, the data will be a string.

Asynchronously reads the entire contents of a file.

If options is a string, then it specifies the encoding .

The <FileHandle> has to support reading.

If one or more filehandle.read() calls are made on a file handle and then a filehandle.readFile() call is made, the data will be read from the current position till the end of the file. It
doesn't always read from the beginning of the file.
filehandle.readv(buffers[, position])
buffers <Buffer[]> | <TypedArray[]> | <DataView[]>

position <integer> The offset from the beginning of the file where the data should be read from. If position is not a number , the data will be read from the current position.

Returns: <Promise> Fulfills upon success with an object containing two properties:
bytesRead <integer> the number of bytes read

buffers <Buffer[]> | <TypedArray[]> | <DataView[]> property containing a reference to the buffers input.

Read from a file and write to an array of <ArrayBufferView>s.

filehandle.stat([options])
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

Returns: <Promise> Fulfills with an <fs.Stats> for the file.

filehandle.sync()
Returns: <Promise> Fulfills with undefined upon success.

Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2)
documentation for more detail.

filehandle.truncate(len)
len <integer> Default: 0

Returns: <Promise> Fulfills with undefined upon success.

Truncates the file.

If the file was larger than len bytes, only the first len bytes will be retained in the file.

The following example retains only the first four bytes of the file:

import { open } from 'fs/promises';

let filehandle = null;
try {
  filehandle = await open('temp.txt', 'r+');
  await filehandle.truncate(4);
} finally {
  await filehandle?.close();
}

If the file previously was shorter than len bytes, it is extended, and the extended part is filled with null bytes ('\0').

If len is negative then 0 will be used.

filehandle.utimes(atime, mtime)
atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

Returns: <Promise>

Change the file system timestamps of the object referenced by the <FileHandle> then resolves the promise with no arguments upon success.

This function does not work on AIX versions before 7.1; it will reject the promise with an error using code UV_ENOSYS.

filehandle.write(buffer[, offset[, length[, position]]])


buffer <Buffer> | <Uint8Array> | <string> | <Object>

offset <integer> The start position from within buffer where the data to write begins.

length <integer> The number of bytes from buffer to write.

position <integer> The offset from the beginning of the file where the data from buffer should be written. If position is not a number , the data will be written at the current
position. See the POSIX pwrite(2) documentation for more detail.
Returns: <Promise>

Write buffer to the file.

The promise is resolved with an object containing two properties:

bytesWritten <integer> the number of bytes written

buffer <Buffer> | <Uint8Array> | <string> | <Object> a reference to the buffer written.

It is unsafe to use filehandle.write() multiple times on the same file without waiting for the promise to be resolved (or rejected). For this scenario, use fs.createWriteStream() .

On Linux, positional writes do not work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

filehandle.write(string[, position[, encoding]])


string <string> | <Object>
position <integer> The offset from the beginning of the file where the data from string should be written. If position is not a number the data will be written at the current
position. See the POSIX pwrite(2) documentation for more detail.
encoding <string> The expected string encoding. Default: 'utf8'

Returns: <Promise>

Write string to the file. If string is not a string, or an object with an own toString function property, the promise is rejected with an error.

The promise is resolved with an object containing two properties:

bytesWritten <integer> the number of bytes written

buffer <string> | <Object> a reference to the string written.

It is unsafe to use filehandle.write() multiple times on the same file without waiting for the promise to be resolved (or rejected). For this scenario, use fs.createWriteStream() .

On Linux, positional writes do not work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

filehandle.writeFile(data, options)
data <string> | <Buffer> | <Uint8Array> | <Object>

options <Object> | <string>


encoding <string> | <null> The expected character encoding when data is a string. Default: 'utf8'

Returns: <Promise>

Asynchronously writes data to a file, replacing the file if it already exists. data can be a string, a buffer, or an object with an own toString function property. The promise is resolved with no
arguments upon success.

If options is a string, then it specifies the encoding .

The <FileHandle> has to support writing.

It is unsafe to use filehandle.writeFile() multiple times on the same file without waiting for the promise to be resolved (or rejected).

If one or more filehandle.write() calls are made on a file handle and then a filehandle.writeFile() call is made, the data will be written from the current position till the end of the file.
It doesn't always write from the beginning of the file.

filehandle.writev(buffers[, position])
buffers <Buffer[]> | <TypedArray[]> | <DataView[]>

position <integer> The offset from the beginning of the file where the data from buffers should be written. If position is not a number , the data will be written at the current
position.
Returns: <Promise>

Write an array of <ArrayBufferView>s to the file.

The promise is resolved with an object containing two properties:

bytesWritten <integer> the number of bytes written

buffers <Buffer[]> | <TypedArray[]> | <DataView[]> a reference to the buffers input.

It is unsafe to call writev() multiple times on the same file without waiting for the promise to be resolved (or rejected).

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
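
A minimal sketch, assuming a writable handle on a placeholder file 'out.txt':

import { open } from 'fs/promises';

const filehandle = await open('out.txt', 'w');
// Both buffers are written in order as a single vectored write.
const { bytesWritten } = await filehandle.writev([
  Buffer.from('hello '),
  Buffer.from('world\n'),
]);
console.log(bytesWritten); // 12
await filehandle.close();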

fsPromises.access(path[, mode])
path <string> | <Buffer> | <URL>

mode <integer> Default: fs.constants.F_OK

Returns: <Promise> Fulfills with undefined upon success.

Tests a user's permissions for the file or directory specified by path . The mode argument is an optional integer that specifies the accessibility checks to be performed. Check File access
constants for possible values of mode . It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.W_OK | fs.constants.R_OK ).

If the accessibility check is successful, the promise is resolved with no value. If any of the accessibility checks fail, the promise is rejected with an <Error> object. The following example
checks if the file /etc/passwd can be read and written by the current process.

import { access } from 'fs/promises';
import { constants } from 'fs';

try {
  await access('/etc/passwd', constants.R_OK | constants.W_OK);
  console.log('can access');
} catch {
  console.error('cannot access');
}

Using fsPromises.access() to check for the accessibility of a file before calling fsPromises.open() is not recommended. Doing so introduces a race condition, since other processes may
change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file is not accessible.

fsPromises.appendFile(path, data[, options])


path <string> | <Buffer> | <URL> | <FileHandle> filename or <FileHandle>

data <string> | <Buffer>

options <Object> | <string>


encoding <string> | <null> Default: 'utf8'
mode <integer> Default: 0o666

flag <string> See support of file system flags . Default: 'a' .

Returns: <Promise> Fulfills with undefined upon success.

Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a <Buffer> .

If options is a string, then it specifies the encoding .

The path may be specified as a <FileHandle> that has been opened for appending (using fsPromises.open() ).
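
A minimal sketch mirroring the callback example shown later in this document ('message.txt' is a placeholder):

import { appendFile } from 'fs/promises';

try {
  await appendFile('message.txt', 'data to append');
  console.log('The "data to append" was appended to file!');
} catch (err) {
  console.error(err);
}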

fsPromises.chmod(path, mode)
path <string> | <Buffer> | <URL>

mode <string> | <integer>

Returns: <Promise> Fulfills with undefined upon success.

Changes the permissions of a file.
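
For example (a sketch; 'my_file.txt' is a placeholder, and the octal mode follows the File modes table later in this document):

import { chmod } from 'fs/promises';

try {
  // 0o644: owner may read and write; group and others may only read.
  await chmod('my_file.txt', 0o644);
} catch (err) {
  console.error(err);
}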

fsPromises.chown(path, uid, gid)


path <string> | <Buffer> | <URL>

uid <integer>

gid <integer>

Returns: <Promise> Fulfills with undefined upon success.

Changes the ownership of a file.

fsPromises.copyFile(src, dest[, mode])


src <string> | <Buffer> | <URL> source filename to copy

dest <string> | <Buffer> | <URL> destination filename of the copy operation

mode <integer> Optional modifiers that specify the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g.
fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE ) Default: 0 .
fs.constants.COPYFILE_EXCL : The copy operation will fail if dest already exists.

fs.constants.COPYFILE_FICLONE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy
mechanism is used.
fs.constants.COPYFILE_FICLONE_FORCE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will
fail.
Returns: <Promise> Fulfills with undefined upon success.
Asynchronously copies src to dest . By default, dest is overwritten if it already exists.

No guarantees are made about the atomicity of the copy operation. If an error occurs after the destination file has been opened for writing, an attempt will be made to remove the
destination.

import { constants } from 'fs';
import { copyFile } from 'fs/promises';

try {
  await copyFile('source.txt', 'destination.txt');
  console.log('source.txt was copied to destination.txt');
} catch {
  console.log('The file could not be copied');
}

// By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
try {
  await copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL);
  console.log('source.txt was copied to destination.txt');
} catch {
  console.log('The file could not be copied');
}

fsPromises.lchmod(path, mode)
path <string> | <Buffer> | <URL>

mode <integer>

Returns: <Promise> Fulfills with undefined upon success.

Changes the permissions on a symbolic link.

This method is only implemented on macOS.

fsPromises.lchown(path, uid, gid)


path <string> | <Buffer> | <URL>

uid <integer>

gid <integer>

Returns: <Promise> Fulfills with undefined upon success.


Changes the ownership on a symbolic link.

fsPromises.lutimes(path, atime, mtime)


path <string> | <Buffer> | <URL>

atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

Returns: <Promise> Fulfills with undefined upon success.

Changes the access and modification times of a file in the same way as fsPromises.utimes() , with the difference that if the path refers to a symbolic link, then the link is not dereferenced:
instead, the timestamps of the symbolic link itself are changed.

fsPromises.link(existingPath, newPath)
existingPath <string> | <Buffer> | <URL>

newPath <string> | <Buffer> | <URL>

Returns: <Promise> Fulfills with undefined upon success.

Creates a new link from the existingPath to the newPath . See the POSIX link(2) documentation for more detail.

fsPromises.lstat(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

Returns: <Promise> Fulfills with the <fs.Stats> object for the given symbolic link path .

Equivalent to fsPromises.stat() when path refers to a symbolic link. Refer to the POSIX lstat(2) documentation for more detail.

fsPromises.mkdir(path[, options])
path <string> | <Buffer> | <URL>

options <Object> | <integer>


recursive <boolean> Default: false

mode <string> | <integer> Not supported on Windows. Default: 0o777 .

Returns: <Promise> Upon success, fulfills with undefined if recursive is false , or the first directory path created if recursive is true .

Asynchronously creates a directory.


The optional options argument can be an integer specifying mode (permission and sticky bits), or an object with a mode property and a recursive property indicating whether parent
directories should be created. Calling fsPromises.mkdir() when path is a directory that exists results in a rejection only when recursive is false.
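
A sketch of a recursive creation; the './tmp/a/apple' path is illustrative:

import { mkdir } from 'fs/promises';

try {
  // Creates ./tmp/a/apple along with any missing parent directories.
  const first = await mkdir('./tmp/a/apple', { recursive: true });
  console.log(first); // The first directory actually created, e.g. './tmp'.
} catch (err) {
  console.error(err);
}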

fsPromises.mkdtemp(prefix[, options])
prefix <string>

options <string> | <Object>


encoding <string> Default: 'utf8'

Returns: <Promise> Fulfills with a string containing the filesystem path of the newly created temporary directory.

Creates a unique temporary directory. A unique directory name is generated by appending six random characters to the end of the provided prefix . Due to platform inconsistencies, avoid
trailing X characters in prefix . Some platforms, notably the BSDs, can return more than six random characters, and replace trailing X characters in prefix with random characters.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.

import { mkdtemp } from 'fs/promises';
import path from 'path';
import os from 'os';

try {
  await mkdtemp(path.join(os.tmpdir(), 'foo-'));
} catch (err) {
  console.error(err);
}

The fsPromises.mkdtemp() method will append the six randomly selected characters directly to the prefix string. For instance, given a directory /tmp , if the intention is to create a
temporary directory within /tmp , the prefix must end with a trailing platform-specific path separator ( require('path').sep ).

fsPromises.open(path, flags[, mode])


path <string> | <Buffer> | <URL>

flags <string> | <number> See support of file system flags . Default: 'r' .

mode <string> | <integer> Sets the file mode (permission and sticky bits) if the file is created. Default: 0o666 (readable and writable)

Returns: <Promise> Fulfills with a <FileHandle> object.

Opens a <FileHandle> .

Refer to the POSIX open(2) documentation for more detail.

Some characters ( < > : " / \ | ? * ) are reserved under Windows as documented by Naming Files, Paths, and Namespaces . Under NTFS, if the filename contains a colon, Node.js will
open a file system stream, as described by this MSDN page .
fsPromises.opendir(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
encoding <string> | <null> Default: 'utf8'

bufferSize <number> Number of directory entries that are buffered internally when reading from the directory. Higher values lead to better performance but higher memory
usage. Default: 32
Returns: <Promise> Fulfills with an <fs.Dir> .

Asynchronously open a directory for iterative scanning. See the POSIX opendir(3) documentation for more detail.

Creates an <fs.Dir> , which contains all further functions for reading from and cleaning up the directory.

The encoding option sets the encoding for the path while opening the directory and subsequent read operations.

Example using async iteration:

import { opendir } from 'fs/promises';

try {
  const dir = await opendir('./');
  for await (const dirent of dir)
    console.log(dirent.name);
} catch (err) {
  console.error(err);
}

fsPromises.readdir(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

withFileTypes <boolean> Default: false

Returns: <Promise> Fulfills with an array of the names of the files in the directory excluding '.' and '..' .

Reads the contents of a directory.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames. If the encoding is
set to 'buffer' , the filenames returned will be passed as <Buffer> objects.
If options.withFileTypes is set to true , the resolved array will contain <fs.Dirent> objects.

import { readdir } from 'fs/promises';

try {
  const files = await readdir(path);
  for (const file of files)
    console.log(file);
} catch (err) {
  console.error(err);
}
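
When withFileTypes is true , each entry is an <fs.Dirent> that can be inspected without an extra stat call. A sketch (the directory './' is a placeholder):

import { readdir } from 'fs/promises';

// List only the subdirectories of the current directory.
const entries = await readdir('./', { withFileTypes: true });
for (const entry of entries) {
  if (entry.isDirectory())
    console.log(entry.name);
}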

fsPromises.readFile(path[, options])
path <string> | <Buffer> | <URL> | <FileHandle> filename or FileHandle

options <Object> | <string>


encoding <string> | <null> Default: null

flag <string> See support of file system flags . Default: 'r' .

signal <AbortSignal> allows aborting an in-progress readFile

Returns: <Promise> Fulfills with the contents of the file.

Asynchronously reads the entire contents of a file.

If no encoding is specified (using options.encoding ), the data is returned as a <Buffer> object. Otherwise, the data will be a string.

If options is a string, then it specifies the encoding.

When the path is a directory, the behavior of fsPromises.readFile() is platform-specific. On macOS, Linux, and Windows, the promise will be rejected with an error. On FreeBSD, a
representation of the directory's contents will be returned.

It is possible to abort an ongoing readFile using an <AbortSignal> . If a request is aborted the promise returned is rejected with an AbortError :

import { readFile } from 'fs/promises';

try {
  const controller = new AbortController();
  const signal = controller.signal;
  const promise = readFile(fileName, { signal });

  // Abort the request before the promise settles.
  controller.abort();

  await promise;
} catch (err) {
  console.error(err);
}

Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.readFile performs.

Any specified <FileHandle> has to support reading.

fsPromises.readlink(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

Returns: <Promise> Fulfills with the linkString upon success.

Reads the contents of the symbolic link referred to by path . See the POSIX readlink(2) documentation for more detail. The promise is resolved with the linkString upon success.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path returned. If the
encoding is set to 'buffer' , the link path returned will be passed as a <Buffer> object.

fsPromises.realpath(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

Returns: <Promise> Fulfills with the resolved path upon success.

Determines the actual location of path using the same semantics as the fs.realpath.native() function.

Only paths that can be converted to UTF8 strings are supported.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path. If the encoding is set
to 'buffer' , the path returned will be passed as a <Buffer> object.

On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.

fsPromises.rename(oldPath, newPath)
oldPath <string> | <Buffer> | <URL>
newPath <string> | <Buffer> | <URL>

Returns: <Promise> Fulfills with undefined upon success.

Renames oldPath to newPath .
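
A minimal sketch, mirroring the callback example for fs.rename() later in this document:

import { rename } from 'fs/promises';

try {
  await rename('oldFile.txt', 'newFile.txt');
  console.log('Rename complete!');
} catch (err) {
  console.error(err);
}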

fsPromises.rmdir(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js retries the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .
recursive <boolean> If true , perform a recursive directory removal. In recursive mode, errors are not reported if path does not exist, and operations are retried on failure.
Default: false .

retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .

Returns: <Promise> Fulfills with undefined upon success.

Removes the directory identified by path .

Using fsPromises.rmdir() on a file (not a directory) results in the promise being rejected with an ENOENT error on Windows and an ENOTDIR error on POSIX.

Setting recursive to true results in behavior similar to the Unix command rm -rf : an error will not be raised for paths that do not exist, and paths that represent files will be deleted. The permissive behavior of the recursive option is deprecated; ENOTDIR and ENOENT will be thrown in the future.

fsPromises.rm(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
force <boolean> When true , exceptions will be ignored if path does not exist. Default: false .

maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .

recursive <boolean> If true , perform a recursive directory removal. In recursive mode operations are retried on failure. Default: false .

retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .

Returns: <Promise> Fulfills with undefined upon success.

Removes files and directories (modeled on the standard POSIX rm utility).
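
For example (a sketch; './build' is a placeholder path):

import { rm } from 'fs/promises';

// Remove a directory tree; force ignores a missing path, much like rm -rf.
await rm('./build', { recursive: true, force: true });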

fsPromises.stat(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

Returns: <Promise> Fulfills with the <fs.Stats> object for the given path .
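
A minimal sketch of inspecting the fulfilled <fs.Stats> object (using '/etc/passwd' as in the earlier access example):

import { stat } from 'fs/promises';

const stats = await stat('/etc/passwd');
console.log(stats.isFile()); // true for a regular file
console.log(stats.size); // Size in bytes.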

fsPromises.symlink(target, path[, type])


target <string> | <Buffer> | <URL>

path <string> | <Buffer> | <URL>

type <string> Default: 'file'

Returns: <Promise> Fulfills with undefined upon success.

Creates a symbolic link.

The type argument is only used on Windows platforms and can be one of 'dir' , 'file' , or 'junction' . Windows junction points require the destination path to be absolute. When using
'junction' , the target argument will automatically be normalized to absolute path.

fsPromises.truncate(path[, len])
path <string> | <Buffer> | <URL>

len <integer> Default: 0

Returns: <Promise> Fulfills with undefined upon success.

Truncates (shortens or extends the length) of the content at path to len bytes.

fsPromises.unlink(path)
path <string> | <Buffer> | <URL>

Returns: <Promise> Fulfills with undefined upon success.

If path refers to a symbolic link, then the link is removed without affecting the file or directory to which that link refers. If the path refers to a file path that is not a symbolic link, the file is
deleted. See the POSIX unlink(2) documentation for more detail.

fsPromises.utimes(path, atime, mtime)


path <string> | <Buffer> | <URL>

atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

Returns: <Promise> Fulfills with undefined upon success.

Change the file system timestamps of the object referenced by path .


The atime and mtime arguments follow these rules:

Values can be either numbers representing Unix epoch time, Dates, or a numeric string like '123456789.0' .

If the value can not be converted to a number, or is NaN , Infinity or -Infinity , an Error will be thrown.
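
For example (a sketch; 'message.txt' is a placeholder):

import { utimes } from 'fs/promises';

// Set both the access and modification times to the current time.
const now = new Date();
await utimes('message.txt', now, now);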

fsPromises.watch(filename[, options])
filename <string> | <Buffer> | <URL>

options <string> | <Object>


persistent <boolean> Indicates whether the process should continue to run as long as files are being watched. Default: true .

recursive <boolean> Indicates whether all subdirectories should be watched, or only the current directory. This applies when a directory is specified, and only on supported
platforms (See caveats ). Default: false .

encoding <string> Specifies the character encoding to be used for the filename passed to the listener. Default: 'utf8' .

signal <AbortSignal> An <AbortSignal> used to signal when the watcher should stop.

Returns: <AsyncIterator> of objects with the properties:


eventType <string> The type of change

filename <string> | <Buffer> The name of the file changed.

Returns an async iterator that watches for changes on filename , where filename is either a file or a directory.

const { watch } = require('fs/promises');

const ac = new AbortController();
const { signal } = ac;
setTimeout(() => ac.abort(), 10000);

(async () => {
  try {
    const watcher = watch(__filename, { signal });
    for await (const event of watcher)
      console.log(event);
  } catch (err) {
    if (err.name === 'AbortError')
      return;
    throw err;
  }
})();

On most platforms, 'rename' is emitted whenever a filename appears or disappears in the directory.

All the caveats for fs.watch() also apply to fsPromises.watch() .

fsPromises.writeFile(file, data[, options])


file <string> | <Buffer> | <URL> | <FileHandle> filename or FileHandle

data <string> | <Buffer> | <Uint8Array> | <Object>

options <Object> | <string>


encoding <string> | <null> Default: 'utf8'

mode <integer> Default: 0o666

flag <string> See support of file system flags . Default: 'w' .

signal <AbortSignal> allows aborting an in-progress writeFile

Returns: <Promise> Fulfills with undefined upon success.

Asynchronously writes data to a file, replacing the file if it already exists. data can be a string, a <Buffer> , or an object with an own toString function property.

The encoding option is ignored if data is a buffer.

If options is a string, then it specifies the encoding.

Any specified <FileHandle> has to support writing.

It is unsafe to use fsPromises.writeFile() multiple times on the same file without waiting for the promise to be settled.

Similarly to fsPromises.readFile , fsPromises.writeFile is a convenience method that performs multiple write calls internally to write the buffer passed to it. For performance-sensitive code, consider using fs.createWriteStream() .

It is possible to use an <AbortSignal> to cancel an fsPromises.writeFile() . Cancelation is "best effort", and some amount of data is likely still to be written.

import { writeFile } from 'fs/promises';

try {
  const controller = new AbortController();
  const { signal } = controller;
  const data = new Uint8Array(Buffer.from('Hello Node.js'));
  const promise = writeFile('message.txt', data, { signal });

  // Abort the request before the promise settles.
  controller.abort();

  await promise;
} catch (err) {
  // When a request is aborted - err is an AbortError
  console.error(err);
}

Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.writeFile performs.

Callback API
The callback APIs perform all operations asynchronously, without blocking the event loop, then invoke a callback function upon completion or error.

The callback APIs use the underlying Node.js threadpool to perform file system operations off the event loop thread. These operations are not synchronized or threadsafe. Care must be
taken when performing multiple concurrent modifications on the same file or data corruption may occur.

fs.access(path[, mode], callback)


path <string> | <Buffer> | <URL>

mode <integer> Default: fs.constants.F_OK

callback <Function>
err <Error>

Tests a user's permissions for the file or directory specified by path . The mode argument is an optional integer that specifies the accessibility checks to be performed. Check File access
constants for possible values of mode . It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.W_OK | fs.constants.R_OK ).

The final argument, callback , is a callback function that is invoked with a possible error argument. If any of the accessibility checks fail, the error argument will be an Error object. The
following examples check if package.json exists, and if it is readable or writable.

import { access, constants } from 'fs';

const file = 'package.json';

// Check if the file exists in the current directory.


access(file, constants.F_OK, (err) => {
  console.log(`${file} ${err ? 'does not exist' : 'exists'}`);
});

// Check if the file is readable.
access(file, constants.R_OK, (err) => {
  console.log(`${file} ${err ? 'is not readable' : 'is readable'}`);
});

// Check if the file is writable.
access(file, constants.W_OK, (err) => {
  console.log(`${file} ${err ? 'is not writable' : 'is writable'}`);
});

// Check if the file exists in the current directory, and if it is writable.
access(file, constants.F_OK | constants.W_OK, (err) => {
  if (err) {
    console.error(
      `${file} ${err.code === 'ENOENT' ? 'does not exist' : 'is read-only'}`);
  } else {
    console.log(`${file} exists, and it is writable`);
  }
});

Do not use fs.access() to check for the accessibility of a file before calling fs.open() , fs.readFile() or fs.writeFile() . Doing so introduces a race condition, since other processes
may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file is not accessible.

write (NOT RECOMMENDED)

import { access, open, close } from 'fs';

access('myfile', (err) => {
  if (!err) {
    console.error('myfile already exists');
    return;
  }

  open('myfile', 'wx', (err, fd) => {
    if (err) throw err;

    try {
      writeMyData(fd);
    } finally {
      close(fd, (err) => {
        if (err) throw err;
      });
    }
  });
});
write (RECOMMENDED)

import { open, close } from 'fs';

open('myfile', 'wx', (err, fd) => {
  if (err) {
    if (err.code === 'EEXIST') {
      console.error('myfile already exists');
      return;
    }

    throw err;
  }

  try {
    writeMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});

read (NOT RECOMMENDED)

import { access, open, close } from 'fs';


access('myfile', (err) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.error('myfile does not exist');
      return;
    }

    throw err;
  }

  open('myfile', 'r', (err, fd) => {
    if (err) throw err;

    try {
      readMyData(fd);
    } finally {
      close(fd, (err) => {
        if (err) throw err;
      });
    }
  });
});

read (RECOMMENDED)

import { open, close } from 'fs';

open('myfile', 'r', (err, fd) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.error('myfile does not exist');
      return;
    }

    throw err;
  }

  try {
    readMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});

The "not recommended" examples above check for accessibility and then use the file; the "recommended" examples are better because they use the file directly and handle the error, if any.

In general, check for the accessibility of a file only if the file will not be used directly, for example when its accessibility is a signal from another process.

On Windows, access-control policies (ACLs) on a directory may limit access to a file or directory. The fs.access() function, however, does not check the ACL and therefore may report that
a path is accessible even if the ACL restricts the user from reading or writing to it.
fs.appendFile(path, data[, options], callback)
path <string> | <Buffer> | <URL> | <number> filename or file descriptor

data <string> | <Buffer>

options <Object> | <string>


encoding <string> | <null> Default: 'utf8'

mode <integer> Default: 0o666

flag <string> See support of file system flags . Default: 'a' .

callback <Function>
err <Error>

Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a <Buffer> .

import { appendFile } from 'fs';

appendFile('message.txt', 'data to append', (err) => {
  if (err) throw err;
  console.log('The "data to append" was appended to file!');
});

If options is a string, then it specifies the encoding:

import { appendFile } from 'fs';

appendFile('message.txt', 'data to append', 'utf8', callback);

The path may be specified as a numeric file descriptor that has been opened for appending (using fs.open() or fs.openSync() ). The file descriptor will not be closed automatically.

import { open, close, appendFile } from 'fs';

function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}

open('message.txt', 'a', (err, fd) => {
  if (err) throw err;

  try {
    appendFile(fd, 'data to append', 'utf8', (err) => {
      closeFd(fd);
      if (err) throw err;
    });
  } catch (err) {
    closeFd(fd);
    throw err;
  }
});

fs.chmod(path, mode, callback)


path <string> | <Buffer> | <URL>

mode <string> | <integer>

callback <Function>
err <Error>

Asynchronously changes the permissions of a file. No arguments other than a possible exception are given to the completion callback.

See the POSIX chmod(2) documentation for more detail.

import { chmod } from 'fs';

chmod('my_file.txt', 0o775, (err) => {
  if (err) throw err;
  console.log('The permissions for file "my_file.txt" have been changed!');
});

File modes
The mode argument used in both the fs.chmod() and fs.chmodSync() methods is a numeric bitmask created using a logical OR of the following constants:

Constant               Octal   Description
fs.constants.S_IRUSR   0o400   read by owner
fs.constants.S_IWUSR   0o200   write by owner
fs.constants.S_IXUSR   0o100   execute/search by owner
fs.constants.S_IRGRP   0o40    read by group
fs.constants.S_IWGRP   0o20    write by group
fs.constants.S_IXGRP   0o10    execute/search by group
fs.constants.S_IROTH   0o4     read by others
fs.constants.S_IWOTH   0o2     write by others
fs.constants.S_IXOTH   0o1     execute/search by others

An easier method of constructing the mode is to use a sequence of three octal digits (e.g. 765 ). The left-most digit ( 7 in the example), specifies the permissions for the file owner. The middle
digit ( 6 in the example), specifies permissions for the group. The right-most digit ( 5 in the example), specifies the permissions for others.

Number   Description
7        read, write, and execute
6        read and write
5        read and execute
4        read only
3        write and execute
2        write only
1        execute only
0        no permission

For example, the octal value 0o765 means:

The owner may read, write and execute the file.


The group may read and write the file.
Others may read and execute the file.

When using raw numbers where file modes are expected, any value larger than 0o777 may result in platform-specific behaviors that are not supported to work consistently. Therefore
constants like S_ISVTX , S_ISGID or S_ISUID are not exposed in fs.constants .

Caveats: on Windows only the write permission can be changed, and the distinction among the permissions of group, owner or others is not implemented.
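
As an illustrative sketch (not from the original text), the same 0o644 mode built from the constants above; per the caveat, only the write bit has any effect on Windows:

import { chmod, constants } from 'fs';

// Owner read/write, group read, others read (equivalent to 0o644).
const mode = constants.S_IRUSR | constants.S_IWUSR |
             constants.S_IRGRP | constants.S_IROTH;

chmod('my_file.txt', mode, (err) => {
  if (err) throw err;
});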

fs.chown(path, uid, gid, callback)


path <string> | <Buffer> | <URL>

uid <integer>

gid <integer>

callback <Function>
err <Error>

Asynchronously changes owner and group of a file. No arguments other than a possible exception are given to the completion callback.

See the POSIX chown(2) documentation for more detail.

fs.close(fd[, callback])
fd <integer>

callback <Function>
err <Error>

Closes the file descriptor. No arguments other than a possible exception are given to the completion callback.

Calling fs.close() on any file descriptor ( fd ) that is currently in use through any other fs operation may lead to undefined behavior.

See the POSIX close(2) documentation for more detail.

fs.copyFile(src, dest[, mode], callback)


src <string> | <Buffer> | <URL> source filename to copy

dest <string> | <Buffer> | <URL> destination filename of the copy operation

mode <integer> modifiers for copy operation. Default: 0 .

callback <Function>

Asynchronously copies src to dest . By default, dest is overwritten if it already exists. No arguments other than a possible exception are given to the callback function. Node.js makes no
guarantees about the atomicity of the copy operation. If an error occurs after the destination file has been opened for writing, Node.js will attempt to remove the destination.

mode is an optional integer that specifies the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g.
fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE ).
fs.constants.COPYFILE_EXCL : The copy operation will fail if dest already exists.

fs.constants.COPYFILE_FICLONE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy mechanism is
used.
fs.constants.COPYFILE_FICLONE_FORCE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will fail.

import { copyFile, constants } from 'fs';

function callback(err) {
  if (err) throw err;
  console.log('source.txt was copied to destination.txt');
}

// destination.txt will be created or overwritten by default.
copyFile('source.txt', 'destination.txt', callback);

// By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL, callback);

fs.createReadStream(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


flags <string> See support of file system flags . Default: 'r' .

encoding <string> Default: null

fd <integer> | <FileHandle> Default: null

mode <integer> Default: 0o666

autoClose <boolean> Default: true

emitClose <boolean> Default: true

start <integer>

end <integer> Default: Infinity

highWaterMark <integer> Default: 64 * 1024

fs <Object> | <null> Default: null

Returns: <fs.ReadStream> See Readable Stream .

Unlike the 16 kb default highWaterMark for a readable stream, the stream returned by this method has a default highWaterMark of 64 kb.

options can include start and end values to read a range of bytes from the file instead of the entire file. Both start and end are inclusive and start counting at 0; allowed values are in the [0, Number.MAX_SAFE_INTEGER] range. If fd is specified and start is omitted or undefined , fs.createReadStream() reads sequentially from the current file position. The encoding can be any one of those accepted by <Buffer> .

If fd is specified, ReadStream will ignore the path argument and will use the specified file descriptor. This means that no 'open' event will be emitted. fd should be blocking; non-
blocking fd s should be passed to <net.Socket> .

If fd points to a character device that only supports blocking reads (such as keyboard or sound card), read operations do not finish until data is available. This can prevent the process from
exiting and the stream from closing naturally.

By default, the stream will emit a 'close' event after it has been destroyed, like most Readable streams. Set the emitClose option to false to change this behavior.

By providing the fs option, it is possible to override the corresponding fs implementations for open , read , and close . When providing the fs option, overrides for open , read , and
close are required.

import { createReadStream } from 'fs';

// Create a stream from some character device.


const stream = createReadStream('/dev/input/event0');
setTimeout(() => {
  stream.close(); // This may not close the stream.
  // Artificially marking end-of-stream, as if the underlying resource had
  // indicated end-of-file by itself, allows the stream to close.
  // This does not cancel pending read operations, and if there is such an
  // operation, the process may still not be able to exit successfully
  // until it finishes.
  stream.push(null);
  stream.read(0);
}, 100);

If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is the application's responsibility to close it and make sure there's no file descriptor leak. If autoClose
is set to true (default behavior), on 'error' or 'end' the file descriptor will be closed automatically.

mode sets the file mode (permission and sticky bits), but only if the file was created.

An example to read the last 10 bytes of a file which is 100 bytes long:

import { createReadStream } from 'fs';

createReadStream('sample.txt', { start: 90, end: 99 });


If options is a string, then it specifies the encoding.

fs.createWriteStream(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


flags <string> See support of file system flags . Default: 'w' .

encoding <string> Default: 'utf8'

fd <integer> | <FileHandle> Default: null

mode <integer> Default: 0o666

autoClose <boolean> Default: true

emitClose <boolean> Default: true

start <integer>

fs <Object> | <null> Default: null

Returns: <fs.WriteStream> See Writable Stream .

options may also include a start option to allow writing data at some position past the beginning of the file; allowed values are in the [0, Number.MAX_SAFE_INTEGER] range. Modifying a file rather than replacing it may require the flags option to be set to 'r+' rather than the default 'w' . The encoding can be any one of those accepted by <Buffer> .

If autoClose is set to true (default behavior) on 'error' or 'finish' the file descriptor will be closed automatically. If autoClose is false, then the file descriptor won't be closed, even if
there's an error. It is the application's responsibility to close it and make sure there's no file descriptor leak.

By default, the stream will emit a 'close' event after it has been destroyed, like most Writable streams. Set the emitClose option to false to change this behavior.

By providing the fs option it is possible to override the corresponding fs implementations for open , write , writev and close . Overriding write() without writev() can reduce
performance as some optimizations ( _writev() ) will be disabled. When providing the fs option, overrides for open , close , and at least one of write and writev are required.

Like <fs.ReadStream> , if fd is specified, <fs.WriteStream> will ignore the path argument and will use the specified file descriptor. This means that no 'open' event will be emitted. fd
should be blocking; non-blocking fd s should be passed to <net.Socket> .

If options is a string, then it specifies the encoding.

fs.exists(path, callback)
Stability: 0 - Deprecated: Use fs.stat() or fs.access() instead.

path <string> | <Buffer> | <URL>


callback <Function>
exists <boolean>

Test whether or not the given path exists by checking with the file system. Then call the callback argument with either true or false:

import { exists } from 'fs';

exists('/etc/passwd', (e) => {
  console.log(e ? 'it exists' : 'no passwd!');
});

The parameters for this callback are not consistent with other Node.js callbacks. Normally, the first parameter to a Node.js callback is an err parameter, optionally followed by other
parameters. The fs.exists() callback has only one boolean parameter. This is one reason fs.access() is recommended instead of fs.exists() .

Using fs.exists() to check for the existence of a file before calling fs.open() , fs.readFile() or fs.writeFile() is not recommended. Doing so introduces a race condition, since other
processes may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file does not exist.

write (NOT RECOMMENDED)

import { exists, open, close } from 'fs';

exists('myfile', (e) => {
  if (e) {
    console.error('myfile already exists');
  } else {
    open('myfile', 'wx', (err, fd) => {
      if (err) throw err;

      try {
        writeMyData(fd);
      } finally {
        close(fd, (err) => {
          if (err) throw err;
        });
      }
    });
  }
});

write (RECOMMENDED)
import { open, close } from 'fs';

open('myfile', 'wx', (err, fd) => {
  if (err) {
    if (err.code === 'EEXIST') {
      console.error('myfile already exists');
      return;
    }

    throw err;
  }

  try {
    writeMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});

read (NOT RECOMMENDED)

import { open, close, exists } from 'fs';

exists('myfile', (e) => {
  if (e) {
    open('myfile', 'r', (err, fd) => {
      if (err) throw err;

      try {
        readMyData(fd);
      } finally {
        close(fd, (err) => {
          if (err) throw err;
        });
      }
    });
  } else {
    console.error('myfile does not exist');
  }
});

read (RECOMMENDED)

import { open, close } from 'fs';

open('myfile', 'r', (err, fd) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.error('myfile does not exist');
      return;
    }

    throw err;
  }

  try {
    readMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});

The "not recommended" examples above check for existence and then use the file; the "recommended" examples are better because they use the file directly and handle the error, if any.

In general, check for the existence of a file only if the file won’t be used directly, for example when its existence is a signal from another process.

fs.fchmod(fd, mode, callback)


fd <integer>

mode <string> | <integer>

callback <Function>
err <Error>

Sets the permissions on the file. No arguments other than a possible exception are given to the completion callback.

See the POSIX fchmod(2) documentation for more detail.


fs.fchown(fd, uid, gid, callback)
fd <integer>

uid <integer>

gid <integer>

callback <Function>
err <Error>

Sets the owner of the file. No arguments other than a possible exception are given to the completion callback.

See the POSIX fchown(2) documentation for more detail.

fs.fdatasync(fd, callback)
fd <integer>

callback <Function>
err <Error>

Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for
details. No arguments other than a possible exception are given to the completion callback.

fs.fstat(fd[, options], callback)


fd <integer>

options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

callback <Function>
err <Error>

stats <fs.Stats>

Invokes the callback with the <fs.Stats> for the file descriptor.

See the POSIX fstat(2) documentation for more detail.

fs.fsync(fd, callback)
fd <integer>

callback <Function>
err <Error>
Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2)
documentation for more detail. No arguments other than a possible exception are given to the completion callback.

fs.ftruncate(fd[, len], callback)


fd <integer>

len <integer> Default: 0

callback <Function>
err <Error>

Truncates the file descriptor. No arguments other than a possible exception are given to the completion callback.

See the POSIX ftruncate(2) documentation for more detail.

If the file referred to by the file descriptor was larger than len bytes, only the first len bytes will be retained in the file.

For example, the following program retains only the first four bytes of the file:

import { open, close, ftruncate } from 'fs';

function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}

open('temp.txt', 'r+', (err, fd) => {
  if (err) throw err;

  try {
    ftruncate(fd, 4, (err) => {
      closeFd(fd);
      if (err) throw err;
    });
  } catch (err) {
    closeFd(fd);
    throw err;
  }
});

If the file previously was shorter than len bytes, it is extended, and the extended part is filled with null bytes ( '\0' ):

If len is negative then 0 will be used.

fs.futimes(fd, atime, mtime, callback)


fd <integer>

atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

callback <Function>
err <Error>

Change the file system timestamps of the object referenced by the supplied file descriptor. See fs.utimes() .

This function does not work on AIX versions before 7.1; it will return the error UV_ENOSYS .

fs.lchmod(path, mode, callback)


path <string> | <Buffer> | <URL>

mode <integer>

callback <Function>
err <Error>

Changes the permissions on a symbolic link. No arguments other than a possible exception are given to the completion callback.

This method is only implemented on macOS.

See the POSIX lchmod(2) documentation for more detail.

fs.lchown(path, uid, gid, callback)


path <string> | <Buffer> | <URL>

uid <integer>

gid <integer>

callback <Function>
err <Error>

Set the owner of the symbolic link. No arguments other than a possible exception are given to the completion callback.

See the POSIX lchown(2) documentation for more detail.


fs.lutimes(path, atime, mtime, callback)
path <string> | <Buffer> | <URL>

atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

callback <Function>
err <Error>

Changes the access and modification times of a file in the same way as fs.utimes() , with the difference that if the path refers to a symbolic link, then the link is not dereferenced: instead,
the timestamps of the symbolic link itself are changed.

No arguments other than a possible exception are given to the completion callback.

fs.link(existingPath, newPath, callback)


existingPath <string> | <Buffer> | <URL>

newPath <string> | <Buffer> | <URL>

callback <Function>
err <Error>

Creates a new link from the existingPath to the newPath . See the POSIX link(2) documentation for more detail. No arguments other than a possible exception are given to the
completion callback.

fs.lstat(path[, options], callback)


path <string> | <Buffer> | <URL>

options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

callback <Function>
err <Error>

stats <fs.Stats>

Retrieves the <fs.Stats> for the symbolic link referred to by the path. The callback gets two arguments (err, stats) where stats is an <fs.Stats> object. lstat() is identical to stat() , except that if path is a symbolic link, then the link itself is stat-ed, not the file that it refers to.

See the POSIX lstat(2) documentation for more details.

fs.mkdir(path[, options], callback)


path <string> | <Buffer> | <URL>
options <Object> | <integer>
recursive <boolean> Default: false

mode <string> | <integer> Not supported on Windows. Default: 0o777 .

callback <Function>
err <Error>

Asynchronously creates a directory.

The callback is given a possible exception and, if recursive is true , the first directory path created, (err, [path]) . path can still be undefined when recursive is true , if no directory
was created.

The optional options argument can be an integer specifying mode (permission and sticky bits), or an object with a mode property and a recursive property indicating whether parent
directories should be created. Calling fs.mkdir() when path is a directory that exists results in an error only when recursive is false.

import { mkdir } from 'fs';

// Creates /tmp/a/apple, regardless of whether `/tmp` and /tmp/a exist.
mkdir('/tmp/a/apple', { recursive: true }, (err) => {
  if (err) throw err;
});

On Windows, using fs.mkdir() on the root directory even with recursion will result in an error:

import { mkdir } from 'fs';

mkdir('/', { recursive: true }, (err) => {
  // => [Error: EPERM: operation not permitted, mkdir 'C:\']
});

See the POSIX mkdir(2) documentation for more details.

fs.mkdtemp(prefix[, options], callback)


prefix <string>

options <string> | <Object>


encoding <string> Default: 'utf8'

callback <Function>
err <Error>
directory <string>

Creates a unique temporary directory.

Generates six random characters to be appended behind a required prefix to create a unique temporary directory. Due to platform inconsistencies, avoid trailing X characters in prefix .
Some platforms, notably the BSDs, can return more than six random characters, and replace trailing X characters in prefix with random characters.

The created directory path is passed as a string to the callback's second parameter.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.

import { mkdtemp } from 'fs';
import path from 'path';
import os from 'os';

mkdtemp(path.join(os.tmpdir(), 'foo-'), (err, directory) => {
  if (err) throw err;
  console.log(directory);
  // Prints: /tmp/foo-itXde2 or C:\Users\...\AppData\Local\Temp\foo-itXde2
});

The fs.mkdtemp() method will append the six randomly selected characters directly to the prefix string. For instance, given a directory /tmp , if the intention is to create a temporary
directory within /tmp , the prefix must end with a trailing platform-specific path separator ( require('path').sep ).

import { tmpdir } from 'os';
import { mkdtemp } from 'fs';
import { sep } from 'path';

// The parent directory for the new temporary directory
const tmpDir = tmpdir();

// This method is *INCORRECT*:
mkdtemp(tmpDir, (err, directory) => {
  if (err) throw err;
  console.log(directory);
  // Will print something similar to `/tmpabc123`.
  // A new temporary directory is created at the file system root
  // rather than *within* the /tmp directory.
});

// This method is *CORRECT*:
mkdtemp(`${tmpDir}${sep}`, (err, directory) => {
  if (err) throw err;
  console.log(directory);
  // Will print something similar to `/tmp/abc123`.
  // A new temporary directory is created within
  // the /tmp directory.
});

fs.open(path[, flags[, mode]], callback)


path <string> | <Buffer> | <URL>

flags <string> | <number> See support of file system flags . Default: 'r' .

mode <string> | <integer> Default: 0o666 (readable and writable)

callback <Function>
err <Error>

fd <integer>

Asynchronous file open. See the POSIX open(2) documentation for more details.

mode sets the file mode (permission and sticky bits), but only if the file was created. On Windows, only the write permission can be manipulated; see fs.chmod() .

The callback gets two arguments (err, fd) .

Some characters ( < > : " / \ | ? * ) are reserved under Windows as documented by Naming Files, Paths, and Namespaces . Under NTFS, if the filename contains a colon, Node.js will
open a file system stream, as described by this MSDN page .

Functions based on fs.open() exhibit this behavior as well: fs.writeFile() , fs.readFile() , etc.

fs.opendir(path[, options], callback)


path <string> | <Buffer> | <URL>

options <Object>
encoding <string> | <null> Default: 'utf8'

bufferSize <number> Number of directory entries that are buffered internally when reading from the directory. Higher values lead to better performance but higher memory
usage. Default: 32
callback <Function>
err <Error>

dir <fs.Dir>

Asynchronously open a directory. See the POSIX opendir(3) documentation for more details.
Creates an <fs.Dir> , which contains all further functions for reading from and cleaning up the directory.

The encoding option sets the encoding for the path while opening the directory and subsequent read operations.

fs.read(fd, buffer, offset, length, position, callback)


fd <integer>

buffer <Buffer> | <TypedArray> | <DataView> The buffer that the data will be written to.

offset <integer> The position in buffer to write the data to.

length <integer> The number of bytes to read.

position <integer> | <bigint> Specifies where to begin reading from in the file. If position is null or -1 , data will be read from the current file position, and the file position will
be updated. If position is an integer, the file position will be unchanged.

callback <Function>
err <Error>

bytesRead <integer>

buffer <Buffer>

Read data from the file specified by fd .

The callback is given the three arguments, (err, bytesRead, buffer) .

If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.

If this method is invoked as its util.promisify() ed version, it returns a promise for an Object with bytesRead and buffer properties.
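
A sketch of a single positioned read ('sample.txt' and the 16-byte buffer are placeholders):

import { open, read, close } from 'fs';

open('sample.txt', 'r', (err, fd) => {
  if (err) throw err;

  const buffer = Buffer.alloc(16);
  // Read up to 16 bytes, starting at position 0 of the file.
  read(fd, buffer, 0, buffer.length, 0, (err, bytesRead, buffer) => {
    close(fd, (closeErr) => {
      if (closeErr) throw closeErr;
    });
    if (err) throw err;
    console.log(buffer.toString('utf8', 0, bytesRead));
  });
});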

fs.read(fd, [options,] callback)


fd <integer>

options <Object>
buffer <Buffer> | <TypedArray> | <DataView> Default: Buffer.alloc(16384)

offset <integer> Default: 0

length <integer> Default: buffer.length

position <integer> | <bigint> Default: null

callback <Function>
err <Error>

bytesRead <integer>

buffer <Buffer>
Similar to the fs.read() function above, this version takes an optional options object. If no options object is specified, the defaults shown above are used.

fs.readdir(path[, options], callback)


path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

withFileTypes <boolean> Default: false

callback <Function>
err <Error>

files <string[]> | <Buffer[]> | <fs.Dirent[]>

Reads the contents of a directory. The callback gets two arguments (err, files) where files is an array of the names of the files in the directory excluding '.' and '..' .

See the POSIX readdir(3) documentation for more details.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames passed to the
callback. If the encoding is set to 'buffer' , the filenames returned will be passed as <Buffer> objects.

If options.withFileTypes is set to true , the files array will contain <fs.Dirent> objects.

fs.readFile(path[, options], callback)


path <string> | <Buffer> | <URL> | <integer> filename or file descriptor

options <Object> | <string>


encoding <string> | <null> Default: null

flag <string> See support of file system flags . Default: 'r' .

signal <AbortSignal> allows aborting an in-progress readFile

callback <Function>
err <Error>

data <string> | <Buffer>

Asynchronously reads the entire contents of a file.

import { readFile } from 'fs';

readFile('/etc/passwd', (err, data) => {
  if (err) throw err;
  console.log(data);
});

The callback is passed two arguments (err, data) , where data is the contents of the file.

If no encoding is specified, then the raw buffer is returned.

If options is a string, then it specifies the encoding:

import { readFile } from 'fs';

readFile('/etc/passwd', 'utf8', callback);

When the path is a directory, the behavior of fs.readFile() and fs.readFileSync() is platform-specific. On macOS, Linux, and Windows, an error will be returned. On FreeBSD, a
representation of the directory's contents will be returned.

import { readFile } from 'fs';

// macOS, Linux, and Windows
readFile('<directory>', (err, data) => {
  // => [Error: EISDIR: illegal operation on a directory, read <directory>]
});

// FreeBSD
readFile('<directory>', (err, data) => {
  // => null, <data>
});

It is possible to abort an ongoing request using an AbortSignal . If a request is aborted the callback is called with an AbortError :

import { readFile } from 'fs';

const controller = new AbortController();
const signal = controller.signal;
readFile(fileInfo[0].name, { signal }, (err, buf) => {
  // ...
});
// When you want to abort the request
controller.abort();

The fs.readFile() function buffers the entire file. To minimize memory costs, when possible prefer streaming via fs.createReadStream() .

Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.readFile performs.

File descriptors
1. Any specified file descriptor has to support reading.
2. If a file descriptor is specified as the path , it will not be closed automatically.
3. The reading will begin at the current position. For example, if the file already had 'Hello World' and six bytes are read with the file descriptor, the call to fs.readFile() with the same file descriptor would give 'World' , rather than 'Hello World' .

Performance Considerations
The fs.readFile() method asynchronously reads the contents of a file into memory one chunk at a time, allowing the event loop to turn between each chunk. This allows the read
operation to have less impact on other activity that may be using the underlying libuv thread pool but means that it will take longer to read a complete file into memory.

The additional read overhead can vary broadly on different systems and depends on the type of file being read. If the file type is not a regular file (a pipe for instance) and Node.js is unable to determine an actual file size, each read operation will load 64 KiB of data. For regular files, each read will process 512 KiB of data.

For applications that require as-fast-as-possible reading of file contents, it is better to use fs.read() directly and for application code to manage reading the full contents of the file itself.

The Node.js GitHub issue #25741 provides more information and a detailed analysis on the performance of fs.readFile() for multiple file sizes in different Node.js versions.

fs.readlink(path[, options], callback)


path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

callback <Function>
err <Error>

linkString <string> | <Buffer>

Reads the contents of the symbolic link referred to by path . The callback gets two arguments (err, linkString) .

See the POSIX readlink(2) documentation for more details.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path passed to the
callback. If the encoding is set to 'buffer' , the link path returned will be passed as a <Buffer> object.

fs.readv(fd, buffers[, position], callback)


fd <integer>

buffers <ArrayBufferView[]>

position <integer>

callback <Function>
err <Error>

bytesRead <integer>

buffers <ArrayBufferView[]>

Read from a file specified by fd and write to an array of ArrayBufferView s using readv() .

position is the offset from the beginning of the file from where data should be read. If typeof position !== 'number' , the data will be read from the current position.

The callback will be given three arguments: err , bytesRead , and buffers . bytesRead is how many bytes were read from the file.

If this method is invoked as its util.promisify() ed version, it returns a promise for an Object with bytesRead and buffers properties.
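
A minimal sketch, assuming a hypothetical file data.bin whose first 16 bytes form a header:

import { openSync, readv, closeSync } from 'fs';

const fd = openSync('data.bin', 'r');
const header = Buffer.alloc(16);
const body = Buffer.alloc(1024);

// The buffers are filled in order, starting at file position 0.
readv(fd, [header, body], 0, (err, bytesRead, buffers) => {
  closeSync(fd);
  if (err) throw err;
  console.log(`read ${bytesRead} bytes into ${buffers.length} buffers`);
});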

fs.realpath(path[, options], callback)


path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

callback <Function>
err <Error>

resolvedPath <string> | <Buffer>

Asynchronously computes the canonical pathname by resolving . , .. and symbolic links.

A canonical pathname is not necessarily unique. Hard links and bind mounts can expose a file system entity through many pathnames.

This function behaves like realpath(3) , with some exceptions:

1. No case conversion is performed on case-insensitive file systems.

2. The maximum number of symbolic links is platform-independent and generally (much) higher than what the native realpath(3) implementation supports.

The callback gets two arguments (err, resolvedPath) . May use process.cwd to resolve relative paths.

Only paths that can be converted to UTF8 strings are supported.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path passed to the callback.
If the encoding is set to 'buffer' , the path returned will be passed as a <Buffer> object.
If path resolves to a socket or a pipe, the function will return a system dependent name for that object.
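
A minimal sketch; the path is hypothetical:

import { realpath } from 'fs';

realpath('/tmp/../tmp/file.txt', (err, resolvedPath) => {
  if (err) throw err;
  console.log(resolvedPath); // e.g. '/tmp/file.txt' once '..' is resolved
});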

fs.realpath.native(path[, options], callback)


path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

callback <Function>
err <Error>

resolvedPath <string> | <Buffer>

Asynchronous realpath(3) .

The callback gets two arguments (err, resolvedPath) .

Only paths that can be converted to UTF8 strings are supported.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path passed to the callback.
If the encoding is set to 'buffer' , the path returned will be passed as a <Buffer> object.

On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.

fs.rename(oldPath, newPath, callback)


oldPath <string> | <Buffer> | <URL>

newPath <string> | <Buffer> | <URL>

callback <Function>
err <Error>

Asynchronously renames the file at oldPath to the pathname provided as newPath . If newPath already exists, it will be overwritten. If there is a directory at newPath , an error will
be raised instead. No arguments other than a possible exception are given to the completion callback.

See also: rename(2) .

import { rename } from 'fs';

rename('oldFile.txt', 'newFile.txt', (err) => {
  if (err) throw err;
  console.log('Rename complete!');
});

fs.rmdir(path[, options], callback)
path <string> | <Buffer> | <URL>

options <Object>
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js retries the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .
recursive <boolean> If true , perform a recursive directory removal. In recursive mode, errors are not reported if path does not exist, and operations are retried on failure.
Default: false .

retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .

callback <Function>
err <Error>

Asynchronous rmdir(2) . No arguments other than a possible exception are given to the completion callback.

Using fs.rmdir() on a file (not a directory) results in an ENOENT error on Windows and an ENOTDIR error on POSIX.

Setting recursive to true results in behavior similar to the Unix command rm -rf : an error will not be raised for paths that do not exist, and paths that represent files will be deleted. The
permissive behavior of the recursive option is deprecated, ENOTDIR and ENOENT will be thrown in the future.
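
A minimal sketch of a recursive removal with retries; the directory name ./build is hypothetical:

import { rmdir } from 'fs';

// Retry up to 3 times on transient errors, waiting 200 ms longer each try.
rmdir('./build', { recursive: true, maxRetries: 3, retryDelay: 200 }, (err) => {
  if (err) throw err;
  console.log('./build was removed');
});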

fs.rm(path[, options], callback)


path <string> | <Buffer> | <URL>

options <Object>
force <boolean> When true , exceptions will be ignored if path does not exist. Default: false .

maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .

recursive <boolean> If true , perform a recursive removal. In recursive mode operations are retried on failure. Default: false .

retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .

callback <Function>
err <Error>

Asynchronously removes files and directories (modeled on the standard POSIX rm utility). No arguments other than a possible exception are given to the completion callback.
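
A minimal sketch mirroring rm -rf; the path ./tmp-output is hypothetical:

import { rm } from 'fs';

// force: true suppresses the error if the path is already gone.
rm('./tmp-output', { recursive: true, force: true }, (err) => {
  if (err) throw err;
  console.log('./tmp-output was removed');
});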

fs.stat(path[, options], callback)


path <string> | <Buffer> | <URL>

options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

callback <Function>
err <Error>

stats <fs.Stats>

Asynchronous stat(2) . The callback gets two arguments (err, stats) where stats is an <fs.Stats> object.

In case of an error, the err.code will be one of Common System Errors .

Using fs.stat() to check for the existence of a file before calling fs.open() , fs.readFile() or fs.writeFile() is not recommended. Instead, user code should open/read/write the file
directly and handle the error raised if the file is not available.

To check if a file exists without manipulating it afterwards, fs.access() is recommended.
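
As a sketch of that recommendation; the file name config.json is hypothetical:

import { access, constants } from 'fs';

access('config.json', constants.F_OK, (err) => {
  console.log(err ? 'config.json does not exist' : 'config.json exists');
});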

For example, given the following directory structure:

- txtDir
-- file.txt
- app.js

The next program will check for the stats of the given paths:

import { stat } from 'fs';

const pathsToCheck = ['./txtDir', './txtDir/file.txt'];

for (let i = 0; i < pathsToCheck.length; i++) {
  stat(pathsToCheck[i], (err, stats) => {
    console.log(stats.isDirectory());
    console.log(stats);
  });
}

The resulting output will resemble:

true
Stats {
dev: 16777220,
mode: 16877,
nlink: 3,
uid: 501,
gid: 20,
rdev: 0,
blksize: 4096,
ino: 14214262,
size: 96,
blocks: 0,
atimeMs: 1561174653071.963,
mtimeMs: 1561174614583.3518,
ctimeMs: 1561174626623.5366,
birthtimeMs: 1561174126937.2893,
atime: 2019-06-22T03:37:33.072Z,
mtime: 2019-06-22T03:36:54.583Z,
ctime: 2019-06-22T03:37:06.624Z,
birthtime: 2019-06-22T03:28:46.937Z
}
false
Stats {
dev: 16777220,
mode: 33188,
nlink: 1,
uid: 501,
gid: 20,
rdev: 0,
blksize: 4096,
ino: 14214074,
size: 8,
blocks: 8,
atimeMs: 1561174616618.8555,
mtimeMs: 1561174614584,
ctimeMs: 1561174614583.8145,
birthtimeMs: 1561174007710.7478,
atime: 2019-06-22T03:36:56.619Z,
mtime: 2019-06-22T03:36:54.584Z,
ctime: 2019-06-22T03:36:54.584Z,
birthtime: 2019-06-22T03:26:47.711Z
}

fs.symlink(target, path[, type], callback)


target <string> | <Buffer> | <URL>
path <string> | <Buffer> | <URL>

type <string>

callback <Function>
err <Error>

Creates the link called path pointing to target . No arguments other than a possible exception are given to the completion callback.

See the POSIX symlink(2) documentation for more details.

The type argument is only available on Windows and ignored on other platforms. It can be set to 'dir' , 'file' , or 'junction' . If the type argument is not set, Node.js will autodetect
target type and use 'file' or 'dir' . If the target does not exist, 'file' will be used. Windows junction points require the destination path to be absolute. When using 'junction' ,
the target argument will automatically be normalized to absolute path.

Relative targets are relative to the link’s parent directory.

import { symlink } from 'fs';

symlink('./mew', './example/mewtwo', callback);

The above example creates a symbolic link mewtwo in the example directory, which points to mew in the same directory:

$ tree example/
example/
├── mew
└── mewtwo -> ./mew

fs.truncate(path[, len], callback)


path <string> | <Buffer> | <URL>

len <integer> Default: 0

callback <Function>
err <Error>

Truncates the file. No arguments other than a possible exception are given to the completion callback. A file descriptor can also be passed as the first argument. In this case, fs.ftruncate()
is called.

Passing a file descriptor is deprecated and may result in an error being thrown in the future.

See the POSIX truncate(2) documentation for more details.


fs.unlink(path, callback)
path <string> | <Buffer> | <URL>

callback <Function>
err <Error>

Asynchronously removes a file or symbolic link. No arguments other than a possible exception are given to the completion callback.

import { unlink } from 'fs';


// Assuming that 'path/file.txt' is a regular file.
unlink('path/file.txt', (err) => {
  if (err) throw err;
  console.log('path/file.txt was deleted');
});

fs.unlink() will not work on a directory, empty or otherwise. To remove a directory, use fs.rmdir() .

See the POSIX unlink(2) documentation for more details.

fs.unwatchFile(filename[, listener])
filename <string> | <Buffer> | <URL>

listener <Function> Optional, a listener previously attached using fs.watchFile()

Stop watching for changes on filename . If listener is specified, only that particular listener is removed. Otherwise, all listeners are removed, effectively stopping watching of filename .

Calling fs.unwatchFile() with a filename that is not being watched is a no-op, not an error.

Using fs.watch() is more efficient than fs.watchFile() and fs.unwatchFile() . fs.watch() should be used instead of fs.watchFile() and fs.unwatchFile() when possible.
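
A minimal sketch of removing a single listener; the file name config.json is hypothetical:

import { watchFile, unwatchFile } from 'fs';

const listener = (curr, prev) => {
  console.log(`mtime changed: ${prev.mtime} -> ${curr.mtime}`);
};

watchFile('config.json', listener);

// Later: remove only this listener; any other listeners keep watching.
unwatchFile('config.json', listener);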

fs.utimes(path, atime, mtime, callback)


path <string> | <Buffer> | <URL>

atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

callback <Function>
err <Error>

Change the file system timestamps of the object referenced by path .

The atime and mtime arguments follow these rules:


Values can be either numbers representing Unix epoch time in seconds, Date s, or a numeric string like '123456789.0' .
If the value can not be converted to a number, or is NaN , Infinity or -Infinity , an Error will be thrown.
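
A minimal sketch of these rules; the file name notes.txt is hypothetical:

import { utimes } from 'fs';

// A Date object and numeric epoch seconds are both accepted.
const now = new Date();
utimes('notes.txt', now, now, (err) => {
  if (err) throw err;
  console.log('timestamps updated');
});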

fs.watch(filename[, options][, listener])


filename <string> | <Buffer> | <URL>

options <string> | <Object>


persistent <boolean> Indicates whether the process should continue to run as long as files are being watched. Default: true .

recursive <boolean> Indicates whether all subdirectories should be watched, or only the current directory. This applies when a directory is specified, and only on supported
platforms (See caveats ). Default: false .

encoding <string> Specifies the character encoding to be used for the filename passed to the listener. Default: 'utf8' .

signal <AbortSignal> allows closing the watcher with an AbortSignal.

listener <Function> | <undefined> Default: undefined


eventType <string>

filename <string> | <Buffer>

Returns: <fs.FSWatcher>

Watch for changes on filename , where filename is either a file or a directory.

The second argument is optional. If options is provided as a string, it specifies the encoding . Otherwise options should be passed as an object.

The listener callback gets two arguments (eventType, filename) . eventType is either 'rename' or 'change' , and filename is the name of the file which triggered the event.

On most platforms, 'rename' is emitted whenever a filename appears or disappears in the directory.

The listener callback is attached to the 'change' event fired by <fs.FSWatcher> , but it is not the same thing as the 'change' value of eventType .

If a signal is passed, aborting the corresponding AbortController will close the returned <fs.FSWatcher> .

Caveats
The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.

The recursive option is only supported on macOS and Windows. An ERR_FEATURE_UNAVAILABLE_ON_PLATFORM exception will be thrown when the option is used on a platform that does not
support it.

On Windows, no events will be emitted if the watched directory is moved or renamed. An EPERM error is reported when the watched directory is deleted.

Availability
This feature depends on the underlying operating system providing a way to be notified of filesystem changes.
On Linux systems, this uses inotify(7) .

On BSD systems, this uses kqueue(2) .

On macOS, this uses kqueue(2) for files and FSEvents for directories.

On SunOS systems (including Solaris and SmartOS), this uses event ports .
On Windows systems, this feature depends on ReadDirectoryChangesW .

On AIX systems, this feature depends on AHAFS , which must be enabled.

On IBM i systems, this feature is not supported.


If the underlying functionality is not available for some reason, then fs.watch() will not be able to function and may throw an exception. For example, watching files or directories can be
unreliable, and in some cases impossible, on network file systems (NFS, SMB, etc.) or host file systems when using virtualization software such as Vagrant or Docker.

It is still possible to use fs.watchFile() , which uses stat polling, but this method is slower and less reliable.

Inodes
On Linux and macOS systems, fs.watch() resolves the path to an inode and watches the inode. If the watched path is deleted and recreated, it is assigned a new inode. The watch will emit
an event for the delete but will continue watching the original inode. Events for the new inode will not be emitted. This is expected behavior.

AIX files retain the same inode for the lifetime of a file. Saving and closing a watched file on AIX will result in two notifications (one for adding new content, and one for truncation).

Filename argument
Providing the filename argument in the callback is only supported on Linux, macOS, Windows, and AIX. Even on supported platforms, filename is not always guaranteed to be provided.
Therefore, don't assume that the filename argument is always provided in the callback, and have some fallback logic for when it is null .

import { watch } from 'fs';


watch('somedir', (eventType, filename) => {
  console.log(`event type is: ${eventType}`);
  if (filename) {
    console.log(`filename provided: ${filename}`);
  } else {
    console.log('filename not provided');
  }
});

fs.watchFile(filename[, options], listener)


filename <string> | <Buffer> | <URL>

options <Object>
bigint <boolean> Default: false

persistent <boolean> Default: true

interval <integer> Default: 5007

listener <Function>
current <fs.Stats>

previous <fs.Stats>

Returns: <fs.StatWatcher>

Watch for changes on filename . The callback listener will be called each time the file is accessed.

The options argument may be omitted. If provided, it should be an object. The options object may contain a boolean named persistent that indicates whether the process should
continue to run as long as files are being watched. The options object may specify an interval property indicating how often the target should be polled in milliseconds.

The listener gets two arguments: the current stat object and the previous stat object.

import { watchFile } from 'fs';

watchFile('message.text', (curr, prev) => {
  console.log(`the current mtime is: ${curr.mtime}`);
  console.log(`the previous mtime was: ${prev.mtime}`);
});

These stat objects are instances of fs.Stats . If the bigint option is true , the numeric values in these objects are specified as BigInt s.

To be notified when the file was modified, not just accessed, it is necessary to compare curr.mtime and prev.mtime .

When an fs.watchFile operation results in an ENOENT error, it will invoke the listener once, with all the fields zeroed (or, for dates, the Unix Epoch). If the file is created later on, the listener
will be called again, with the latest stat objects. This is a change in functionality since v0.10.

Using fs.watch() is more efficient than fs.watchFile and fs.unwatchFile . fs.watch should be used instead of fs.watchFile and fs.unwatchFile when possible.

When a file being watched by fs.watchFile() disappears and reappears, then the contents of previous in the second callback event (the file's reappearance) will be the same as the
contents of previous in the first callback event (its disappearance).

This happens when:

the file is deleted, followed by a restore


the file is renamed and then renamed a second time back to its original name

fs.write(fd, buffer[, offset[, length[, position]]], callback)


fd <integer>

buffer <Buffer> | <TypedArray> | <DataView> | <string> | <Object>

offset <integer>

length <integer>

position <integer>

callback <Function>
err <Error>

bytesWritten <integer>

buffer <Buffer> | <TypedArray> | <DataView>

Write buffer to the file specified by fd . If buffer is a normal object, it must have an own toString function property.

offset determines the part of the buffer to be written, and length is an integer specifying the number of bytes to write.

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number' , the data will be written at the current position. See
pwrite(2) .

The callback will be given three arguments (err, bytesWritten, buffer) where bytesWritten specifies how many bytes were written from buffer .

If this method is invoked as its util.promisify() ed version, it returns a promise for an Object with bytesWritten and buffer properties.

It is unsafe to use fs.write() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
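
A minimal sketch of the offset/length/position arguments; the file name out.bin is hypothetical:

import { open, write, close } from 'fs';

open('out.bin', 'w', (err, fd) => {
  if (err) throw err;
  const buf = Buffer.from('abcdef');
  // Write 4 bytes starting at buffer offset 1 ('bcde') to file position 0.
  write(fd, buf, 1, 4, 0, (err, bytesWritten, buffer) => {
    if (err) throw err;
    console.log(bytesWritten); // => 4
    close(fd, () => {});
  });
});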

fs.write(fd, string[, position[, encoding]], callback)


fd <integer>

string <string> | <Object>

position <integer>

encoding <string> Default: 'utf8'

callback <Function>
err <Error>

written <integer>

string <string>

Write string to the file specified by fd . If string is not a string, or an object with an own toString function property, then an exception is thrown.
position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number' the data will be written at the current position. See
pwrite(2) .

encoding is the expected string encoding.

The callback will receive the arguments (err, written, string) where written specifies how many bytes the passed string required to be written. Bytes written is not necessarily the
same as string characters written. See Buffer.byteLength .

It is unsafe to use fs.write() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

On Windows, if the file descriptor is connected to the console (e.g. fd == 1 or stdout ) a string containing non-ASCII characters will not be rendered properly by default, regardless of the
encoding used. It is possible to configure the console to render UTF-8 properly by changing the active codepage with the chcp 65001 command. See the chcp docs for more details.
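
A minimal sketch showing that written counts bytes rather than characters; the file name hello.txt is hypothetical:

import { open, write, close } from 'fs';

open('hello.txt', 'w', (err, fd) => {
  if (err) throw err;
  write(fd, 'héllo', 0, 'utf8', (err, written, string) => {
    if (err) throw err;
    // 'é' occupies two bytes in UTF-8, so written (6) exceeds string.length (5).
    console.log(written, string.length);
    close(fd, () => {});
  });
});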

fs.writeFile(file, data[, options], callback)


file <string> | <Buffer> | <URL> | <integer> filename or file descriptor

data <string> | <Buffer> | <TypedArray> | <DataView> | <Object>

options <Object> | <string>


encoding <string> | <null> Default: 'utf8'

mode <integer> Default: 0o666

flag <string> See support of file system flags . Default: 'w' .

signal <AbortSignal> allows aborting an in-progress writeFile

callback <Function>
err <Error>

When file is a filename, asynchronously writes data to the file, replacing the file if it already exists. data can be a string or a buffer.

When file is a file descriptor, the behavior is similar to calling fs.write() directly (which is recommended). See the notes below on using a file descriptor.

The encoding option is ignored if data is a buffer. If data is a normal object, it must have an own toString function property.

import { writeFile } from 'fs';

const data = new Uint8Array(Buffer.from('Hello Node.js'));

writeFile('message.txt', data, (err) => {
  if (err) throw err;
  console.log('The file has been saved!');
});

If options is a string, then it specifies the encoding:

import { writeFile } from 'fs';

writeFile('message.txt', 'Hello Node.js', 'utf8', callback);

It is unsafe to use fs.writeFile() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.

Similarly to fs.readFile - fs.writeFile is a convenience method that performs multiple write calls internally to write the buffer passed to it. For performance sensitive code consider
using fs.createWriteStream() .

It is possible to use an <AbortSignal> to cancel an fs.writeFile() . Cancelation is "best effort", and some amount of data is likely still to be written.

import { writeFile } from 'fs';

const controller = new AbortController();
const { signal } = controller;
const data = new Uint8Array(Buffer.from('Hello Node.js'));

writeFile('message.txt', data, { signal }, (err) => {
  // When a request is aborted - the callback is called with an AbortError
});

// When the request should be aborted
controller.abort();

Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.writeFile performs.

Using fs.writeFile() with file descriptors


When file is a file descriptor, the behavior is almost identical to directly calling fs.write() like:

import { write } from 'fs';

write(fd, Buffer.from(data, options.encoding), callback);

The difference from directly calling fs.write() is that under some unusual conditions, fs.write() might write only part of the buffer and need to be retried to write the remaining data,
whereas fs.writeFile() retries until the data is entirely written (or an error occurs).
The implications of this are a common source of confusion. In the file descriptor case, the file is not replaced! The data is not necessarily written to the beginning of the file, and the file's
original data may remain before and/or after the newly written data.

For example, if fs.writeFile() is called twice in a row, first to write the string 'Hello' , then to write the string ', World' , the file would contain 'Hello, World' , and might contain
some of the file's original data (depending on the size of the original file, and the position of the file descriptor). If a file name had been used instead of a descriptor, the file would be
guaranteed to contain only ', World' .

fs.writev(fd, buffers[, position], callback)


fd <integer>

buffers <ArrayBufferView[]>

position <integer>

callback <Function>
err <Error>

bytesWritten <integer>

buffers <ArrayBufferView[]>

Write an array of ArrayBufferView s to the file specified by fd using writev() .

position is the offset from the beginning of the file where this data should be written. If typeof position !== 'number' , the data will be written at the current position.

The callback will be given three arguments: err , bytesWritten , and buffers . bytesWritten is how many bytes were written from buffers .

If this method is util.promisify() ed, it returns a promise for an Object with bytesWritten and buffers properties.

It is unsafe to use fs.writev() multiple times on the same file without waiting for the callback. For this scenario, use fs.createWriteStream() .

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
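
A minimal sketch; the file name vectored.bin is hypothetical:

import { open, writev, close } from 'fs';

open('vectored.bin', 'w', (err, fd) => {
  if (err) throw err;
  const parts = [Buffer.from('header'), Buffer.from('body')];
  // Both buffers are written in order by a single writev() call.
  writev(fd, parts, (err, bytesWritten, buffers) => {
    if (err) throw err;
    console.log(bytesWritten); // => 10
    close(fd, () => {});
  });
});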

Synchronous API
The synchronous APIs perform all operations synchronously, blocking the event loop until the operation completes or fails.

fs.accessSync(path[, mode])
path <string> | <Buffer> | <URL>

mode <integer> Default: fs.constants.F_OK

Synchronously tests a user's permissions for the file or directory specified by path . The mode argument is an optional integer that specifies the accessibility checks to be performed. Check
File access constants for possible values of mode . It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.W_OK | fs.constants.R_OK ).
If any of the accessibility checks fail, an Error will be thrown. Otherwise, the method will return undefined .

import { accessSync, constants } from 'fs';

try {
  accessSync('etc/passwd', constants.R_OK | constants.W_OK);
  console.log('can read/write');
} catch (err) {
  console.error('no access!');
}

fs.appendFileSync(path, data[, options])


path <string> | <Buffer> | <URL> | <number> filename or file descriptor

data <string> | <Buffer>

options <Object> | <string>


encoding <string> | <null> Default: 'utf8'

mode <integer> Default: 0o666

flag <string> See support of file system flags . Default: 'a' .

Synchronously append data to a file, creating the file if it does not yet exist. data can be a string or a <Buffer> .

import { appendFileSync } from 'fs';

try {
  appendFileSync('message.txt', 'data to append');
  console.log('The "data to append" was appended to file!');
} catch (err) {
  /* Handle the error */
}

If options is a string, then it specifies the encoding:

import { appendFileSync } from 'fs';

appendFileSync('message.txt', 'data to append', 'utf8');


The path may be specified as a numeric file descriptor that has been opened for appending (using fs.open() or fs.openSync() ). The file descriptor will not be closed automatically.

import { openSync, closeSync, appendFileSync } from 'fs';

let fd;

try {
  fd = openSync('message.txt', 'a');
  appendFileSync(fd, 'data to append', 'utf8');
} catch (err) {
  /* Handle the error */
} finally {
  if (fd !== undefined)
    closeSync(fd);
}

fs.chmodSync(path, mode)
path <string> | <Buffer> | <URL>

mode <string> | <integer>

For detailed information, see the documentation of the asynchronous version of this API: fs.chmod() .

See the POSIX chmod(2) documentation for more detail.

fs.chownSync(path, uid, gid)


path <string> | <Buffer> | <URL>

uid <integer>

gid <integer>

Synchronously changes owner and group of a file. Returns undefined . This is the synchronous version of fs.chown() .

See the POSIX chown(2) documentation for more detail.

fs.closeSync(fd)
fd <integer>

Closes the file descriptor. Returns undefined .


Calling fs.closeSync() on any file descriptor ( fd ) that is currently in use through any other fs operation may lead to undefined behavior.

See the POSIX close(2) documentation for more detail.

fs.copyFileSync(src, dest[, mode])


src <string> | <Buffer> | <URL> source filename to copy

dest <string> | <Buffer> | <URL> destination filename of the copy operation

mode <integer> modifiers for copy operation. Default: 0 .

Synchronously copies src to dest . By default, dest is overwritten if it already exists. Returns undefined . Node.js makes no guarantees about the atomicity of the copy operation. If an
error occurs after the destination file has been opened for writing, Node.js will attempt to remove the destination.

mode is an optional integer that specifies the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g.
fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE ).

fs.constants.COPYFILE_EXCL : The copy operation will fail if dest already exists.

fs.constants.COPYFILE_FICLONE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy mechanism is
used.
fs.constants.COPYFILE_FICLONE_FORCE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will fail.

import { copyFileSync, constants } from 'fs';

// destination.txt will be created or overwritten by default.
copyFileSync('source.txt', 'destination.txt');
console.log('source.txt was copied to destination.txt');

// By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
copyFileSync('source.txt', 'destination.txt', constants.COPYFILE_EXCL);

fs.existsSync(path)
path <string> | <Buffer> | <URL>

Returns: <boolean>

Returns true if the path exists, false otherwise.

For detailed information, see the documentation of the asynchronous version of this API: fs.exists() .
fs.exists() is deprecated, but fs.existsSync() is not. The callback parameter to fs.exists() accepts parameters that are inconsistent with other Node.js callbacks.
fs.existsSync() does not use a callback.

import { existsSync } from 'fs';

if (existsSync('/etc/passwd'))
  console.log('The path exists.');

fs.fchmodSync(fd, mode)
fd <integer>

mode <string> | <integer>

Sets the permissions on the file. Returns undefined .

See the POSIX fchmod(2) documentation for more detail.

fs.fchownSync(fd, uid, gid)


fd <integer>

uid <integer> The file's new owner's user id.

gid <integer> The file's new group's group id.

Sets the owner of the file. Returns undefined .

See the POSIX fchown(2) documentation for more detail.

fs.fdatasyncSync(fd)
fd <integer>

Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for
details. Returns undefined .

fs.fstatSync(fd[, options])
fd <integer>

options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

Returns: <fs.Stats>
Retrieves the <fs.Stats> for the file descriptor.

See the POSIX fstat(2) documentation for more detail.

fs.fsyncSync(fd)
fd <integer>

Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2)
documentation for more detail. Returns undefined .

fs.ftruncateSync(fd[, len])
fd <integer>

len <integer> Default: 0

Truncates the file descriptor. Returns undefined .

For detailed information, see the documentation of the asynchronous version of this API: fs.ftruncate() .

fs.futimesSync(fd, atime, mtime)


fd <integer>

atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

Synchronous version of fs.futimes() . Returns undefined .

fs.lchmodSync(path, mode)
path <string> | <Buffer> | <URL>

mode <integer>

Changes the permissions on a symbolic link. Returns undefined .

This method is only implemented on macOS.

See the POSIX lchmod(2) documentation for more detail.

fs.lchownSync(path, uid, gid)


path <string> | <Buffer> | <URL>
uid <integer> The file's new owner's user id.

gid <integer> The file's new group's group id.

Set the owner for the path. Returns undefined .

See the POSIX lchown(2) documentation for more details.

fs.lutimesSync(path, atime, mtime)


path <string> | <Buffer> | <URL>

atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

Change the file system timestamps of the symbolic link referenced by path . Returns undefined , or throws an exception when parameters are incorrect or the operation fails. This is the
synchronous version of fs.lutimes() .

fs.linkSync(existingPath, newPath)
existingPath <string> | <Buffer> | <URL>

newPath <string> | <Buffer> | <URL>

Creates a new link from the existingPath to the newPath . See the POSIX link(2) documentation for more detail. Returns undefined .

fs.lstatSync(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

throwIfNoEntry <boolean> Whether an exception will be thrown if no file system entry exists, rather than returning undefined . Default: true .

Returns: <fs.Stats>

Retrieves the <fs.Stats> for the symbolic link referred to by path .

See the POSIX lstat(2) documentation for more details.

fs.mkdirSync(path[, options])
path <string> | <Buffer> | <URL>

options <Object> | <integer>


recursive <boolean> Default: false

mode <string> | <integer> Not supported on Windows. Default: 0o777 .


Returns: <string> | <undefined>

Synchronously creates a directory. Returns undefined , or if recursive is true , the first directory path created. This is the synchronous version of fs.mkdir() .

See the POSIX mkdir(2) documentation for more details.
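
A minimal sketch of a recursive create; the path is hypothetical:

import { mkdirSync } from 'fs';

// Creates ./a and ./a/b as needed; returns the first directory created,
// or undefined if every directory already existed.
const first = mkdirSync('./a/b', { recursive: true });
console.log(first);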

fs.mkdtempSync(prefix[, options])
prefix <string>

options <string> | <Object>


encoding <string> Default: 'utf8'

Returns: <string>

Returns the created directory path.

For detailed information, see the documentation of the asynchronous version of this API: fs.mkdtemp() .

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.

fs.opendirSync(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
encoding <string> | <null> Default: 'utf8'

bufferSize <number> Number of directory entries that are buffered internally when reading from the directory. Higher values lead to better performance but higher memory
usage. Default: 32
Returns: <fs.Dir>

Synchronously open a directory. See opendir(3) .

Creates an <fs.Dir> , which contains all further functions for reading from and cleaning up the directory.

The encoding option sets the encoding for the path while opening the directory and subsequent read operations.
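
A minimal sketch that iterates over a directory synchronously:

import { opendirSync } from 'fs';

const dir = opendirSync('./');
let dirent;
while ((dirent = dir.readSync()) !== null) {
  console.log(dirent.name);
}
dir.closeSync();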

fs.openSync(path[, flags, mode])


path <string> | <Buffer> | <URL>

flags <string> | <number> Default: 'r' . See support of file system flags .

mode <string> | <integer> Default: 0o666

Returns: <number>

Returns an integer representing the file descriptor.


For detailed information, see the documentation of the asynchronous version of this API: fs.open() .

fs.readdirSync(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

withFileTypes <boolean> Default: false

Returns: <string[]> | <Buffer[]> | <fs.Dirent[]>

Reads the contents of the directory.

See the POSIX readdir(3) documentation for more details.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames returned. If the
encoding is set to 'buffer' , the filenames returned will be passed as <Buffer> objects.

If options.withFileTypes is set to true , the result will contain <fs.Dirent> objects.
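
A minimal sketch that uses withFileTypes to list only subdirectories:

import { readdirSync } from 'fs';

const dirs = readdirSync('./', { withFileTypes: true })
  .filter((dirent) => dirent.isDirectory())
  .map((dirent) => dirent.name);

console.log(dirs);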

fs.readFileSync(path[, options])
path <string> | <Buffer> | <URL> | <integer> filename or file descriptor

options <Object> | <string>


encoding <string> | <null> Default: null

flag <string> See support of file system flags . Default: 'r' .

Returns: <string> | <Buffer>

Returns the contents of the path .

For detailed information, see the documentation of the asynchronous version of this API: fs.readFile() .

If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.

Similar to fs.readFile() , when the path is a directory, the behavior of fs.readFileSync() is platform-specific.

import { readFileSync } from 'fs';

// macOS, Linux, and Windows
readFileSync('<directory>');
// => [Error: EISDIR: illegal operation on a directory, read <directory>]

// FreeBSD
readFileSync('<directory>'); // => <data>

fs.readlinkSync(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

Returns: <string> | <Buffer>

Returns the symbolic link's string value.

See the POSIX readlink(2) documentation for more details.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path returned. If the
encoding is set to 'buffer' , the link path returned will be passed as a <Buffer> object.

fs.readSync(fd, buffer, offset, length, position)


fd <integer>

buffer <Buffer> | <TypedArray> | <DataView>

offset <integer>

length <integer>

position <integer> | <bigint>

Returns: <number>

Returns the number of bytesRead .

For detailed information, see the documentation of the asynchronous version of this API: fs.read() .

fs.readSync(fd, buffer, [options])


fd <integer>

buffer <Buffer> | <TypedArray> | <DataView>

options <Object>
offset <integer> Default: 0

length <integer> Default: buffer.length

position <integer> | <bigint> Default: null


Returns: <number>

Returns the number of bytesRead .

Similar to the fs.readSync function above, this version takes an optional options object. If no options object is specified, it defaults to the values above.

For detailed information, see the documentation of the asynchronous version of this API: fs.read() .
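
A minimal sketch of the options form; the file name data.bin is hypothetical:

import { openSync, readSync, closeSync } from 'fs';

const fd = openSync('data.bin', 'r');
const buf = Buffer.alloc(64);

// Read up to 64 bytes from the start of the file.
const bytesRead = readSync(fd, buf, { position: 0 });
closeSync(fd);

console.log(bytesRead);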

fs.readvSync(fd, buffers[, position])


fd <integer>

buffers <ArrayBufferView[]>

position <integer>

Returns: <number> The number of bytes read.

For detailed information, see the documentation of the asynchronous version of this API: fs.readv() .

fs.realpathSync(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

Returns: <string> | <Buffer>

Returns the resolved pathname.

For detailed information, see the documentation of the asynchronous version of this API: fs.realpath() .

fs.realpathSync.native(path[, options])
path <string> | <Buffer> | <URL>

options <string> | <Object>


encoding <string> Default: 'utf8'

Returns: <string> | <Buffer>

Synchronous realpath(3) .

Only paths that can be converted to UTF8 strings are supported.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path returned. If the
encoding is set to 'buffer' , the path returned will be passed as a <Buffer> object.
On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.

fs.renameSync(oldPath, newPath)
oldPath <string> | <Buffer> | <URL>

newPath <string> | <Buffer> | <URL>

Renames the file from oldPath to newPath . Returns undefined .

See the POSIX rename(2) documentation for more details.

fs.rmdirSync(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js retries the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .

recursive <boolean> If true , perform a recursive directory removal. In recursive mode, errors are not reported if path does not exist, and operations are retried on failure.
Default: false .
retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .

Synchronous rmdir(2) . Returns undefined .

Using fs.rmdirSync() on a file (not a directory) results in an ENOENT error on Windows and an ENOTDIR error on POSIX.

Setting recursive to true results in behavior similar to the Unix command rm -rf : an error will not be raised for paths that do not exist, and paths that represent files will be deleted. The
permissive behavior of the recursive option is deprecated, ENOTDIR and ENOENT will be thrown in the future.

fs.rmSync(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
force <boolean> When true , exceptions will be ignored if path does not exist. Default: false .

maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .

recursive <boolean> If true , perform a recursive directory removal. In recursive mode operations are retried on failure. Default: false .

retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .

Synchronously removes files and directories (modeled on the standard POSIX rm utility). Returns undefined .
fs.statSync(path[, options])
path <string> | <Buffer> | <URL>

options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .

throwIfNoEntry <boolean> Whether an exception will be thrown if no file system entry exists, rather than returning undefined . Default: true .

Returns: <fs.Stats>

Retrieves the <fs.Stats> for the path.

fs.symlinkSync(target, path[, type])


target <string> | <Buffer> | <URL>

path <string> | <Buffer> | <URL>

type <string>

Returns undefined .

For detailed information, see the documentation of the asynchronous version of this API: fs.symlink() .

fs.truncateSync(path[, len])
path <string> | <Buffer> | <URL>

len <integer> Default: 0

Truncates the file. Returns undefined . A file descriptor can also be passed as the first argument. In this case, fs.ftruncateSync() is called.

Passing a file descriptor is deprecated and may result in an error being thrown in the future.

fs.unlinkSync(path)
path <string> | <Buffer> | <URL>

Synchronous unlink(2) . Returns undefined .

fs.utimesSync(path, atime, mtime)


path <string> | <Buffer> | <URL>

atime <number> | <string> | <Date>

mtime <number> | <string> | <Date>

Returns undefined .
For detailed information, see the documentation of the asynchronous version of this API: fs.utimes() .

fs.writeFileSync(file, data[, options])


file <string> | <Buffer> | <URL> | <integer> filename or file descriptor

data <string> | <Buffer> | <TypedArray> | <DataView> | <Object>

options <Object> | <string>


encoding <string> | <null> Default: 'utf8'

mode <integer> Default: 0o666

flag <string> See support of file system flags . Default: 'w' .

Returns undefined .

For detailed information, see the documentation of the asynchronous version of this API: fs.writeFile() .

fs.writeSync(fd, buffer[, offset[, length[, position]]])


fd <integer>

buffer <Buffer> | <TypedArray> | <DataView> | <string> | <Object>

offset <integer>

length <integer>

position <integer>

Returns: <number> The number of bytes written.

For detailed information, see the documentation of the asynchronous version of this API: fs.write(fd, buffer...) .

fs.writeSync(fd, string[, position[, encoding]])


fd <integer>

string <string> | <Object>

position <integer>

encoding <string>

Returns: <number> The number of bytes written.

For detailed information, see the documentation of the asynchronous version of this API: fs.write(fd, string...) .

fs.writevSync(fd, buffers[, position])


fd <integer>

buffers <ArrayBufferView[]>

position <integer>

Returns: <number> The number of bytes written.

For detailed information, see the documentation of the asynchronous version of this API: fs.writev() .

Common Objects
The common objects are shared by all of the file system API variants (promise, callback, and synchronous).

Class: fs.Dir
A class representing a directory stream.

Created by fs.opendir() , fs.opendirSync() , or fsPromises.opendir() .

import { opendir } from 'fs/promises';

try {
  const dir = await opendir('./');
  for await (const dirent of dir)
    console.log(dirent.name);
} catch (err) {
  console.error(err);
}

dir.close()
Returns: <Promise>

Asynchronously close the directory's underlying resource handle. Subsequent reads will result in errors.

A promise is returned that will be resolved after the resource has been closed.

dir.close(callback)
callback <Function>
err <Error>

Asynchronously close the directory's underlying resource handle. Subsequent reads will result in errors.
The callback will be called after the resource handle has been closed.

dir.closeSync()
Synchronously close the directory's underlying resource handle. Subsequent reads will result in errors.

dir.path
<string>

The read-only path of this directory as was provided to fs.opendir() , fs.opendirSync() , or fsPromises.opendir() .

dir.read()
Returns: <Promise> containing <fs.Dirent> | <null>

Asynchronously read the next directory entry via readdir(3) as an <fs.Dirent> .

A promise is returned that will be resolved with an <fs.Dirent> , or null if there are no more directory entries to read.

Directory entries returned by this function are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over
the directory might not be included in the iteration results.

dir.read(callback)
callback <Function>
err <Error>

dirent <fs.Dirent> | <null>

Asynchronously read the next directory entry via readdir(3) as an <fs.Dirent> .

After the read is completed, the callback will be called with an <fs.Dirent> , or null if there are no more directory entries to read.

Directory entries returned by this function are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over
the directory might not be included in the iteration results.

dir.readSync()
Returns: <fs.Dirent> | <null>

Synchronously read the next directory entry as an <fs.Dirent> . See the POSIX readdir(3) documentation for more detail.

If there are no more directory entries to read, null will be returned.


Directory entries returned by this function are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over
the directory might not be included in the iteration results.

dir[Symbol.asyncIterator]()
Returns: <AsyncIterator> of <fs.Dirent>

Asynchronously iterates over the directory until all entries have been read. Refer to the POSIX readdir(3) documentation for more detail.

Entries returned by the async iterator are always an <fs.Dirent> . The null case from dir.read() is handled internally.

See <fs.Dir> for an example.

Directory entries returned by this iterator are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over
the directory might not be included in the iteration results.

Class: fs.Dirent
A representation of a directory entry, which can be a file or a subdirectory within the directory, as returned by reading from an <fs.Dir> . The directory entry is a combination of the file
name and the file type.

Additionally, when fs.readdir() or fs.readdirSync() is called with the withFileTypes option set to true , the resulting array is filled with <fs.Dirent> objects, rather than strings or
<Buffer> s.

dirent.isBlockDevice()
Returns: <boolean>

Returns true if the <fs.Dirent> object describes a block device.

dirent.isCharacterDevice()
Returns: <boolean>

Returns true if the <fs.Dirent> object describes a character device.

dirent.isDirectory()
Returns: <boolean>

Returns true if the <fs.Dirent> object describes a file system directory.

dirent.isFIFO()
Returns: <boolean>
Returns true if the <fs.Dirent> object describes a first-in-first-out (FIFO) pipe.

dirent.isFile()
Returns: <boolean>

Returns true if the <fs.Dirent> object describes a regular file.

dirent.isSocket()
Returns: <boolean>

Returns true if the <fs.Dirent> object describes a socket.

dirent.isSymbolicLink()
Returns: <boolean>

Returns true if the <fs.Dirent> object describes a symbolic link.

dirent.name
<string> | <Buffer>

The file name that this <fs.Dirent> object refers to. The type of this value is determined by the options.encoding passed to fs.readdir() or fs.readdirSync() .

Class: fs.FSWatcher
Extends <EventEmitter>

A successful call to fs.watch() method will return a new <fs.FSWatcher> object.

All <fs.FSWatcher> objects emit a 'change' event whenever a specific watched file is modified.

Event: 'change'
eventType <string> The type of change event that has occurred

filename <string> | <Buffer> The filename that changed (if relevant/available)

Emitted when something changes in a watched directory or file. See more details in fs.watch() .

The filename argument may not be provided depending on operating system support. If filename is provided, it will be provided as a <Buffer> if fs.watch() is called with its encoding
option set to 'buffer' , otherwise filename will be a UTF-8 string.

import { watch } from 'fs';

// Example when handled through fs.watch() listener
watch('./tmp', { encoding: 'buffer' }, (eventType, filename) => {
  if (filename) {
    console.log(filename);
    // Prints: <Buffer ...>
  }
});

Event: 'close'
Emitted when the watcher stops watching for changes. The closed <fs.FSWatcher> object is no longer usable in the event handler.

Event: 'error'
error <Error>

Emitted when an error occurs while watching the file. The errored <fs.FSWatcher> object is no longer usable in the event handler.

watcher.close()
Stop watching for changes on the given <fs.FSWatcher> . Once stopped, the <fs.FSWatcher> object is no longer usable.

watcher.ref()
Returns: <fs.FSWatcher>

When called, requests that the Node.js event loop not exit so long as the <fs.FSWatcher> is active. Calling watcher.ref() multiple times will have no effect.

By default, all <fs.FSWatcher> objects are "ref'ed", making it normally unnecessary to call watcher.ref() unless watcher.unref() had been called previously.

watcher.unref()
Returns: <fs.FSWatcher>

When called, the active <fs.FSWatcher> object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit
before the <fs.FSWatcher> object's callback is invoked. Calling watcher.unref() multiple times will have no effect.
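
A minimal sketch of unref() and ref(); the directory name ./logs is hypothetical:

import { watch } from 'fs';

const watcher = watch('./logs');

// Allow the process to exit even while the watcher is active...
watcher.unref();

// ...and later opt back in to keeping the event loop alive.
watcher.ref();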

Class: fs.StatWatcher
Extends <EventEmitter>

A successful call to fs.watchFile() method will return a new <fs.StatWatcher> object.


watcher.ref()
Returns: <fs.StatWatcher>

When called, requests that the Node.js event loop not exit so long as the <fs.StatWatcher> is active. Calling watcher.ref() multiple times will have no effect.

By default, all <fs.StatWatcher> objects are "ref'ed", making it normally unnecessary to call watcher.ref() unless watcher.unref() had been called previously.

watcher.unref()
Returns: <fs.StatWatcher>

When called, the active <fs.StatWatcher> object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit
before the <fs.StatWatcher> object's callback is invoked. Calling watcher.unref() multiple times will have no effect.

Class: fs.ReadStream
Extends: <stream.Readable>

Instances of <fs.ReadStream> are created and returned using the fs.createReadStream() function.

Event: 'close'
Emitted when the <fs.ReadStream> 's underlying file descriptor has been closed.

Event: 'open'
fd <integer> Integer file descriptor used by the <fs.ReadStream> .

Emitted when the <fs.ReadStream> 's file descriptor has been opened.

Event: 'ready'
Emitted when the <fs.ReadStream> is ready to be used.

Fires immediately after 'open' .

readStream.bytesRead
<number>

The number of bytes that have been read so far.

readStream.path
<string> | <Buffer>
The path to the file the stream is reading from as specified in the first argument to fs.createReadStream() . If path is passed as a string, then readStream.path will be a string. If path is
passed as a <Buffer> , then readStream.path will be a <Buffer> .

readStream.pending
<boolean>

This property is true if the underlying file has not been opened yet, i.e. before the 'ready' event is emitted.

Class: fs.Stats
A <fs.Stats> object provides information about a file.

Objects returned from fs.stat() , fs.lstat() and fs.fstat() and their synchronous counterparts are of this type. If bigint in the options passed to those methods is true, the
numeric values will be bigint instead of number , and the object will contain additional nanosecond-precision properties suffixed with Ns .

Stats {
dev: 2114,
ino: 48064969,
mode: 33188,
nlink: 1,
uid: 85,
gid: 100,
rdev: 0,
size: 527,
blksize: 4096,
blocks: 8,
atimeMs: 1318289051000.1,
mtimeMs: 1318289051000.1,
ctimeMs: 1318289051000.1,
birthtimeMs: 1318289051000.1,
atime: Mon, 10 Oct 2011 23:24:11 GMT,
mtime: Mon, 10 Oct 2011 23:24:11 GMT,
ctime: Mon, 10 Oct 2011 23:24:11 GMT,
birthtime: Mon, 10 Oct 2011 23:24:11 GMT }

bigint version:

BigIntStats {
dev: 2114n,
ino: 48064969n,
mode: 33188n,
nlink: 1n,
uid: 85n,
gid: 100n,
rdev: 0n,
size: 527n,
blksize: 4096n,
blocks: 8n,
atimeMs: 1318289051000n,
mtimeMs: 1318289051000n,
ctimeMs: 1318289051000n,
birthtimeMs: 1318289051000n,
atimeNs: 1318289051000000000n,
mtimeNs: 1318289051000000000n,
ctimeNs: 1318289051000000000n,
birthtimeNs: 1318289051000000000n,
atime: Mon, 10 Oct 2011 23:24:11 GMT,
mtime: Mon, 10 Oct 2011 23:24:11 GMT,
ctime: Mon, 10 Oct 2011 23:24:11 GMT,
birthtime: Mon, 10 Oct 2011 23:24:11 GMT }

stats.isBlockDevice()
Returns: <boolean>

Returns true if the <fs.Stats> object describes a block device.

stats.isCharacterDevice()
Returns: <boolean>

Returns true if the <fs.Stats> object describes a character device.

stats.isDirectory()
Returns: <boolean>

Returns true if the <fs.Stats> object describes a file system directory.

If the <fs.Stats> object was obtained from fs.lstat() , this method will always return false . This is because fs.lstat() returns information about a symbolic link itself and not the
path it resolves to.
stats.isFIFO()
Returns: <boolean>

Returns true if the <fs.Stats> object describes a first-in-first-out (FIFO) pipe.

stats.isFile()
Returns: <boolean>

Returns true if the <fs.Stats> object describes a regular file.

stats.isSocket()
Returns: <boolean>

Returns true if the <fs.Stats> object describes a socket.

stats.isSymbolicLink()
Returns: <boolean>

Returns true if the <fs.Stats> object describes a symbolic link.

This method is only valid when using fs.lstat() .

stats.dev
<number> | <bigint>

The numeric identifier of the device containing the file.

stats.ino
<number> | <bigint>

The file system specific "Inode" number for the file.

stats.mode
<number> | <bigint>

A bit-field describing the file type and mode.

stats.nlink
<number> | <bigint>
The number of hard-links that exist for the file.

stats.uid
<number> | <bigint>

The numeric user identifier of the user that owns the file (POSIX).

stats.gid
<number> | <bigint>

The numeric group identifier of the group that owns the file (POSIX).

stats.rdev
<number> | <bigint>

A numeric device identifier if the file represents a device.

stats.size
<number> | <bigint>

The size of the file in bytes.

stats.blksize
<number> | <bigint>

The file system block size for i/o operations.

stats.blocks
<number> | <bigint>

The number of blocks allocated for this file.

stats.atimeMs
<number> | <bigint>

The timestamp indicating the last time this file was accessed expressed in milliseconds since the POSIX Epoch.

stats.mtimeMs
<number> | <bigint>
The timestamp indicating the last time this file was modified expressed in milliseconds since the POSIX Epoch.

stats.ctimeMs
<number> | <bigint>

The timestamp indicating the last time the file status was changed expressed in milliseconds since the POSIX Epoch.

stats.birthtimeMs
<number> | <bigint>

The timestamp indicating the creation time of this file expressed in milliseconds since the POSIX Epoch.

stats.atimeNs
<bigint>

Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time this file was accessed expressed in nanoseconds since the
POSIX Epoch.

stats.mtimeNs
<bigint>

Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time this file was modified expressed in nanoseconds since the
POSIX Epoch.

stats.ctimeNs
<bigint>

Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time the file status was changed expressed in nanoseconds since
the POSIX Epoch.

stats.birthtimeNs
<bigint>

Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the creation time of this file expressed in nanoseconds since the POSIX
Epoch.

stats.atime
<Date>

The timestamp indicating the last time this file was accessed.
stats.mtime
<Date>

The timestamp indicating the last time this file was modified.

stats.ctime
<Date>

The timestamp indicating the last time the file status was changed.

stats.birthtime
<Date>

The timestamp indicating the creation time of this file.

Stat time values


The atimeMs , mtimeMs , ctimeMs , birthtimeMs properties are numeric values that hold the corresponding times in milliseconds. Their precision is platform specific. When bigint: true is
passed into the method that generates the object, the properties will be bigints , otherwise they will be numbers .

The atimeNs , mtimeNs , ctimeNs , birthtimeNs properties are bigints that hold the corresponding times in nanoseconds. They are only present when bigint: true is passed into the
method that generates the object. Their precision is platform specific.

atime , mtime , ctime , and birthtime are Date object alternate representations of the various times. The Date and number values are not connected. Assigning a new number value, or
mutating the Date value, will not be reflected in the corresponding alternate representation.

The times in the stat object have the following semantics:

atime "Access Time": Time when file data last accessed. Changed by the mknod(2) , utimes(2) , and read(2) system calls.

mtime "Modified Time": Time when file data last modified. Changed by the mknod(2) , utimes(2) , and write(2) system calls.

ctime "Change Time": Time when file status was last changed (inode data modification). Changed by the chmod(2) , chown(2) , link(2) , mknod(2) , rename(2) , unlink(2) ,
utimes(2) , read(2) , and write(2) system calls.

birthtime "Birth Time": Time of file creation. Set once when the file is created. On filesystems where birthtime is not available, this field may instead hold either the ctime or 1970-01-
01T00:00Z (ie, Unix epoch timestamp 0 ). This value may be greater than atime or mtime in this case. On Darwin and other FreeBSD variants, also set if the atime is explicitly set to an
earlier value than the current birthtime using the utimes(2) system call.

Prior to Node.js 0.12, the ctime held the birthtime on Windows systems. As of 0.12, ctime is not "creation time", and on Unix systems, it never was.
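
For example, a minimal sketch (assuming a readable 'file.txt' in the current working directory) showing the bigint millisecond and nanosecond variants next to the Date representation:

import { stat } from 'fs/promises';

// 'file.txt' is a placeholder path.
const stats = await stat('file.txt', { bigint: true });

console.log(stats.mtimeMs);             // e.g. 1318289051000n (milliseconds, bigint)
console.log(stats.mtimeNs);             // e.g. 1318289051000000000n (nanoseconds, bigint)
console.log(stats.mtime.toISOString()); // the same instant as a Date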

Class: fs.WriteStream
Extends <stream.Writable>
Instances of <fs.WriteStream> are created and returned using the fs.createWriteStream() function.

Event: 'close'
Emitted when the <fs.WriteStream> 's underlying file descriptor has been closed.

Event: 'open'
fd <integer> Integer file descriptor used by the <fs.WriteStream> .

Emitted when the <fs.WriteStream> 's file is opened.

Event: 'ready'
Emitted when the <fs.WriteStream> is ready to be used.

Fires immediately after 'open' .

writeStream.bytesWritten
The number of bytes written so far. Does not include data that is still queued for writing.

writeStream.path
The path to the file the stream is writing to as specified in the first argument to fs.createWriteStream() . If path is passed as a string, then writeStream.path will be a string. If path is
passed as a <Buffer> , then writeStream.path will be a <Buffer> .

writeStream.pending
<boolean>

This property is true if the underlying file has not been opened yet, i.e. before the 'ready' event is emitted.
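
A minimal sketch tying these events and properties together (the path './output.txt' is a placeholder):

import { createWriteStream } from 'fs';

const stream = createWriteStream('./output.txt');
console.log(stream.pending); // true: the underlying file is not open yet

stream.on('ready', () => {
  stream.write('hello\n');
  stream.end(() => {
    console.log(`wrote ${stream.bytesWritten} bytes to ${stream.path}`);
  });
});

stream.on('close', () => {
  console.log('underlying file descriptor closed');
});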

fs.constants
<Object>

Returns an object containing commonly used constants for file system operations.

FS constants
The following constants are exported by fs.constants .

Not every constant will be available on every operating system.


To use more than one constant, use the bitwise OR | operator.

Example:

import { open, constants } from 'fs';

const {
  O_RDWR,
  O_CREAT,
  O_EXCL
} = constants;

open('/path/to/my/file', O_RDWR | O_CREAT | O_EXCL, (err, fd) => {
  // ...
});

File access constants


The following constants are meant for use with fs.access() .

Constant Description

F_OK Flag indicating that the file is visible to the calling process. This is useful for determining if a file exists, but says nothing about rwx permissions. Default if no mode is specified.

R_OK Flag indicating that the file can be read by the calling process.

W_OK Flag indicating that the file can be written by the calling process.

X_OK Flag indicating that the file can be executed by the calling process. This has no effect on Windows (will behave like fs.constants.F_OK ).
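
For example, a short sketch combining the access constants with fs.access() (the path is a placeholder):

import { access, constants } from 'fs';

// Check that the placeholder file exists and is both readable and writable.
access('/path/to/my/file', constants.F_OK | constants.R_OK | constants.W_OK, (err) => {
  console.log(err ? 'no access' : 'file exists and is readable/writable');
});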

File copy constants


The following constants are meant for use with fs.copyFile() .

Constant Description

COPYFILE_EXCL If present, the copy operation will fail with an error if the destination path already exists.

COPYFILE_FICLONE If present, the copy operation will attempt to create a copy-on-write reflink. If the underlying platform does not support copy-on-write, then a fallback copy
mechanism is used.

COPYFILE_FICLONE_FORCE If present, the copy operation will attempt to create a copy-on-write reflink. If the underlying platform does not support copy-on-write, then the operation will fail with an error.
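
For example, a short sketch using COPYFILE_EXCL (the file names are placeholders):

import { copyFile, constants } from 'fs';

// Fails with EEXIST if destination.txt already exists.
copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL, (err) => {
  if (err) throw err;
  console.log('source.txt was copied to destination.txt');
});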

File open constants


The following constants are meant for use with fs.open() .

Constant Description

O_RDONLY Flag indicating to open a file for read-only access.

O_WRONLY Flag indicating to open a file for write-only access.

O_RDWR Flag indicating to open a file for read-write access.

O_CREAT Flag indicating to create the file if it does not already exist.

O_EXCL Flag indicating that opening a file should fail if the O_CREAT flag is set and the file already exists.

O_NOCTTY Flag indicating that if path identifies a terminal device, opening the path shall not cause that terminal to become the controlling terminal for the process (if the process
does not already have one).

O_TRUNC Flag indicating that if the file exists and is a regular file, and the file is opened successfully for write access, its length shall be truncated to zero.

O_APPEND Flag indicating that data will be appended to the end of the file.

O_DIRECTORY Flag indicating that the open should fail if the path is not a directory.

O_NOATIME Flag indicating reading accesses to the file system will no longer result in an update to the atime information associated with the file. This flag is available on Linux
operating systems only.

O_NOFOLLOW Flag indicating that the open should fail if the path is a symbolic link.

O_SYNC Flag indicating that the file is opened for synchronized I/O with write operations waiting for file integrity.

O_DSYNC Flag indicating that the file is opened for synchronized I/O with write operations waiting for data integrity.

O_SYMLINK Flag indicating to open the symbolic link itself rather than the resource it is pointing to.

O_DIRECT When set, an attempt will be made to minimize caching effects of file I/O.

O_NONBLOCK Flag indicating to open the file in nonblocking mode when possible.

UV_FS_O_FILEMAP When set, a memory file mapping is used to access the file. This flag is available on Windows operating systems only. On other operating systems, this flag is ignored.
File type constants
The following constants are meant for use with the <fs.Stats> object's mode property for determining a file's type.

Constant Description

S_IFMT Bit mask used to extract the file type code.

S_IFREG File type constant for a regular file.

S_IFDIR File type constant for a directory.

S_IFCHR File type constant for a character-oriented device file.

S_IFBLK File type constant for a block-oriented device file.

S_IFIFO File type constant for a FIFO/pipe.

S_IFLNK File type constant for a symbolic link.

S_IFSOCK File type constant for a socket.

File mode constants


The following constants are meant for use with the <fs.Stats> object's mode property for determining the access permissions for a file.

Constant Description

S_IRWXU File mode indicating readable, writable, and executable by owner.

S_IRUSR File mode indicating readable by owner.

S_IWUSR File mode indicating writable by owner.

S_IXUSR File mode indicating executable by owner.

S_IRWXG File mode indicating readable, writable, and executable by group.

S_IRGRP File mode indicating readable by group.

S_IWGRP File mode indicating writable by group.

S_IXGRP File mode indicating executable by group.

S_IRWXO File mode indicating readable, writable, and executable by others.


S_IROTH File mode indicating readable by others.

S_IWOTH File mode indicating writable by others.

S_IXOTH File mode indicating executable by others.
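
As an illustration, the type and permission bits can be extracted from mode with the masks above. A minimal sketch for POSIX systems (the path is a placeholder; these constants may be absent on Windows):

import { statSync, constants } from 'fs';

const { mode } = statSync('/path/to/my/file'); // placeholder path

const isRegularFile = (mode & constants.S_IFMT) === constants.S_IFREG;
const ownerCanWrite = (mode & constants.S_IWUSR) !== 0;

console.log({ isRegularFile, ownerCanWrite });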

Notes
Ordering of callback and promise-based operations
Because they are executed asynchronously by the underlying thread pool, there is no guaranteed ordering when using either the callback or promise-based methods.

For example, the following is prone to error because the fs.stat() operation might complete before the fs.rename() operation:

const fs = require('fs');

fs.rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  console.log('renamed complete');
});
fs.stat('/tmp/world', (err, stats) => {
  if (err) throw err;
  console.log(`stats: ${JSON.stringify(stats)}`);
});

It is important to correctly order the operations by awaiting the results of one before invoking the other:

// Using ESM syntax
import { rename, stat } from 'fs/promises';

const from = '/tmp/hello';
const to = '/tmp/world';

try {
  await rename(from, to);
  const stats = await stat(to);
  console.log(`stats: ${JSON.stringify(stats)}`);
} catch (error) {
  console.error('there was an error:', error.message);
}

// Using CommonJS syntax
const { rename, stat } = require('fs/promises');

(async function(from, to) {
  try {
    await rename(from, to);
    const stats = await stat(to);
    console.log(`stats: ${JSON.stringify(stats)}`);
  } catch (error) {
    console.error('there was an error:', error.message);
  }
})('/tmp/hello', '/tmp/world');

Or, when using the callback APIs, move the fs.stat() call into the callback of the fs.rename() operation:

// Using ESM syntax
import { rename, stat } from 'fs';

rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  stat('/tmp/world', (err, stats) => {
    if (err) throw err;
    console.log(`stats: ${JSON.stringify(stats)}`);
  });
});

// Using CommonJS syntax
const { rename, stat } = require('fs');

rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  stat('/tmp/world', (err, stats) => {
    if (err) throw err;
    console.log(`stats: ${JSON.stringify(stats)}`);
  });
});
File paths
Most fs operations accept file paths that may be specified in the form of a string, a <Buffer> , or a <URL> object using the file: protocol.

String paths
String form paths are interpreted as UTF-8 character sequences identifying the absolute or relative filename. Relative paths will be resolved relative to the current working directory as
determined by calling process.cwd() .

Example using an absolute path on POSIX:

import { open } from 'fs/promises';

let fd;
try {
  fd = await open('/open/some/file.txt', 'r');
  // Do something with the file
} finally {
  await fd.close();
}

Example using a relative path on POSIX (relative to process.cwd() ):

import { open } from 'fs/promises';

let fd;
try {
  fd = await open('file.txt', 'r');
  // Do something with the file
} finally {
  await fd.close();
}

File URL paths


For most fs module functions, the path or filename argument may be passed as a <URL> object using the file: protocol.

import { readFileSync } from 'fs';


readFileSync(new URL('file:///tmp/hello'));

file: URLs are always absolute paths.

Platform-specific considerations
On Windows, file: <URL> s with a host name convert to UNC paths, while file: <URL> s with drive letters convert to local absolute paths. file: <URL> s with neither a host name nor a drive letter will result in an error:

import { readFileSync } from 'fs';

// On Windows:

// - WHATWG file URLs with hostname convert to UNC path
// file://hostname/p/a/t/h/file => \\hostname\p\a\t\h\file
readFileSync(new URL('file://hostname/p/a/t/h/file'));

// - WHATWG file URLs with drive letters convert to absolute path
// file:///C:/tmp/hello => C:\tmp\hello
readFileSync(new URL('file:///C:/tmp/hello'));

// - WHATWG file URLs without hostname must have a drive letter
readFileSync(new URL('file:///notdriveletter/p/a/t/h/file'));
readFileSync(new URL('file:///c/p/a/t/h/file'));
// TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must be absolute

file: <URL> s with drive letters must use : as a separator just after the drive letter. Using another separator will result in an error.

On all other platforms, file: <URL> s with a host name are unsupported and will result in an error:

import { readFileSync } from 'fs';

// On other platforms:

// - WHATWG file URLs with hostname are unsupported
// file://hostname/p/a/t/h/file => throw!
readFileSync(new URL('file://hostname/p/a/t/h/file'));
// TypeError [ERR_INVALID_FILE_URL_PATH]: must be absolute

// - WHATWG file URLs convert to absolute path
// file:///tmp/hello => /tmp/hello
readFileSync(new URL('file:///tmp/hello'));
A file: <URL> having encoded slash characters will result in an error on all platforms:

import { readFileSync } from 'fs';

// On Windows
readFileSync(new URL('file:///C:/p/a/t/h/%2F'));
readFileSync(new URL('file:///C:/p/a/t/h/%2f'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
\ or / characters */

// On POSIX
readFileSync(new URL('file:///p/a/t/h/%2F'));
readFileSync(new URL('file:///p/a/t/h/%2f'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
/ characters */

On Windows, file: <URL> s having encoded backslash will result in an error:

import { readFileSync } from 'fs';

// On Windows
readFileSync(new URL('file:///C:/path/%5C'));
readFileSync(new URL('file:///C:/path/%5c'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
\ or / characters */

Buffer paths
Paths specified using a <Buffer> are useful primarily on certain POSIX operating systems that treat file paths as opaque byte sequences. On such systems, it is possible for a single file path
to contain sub-sequences that use multiple character encodings. As with string paths, <Buffer> paths may be relative or absolute:

Example using an absolute path on POSIX:

import { open } from 'fs/promises';

let fd;
try {
  fd = await open(Buffer.from('/open/some/file.txt'), 'r');
  // Do something with the file
} finally {
  await fd.close();
}

Per-drive working directories on Windows


On Windows, Node.js follows the concept of per-drive working directory. This behavior can be observed when using a drive path without a backslash. For example fs.readdirSync('C:\\')
can potentially return a different result than fs.readdirSync('C:') . For more information, see this MSDN page .

File descriptors
On POSIX systems, for every process, the kernel maintains a table of currently open files and resources. Each open file is assigned a simple numeric identifier called a file descriptor. At the
system-level, all file system operations use these file descriptors to identify and track each specific file. Windows systems use a different but conceptually similar mechanism for tracking
resources. To simplify things for users, Node.js abstracts away the differences between operating systems and assigns all open files a numeric file descriptor.

The callback-based fs.open() and synchronous fs.openSync() methods open a file and allocate a new file descriptor. Once allocated, the file descriptor may be used to read data from, write data to, or request information about the file.

Operating systems limit the number of file descriptors that may be open at any given time so it is critical to close the descriptor when operations are completed. Failure to do so will result in
a memory leak that will eventually cause an application to crash.

import { open, close, fstat } from 'fs';

function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}

open('/open/some/file.txt', 'r', (err, fd) => {
  if (err) throw err;
  try {
    fstat(fd, (err, stat) => {
      if (err) {
        closeFd(fd);
        throw err;
      }
      // use stat
      closeFd(fd);
    });
  } catch (err) {
    closeFd(fd);
    throw err;
  }
});

The promise-based APIs use a <FileHandle> object in place of the numeric file descriptor. These objects are better managed by the system to ensure that resources are not leaked.
However, it is still required that they are closed when operations are completed:

import { open } from 'fs/promises';

let file;
try {
  file = await open('/open/some/file.txt', 'r');
  const stat = await file.stat();
  // use stat
} finally {
  await file.close();
}

Threadpool usage
All callback and promise-based file system APIs ( with the exception of fs.FSWatcher() ) use libuv's threadpool. This can have surprising and negative performance implications for some
applications. See the UV_THREADPOOL_SIZE documentation for more information.

File system flags


The following flags are available wherever the flag option takes a string.

'a' : Open file for appending. The file is created if it does not exist.

'ax' : Like 'a' but fails if the path exists.

'a+' : Open file for reading and appending. The file is created if it does not exist.

'ax+' : Like 'a+' but fails if the path exists.

'as' : Open file for appending in synchronous mode. The file is created if it does not exist.

'as+' : Open file for reading and appending in synchronous mode. The file is created if it does not exist.

'r' : Open file for reading. An exception occurs if the file does not exist.

'r+' : Open file for reading and writing. An exception occurs if the file does not exist.

'rs+' : Open file for reading and writing in synchronous mode. Instructs the operating system to bypass the local file system cache.

This is primarily useful for opening files on NFS mounts as it allows skipping the potentially stale local cache. It has a very real impact on I/O performance so using this flag is not
recommended unless it is needed.

This doesn't turn fs.open() or fsPromises.open() into a synchronous blocking call. If synchronous operation is desired, something like fs.openSync() should be used.

'w' : Open file for writing. The file is created (if it does not exist) or truncated (if it exists).

'wx' : Like 'w' but fails if the path exists.

'w+' : Open file for reading and writing. The file is created (if it does not exist) or truncated (if it exists).

'wx+' : Like 'w+' but fails if the path exists.

flag can also be a number as documented by open(2) ; commonly used constants are available from fs.constants . On Windows, flags are translated to their equivalent ones where
applicable, e.g. O_WRONLY to FILE_GENERIC_WRITE , or O_EXCL|O_CREAT to CREATE_NEW , as accepted by CreateFileW .

The exclusive flag 'x' ( O_EXCL flag in open(2) ) causes the operation to return an error if the path already exists. On POSIX, if the path is a symbolic link, using O_EXCL returns an error
even if the link is to a path that does not exist. The exclusive flag might not work with network file systems.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

Modifying a file rather than replacing it may require the flag option to be set to 'r+' rather than the default 'w' .

The behavior of some flags is platform-specific. As such, opening a directory on macOS and Linux with the 'a+' flag, as in the example below, will return an error. In contrast, on Windows and FreeBSD, a file descriptor or a FileHandle will be returned.

// macOS and Linux
fs.open('<directory>', 'a+', (err, fd) => {
  // => [Error: EISDIR: illegal operation on a directory, open <directory>]
});

// Windows and FreeBSD
fs.open('<directory>', 'a+', (err, fd) => {
  // => null, <fd>
});

On Windows, opening an existing hidden file using the 'w' flag (either through fs.open() or fs.writeFile() or fsPromises.open() ) will fail with EPERM . Existing hidden files can be
opened for writing with the 'r+' flag.

A call to fs.ftruncate() or filehandle.truncate() can be used to reset the file contents.
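
For example, a sketch of the exclusive 'wx' flag described above, creating a file only if it does not already exist (the path is a placeholder):

import { open } from 'fs/promises';

let fd;
try {
  fd = await open('/tmp/new-file.txt', 'wx'); // placeholder path
  await fd.write('created exactly once\n');
} catch (err) {
  if (err.code === 'EEXIST') console.log('file already exists');
  else throw err;
} finally {
  await fd?.close();
}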


Node.js v15.12.0 Documentation

Child process
Stability: 2 - Stable

Source Code: lib/child_process.js

The child_process module provides the ability to spawn subprocesses in a manner that is similar, but not identical, to popen(3) . This capability is primarily provided by the
child_process.spawn() function:

const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});

By default, pipes for stdin , stdout , and stderr are established between the parent Node.js process and the spawned subprocess. These pipes have limited (and platform-specific)
capacity. If the subprocess writes to stdout in excess of that limit without the output being captured, the subprocess blocks waiting for the pipe buffer to accept more data. This is identical to
the behavior of pipes in the shell. Use the { stdio: 'ignore' } option if the output will not be consumed.
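
For example, a minimal sketch that discards the subprocess output entirely:

const { spawn } = require('child_process');

// No stdio pipes are created, so the child can never block on a full pipe buffer.
const child = spawn('ls', ['-lh', '/usr'], { stdio: 'ignore' });

child.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});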

The command lookup is performed using the options.env.PATH environment variable if it is in the options object. Otherwise, process.env.PATH is used.

On Windows, environment variables are case-insensitive. Node.js lexicographically sorts the env keys and uses the first one that case-insensitively matches. Only the first (in lexicographic order) entry will be passed to the subprocess. This might lead to issues on Windows when passing objects to the env option that have multiple variants of the same key, such as PATH and Path .

The child_process.spawn() method spawns the child process asynchronously, without blocking the Node.js event loop. The child_process.spawnSync() function provides equivalent
functionality in a synchronous manner that blocks the event loop until the spawned process either exits or is terminated.

For convenience, the child_process module provides a handful of synchronous and asynchronous alternatives to child_process.spawn() and child_process.spawnSync() . Each of these alternatives is implemented on top of child_process.spawn() or child_process.spawnSync() .

child_process.exec() : spawns a shell and runs a command within that shell, passing the stdout and stderr to a callback function when complete.

child_process.execFile() : similar to child_process.exec() except that it spawns the command directly without first spawning a shell by default.

child_process.fork() : spawns a new Node.js process and invokes a specified module with an IPC communication channel established that allows sending messages between parent
and child.
child_process.execSync() : a synchronous version of child_process.exec() that will block the Node.js event loop.

child_process.execFileSync() : a synchronous version of child_process.execFile() that will block the Node.js event loop.

For certain use cases, such as automating shell scripts, the synchronous counterparts may be more convenient. In many cases, however, the synchronous methods can have significant
impact on performance due to stalling the event loop while spawned processes complete.

Asynchronous process creation


The child_process.spawn() , child_process.fork() , child_process.exec() , and child_process.execFile() methods all follow the idiomatic asynchronous programming pattern
typical of other Node.js APIs.

Each of the methods returns a ChildProcess instance. These objects implement the Node.js EventEmitter API, allowing the parent process to register listener functions that are called
when certain events occur during the life cycle of the child process.

The child_process.exec() and child_process.execFile() methods additionally allow for an optional callback function to be specified that is invoked when the child process
terminates.

Spawning .bat and .cmd files on Windows


The importance of the distinction between child_process.exec() and child_process.execFile() can vary based on platform. On Unix-type operating systems (Unix, Linux, macOS)
child_process.execFile() can be more efficient because it does not spawn a shell by default. On Windows, however, .bat and .cmd files are not executable on their own without a
terminal, and therefore cannot be launched using child_process.execFile() . When running on Windows, .bat and .cmd files can be invoked using child_process.spawn() with the
shell option set, with child_process.exec() , or by spawning cmd.exe and passing the .bat or .cmd file as an argument (which is what the shell option and child_process.exec()
do). In any case, if the script filename contains spaces it needs to be quoted.

// On Windows Only...
const { spawn } = require('child_process');
const bat = spawn('cmd.exe', ['/c', 'my.bat']);

bat.stdout.on('data', (data) => {
  console.log(data.toString());
});

bat.stderr.on('data', (data) => {
  console.error(data.toString());
});

bat.on('exit', (code) => {
  console.log(`Child exited with code ${code}`);
});

// OR...
const { exec, spawn } = require('child_process');
exec('my.bat', (err, stdout, stderr) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(stdout);
});

// Script with spaces in the filename:
const bat = spawn('"my script.cmd"', ['a', 'b'], { shell: true });
// or:
exec('"my script.cmd" a b', (err, stdout, stderr) => {
  // ...
});

child_process.exec(command[, options][, callback])


command <string> The command to run, with space-separated arguments.

options <Object>
cwd <string> Current working directory of the child process. Default: process.cwd() .

env <Object> Environment key-value pairs. Default: process.env .

encoding <string> Default: 'utf8'


shell <string> Shell to execute the command with. See Shell requirements and Default Windows shell . Default: '/bin/sh' on Unix, process.env.ComSpec on Windows.

signal <AbortSignal> allows aborting the child process using an AbortSignal.

timeout <number> Default: 0

maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer
and Unicode . Default: 1024 * 1024 .

killSignal <string> | <integer> Default: 'SIGTERM'

uid <number> Sets the user identity of the process (see setuid(2) ).

gid <number> Sets the group identity of the process (see setgid(2) ).

windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .

callback <Function> called with the output when process terminates.


error <Error>

stdout <string> | <Buffer>

stderr <string> | <Buffer>

Returns: <ChildProcess>

Spawns a shell then executes the command within that shell, buffering any generated output. The command string passed to the exec function is processed directly by the shell and special
characters (vary based on shell ) need to be dealt with accordingly:

const { exec } = require('child_process');

exec('"/path/to/test file/test.sh" arg1 arg2');
// Double quotes are used so that the space in the path is not interpreted as
// a delimiter of multiple arguments.

exec('echo "The \\$HOME variable is $HOME"');
// The $HOME variable is escaped in the first instance, but not in the second.

Never pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.

If a callback function is provided, it is called with the arguments (error, stdout, stderr) . On success, error will be null . On error, error will be an instance of Error . The
error.code property will be the exit code of the process. By convention, any exit code other than 0 indicates an error. error.signal will be the signal that terminated the process.

The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings
to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer' , or an unrecognized character
encoding, Buffer objects will be passed to the callback instead.
const { exec } = require('child_process');

exec('cat *.js missing_file | wc -l', (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
  console.error(`stderr: ${stderr}`);
});

If timeout is greater than 0 , the parent will send the signal identified by the killSignal property (the default is 'SIGTERM' ) if the child runs longer than timeout milliseconds.

Unlike the exec(3) POSIX system call, child_process.exec() does not replace the existing process and uses a shell to execute the command.

If this method is invoked as its util.promisify() ed version, it returns a Promise for an Object with stdout and stderr properties. The returned ChildProcess instance is attached to
the Promise as a child property. In case of an error (including any error resulting in an exit code other than 0), a rejected promise is returned, with the same error object given in the
callback, but with two additional properties stdout and stderr .

const util = require('util');
const exec = util.promisify(require('child_process').exec);

async function lsExample() {
  const { stdout, stderr } = await exec('ls');
  console.log('stdout:', stdout);
  console.error('stderr:', stderr);
}
lsExample();

If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an
AbortError :

const { exec } = require('child_process');

const controller = new AbortController();
const { signal } = controller;
const child = exec('grep ssh', { signal }, (error) => {
  console.log(error); // an AbortError
});
controller.abort();
child_process.execFile(file[, args][, options][, callback])
file <string> The name or path of the executable file to run.

args <string[]> List of string arguments.

options <Object>
cwd <string> Current working directory of the child process.

env <Object> Environment key-value pairs. Default: process.env .

encoding <string> Default: 'utf8'

timeout <number> Default: 0

maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer
and Unicode . Default: 1024 * 1024 .

killSignal <string> | <integer> Default: 'SIGTERM'

uid <number> Sets the user identity of the process (see setuid(2) ).

gid <number> Sets the group identity of the process (see setgid(2) ).

windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .

windowsVerbatimArguments <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. Default: false .

shell <boolean> | <string> If true , runs command inside of a shell. Uses '/bin/sh' on Unix, and process.env.ComSpec on Windows. A different shell can be specified as a
string. See Shell requirements and Default Windows shell . Default: false (no shell).

signal <AbortSignal> allows aborting the child process using an AbortSignal.

callback <Function> Called with the output when process terminates.


error <Error>

stdout <string> | <Buffer>

stderr <string> | <Buffer>

Returns: <ChildProcess>

The child_process.execFile() function is similar to child_process.exec() except that it does not spawn a shell by default. Rather, the specified executable file is spawned directly as a
new process making it slightly more efficient than child_process.exec() .

The same options as child_process.exec() are supported. Since a shell is not spawned, behaviors such as I/O redirection and file globbing are not supported.

const { execFile } = require('child_process');

const child = execFile('node', ['--version'], (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  console.log(stdout);
});

The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings
to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer' , or an unrecognized character
encoding, Buffer objects will be passed to the callback instead.

If this method is invoked as its util.promisify() ed version, it returns a Promise for an Object with stdout and stderr properties. The returned ChildProcess instance is attached to
the Promise as a child property. In case of an error (including any error resulting in an exit code other than 0), a rejected promise is returned, with the same error object given in the
callback, but with two additional properties stdout and stderr .

const util = require('util');
const execFile = util.promisify(require('child_process').execFile);

async function getVersion() {
  const { stdout } = await execFile('node', ['--version']);
  console.log(stdout);
}
getVersion();

If the shell option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.

If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an
AbortError :

const { execFile } = require('child_process');

const controller = new AbortController();
const { signal } = controller;
const child = execFile('node', ['--version'], { signal }, (error) => {
  console.log(error); // an AbortError
});
controller.abort();

child_process.fork(modulePath[, args][, options])


modulePath <string> The module to run in the child.

args <string[]> List of string arguments.

options <Object>
cwd <string> Current working directory of the child process.

detached <boolean> Prepare child to run independently of its parent process. Specific behavior depends on the platform (see options.detached ).

env <Object> Environment key-value pairs. Default: process.env .

execPath <string> Executable used to create the child process.

execArgv <string[]> List of string arguments passed to the executable. Default: process.execArgv .

gid <number> Sets the group identity of the process (see setgid(2) ).

serialization <string> Specify the kind of serialization used for sending messages between processes. Possible values are 'json' and 'advanced' . See Advanced
serialization for more details. Default: 'json' .
signal <AbortSignal> Allows closing the child process using an AbortSignal.

killSignal <string> The signal value to be used when the spawned process will be killed by the abort signal. Default: 'SIGTERM' .

silent <boolean> If true , stdin, stdout, and stderr of the child will be piped to the parent, otherwise they will be inherited from the parent, see the 'pipe' and 'inherit'
options for child_process.spawn() 's stdio for more details. Default: false .
stdio <Array> | <string> See child_process.spawn() 's stdio . When this option is provided, it overrides silent . If the array variant is used, it must contain exactly one item
with value 'ipc' or an error will be thrown. For instance [0, 1, 2, 'ipc'] .
uid <number> Sets the user identity of the process (see setuid(2) ).

windowsVerbatimArguments <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. Default: false .

Returns: <ChildProcess>

The child_process.fork() method is a special case of child_process.spawn() used specifically to spawn new Node.js processes. Like child_process.spawn() , a ChildProcess object is
returned. The returned ChildProcess will have an additional communication channel built-in that allows messages to be passed back and forth between the parent and child. See
subprocess.send() for details.
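
A minimal sketch of that channel, where 'child.js' is a hypothetical child module that listens with process.on('message', ...) and replies with process.send(...) :

const { fork } = require('child_process');

// 'child.js' is a hypothetical child module that echoes messages back.
const child = fork('child.js');

child.on('message', (msg) => {
  console.log('message from child:', msg);
  child.disconnect(); // close the IPC channel so both processes can exit
});

child.send({ hello: 'world' });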

Keep in mind that spawned Node.js child processes are independent of the parent with the exception of the IPC communication channel that is established between the two. Each process has its own memory, with its own V8 instance. Because of the additional resource allocations required, spawning a large number of child Node.js processes is not recommended.

By default, child_process.fork() will spawn new Node.js instances using the process.execPath of the parent process. The execPath property in the options object allows for an
alternative execution path to be used.

Node.js processes launched with a custom execPath will communicate with the parent process using the file descriptor (fd) identified using the environment variable NODE_CHANNEL_FD on
the child process.

Unlike the fork(2) POSIX system call, child_process.fork() does not clone the current process.

The shell option available in child_process.spawn() is not supported by child_process.fork() and will be ignored if set.

If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an
AbortError :
if (process.argv[2] === 'child') {
  setTimeout(() => {
    console.log(`Hello from ${process.argv[2]}!`);
  }, 1_000);
} else {
  const { fork } = require('child_process');
  const controller = new AbortController();
  const { signal } = controller;
  const child = fork(__filename, ['child'], { signal });
  child.on('error', (err) => {
    // This will be called with err being an AbortError if the controller aborts
  });
  controller.abort(); // Stops the child process
}

child_process.spawn(command[, args][, options])


command <string> The command to run.

args <string[]> List of string arguments.

options <Object>

cwd <string> Current working directory of the child process.

env <Object> Environment key-value pairs. Default: process.env .

argv0 <string> Explicitly set the value of argv[0] sent to the child process. This will be set to command if not specified.

stdio <Array> | <string> Child's stdio configuration (see options.stdio ).

detached <boolean> Prepare child to run independently of its parent process. Specific behavior depends on the platform (see options.detached ).

uid <number> Sets the user identity of the process (see setuid(2) ).

gid <number> Sets the group identity of the process (see setgid(2) ).

serialization <string> Specify the kind of serialization used for sending messages between processes. Possible values are 'json' and 'advanced' . See Advanced
serialization for more details. Default: 'json' .

shell <boolean> | <string> If true , runs command inside of a shell. Uses '/bin/sh' on Unix, and process.env.ComSpec on Windows. A different shell can be specified as a
string. See Shell requirements and Default Windows shell . Default: false (no shell).

windowsVerbatimArguments <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. This is set to true automatically when shell is specified and
is CMD. Default: false .
windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .

signal <AbortSignal> allows aborting the child process using an AbortSignal.

killSignal <string> The signal value to be used when the spawned process will be killed by the abort signal. Default: 'SIGTERM' .

Returns: <ChildProcess>

The child_process.spawn() method spawns a new process using the given command , with command-line arguments in args . If omitted, args defaults to an empty array.

If the shell option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.

A third argument may be used to specify additional options, with these defaults:

const defaults = {
cwd: undefined,
env: process.env
};

Use cwd to specify the working directory from which the process is spawned. If not given, the default is to inherit the current working directory. If given, but the path does not exist, the child
process emits an ENOENT error and exits immediately. ENOENT is also emitted when the command does not exist.

Use env to specify environment variables that will be visible to the new process, the default is process.env .

undefined values in env will be ignored.

Example of running ls -lh /usr , capturing stdout , stderr , and the exit code:

const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
Example: A very elaborate way to run ps ax | grep ssh

const { spawn } = require('child_process');
const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);

ps.stdout.on('data', (data) => {
  grep.stdin.write(data);
});

ps.stderr.on('data', (data) => {
  console.error(`ps stderr: ${data}`);
});

ps.on('close', (code) => {
  if (code !== 0) {
    console.log(`ps process exited with code ${code}`);
  }
  grep.stdin.end();
});

grep.stdout.on('data', (data) => {
  console.log(data.toString());
});

grep.stderr.on('data', (data) => {
  console.error(`grep stderr: ${data}`);
});

grep.on('close', (code) => {
  if (code !== 0) {
    console.log(`grep process exited with code ${code}`);
  }
});

Example of checking for failed spawn :

const { spawn } = require('child_process');

const subprocess = spawn('bad_command');
subprocess.on('error', (err) => {
  console.error('Failed to start subprocess.');
});

Certain platforms (macOS, Linux) will use the value of argv[0] for the process title while others (Windows, SunOS) will use command .

Node.js currently overwrites argv[0] with process.execPath on startup, so process.argv[0] in a Node.js child process will not match the argv0 parameter passed to spawn from the parent; retrieve it with the process.argv0 property instead.

If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an
AbortError :

const { spawn } = require('child_process');

const controller = new AbortController();
const { signal } = controller;
const grep = spawn('grep', ['ssh'], { signal });
grep.on('error', (err) => {
  // This will be called with err being an AbortError if the controller aborts
});
controller.abort(); // Stops the child process

options.detached
On Windows, setting options.detached to true makes it possible for the child process to continue running after the parent exits. The child will have its own console window. Once enabled
for a child process, it cannot be disabled.

On non-Windows platforms, if options.detached is set to true , the child process will be made the leader of a new process group and session. Child processes may continue running after
the parent exits regardless of whether they are detached or not. See setsid(2) for more information.

By default, the parent will wait for the detached child to exit. To prevent the parent from waiting for a given subprocess to exit, use the subprocess.unref() method. Doing so will cause
the parent's event loop to not include the child in its reference count, allowing the parent to exit independently of the child, unless there is an established IPC channel between the child and
the parent.

When using the detached option to start a long-running process, the process will not stay running in the background after the parent exits unless it is provided with a stdio configuration
that is not connected to the parent. If the parent's stdio is inherited, the child will remain attached to the controlling terminal.

Example of a long-running process, by detaching and also ignoring its parent stdio file descriptors, in order to ignore the parent's termination:
const { spawn } = require('child_process');

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore'
});

subprocess.unref();

Alternatively one can redirect the child process' output into files:

const fs = require('fs');
const { spawn } = require('child_process');
const out = fs.openSync('./out.log', 'a');
const err = fs.openSync('./out.log', 'a');

const subprocess = spawn('prg', [], {
  detached: true,
  stdio: [ 'ignore', out, err ]
});

subprocess.unref();

options.stdio
The options.stdio option is used to configure the pipes that are established between the parent and child process. By default, the child's stdin, stdout, and stderr are redirected to
corresponding subprocess.stdin , subprocess.stdout , and subprocess.stderr streams on the ChildProcess object. This is equivalent to setting the options.stdio equal to ['pipe',
'pipe', 'pipe'] .

For convenience, options.stdio may be one of the following strings:

'pipe' : equivalent to ['pipe', 'pipe', 'pipe'] (the default)

'overlapped' : equivalent to ['overlapped', 'overlapped', 'overlapped']

'ignore' : equivalent to ['ignore', 'ignore', 'ignore']

'inherit' : equivalent to ['inherit', 'inherit', 'inherit'] or [0, 1, 2]

Otherwise, the value of options.stdio is an array where each index corresponds to an fd in the child. The fds 0, 1, and 2 correspond to stdin, stdout, and stderr, respectively. Additional fds
can be specified to create additional pipes between the parent and child. The value is one of the following:
1. 'pipe' : Create a pipe between the child process and the parent process. The parent end of the pipe is exposed to the parent as a property on the child_process object as
subprocess.stdio[fd] . Pipes created for fds 0, 1, and 2 are also available as subprocess.stdin , subprocess.stdout and subprocess.stderr , respectively.

2. 'overlapped' : Same as 'pipe' except that the FILE_FLAG_OVERLAPPED flag is set on the handle. This is necessary for overlapped I/O on the child process's stdio handles. See the docs
for more details. This is exactly the same as 'pipe' on non-Windows systems.

3. 'ipc' : Create an IPC channel for passing messages/file descriptors between parent and child. A ChildProcess may have at most one IPC stdio file descriptor. Setting this option
enables the subprocess.send() method. If the child is a Node.js process, the presence of an IPC channel will enable process.send() and process.disconnect() methods, as well as
'disconnect' and 'message' events within the child.

Accessing the IPC channel fd in any way other than process.send() or using the IPC channel with a child process that is not a Node.js instance is not supported.

4. 'ignore' : Instructs Node.js to ignore the fd in the child. While Node.js will always open fds 0, 1, and 2 for the processes it spawns, setting the fd to 'ignore' will cause Node.js to open
/dev/null and attach it to the child's fd.

5. 'inherit' : Pass through the corresponding stdio stream to/from the parent process. In the first three positions, this is equivalent to process.stdin , process.stdout , and
process.stderr , respectively. In any other position, equivalent to 'ignore' .

6. <Stream> object: Share a readable or writable stream that refers to a tty, file, socket, or a pipe with the child process. The stream's underlying file descriptor is duplicated in the child
process to the fd that corresponds to the index in the stdio array. The stream must have an underlying descriptor (file streams do not until the 'open' event has occurred).

7. Positive integer: The integer value is interpreted as a file descriptor that is currently open in the parent process. It is shared with the child process, similar to how <Stream> objects can
be shared. Passing sockets is not supported on Windows.

8. null , undefined : Use default value. For stdio fds 0, 1, and 2 (in other words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the default is 'ignore' .

const { spawn } = require('child_process');

// Child will use parent's stdios.
spawn('prg', [], { stdio: 'inherit' });

// Spawn child sharing only stderr.
spawn('prg', [], { stdio: ['pipe', 'pipe', process.stderr] });

// Open an extra fd=4, to interact with programs presenting a
// startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });

It is worth noting that when an IPC channel is established between the parent and child processes, and the child is a Node.js process, the child is launched with the IPC channel unreferenced (using
unref() ) until the child registers an event handler for the 'disconnect' event or the 'message' event. This allows the child to exit normally without the process being held open by the open IPC
channel.
On Unix-like operating systems, the child_process.spawn() method performs memory operations synchronously before decoupling the event loop from the child. Applications with a large
memory footprint may find frequent child_process.spawn() calls to be a bottleneck. For more information, see V8 issue 7381 .

See also: child_process.exec() and child_process.fork() .

Synchronous process creation


The child_process.spawnSync() , child_process.execSync() , and child_process.execFileSync() methods are synchronous and will block the Node.js event loop, pausing execution of
any additional code until the spawned process exits.

Blocking calls like these are mostly useful for simplifying general-purpose scripting tasks and for simplifying the loading/processing of application configuration at startup.

child_process.execFileSync(file[, args][, options])


file <string> The name or path of the executable file to run.

args <string[]> List of string arguments.

options <Object>
cwd <string> Current working directory of the child process.

input <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. Supplying this value will override stdio[0] .

stdio <string> | <Array> Child's stdio configuration. stderr by default will be output to the parent process' stderr unless stdio is specified. Default: 'pipe' .

env <Object> Environment key-value pairs. Default: process.env .

uid <number> Sets the user identity of the process (see setuid(2) ).

gid <number> Sets the group identity of the process (see setgid(2) ).

timeout <number> In milliseconds the maximum amount of time the process is allowed to run. Default: undefined .

killSignal <string> | <integer> The signal value to be used when the spawned process will be killed. Default: 'SIGTERM' .

maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated. See caveat at maxBuffer and Unicode . Default:
1024 * 1024 .

encoding <string> The encoding used for all stdio inputs and outputs. Default: 'buffer' .

windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .

shell <boolean> | <string> If true , runs command inside of a shell. Uses '/bin/sh' on Unix, and process.env.ComSpec on Windows. A different shell can be specified as a
string. See Shell requirements and Default Windows shell . Default: false (no shell).

Returns: <Buffer> | <string> The stdout from the command.

The child_process.execFileSync() method is generally identical to child_process.execFile() with the exception that the method will not return until the child process has fully closed.
When a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited.
If the child process intercepts and handles the SIGTERM signal and does not exit, the parent process will still wait until the child process has exited.

If the process times out or has a non-zero exit code, this method will throw an Error that will include the full result of the underlying child_process.spawnSync() .

If the shell option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
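
A short usage sketch:

const { execFileSync } = require('child_process');

// Returns a string because an encoding is specified; otherwise a Buffer.
const version = execFileSync('node', ['--version'], { encoding: 'utf8' });
console.log(version);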

child_process.execSync(command[, options])
command <string> The command to run.

options <Object>
cwd <string> Current working directory of the child process.

input <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. Supplying this value will override stdio[0] .

stdio <string> | <Array> Child's stdio configuration. stderr by default will be output to the parent process' stderr unless stdio is specified. Default: 'pipe' .

env <Object> Environment key-value pairs. Default: process.env .

shell <string> Shell to execute the command with. See Shell requirements and Default Windows shell . Default: '/bin/sh' on Unix, process.env.ComSpec on Windows.

uid <number> Sets the user identity of the process. (See setuid(2) ).

gid <number> Sets the group identity of the process. (See setgid(2) ).

timeout <number> In milliseconds the maximum amount of time the process is allowed to run. Default: undefined .

killSignal <string> | <integer> The signal value to be used when the spawned process will be killed. Default: 'SIGTERM' .

maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer
and Unicode . Default: 1024 * 1024 .

encoding <string> The encoding used for all stdio inputs and outputs. Default: 'buffer' .

windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .

Returns: <Buffer> | <string> The stdout from the command.

The child_process.execSync() method is generally identical to child_process.exec() with the exception that the method will not return until the child process has fully closed. When a
timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. If the child process intercepts and handles the SIGTERM signal and
doesn't exit, the parent process will wait until the child process has exited.

If the process times out or has a non-zero exit code, this method will throw. The Error object will contain the entire result from child_process.spawnSync() .

Never pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
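
A short usage sketch (the command assumes a Unix-like shell):

const { execSync } = require('child_process');

// Blocks the event loop until the shell command has completed.
const stdout = execSync('ls -lh /usr', { encoding: 'utf8' });
console.log(stdout);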

child_process.spawnSync(command[, args][, options])


command <string> The command to run.

args <string[]> List of string arguments.


options <Object>
cwd <string> Current working directory of the child process.

input <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. Supplying this value will override stdio[0] .

argv0 <string> Explicitly set the value of argv[0] sent to the child process. This will be set to command if not specified.

stdio <string> | <Array> Child's stdio configuration.

env <Object> Environment key-value pairs. Default: process.env .

uid <number> Sets the user identity of the process (see setuid(2) ).

gid <number> Sets the group identity of the process (see setgid(2) ).

timeout <number> In milliseconds the maximum amount of time the process is allowed to run. Default: undefined .

killSignal <string> | <integer> The signal value to be used when the spawned process will be killed. Default: 'SIGTERM' .

maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer
and Unicode . Default: 1024 * 1024 .
encoding <string> The encoding used for all stdio inputs and outputs. Default: 'buffer' .

shell <boolean> | <string> If true , runs command inside of a shell. Uses '/bin/sh' on Unix, and process.env.ComSpec on Windows. A different shell can be specified as a
string. See Shell requirements and Default Windows shell . Default: false (no shell).
windowsVerbatimArguments <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. This is set to true automatically when shell is specified and
is CMD. Default: false .

windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .

Returns: <Object>
pid <number> Pid of the child process.

output <Array> Array of results from stdio output.

stdout <Buffer> | <string> The contents of output[1] .

stderr <Buffer> | <string> The contents of output[2] .

status <number> | <null> The exit code of the subprocess, or null if the subprocess terminated due to a signal.

signal <string> | <null> The signal used to kill the subprocess, or null if the subprocess did not terminate due to a signal.

error <Error> The error object if the child process failed or timed out.

The child_process.spawnSync() method is generally identical to child_process.spawn() with the exception that the function will not return until the child process has fully closed. When
a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. If the process intercepts and handles the SIGTERM signal and
doesn't exit, the parent process will wait until the child process has exited.

If the shell option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
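
As a sketch (assuming grep is available on the PATH): spawnSync() returns its result object synchronously rather than throwing, so errors are checked on the returned value:

const { spawnSync } = require('child_process');

// No shell is involved, so the arguments are passed to grep as-is.
const result = spawnSync('grep', ['ssh'], {
  input: 'ssh-agent\nbash\n', // Becomes the child's stdin.
  encoding: 'utf8'
});

if (result.error) {
  // Set when the process could not be spawned at all.
  console.error(result.error);
} else {
  console.log(`status: ${result.status}`);
  console.log(`stdout: ${result.stdout}`);
}
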
Class: ChildProcess
Extends: <EventEmitter>

Instances of the ChildProcess represent spawned child processes.

Instances of ChildProcess are not intended to be created directly. Rather, use the child_process.spawn() , child_process.exec() , child_process.execFile() , or
child_process.fork() methods to create instances of ChildProcess .

Event: 'close'
code <number> The exit code if the child exited on its own.

signal <string> The signal by which the child process was terminated.

The 'close' event is emitted when the stdio streams of a child process have been closed. This is distinct from the 'exit' event, since multiple processes might share the same stdio
streams.

const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process close all stdio with code ${code}`);
});

ls.on('exit', (code) => {
  console.log(`child process exited with code ${code}`);
});

Event: 'disconnect'
The 'disconnect' event is emitted after calling the subprocess.disconnect() method in the parent process or process.disconnect() in the child process. After disconnecting it is no longer
possible to send or receive messages, and the subprocess.connected property is false .

Event: 'error'
err <Error> The error.
The 'error' event is emitted whenever:

1. The process could not be spawned, or
2. The process could not be killed, or
3. Sending a message to the child process failed.
The 'exit' event may or may not fire after an error has occurred. When listening to both the 'exit' and 'error' events, guard against accidentally invoking handler functions multiple
times.

See also subprocess.kill() and subprocess.send() .
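
A minimal sketch of such a guard ( some-command here is a hypothetical executable):

const { spawn } = require('child_process');

const child = spawn('some-command');

let settled = false;
function finish(reason) {
  if (settled) return; // 'error' and 'exit' may both fire; run once.
  settled = true;
  console.log(`child finished: ${reason}`);
}

child.on('error', (err) => finish(`error: ${err.message}`));
child.on('exit', (code, signal) => finish(`exit: ${code ?? signal}`));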

Event: 'exit'
code <number> The exit code if the child exited on its own.

signal <string> The signal by which the child process was terminated.

The 'exit' event is emitted after the child process ends. If the process exited, code is the final exit code of the process, otherwise null . If the process terminated due to receipt of a signal,
signal is the string name of the signal, otherwise null . One of the two will always be non- null .

When the 'exit' event is triggered, child process stdio streams might still be open.

Node.js establishes signal handlers for SIGINT and SIGTERM and Node.js processes will not terminate immediately due to receipt of those signals. Rather, Node.js will perform a sequence of
cleanup actions and then will re-raise the handled signal.

See waitpid(2) .

Event: 'message'
message <Object> A parsed JSON object or primitive value.

sendHandle <Handle> A net.Socket or net.Server object, or undefined.

The 'message' event is triggered when a child process uses process.send() to send messages.

The message goes through serialization and parsing. The resulting message might not be the same as what is originally sent.

If the serialization option was set to 'advanced' when spawning the child process, the message argument can contain data that JSON is not able to represent. See Advanced
serialization for more details.

Event: 'spawn'
The 'spawn' event is emitted once the child process has spawned successfully.

If emitted, the 'spawn' event comes before all other events and before any data is received via stdout or stderr .
The 'spawn' event will fire regardless of whether an error occurs within the spawned process. For example, if bash some-command spawns successfully, the 'spawn' event will fire, though
bash may fail to spawn some-command . This caveat also applies when using { shell: true } .
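
For illustration (with some-command as a hypothetical program), 'spawn' reports only that bash itself started:

const { spawn } = require('child_process');

const child = spawn('bash', ['-c', 'some-command']);

child.on('spawn', () => {
  // Fires because bash itself started, even if some-command then fails.
  console.log('bash spawned successfully');
});

child.on('error', (err) => {
  // Fires instead if bash itself could not be started.
  console.error(err);
});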

subprocess.channel
<Object> A pipe representing the IPC channel to the child process.

The subprocess.channel property is a reference to the child's IPC channel. If no IPC channel currently exists, this property is undefined .

subprocess.channel.ref()
This method makes the IPC channel keep the event loop of the parent process running if .unref() has been called before.

subprocess.channel.unref()
This method makes the IPC channel not keep the event loop of the parent process running, and lets it finish even while the channel is open.

subprocess.connected
<boolean> Set to false after subprocess.disconnect() is called.

The subprocess.connected property indicates whether it is still possible to send and receive messages from a child process. When subprocess.connected is false , it is no longer possible
to send or receive messages.

subprocess.disconnect()
Closes the IPC channel between parent and child, allowing the child to exit gracefully once there are no other connections keeping it alive. After calling this method the
subprocess.connected and process.connected properties in both the parent and child (respectively) will be set to false , and it will be no longer possible to pass messages between the
processes.

The 'disconnect' event will be emitted when there are no messages in the process of being received. This will most often be triggered immediately after calling subprocess.disconnect() .

When the child process is a Node.js instance (e.g. spawned using child_process.fork() ), the process.disconnect() method can be invoked within the child process to close the IPC
channel as well.
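
A minimal sketch, assuming a hypothetical child script sub.js spawned with child_process.fork() :

const { fork } = require('child_process');

const subprocess = fork(`${__dirname}/sub.js`);

subprocess.on('disconnect', () => {
  console.log(`connected: ${subprocess.connected}`); // false
});

// Close the channel once the last pending message has been handed off.
subprocess.send('last message', (err) => {
  if (err) console.error(err);
  subprocess.disconnect();
});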

subprocess.exitCode
<integer>

The subprocess.exitCode property indicates the exit code of the child process. If the child process is still running, the field will be null .

subprocess.kill([signal])
signal <number> | <string>
Returns: <boolean>

The subprocess.kill() method sends a signal to the child process. If no argument is given, the process will be sent the 'SIGTERM' signal. See signal(7) for a list of available signals. This
function returns true if kill(2) succeeds, and false otherwise.

const { spawn } = require('child_process');
const grep = spawn('grep', ['ssh']);

grep.on('close', (code, signal) => {
  console.log(
    `child process terminated due to receipt of signal ${signal}`);
});

// Send SIGHUP to process.
grep.kill('SIGHUP');

The ChildProcess object may emit an 'error' event if the signal cannot be delivered. Sending a signal to a child process that has already exited is not an error but may have unforeseen
consequences. Specifically, if the process identifier (PID) has been reassigned to another process, the signal will be delivered to that process instead which can have unexpected results.

While the function is called kill , the signal delivered to the child process may not actually terminate the process.

See kill(2) for reference.

On Linux, child processes of child processes will not be terminated when attempting to kill their parent. This is likely to happen when running a new process in a shell or with the use of the
shell option of ChildProcess :

'use strict';
const { spawn } = require('child_process');

const subprocess = spawn(
  'sh',
  [
    '-c',
    `node -e "setInterval(() => {
      console.log(process.pid, 'is alive')
    }, 500);"`,
  ], {
    stdio: ['inherit', 'inherit', 'inherit']
  }
);

setTimeout(() => {
  subprocess.kill(); // Does not terminate the Node.js process in the shell.
}, 2000);

subprocess.killed
<boolean> Set to true after subprocess.kill() is used to successfully send a signal to the child process.

The subprocess.killed property indicates whether the child process successfully received a signal from subprocess.kill() . The killed property does not indicate that the child process
has been terminated.

subprocess.pid
<integer> | <undefined>

Returns the process identifier (PID) of the child process. If the child process fails to spawn due to errors, then the value is undefined and error is emitted.

const { spawn } = require('child_process');
const grep = spawn('grep', ['ssh']);

console.log(`Spawned child pid: ${grep.pid}`);
grep.stdin.end();

subprocess.ref()
Calling subprocess.ref() after making a call to subprocess.unref() will restore the removed reference count for the child process, forcing the parent to wait for the child to exit before
exiting itself.

const { spawn } = require('child_process');

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore'
});

subprocess.unref();
subprocess.ref();

subprocess.send(message[, sendHandle[, options]][, callback])
message <Object>

sendHandle <Handle>

options <Object> The options argument, if present, is an object used to parameterize the sending of certain types of handles. options supports the following properties:
keepOpen <boolean> A value that can be used when passing instances of net.Socket . When true , the socket is kept open in the sending process. Default: false .

callback <Function>

Returns: <boolean>

When an IPC channel has been established between the parent and child ( i.e. when using child_process.fork() ), the subprocess.send() method can be used to send messages to the
child process. When the child process is a Node.js instance, these messages can be received via the 'message' event.

The message goes through serialization and parsing. The resulting message might not be the same as what is originally sent.

For example, in the parent script:

const cp = require('child_process');
const n = cp.fork(`${__dirname}/sub.js`);

n.on('message', (m) => {
  console.log('PARENT got message:', m);
});

// Causes the child to print: CHILD got message: { hello: 'world' }
n.send({ hello: 'world' });

And then the child script, 'sub.js' might look like this:

process.on('message', (m) => {
  console.log('CHILD got message:', m);
});

// Causes the parent to print: PARENT got message: { foo: 'bar', baz: null }
process.send({ foo: 'bar', baz: NaN });

Child Node.js processes will have a process.send() method of their own that allows the child to send messages back to the parent.

There is a special case when sending a {cmd: 'NODE_foo'} message. Messages containing a NODE_ prefix in the cmd property are reserved for use within Node.js core and will not be
emitted in the child's 'message' event. Rather, such messages are emitted using the 'internalMessage' event and are consumed internally by Node.js. Applications should avoid using such
messages or listening for 'internalMessage' events as it is subject to change without notice.

The optional sendHandle argument that may be passed to subprocess.send() is for passing a TCP server or socket object to the child process. The child will receive the object as the
second argument passed to the callback function registered on the 'message' event. Any data that is received and buffered in the socket will not be sent to the child.

The optional callback is a function that is invoked after the message is sent but before the child may have received it. The function is called with a single argument: null on success, or an
Error object on failure.

If no callback function is provided and the message cannot be sent, an 'error' event will be emitted by the ChildProcess object. This can happen, for instance, when the child process
has already exited.

subprocess.send() will return false if the channel has closed or when the backlog of unsent messages exceeds a threshold that makes it unwise to send more. Otherwise, the method
returns true . The callback function can be used to implement flow control.
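
A sketch of such flow control, assuming a hypothetical sub.js child:

const { fork } = require('child_process');

const subprocess = fork(`${__dirname}/sub.js`);

function trySend(message) {
  const ok = subprocess.send(message, (err) => {
    if (err) console.error('send failed:', err.message);
  });
  if (!ok) {
    // The backlog of unsent messages is growing; a real producer would
    // pause here and resume once earlier callbacks have completed.
    console.warn('channel congested; easing off');
  }
  return ok;
}

trySend({ hello: 'world' });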

Example: sending a server object


The sendHandle argument can be used, for instance, to pass the handle of a TCP server object to the child process as illustrated in the example below:

const subprocess = require('child_process').fork('subprocess.js');

// Open up the server object and send the handle.
const server = require('net').createServer();
server.on('connection', (socket) => {
  socket.end('handled by parent');
});
server.listen(1337, () => {
  subprocess.send('server', server);
});

The child would then receive the server object as:

process.on('message', (m, server) => {
  if (m === 'server') {
    server.on('connection', (socket) => {
      socket.end('handled by child');
    });
  }
});

Once the server is shared between the parent and child, some connections can be handled by the parent and some by the child.
While the example above uses a server created using the net module, dgram module servers use exactly the same workflow with the exceptions of listening on a 'message' event instead
of 'connection' and using server.bind() instead of server.listen() . This is, however, currently only supported on Unix platforms.

Example: sending a socket object


Similarly, the sendHandle argument can be used to pass the handle of a socket to the child process. The example below spawns two children that each handle connections with "normal" or
"special" priority:

const { fork } = require('child_process');
const normal = fork('subprocess.js', ['normal']);
const special = fork('subprocess.js', ['special']);

// Open up the server and send sockets to child. Use pauseOnConnect to prevent
// the sockets from being read before they are sent to the child process.
const server = require('net').createServer({ pauseOnConnect: true });
server.on('connection', (socket) => {
  // If this is special priority...
  if (socket.remoteAddress === '74.125.127.100') {
    special.send('socket', socket);
    return;
  }
  // This is normal priority.
  normal.send('socket', socket);
});
server.listen(1337);

The subprocess.js would receive the socket handle as the second argument passed to the event callback function:

process.on('message', (m, socket) => {
  if (m === 'socket') {
    if (socket) {
      // Check that the client socket exists.
      // It is possible for the socket to be closed between the time it is
      // sent and the time it is received in the child process.
      socket.end(`Request handled with ${process.argv[2]} priority`);
    }
  }
});

Do not use .maxConnections on a socket that has been passed to a subprocess. The parent cannot track when the socket is destroyed.

Any 'message' handlers in the subprocess should verify that socket exists, as the connection may have been closed during the time it takes to send the connection to the child.

subprocess.signalCode
<string> | <null>

The subprocess.signalCode property indicates the signal received by the child process if any, else null .

subprocess.spawnargs
<Array>

The subprocess.spawnargs property represents the full list of command-line arguments the child process was launched with.

subprocess.spawnfile
<string>

The subprocess.spawnfile property indicates the executable file name of the child process that is launched.

For child_process.fork() , its value will be equal to process.execPath . For child_process.spawn() , its value will be the name of the executable file. For child_process.exec() , its value
will be the name of the shell in which the child process is launched.

subprocess.stderr
<stream.Readable>

A Readable Stream that represents the child process's stderr .

If the child was spawned with stdio[2] set to anything other than 'pipe' , then this will be null .

subprocess.stderr is an alias for subprocess.stdio[2] . Both properties will refer to the same value.

The subprocess.stderr property can be null if the child process could not be successfully spawned.

subprocess.stdin
<stream.Writable>

A Writable Stream that represents the child process's stdin .

If a child process waits to read all of its input, the child will not continue until this stream has been closed via end() .
If the child was spawned with stdio[0] set to anything other than 'pipe' , then this will be null .

subprocess.stdin is an alias for subprocess.stdio[0] . Both properties will refer to the same value.

The subprocess.stdin property can be undefined if the child process could not be successfully spawned.
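
For illustration, a sketch that pipes data into a child's stdin , assuming a Unix-like system where wc is available:

const { spawn } = require('child_process');

const wc = spawn('wc', ['-c']);

wc.stdout.on('data', (data) => {
  console.log(`bytes counted: ${data}`);
});

wc.stdin.write('hello world');
// wc reads stdin until EOF, so the child produces no output and does not
// exit until the stream is closed.
wc.stdin.end();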

subprocess.stdio
<Array>

A sparse array of pipes to the child process, corresponding with positions in the stdio option passed to child_process.spawn() that have been set to the value 'pipe' .
subprocess.stdio[0] , subprocess.stdio[1] , and subprocess.stdio[2] are also available as subprocess.stdin , subprocess.stdout , and subprocess.stderr , respectively.

In the following example, only the child's fd 1 (stdout) is configured as a pipe, so only the parent's subprocess.stdio[1] is a stream; all other values in the array are null .

const assert = require('assert');
const fs = require('fs');
const child_process = require('child_process');

const subprocess = child_process.spawn('ls', {
  stdio: [
    0, // Use parent's stdin for child.
    'pipe', // Pipe child's stdout to parent.
    fs.openSync('err.out', 'w'), // Direct child's stderr to a file.
  ]
});

assert.strictEqual(subprocess.stdio[0], null);
assert.strictEqual(subprocess.stdio[0], subprocess.stdin);

assert(subprocess.stdout);
assert.strictEqual(subprocess.stdio[1], subprocess.stdout);

assert.strictEqual(subprocess.stdio[2], null);
assert.strictEqual(subprocess.stdio[2], subprocess.stderr);

The subprocess.stdio property can be undefined if the child process could not be successfully spawned.

subprocess.stdout
<stream.Readable>
A Readable Stream that represents the child process's stdout .

If the child was spawned with stdio[1] set to anything other than 'pipe' , then this will be null .

subprocess.stdout is an alias for subprocess.stdio[1] . Both properties will refer to the same value.

const { spawn } = require('child_process');
const subprocess = spawn('ls');

subprocess.stdout.on('data', (data) => {
  console.log(`Received chunk ${data}`);
});

The subprocess.stdout property can be null if the child process could not be successfully spawned.

subprocess.unref()
By default, the parent will wait for the detached child to exit. To prevent the parent from waiting for a given subprocess to exit, use the subprocess.unref() method. Doing so will cause
the parent's event loop to not include the child in its reference count, allowing the parent to exit independently of the child, unless there is an established IPC channel between the child and
the parent.

const { spawn } = require('child_process');

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore'
});

subprocess.unref();

maxBuffer and Unicode


The maxBuffer option specifies the largest number of bytes allowed on stdout or stderr . If this value is exceeded, then the child process is terminated. This impacts output that includes
multibyte character encodings such as UTF-8 or UTF-16. For instance, console.log('中文测试') will send 13 UTF-8 encoded bytes to stdout although there are only 4 characters.
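
The character/byte mismatch can be checked directly with Buffer.byteLength() ; the extra byte in the example above is the newline that console.log() appends:

console.log('中文测试'.length);
// Prints: 4 (characters)
console.log(Buffer.byteLength('中文测试', 'utf8'));
// Prints: 12 (bytes; console.log() adds a trailing newline, hence 13)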

Shell requirements
The shell should understand the -c switch. If the shell is 'cmd.exe' , it should understand the /d /s /c switches and command-line parsing should be compatible.

Default Windows shell


Although Microsoft specifies %COMSPEC% must contain the path to 'cmd.exe' in the root environment, child processes are not always subject to the same requirement. Thus, in
child_process functions where a shell can be spawned, 'cmd.exe' is used as a fallback if process.env.ComSpec is unavailable.

Advanced serialization
Child processes support a serialization mechanism for IPC that is based on the serialization API of the v8 module , based on the HTML structured clone algorithm . This is generally more
powerful and supports more built-in JavaScript object types, such as BigInt , Map and Set , ArrayBuffer and TypedArray , Buffer , Error , RegExp etc.

However, this format is not a full superset of JSON, and e.g. properties set on objects of such built-in types will not be passed on through the serialization step. Additionally, performance may
not be equivalent to that of JSON, depending on the structure of the passed data. Therefore, this feature requires opting in by setting the serialization option to 'advanced' when calling
child_process.spawn() or child_process.fork() .
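
A minimal sketch of opting in, assuming a hypothetical Node.js child script sub.js :

const { fork } = require('child_process');

const child = fork(`${__dirname}/sub.js`, [], {
  serialization: 'advanced'
});

// With 'advanced' serialization, values that JSON cannot represent,
// such as Map and BigInt, survive the round trip intact.
child.send(new Map([['answer', 42n]]));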

Buffer
Stability: 2 - Stable

Source Code: lib/buffer.js

Buffer objects are used to represent a fixed-length sequence of bytes. Many Node.js APIs support Buffers.

The Buffer class is a subclass of JavaScript's Uint8Array class and extends it with methods that cover additional use cases. Node.js APIs accept plain Uint8Arrays wherever Buffers are
supported as well.

The Buffer class is within the global scope, making it unlikely that one would need to ever use require('buffer').Buffer .

// Creates a zero-filled Buffer of length 10.
const buf1 = Buffer.alloc(10);

// Creates a Buffer of length 10,
// filled with bytes which all have the value `1`.
const buf2 = Buffer.alloc(10, 1);

// Creates an uninitialized buffer of length 10.
// This is faster than calling Buffer.alloc() but the returned
// Buffer instance might contain old data that needs to be
// overwritten using fill(), write(), or other functions that fill the Buffer's
// contents.
const buf3 = Buffer.allocUnsafe(10);

// Creates a Buffer containing the bytes [1, 2, 3].
const buf4 = Buffer.from([1, 2, 3]);

// Creates a Buffer containing the bytes [1, 1, 1, 1] – the entries
// are all truncated using `(value & 255)` to fit into the range 0–255.
const buf5 = Buffer.from([257, 257.5, -255, '1']);

// Creates a Buffer containing the UTF-8-encoded bytes for the string 'tést':
// [0x74, 0xc3, 0xa9, 0x73, 0x74] (in hexadecimal notation)
// [116, 195, 169, 115, 116] (in decimal notation)
const buf6 = Buffer.from('tést');

// Creates a Buffer containing the Latin-1 bytes [0x74, 0xe9, 0x73, 0x74].
const buf7 = Buffer.from('tést', 'latin1');

Buffers and character encodings


When converting between Buffer s and strings, a character encoding may be specified. If no character encoding is specified, UTF-8 will be used as the default.

const buf = Buffer.from('hello world', 'utf8');

console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=

console.log(Buffer.from('fhqwhgads', 'utf8'));
// Prints: <Buffer 66 68 71 77 68 67 61 64 73>
console.log(Buffer.from('fhqwhgads', 'utf16le'));
// Prints: <Buffer 66 00 68 00 71 00 77 00 68 00 67 00 61 00 64 00 73 00>

The character encodings currently supported by Node.js are the following:

'utf8' : Multi-byte encoded Unicode characters. Many web pages and other document formats use UTF-8 . This is the default character encoding. When decoding a Buffer into a
string that does not exclusively contain valid UTF-8 data, the Unicode replacement character U+FFFD � will be used to represent those errors.

'utf16le' : Multi-byte encoded Unicode characters. Unlike 'utf8' , each character in the string will be encoded using either 2 or 4 bytes. Node.js only supports the little-endian
variant of UTF-16 .

'latin1' : Latin-1 stands for ISO-8859-1 . This character encoding only supports the Unicode characters from U+0000 to U+00FF . Each character is encoded using a single byte.
Characters that do not fit into that range are truncated and will be mapped to characters in that range.

Converting a Buffer into a string using one of the above is referred to as decoding, and converting a string into a Buffer is referred to as encoding.
Node.js also supports the following binary-to-text encodings. For binary-to-text encodings, the naming convention is reversed: Converting a Buffer into a string is typically referred to as
encoding, and converting a string into a Buffer as decoding.

'base64' : Base64 encoding. When creating a Buffer from a string, this encoding will also correctly accept "URL and Filename Safe Alphabet" as specified in RFC 4648, Section 5 .
Whitespace characters such as spaces, tabs, and new lines contained within the base64-encoded string are ignored.

'base64url' : base64url encoding as specified in RFC 4648, Section 5 . When creating a Buffer from a string, this encoding will also correctly accept regular base64-encoded strings.
When encoding a Buffer to a string, this encoding will omit padding.

'hex' : Encode each byte as two hexadecimal characters. Data truncation may occur when decoding strings that do not exclusively contain valid hexadecimal characters. See below for an
example.

The following legacy character encodings are also supported:

'ascii' : For 7-bit ASCII data only. When encoding a string into a Buffer , this is equivalent to using 'latin1' . When decoding a Buffer into a string, using this encoding will
additionally unset the highest bit of each byte before decoding as 'latin1' . Generally, there should be no reason to use this encoding, as 'utf8' (or, if the data is known to always be
ASCII-only, 'latin1' ) will be a better choice when encoding or decoding ASCII-only text. It is only provided for legacy compatibility.

'binary' : Alias for 'latin1' . See binary strings for more background on this topic. The name of this encoding can be very misleading, as all of the encodings listed here convert
between strings and binary data. For converting between strings and Buffer s, typically 'utf-8' is the right choice.

'ucs2' : Alias of 'utf16le' . UCS-2 used to refer to a variant of UTF-16 that did not support characters that had code points larger than U+FFFF. In Node.js, these code points are
always supported.

Buffer.from('1ag', 'hex');
// Prints <Buffer 1a>, data truncated when first non-hexadecimal value
// ('g') encountered.

Buffer.from('1a7g', 'hex');
// Prints <Buffer 1a>, data truncated when data ends in single digit ('7').

Buffer.from('1634', 'hex');
// Prints <Buffer 16 34>, all data represented.

Modern Web browsers follow the WHATWG Encoding Standard which aliases both 'latin1' and 'ISO-8859-1' to 'win-1252' . This means that while doing something like http.get() ,
if the returned charset is one of those listed in the WHATWG specification it is possible that the server actually returned 'win-1252' -encoded data, and using 'latin1' encoding may
incorrectly decode the characters.

Buffers and TypedArrays


Buffer instances are also JavaScript Uint8Array and TypedArray instances. All TypedArray methods are available on Buffer s. There are, however, subtle incompatibilities between the
Buffer API and the TypedArray API.

In particular:

While TypedArray#slice() creates a copy of part of the TypedArray , Buffer#slice() creates a view over the existing Buffer without copying. This behavior can be surprising, and
only exists for legacy compatibility. TypedArray#subarray() can be used to achieve the behavior of Buffer#slice() on both Buffers and other TypedArrays; see the sketch after this list.

buf.toString() is incompatible with its TypedArray equivalent.

A number of methods, e.g. buf.indexOf() , support additional arguments.
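
A sketch of the first point:

const buf = Buffer.from('buffer');

// Buffer#slice() returns a view; writing through it mutates `buf`.
const view = buf.slice(0, 3);
view[0] = 0x42; // 'B'
console.log(buf.toString());
// Prints: Buffer

// TypedArray#subarray() also returns a view, on Buffers and TypedArrays alike.
console.log(buf.subarray(3, 6).toString());
// Prints: fer

// By contrast, slice() on a plain Uint8Array copies.
const u8 = new Uint8Array([1, 2, 3]);
const copy = u8.slice(0, 2);
copy[0] = 9;
console.log(u8[0]);
// Prints: 1 (unchanged)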

There are two ways to create new TypedArray instances from a Buffer :

Passing a Buffer to a TypedArray constructor will copy the Buffer's contents, interpreted as an array of integers, and not as a byte sequence of the target type.

const buf = Buffer.from([1, 2, 3, 4]);
const uint32array = new Uint32Array(buf);

console.log(uint32array);
// Prints: Uint32Array(4) [ 1, 2, 3, 4 ]

Passing the Buffer's underlying ArrayBuffer will create a TypedArray that shares its memory with the Buffer .

const buf = Buffer.from('hello', 'utf16le');
const uint16array = new Uint16Array(
  buf.buffer,
  buf.byteOffset,
  buf.length / Uint16Array.BYTES_PER_ELEMENT);

console.log(uint16array);
// Prints: Uint16Array(5) [ 104, 101, 108, 108, 111 ]

It is possible to create a new Buffer that shares the same allocated memory as a TypedArray instance by using the TypedArray object’s .buffer property in the same way. Buffer.from()
behaves like new Uint8Array() in this context.

const arr = new Uint16Array(2);

arr[0] = 5000;
arr[1] = 4000;

// Copies the contents of `arr`.
const buf1 = Buffer.from(arr);

// Shares memory with `arr`.
const buf2 = Buffer.from(arr.buffer);

console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 a0 0f>

arr[1] = 6000;

console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 70 17>

When creating a Buffer using a TypedArray 's .buffer , it is possible to use only a portion of the underlying ArrayBuffer by passing in byteOffset and length parameters.

const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);

console.log(buf.length);
// Prints: 16

Buffer.from() and TypedArray.from() have different signatures and implementations. Specifically, the TypedArray variants accept a second argument that is a mapping function that
is invoked on every element of the typed array:

TypedArray.from(source[, mapFn[, thisArg]])

The Buffer.from() method, however, does not support the use of a mapping function:

Buffer.from(array)

Buffer.from(buffer)

Buffer.from(arrayBuffer[, byteOffset[, length]])

Buffer.from(string[, encoding])
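
If a mapping step is needed, one workaround is to map with Uint8Array.from() first and then wrap the result (a sketch):

// Uint8Array.from() applies the mapping function; Buffer.from() then
// copies the mapped bytes into a new Buffer.
const doubled = Buffer.from(Uint8Array.from([1, 2, 3], (x) => x * 2));
console.log(doubled);
// Prints: <Buffer 02 04 06>
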
Buffers and iteration
Buffer instances can be iterated over using for..of syntax:

const buf = Buffer.from([1, 2, 3]);

for (const b of buf) {
  console.log(b);
}
// Prints:
// 1
// 2
// 3

Additionally, the buf.values() , buf.keys() , and buf.entries() methods can be used to create iterators.

Class: Blob

Stability: 1 - Experimental

A Blob encapsulates immutable, raw data that can be safely shared across multiple worker threads.

new buffer.Blob([sources[, options]])


sources <string[]> | <ArrayBuffer[]> | <TypedArray[]> | <DataView[]> | <Blob[]> An array of string, <ArrayBuffer> , <TypedArray> , <DataView> , or <Blob> objects, or any mix
of such objects, that will be stored within the Blob .

options <Object>
encoding <string> The character encoding to use for string sources. Default: 'utf8' .

type <string> The Blob content-type. The intent is for type to convey the MIME media type of the data, however no validation of the type format is performed.

Creates a new Blob object containing a concatenation of the given sources.

<ArrayBuffer> , <TypedArray> , <DataView> , and <Buffer> sources are copied into the 'Blob' and can therefore be safely modified after the 'Blob' is created.

String sources are also copied into the Blob .

blob.arrayBuffer()
Returns: <Promise>

Returns a promise that fulfills with an <ArrayBuffer> containing a copy of the Blob data.

blob.size
The total size of the Blob in bytes.

blob.slice([start, [end, [type]]])


start <number> The starting index.

end <number> The ending index.

type <string> The content-type for the new Blob

Creates and returns a new Blob containing a subset of this Blob object's data. The original Blob is not altered.
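
A short usage sketch (the API is experimental, as noted above):

const { Blob } = require('buffer');

const blob = new Blob(['hello world']);
const hello = blob.slice(0, 5, 'text/plain');

console.log(blob.size, hello.size);
// Prints: 11 5
hello.text().then(console.log);
// Prints: hello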

blob.text()
Returns: <Promise>

Returns a promise that fulfills with the contents of the Blob decoded as a UTF-8 string.

blob.type
Type: <string>

The content-type of the Blob .

Blob objects and MessageChannel


Once a <Blob> object is created, it can be sent via MessagePort to multiple destinations without transferring or immediately copying the data. The data contained by the Blob is copied
only when the arrayBuffer() or text() methods are called.

const { Blob } = require('buffer');
const blob = new Blob(['hello there']);
const { setTimeout: delay } = require('timers/promises');

const mc1 = new MessageChannel();
const mc2 = new MessageChannel();

mc1.port1.onmessage = async ({ data }) => {
  console.log(await data.arrayBuffer());
  mc1.port1.close();
};

mc2.port1.onmessage = async ({ data }) => {
  await delay(1000);
  console.log(await data.arrayBuffer());
  mc2.port1.close();
};

mc1.port2.postMessage(blob);
mc2.port2.postMessage(blob);

// The Blob is still usable after posting.
blob.text().then(console.log);

Class: Buffer
The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.

Static method: Buffer.alloc(size[, fill[, encoding]])


size <integer> The desired length of the new Buffer .

fill <string> | <Buffer> | <Uint8Array> | <integer> A value to pre-fill the new Buffer with. Default: 0 .

encoding <string> If fill is a string, this is its encoding. Default: 'utf8' .

Allocates a new Buffer of size bytes. If fill is undefined , the Buffer will be zero-filled.

const buf = Buffer.alloc(5);

console.log(buf);
// Prints: <Buffer 00 00 00 00 00>

If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_INVALID_ARG_VALUE is thrown.

If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill) .

const buf = Buffer.alloc(5, 'a');

console.log(buf);
// Prints: <Buffer 61 61 61 61 61>

If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding) .

const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');

console.log(buf);
// Prints: <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>

Calling Buffer.alloc() can be measurably slower than the alternative Buffer.allocUnsafe() but ensures that the newly created Buffer instance contents will never contain sensitive
data from previous allocations, including data that might not have been allocated for Buffer s.

A TypeError will be thrown if size is not a number.

Static method: Buffer.allocUnsafe(size)


size <integer> The desired length of the new Buffer .

Allocates a new Buffer of size bytes. If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_INVALID_ARG_VALUE is thrown.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use
Buffer.alloc() instead to initialize Buffer instances with zeroes.

const buf = Buffer.allocUnsafe(10);

console.log(buf);
// Prints (contents may vary): <Buffer a0 8b 28 3f 01 00 00 00 50 32>

buf.fill(0);

console.log(buf);
// Prints: <Buffer 00 00 00 00 00 00 00 00 00 00>

A TypeError will be thrown if size is not a number.

The Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using
Buffer.allocUnsafe() , Buffer.from(array) , Buffer.concat() , and the deprecated new Buffer(size) constructor only when size is less than or equal to Buffer.poolSize >> 1 (floor
of Buffer.poolSize divided by two).
Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill) . Specifically,
Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half
Buffer.poolSize . The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe() provides.
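
The pooling can be observed through the size of the backing ArrayBuffer (a sketch; the exact numbers assume the default Buffer.poolSize of 8192):

// Pooled: a small allocUnsafe() Buffer is carved out of the shared pool,
// so its backing ArrayBuffer is the whole pool.
const pooled = Buffer.allocUnsafe(10);
console.log(pooled.buffer.byteLength);
// Prints: 8192

// Never pooled: alloc() always receives its own zero-filled allocation.
const unpooled = Buffer.alloc(10);
console.log(unpooled.buffer.byteLength);
// Prints: 10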

Static method: Buffer.allocUnsafeSlow(size)


size <integer> The desired length of the new Buffer .

Allocates a new Buffer of size bytes. If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_INVALID_ARG_VALUE is thrown. A zero-length Buffer is created if size
is 0.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0)
to initialize such Buffer instances with zeroes.

When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are sliced from a single pre-allocated Buffer . This allows applications to avoid the garbage
collection overhead of creating many individually allocated Buffer instances. This approach improves both performance and memory usage by eliminating the need to track and clean up as
many individual ArrayBuffer objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer
instance using Buffer.allocUnsafeSlow() and then copying out the relevant bits.

// Need to keep around a few small chunks of memory.
const store = [];

socket.on('readable', () => {
  let data;
  while (null !== (data = socket.read())) {
    // Allocate for retained data.
    const sb = Buffer.allocUnsafeSlow(10);

    // Copy the data into the new allocation.
    data.copy(sb, 0, 0, 10);

    store.push(sb);
  }
});

A TypeError will be thrown if size is not a number.

Static method: Buffer.byteLength(string[, encoding])


string <string> | <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <SharedArrayBuffer> A value to calculate the length of.

encoding <string> If string is a string, this is its encoding. Default: 'utf8' .

Returns: <integer> The number of bytes contained within string .

Returns the byte length of a string when encoded using encoding . This is not the same as String.prototype.length , which does not account for the encoding that is used to convert the
string into bytes.

For 'base64' , 'base64url' , and 'hex' , this function assumes valid input. For strings that contain non-base64/hex-encoded data (e.g. whitespace), the return value might be greater than
the length of a Buffer created from the string.

const str = '\u00bd + \u00bc = \u00be';

console.log(`${str}: ${str.length} characters, ` +
            `${Buffer.byteLength(str, 'utf8')} bytes`);
// Prints: ½ + ¼ = ¾: 9 characters, 12 bytes

When string is a Buffer / DataView / TypedArray / ArrayBuffer / SharedArrayBuffer , the byte length as reported by .byteLength is returned.

Static method: Buffer.compare(buf1, buf2)


buf1 <Buffer> | <Uint8Array>

buf2 <Buffer> | <Uint8Array>

Returns: <integer> Either -1 , 0 , or 1 , depending on the result of the comparison. See buf.compare() for details.

Compares buf1 to buf2 , typically for the purpose of sorting arrays of Buffer instances. This is equivalent to calling buf1.compare(buf2) .

const buf1 = Buffer.from('1234');
const buf2 = Buffer.from('0123');
const arr = [buf1, buf2];

console.log(arr.sort(Buffer.compare));
// Prints: [ <Buffer 30 31 32 33>, <Buffer 31 32 33 34> ]
// (This result is equal to: [buf2, buf1].)

Static method: Buffer.concat(list[, totalLength])


list <Buffer[]> | <Uint8Array[]> List of Buffer or Uint8Array instances to concatenate.

totalLength <integer> Total length of the Buffer instances in list when concatenated.
Returns: <Buffer>

Returns a new Buffer which is the result of concatenating all the Buffer instances in the list together.

If the list has no items, or if the totalLength is 0, then a new zero-length Buffer is returned.

If totalLength is not provided, it is calculated from the Buffer instances in list by adding their lengths.

If totalLength is provided, it is coerced to an unsigned integer. If the combined length of the Buffer s in list exceeds totalLength , the result is truncated to totalLength .

// Create a single `Buffer` from a list of three `Buffer` instances.
const buf1 = Buffer.alloc(10);
const buf2 = Buffer.alloc(14);
const buf3 = Buffer.alloc(18);
const totalLength = buf1.length + buf2.length + buf3.length;

console.log(totalLength);
// Prints: 42

const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);

console.log(bufA);
// Prints: <Buffer 00 00 00 00 ...>
console.log(bufA.length);
// Prints: 42

Buffer.concat() may also use the internal Buffer pool like Buffer.allocUnsafe() does.

Static method: Buffer.from(array)


array <integer[]>

Allocates a new Buffer using an array of bytes in the range 0 – 255 . Array entries outside that range will be truncated to fit into it.

// Creates a new Buffer containing the UTF-8 bytes of the string 'buffer'.
const buf = Buffer.from([0x62, 0x75, 0x66, 0x66, 0x65, 0x72]);

A TypeError will be thrown if array is not an Array or another type appropriate for Buffer.from() variants.

Buffer.from(array) and Buffer.from(string) may also use the internal Buffer pool like Buffer.allocUnsafe() does.
Static method: Buffer.from(arrayBuffer[, byteOffset[, length]])
arrayBuffer <ArrayBuffer> | <SharedArrayBuffer> An ArrayBuffer or SharedArrayBuffer , for example the .buffer property of a TypedArray .

byteOffset <integer> Index of first byte to expose. Default: 0 .

length <integer> Number of bytes to expose. Default: arrayBuffer.byteLength - byteOffset .

This creates a view of the ArrayBuffer without copying the underlying memory. For example, when passed a reference to the .buffer property of a TypedArray instance, the newly
created Buffer will share the same allocated memory as the TypedArray 's underlying ArrayBuffer .

const arr = new Uint16Array(2);

arr[0] = 5000;
arr[1] = 4000;

// Shares memory with `arr`.
const buf = Buffer.from(arr.buffer);

console.log(buf);
// Prints: <Buffer 88 13 a0 0f>

// Changing the original Uint16Array changes the Buffer also.
arr[1] = 6000;

console.log(buf);
// Prints: <Buffer 88 13 70 17>

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer .

const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);

console.log(buf.length);
// Prints: 2

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer or a SharedArrayBuffer or another type appropriate for Buffer.from() variants.

It is important to remember that a backing ArrayBuffer can cover a range of memory that extends beyond the bounds of a TypedArray view. A new Buffer created using the buffer
property of a TypedArray may extend beyond the range of the TypedArray :
const arrA = Uint8Array.from([0x63, 0x64, 0x65, 0x66]); // 4 elements
const arrB = new Uint8Array(arrA.buffer, 1, 2); // 2 elements
console.log(arrA.buffer === arrB.buffer); // true

const buf = Buffer.from(arrB.buffer);
console.log(buf);
// Prints: <Buffer 63 64 65 66>

Static method: Buffer.from(buffer)


buffer <Buffer> | <Uint8Array> An existing Buffer or Uint8Array from which to copy data.

Copies the passed buffer data onto a new Buffer instance.

const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);

buf1[0] = 0x61;

console.log(buf1.toString());
// Prints: auffer
console.log(buf2.toString());
// Prints: buffer

A TypeError will be thrown if buffer is not a Buffer or another type appropriate for Buffer.from() variants.

Static method: Buffer.from(object[, offsetOrEncoding[, length]])


object <Object> An object supporting Symbol.toPrimitive or valueOf() .

offsetOrEncoding <integer> | <string> A byte-offset or encoding.

length <integer> A length.

For objects whose valueOf() function returns a value not strictly equal to object , returns Buffer.from(object.valueOf(), offsetOrEncoding, length) .

const buf = Buffer.from(new String('this is a test'));

console.log(buf);
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>

For objects that support Symbol.toPrimitive , returns Buffer.from(object[Symbol.toPrimitive]('string'), offsetOrEncoding) .


class Foo {
  [Symbol.toPrimitive]() {
    return 'this is a test';
  }
}

const buf = Buffer.from(new Foo(), 'utf8');

console.log(buf);
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>

A TypeError will be thrown if object does not have the mentioned methods or is not of another type appropriate for Buffer.from() variants.

Static method: Buffer.from(string[, encoding])


string <string> A string to encode.

encoding <string> The encoding of string . Default: 'utf8' .

Creates a new Buffer containing string . The encoding parameter identifies the character encoding to be used when converting string into bytes.

const buf1 = Buffer.from('this is a tést');
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');

console.log(buf1.toString());
// Prints: this is a tést
console.log(buf2.toString());
// Prints: this is a tést
console.log(buf1.toString('latin1'));
// Prints: this is a tÃ©st

A TypeError will be thrown if string is not a string or another type appropriate for Buffer.from() variants.

Static method: Buffer.isBuffer(obj)


obj <Object>

Returns: <boolean>

Returns true if obj is a Buffer , false otherwise.

Buffer.isBuffer(Buffer.alloc(10)); // true
Buffer.isBuffer(Buffer.from('foo')); // true
Buffer.isBuffer('a string'); // false
Buffer.isBuffer([]); // false
Buffer.isBuffer(new Uint8Array(1024)); // false

Static method: Buffer.isEncoding(encoding)


encoding <string> A character encoding name to check.

Returns: <boolean>

Returns true if encoding is the name of a supported character encoding, or false otherwise.

console.log(Buffer.isEncoding('utf-8'));
// Prints: true

console.log(Buffer.isEncoding('hex'));
// Prints: true

console.log(Buffer.isEncoding('utf/8'));
// Prints: false

console.log(Buffer.isEncoding(''));
// Prints: false

Class property: Buffer.poolSize


<integer> Default: 8192

This is the size (in bytes) of pre-allocated internal Buffer instances used for pooling. This value may be modified.

buf[index]
index <integer>

The index operator [index] can be used to get and set the octet at position index in buf . The values refer to individual bytes, so the legal value range is between 0x00 and 0xFF (hex) or
0 and 255 (decimal).

This operator is inherited from Uint8Array , so its behavior on out-of-bounds access is the same as Uint8Array . In other words, buf[index] returns undefined when index is negative or
greater or equal to buf.length , and buf[index] = value does not modify the buffer if index is negative or >= buf.length .
// Copy an ASCII string into a `Buffer` one byte at a time.
// (This only works for ASCII-only strings. In general, one should use
// `Buffer.from()` to perform this conversion.)

const str = 'Node.js';
const buf = Buffer.allocUnsafe(str.length);

for (let i = 0; i < str.length; i++) {
  buf[i] = str.charCodeAt(i);
}

console.log(buf.toString('utf8'));
// Prints: Node.js

buf.buffer
<ArrayBuffer> The underlying ArrayBuffer object based on which this Buffer object is created.

This ArrayBuffer is not guaranteed to correspond exactly to the original Buffer . See the notes on buf.byteOffset for details.

const arrayBuffer = new ArrayBuffer(16);
const buffer = Buffer.from(arrayBuffer);

console.log(buffer.buffer === arrayBuffer);
// Prints: true

buf.byteOffset
<integer> The byteOffset of the Buffer's underlying ArrayBuffer object.

When setting byteOffset in Buffer.from(ArrayBuffer, byteOffset, length) , or sometimes when allocating a Buffer smaller than Buffer.poolSize , the buffer does not start from a
zero offset on the underlying ArrayBuffer .

This can cause problems when accessing the underlying ArrayBuffer directly using buf.buffer , as other parts of the ArrayBuffer may be unrelated to the Buffer object itself.

A common issue when creating a TypedArray object that shares its memory with a Buffer is that in this case one needs to specify the byteOffset correctly:
// Create a buffer smaller than `Buffer.poolSize`.
const nodeBuffer = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);

// When casting the Node.js Buffer to an Int8Array, use the byteOffset
// to refer only to the part of `nodeBuffer.buffer` that contains the memory
// for `nodeBuffer`.
new Int8Array(nodeBuffer.buffer, nodeBuffer.byteOffset, nodeBuffer.length);

buf.compare(target[, targetStart[, targetEnd[, sourceStart[, sourceEnd]]]])


target <Buffer> | <Uint8Array> A Buffer or Uint8Array with which to compare buf .

targetStart <integer> The offset within target at which to begin comparison. Default: 0 .

targetEnd <integer> The offset within target at which to end comparison (not inclusive). Default: target.length .

sourceStart <integer> The offset within buf at which to begin comparison. Default: 0 .

sourceEnd <integer> The offset within buf at which to end comparison (not inclusive). Default: buf.length .

Returns: <integer>

Compares buf with target and returns a number indicating whether buf comes before, after, or is the same as target in sort order. Comparison is based on the actual sequence of bytes
in each Buffer .

0 is returned if target is the same as buf .

1 is returned if target should come before buf when sorted.

-1 is returned if target should come after buf when sorted.

const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('BCD');
const buf3 = Buffer.from('ABCD');

console.log(buf1.compare(buf1));
// Prints: 0
console.log(buf1.compare(buf2));
// Prints: -1
console.log(buf1.compare(buf3));
// Prints: -1
console.log(buf2.compare(buf1));
// Prints: 1
console.log(buf2.compare(buf3));
// Prints: 1
console.log([buf1, buf2, buf3].sort(Buffer.compare));
// Prints: [ <Buffer 41 42 43>, <Buffer 41 42 43 44>, <Buffer 42 43 44> ]
// (This result is equal to: [buf1, buf3, buf2].)

The optional targetStart , targetEnd , sourceStart , and sourceEnd arguments can be used to limit the comparison to specific ranges within target and buf respectively.

const buf1 = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const buf2 = Buffer.from([5, 6, 7, 8, 9, 1, 2, 3, 4]);

console.log(buf1.compare(buf2, 5, 9, 0, 4));
// Prints: 0
console.log(buf1.compare(buf2, 0, 6, 4));
// Prints: -1
console.log(buf1.compare(buf2, 5, 6, 5));
// Prints: 1

ERR_OUT_OF_RANGE is thrown if targetStart < 0 , sourceStart < 0 , targetEnd > target.byteLength , or sourceEnd > source.byteLength .

buf.copy(target[, targetStart[, sourceStart[, sourceEnd]]])


target <Buffer> | <Uint8Array> A Buffer or Uint8Array to copy into.

targetStart <integer> The offset within target at which to begin writing. Default: 0 .

sourceStart <integer> The offset within buf from which to begin copying. Default: 0 .

sourceEnd <integer> The offset within buf at which to stop copying (not inclusive). Default: buf.length .

Returns: <integer> The number of bytes copied.

Copies data from a region of buf to a region in target , even if the target memory region overlaps with buf .

TypedArray#set() performs the same operation, and is available for all TypedArrays, including Node.js Buffer s, although it takes different function arguments.

// Create two `Buffer` instances.
const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

// Copy `buf1` bytes 16 through 19 into `buf2` starting at byte 8 of `buf2`.
buf1.copy(buf2, 8, 16, 20);
// This is equivalent to:
// buf2.set(buf1.subarray(16, 20), 8);

console.log(buf2.toString('ascii', 0, 25));
// Prints: !!!!!!!!qrst!!!!!!!!!!!!!

// Create a `Buffer` and copy data from one region to an overlapping region
// within the same `Buffer`.
const buf = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf[i] = i + 97;
}

buf.copy(buf, 0, 4, 10);

console.log(buf.toString());
// Prints: efghijghijklmnopqrstuvwxyz

buf.entries()
Returns: <Iterator>

Creates and returns an iterator of [index, byte] pairs from the contents of buf .

// Log the entire contents of a `Buffer`.
const buf = Buffer.from('buffer');

for (const pair of buf.entries()) {
  console.log(pair);
}
// Prints:
// [0, 98]
// [1, 117]
// [2, 102]
// [3, 102]
// [4, 101]
// [5, 114]

buf.equals(otherBuffer)
otherBuffer <Buffer> | <Uint8Array> A Buffer or Uint8Array with which to compare buf .

Returns: <boolean>

Returns true if both buf and otherBuffer have exactly the same bytes, false otherwise. Equivalent to buf.compare(otherBuffer) === 0 .

const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');

console.log(buf1.equals(buf2));
// Prints: true
console.log(buf1.equals(buf3));
// Prints: false

buf.fill(value[, offset[, end]][, encoding])


value <string> | <Buffer> | <Uint8Array> | <integer> The value with which to fill buf .

offset <integer> Number of bytes to skip before starting to fill buf . Default: 0 .

end <integer> Where to stop filling buf (not inclusive). Default: buf.length .

encoding <string> The encoding for value if value is a string. Default: 'utf8' .

Returns: <Buffer> A reference to buf .

Fills buf with the specified value . If the offset and end are not given, the entire buf will be filled:

// Fill a `Buffer` with the ASCII character 'h'.

const b = Buffer.allocUnsafe(50).fill('h');
console.log(b.toString());
// Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh

value is coerced to a uint32 value if it is not a string, Buffer , or integer. If the resulting integer is greater than 255 (decimal), buf will be filled with value & 255 .

If the final write of a fill() operation falls on a multi-byte character, then only the bytes of that character that fit into buf are written:

// Fill a `Buffer` with character that takes up two bytes in UTF-8.

console.log(Buffer.allocUnsafe(5).fill('\u0222'));
// Prints: <Buffer c8 a2 c8 a2 c8>

If value contains invalid characters, it is truncated; if no valid fill data remains, an exception is thrown:

const buf = Buffer.allocUnsafe(5);

console.log(buf.fill('a'));
// Prints: <Buffer 61 61 61 61 61>
console.log(buf.fill('aazz', 'hex'));
// Prints: <Buffer aa aa aa aa aa>
console.log(buf.fill('zz', 'hex'));
// Throws an exception.

buf.includes(value[, byteOffset][, encoding])


value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.

byteOffset <integer> Where to begin searching in buf . If negative, then offset is calculated from the end of buf . Default: 0 .

encoding <string> If value is a string, this is its encoding. Default: 'utf8' .

Returns: <boolean> true if value was found in buf , false otherwise.

Equivalent to buf.indexOf() !== -1 .

const buf = Buffer.from('this is a buffer');

console.log(buf.includes('this'));
// Prints: true
console.log(buf.includes('is'));
// Prints: true
console.log(buf.includes(Buffer.from('a buffer')));
// Prints: true
console.log(buf.includes(97));
// Prints: true (97 is the decimal ASCII value for 'a')
console.log(buf.includes(Buffer.from('a buffer example')));
// Prints: false
console.log(buf.includes(Buffer.from('a buffer example').slice(0, 8)));
// Prints: true
console.log(buf.includes('this', 4));
// Prints: false

buf.indexOf(value[, byteOffset][, encoding])


value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.

byteOffset <integer> Where to begin searching in buf . If negative, then offset is calculated from the end of buf . Default: 0 .

encoding <string> If value is a string, this is the encoding used to determine the binary representation of the string that will be searched for in buf . Default: 'utf8' .

Returns: <integer> The index of the first occurrence of value in buf , or -1 if buf does not contain value .

If value is:

a string, value is interpreted according to the character encoding in encoding .

a Buffer or Uint8Array , value will be used in its entirety. To compare a partial Buffer , use buf.slice() .

a number, value will be interpreted as an unsigned 8-bit integer value between 0 and 255 .

const buf = Buffer.from('this is a buffer');

console.log(buf.indexOf('this'));
// Prints: 0
console.log(buf.indexOf('is'));
// Prints: 2
console.log(buf.indexOf(Buffer.from('a buffer')));
// Prints: 8
console.log(buf.indexOf(97));
// Prints: 8 (97 is the decimal ASCII value for 'a')
console.log(buf.indexOf(Buffer.from('a buffer example')));
// Prints: -1
console.log(buf.indexOf(Buffer.from('a buffer example').slice(0, 8)));
// Prints: 8

const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');

console.log(utf16Buffer.indexOf('\u03a3', 0, 'utf16le'));
// Prints: 4
console.log(utf16Buffer.indexOf('\u03a3', -4, 'utf16le'));
// Prints: 6

If value is not a string, number, or Buffer , this method will throw a TypeError . If value is a number, it will be coerced to a valid byte value, an integer between 0 and 255.

If byteOffset is not a number, it will be coerced to a number. If the result of coercion is NaN or 0 , then the entire buffer will be searched. This behavior matches String#indexOf() .

const b = Buffer.from('abcdef');

// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.indexOf(99.9));
console.log(b.indexOf(256 + 99));

// Passing a byteOffset that coerces to NaN or 0.
// Prints: 1, searching the whole buffer.
console.log(b.indexOf('b', undefined));
console.log(b.indexOf('b', {}));
console.log(b.indexOf('b', null));
console.log(b.indexOf('b', []));

If value is an empty string or empty Buffer and byteOffset is less than buf.length , byteOffset will be returned. If value is empty and byteOffset is at least buf.length ,
buf.length will be returned.
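
As a quick sketch of this empty-value rule (output values follow directly from the behavior described above):

const buf = Buffer.from('abcdef');

console.log(buf.indexOf('', 3));
// Prints: 3 (an empty value returns byteOffset while it is within the buffer)
console.log(buf.indexOf('', 99));
// Prints: 6 (once byteOffset is at least buf.length, buf.length is returned)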

buf.keys()
Returns: <Iterator>

Creates and returns an iterator of buf keys (indices).

const buf = Buffer.from('buffer');

for (const key of buf.keys()) {
  console.log(key);
}
// Prints:
// 0
// 1
// 2
// 3
// 4
// 5

buf.lastIndexOf(value[, byteOffset][, encoding])


value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.

byteOffset <integer> Where to begin searching in buf . If negative, then offset is calculated from the end of buf . Default: buf.length - 1 .

encoding <string> If value is a string, this is the encoding used to determine the binary representation of the string that will be searched for in buf . Default: 'utf8' .

Returns: <integer> The index of the last occurrence of value in buf , or -1 if buf does not contain value .

Identical to buf.indexOf() , except the last occurrence of value is found rather than the first occurrence.

const buf = Buffer.from('this buffer is a buffer');

console.log(buf.lastIndexOf('this'));
// Prints: 0
console.log(buf.lastIndexOf('buffer'));
// Prints: 17
console.log(buf.lastIndexOf(Buffer.from('buffer')));
// Prints: 17
console.log(buf.lastIndexOf(97));
// Prints: 15 (97 is the decimal ASCII value for 'a')
console.log(buf.lastIndexOf(Buffer.from('yolo')));
// Prints: -1
console.log(buf.lastIndexOf('buffer', 5));
// Prints: 5
console.log(buf.lastIndexOf('buffer', 4));
// Prints: -1

const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');

console.log(utf16Buffer.lastIndexOf('\u03a3', undefined, 'utf16le'));
// Prints: 6
console.log(utf16Buffer.lastIndexOf('\u03a3', -5, 'utf16le'));
// Prints: 4

If value is not a string, number, or Buffer , this method will throw a TypeError . If value is a number, it will be coerced to a valid byte value, an integer between 0 and 255.

If byteOffset is not a number, it will be coerced to a number. Any arguments that coerce to NaN , like {} or undefined , will search the whole buffer. This behavior matches
String#lastIndexOf() .

const b = Buffer.from('abcdef');

// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.lastIndexOf(99.9));
console.log(b.lastIndexOf(256 + 99));

// Passing a byteOffset that coerces to NaN.
// Prints: 1, searching the whole buffer.
console.log(b.lastIndexOf('b', undefined));
console.log(b.lastIndexOf('b', {}));

// Passing a byteOffset that coerces to 0.
// Prints: -1, equivalent to passing 0.
console.log(b.lastIndexOf('b', null));
console.log(b.lastIndexOf('b', []));

If value is an empty string or empty Buffer , byteOffset will be returned.

buf.length
<integer>

Returns the number of bytes in buf .

// Create a `Buffer` and write a shorter string to it using UTF-8.

const buf = Buffer.alloc(1234);

console.log(buf.length);
// Prints: 1234
buf.write('some string', 0, 'utf8');

console.log(buf.length);
// Prints: 1234

buf.parent
Stability: 0 - Deprecated: Use buf.buffer instead.

The buf.parent property is a deprecated alias for buf.buffer .

buf.readBigInt64BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <bigint>

Reads a signed, big-endian 64-bit integer from buf at the specified offset .

Integers read from a Buffer are interpreted as two's complement signed values.

buf.readBigInt64LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <bigint>

Reads a signed, little-endian 64-bit integer from buf at the specified offset .

Integers read from a Buffer are interpreted as two's complement signed values.
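
Neither BigInt read above is illustrated here, so as a minimal sketch: a buffer of all 0xff bytes reads as -1 in two's complement, regardless of byte order.

const buf = Buffer.from([0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigInt64BE(0));
// Prints: -1n
console.log(buf.readBigInt64LE(0));
// Prints: -1n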

buf.readBigUInt64BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <bigint>

Reads an unsigned, big-endian 64-bit integer from buf at the specified offset .

This function is also available under the readBigUint64BE alias.


const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigUInt64BE(0));
// Prints: 4294967295n

buf.readBigUInt64LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <bigint>

Reads an unsigned, little-endian 64-bit integer from buf at the specified offset .

This function is also available under the readBigUint64LE alias.

const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigUInt64LE(0));
// Prints: 18446744069414584320n

buf.readDoubleBE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <number>

Reads a 64-bit, big-endian double from buf at the specified offset .

const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);

console.log(buf.readDoubleBE(0));
// Prints: 8.20788039913184e-304

buf.readDoubleLE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <number>

Reads a 64-bit, little-endian double from buf at the specified offset .


const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);

console.log(buf.readDoubleLE(0));
// Prints: 5.447603722011605e-270
console.log(buf.readDoubleLE(1));
// Throws ERR_OUT_OF_RANGE.

buf.readFloatBE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <number>

Reads a 32-bit, big-endian float from buf at the specified offset .

const buf = Buffer.from([1, 2, 3, 4]);

console.log(buf.readFloatBE(0));
// Prints: 2.387939260590663e-38

buf.readFloatLE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <number>

Reads a 32-bit, little-endian float from buf at the specified offset .

const buf = Buffer.from([1, 2, 3, 4]);

console.log(buf.readFloatLE(0));
// Prints: 1.539989614439558e-36
console.log(buf.readFloatLE(1));
// Throws ERR_OUT_OF_RANGE.

buf.readInt8([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 1 . Default: 0 .
Returns: <integer>

Reads a signed 8-bit integer from buf at the specified offset .

Integers read from a Buffer are interpreted as two's complement signed values.

const buf = Buffer.from([-1, 5]);

console.log(buf.readInt8(0));
// Prints: -1
console.log(buf.readInt8(1));
// Prints: 5
console.log(buf.readInt8(2));
// Throws ERR_OUT_OF_RANGE.

buf.readInt16BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .

Returns: <integer>

Reads a signed, big-endian 16-bit integer from buf at the specified offset .

Integers read from a Buffer are interpreted as two's complement signed values.

const buf = Buffer.from([0, 5]);

console.log(buf.readInt16BE(0));
// Prints: 5

buf.readInt16LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .

Returns: <integer>

Reads a signed, little-endian 16-bit integer from buf at the specified offset .

Integers read from a Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([0, 5]);

console.log(buf.readInt16LE(0));
// Prints: 1280
console.log(buf.readInt16LE(1));
// Throws ERR_OUT_OF_RANGE.

buf.readInt32BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer>

Reads a signed, big-endian 32-bit integer from buf at the specified offset .

Integers read from a Buffer are interpreted as two's complement signed values.

const buf = Buffer.from([0, 0, 0, 5]);

console.log(buf.readInt32BE(0));
// Prints: 5

buf.readInt32LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer>

Reads a signed, little-endian 32-bit integer from buf at the specified offset .

Integers read from a Buffer are interpreted as two's complement signed values.

const buf = Buffer.from([0, 0, 0, 5]);

console.log(buf.readInt32LE(0));
// Prints: 83886080
console.log(buf.readInt32LE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readIntBE(offset, byteLength)
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength .

byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6 .

Returns: <integer>

Reads byteLength number of bytes from buf at the specified offset and interprets the result as a big-endian, two's complement signed value supporting up to 48 bits of accuracy.

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
console.log(buf.readIntBE(1, 0).toString(16));
// Throws ERR_OUT_OF_RANGE.

buf.readIntLE(offset, byteLength)
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength .

byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6 .

Returns: <integer>

Reads byteLength number of bytes from buf at the specified offset and interprets the result as a little-endian, two's complement signed value supporting up to 48 bits of accuracy.

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readIntLE(0, 6).toString(16));
// Prints: -546f87a9cbee

buf.readUInt8([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 1 . Default: 0 .

Returns: <integer>

Reads an unsigned 8-bit integer from buf at the specified offset .

This function is also available under the readUint8 alias.


const buf = Buffer.from([1, -2]);

console.log(buf.readUInt8(0));
// Prints: 1
console.log(buf.readUInt8(1));
// Prints: 254
console.log(buf.readUInt8(2));
// Throws ERR_OUT_OF_RANGE.

buf.readUInt16BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .

Returns: <integer>

Reads an unsigned, big-endian 16-bit integer from buf at the specified offset .

This function is also available under the readUint16BE alias.

const buf = Buffer.from([0x12, 0x34, 0x56]);

console.log(buf.readUInt16BE(0).toString(16));
// Prints: 1234
console.log(buf.readUInt16BE(1).toString(16));
// Prints: 3456

buf.readUInt16LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .

Returns: <integer>

Reads an unsigned, little-endian 16-bit integer from buf at the specified offset .

This function is also available under the readUint16LE alias.

const buf = Buffer.from([0x12, 0x34, 0x56]);

console.log(buf.readUInt16LE(0).toString(16));
// Prints: 3412
console.log(buf.readUInt16LE(1).toString(16));
// Prints: 5634
console.log(buf.readUInt16LE(2).toString(16));
// Throws ERR_OUT_OF_RANGE.

buf.readUInt32BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer>

Reads an unsigned, big-endian 32-bit integer from buf at the specified offset .

This function is also available under the readUint32BE alias.

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);

console.log(buf.readUInt32BE(0).toString(16));
// Prints: 12345678

buf.readUInt32LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer>

Reads an unsigned, little-endian 32-bit integer from buf at the specified offset .

This function is also available under the readUint32LE alias.

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);

console.log(buf.readUInt32LE(0).toString(16));
// Prints: 78563412
console.log(buf.readUInt32LE(1).toString(16));
// Throws ERR_OUT_OF_RANGE.

buf.readUIntBE(offset, byteLength)
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength .

byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6 .

Returns: <integer>

Reads byteLength number of bytes from buf at the specified offset and interprets the result as an unsigned big-endian integer supporting up to 48 bits of accuracy.

This function is also available under the readUintBE alias.

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readUIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readUIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.

buf.readUIntLE(offset, byteLength)
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength .

byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6 .

Returns: <integer>

Reads byteLength number of bytes from buf at the specified offset and interprets the result as an unsigned, little-endian integer supporting up to 48 bits of accuracy.

This function is also available under the readUintLE alias.

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readUIntLE(0, 6).toString(16));
// Prints: ab9078563412

buf.subarray([start[, end]])
start <integer> Where the new Buffer will start. Default: 0 .

end <integer> Where the new Buffer will end (not inclusive). Default: buf.length .

Returns: <Buffer>

Returns a new Buffer that references the same memory as the original, but offset and cropped by the start and end indices.
Specifying end greater than buf.length will return the same result as that of end equal to buf.length .

This method is inherited from TypedArray#subarray() .

Modifying the new Buffer slice will modify the memory in the original Buffer because the allocated memory of the two objects overlap.

// Create a `Buffer` with the ASCII alphabet, take a slice, and modify one byte
// from the original `Buffer`.

const buf1 = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

const buf2 = buf1.subarray(0, 3);

console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: abc

buf1[0] = 33;

console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: !bc

Specifying negative indexes causes the slice to be generated relative to the end of buf rather than the beginning.

const buf = Buffer.from('buffer');

console.log(buf.subarray(-6, -1).toString());
// Prints: buffe
// (Equivalent to buf.subarray(0, 5).)

console.log(buf.subarray(-6, -2).toString());
// Prints: buff
// (Equivalent to buf.subarray(0, 4).)

console.log(buf.subarray(-5, -2).toString());
// Prints: uff
// (Equivalent to buf.subarray(1, 4).)

buf.slice([start[, end]])
start <integer> Where the new Buffer will start. Default: 0 .

end <integer> Where the new Buffer will end (not inclusive). Default: buf.length .

Returns: <Buffer>

Returns a new Buffer that references the same memory as the original, but offset and cropped by the start and end indices.

This is the same behavior as buf.subarray() .

This method is not compatible with Uint8Array.prototype.slice() , which Buffer inherits from its superclass Uint8Array : the inherited method returns a copy, while buf.slice() returns a view over the same memory. To copy the slice, use Uint8Array.prototype.slice() .

const buf = Buffer.from('buffer');

const copiedBuf = Uint8Array.prototype.slice.call(buf);


copiedBuf[0]++;
console.log(copiedBuf.toString());
// Prints: cuffer

console.log(buf.toString());
// Prints: buffer

buf.swap16()
Returns: <Buffer> A reference to buf .

Interprets buf as an array of unsigned 16-bit integers and swaps the byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 2.

const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap16();

console.log(buf1);
// Prints: <Buffer 02 01 04 03 06 05 08 07>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap16();
// Throws ERR_INVALID_BUFFER_SIZE.

One convenient use of buf.swap16() is to perform a fast in-place conversion between UTF-16 little-endian and UTF-16 big-endian:

const buf = Buffer.from('This is little-endian UTF-16', 'utf16le');


buf.swap16(); // Convert to big-endian UTF-16 text.

buf.swap32()
Returns: <Buffer> A reference to buf .

Interprets buf as an array of unsigned 32-bit integers and swaps the byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 4.

const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap32();

console.log(buf1);
// Prints: <Buffer 04 03 02 01 08 07 06 05>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap32();
// Throws ERR_INVALID_BUFFER_SIZE.

buf.swap64()
Returns: <Buffer> A reference to buf .

Interprets buf as an array of 64-bit numbers and swaps byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 8.
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap64();

console.log(buf1);
// Prints: <Buffer 08 07 06 05 04 03 02 01>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap64();
// Throws ERR_INVALID_BUFFER_SIZE.

buf.toJSON()
Returns: <Object>

Returns a JSON representation of buf . JSON.stringify() implicitly calls this function when stringifying a Buffer instance.

Buffer.from() accepts objects in the format returned from this method. In particular, Buffer.from(buf.toJSON()) works like Buffer.from(buf) .

const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5]);


const json = JSON.stringify(buf);

console.log(json);
// Prints: {"type":"Buffer","data":[1,2,3,4,5]}

const copy = JSON.parse(json, (key, value) => {
  return value && value.type === 'Buffer' ?
    Buffer.from(value) :
    value;
});

console.log(copy);
// Prints: <Buffer 01 02 03 04 05>
buf.toString([encoding[, start[, end]]])
encoding <string> The character encoding to use. Default: 'utf8' .

start <integer> The byte offset to start decoding at. Default: 0 .

end <integer> The byte offset to stop decoding at (not inclusive). Default: buf.length .

Returns: <string>

Decodes buf to a string according to the specified character encoding in encoding . start and end may be passed to decode only a subset of buf .

If encoding is 'utf8' and a byte sequence in the input is not valid UTF-8, then each invalid byte is replaced with the replacement character U+FFFD .

The maximum length of a string instance (in UTF-16 code units) is available as buffer.constants.MAX_STRING_LENGTH .

const buf1 = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

console.log(buf1.toString('utf8'));
// Prints: abcdefghijklmnopqrstuvwxyz
console.log(buf1.toString('utf8', 0, 5));
// Prints: abcde

const buf2 = Buffer.from('tést');

console.log(buf2.toString('hex'));
// Prints: 74c3a97374
console.log(buf2.toString('utf8', 0, 3));
// Prints: té
console.log(buf2.toString(undefined, 0, 3));
// Prints: té
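
The replacement-character rule can be seen with a byte that can never occur in well-formed UTF-8:

console.log(Buffer.from([0x61, 0xff, 0x62]).toString('utf8'));
// Prints: a�b (0xff is invalid UTF-8 and decodes to U+FFFD)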

buf.values()
Returns: <Iterator>

Creates and returns an iterator for buf values (bytes). This function is called automatically when a Buffer is used in a for..of statement.
const buf = Buffer.from('buffer');

for (const value of buf.values()) {
  console.log(value);
}
// Prints:
// 98
// 117
// 102
// 102
// 101
// 114

for (const value of buf) {
  console.log(value);
}
// Prints:
// 98
// 117
// 102
// 102
// 101
// 114

buf.write(string[, offset[, length]][, encoding])


string <string> String to write to buf .

offset <integer> Number of bytes to skip before starting to write string . Default: 0 .

length <integer> Maximum number of bytes to write (written bytes will not exceed buf.length - offset ). Default: buf.length - offset .

encoding <string> The character encoding of string . Default: 'utf8' .

Returns: <integer> Number of bytes written.

Writes string to buf at offset according to the character encoding in encoding . The length parameter is the number of bytes to write. If buf did not contain enough space to fit the
entire string, only part of string will be written. However, partially encoded characters will not be written.

const buf = Buffer.alloc(256);

const len = buf.write('\u00bd + \u00bc = \u00be', 0);


console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
// Prints: 12 bytes: ½ + ¼ = ¾

const buffer = Buffer.alloc(10);

const length = buffer.write('abcd', 8);

console.log(`${length} bytes: ${buffer.toString('utf8', 8, 10)}`);
// Prints: 2 bytes: ab
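
The "partially encoded characters" rule means a multi-byte character that does not fully fit is dropped entirely; a small sketch:

const tiny = Buffer.alloc(1);

console.log(tiny.write('\u00bd'));
// Prints: 0 ('½' needs two UTF-8 bytes, so nothing is written)
console.log(tiny);
// Prints: <Buffer 00>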

buf.writeBigInt64BE(value[, offset])
value <bigint> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as big-endian.

value is interpreted and written as a two's complement signed integer.

const buf = Buffer.allocUnsafe(8);

buf.writeBigInt64BE(0x0102030405060708n, 0);

console.log(buf);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf.writeBigInt64LE(value[, offset])
value <bigint> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as little-endian.

value is interpreted and written as a two's complement signed integer.


const buf = Buffer.allocUnsafe(8);

buf.writeBigInt64LE(0x0102030405060708n, 0);

console.log(buf);
// Prints: <Buffer 08 07 06 05 04 03 02 01>

buf.writeBigUInt64BE(value[, offset])
value <bigint> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as big-endian.

This function is also available under the writeBigUint64BE alias.

const buf = Buffer.allocUnsafe(8);

buf.writeBigUInt64BE(0xdecafafecacefaden, 0);

console.log(buf);
// Prints: <Buffer de ca fa fe ca ce fa de>

buf.writeBigUInt64LE(value[, offset])
value <bigint> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as little-endian.

This function is also available under the writeBigUint64LE alias.

const buf = Buffer.allocUnsafe(8);

buf.writeBigUInt64LE(0xdecafafecacefaden, 0);

console.log(buf);
// Prints: <Buffer de fa ce ca fe fa ca de>

buf.writeDoubleBE(value[, offset])
value <number> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as big-endian. The value must be a JavaScript number. Behavior is undefined when value is anything other than a JavaScript number.

const buf = Buffer.allocUnsafe(8);

buf.writeDoubleBE(123.456, 0);

console.log(buf);
// Prints: <Buffer 40 5e dd 2f 1a 9f be 77>

buf.writeDoubleLE(value[, offset])
value <number> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 8 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as little-endian. The value must be a JavaScript number. Behavior is undefined when value is anything other than a JavaScript number.

const buf = Buffer.allocUnsafe(8);

buf.writeDoubleLE(123.456, 0);

console.log(buf);
// Prints: <Buffer 77 be 9f 1a 2f dd 5e 40>

buf.writeFloatBE(value[, offset])
value <number> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as big-endian. Behavior is undefined when value is anything other than a JavaScript number.

const buf = Buffer.allocUnsafe(4);

buf.writeFloatBE(0xcafebabe, 0);

console.log(buf);
// Prints: <Buffer 4f 4a fe bb>

buf.writeFloatLE(value[, offset])
value <number> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as little-endian. Behavior is undefined when value is anything other than a JavaScript number.

const buf = Buffer.allocUnsafe(4);

buf.writeFloatLE(0xcafebabe, 0);

console.log(buf);
// Prints: <Buffer bb fe 4a 4f>

buf.writeInt8(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 1 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset . value must be a valid signed 8-bit integer. Behavior is undefined when value is anything other than a signed 8-bit integer.

value is interpreted and written as a two's complement signed integer.


const buf = Buffer.allocUnsafe(2);

buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);

console.log(buf);
// Prints: <Buffer 02 fe>

buf.writeInt16BE(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as big-endian. The value must be a valid signed 16-bit integer. Behavior is undefined when value is anything other than a signed 16-bit
integer.

The value is interpreted and written as a two's complement signed integer.

const buf = Buffer.allocUnsafe(2);

buf.writeInt16BE(0x0102, 0);

console.log(buf);
// Prints: <Buffer 01 02>

buf.writeInt16LE(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as little-endian. The value must be a valid signed 16-bit integer. Behavior is undefined when value is anything other than a signed 16-bit
integer.

The value is interpreted and written as a two's complement signed integer.


const buf = Buffer.allocUnsafe(2);

buf.writeInt16LE(0x0304, 0);

console.log(buf);
// Prints: <Buffer 04 03>

buf.writeInt32BE(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as big-endian. The value must be a valid signed 32-bit integer. Behavior is undefined when value is anything other than a signed 32-bit
integer.

The value is interpreted and written as a two's complement signed integer.

const buf = Buffer.allocUnsafe(4);

buf.writeInt32BE(0x01020304, 0);

console.log(buf);
// Prints: <Buffer 01 02 03 04>

buf.writeInt32LE(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as little-endian. The value must be a valid signed 32-bit integer. Behavior is undefined when value is anything other than a signed 32-bit
integer.

The value is interpreted and written as a two's complement signed integer.


const buf = Buffer.allocUnsafe(4);

buf.writeInt32LE(0x05060708, 0);

console.log(buf);
// Prints: <Buffer 08 07 06 05>

buf.writeIntBE(value, offset, byteLength)


value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength .

byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6 .

Returns: <integer> offset plus the number of bytes written.

Writes byteLength bytes of value to buf at the specified offset as big-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than a signed
integer.

const buf = Buffer.allocUnsafe(6);

buf.writeIntBE(0x1234567890ab, 0, 6);

console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>

buf.writeIntLE(value, offset, byteLength)


value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength .

byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6 .

Returns: <integer> offset plus the number of bytes written.

Writes byteLength bytes of value to buf at the specified offset as little-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than a signed
integer.

const buf = Buffer.allocUnsafe(6);


buf.writeIntLE(0x1234567890ab, 0, 6);

console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>

buf.writeUInt8(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 1 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset . value must be a valid unsigned 8-bit integer. Behavior is undefined when value is anything other than an unsigned 8-bit integer.

This function is also available under the writeUint8 alias.

const buf = Buffer.allocUnsafe(4);

buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);

console.log(buf);
// Prints: <Buffer 03 04 23 42>

buf.writeUInt16BE(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as big-endian. The value must be a valid unsigned 16-bit integer. Behavior is undefined when value is anything other than an unsigned 16-
bit integer.

This function is also available under the writeUint16BE alias.

const buf = Buffer.allocUnsafe(4);


buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);

console.log(buf);
// Prints: <Buffer de ad be ef>

buf.writeUInt16LE(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as little-endian. The value must be a valid unsigned 16-bit integer. Behavior is undefined when value is anything other than an unsigned 16-
bit integer.

This function is also available under the writeUint16LE alias.

const buf = Buffer.allocUnsafe(4);

buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);

console.log(buf);
// Prints: <Buffer ad de ef be>

buf.writeUInt32BE(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as big-endian. The value must be a valid unsigned 32-bit integer. Behavior is undefined when value is anything other than an unsigned 32-
bit integer.

This function is also available under the writeUint32BE alias.

const buf = Buffer.allocUnsafe(4);


buf.writeUInt32BE(0xfeedface, 0);

console.log(buf);
// Prints: <Buffer fe ed fa ce>

buf.writeUInt32LE(value[, offset])
value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .

Returns: <integer> offset plus the number of bytes written.

Writes value to buf at the specified offset as little-endian. The value must be a valid unsigned 32-bit integer. Behavior is undefined when value is anything other than an unsigned 32-
bit integer.

This function is also available under the writeUint32LE alias.

const buf = Buffer.allocUnsafe(4);

buf.writeUInt32LE(0xfeedface, 0);

console.log(buf);
// Prints: <Buffer ce fa ed fe>

buf.writeUIntBE(value, offset, byteLength)


value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength .

byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6 .

Returns: <integer> offset plus the number of bytes written.

Writes byteLength bytes of value to buf at the specified offset as big-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than an unsigned
integer.

This function is also available under the writeUintBE alias.

const buf = Buffer.allocUnsafe(6);

buf.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>

buf.writeUIntLE(value, offset, byteLength)


value <integer> Number to be written to buf .

offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength .

byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6 .

Returns: <integer> offset plus the number of bytes written.

Writes byteLength bytes of value to buf at the specified offset as little-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than an
unsigned integer.

This function is also available under the writeUintLE alias.

const buf = Buffer.allocUnsafe(6);

buf.writeUIntLE(0x1234567890ab, 0, 6);

console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>

new Buffer(array)

Stability: 0 - Deprecated: Use Buffer.from(array) instead.

array <integer[]> An array of bytes to copy from.

See Buffer.from(array) .

new Buffer(arrayBuffer[, byteOffset[, length]])

Stability: 0 - Deprecated: Use Buffer.from(arrayBuffer[, byteOffset[, length]]) instead.


arrayBuffer <ArrayBuffer> | <SharedArrayBuffer> An ArrayBuffer , SharedArrayBuffer or the .buffer property of a TypedArray .

byteOffset <integer> Index of first byte to expose. Default: 0 .

length <integer> Number of bytes to expose. Default: arrayBuffer.byteLength - byteOffset .

See Buffer.from(arrayBuffer[, byteOffset[, length]]) .

new Buffer(buffer)

Stability: 0 - Deprecated: Use Buffer.from(buffer) instead.

buffer <Buffer> | <Uint8Array> An existing Buffer or Uint8Array from which to copy data.

See Buffer.from(buffer) .

new Buffer(size)

Stability: 0 - Deprecated: Use Buffer.alloc() instead (also see Buffer.allocUnsafe() ).

size <integer> The desired length of the new Buffer .

See Buffer.alloc() and Buffer.allocUnsafe() . This variant of the constructor is equivalent to Buffer.alloc() .

new Buffer(string[, encoding])


Stability: 0 - Deprecated: Use Buffer.from(string[, encoding]) instead.

string <string> String to encode.

encoding <string> The encoding of string . Default: 'utf8' .

See Buffer.from(string[, encoding]) .

buffer module APIs


While the Buffer object is available as a global, additional Buffer -related APIs are available only via the buffer module, accessed using require('buffer') .
buffer.INSPECT_MAX_BYTES
<integer> Default: 50

Returns the maximum number of bytes that will be returned when buf.inspect() is called. This can be overridden by user modules. See util.inspect() for more details on
buf.inspect() behavior.
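
For example, lowering the limit truncates what console.log shows for a Buffer (the exact inspection format may vary between Node.js versions):

const buffer = require('buffer');

buffer.INSPECT_MAX_BYTES = 4;

console.log(Buffer.alloc(10));
// Prints something like: <Buffer 00 00 00 00 ... 6 more bytes>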

buffer.kMaxLength
<integer> The largest size allowed for a single Buffer instance.

An alias for buffer.constants.MAX_LENGTH .

buffer.transcode(source, fromEnc, toEnc)


source <Buffer> | <Uint8Array> A Buffer or Uint8Array instance.

fromEnc <string> The current encoding.

toEnc <string> The target encoding.

Returns: <Buffer>

Re-encodes the given Buffer or Uint8Array instance from one character encoding to another. Returns a new Buffer instance.

Throws if the fromEnc or toEnc specify invalid character encodings or if conversion from fromEnc to toEnc is not permitted.

Encodings supported by buffer.transcode() are: 'ascii' , 'utf8' , 'utf16le' , 'ucs2' , 'latin1' , and 'binary' .

The transcoding process will use substitution characters if a given byte sequence cannot be adequately represented in the target encoding. For instance:

const buffer = require('buffer');

const newBuf = buffer.transcode(Buffer.from('€'), 'utf8', 'ascii');


console.log(newBuf.toString('ascii'));
// Prints: '?'

Because the Euro ( € ) sign is not representable in US-ASCII, it is replaced with ? in the transcoded Buffer .

Class: SlowBuffer

Stability: 0 - Deprecated: Use Buffer.allocUnsafeSlow() instead.


See Buffer.allocUnsafeSlow() . This was never a class in the sense that the constructor always returned a Buffer instance, rather than a SlowBuffer instance.

new SlowBuffer(size)

Stability: 0 - Deprecated: Use Buffer.allocUnsafeSlow() instead.

size <integer> The desired length of the new SlowBuffer .

See Buffer.allocUnsafeSlow() .

Buffer constants

buffer.constants.MAX_LENGTH
<integer> The largest size allowed for a single Buffer instance.

On 32-bit architectures, this value currently is 2^30 - 1 (~1 GB). On 64-bit architectures, this value currently is 2^31 - 1 (~2 GB).

This value is also available as buffer.kMaxLength .

buffer.constants.MAX_STRING_LENGTH
<integer> The largest length allowed for a single string instance.

Represents the largest length that a string primitive can have, counted in UTF-16 code units.

This value may depend on the JS engine that is being used.
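
Both constants can simply be read from the buffer module; since the values are platform- and engine-dependent, no fixed output is shown:

const { constants } = require('buffer');

console.log(constants.MAX_LENGTH);
// Largest allowed Buffer size on this platform.
console.log(constants.MAX_STRING_LENGTH);
// Largest string length, in UTF-16 code units, for this JS engine.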

Buffer.from() , Buffer.alloc() , and Buffer.allocUnsafe()


In versions of Node.js prior to 6.0.0, Buffer instances were created using the Buffer constructor function, which allocates the returned Buffer differently based on what arguments are
provided:

Passing a number as the first argument to Buffer() (e.g. new Buffer(10) ) allocates a new Buffer object of the specified size. Prior to Node.js 8.0.0, the memory allocated for such
Buffer instances is not initialized and can contain sensitive data. Such Buffer instances must be subsequently initialized by using either buf.fill(0) or by writing to the entire Buffer
before reading data from the Buffer . While this behavior is intentional to improve performance, development experience has demonstrated that a more explicit distinction is required
between creating a fast-but-uninitialized Buffer versus creating a slower-but-safer Buffer . Since Node.js 8.0.0, Buffer(num) and new Buffer(num) return a Buffer with initialized
memory.
Passing a string, array, or Buffer as the first argument copies the passed object's data into the Buffer .

Passing an ArrayBuffer or a SharedArrayBuffer returns a Buffer that shares allocated memory with the given array buffer.
Because the behavior of new Buffer() is different depending on the type of the first argument, security and reliability issues can be inadvertently introduced into applications when
argument validation or Buffer initialization is not performed.

For example, if an attacker can cause an application to receive a number where a string is expected, the application may call new Buffer(100) instead of new Buffer("100") , leading it to
allocate a 100 byte buffer instead of allocating a 3 byte buffer with content "100" . This is commonly possible using JSON API calls. Since JSON distinguishes between numeric and string
types, it allows injection of numbers where a naively written application that does not validate its input sufficiently might expect to always receive a string. Before Node.js 8.0.0, the 100 byte
buffer might contain arbitrary pre-existing in-memory data, so may be used to expose in-memory secrets to a remote attacker. Since Node.js 8.0.0, exposure of memory cannot occur
because the data is zero-filled. However, other attacks are still possible, such as causing very large buffers to be allocated by the server, leading to performance degradation or crashing on
memory exhaustion.
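
A minimal sketch of the ambiguity (calling the deprecated constructor emits a runtime deprecation warning):

const fromNumber = new Buffer(100);
// A 100-byte buffer (zero-filled since Node.js 8.0.0).
const fromString = new Buffer('100');
// A 3-byte buffer containing the characters '1', '0', '0'.

console.log(fromNumber.length);
// Prints: 100
console.log(fromString.length);
// Prints: 3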

To make the creation of Buffer instances more reliable and less error-prone, the various forms of the new Buffer() constructor have been deprecated and replaced by separate
Buffer.from() , Buffer.alloc() , and Buffer.allocUnsafe() methods.

Developers should migrate all existing uses of the new Buffer() constructors to one of these new APIs.

Buffer.from(array) returns a new Buffer that contains a copy of the provided octets.

Buffer.from(arrayBuffer[, byteOffset[, length]]) returns a new Buffer that shares the same allocated memory as the given ArrayBuffer .

Buffer.from(buffer) returns a new Buffer that contains a copy of the contents of the given Buffer .

Buffer.from(string[, encoding]) returns a new Buffer that contains a copy of the provided string.

Buffer.alloc(size[, fill[, encoding]]) returns a new initialized Buffer of the specified size. This method is slower than Buffer.allocUnsafe(size) but guarantees that newly
created Buffer instances never contain old data that is potentially sensitive. A TypeError will be thrown if size is not a number.

Buffer.allocUnsafe(size) and Buffer.allocUnsafeSlow(size) each return a new uninitialized Buffer of the specified size . Because the Buffer is uninitialized, the allocated
segment of memory might contain old data that is potentially sensitive.
Buffer instances returned by Buffer.allocUnsafe() and Buffer.from(array) may be allocated off a shared internal memory pool if size is less than or equal to half Buffer.poolSize .
Instances returned by Buffer.allocUnsafeSlow() never use the shared internal memory pool.
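
The pooling threshold can be inspected via Buffer.poolSize , which defaults to 8192 bytes at the time of writing; allocations of at most half that size may come from the pool:

console.log(Buffer.poolSize);
// Prints: 8192 (the default)

const pooled = Buffer.allocUnsafe(4096);
// May be carved out of the shared internal pool.
const unpooled = Buffer.allocUnsafeSlow(4096);
// Never uses the shared pool.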

The --zero-fill-buffers command-line option


Node.js can be started using the --zero-fill-buffers command-line option to cause all newly-allocated Buffer instances to be zero-filled upon creation by default. Without the option,
buffers created with Buffer.allocUnsafe() , Buffer.allocUnsafeSlow() , and new SlowBuffer(size) are not zero-filled. Use of this flag can have a measurable negative impact on
performance. Use the --zero-fill-buffers option only when necessary to enforce that newly allocated Buffer instances cannot contain old data that is potentially sensitive.

$ node --zero-fill-buffers
> Buffer.allocUnsafe(5);
<Buffer 00 00 00 00 00>

What makes Buffer.allocUnsafe() and Buffer.allocUnsafeSlow() "unsafe"?


When calling Buffer.allocUnsafe() and Buffer.allocUnsafeSlow() , the segment of allocated memory is uninitialized (it is not zeroed-out). While this design makes the allocation of
memory quite fast, the allocated segment of memory might contain old data that is potentially sensitive. Using a Buffer created by Buffer.allocUnsafe() without completely overwriting
the memory can allow this old data to be leaked when the Buffer memory is read.

While there are clear performance advantages to using Buffer.allocUnsafe() , extra care must be taken in order to avoid introducing security vulnerabilities into an application.
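
A sketch of the usual mitigation: either overwrite the entire allocation before it can be read, or pay for zero-filling up front:

const fast = Buffer.allocUnsafe(16);
// Contents are unpredictable here; overwrite before exposing the buffer.
fast.fill(0);

const safe = Buffer.alloc(16);
// Already zero-filled, at a (small) allocation cost.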
