Node FS Module Collate 030421
Node.js is cross-platform meaning it works on Windows, OSX and Linux. A large number of
the Node.js community write Node.js on OSX and then deploy to Linux servers. Because
OSX and Linux are based on UNIX this tends to just work. Windows support is a first-class
citizen in Node.js and if you learn to use Node.js in the right way you can make sure that
you can welcome your Windows friends to your code party.
Paths
The biggest issue you will run into is paths. Node.js does a great job of taking care of most
of this for you but if you build paths in the wrong way you’ll run into problems on Windows.
Consider you are doing some string concatenation to build a path, for example.
Whilst forward slashes will work ok on Windows if you do string concatenation you miss
out on the protection that the path module in Node.js gives you.
The path (http://nodejs.org/api/path.html) module gives you all of the tools you need to
handle cross-platform paths. For this example we need path.join .
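A minimal sketch of the difference (file names assumed):

var path = require('path');

// string concatenation hard-codes a separator
var bad = 'uploads' + '/' + 'images' + '/' + 'cat.jpg';

// path.join picks the right separator for the platform
var good = path.join('uploads', 'images', 'cat.jpg');
// 'uploads/images/cat.jpg' on *nix, 'uploads\\images\\cat.jpg' on Windows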
Using path.resolve lets you move around the file system but maintain cross platform
compatibility. As per the documentation you can think of it as a series of cd commands
that output a single path at the end.
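For instance (paths assumed; the second example is the classic one from the Node.js docs):

var path = require('path');

path.resolve('/foo/bar', './baz');
// '/foo/bar/baz'

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif');
// if the current working directory is /home/myself/node, returns
// '/home/myself/node/wwwroot/static_files/gif/image.gif'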
You should be using path.normalize . This will present you with the correct path on
whatever platform you are using.
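A quick sketch, using the example input from the Node.js docs:

var path = require('path');

path.normalize('/foo/bar//baz/asdf/quux/..');
// '/foo/bar/baz/asdf'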
As we saw before with the string concatenation example kittens can die if you use string
concatenation.
If you need to join paths together use path.join . This will also normalize the result for
you.
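For example:

var path = require('path');

path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');
// '/foo/bar/baz/asdf'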
Scripts in package.json
Let’s say you have the following executable script npm-postinstall in the bin folder of
your project.
#!/usr/bin/env node
console.log('node modules installed!');
If you define scripts to be run in your package.json you will find that Windows will choke if
you rely on a Node.js shebang.
{
"name": "some-app",
"version": "0.0.1",
"authors": [
"George Ornbo <[email protected]>",
],
"scripts": {
"postinstall": "./bin/npm-postinstall"
}
}

Instead, invoke the script through node so that Windows can run it too:
{
"name": "some-app",
"version": "0.0.1",
"authors": [
"George Ornbo <[email protected]>",
],
"scripts": {
"postinstall": "node bin/npm-postinstall"
}
}
This works for all platforms rather than just OSX and Linux.
If you are working with any form of executing command-line programs, and you like to ex-
ecute more than one in a single go, you would probably do so like this (let’s use the basic act
of creating a folder and cd’ing into it for brevity):
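A sketch of that pattern (the script name is assumed):

"scripts": {
  "makedir": "mkdir folder && cd folder"
}

The && operator chains commands in npm scripts on Windows, OSX and Linux alike.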
Newlines
We all know how troublesome newline characters are across platforms. Some platforms use '\n', others use '\r', and the rest use both. If you are struggling to get the newline character to work in your log statements or strings on multiple platforms, then you might consider a solution that uses nasty regular expressions to match the correct newline character that you want. Usually, that would look like this: /(?:\r\n|\r|\n)/ . Yuck. Here's a better
approach. The OS module has an EOL constant attached to it that when referred, will out-
put the correct newline character for the operating system.
var os = require('os');

console.log('This text will print' + os.EOL + 'on three lines' + os.EOL + 'no matter the OS');
Temporary files
If you need to write files to a tmp folder use os.tmpdir() to ensure you write to the
correct tmp file location for your platform. Thanks to alessioalex
(https://github.com/alessioalex) for this tip.
Home directories
On *nix your home directory is process.env.HOME but on Windows the home directory is
process.env.HOMEPATH. You can smooth this out with:
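A minimal sketch (falling back from one variable to the other):

var home = process.env.HOME || process.env.HOMEPATH;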
If you need even more control you can get the operating system platform and CPU architec-
ture you are running on and react accordingly with the os module
(http://nodejs.org/api/os.html).
var os = require('os');
os.platform(); // equivalent to process.platform
// 'linux' on Linux
// 'win32' on Windows (32-bit / 64-bit)
// 'darwin' on OSX
os.arch();
// 'x86' on 32-bit CPU architecture
// 'x64' on 64-bit CPU architecture
Conclusion
One of the major strengths of Node.js is the ability to deploy your code on any platform and
to work with almost any development platform. With a bit of knowledge you can make
cross-platform compatibility happen out of the box and avoid having to write the 'make x
compatible on y' ticket.
Summary: in this tutorial, you will learn about the path module in Node.js
Node.js provides you with the path module that allows you to interact with file paths easily.
The path module has many useful properties and methods to access and manipulate paths in the file
system.
The path module is a core Node.js module, so you can use it without installing anything:
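const path = require('path');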
The path object has the sep property that represents the platform-specific path separator:
path.sep
path.sep returns \ on Windows and / on Linux and macOS.
The path object also has the delimiter property that represents the path delimiter:
path.delimiter
path.delimiter returns ; on Windows and : on Linux and macOS.
The following shows some handy methods of the path module that you probably use very often:
path.basename(path, [,ext])
path.dirname(path)
path.extname(path)
path.format(pathObj)
path.isAbsolute(path)
path.join(...path)
path.normalize(path)
path.parse(path)
path.relative(from, to)
path.resolve(...path)
path.basename(path[, ext])
The path.basename() returns the last portion of a specified path. For example:
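The original snippet was lost; this is a minimal sketch reconstructed to match the output below:

let result = path.basename('/public_html/home/index.html');
console.log(result);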
Output:
index.html
The ext parameter filters out the extension from the path:
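Again a sketch, reconstructed to match the output:

let result = path.basename('/public_html/home/index.html', '.html');
console.log(result);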
Output:
index
path.dirname(path)
The path.dirname() method returns the directory name of a specified path. For example:
let result = path.dirname('/public_html/home/index.html');
console.log(result);
Output:
/public_html/home
path.extname(path)
The path.extname() method returns the extension of a path. For example:
console.log(path.extname('index.html'));
console.log(path.extname('app.js'));
console.log(path.extname('node.js.md'));
Output:
.html
.js
.md
path.format(pathObj)
The path.format() method returns a path string from a specified path object.
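The input object below is an assumption, reconstructed to match the output shown:

let pathToFile = path.format({
    dir: 'public_html/home/js',
    base: 'app.js'
});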
console.log(pathToFile);
Output:
public_html/home/js/app.js
path.isAbsolute(path)
let result = path.isAbsolute('C:/node.js/');
console.log(result); // true
result = path.isAbsolute('/node.js');
console.log(result); // true
result = path.isAbsolute('home/');
console.log(result); // false
result = path.isAbsolute('.');
console.log(result); // false
result = path.isAbsolute('/node/..');
console.log(result); // true
result = path.isAbsolute('node/');
console.log(result); // false
result = path.isAbsolute('.');
console.log(result); // false
path.join(…paths)
The path.join() method joins the given path segments using the platform-specific separator and normalizes the result. For example:
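A sketch with an assumed input, matching the Windows-style output below:

let pathToDir = path.join('/home', 'js', 'dist', 'app.js');
console.log(pathToDir);

Output: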
\home\js\dist\app.js
path.parse(path)
The path.parse() method returns an object whose properties represent the path elements. The
returned object has the following properties:
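root: the root of the path
dir: the directory path from the root
base: the file name plus extension
ext: the extension
name: the file name without extension
For example, on Windows (input assumed from the output below):

let pathObj = path.parse('d:/nodejs/html/js/app.js');
console.log(pathObj);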
Output:
{
root: 'd:/',
dir: 'd:/nodejs/html/js/',
base: 'app.js',
ext: '.js',
name: 'app'
}
On Linux or macOS:
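// input assumed from the output below
let pathObj = path.parse('/nodejs/html/js/app.js');
console.log(pathObj);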
Output:
{
root: '/',
dir: '/nodejs/html/js',
base: 'app.js',
ext: '.js',
name: 'app'
}
path.normalize(path)
The path.normalize() method normalizes a specified path. It also resolves the '..' and '.'
segments.
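For example, on Windows (input assumed; reconstructed to match the output):

let pathToDir = path.normalize('C:\\node.js\\module\\js\\dist\\..\\dist');
console.log(pathToDir);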
Output:
C:\node.js\module\js\dist
path.relative(from, to)
The path.relative() accepts two arguments and returns the relative path between them based on
the current working directory.
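For example (inputs assumed from the output below):

let result = path.relative('/home/user/lib', '/home/user/js');
console.log(result);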
Output:
../js
path.resolve(…paths)
The path.resolve() method accepts a sequence of paths or path segments and resolves it into an
absolute path. The path.resolve() method prepends each subsequent path from right to left until it
completes constructing an absolute path.
If you don’t pass any argument into the path.resolve() method, it will return the current working
directory.
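For example, assuming the current working directory is /home/john:

console.log(path.resolve());
console.log(path.resolve('.'));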
Output:
/home/john
/home/john
In this example, the path.resolve() method returns a path that is the same as the current working
directory.
See another example on Linux or macOS:
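// inputs assumed, again with /home/john as the current working directory
console.log(path.resolve('html/index.html'));
console.log(path.resolve('html', 'js/app.js'));
console.log(path.resolve('/home', 'html', 'about.html'));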
Output:
/home/john/html/index.html
/home/john/html/js/app.js
/home/html/about.html
Summary
Use the path core module to manipulate file paths effectively.
Stream
Stability: 2 - Unstable
A stream is an abstract interface implemented by various objects in Node. For example a request to an
HTTP server is a stream, as is stdout. Streams are readable, writable, or both. All streams are instances
of EventEmitter.
You can load the Stream base classes by doing require('stream') . There are base classes provided for
Readable streams, Writable streams, Duplex streams, and Transform streams.
This document is split up into 3 sections. The first explains the parts of the API that you need to be
aware of to use streams in your programs. If you never implement a streaming API yourself, you can
stop there.
The second section explains the parts of the API that you need to use if you implement your own
custom streams yourself. The API is designed to make this easy for you to do.
The third section goes into more depth about how streams work, including some of the internal
mechanisms and functions that you should probably not modify unless you definitely know what you
are doing.
All streams are EventEmitters, but they also have other custom methods and properties depending on
whether they are Readable, Writable, or Duplex.
If a stream is both Readable and Writable, then it implements all of the methods and events below. So,
a Duplex or Transform stream is fully described by this API, though their implementation may be
somewhat different.
It is not necessary to implement Stream interfaces in order to consume streams in your programs. If
you are implementing streaming interfaces in your own program, please also refer to API for Stream
Implementors below.
Almost all Node programs, no matter how simple, use Streams in some way. Here is an example of
using Streams in a Node program:
var http = require('http');

var server = http.createServer(function (req, res) {
  var body = '';
  req.setEncoding('utf8');

  // readable streams emit 'data' events once a listener is added
  req.on('data', function (chunk) {
    body += chunk;
  });

  // the end event tells you that you have the entire body
  req.on('end', function () {
    try {
      var data = JSON.parse(body);
    } catch (er) {
      // uh oh! bad json!
      res.statusCode = 400;
      return res.end('error: ' + er.message);
    }
    res.end(typeof data);
  });
});

server.listen(1337);
Class: stream.Readable
The Readable stream interface is the abstraction for a source of data that you are reading from. In
other words, data comes out of a Readable stream.
A Readable stream will not start emitting data until you indicate that you are ready to receive it.
Readable streams have two "modes": a flowing mode and a paused mode. When in flowing mode, data
is read from the underlying system and provided to your program as fast as possible. In paused mode,
you must explicitly call stream.read() to get chunks of data out. Streams start out in paused mode.
Note: If no data event handlers are attached, and there are no pipe() destinations, and the stream
is switched into flowing mode, then data will be lost.
You can switch back to paused mode by doing either of the following:
Note that, for backwards compatibility reasons, removing 'data' event handlers will not
automatically pause the stream. Also, if there are piped destinations, then calling pause() will not
guarantee that the stream will remain paused once those destinations drain and ask for more data.
Event: 'readable'
When a chunk of data can be read from the stream, it will emit a 'readable' event.
In some cases, listening for a 'readable' event will cause some data to be read into the internal buffer
from the underlying system, if it hadn't already.
Once the internal buffer is drained, a readable event will fire again when more data is available.
The readable event is not emitted in the "flowing" mode with the sole exception of the last one, on
end-of-stream.
The 'readable' event indicates that the stream has new information: either new data is available or the
end of the stream has been reached. In the former case, .read() will return that data. In the latter
case, .read() will return null. For instance, in the following example, foo.txt is an empty file:
var fs = require('fs');
var rr = fs.createReadStream('foo.txt');
rr.on('readable', function() {
console.log('readable:', rr.read());
});
rr.on('end', function() {
console.log('end');
});
Event: 'data'
Attaching a data event listener to a stream that has not been explicitly paused will switch the stream
into flowing mode. Data will then be passed as soon as it is available.
If you just want to get all the data out of the stream as fast as possible, this is the best way to do so.
Note that the readable event should not be used together with data because assigning the latter
switches the stream into "flowing" mode, so the readable event will not be emitted.
Event: 'end'
Note that the end event will not fire unless the data is completely consumed. This can be done by
switching into flowing mode, or by calling read() repeatedly until you get to the end.
Event: 'close'
Emitted when the underlying resource (for example, the backing file descriptor) has been closed. Not
all streams will emit this.
Event: 'error'
{Error Object}
Emitted if there was an error receiving data.
readable.read([size])
The read() method pulls some data out of the internal buffer and returns it. If there is no data
available, then it will return null .
If you pass in a size argument, then it will return that many bytes. If size bytes are not available,
then it will return null , unless we've ended, in which case it will return the data remaining in the
buffer.
If you do not specify a size argument, then it will return all the data in the internal buffer.
This method should only be called in paused mode. In flowing mode, this method is called
automatically until the internal buffer is drained.
If this method returns a data chunk, then it will also trigger the emission of a 'data' event.
Note that calling readable.read([size]) after the end event has been triggered will return null . No
runtime error will be raised.
readable.setEncoding(encoding)
Call this function to cause the stream to return strings of the specified encoding instead of Buffer
objects. For example, if you do readable.setEncoding('utf8') , then the output data will be interpreted
as UTF-8 data, and returned as strings. If you do readable.setEncoding('hex') , then the data will be
encoded in hexadecimal string format.
This properly handles multi-byte characters that would otherwise be potentially mangled if you simply
pulled the Buffers directly and called buf.toString(encoding) on them. If you want to read the data as
strings, always use this method.
readable.resume()
Return: this
This method will cause the readable stream to resume emitting data events.
This method will switch the stream into flowing mode. If you do not want to consume the data from a
stream, but you do want to get to its end event, you can call readable.resume() to open the flow of
data.
readable.pause()
Return: this
This method will cause a stream in flowing mode to stop emitting data events, switching out of
flowing mode. Any data that becomes available will remain in the internal buffer.
readable.isPaused()
Return: Boolean
This method returns whether or not the readable has been explicitly paused by client code (using
readable.pause() without a corresponding readable.resume() ).
readable.pipe(destination[, options])
This function returns the destination stream, so you can set up pipe chains like so:
var r = fs.createReadStream('file.txt');
var z = zlib.createGzip();
var w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);
process.stdin.pipe(process.stdout);
By default end() is called on the destination when the source stream emits end , so that the
destination is no longer writable. Pass { end: false } as an option to keep the destination stream open.
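For example (a sketch of the pattern the next sentence refers to):

reader.pipe(writer, { end: false });
reader.on('end', function() {
  writer.end('Goodbye\n');
});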
This keeps writer open so that "Goodbye" can be written at the end.
Note that process.stderr and process.stdout are never closed until the process exits, regardless of the
specified options.
readable.unpipe([destination])
destination {Writable Stream} Optional specific stream to unpipe
This method will remove the hooks set up for a previous pipe() call.
If the destination is not specified, then all pipes are removed.
If the destination is specified, but no pipe is set up for it, then this is a no-op.
readable.unshift(chunk)
chunk {Buffer | String} Chunk of data to unshift onto the read queue
This is useful in certain cases where a stream is being consumed by a parser, which needs to "un-
consume" some data that it has optimistically pulled out of the source, so that the stream can be
passed on to some other party.
Note that stream.unshift(chunk) cannot be called after the end event has been triggered; a runtime
error will be raised.
If you find that you must often call stream.unshift(chunk) in your programs, consider implementing a
Transform stream instead. (See API for Stream Implementors, below.)
// Pull off a header delimited by \n\n
// use unshift() if we get too much
// Call the callback with (error, header, stream)
var StringDecoder = require('string_decoder').StringDecoder;
function parseHeader(stream, callback) {
stream.on('error', callback);
stream.on('readable', onReadable);
var decoder = new StringDecoder('utf8');
var header = '';
function onReadable() {
var chunk;
while (null !== (chunk = stream.read())) {
var str = decoder.write(chunk);
if (str.match(/\n\n/)) {
// found the header boundary
var split = str.split(/\n\n/);
header += split.shift();
var remaining = split.join('\n\n');
var buf = new Buffer(remaining, 'utf8');
if (buf.length)
stream.unshift(buf);
stream.removeListener('error', callback);
stream.removeListener('readable', onReadable);
// now the body of the message can be read from the stream.
callback(null, header, stream);
} else {
// still reading the header.
header += str;
}
}
}
}
Note that, unlike stream.push(chunk) , stream.unshift(chunk) will not end the reading process by
resetting the internal reading state of the stream. This can cause unexpected results if unshift is
called during a read (i.e. from within a _read implementation on a custom stream). Following the call
to unshift with an immediate stream.push('') will reset the reading state appropriately, however it is
best to simply avoid calling unshift while in the process of performing a read.
readable.wrap(stream)
Versions of Node prior to v0.10 had streams that did not implement the entire Streams API as it is
today. (See "Compatibility" below for more information.)
If you are using an older Node library that emits 'data' events and has a pause() method that is
advisory only, then you can use the wrap() method to create a Readable stream that uses the old
stream as its data source.
You will very rarely ever need to call this function, but it exists as a convenience for interacting with
old Node programs and libraries.
For example:
var OldReader = require('./old-api-module.js').OldReader;
var Readable = require('stream').Readable;
var oreader = new OldReader();
var myReader = new Readable().wrap(oreader);

myReader.on('readable', function() {
  myReader.read(); // etc.
});
Class: stream.Writable
The Writable stream interface is an abstraction for a destination that you are writing data to.
writable.write(chunk[, encoding][, callback])
This method writes some data to the underlying system, and calls the supplied callback once the data
has been fully handled.
The return value indicates if you should continue writing right now. If the data had to be buffered
internally, then it will return false . Otherwise, it will return true .
This return value is strictly advisory. You MAY continue to write, even if it returns false . However,
writes will be buffered in memory, so it is best not to do this excessively. Instead, wait for the drain
event.
Event: 'drain'
If a writable.write(chunk) call returns false, then the drain event will indicate when it is
appropriate to begin writing more data to the stream.
// Write the data to the supplied writable stream 1MM times.
// Be attentive to back-pressure.
function writeOneMillionTimes(writer, data, encoding, callback) {
var i = 1000000;
write();
function write() {
var ok = true;
do {
i -= 1;
if (i === 0) {
// last time!
writer.write(data, encoding, callback);
} else {
// see if we should continue, or wait
// don't pass the callback, because we're not done yet.
ok = writer.write(data, encoding);
}
} while (i > 0 && ok);
if (i > 0) {
// had to stop early!
// write some more once it drains
writer.once('drain', write);
}
}
}
writable.cork()
Forces buffering of all writes.
writable.uncork()
Flush all data, buffered since the .cork() call.
writable.setDefaultEncoding(encoding)
Sets the default encoding for a writable stream. Returns true if the encoding is valid and is set.
Otherwise returns false .
writable.end([chunk][, encoding][, callback])
Call this method when no more data will be written to the stream. If supplied, the callback is attached
as a listener on the finish event.
Calling write() after calling end() will raise an error:
// end with 'world!' and then write with 'hello, ' will raise an error
var file = fs.createWriteStream('example.txt');
file.end('world!');
file.write('hello, ');
Event: 'finish'
When the end() method has been called, and all data has been flushed to the underlying system,
this event is emitted.
Event: 'pipe'
src {Readable Stream} The source stream that is piping to this writable
This is emitted whenever the pipe() method is called on a readable stream, adding this writable to
its set of destinations.
Event: 'unpipe'
src {Readable Stream} The source stream that unpiped this writable
This is emitted whenever the unpipe() method is called on a readable stream, removing this
writable from its set of destinations.
Event: 'error'
{Error object}
Class: stream.Duplex
Duplex streams are streams that implement both the Readable and Writable interfaces. See above for
usage.
Class: stream.Transform
Transform streams are Duplex streams where the output is in some way computed from the input.
They implement both the Readable and Writable interfaces. See above for usage.
Examples of Transform streams include:
zlib streams
crypto streams
API for Stream Implementors
To implement any sort of stream, the pattern is the same:
1. Extend the appropriate parent class in your own subclass. (The util.inherits method is particularly
helpful for this.)
2. Call the appropriate parent class constructor in your constructor, to be sure that the internal mechanisms are
set up properly.
3. Implement one or more specific methods, as detailed below.
The class to extend and the method(s) to implement depend on the sort of stream class you are
writing:
Use-case              Class       Method(s) to implement
Reading only          Readable    _read
Writing only          Writable    _write
Reading and writing   Duplex      _read, _write
In your implementation code, it is very important to never call the methods described in API for
Stream Consumers above. Otherwise, you can potentially cause adverse side effects in programs that
consume your streaming interfaces.
Class: stream.Readable
Please see above under API for Stream Consumers for how to consume streams in your programs.
What follows is an explanation of how to implement Readable streams in your programs.
This is a basic example of a Readable stream. It emits the numerals from 1 to 1,000,000 in ascending
order, and then ends.
var Readable = require('stream').Readable;
var util = require('util');
util.inherits(Counter, Readable);
function Counter(opt) {
Readable.call(this, opt);
this._max = 1000000;
this._index = 1;
}
Counter.prototype._read = function() {
var i = this._index++;
if (i > this._max)
this.push(null);
else {
var str = '' + i;
var buf = new Buffer(str, 'ascii');
this.push(buf);
}
};
This is similar to the parseHeader function described above, but implemented as a custom stream. Also,
note that this implementation does not convert the incoming data to a string.
However, this would be better implemented as a Transform stream. See below for a better
implementation.
// A parser for a simple data protocol.
// The "header" is a JSON object, followed by 2 \n characters, and
// then a message body.
//
// NOTE: This can be done more simply as a Transform stream!
// Using Readable directly for this is sub-optimal. See the
// alternative example below under the Transform section.
util.inherits(SimpleProtocol, Readable);

function SimpleProtocol(source, options) {
  if (!(this instanceof SimpleProtocol))
    return new SimpleProtocol(source, options);

  Readable.call(this, options);
  this._inBody = false;
  this._sawFirstCr = false;
  this._rawHeader = [];
  this.header = null;
  this._source = source;
}
SimpleProtocol.prototype._read = function(n) {
  if (!this._inBody) {
    var chunk = this._source.read();
    // ... header-parsing logic elided in this excerpt ...
    // once the header has been parsed, expose it to consumers
    // and let them know that we are done parsing the header.
    this.emit('header', this.header);
  } else {
    // from there on, just provide the data to our consumer.
    // careful not to push(null), since that would indicate EOF.
    var chunk = this._source.read();
    if (chunk) this.push(chunk);
  }
};
// Usage:
// var parser = new SimpleProtocol(source);
// Now parser is a readable stream that will emit 'header'
// with the parsed header data.
new stream.Readable([options])
options {Object}
highWaterMark {Number} The maximum number of bytes to store in the internal buffer before ceasing to read
from the underlying resource. Default=16kb, or 16 for objectMode streams
encoding {String} If specified, then buffers will be decoded to strings using the specified encoding.
Default=null
objectMode {Boolean} Whether this stream should behave as a stream of objects. Meaning that stream.read(n)
returns a single value instead of a Buffer of size n. Default=false
In classes that extend the Readable class, make sure to call the Readable constructor so that the
buffering settings can be properly initialized.
readable._read(size)
This method is prefixed with an underscore because it is internal to the class that defines it and should
only be called by the internal Readable class methods. All Readable stream implementations must
provide a _read method to fetch data from the underlying resource.
When _read is called, if data is available from the resource, _read should start pushing that data into
the read queue by calling this.push(dataChunk) . _read should continue reading from the resource and
pushing data until push returns false, at which point it should stop reading from the resource. Only
when _read is called again after it has stopped should it start reading more data from the resource and
pushing that data onto the queue.
Note: once the _read() method is called, it will not be called again until the push method is called.
The size argument is advisory. Implementations where a "read" is a single call that returns data can
use this to know how much data to fetch. Implementations where that is not relevant, such as TCP or
TLS, may ignore this argument, and simply provide data whenever it becomes available. There is no
need, for example, to "wait" until size bytes are available before calling stream.push(chunk) .
readable.push(chunk[, encoding])
chunk {Buffer | null | String} Chunk of data to push into the read queue
encoding {String} Encoding of String chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'
Note: This method should be called by Readable implementors, NOT by consumers of Readable
streams.
If a value other than null is passed, the push() method adds a chunk of data into the queue for
subsequent stream processors to consume. If null is passed, it signals the end of the stream (EOF),
after which no more data can be written.
The data added with push can be pulled out by calling the read() method when the 'readable' event
fires.
This API is designed to be as flexible as possible. For example, you may be wrapping a lower-level
source which has some sort of pause/resume mechanism, and a data callback. In those cases, you could
wrap the low-level source object by doing something like this:
// source is an object with readStop() and readStart() methods,
// and an `ondata` member that gets called when it has data, and
// an `onend` member that gets called when the data is over.
util.inherits(SourceWrapper, Readable);

function SourceWrapper(options) {
  Readable.call(this, options);
  this._source = getLowlevelSourceObject();
  var self = this;

  // Every time there's data, push it into the internal buffer;
  // if push() returns false, stop reading from the source.
  this._source.ondata = function(chunk) {
    if (!self.push(chunk))
      self._source.readStop();
  };

  // When the source ends, push the EOF-signaling `null` chunk.
  this._source.onend = function() {
    self.push(null);
  };
}

// _read will be called when the stream wants to pull more data in;
// the advisory size argument is ignored in this case.
SourceWrapper.prototype._read = function(size) {
  this._source.readStart();
};
Class: stream.Writable
Please see above under API for Stream Consumers for how to consume writable streams in your
programs. What follows is an explanation of how to implement Writable streams in your programs.
new stream.Writable([options])
options {Object}
highWaterMark {Number} Buffer level when write() starts returning false. Default=16kb, or 16 for
objectMode streams
decodeStrings {Boolean} Whether or not to decode strings into Buffers before passing them to _write() .
Default=true
objectMode {Boolean} Whether or not the write(anyObj) is a valid operation. If set you can write arbitrary
data instead of only Buffer / String data. Default=false
In classes that extend the Writable class, make sure to call the constructor so that the buffering
settings can be properly initialized.
writable._write(chunk, encoding, callback)
chunk {Buffer | String} The chunk to be written. Will always be a buffer unless the decodeStrings option was
set to false .
encoding {String} If the chunk is a string, then this is the encoding type. Ignore if chunk is a buffer. Note that
chunk will always be a buffer unless the decodeStrings option is explicitly set to false .
callback {Function} Call this function (optionally with an error argument) when you are done processing the
supplied chunk.
All Writable stream implementations must provide a _write() method to send data to the
underlying resource.
Note: This function MUST NOT be called directly. It should be implemented by child classes, and
called by the internal Writable class methods only.
Call the callback using the standard callback(error) pattern to signal that the write completed
successfully or with an error.
If the decodeStrings flag is set in the constructor options, then chunk may be a string rather than a
Buffer, and encoding will indicate the sort of string that it is. This is to support implementations that
have optimized handling for certain string data encodings. If you do not explicitly set the
decodeStrings option to false , then you can safely ignore the encoding argument, and assume that
chunk will always be a Buffer.
writable._writev(chunks, callback)
chunks {Array} The chunks to be written. Each chunk has the following format: { chunk: ..., encoding: ... } .
callback {Function} Call this function (optionally with an error argument) when you are done processing the
supplied chunks.
Note: This function MUST NOT be called directly. It may be implemented by child classes, and called
by the internal Writable class methods only.
Class: stream.Duplex
A "duplex" stream is one that is both Readable and Writable, such as a TCP socket connection.
Since JavaScript doesn't have multiple prototypal inheritance, this class prototypally inherits from
Readable, and then parasitically from Writable. It is thus up to the user to implement both the lowlevel
_read(n) method as well as the lowlevel [ _write(chunk, encoding, callback) ][] method on extension
duplex classes.
new stream.Duplex(options)
options {Object} Passed to both Writable and Readable constructors. Also has the following elds:
allowHalfOpen {Boolean} Default=true. If set to false , then the stream will automatically end the readable
side when the writable side ends and vice versa.
readableObjectMode {Boolean} Default=false. Sets objectMode for readable side of the stream. Has no effect if
objectMode is true .
writableObjectMode {Boolean} Default=false. Sets objectMode for writable side of the stream. Has no effect if
objectMode is true .
In classes that extend the Duplex class, make sure to call the constructor so that the buffering settings
can be properly initialized.
Class: stream.Transform
A "transform" stream is a duplex stream where the output is causally connected in some way to the
input, such as a zlib stream or a crypto stream.
There is no requirement that the output be the same size as the input, the same number of chunks, or
arrive at the same time. For example, a Hash stream will only ever have a single chunk of output which
is provided when the input is ended. A zlib stream will produce output that is either much smaller or
much larger than its input.
Rather than implement the _read() and _write() methods, Transform classes must implement
the _transform() method, and may optionally also implement the _flush() method. (See below.)
new stream.Transform([options])
In classes that extend the Transform class, make sure to call the constructor so that the buffering
settings can be properly initialized.
transform._transform(chunk, encoding, callback)
Note: This function MUST NOT be called directly. It should be implemented by child classes, and
called by the internal Transform class methods only.
All Transform stream implementations must provide a _transform method to accept input and
produce output.
_transform should do whatever has to be done in this specific Transform class, to handle the bytes
being written, and pass them off to the readable portion of the interface. Do asynchronous I/O,
process things, and so on.
Call transform.push(outputChunk) 0 or more times to generate output from this input chunk, depending
on how much data you want to output as a result of this chunk.
Call the callback function only when the current chunk is completely consumed. Note that there may
or may not be output as a result of any particular input chunk. If you supply a data chunk as the second
argument to the callback function it will be passed to the push method. In other words, the following are
equivalent:
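// these two _transform implementations behave identically
transform.prototype._transform = function (data, encoding, callback) {
  this.push(data);
  callback();
};

transform.prototype._transform = function (data, encoding, callback) {
  callback(null, data);
};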
transform._flush(callback)
callback {Function} Call this function (optionally with an error argument) when you are done flushing any
remaining data.
Note: This function MUST NOT be called directly. It MAY be implemented by child classes, and if so,
will be called by the internal Transform class methods only.
In some cases, your transform operation may need to emit a bit more data at the end of the stream.
For example, a Zlib compression stream will store up some internal state so that it can optimally
compress the output. At the end, however, it needs to do the best it can with what is left, so that the
data will be complete.
In those cases, you can implement a _flush method, which will be called at the very end, after all the
written data is consumed, but before emitting end to signal the end of the readable side. Just like with
_transform , call transform.push(chunk) zero or more times, as appropriate, and call callback when the
flush operation is complete.
This method is prefixed with an underscore because it is internal to the class that defines it, and should
not be called directly by user programs. However, you are expected to override this method in your
own extension classes.
The finish and end events are from the parent Writable and Readable classes respectively.
The finish event is fired after .end() is called and all chunks have been processed by _transform ;
end is fired after all data has been output, which is after the callback in _flush has been called.
Example: SimpleProtocol parser v2
The example above of a simple protocol parser can be implemented simply by using the higher level
Transform stream class, similar to the parseHeader and SimpleProtocol v1 examples above.
In this example, rather than providing the input as an argument, it would be piped into the parser,
which is a more idiomatic Node stream approach.
var util = require('util');
var Transform = require('stream').Transform;
util.inherits(SimpleProtocol, Transform);
function SimpleProtocol(options) {
if (!(this instanceof SimpleProtocol))
return new SimpleProtocol(options);
Transform.call(this, options);
this._inBody = false;
this._sawFirstCr = false;
this._rawHeader = [];
this.header = null;
}
Class: stream.PassThrough
This is a trivial implementation of a Transform stream that simply passes the input bytes across to the
output. Its purpose is mainly for examples and testing, but there are occasionally use cases where it
can come in handy as a building block for novel sorts of streams.
Both Writable and Readable streams will buffer data on an internal object called
_writableState.buffer or _readableState.buffer , respectively.
The amount of data that will potentially be buffered depends on the highWaterMark option which is
passed into the constructor.
Buffering in Readable streams happens when the implementation calls stream.push(chunk) . If the
consumer of the Stream does not call stream.read() , then the data will sit in the internal queue until it
is consumed.
Buffering in Writable streams happens when the user calls stream.write(chunk) repeatedly, even
when write() returns false .
The purpose of streams, especially with the pipe() method, is to limit the buffering of data to
acceptable levels, so that sources and destinations of varying speed will not overwhelm the available
memory.
stream.read(0)
There are some cases where you want to trigger a refresh of the underlying readable stream
mechanisms, without actually consuming any data. In that case, you can call stream.read(0) , which will
always return null.
If the internal read buffer is below the highWaterMark , and the stream is not currently reading, then
calling read(0) will trigger a low-level _read call.
There is almost never a need to do this. However, you will see some cases in Node's internals where
this is done, particularly in the Readable stream class internals.
stream.push('')
Pushing a zero-byte string or Buffer (when not in Object mode) has an interesting side effect. Because
it is a call to stream.push() , it will end the reading process. However, it does not add any data to the
readable buffer, so there's nothing for a user to consume.
Very rarely, there are cases where you have no data to provide now, but the consumer of your stream
(or, perhaps, another bit of your own code) will know when to check again, by calling stream.read(0) . In
those cases, you may call stream.push('') .
So far, the only use case for this functionality is in the tls.CryptoStream class, which is deprecated in
Node v0.12. If you find that you have to use stream.push('') , please consider another approach,
because it almost certainly indicates that something is horribly wrong.
In versions of Node prior to v0.10, the Readable stream interface was simpler, but also less powerful
and less useful.
Rather than waiting for you to call the read() method, 'data' events would start emitting immediately. If
you needed to do some I/O to decide how to handle data, then you had to store the chunks in some kind of
buffer so that they would not be lost.
The pause() method was advisory, rather than guaranteed. This meant that you still had to be prepared to
receive 'data' events even when the stream was in a paused state.
In Node v0.10, the Readable class described below was added. For backwards compatibility with older
Node programs, Readable streams switch into "flowing mode" when a 'data' event handler is added,
or when the resume() method is called. The effect is that, even if you are not using the new read()
method and 'readable' event, you no longer have to worry about losing 'data' chunks.
Most programs will continue to function normally. However, this introduces an edge case in the
following conditions:
No 'data' event handler is ever added.
The resume() method is never called.
The stream is not piped to any writable destination.
// WARNING! BROKEN!
net.createServer(function(socket) {
  // the data is never consumed, so 'end' never fires
  socket.on('end', function() {
    socket.end('I got your message (but didnt read it)\n');
  });
}).listen(1337);
In versions of node prior to v0.10, the incoming message data would be simply discarded. However, in
Node v0.10 and beyond, the socket will remain paused forever.
The workaround in this situation is to call the resume() method to start the flow of data:
// Workaround
net.createServer(function(socket) {
  socket.on('end', function() {
    socket.end('I got your message (but didnt read it)\n');
  });
  // start the flow of data, discarding it.
  socket.resume();
}).listen(1337);
In addition to new Readable streams switching into flowing mode, pre-v0.10 style streams can be
wrapped in a Readable class using the wrap() method.
Object Mode
Streams that are in object mode can emit generic JavaScript values other than Buffers and Strings.
A Readable stream in object mode will always return a single item from a call to stream.read(size) ,
regardless of what the size argument is.
A Writable stream in object mode will always ignore the encoding argument to
stream.write(data, encoding) .
The special value null still retains its special value for object mode streams. That is, for object mode
readable streams, null as a return value from stream.read() indicates that there is no more data, and
stream.push(null) will signal the end of stream data ( EOF ).
No streams in Node core are object mode streams. This pattern is only used by userland streaming
libraries.
You should set objectMode in your stream child class constructor on the options object. Setting
objectMode mid-stream is not safe.
For Duplex streams objectMode can be set exclusively for readable or writable side with
readableObjectMode and writableObjectMode respectively. These options can be used to implement
parsers and serializers with Transform streams.
// Gets \n-delimited JSON string data, and emits the parsed objects
util.inherits(JSONParseStream, Transform);

function JSONParseStream() {
  if (!(this instanceof JSONParseStream))
    return new JSONParseStream();
  Transform.call(this, { readableObjectMode : true });
  this._buffer = '';
  this._decoder = new StringDecoder('utf8');
}

// (the _transform implementation is elided in this excerpt)
JSONParseStream.prototype._flush = function(cb) {
// Just handle any leftover
var rem = this._buffer.trim();
if (rem) {
try {
var obj = JSON.parse(rem);
} catch (er) {
this.emit('error', er);
return;
}
// push the parsed object out to the readable consumer
this.push(obj);
}
cb();
};
Events
Stability: 4 - API Frozen
Many objects in Node emit events: a net.Server emits an event each time a peer connects to it, a
fs.readStream emits an event when the file is opened. All objects which emit events are instances of
events.EventEmitter . You can access this module by doing: require("events");
Typically, event names are represented by a camel-cased string, however, there aren't any strict
restrictions on that, as any string will be accepted.
Functions can then be attached to objects, to be executed when an event is emitted. These functions
are called listeners. Inside a listener function, this refers to the EventEmitter that the listener was
attached to.
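A minimal sketch of the pattern:

var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

emitter.on('greet', function(name) {
  console.log('hello, ' + name); // `this` refers to `emitter`
});

emitter.emit('greet', 'world'); // prints: hello, world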
Class: events.EventEmitter
To access the EventEmitter class, require('events').EventEmitter .
When an EventEmitter instance experiences an error, the typical action is to emit an 'error' event.
Error events are treated as a special case in node. If there is no listener for it, then the default action is
to print a stack trace and exit the program.
All EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener' when a listener is removed.
emitter.addListener(event, listener)
emitter.on(event, listener)
Adds a listener to the end of the listeners array for the specified event . No checks are made to see if
the listener has already been added. Multiple calls passing the same combination of event and
listener will result in the listener being added multiple times.
emitter.once(event, listener)
Adds a one time listener for the event. This listener is invoked only the next time the event is fired,
after which it is removed.
emitter.removeListener(event, listener)
Remove a listener from the listener array for the specified event. Caution: changes array indices in the
listener array behind the listener.
emitter.removeAllListeners([event])
Removes all listeners, or those of the specified event. It's not a good idea to remove listeners that were
added elsewhere in the code, especially when it's on an emitter that you didn't create (e.g. sockets or
file streams).
emitter.setMaxListeners(n)
By default EventEmitters will print a warning if more than 10 listeners are added for a particular
event. This is a useful default which helps finding memory leaks. Obviously not all Emitters should be
limited to 10. This function allows that to be increased. Set to zero for unlimited.
EventEmitter.defaultMaxListeners
emitter.setMaxListeners(n) sets the maximum on a per-instance basis. This class property lets you set it
for all EventEmitter instances, current and future, effective immediately. Use with care.
emitter.listeners(event)
Returns an array of listeners for the specified event.
Event: 'newListener'
This event is emitted any time a listener is added. When this event is triggered, the listener may not
yet have been added to the array of listeners for the event .
Event: 'removeListener'
This event is emitted any time someone removes a listener. When this event is triggered, the listener
may not yet have been removed from the array of listeners for the event .
Path
Stability: 3 - Stable
This module contains utilities for handling and transforming file paths. Almost all these methods
perform only string transformations. The file system is not consulted to check whether paths are valid.
Use require('path') to use this module. The following methods are provided:
path.normalize(p)
Normalize a string path, taking care of '..' and '.' parts.
When multiple slashes are found, they're replaced by a single one; when the path contains a trailing
slash, it is preserved. On Windows backslashes are used.
Example:
path.normalize('/foo/bar//baz/asdf/quux/..')
// returns
'/foo/bar/baz/asdf'
path.join([path1][, path2][, ...])
Join all arguments together and normalize the resulting path.
Arguments must be strings. In v0.8, non-string arguments were silently ignored. In v0.10 and up, an
exception is thrown.
Example:
path.join('/foo', 'bar', 'baz/asdf', 'quux', '..')
// returns
'/foo/bar/baz/asdf'
path.resolve([from ...], to)
Resolves to to an absolute path.
If to isn't already absolute, from arguments are prepended in right to left order, until an absolute
path is found. If after using all from paths still no absolute path is found, the current working directory
is used as well. The resulting path is normalized, and trailing slashes are removed unless the path gets
resolved to the root directory. Non-string from arguments are ignored.
Is similar to:
cd foo/bar
cd /tmp/file/
cd ..
cd a/../subfile
pwd
The difference is that the different paths don't need to exist and may also be files.
Examples:
path.resolve('/foo/bar', './baz')
// returns
'/foo/bar/baz'
path.resolve('/foo/bar', '/tmp/file/')
// returns
'/tmp/file'
path.isAbsolute(path)
Determines whether path is an absolute path. An absolute path will always resolve to the same
location, regardless of the working directory.
Posix examples:
path.isAbsolute('/foo/bar') // true
path.isAbsolute('/baz/..') // true
path.isAbsolute('qux/') // false
path.isAbsolute('.') // false
Windows examples:
path.isAbsolute('//server') // true
path.isAbsolute('C:/foo/..') // true
path.isAbsolute('bar\\baz') // false
path.isAbsolute('.') // false
path.relative(from, to)
Solve the relative path from from to to .
At times we have two absolute paths, and we need to derive the relative path from one to the other.
This is actually the reverse transform of path.resolve , which means we see that:
path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb')
// returns
'..\\..\\impl\\bbb'
path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb')
// returns
'../../impl/bbb'
path.dirname(p)
Return the directory name of a path. Similar to the Unix dirname command.
Example:
path.dirname('/foo/bar/baz/asdf/quux')
// returns
'/foo/bar/baz/asdf'
path.basename(p[, ext])
Return the last portion of a path. Similar to the Unix basename command.
Example:
path.basename('/foo/bar/baz/asdf/quux.html')
// returns
'quux.html'
path.basename('/foo/bar/baz/asdf/quux.html', '.html')
// returns
'quux'
path.extname(p)
Return the extension of the path, from the last '.' to end of string in the last portion of the path. If there
is no '.' in the last portion of the path or the first character of it is '.', then it returns an empty string.
Examples:
path.extname('index.html')
// returns
'.html'
path.extname('index.coffee.md')
// returns
'.md'
path.extname('index.')
// returns
'.'
path.extname('index')
// returns
''
path.sep
The platform-specific file separator. '\\' or '/' .
An example on *nix:
'foo/bar/baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']
An example on Windows:
'foo\\bar\\baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']
path.delimiter
The platform-specific path delimiter, ';' or ':' .
An example on *nix:
console.log(process.env.PATH)
// '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'
process.env.PATH.split(path.delimiter)
// returns
['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']
An example on Windows:
console.log(process.env.PATH)
// 'C:\Windows\system32;C:\Windows;C:\Program Files\nodejs\'
process.env.PATH.split(path.delimiter)
// returns
['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\nodejs\\']
path.parse(pathString)
Returns an object from a path string.
An example on *nix:
path.parse('/home/user/dir/file.txt')
// returns
{
root : "/",
dir : "/home/user/dir",
base : "file.txt",
ext : ".txt",
name : "file"
}
An example on Windows:
path.parse('C:\\path\\dir\\index.html')
// returns
{
root : "C:\\",
dir : "C:\\path\\dir",
base : "index.html",
ext : ".html",
name : "index"
}
path.format(pathObject)
Returns a path string from an object, the opposite of path.parse above.
path.format({
root : "/",
dir : "/home/user/dir",
base : "file.txt",
ext : ".txt",
name : "file"
})
// returns
'/home/user/dir/file.txt'
path.posix
Provide access to aforementioned path methods but always interact in a posix compatible way.
path.win32
Provide access to aforementioned path methods but always interact in a win32 compatible way.
Buffer
Stability: 3 - Stable
Pure JavaScript is Unicode friendly but not nice to binary data. When dealing with TCP streams or the
file system, it's necessary to handle octet streams. Node has several strategies for manipulating,
creating, and consuming octet streams.
Raw data is stored in instances of the Buffer class. A Buffer is similar to an array of integers but
corresponds to a raw memory allocation outside the V8 heap. A Buffer cannot be resized.
The Buffer class is a global, making it very rare that one would need to ever require('buffer') .
Converting between Buffers and JavaScript string objects requires an explicit encoding method. Here
are the different string encodings.
'ascii' - for 7 bit ASCII data only. This encoding method is very fast, and will strip the high bit if
set.
'utf8' - Multibyte encoded Unicode characters. Many web pages and other document formats
use UTF-8.
'utf16le' - 2 or 4 bytes, little endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.
Creating a typed array from a Buffer works with the following caveats:
1. The buffer's memory is copied, not shared.
2. The buffer's memory is interpreted as an array, not a byte array. That is,
new Uint32Array(new Buffer([1,2,3,4])) is a 4-element Uint32Array with elements
[1,2,3,4], not a Uint32Array with a single element [0x1020304] or [0x4030201].
NOTE: Node.js v0.8 simply retained a reference to the buffer in array.buffer instead of cloning it.
While more efficient, it introduces subtle incompatibilities with the typed arrays specification.
ArrayBuffer#slice() makes a copy of the slice while Buffer#slice() creates a view.
Class: Buffer
The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety
of ways.
new Buffer(size)
size Number
Allocates a new buffer of size octets. Note, size must be no more than kMaxLength. Otherwise, a
RangeError will be thrown here. Unlike ArrayBuffers , the underlying memory for buffers is not
initialized. So the contents of a newly created Buffer is unknown. Use buf.fill(0) to initialize a buffer
to zeroes.
new Buffer(array)
array Array
Allocates a new buffer using an array of octets.
new Buffer(buffer)
buffer {Buffer}
Copies the passed buffer's data onto a new Buffer instance.
new Buffer(str[, encoding])
str String - string to encode
encoding String - encoding to use, Optional
Allocates a new buffer containing the given str . encoding defaults to 'utf8' .
Class Method: Buffer.isBuffer(obj)
obj Object
Return: Boolean
Tests if obj is a Buffer .
Class Method: Buffer.byteLength(string[, encoding])
string String
encoding String, Optional, Default: 'utf8'
Return: Number
Gives the actual byte length of a string. encoding defaults to 'utf8' . This is not the same as
String.prototype.length since that returns the number of characters in a string.
Example:
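// the missing snippet, reconstructed to match the output below
var str = '\u00bd + \u00bc = \u00be';
console.log(str + ': ' + str.length + ' characters, ' +
  Buffer.byteLength(str, 'utf8') + ' bytes');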
// ½ + ¼ = ¾: 9 characters, 12 bytes
Class Method: Buffer.concat(list[, totalLength])
list {Array} List of Buffer objects to concat
totalLength {Number} Total length of the buffers when concatenated
Returns a buffer which is the result of concatenating all the buffers in the list together.
If the list has no items, or if the totalLength is 0, then it returns a zero-length buffer.
If the list has exactly one item, then the first item of the list is returned.
If the list has more than one item, then a new Buffer is created.
If totalLength is not provided, it is read from the buffers in the list. However, this adds an additional
loop to the function, so it is faster to provide the length explicitly.
Class Method: Buffer.compare(buf1, buf2)
buf1 {Buffer}
buf2 {Buffer}
The same as buf1.compare(buf2) . Useful for sorting an Array of Buffers:
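// sort an array of Buffers in byte order
var arr = [new Buffer('1234'), new Buffer('0123')];
arr.sort(Buffer.compare);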
buf.length
Number
The size of the buffer in bytes. Note that this is not necessarily the size of the contents. length refers
to the amount of memory allocated for the buffer object. It does not change when the contents of the
buffer are changed.
var buf = new Buffer(1234);
console.log(buf.length);
buf.write("some string", 0, "ascii");
console.log(buf.length);
// 1234
// 1234
While the length property is not immutable, changing the value of length can result in undefined and
inconsistent behavior. Applications that wish to modify the length of a buffer should therefore treat
length as read-only and use buf.slice to create a new buffer.
buf.write(string[, offset][, length][, encoding])
Writes string to the buffer at offset using the given encoding . offset defaults to 0 , encoding
defaults to 'utf8' . length is the number of bytes to write. Returns number of octets written. If
buffer did not contain enough space to fit the entire string, it will write a partial amount of the string.
length defaults to buffer.length - offset . The method will not write partial characters.
buf.writeUIntLE(value, offset, byteLength[, noAssert])
buf.writeUIntBE(value, offset, byteLength[, noAssert])
buf.writeIntLE(value, offset, byteLength[, noAssert])
buf.writeIntBE(value, offset, byteLength[, noAssert])
Writes value to the buffer at the specified offset and byteLength . Supports up to 48 bits of accuracy.
For example:
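// the missing snippet, reconstructed from the v0.12 docs
var b = new Buffer(6);
b.writeUIntBE(0x1234567890ab, 0, 6);
// <Buffer 12 34 56 78 90 ab>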
Set noAssert to true to skip validation of value and offset . Defaults to false .
buf.readUIntLE(offset, byteLength[, noAssert])
buf.readUIntBE(offset, byteLength[, noAssert])
buf.readIntLE(offset, byteLength[, noAssert])
buf.readIntBE(offset, byteLength[, noAssert])
A generalized version of all numeric read methods. Supports up to 48 bits of accuracy. For example:
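// reading back the 48-bit value written in the previous example
var b = new Buffer(6);
b.writeUIntBE(0x1234567890ab, 0, 6);
b.readUIntBE(0, 6).toString(16); // '1234567890ab'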
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
buf.toString([encoding][, start][, end])
Decodes and returns a string from buffer data encoded using the specified character set encoding. If
encoding is undefined or null , then encoding defaults to 'utf8' . The start and end parameters
default to 0 and buffer.length respectively.
buf.toJSON()
Returns a JSON-representation of the Buffer instance. JSON.stringify implicitly calls this function
when stringifying a Buffer instance.
Example:
// reconstructed example: round-trip a Buffer through JSON
var buf = new Buffer('test');
var json = JSON.stringify(buf);
console.log(json);
// '{"type":"Buffer","data":[116,101,115,116]}'
var copy = new Buffer(JSON.parse(json).data);
console.log(copy);
// <Buffer 74 65 73 74>
buf[index]
Get and set the octet at index . The values refer to individual bytes, so the legal range is between
0x00 and 0xFF hex or 0 and 255 .
var str = 'node.js';
var buf = new Buffer(str.length);
for (var i = 0; i < str.length; i++) {
  buf[i] = str.charCodeAt(i); // fill the buffer byte by byte
}
console.log(buf);
// node.js
buf.equals(otherBuffer)
otherBuffer {Buffer}
Returns a boolean of whether this and otherBuffer have the same bytes.
buf.compare(otherBuffer)
otherBuffer {Buffer}
Returns a number indicating whether this comes before or after or is the same as the otherBuffer in
sort order.
buf.copy(targetBuffer[, targetStart][, sourceStart][, sourceEnd])
Copies data from a region of this buffer to a region in the target buffer even if the target memory
region overlaps with the source. If undefined the targetStart and sourceStart parameters default to
0 while sourceEnd defaults to buffer.length .
Example: build two Buffers, then copy buf1 from byte 16 through byte 19 into buf2 , starting at the
8th byte in buf2 .
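// reconstructed from the v0.12 docs to match the output below
var buf1 = new Buffer(26);
var buf2 = new Buffer(26);

for (var i = 0; i < 26; i++) {
  buf1[i] = i + 97; // 97 is ASCII 'a'
  buf2[i] = 33;     // ASCII '!'
}

buf1.copy(buf2, 8, 16, 20);
console.log(buf2.toString('ascii', 0, 25));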
// !!!!!!!!qrst!!!!!!!!!!!!!
Example: Build a single buffer, then copy data from one region to an overlapping region in the same
buffer
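// reconstructed setup: an alphabet buffer, as in the docs
var buf = new Buffer(26);
for (var i = 0; i < 26; i++) {
  buf[i] = i + 97; // 97 is ASCII 'a'
}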
buf.copy(buf, 0, 4, 10);
console.log(buf.toString());
// efghijghijklmnopqrstuvwxyz
buf.slice([start][, end])
Returns a new buffer which references the same memory as the old, but offset and cropped by the
start (defaults to 0 ) and end (defaults to buffer.length ) indexes. Negative indexes start from the
end of the buffer.
Modifying the new buffer slice will modify memory in the original buffer!
Example: build a Buffer with the ASCII alphabet, take a slice, then modify one byte from the original
Buffer.
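A reconstruction consistent with the two output lines below:
var buf1 = new Buffer(26);
for (var i = 0; i < 26; i++) buf1[i] = i + 97; // ASCII 'a' through 'z'
var buf2 = buf1.slice(0, 3);
console.log(buf2.toString('ascii', 0, buf2.length));
buf1[0] = 33; // '!'
console.log(buf2.toString('ascii', 0, buf2.length));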
// abc
// !bc
buf.readUInt8(offset[, noAssert])
offset Number
noAssert Boolean, Optional, Default: false
Return: Number
Reads an unsigned 8 bit integer from the buffer at the specified offset.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Example:
var buf = new Buffer(4);
buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;
for (var i = 0; i < buf.length; i++) {
  console.log(buf.readUInt8(i));
}
// 0x3
// 0x4
// 0x23
// 0x42
buf.readUInt16LE(offset[, noAssert])
buf.readUInt16BE(offset[, noAssert])
offset Number
noAssert Boolean, Optional, Default: false
Return: Number
Reads an unsigned 16 bit integer from the buffer at the specified offset with specified endian format.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Example:
var buf = new Buffer(4);
buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;
console.log(buf.readUInt16BE(0));
console.log(buf.readUInt16LE(0));
console.log(buf.readUInt16BE(1));
console.log(buf.readUInt16LE(1));
console.log(buf.readUInt16BE(2));
console.log(buf.readUInt16LE(2));
// 0x0304
// 0x0403
// 0x0423
// 0x2304
// 0x2342
// 0x4223
buf.readUInt32LE(offset[, noAssert])
buf.readUInt32BE(offset[, noAssert])
offset Number
noAssert Boolean, Optional, Default: false
Return: Number
Reads an unsigned 32 bit integer from the buffer at the specified offset with specified endian format.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Example:
var buf = new Buffer(4);
buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;
console.log(buf.readUInt32BE(0));
console.log(buf.readUInt32LE(0));
// 0x03042342
// 0x42230403
buf.readInt8(offset[, noAssert])
offset Number
noAssert Boolean, Optional, Default: false
Return: Number
Reads a signed 8 bit integer from the buffer at the specified offset.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Works as buffer.readUInt8 , except buffer contents are treated as two's complement signed values.
buf.readInt16LE(offset[, noAssert])
buf.readInt16BE(offset[, noAssert])
offset Number
noAssert Boolean, Optional, Default: false
Return: Number
Reads a signed 16 bit integer from the buffer at the specified offset with specified endian format.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Works as buffer.readUInt16* , except buffer contents are treated as two's complement signed values.
buf.readInt32LE(offset[, noAssert])
buf.readInt32BE(offset[, noAssert])
offset Number
noAssert Boolean, Optional, Default: false
Return: Number
Reads a signed 32 bit integer from the buffer at the specified offset with specified endian format.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Works as buffer.readUInt32* , except buffer contents are treated as two's complement signed values.
buf.readFloatLE(offset[, noAssert])
buf.readFloatBE(offset[, noAssert])
offset Number
noAssert Boolean, Optional, Default: false
Return: Number
Reads a 32 bit float from the buffer at the specified offset with specified endian format.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Example:
var buf = new Buffer(4);
buf[0] = 0x00;
buf[1] = 0x00;
buf[2] = 0x80;
buf[3] = 0x3f;
console.log(buf.readFloatLE(0));
// 0x01
buf.readDoubleLE(offset[, noAssert])
buf.readDoubleBE(offset[, noAssert])
offset Number
noAssert Boolean, Optional, Default: false
Return: Number
Reads a 64 bit double from the buffer at the specified offset with specified endian format.
Set noAssert to true to skip validation of offset . This means that offset may be beyond the end of
the buffer. Defaults to false .
Example:
var buf = new Buffer(8);
buf[0] = 0x55;
buf[1] = 0x55;
buf[2] = 0x55;
buf[3] = 0x55;
buf[4] = 0x55;
buf[5] = 0x55;
buf[6] = 0xd5;
buf[7] = 0x3f;
console.log(buf.readDoubleLE(0));
// 0.3333333333333333
buf.writeUInt8(value, offset[, noAssert])
value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset. Note, value must be a valid unsigned 8 bit integer.
Set noAssert to true to skip validation of value and offset . This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false .
Example:
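Setup consistent with the output below (a sketch; the original example lines were lost):
var buf = new Buffer(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);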
console.log(buf);
// <Buffer 03 04 23 42>
buf.writeUInt16LE(value, offset[, noAssert])
buf.writeUInt16BE(value, offset[, noAssert])
value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid unsigned 16 bit integer.
Set noAssert to true to skip validation of value and offset . This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false .
Example:
var buf = new Buffer(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);
console.log(buf);
buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);
console.log(buf);
// <Buffer de ad be ef>
// <Buffer ad de ef be>
buf.writeUInt32LE(value, offset[, noAssert])
buf.writeUInt32BE(value, offset[, noAssert])
value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid unsigned 32 bit integer.
Set noAssert to true to skip validation of value and offset . This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false .
Example:
var buf = new Buffer(4);
buf.writeUInt32BE(0xfeedface, 0);
console.log(buf);
buf.writeUInt32LE(0xfeedface, 0);
console.log(buf);
// <Buffer fe ed fa ce>
// <Buffer ce fa ed fe>
buf.writeInt8(value, offset[, noAssert])
value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset. Note, value must be a valid signed 8 bit integer.
Set noAssert to true to skip validation of value and offset . This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false .
Works as buffer.writeUInt8 , except value is written out as a two's complement signed integer into
buffer .
buf.writeInt16LE(value, offset[, noAssert])
buf.writeInt16BE(value, offset[, noAssert])
value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid signed 16 bit integer.
Set noAssert to true to skip validation of value and offset . This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false .
Works as buffer.writeUInt16* , except value is written out as a two's complement signed integer into
buffer .
buf.writeInt32LE(value, offset[, noAssert])
buf.writeInt32BE(value, offset[, noAssert])
value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid signed 32 bit integer.
Set noAssert to true to skip validation of value and offset . This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false .
Works as buffer.writeUInt32* , except value is written out as a two's complement signed integer into
buffer .
buf.writeFloatLE(value, offset[, noAssert])
buf.writeFloatBE(value, offset[, noAssert])
value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset with specified endian format. Note, behavior is
unspecified if value is not a 32 bit float.
Set noAssert to true to skip validation of value and offset . This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false .
Example:
var buf = new Buffer(4);
buf.writeFloatBE(0xcafebabe, 0);
console.log(buf);
buf.writeFloatLE(0xcafebabe, 0);
console.log(buf);
// <Buffer 4f 4a fe bb>
// <Buffer bb fe 4a 4f>
buf.writeDoubleLE(value, offset[, noAssert])
buf.writeDoubleBE(value, offset[, noAssert])
value Number
offset Number
noAssert Boolean, Optional, Default: false
Writes value to the buffer at the specified offset with specified endian format. Note, value must be a
valid 64 bit double.
Set noAssert to true to skip validation of value and offset . This means that value may be too large
for the specific function and offset may be beyond the end of the buffer leading to the values being
silently dropped. This should not be used unless you are certain of correctness. Defaults to false .
Example:
var buf = new Buffer(8);
buf.writeDoubleBE(0xdeadbeefcafebabe, 0);
console.log(buf);
buf.writeDoubleLE(0xdeadbeefcafebabe, 0);
console.log(buf);
// <Buffer 43 eb d5 b7 dd f9 5f d7>
// <Buffer d7 5f f9 dd b7 d5 eb 43>
buf.fill(value[, offset][, end])
value
Fills the buffer with the specified value. If the offset (defaults to 0 ) and end (defaults to
buffer.length ) are not given it will fill the entire buffer.
buffer.INSPECT_MAX_BYTES
Number, Default: 50
How many bytes will be returned when buffer.inspect() is called. This can be overridden by user
modules.
Note that this is a property on the buffer module returned by require('buffer') , not on the Buffer
global, or a buffer instance.
Class: SlowBuffer
Returns an un-pooled Buffer .
In order to avoid the garbage collection overhead of creating many individually allocated Buffers, by
default allocations under 4KB are sliced from a single larger allocated object. This approach improves
both performance and memory usage since v8 does not need to track and cleanup as many Persistent
objects.
In the case where a developer may need to retain a small chunk of memory from a pool for an
indeterminate amount of time it may be appropriate to create an un-pooled Buffer instance using
SlowBuffer and copy out the relevant bits.
// need to keep around a few small chunks of memory
var store = [];
socket.on('readable', function() {
var data = socket.read();
// allocate for retained data
var sb = new SlowBuffer(10);
// copy the data into the new allocation
data.copy(sb, 0, 0, 10);
store.push(sb);
});
Though this should be used sparingly and only as a last resort after a developer has actively observed
undue memory retention in their applications.
File System
Stability: 3 - Stable
File I/O is provided by simple wrappers around standard POSIX functions. To use this module do
require('fs') . All the methods have asynchronous and synchronous forms.
The asynchronous form always takes a completion callback as its last argument. The arguments passed
to the completion callback depend on the method, but the first argument is always reserved for an
exception. If the operation was completed successfully, then the first argument will be null or
undefined .
When using the synchronous form any exceptions are immediately thrown. You can use try/catch to
handle exceptions or allow them to bubble up.
Here is an example of the asynchronous version:
var fs = require('fs');
fs.unlink('/tmp/hello', function (err) {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});
Here is the synchronous version:
var fs = require('fs');
fs.unlinkSync('/tmp/hello');
console.log('successfully deleted /tmp/hello');
With the asynchronous methods there is no guaranteed ordering. So the following is prone to error:
fs.rename('/tmp/hello', '/tmp/world', function (err) {
if (err) throw err;
console.log('renamed complete');
});
fs.stat('/tmp/world', function (err, stats) {
if (err) throw err;
console.log('stats: ' + JSON.stringify(stats));
});
It could be that fs.stat is executed before fs.rename . The correct way to do this is to chain the
callbacks.
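The chained version nests the stat call inside the rename callback:
fs.rename('/tmp/hello', '/tmp/world', function (err) {
  if (err) throw err;
  fs.stat('/tmp/world', function (err, stats) {
    if (err) throw err;
    console.log('stats: ' + JSON.stringify(stats));
  });
});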
In busy processes, the programmer is strongly encouraged to use the asynchronous versions of these
calls. The synchronous versions will block the entire process until they complete--halting all
connections.
Relative path to filename can be used, remember however that this path will be relative to
process.cwd() .
Most fs functions let you omit the callback argument. If you do, a default callback is used that
rethrows errors. To get a trace to the original call site, set the NODE_DEBUG environment variable:
$ cat script.js
function bad() {
require('fs').readFile('/');
}
bad();
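Running the script with NODE_DEBUG set to fs then prints a stack trace that points back at the
original call site:
$ env NODE_DEBUG=fs node script.js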
fs.renameSync(oldPath, newPath)
Synchronous rename(2). Returns undefined .
fs.ftruncateSync(fd, len)
Synchronous ftruncate(2). Returns undefined .
fs.truncateSync(path, len)
Synchronous truncate(2). Returns undefined .
fs.chmodSync(path, mode)
Synchronous chmod(2). Returns undefined .
fs.fchmodSync(fd, mode)
Synchronous fchmod(2). Returns undefined .
fs.lchmodSync(path, mode)
Synchronous lchmod(2). Returns undefined .
fs.stat(path, callback)
Asynchronous stat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object.
See the fs.Stats section below for more information.
fs.lstat(path, callback)
Asynchronous lstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats
object. lstat() is identical to stat() , except that if path is a symbolic link, then the link itself is stat-
ed, not the file that it refers to.
fs.fstat(fd, callback)
Asynchronous fstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats
object. fstat() is identical to stat() , except that the file to be stat-ed is specified by the file
descriptor fd .
fs.statSync(path)
Synchronous stat(2). Returns an instance of fs.Stats .
fs.lstatSync(path)
Synchronous lstat(2). Returns an instance of fs.Stats .
fs.fstatSync(fd)
Synchronous fstat(2). Returns an instance of fs.Stats .
fs.readlink(path, callback)
Asynchronous readlink(2). The callback gets two arguments (err, linkString) .
fs.readlinkSync(path)
Synchronous readlink(2). Returns the symbolic link's string value.
fs.realpath(path[, cache], callback)
Asynchronous realpath(2). The callback gets two arguments (err,
resolvedPath) . May use process.cwd to resolve relative paths. cache is an object literal of mapped
paths that can be used to force a specific path resolution or avoid additional fs.stat calls for known
real paths.
Example:
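var cache = {'/etc': '/private/etc'};
fs.realpath('/etc/passwd', cache, function (err, resolvedPath) {
  if (err) throw err;
  console.log(resolvedPath);
});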
fs.realpathSync(path[, cache])
Synchronous realpath(2). Returns the resolved path.
fs.unlink(path, callback)
Asynchronous unlink(2). No arguments other than a possible exception are given to the completion
callback.
fs.unlinkSync(path)
Synchronous unlink(2). Returns undefined .
fs.rmdir(path, callback)
Asynchronous rmdir(2). No arguments other than a possible exception are given to the completion
callback.
fs.rmdirSync(path)
Synchronous rmdir(2). Returns undefined .
fs.readdir(path, callback)
Asynchronous readdir(3). Reads the contents of a directory. The callback gets two arguments
(err, files) where files is an array of the names of the files in the directory excluding '.' and
'..' .
fs.readdirSync(path)
Synchronous readdir(3). Returns an array of filenames excluding '.' and '..' .
fs.close(fd, callback)
Asynchronous close(2). No arguments other than a possible exception are given to the completion
callback.
fs.closeSync(fd)
Synchronous close(2). Returns undefined .
fs.open(path, flags[, mode], callback)
Asynchronous file open. See open(2). flags can be:
'r' - Open file for reading. An exception occurs if the file does not exist.
'r+' - Open file for reading and writing. An exception occurs if the file does not exist.
'rs' - Open file for reading in synchronous mode. Instructs the operating system to bypass the
local file system cache.
This is primarily useful for opening files on NFS mounts as it allows you to skip the potentially stale
local cache. It has a very real impact on I/O performance so don't use this flag unless you need it.
Note that this doesn't turn fs.open() into a synchronous blocking call. If that's what you want then
you should be using fs.openSync()
'rs+' - Open file for reading and writing, telling the OS to open it synchronously. See notes for
'rs' about using this with caution.
'w' - Open file for writing. The file is created (if it does not exist) or truncated (if it exists).
'wx' - Like 'w' but fails if path exists.
'w+' - Open file for reading and writing. The file is created (if it does not exist) or truncated (if it
exists).
'wx+' - Like 'w+' but fails if path exists.
'a' - Open file for appending. The file is created if it does not exist.
'ax' - Like 'a' but fails if path exists.
'a+' - Open file for reading and appending. The file is created if it does not exist.
'ax+' - Like 'a+' but fails if path exists.
mode sets the file mode (permission and sticky bits), but only if the file was created. It defaults to
0666 , readable and writeable.
The exclusive flag 'x' ( O_EXCL flag in open(2)) ensures that path is newly created. On POSIX
systems, path is considered to exist even if it is a symlink to a non-existent file. The exclusive flag may
or may not work with network file systems.
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the
position argument and always appends the data to the end of the file.
fs.openSync(path, flags[, mode])
Synchronous version of fs.open() . Returns an integer representing the file descriptor.
fs.fsync(fd, callback)
Asynchronous fsync(2). No arguments other than a possible exception are given to the completion
callback.
fs.fsyncSync(fd)
Synchronous fsync(2). Returns undefined .
fs.write(fd, buffer, offset, length[, position], callback)
Write buffer to the file specified by fd .
position refers to the offset from the beginning of the file where this data should be written. If
typeof position !== 'number' , the data will be written at the current position. See pwrite(2).
The callback will be given three arguments (err, written, buffer) where written specifies how many
bytes were written from buffer .
Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback.
For this scenario, fs.createWriteStream is strongly recommended.
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the
position argument and always appends the data to the end of the file.
fs.write(fd, data[, position[, encoding]], callback)
Write data to the file specified by fd . If data is not a Buffer instance then the value will be coerced
to a string.
position refers to the offset from the beginning of the file where this data should be written. If
typeof position !== 'number' the data will be written at the current position. See pwrite(2).
The callback will receive the arguments (err, written, string) where written specifies how many
bytes the passed string required to be written. Note that bytes written is not the same as string
characters. See Buffer.byteLength.
Unlike when writing buffer , the entire string must be written. No substring may be specified. This is
because the byte offset of the resulting data may not be the same as the string offset.
Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback.
For this scenario, fs.createWriteStream is strongly recommended.
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the
position argument and always appends the data to the end of the file.
fs.read(fd, buffer, offset, length, position, callback)
Read data from the file specified by fd .
position is an integer specifying where to begin reading from in the file. If position is null , data will
be read from the current file position.
The callback is given the three arguments, (err, bytesRead, buffer) .
fs.readFile(filename[, options], callback)
callback {Function}
The callback is passed two arguments (err, data) , where data is the contents of the file.
If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.
fs.writeFile(filename, data[, options], callback)
Asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a
buffer.
Example:
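fs.writeFile('message.txt', 'Hello Node', function (err) {
  if (err) throw err;
  console.log('It\'s saved!');
});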
fs.appendFile(filename, data[, options], callback)
callback {Function}
Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a
buffer.
Example:
fs.appendFile('message.txt', 'data to append', function (err) {
if (err) throw err;
console.log('The "data to append" was appended to file!');
});
fs.watchFile(filename[, options], listener)
Watch for changes on filename . The callback listener will be called each time the file is accessed.
The second argument is optional. The options if provided should be an object containing two
members a boolean, persistent , and interval . persistent indicates whether the process should
continue to run as long as files are being watched. interval indicates how often the target should be
polled, in milliseconds. The default is { persistent: true, interval: 5007 } .
The listener gets two arguments, the current stat object and the previous stat object:
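fs.watchFile('message.text', function (curr, prev) {
  console.log('the current mtime is: ' + curr.mtime);
  console.log('the previous mtime was: ' + prev.mtime);
});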
If you want to be notified when the file was modified, not just accessed, you need to compare
curr.mtime and prev.mtime .
fs.unwatchFile(filename[, listener])
Stop watching for changes on filename . If listener is specified, only that particular listener is
removed. Otherwise, all listeners are removed and you have effectively stopped watching filename .
Calling fs.unwatchFile() with a filename that is not being watched is a no-op, not an error.
fs.watch(filename[, options][, listener])
Watch for changes on filename , where filename is either a file or a directory. The returned object is a
fs.FSWatcher.
The second argument is optional. The options if provided should be an object. The supported boolean
members are persistent and recursive . persistent indicates whether the process should continue to
run as long as files are being watched. recursive indicates whether all subdirectories should be
watched, or only the current directory. This applies when a directory is specified, and only on
supported platforms (See Caveats below).
The listener callback gets two arguments (event, filename) . event is either 'rename' or 'change', and
filename is the name of the file which triggered the event.
Caveats
The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.
The recursive option is currently supported on OS X. Only FSEvents supports this type of file watching
so it is unlikely any additional platforms will be added soon.
Availability
This feature depends on the underlying operating system providing a way to be notified of filesystem
changes.
If the underlying functionality is not available for some reason, then fs.watch will not be able to
function. For example, watching files or directories on network file systems (NFS, SMB, etc.) often
doesn't work reliably or at all.
You can still use fs.watchFile , which uses stat polling, but it is slower and less reliable.
Filename Argument
Providing the filename argument in the callback is not supported on every platform (currently it's only
supported on Linux and Windows). Even on supported platforms, filename is not always guaranteed to
be provided. Therefore, don't assume that the filename argument is always provided in the callback, and
have some fallback logic if it is null.
fs.exists(path, callback)
Test whether or not the given path exists by checking with the file system. Then call the callback
argument with either true or false.
fs.exists() is an anachronism and exists only for historical reasons. There should almost never be a
reason to use it in your own code.
In particular, checking if a file exists before opening it is an anti-pattern that leaves you vulnerable to
race conditions: another process may remove the file between the calls to fs.exists() and fs.open() .
Just open the file and handle the error when it's not there.
fs.existsSync(path)
Synchronous version of fs.exists() . Returns true if the file exists, false otherwise.
fs.access(path[, mode], callback)
Tests a user's permissions for the file specified by path . mode is an optional integer that specifies
the accessibility checks to be performed. The following constants define the possible values of mode :
fs.F_OK - File is visible to the calling process. This is useful for determining if a file exists, but says nothing
about rwx permissions. Default if no mode is specified.
fs.R_OK - File can be read by the calling process.
fs.W_OK - File can be written by the calling process.
fs.X_OK - File can be executed by the calling process. This has no effect on Windows (will behave like
fs.F_OK ).
The final argument, callback , is a callback function that is invoked with a possible error argument. If
any of the accessibility checks fail, the error argument will be populated. The following example
checks if the file /etc/passwd can be read and written by the current process.
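A sketch of that check (the docs' example, modulo the logging helper):
fs.access('/etc/passwd', fs.R_OK | fs.W_OK, function (err) {
  console.log(err ? 'no access!' : 'can read/write');
});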
fs.accessSync(path[, mode])
Synchronous version of fs.access . This throws if any accessibility checks fail, and does nothing
otherwise.
Class: fs.Stats
Objects returned from fs.stat() , fs.lstat() and fs.fstat() and their synchronous counterparts are
of this type.
stats.isFile()
stats.isDirectory()
stats.isBlockDevice()
stats.isCharacterDevice()
stats.isSocket()
Please note that atime , mtime , birthtime , and ctime are instances of the Date object, and to compare
the values of these objects you should use appropriate methods. For most general uses getTime() will
return the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC, and this integer should
be sufficient for any comparison; however there are additional methods which can be used for
displaying fuzzy information. More details can be found in the MDN JavaScript Reference page.
atime "Access Time" - Time when le data last accessed. Changed by the mknod(2) , utimes(2) , and read(2)
system calls.
mtime "Modi ed Time" - Time when le data last modi ed. Changed by the mknod(2) , utimes(2) , and
write(2) system calls.
ctime "Change Time" - Time when le status was last changed (inode data modi cation). Changed by the
chmod(2) , chown(2) , link(2) , mknod(2) , rename(2) , unlink(2) , utimes(2) , read(2) , and write(2) system
calls.
birthtime "Birth Time" - Time of le creation. Set once when the le is created. On lesystems where
birthtime is not available, this eld may instead hold either the ctime or 1970-01-01T00:00Z (ie, unix epoch
timestamp 0 ). On Darwin and other FreeBSD variants, also set if the atime is explicitly set to an earlier
value than the current birthtime using the utimes(2) system call.
Prior to Node v0.12, the ctime held the birthtime on Windows systems. Note that as of v0.12, ctime
is not "creation time", and on Unix systems, it never was.
fs.createReadStream(path[, options])
Returns a new ReadStream object (See Readable Stream ).
Be aware that, unlike the default value set for highWaterMark on a readable stream (16kB), the stream
returned by this method has a default value of 64kB for the same parameter.
{ flags: 'r',
encoding: null,
fd: null,
mode: 0666,
autoClose: true
}
options can include start and end values to read a range of bytes from the file instead of the entire
file. Both start and end are inclusive and start at 0. The encoding can be 'utf8' , 'ascii' , or
'base64' .
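For instance, to read the last 10 bytes of a file which is 100 bytes long:
fs.createReadStream('sample.txt', { start: 90, end: 99 });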
If fd is specified, ReadStream will ignore the path argument and will use the specified file descriptor.
This means that no open event will be emitted.
If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is your
responsibility to close it and make sure there's no file descriptor leak. If autoClose is set to true
(default behavior), on error or end the file descriptor will be closed automatically.
mode sets the file mode (permission and sticky bits), but only if the file was created.
Class: fs.ReadStream
ReadStream is a Readable Stream.
Event: 'open'
fs.createWriteStream(path[, options])
Returns a new WriteStream object (See Writable Stream ).
{ flags: 'w',
defaultEncoding: 'utf8',
fd: null,
mode: 0666 }
options may also include a start option to allow writing data at some position past the beginning of
the le. Modifying a le rather than replacing it may require a flags mode of r+ rather than the
default mode w .
Like ReadStream above, if fd is specified, WriteStream will ignore the path argument and will use the
specified file descriptor. This means that no open event will be emitted.
Class: fs.WriteStream
WriteStream is a Writable Stream.
Event: 'open'
file.bytesWritten
The number of bytes written so far. Does not include data that is still queued for writing.
Class: fs.FSWatcher
Objects returned from fs.watch() are of this type.
watcher.close()
Event: 'change'
Emitted when something changes in a watched directory or le. See more details in fs.watch.
Event: 'error'
Path
Stability: 3 - Stable
This module contains utilities for handling and transforming file paths. Almost all these methods
perform only string transformations. The file system is not consulted to check whether paths are valid.
Use require('path') to use this module. The following methods are provided:
path.normalize(p)
Normalize a string path, taking care of '..' and '.' parts.
When multiple slashes are found, they're replaced by a single one; when the path contains a trailing
slash, it is preserved. On Windows backslashes are used.
Example:
path.normalize('/foo/bar//baz/asdf/quux/..')
// returns
'/foo/bar/baz/asdf'
path.join([path1][, path2][, ...])
Join all arguments together and normalize the resulting path.
Arguments must be strings. In v0.8, non-string arguments were silently ignored. In v0.10 and up, an
exception is thrown.
Example:
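path.join('/foo', 'bar', 'baz/asdf', 'quux', '..')
// returns
'/foo/bar/baz/asdf'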
path.resolve([from ...], to)
Resolves to to an absolute path.
If to isn't already absolute, from arguments are prepended in right to left order, until an absolute
path is found. If after using all from paths still no absolute path is found, the current working directory
is used as well. The resulting path is normalized, and trailing slashes are removed unless the path gets
resolved to the root directory. Non-string from arguments are ignored.
Is similar to:
cd foo/bar
cd /tmp/file/
cd ..
cd a/../subfile
pwd
The difference is that the different paths don't need to exist and may also be files.
Examples:
path.resolve('/foo/bar', './baz')
// returns
'/foo/bar/baz'
path.resolve('/foo/bar', '/tmp/file/')
// returns
'/tmp/file'
path.isAbsolute(path)
Determines whether path is an absolute path. An absolute path will always resolve to the same
location, regardless of the working directory.
Posix examples:
path.isAbsolute('/foo/bar') // true
path.isAbsolute('/baz/..') // true
path.isAbsolute('qux/') // false
path.isAbsolute('.') // false
Windows examples:
path.isAbsolute('//server') // true
path.isAbsolute('C:/foo/..') // true
path.isAbsolute('bar\\baz') // false
path.isAbsolute('.') // false
path.relative(from, to)
Solve the relative path from from to to .
At times we have two absolute paths, and we need to derive the relative path from one to the other.
This is actually the reverse transform of path.resolve , which means we see that:
path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb')
// returns
'..\\..\\impl\\bbb'
path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb')
// returns
'../../impl/bbb'
path.dirname(p)
Return the directory name of a path. Similar to the Unix dirname command.
Example:
path.dirname('/foo/bar/baz/asdf/quux')
// returns
'/foo/bar/baz/asdf'
path.basename(p[, ext])
Return the last portion of a path. Similar to the Unix basename command.
Example:
path.basename('/foo/bar/baz/asdf/quux.html')
// returns
'quux.html'
path.basename('/foo/bar/baz/asdf/quux.html', '.html')
// returns
'quux'
path.extname(p)
Return the extension of the path, from the last '.' to end of string in the last portion of the path. If there
is no '.' in the last portion of the path or the first character of it is '.', then it returns an empty string.
Examples:
path.extname('index.html')
// returns
'.html'
path.extname('index.coffee.md')
// returns
'.md'
path.extname('index.')
// returns
'.'
path.extname('index')
// returns
''
path.sep
The platform-specific file separator. '\\' or '/' .
An example on *nix:
'foo/bar/baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']
An example on Windows:
'foo\\bar\\baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']
path.delimiter
The platform-specific path delimiter, ';' or ':' .
An example on *nix:
console.log(process.env.PATH)
// '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'
process.env.PATH.split(path.delimiter)
// returns
['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']
An example on Windows:
console.log(process.env.PATH)
// 'C:\Windows\system32;C:\Windows;C:\Program Files\nodejs\'
process.env.PATH.split(path.delimiter)
// returns
['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\nodejs\\']
path.parse(pathString)
Returns an object from a path string.
An example on *nix:
path.parse('/home/user/dir/file.txt')
// returns
{
root : "/",
dir : "/home/user/dir",
base : "file.txt",
ext : ".txt",
name : "file"
}
An example on Windows:
path.parse('C:\\path\\dir\\index.html')
// returns
{
root : "C:\\",
dir : "C:\\path\\dir",
base : "index.html",
ext : ".html",
name : "index"
}
path.format(pathObject)
Returns a path string from an object, the opposite of path.parse above.
path.format({
root : "/",
dir : "/home/user/dir",
base : "file.txt",
ext : ".txt",
name : "file"
})
// returns
'/home/user/dir/file.txt'
path.posix
Provide access to aforementioned path methods but always interact in a posix compatible way.
path.win32
Provide access to aforementioned path methods but always interact in a win32 compatible way.
file-uri-to-path
Convert a file: URI to a file path
Accepts a file: URI and returns a regular file path suitable for use with the fs module functions.
Installation
Install with npm :
$ npm install file-uri-to-path
Example
var uri2path = require('file-uri-to-path');
uri2path('file://localhost/c|/WINDOWS/clock.avi');
// "c:\\WINDOWS\\clock.avi"
uri2path('file:///c|/WINDOWS/clock.avi');
// "c:\\WINDOWS\\clock.avi"
uri2path('file://localhost/c:/WINDOWS/clock.avi');
// "c:\\WINDOWS\\clock.avi"
uri2path('file://hostname/path/to/the%20file.txt');
// "\\\\hostname\\path\\to\\the file.txt"
uri2path('file:///c:/path/to/the%20file.txt');
// "c:\\path\\to\\the file.txt"
API
fileUriToPath(String uri) → String
License
(The MIT License)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation
files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.
branneman / better-nodejs-require-paths.md
Last active Apr 2, 2021
better-nodejs-require-paths.md
Problem
When the directory structure of your Node.js application (not library!) has some depth, you end up with a lot of annoying
relative paths in your require calls like:
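Something like this (a representative example; the gist's exact line was lost in extraction):
var Article = require('../../../models/article');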
Possible solutions
Ideally, I'd like to have the same basepath from which I require() all my modules. Like any other language environment out
there. I'd like the require() calls to be first-and-foremost relative to my application entry point file, in my case app.js .
There are only solutions here that work cross-platform, because 42% of Node.js users use Windows as their desktop
environment (source).
0. The Alias
1. Install the module-alias package:
npm i --save module-alias
2. Add your custom aliases to your package.json:
{
"_moduleAliases": {
"@lib": "app/lib",
"@models": "app/models"
}
}
3. In your entry-point file, before any require() calls:
require('module-alias/register')
You can now require files via the aliases, e.g. require('@models/article') .
1. The Container
1. Learn all about Dependency Injection and Inversion of Control containers. Example implementation using Electrolyte
here: github/branneman/nodejs-app-boilerplate
module.exports = factory;
module.exports['@require'] = [
'lib/read',
'lib/render-view'
];
function factory(read, render) { /* ... */ }
2. The Symlink
Stolen from: focusaurus / express_code_structure # the-app-symlink-trick
2. Now you can require local modules like this from anywhere:
Alternatively, you can create the symlink on the npm postinstall hook, as described by scharf in this awesome comment.
Put this inside your package.json :
"scripts": {
"postinstall" : "node -e \"var s='../src',d='node_modules/src',fs=require('fs');fs.exists(d,function(e){e||fs.sy
}
3. The Global
1. In your entry-point file, before any require() calls:
2. In your very/far/away/module.js:
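A sketch of both steps, consistent with the __base convention referenced elsewhere in this thread
(the gist's exact snippets were lost):
// 1. In your entry-point file, before any require() calls:
global.__base = __dirname + '/';

// 2. In your very/far/away/module.js:
var Article = require(__base + 'app/models/article');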
4. The Module
1. Install some module:
3. In your very/far/away/module.js:
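Using app-module-path (the module named later in this thread) as the example, the steps look
roughly like this:
// 1. npm install app-module-path --save

// 2. In your entry-point file, before any require() calls:
require('app-module-path').addPath(__dirname + '/app');

// 3. In your very/far/away/module.js:
var Article = require('models/article');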
5. The Environment
Set the NODE_PATH environment variable to the absolute path of your application, ending with the directory you want your
modules relative to (in my case . ).
There are 2 ways of achieving the following require() statement from anywhere in your application:
5.1. Up-front
Setting a variable like this with export or set will remain in your environment as long as your current shell is open. To have
it globally available in any shell, set it in your user profile and reload your environment.
5.2. Only while executing node
This solution will not affect your environment other than what node perceives. It does change your application start
command.
Start your application like this from now on:
Linux: NODE_PATH=. node app
Windows: cmd.exe /C "set NODE_PATH=.&& node app"
(On Windows this command will not work if you put a space in between the path and the && . Crazy shit.)
6. The Start-up Script
With one of these solutions (6.1 & 6.2) you can start your application like this from now on:
Linux: ./app (also for Windows PowerShell)
Windows: app
An advantage of this solution is that if you want to force your node app to always be started with v8 parameters like --
harmony or --use_strict , you can easily add them in the start-up script as well.
6.1. Node.js
#!/bin/sh
NODE_PATH=. node app.js
6.2. Windows
@echo off
cmd.exe /C "set NODE_PATH=.&& node app.js"
7. The Hack
Courtesy of @joelabair. Effectively also the same as 5.2, but without the need to specify the NODE_PATH outside your
application, making it more foolproof. However, since this relies on a private Node.js core method, this is also a hack that
might stop working on the previous or next version of node.
process.env.NODE_PATH = __dirname;
require('module').Module._initPaths();
8. The Wrapper
Courtesy of @a-ignatov-parc. Another simple solution which increases obviousness, simply wrap the require() function
with one relative to the path of the application's entry point file.
Place this code in your app.js , again before any require() calls:
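A sketch of such a wrapper, mirroring the app_require helper shown later in the comments:
global.rootRequire = function (name) {
  // resolve the module relative to the application's entry-point directory
  return require(__dirname + '/' + name);
};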
Another option is to always use the initial require() function, basically the same trick without a wrapper. Node.js creates a
new scoped require() function for every new module, but there's always a reference to the initial global one. Unlike most
other solutions this is actually a documented feature. It can be used like this:
requireUtil('./some-tool')
Conclusion
0. The Alias
Great solution, and a well maintained and popular package on npm. The @ -syntax also looks like something special is going
on, which will tip off the next developer as to what's going on. You might need extra steps for this solution to work with linting and
unit testing though.
1. The Container
If you're building a slightly bigger application, using an IoC container is a great way to apply DI. I would only advise this for
the apps relying heavily on object-oriented design principles and design patterns.
2. The Symlink
If you're using CVS or SVN (but not Git!), this solution is a great one which works; otherwise I don't recommend it to
anyone. You're going to have OS differences one way or another.
3. The Global
You're effectively swapping ../../../ for __base + , which is only slightly better if you ask me. However it's very obvious for
the next developer what's exactly happening. That's a big plus compared to the other magical solutions around here.
4. The Module
Great and simple solution. Does not touch other require calls to node_modules .
5. The Environment
Setting application-specific settings as environment variables globally or in your current shell is an anti-pattern if you ask me.
E.g. it's not very handy for development machines which need to run multiple applications.
If you're adding it only for the currently executing program, you're going to have to specify it each time you run your app.
Your start-app command is not easy anymore, which also sucks.
7. The Hack
Most simple solution of all. Use at your own risk.
8. The Wrapper
Great and non-hacky solution. Very obvious what it does, especially if you pick the require.main.require() one.
Just set up your stuff as modules, and put them in the node_modules folder, and then they're top-level things. Problem solved.
It also lets you drop in node modules if you need to "fork" them and don't yet have a private registry. Lots of nesting in an app ends up sucking more
often than not, and I'd argue that ../ in any module is usually an anti-pattern, maybe other than var pkg = require('../package') for bin .version
etc
branneman commented Dec 20, 2013 Owner Author
@isaacs; yes I know that's an option, but the node_modules folder currently is a nice clean place for only the external modules we use. All the
application-specific modules are not generic enough to be put inside node_modules. Like all kinds of Controllers, Models and stuff. I don't think the
node_modules folder is intended for that, is it?
yeah, whenever i see '../../../dir/name' i immediately think that someone has either 1) prematurely broken out their app into a million directories and
files or 2) hasn't modularized these components into modules yet, and they should.
If it has application logic, it's not in node_modules. If a lot of things call it or depend on it, it shouldn't have application logic in it, it should be a
node_module.
This helps us keep things clean and lets us write things for ourselves, make sure they work, then publish them and hopefully see others getting use
from them and contributing.
I should note that NODE_PATH can be confusing too if you're not familiar with the app; it's not always clear where a module is coming from unless
it's named in an obvious way. We prefix ours with s- so it's obvious, but they now live in a private registry.
I hear mostly: if you have this problem: you have a bad architecture or bad application design. I also hear: maybe it's time for a private npm
repository?
As an example, most modules in one of my applications depend on a config file; still, I cannot remove application logic from that, and I'm already
using a proper (external) module to handle common config logic. But the data itself needs to be either loaded a lot or passed around a lot.
Would it then be a best practice to save that config object once per request to the req variable in express.js? I doubt that, because I'm touching
objects I don't own. What is the way to do that kind of thing?
One of the other things I tried with an old version is require.paths , but that's removed now. That was actually the most elegant solution in my
opinion. At least everything would stay inside the app; it's the developer's responsibility to use it wisely.
I used to use the symlink method, but it's too much trouble on windows so I don't use it anymore.
In most my projects nowadays I don't have this problem. I use relative requires for intra-package modules.
I used to mix local deps with npm deps in node_modules, but that made my .gitignore too much trouble to only ignore certain deps.
I use symlinks (or nested directories on windows) to link my different packages to each other, but each has its own git repo and, if it's generally
usable, its own npm name.
defunctzombie commented Jan 30, 2014
"dependencies": {
"whatever": "file///relative/path/to/folder"
}
It would only work for private packages but is an easy way to have the package management/install system take care of setting up the symlink for
you at install time. This avoids all of the above described hacks and also has the benefit of letting you reference package.json when you want to
learn about a dependency (which you do already).
The start up script is a good option, though all the solutions have some drawback. At the very least others looking at your code might not know
where the require is looking for modules. You also want to eliminate the possibility of new dependencies colliding with modules of the same name.
I haven't noticed anyone mention using the relationship between your dependencies and your project root. So I went and built it myself:
requireFrom. This method is intuitive to anyone looking at it, and requires no extra steps outside of adding a dependency. Third-party modules can
use it relative to themselves, as well.
/node_modules
/package.json
/src
  /node_modules
    /client -> ../client
    /server -> ../server
    /shared -> ../shared
  /client
    /apps
      /main
        /test
          main.spec.js
        index.js
    /modules
      /foo
        /test
          foo.spec.js
        index.js
  /server
    /apps
    /modules
  /shared
it also solves the problem of not knowing where the modules come from, because all app modules have client/server/shared prefixes in require paths
I ran into the same architectural problem: wanting a way of giving my application more organization and internal namespaces, without:
mixing application modules with external dependencies or bothering with private npm repos for application-specific code
using relative requires, which make refactoring and comprehension harder
using symlinks or environment variables which don't play nicely with source control
The start-up script is a good idea, but I didn't like the extra moving parts.
In the end, I decided to organize my code using file naming conventions rather than directories. A structure would look something like:
node_modules
...
package.json
npm-shrinkwrap.json
src
app.js
app.config.js
app.models.bar.js
app.models.foo.js
app.web.js
app.web.routes.js
...
Then in code:
or just:
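The require calls were lost in extraction, but given the file layout above they would look something like:
var barModel = require('./app.models.bar.js');
// or just:
var barModel = require('./app.models.bar');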
The main disadvantage is of course that in a file browser, you can't expand/collapse the tree as though it was actually organized into directories. But
I like that it's very explicit about where all code is coming from, and it doesn't use any 'magic'.
Hi,
the start-up script doesn't work very well with nodemon (or node forever).
If something changes, nodemon tries to restart the start-up script, and in my case the child process (express js) is still bound to my IP and I got an
EADDRINUSE error.
I also tried to kill the child process but this will be executed too late.
process.on('exit', function() {
console.log("kill child process");
app.kill('SIGINT');
});
edit:
I've switched to the approach used by alexgorbatchev using a server and shared folder and making symlinks to node_modules folder.
Thank you it works great.
@visionmedia: quite like the idea of the no/low nesting, but how does that work with a larger source base? I have seen a few of your github repos
which manifest what you say. I'm thinking that maybe an application has more sprawling areas of functionality? (I'm a newbie on node so I might
be speculating?)
tuliomonteazul commented Mar 24, 2014
I also found a good way to use the start-up script solution with Grunt and nodemon.
grunt.initConfig({
concurrent: {
dev: {
tasks: ['nodemon', 'node-inspector', 'watch', 'mochaTest'],
options: {
logConcurrentOutput: true
}
}
...
},
nodemon: {
dev: {
script: 'index.js',
options: {
nodeArgs: ['--debug'],
env: {
NODE_PATH: './app'
}
}
}
},
...
So just setting the options.env inside nodemon configuration and my application is still starting by just calling $ grunt
The app-module-path module modifies the internal Module._nodeModulePaths method to change how the search path is calculated for modules at the
application level. Modules under "node_modules" will not be impacted because modules installed under node_modules will not get a modified
search path.
It of course bothers me that a semi-private method needed to be modified, but it works pretty well. Use at your own risk.
The startup script solution will impact module loading for all installed modules which is not ideal. Plus, that solution requires that you start your
application in a different way which introduces more friction.
You can create helper function in global scope to be able require modules relative to root path.
In app.js :
global.app_require = function(name) {
return require(__dirname + '/' + name);
}
var fs = require('fs'),
config = app_require('config'),
common = app_require('utils/common');
@gumaflux I believe @visionmedia is only talking about modules which usually wouldn't require "sprawling areas of functionality" because a single
module isn't meant to do as much as an application. I think the nesting issue is more of a problem in applications, especially MVC apps.
slorber commented May 19, 2014
The problem using paths, or putting code into node_modules, is that in your app you may have sources to transform, for example CoffeeScript or
JSX files.
When using require("some_private_node_module"), browserify doesn't seem to transform the files and builds a bundle with unprocessed sources.
Now your code will work and is less vulnerable to system-wide configuration changes and upgrades because each component can have its own local
transforms and dependencies.
See also: avoiding ../../../../../../.. which pretty much echos what @isaacs has said already: just use node_modules/ .
If you're worried about how node_modules might clutter up your app, create a node_modules/app and put all your modules under that package
namespace. You can always require('app/whatever') for some package node_modules/app/whatever .
So....
This is a small hack. It relies only on node.js continuing to support the NODE_PATH environment variable. The NODE_PATH env setting is a fine
method for defining an application specific local modules search path. However, I don't like relying on it being properly set external to javascript, in
all cases (i.e. export, bash profile, or startup cmd). Node's module.js absorbs process.env's NODE_PATH into a private variable for inclusion into a list
of global search paths used by require. The problem is, node only looks at process.env['NODE_PATH'] once, on main process init, before evaluating
any of the app's code. Including the following 2 lines allows the re-definition of NODE_PATH, post process-init, and should be included prior to any
local module specific requires. In a top level file include:
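The two lines in question are the same pair shown under "7. The Hack" above:
process.env.NODE_PATH = __dirname;
require('module').Module._initPaths();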
This does not change the behavior of module.js as documented; node_modules, package.json, and global modules all behave as expected.
Another option for complex application logic (config files, loggers, database connections, etc) is to use inversion of control (IoC) containers with
dependency injection. See @jaredhanson's Electrolyte for one implementation.
I just updated the article again and added more solutions. Thanks for all the feedback, keep it coming!
/cc @isaacs, @visionmedia, @mikeal, @creationix, @defunctzombie, @dskrepps, @alexgorbatchev, @indirectlylit, @flodev, @gumaflux,
@tuliomonteazul, @patrick-steele-idem, @a-ignatov-parc, @esco, @slorber, @substack, @joelabair, @kgryte
FWIW, in case anyone is using Jest for testing, I tried solution 1 referenced above and it broke everything. But after hacking around, I figured out a
way to make symlinks work: facebook/jest#98
This might be the worst IDEA ever, but what do you guys think about this ?
# CoffeeScript Example
$require = require
require = (file) ->
  if /^\/\/.*$/.test file
    file = file.slice 1, file.length
    $require process.cwd() + file  # load the file relative to the project root
  else
    $require file
//JavaScript Example
var $require, require;
$require = require;
require = function(file) {
  if (/^\/\/.*$/.test(file)) {
    file = file.slice(1, file.length);
    return $require(process.cwd() + file); // load the file relative to the project root
  } else {
    return $require(file);
  }
};
You can add that on the first line to override the require function with a reference to itself...
now, you can use require("express") as normal, and require("//lib/myLibFile") ; the difference is the leading // , inspired by the use in http
requests like //ajax.googleapis.com/ajax/libs/jqueryui/1.11.0/jquery-ui.min.js .
My current solution is to have my script spawn a child-process to itself if NODE_PATH isn't set. This allows me to just run node file.js and not worry
about anything else:
if( !process.env.NODE_PATH ){
  // set NODE_PATH to `pwd`
  process.env.NODE_PATH = __dirname + '/';
  // re-spawn this script as a child that inherits the updated environment
  require('child_process').fork(__filename, process.argv.slice(2));
  // "throw away" logging from this process. The child will still be fine since it has access to stdout and its own console.log
  console.log = function(){};
}else{
  // start app
}
Thank you for this write-up! I went with #7 and have global method Require which complements require .
cronvel commented Oct 2, 2014
I did a lib when I tried to restructure some source code in a large project: https://github.com/viruschidai/node-mv moves a source file and updates all
require paths to the moved file.
This feature is helpful for local offline development and creating tests that require npm installing where you don't want to hit an external
server, but should not be used when publishing packages to the public registry.
What I've been doing is to exploit the require.cache . If I have a package, say utils , in node_modules I'll do a lib/utils and in there I'll merge
the cache of utils to have whatever I want. That is:
So I only have to require that package once, and then utils.<some package> will give the necessary pack.
It just shortens the relative paths by introducing marks, points from which paths can be relative.
My solution is:
//PS.: This code should be in the root level folder of your project!
You are now basically requiring your .js files from base instead of cwd .
A word of caution for people using the symlink approach with Browserify: you are likely to break transforms. This has been my experience with brfs
and trying to include a module through a symlinked path. The transformer seems to ignore symlinked paths (or probably packages that are in the
node_modules directory).
However, it turns out that there's an additional option for strategy #4 if you're using a build tool like gulp (and still works with browserify
transforms). I've simply added process.env.NODE_PATH = "./my/include/path:" + (process.env.NODE_PATH || ""); to my gulpfile.js and everything
works great now.
@azu A local path in npm isn't synchronized with the original source code when I edit it in the original folder. It doesn't make a symbolic link.
I just made this module (my first) so I'd love to hear feedback (on my github page, not on this thread): https://www.npmjs.com/package/magic-globals
For me the hack presented by @joelabair works really well. I tested it with node v0.8, v0.10, v0.11 and it works well. In order to reuse this solution, I
made a little module where you can just add the folders that should behave like the node_modules folder:
https://www.npmjs.com/package/local-modules
require('local-modules')('lib', 'components');
like @creationix, I didn't want to mess with private dependencies in node_modules folder.
If you put parts of your app into node_modules you can't exclude node_modules from the search scope anymore. So you lose the ability to quickly search
through project files. This kinda sucks.
When you start to import app modules like require("something") and those modules don't really reside in node_modules , it feels like evil
magic to me. Import semantics were changed under the cover.
I actually think it should be resolved by adding a special PROJECT ROOT symbol and patching the native require . Syntax may be like
require("~/dfdfdf") .
But ~ will be confused with unix home dir so it's better to choose something else like require("@/dfdfdf") .
Explicit is better than implicit, as no one may miss the "@" symbol in import statements.
We basically add different syntax for different semantics which is good imo.
I believe having a special shims.js file for every non-standard installation like this in project folder is sane and safe enough.
https://gist.github.com/ivan-kleshnin/edfa4abefe8ce216b9fa
This is my second approach. It just implements the __root solution which, in my opinion, is the best solution to this problem and nodejs/iojs
should implement it.
https://github.com/gagle/node-groot
I wrote in my blog about a few solutions presented here versus ES6 problems:
http://injoin.io/2015/01/31/nodejs-require-problem-es6.html
@gustavohenke nice one, very hackish but cleaner and cross-functional among OSes. But the problem with it is the same as with putting the
modules inside node_modules . Having a require call like require('my/package') is very confusing for me, because I associate require paths without a
leading ./ with core or external modules. You could have an external module named my ; collisions may happen.
gustavohenke commented Feb 1, 2015
Yeah @gagle, I understand these problems, but my case is special, I won't be dropping ES6 modules. Fortunately, I have taken care of namespacing
my libs so there's only a single collision point. Also, my app is well documented for developers.
This gist is so incredibly helpful. Kind of embarrassing that Node has an issue with this many hackish solutions.
Seems like:
if you can turn this into a node module, do it
else just define it in your index.js or app.js:
if (!global.__base) { global.__base = __dirname + '/'; }
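A quick sketch of using that base afterwards (the lib/my-lib path is illustrative):
// anywhere in the app, once __base is set:
const myLib = require(global.__base + 'lib/my-lib');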
The readme:
rootrequire
Require files relative to your project root.
Install
Use
var root = require('rootrequire'),
    myLib = require(root + '/path/to/lib.js');
Why?
You can move files around more easily than you can with relative paths like ../../lib/my-lib.js
Every file documents your app's directory structure for you. You'll know exactly where to look for things.
Dazzle your coworkers.
This was written for the "Learn JavaScript with Eric Elliott" courses. Don't just learn JavaScript. Learn how to change the world.
To make node.js search for modules in an additional directory you could use the require.main.paths array.
// require('node-dm'); <-- Exception
require.main.paths.push('/home/username/code/projectname/node_modules/'); // <- any path here
console.log(require('node-dm')); // All good
@ericelliott, with your solution IDE navigation is lost in the same way as with others...
There is no escape from this problem at app code level. Every "trick" breaks IDE move-to functionality.
From all those "solutions", only symlinks keep IDE working as it should.
Thanks for the post, very useful and detailed. I found the wrapper solution to be the most elegant, works on any latest node instance and does not
require any pre-setup / hacks for it to work.
Besides it let me set the path to the library and avoid any potential name conflict issues.
I'll add my library to the list: https://github.com/etcinit/enclosure (It's very Java-like though)
jondlm commented Mar 3, 2015
Turns out that npm now flattens your dependency tree which breaks the "rootrequire" method by @ericelliott.
Since symlink is the only solution that does not confuse IDEs (as @ivan-kleshnin noted), here is my solution: add a postinstall script to the
package.json that creates a symlink from the app directory to the node_modules (note the srcpath link is specified relative to the node_modules ):
"scripts": {
"postinstall" : "node -e \"var srcpath='../app'; var dstpath='node_modules/app';var fs=require('fs'); fs.exists(dstpath,function(exist
},
The script could also be put into a separate file, but I prefer to specify it directly inside the package.json...
I think it should work on windows as well, but I have not tested it.
Would like to see an updated article for the ES module import syntax, as it requires you to be static with your imports; many of these solutions won't work there.
I developed wires because we had configuration and routing nightmares at my company. We've been using it for 2 years now and I just released
version 0.3.0 which is world-ready, so have fun using it and don't hesitate with feedback, questions or death-threats :P
Using wires, you would create a wires.json file at the root of your app:
{
":models/": "./lib/models/"
}
require( ":models/article" );
require( ":models/client" );
wires startServer
There's a lot more to wires but I felt like sharing on this specific topic.
The filepath Node.js module is a very helpful utility for simple access to file paths. You’ll need only a package.json file with this module
as a dependency, an “npm install” command, and then you are up and running. This article provides a quick introduction to a few of the
most common methods.
Example # 1A
Example # 1B:
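A minimal sketch of Example # 1A, based on the description below (assuming newPath() with no arguments defaults to the current location):
var FP = require('filepath');
var path = FP.newPath(); // assumed default: the current location
console.log(path.toString());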
In Example # 1, we first create the FP variable, which references the filepath module. Then we create the path variable, which holds the
return value of the FP object’s newPath method. And finally, we output the path in the console. Example # 1B shows the terminal output
when we use console.log to view the path variable. This path will vary for each user so I simply put “[YOUR LOCAL PATH TO]” for the
folder structure that leads up to that file in the github repo that you cloned (see “How to Demo” below).
How to Demo:
Example # 2
Example # 2 demonstrates the list method. The only real difference between this code and Example # 1, is the new variable “files”,
which receives the value of the list method, when called on our path variable. The files variable ends up as an array. Each element in
the array is an object whose “path” property is a string that points to a file in the current directory.
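A sketch of Example # 2 under the same assumptions:
var FP = require('filepath');
var files = FP.newPath().list(); // an array of objects, each with a `path` string property
console.log(files);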
How to Demo:
Example # 3A
Example # 3B
[
{ path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-1.js' },
{ path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-2.js' },
{ path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/filepath-3.js' },
{ path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/node_modules' },
{ path: '[YOUR LOCAL PATH TO]/video-code-examples/JavaScript/node-js/filepath/package.json' }
]
Example # 3C
Example # 3D
In Example # 3A, we see the recurse method in action. Just as the name implies, the recurse method will recursively list all of the files in
the current directory. As a result, if one of those files is a folder, then it will list all of the files in that folder, and so on. This method differs
from the previous two examples in that it takes a callback. The callback is a bit like a forEach call; it iterates over all of the files or folders
in the path, and calls the callback for each one. Inside of the callback, the path variable is the current path being iterated over.
In Example # 3C, we use the toString() method of the path object so that instead of a bunch of objects that we would need to handle, we
just get the values we are after; the string representation of the path to that file or folder.
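A sketch of Examples # 3A and # 3C under the same assumptions:
var FP = require('filepath');
// 3A: recurse calls the callback once for each file or folder found
FP.newPath().recurse(function (p) {
  console.log(p);
});
// 3C: toString() yields just the string value of each path
FP.newPath().recurse(function (p) {
  console.log(p.toString());
});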
How to Demo:
Summary
The filepath Node.js module has much more to offer than was demonstrated here. Hopefully, this article has demonstrated how easy it is
to get started with filepath.
Jesse Smith discusses how to work with the file paths often used in Node.js applications.
This article discusses handling file paths from the file system, which is important for loading and parsing file names in your application.
The file system is a big part of any application that has to handle file paths for loading, manipulating, or serving data. Node provides some
helper methods for working with file paths, which are discussed in the sections that follow.
Most of the time, your application has to know where certain files and/or directories are and execute them within the file system based
on certain contexts. Most other languages also have these convenience methods, but Node may have a few you might not have seen with
any other language.
Find Paths
Node can tell you where in the file system it is working by using the __filename and __dirname variables. The __filename variable
provides the absolute path to the file that is currently executing; __dirname provides the absolute path to the working directory where the
file being executed is located. Neither variable has to be imported from any modules because each is provided standard.
Many applica ons might have to switch the current working directory to another directory to fetch or serve different files.
The process object provides the chdir() method to accomplish this. The name of the directory to switch to is passed in as an argument
to this method:
process.chdir("../");
console.log("The new working directory is " + process.cwd());
The code changes to the directory above the current working directory. If the directory change fails, the current working directory
remains the working directory. You can trap this error using a try...catch clause, as sketched below.
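A minimal sketch of that guard:
try {
  process.chdir("../");
  console.log("The new working directory is " + process.cwd());
} catch (err) {
  console.error("Could not change directory: " + err);
}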
You might need the path to the Node executable file. The process object provides the execPath property to achieve this:
console.log(process.execPath);
The output from the code above is the path C:\Program Files (x86)\nodejs\node.exe.
Node.js | path.relative() Method
Last Updated : 28 Jan, 2020
The path.relative() method is used to find the relative path from a given path to another path based on the current working
directory. If both the given paths are the same, it would resolve to a zero-length string.
Syntax:
path.relative( from, to )
Parameters: This method accepts two parameters, as mentioned above and described below:
from: The base path string.
to: The destination path string.
Return Value: It returns a string with the relative path from from to to based on the current working directory.
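For example, the following calls (with illustrative Windows paths) would produce the output below:
const path = require('path');
console.log(path.relative("c:\\admin\\website", "c:\\admin\\index.html"));
console.log(path.relative("c:\\users\\joe", "c:\\admin\\files\\website"));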
Output:
..\index.html
..\..\admin\files\website
Reference: https://nodejs.org/api/path.html#path_path_relative_from_to
Requiring modules in Node.js: Everything you need to know
Samer Buna
Update: This article is now part of my book “Node.js Beyond The Basics”.
Read the updated version of this content and more about Node at jscomplete.com/node-
beyond-basics.
Node uses two core modules for managing module dependencies:
The require module, which appears to be available on the global scope — no need
to require('require') .
The module module, which also appears to be available on the global scope — no need
to require('module') .
You can think of the require module as the command and the module module as the organizer of all required modules.
The main object exported by the require module is a function (as used in the above example).
When Node invokes that require() function with a local file path as the function's only argument, Node goes through the following sequence of steps:
Resolving: To find the absolute path of the file.
Loading: To determine the type of the content in the file.
Wrapping: To give the file its private scope. This is what makes both
the require and module objects local to every file we require.
Evaluating: This is what the VM eventually does with the loaded code.
Caching: So that when we require this file again, we don’t go over all the steps another time.
In this article, I'll attempt to explain with examples these different stages and how they affect the way we work with modules in Node.
Let me first create a directory to host all the examples using my terminal:
mkdir ~/learn-node && cd ~/learn-node
All the commands in the rest of this article will be run from within ~/learn-node .
~/learn-node $ node
> module
Module {
id: '<repl>',
exports: {},
parent: undefined,
filename: null,
loaded: false,
children: [],
paths: [ ... ] }
Every module object gets an id property to identify it. This id is usually the full path to the file, but in a REPL session it's simply <repl> .
Node modules have a one-to-one relation with files on the file-system. We require a module by loading the content of a file into memory.
However, since Node allows many ways to require a file (for example, with a relative path or a pre-
configured path), before we can load the content of a file into the memory we need to find the absolute location of it.
When we require a 'find-me' module, for example with require('find-me') , Node will look for find-me.js in all the paths specified by module.paths — in order.
~/learn-node $ node
> module.paths
[ '/Users/samer/learn-node/repl/node_modules',
'/Users/samer/learn-node/node_modules',
'/Users/samer/node_modules',
'/Users/node_modules',
'/node_modules',
'/Users/samer/.node_modules',
'/Users/samer/.node_libraries',
'/usr/local/Cellar/node/7.7.1/lib/node' ]
The paths list is basically a list of node_modules directories under every directory from the current
directory to the root directory. It also includes a few legacy directories whose use is not
recommended.
If Node can’t find find-me.js in any of these paths, it will throw a “cannot find module error.”
~/learn-node $ node
> require('find-me')
Error: Cannot find module 'find-me'
at Function.Module._resolveFilename (module.js:470:15)
at Function.Module._load (module.js:418:25)
at Module.require (module.js:498:17)
at require (internal/module.js:20:19)
at repl:1:1
at ContextifyScript.Script.runInThisContext (vm.js:23:33)
at REPLServer.defaultEval (repl.js:336:29)
at bound (domain.js:280:14)
at REPLServer.runBound [as eval] (domain.js:293:12)
at REPLServer.onLine (repl.js:533:10)
If you now create a local node_modules directory and put a find-me.js in there, the require('find-me') call will find it:
~/learn-node $ node
> require('find-me');
I am not lost
{}
>
If another find-me.js file existed in any of the other paths, for example, if we have
a node_modules directory under the home directory and we have a different find-me.js file in there:
$ mkdir ~/node_modules
$ echo "console.log('I am the root of all problems');" > ~/node_modules/find-me.js
When we require('find-me') from within the learn-node directory — which has its own node_modules/find-me.js — the find-me.js file under the home directory will not be loaded at all:
~/learn-node $ node
> require('find-me')
I am not lost
{}
>
If we remove the local node_modules directory under ~/learn-node and try to require find-me one
more time, the file under the home’s node_modules directory would be used:
~/learn-node $ rm -r node_modules/
~/learn-node $ node
> require('find-me')
I am the root of all problems
{}
>
Requiring a folder
Modules don’t have to be files. We can also create a find-me folder under node_modules and place
an index.js file in there. The same require('find-me') line will use that folder's index.js file:
~/learn-node $ mkdir -p node_modules/find-me
~/learn-node $ echo "console.log('Found again.');" > node_modules/find-me/index.js
~/learn-node $ node
> require('find-me');
Found again.
{}
>
Note how it ignored the home directory’s node_modules path again since we have a local one now.
An index.js file will be used by default when we require a folder, but we can control what file
name to start with under the folder using the main property in package.json . For example, to make
the require('find-me') line resolve to a different file under the find-me folder, all we need to do is
add a package.json file in there and specify which file should be used to resolve this folder:
~/learn-node $ echo "console.log('I rule');" > node_modules/find-me/start.js
~/learn-node $ echo '{ "name": "find-me-folder", "main": "start.js" }' > node_modules/fi
~/learn-node $ node
> require('find-me');
I rule
{}
>
require.resolve
If you want to only resolve the module and not execute it, you can use
the require.resolve function. This behaves exactly the same as the main require function, but
does not load the file. It will still throw an error if the file does not exist, and it will return the full path to the file when found:
> require.resolve('find-me');
'/Users/samer/learn-node/node_modules/find-me/start.js'
> require.resolve('not-there');
Error: Cannot find module 'not-there'
at Function.Module._resolveFilename (module.js:470:15)
at Function.resolve (internal/module.js:27:19)
at repl:1:9
at ContextifyScript.Script.runInThisContext (vm.js:23:33)
at REPLServer.defaultEval (repl.js:336:29)
at bound (domain.js:280:14)
at REPLServer.runBound [as eval] (domain.js:293:12)
at REPLServer.onLine (repl.js:533:10)
at emitOne (events.js:101:20)
at REPLServer.emit (events.js:191:7)
>
This can be used, for example, to check whether an optional package is installed or not and only
use it when it’s available.
Requiring with relative and absolute paths: Besides resolving modules from within node_modules directories, we can also place a module anywhere we want and require it with either relative paths ( ./ and ../ ) or with absolute paths
starting with / .
If, for example, the find-me.js file was under a lib folder instead of the node_modules folder, we can require it with:
require('./lib/find-me');
The parent-child relation between modules: Let's create a lib/util.js file, add a console.log line there to identify it, and also console.log the module object itself:
// In lib/util.js
console.log('In util', module);
Do the same for an index.js file, which is what we'll be executing with the node command. Make this index.js file require lib/util.js :
// In index.js
console.log('In index', module);
require('./lib/util');
When we run index.js with node, we first see the index module object:
In index Module {
id: '.',
exports: {},
parent: null,
filename: '/Users/samer/learn-node/index.js',
loaded: false,
children: [],
paths: [ ... ] }
In util Module {
id: '/Users/samer/learn-node/lib/util.js',
exports: {},
parent:
Module {
id: '.',
exports: {},
parent: null,
filename: '/Users/samer/learn-node/index.js',
loaded: false,
children: [ [Circular] ],
paths: [...] },
filename: '/Users/samer/learn-node/lib/util.js',
loaded: false,
children: [],
paths: [...] }
Note how the main index module (id: '.') is now listed as the parent for the lib/util module.
However, the lib/util module was not listed as a child of the index module. Instead, we have
the [Circular] value there because this is a circular reference. If Node prints
the lib/util module object, it will go into an infinite loop. That's why it simply replaces the lib/util copy inside the index module object with the label [Circular] .
More importantly now, what happens if the lib/util module required the main index module?
This is where we get into what’s known as the circular modular dependency, which is allowed in
Node.
To understand it better, let’s first understand a few other concepts on the module object.
Every time we printed a module object, it had an exports property which has been an empty object so far. We can add any
attribute to this special exports object. For example, let’s export an id attribute for index.js and li
b/util.js :
// Add this line at the top of lib/util.js:
exports.id = 'lib/util';
// Add this line at the top of index.js:
exports.id = 'index';
When we now execute index.js , we'll see these attributes as managed on each file's module object:
~/learn-node $ node index.js
In index Module {
id: '.',
exports: { id: 'index' },
loaded: false,
... }
In util Module {
id: '/Users/samer/learn-node/lib/util.js',
exports: { id: 'lib/util' },
parent:
Module {
id: '.',
exports: { id: 'index' },
loaded: false,
... },
loaded: false,
... }
I’ve removed some attributes in the above output to keep it brief, but note how the exports object
now has the attributes we defined in each module. You can put as many attributes as you want on
that exports object, and you can actually change the whole object to be something else. For
example, to change the exports object to be a function instead of an object, we do the following:
// In lib/util.js
module.exports = function() {};
When you run index.js now, you’ll see how the exports object is a function:
~/learn-node $ node index.js
In index Module {
id: '.',
exports: [Function],
loaded: false,
... }
Note how we did not do exports = function() {} to make the exports object into a function. We
can't actually do that because the exports variable inside each module is just a reference to module.exports , which manages the exported properties. When we reassign the exports variable, that
reference is lost and we would be introducing a new variable instead of changing the module.exports object.
The module.exports object in every module is what the require function returns when we require
that module. For example, change the require('./lib/util') line in index.js into:
const UTIL = require('./lib/util');
console.log('UTIL:', UTIL);
The above will capture the properties exported in lib/util into the UTIL constant. When we run index.js now, that's what the UTIL: line will print.
Let’s also talk about the loaded attribute on every module. So far, every time we printed a module
The module module uses the loaded attribute to track which modules have been loaded (true
The module module uses the loaded attribute to track which modules have been loaded (true
value) and which modules are still being loaded (false value). We can, for example, see the index.js module fully loaded if we print its module object on the next cycle of the event loop using a setImmediate call:
// In index.js
setImmediate(() => {
console.log('The index.js module object is now loaded!', module)
});
The index.js module object is now loaded! Module {
id: '.',
exports: [Function],
parent: null,
filename: '/Users/samer/learn-node/index.js',
loaded: true,
children:
[ Module {
id: '/Users/samer/learn-node/lib/util.js',
exports: [Object],
parent: [Circular],
filename: '/Users/samer/learn-node/lib/util.js',
loaded: true,
children: [],
paths: [Object] } ],
paths:
[ '/Users/samer/learn-node/node_modules',
'/Users/samer/node_modules',
'/Users/node_modules',
'/node_modules' ] }
Note how in this delayed console.log output both lib/util.js and index.js are fully loaded.
The exports object becomes complete when Node finishes loading the module (and labels it so).
The whole process of requiring/loading a module is synchronous. That’s why we were able to see
the modules fully loaded after one cycle of the event loop.
This also means that we cannot change the exports object asynchronously. We can't, for
example, export something from inside a timer callback; by the time it runs, require has already returned:
// This will NOT work as intended:
setTimeout(() => {
  module.exports = { a: 1 };
});
What happens when module 1 requires module 2, and module 2 requires module 1?
To find out, let's create the following two files under lib/ , module1.js and module2.js , and have them require each other:
// lib/module1.js
exports.a = 1;
require('./module2');
exports.b = 2;
exports.c = 3;
// lib/module2.js
const Module1 = require('./module1');
console.log('Module1 is partially loaded here', Module1);
We required module2 before module1 was fully loaded, and since module2 required module1 while
it wasn’t fully loaded, what we get from the exports object at that point are all the properties
exported prior to the circular dependency. Only the a property was reported because
both b and c were exported after module2 required and printed module1 .
Node keeps this really simple. During the loading of a module, it builds the exports object. You
can require the module before it's done loading and you'll just get a partial exports object with whatever was exported up to that point.
If a file extension was not specified, the first thing Node will try to resolve is a .js file. If it can’t
find a .js file, it will try a .json file and it will parse the .json file if found as a JSON text file.
After that, it will try to find a binary .node file. However, to remove ambiguity, you should probably
specify a file extension when requiring anything other than .js files.
Requiring JSON files is useful if, for example, everything you need to manage in that file is some
static configuration values, or some values that you periodically read from an external source. For example, if we had the following config.json file:
{
"host": "localhost",
"port": 8080
}
The Node documentation site has a sample addon file which is written in C++. It’s a simple
module that exposes a hello() function and the hello function outputs “world.”
You can use the node-gyp package to compile and build the .cc file into a .node file. You just need a binding.gyp file to tell node-gyp what to do.
Once you have the addon.node file (or whatever name you specify in binding.gyp ) then you can require it just like any other module:
const addon = require('./addon');
console.log(addon.hello());
We can actually see the support of the three extensions by looking at require.extensions .
Looking at the functions for each extension, you can clearly see what Node will do with each. It
uses module._compile for .js files, JSON.parse for .json files, and process.dlopen for .node files.
We can use the exports object to export properties, but we cannot replace the exports object itself, because it's just a reference to module.exports .
How exactly does this exports object, which appears to be global for every module, get defined as a local variable inside every module?
Let me ask one more question before explaining Node’s wrapping process.
In a browser, when we declare a variable in one script, say var answer = 42 , that answer variable will be globally available in all scripts loaded after the script that defined it.
This is not the case in Node. When we define a variable in one module, the other modules in the
program will not have access to that variable. So how come variables in Node are magically
scoped?
The answer is simple. Before compiling a module, Node wraps the module code in a function,
which we can inspect using the wrapper property of the module module.
~ $ node
> require('module').wrapper
[ '(function (exports, require, module, __filename, __dirname) { ',
'\n});' ]
>
Node does not execute any code you write in a file directly. It executes this wrapper function which
will have your code in its body. This is what keeps the top-level variables that are defined in any module scoped to that module.
This wrapper function has 5 arguments: exports , require , module , __filename , and __dirname . This is what makes them appear to be global when in fact they are specific to each module.
All of these arguments get their values when Node executes the wrapper function. exports is
defined as a reference to module.exports prior to that. require and module are both specific to the
function to be executed, and the __filename / __dirname variables will contain the wrapped module's absolute file and directory paths.
You can see this wrapping in action if you run a script with a problem on its first line:
Note how the first line of the script as reported above was the wrapper function, not the bad
reference.
Moreover, since every module gets wrapped in a function, we can actually access that function's arguments with the arguments keyword. Logging arguments in index.js and running it gives:
{ '0': {},
'1':
{ [Function: require]
resolve: [Function: resolve],
main:
Module {
id: '.',
exports: {},
parent: null,
filename: '/Users/samer/index.js',
loaded: false,
children: [],
paths: [Object] },
extensions: { ... },
'2':
Module {
id: '.',
exports: {},
parent: null,
filename: '/Users/samer/index.js',
loaded: false,
children: [],
paths: [ ... ] },
'3': '/Users/samer/index.js',
'4': '/Users/samer' }
The first argument is the exports object, which starts empty. Then we have
the require / module objects, both of which are instances that are associated with the index.js file
that we’re executing. They are not global variables. The last 2 arguments are the file’s path and its
directory path.
The wrapping function’s return value is module.exports . Inside the wrapped function, we can use
the exports object to change the properties of module.exports , but we can’t reassign exports itself
return module.exports;
}
If we change the whole exports object, it would no longer be a reference to module.exports . This
is the way JavaScript reference objects work everywhere, not just in this context.
There is nothing special about the require object: it's basically a function that takes a module name or path and returns the module.exports object. We can simply override the require function with our own function if we need to.
For example, maybe for testing purposes, we want every require call to be mocked by default
and just return a fake object instead of the required module exports object. This simple reassignment will do the trick:
require = function() {
  return { mocked: true };
};
After doing the above reassignment of require , every require('something') call in the script will just return the mocked object.
The require object also has properties of its own. We’ve seen the resolve property, which is a
function that performs only the resolving step of the require process. We’ve also seen require.ext
ensions above.
There is also require.main which can be helpful to determine if the script is being required or run
directly.
Say, for example, that we have this simple printInFrame function in print-in-frame.js :
// In print-in-frame.js
const printInFrame = (size, header) => {
  console.log('*'.repeat(size));
  console.log(header);
  console.log('*'.repeat(size));
};
The function takes a numeric argument size and a string argument header and prints that header inside a frame of stars controlled by the size . We want to use this file in two ways:
1. Directly from the command line:
node print-in-frame 8 Hello
Passing 8 and Hello as command line arguments to print “Hello” in a frame of 8 stars.
2. With require . Assuming the required module will export the printInFrame function, we can just call it:
const print = require('./print-in-frame');
print(5, 'Hey');
to print “Hey” in a frame of 5 stars.
So we can use the require.main === module condition to satisfy the usage requirements above, invoking the printInFrame function when the file is run directly and exporting it otherwise:
// In print-in-frame.js
const printInFrame = (size, header) => {
  console.log('*'.repeat(size));
  console.log(header);
  console.log('*'.repeat(size));
};

if (require.main === module) {
  printInFrame(process.argv[2], process.argv[3]);
} else {
  module.exports = printInFrame;
}
When the file is not being required, we just call the printInFrame function
with process.argv elements. Otherwise, we just change the module.exports object to be the printInFrame function itself.
Say that you have the following ascii-art.js file that prints a cool looking header:
We want to display this header every time we require the file. So when we require the file twice, we might expect it to print twice, but it only prints once: Node caches the first require
call and does not load the file on the second call.
We can see this cache by printing require.cache after the first require. The cache registry is
simply an object that has a property for every required module. Those properties values are the m
odule objects used for each module. We can simply delete a property from
that require.cache object to invalidate that cache. If we do that, Node will re-load the module to re-
cache it.
However, this is not the most efficient solution for this case. The simple solution is to wrap the log
line in ascii-art.js with a function and export that function. This way, when we require the ascii-
art.js file, we get a function that we can execute to invoke the log line every time:
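A minimal sketch of that change (a placeholder string stands in for the actual art):
// In ascii-art.js
const art = '<< THE COOL HEADER >>'; // placeholder for the real ascii art
module.exports = () => console.log(art);

// Wherever the header is needed:
require('./ascii-art')();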
That’s all I have for this topic. Thanks for reading. Until next time!
When interacting with the file system, you may want to check whether a file exists on the hard disk at a given path. Node.js comes
with the fs core module allowing you to interact with the hard disk.
This tutorial shows you how to use Node.js to determine whether a file exists on disk.
Well, Fs#access doesn’t return the desired boolean value ( true/false ). Instead, it expects a callback with an error as the only
argument. The callback support comes from the early days of Node.js where asynchronous operations used callbacks.
Starting in version 10.0, Node.js added support for promises and async/await for the fs module. This tutorial assumes you’re using
async/await for flow control of your code. Then, you can use the require('fs').promises version of Fs#access which is usable with
async/await.
Here’s a helper method returning a boolean value indicating whether a file exists at the given path :
const { promises: Fs } = require('fs')

async function exists (path) {
  try {
    await Fs.access(path)
    return true
  } catch {
    return false
  }
}
// Example:
const Path = require('path')
const path = Path.join(__dirname, "existing-file.txt")
await exists(path)
// true
If you prefer a synchronous check, use the existsSync method. Note that it lives on the callback-based fs module itself, not on the promises API:
const Fs = require('fs')
Fs.existsSync(path)
// true
Avoid the deprecated Fs.exists method: it isn't available on the promises API and its callback doesn't follow Node.js's error-first convention.
Enjoy!
Mentioned Resources
Example
Extract the filename from a file path:
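A sketch of the example (the demo path is illustrative):
var path = require('path');
var filename = path.basename('/Users/Refsnes/demo_path.js');
console.log(filename); // demo_path.js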
Definition and Usage
The path.basename() method returns the filename part of a file path.
Syntax
path.basename(path, extension);
Parameter Values
Parameter Description
path Required. The file path to search in
extension Optional. If the filename ends with the specified string, the specified string is excluded
from the result
Technical Details
Return Value: The filename, as a String
Example
Extract the filename, but not the ".js" at the end:
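A sketch of this example, continuing from the one above:
var filename = path.basename('/Users/Refsnes/demo_path.js', '.js');
console.log(filename); // demo_path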
How YOU can learn Node.js I/O, files and paths
Follow me on Twitter , happy to take your suggestions on topics or improvements /Chris
If you are completely new to Node.js, or maybe you've just spun up an Express app in Node.js but barely know anything else
about Node - then this first part in a series is for YOU.
Working with file paths: when working with files and directories, it's important that we understand how to
work with paths. There are so many things that can go wrong in terms of locating your files and parsing
expressions, but Node.js does a really good job of keeping you on the straight and narrow thanks to built-in
variables and great core libraries.
Working with Files and Directories: almost everything in Node.js comes in an async and a sync flavor.
It's important to understand why we should go with one over the other, but also how they differ in how you
invoke them.
Demo, finally we will build some demos demonstrating these functionalities
## The file system
The file system is an important part of many applications. This means working with files, directories but also
dealing with different access levels and paths.
Working with files in Node.js is a synchronous or an asynchronous process. Node.js is single-threaded, which
means that if we need to carry things out in parallel we need an approach that supports it. That approach is the
callback pattern.
## References
Node.js docs - file system This is the official docs page for the file system
Overview of the fs module Good overview that shows what methods are available on the fs module
Reading files Shows all you need to know about reading files
Writing files Docs page showing how to write files
Working with folders Shows how to work with folders
File stats If you need specific information on a file or directory like creation date, size etc, this is the page
to learn more.
Paths Working with paths can be tricky but this module makes that really easy.
Create a Node.js app on Azure Want to know how to take your Node.js app to the Cloud?
Log on to Azure programmatically using Node.js This teaches you how to programmatically connect to
your Azure resources using Node.js
## Paths
A file path represents where a directory or file is located in your file system. It can look like this:
/path/to/file.txt
The path looks different depending on whether we are dealing with a Linux-based or a Windows-based operating
system. On Windows the same path might look like this instead:
C:\path\to\file.txt
For this we have the built-in module path that we can use like so:
const path = require('path')
It can help us with the following:
Information, it can extract information from our path on things such as parent directory, filename and file
extension
Join, we can get help joining two paths so we don't have to worry about which OS our code is run on
Absolute path, we can get help calculating an absolute path
Normalization, we can get help calculating the actual path when it contains specifiers like ./ or ../
Pre-steps
Information
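A sketch of what this section demonstrates (the path is illustrative):
const path = require('path')

const filePath = '/path/to/file.txt'
console.log(`Base ${path.basename(filePath)}`) // file.txt
console.log(`Dir ${path.dirname(filePath)}`) // /path/to
console.log(`Ext ${path.extname(filePath)}`) // .txt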
Above we can see how the methods basename() , dirname() and extname() help us inspect our path to
give us different pieces of information.
Join paths
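A sketch consistent with the variable names and the output below:
const path = require('path')

const join = '/path/to/my'
const joinArg = 'file.txt'
console.log(`Joined ${path.join(join, joinArg)}`)
console.log(`Concat ${path.join('/path', 'user', 'files', 'file.txt')}`)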
Above we are joining the paths contained in variables join and joinArg but we are also in our last
example testing out concatenating using nothing but directory names and file names:
Joined /path/to/my/file.txt
Concat /path/user/files/file.txt
The takeaway here is that we can concatenate different paths using the join() method. However, because
we don't know if our app will be run on a Linux or a Windows host machine, it's preferred that we construct
paths using nothing but directory and file names, as in the Concat example above.
Note how, in our second example, we use the resolve() method on info.txt , a file that exists in the
same directory as we run our code:
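console.log(`Resolved ${path.resolve('info.txt')}`)
// e.g. Resolved /your/current/dir/info.txt (depends on where you run the code)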
Normalize paths
Sometimes we have characters like ./ or ../ in our path. The method normalize() helps us calculate
the resulting path. Add the below code to our application file:
console.log(`Normalize ${path.normalize('/path/to/file/../')}`)
Normalize /path/to/
There are many things you can do when interacting with the file system like:
You interact with the file system using the built in module fs . To use it import it, like so:
const fs = require('fs')
I/O operations
Here is a selection of operations you can carry out on files/directories that exist on the fs module.
appendFile() , adds data to a file if it exists; if not, the file is created first
stat() , returns the stats of the file, like when it was created, how big it is in bytes, and other info
All the above methods exist as synchronous versions as well. All you need to do is to append the Sync at
the end, for example readFileSync() .
Async/Sync
All operations come in synchronous and asynchronous form. Node.js is single-threaded. The consequence
of running synchronous operations is therefore that we are blocking anything else from happening. This
results in much less throughput than if your app was written in an asynchronous way.
Synchronous operation
In a synchronous operation, you are effectively stopping anything else from happening, this might make your
program less responsive. A synchronous file operation should have sync as part of the operation name, like
so:
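For example (the file path is illustrative):
const fs = require('fs')

const content = fs.readFileSync('/path/to/file/file.txt', 'utf8')
console.log(content)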
Asynchronous operation
An Asynchronous operation is non-blocking. The way Node.js deals with asynchronous operations is by
using a callback model. What essentially happens is that Node.js doesn't wait for the operation to finish.
What you can do is to provide a callback, a function, that will be invoked once the operation has finished.
This gives rise to something called a callback pattern.
Below follows an example of opening a file:
const fs = require('fs');

fs.open('/path/to/file/file.txt', 'r', (err, fd) => {
  if (err) throw err;
  // fd is a file descriptor; remember to close it again
  fs.close(fd, (err) => {
    if (err) throw err;
  });
});
Above we see how we provide a function as our third argument. The function in itself takes an error err as
the first argument. The second argument is usually the data resulting from the operation, in this case a file
descriptor fd , which we then use to close the file again.
In this exercise, we will learn how to work with the module fs to do things such as
Pre-steps
app.js
info.txt
sub -|
---| a.txt
---| b.txt
---| c.txt
## Read/Write files
First, start by giving your app.js file the following content on the top:
const fs = require('fs');
const path = require('path');
Now we will work primarily with the module fs , but we will need the module path for helping us construct
a path later in the exercise.
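A sketch of the reads this exercise describes, using the info.txt file from the pre-steps:
const syncContent = fs.readFileSync('info.txt', 'utf8')
console.log(`Sync Content: ${syncContent}`)
console.log('After sync call')

fs.readFile('info.txt', 'utf8', (err, asyncContent) => {
  if (err) throw err
  console.log(`Async Content: ${asyncContent}`)
})
console.log('After async call')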
Above we are using the synchronous version of reading a file. We can see that through the use of a method
name ending in Sync .
Note above how the text After sync call is printed right after it lists the file content from our synchronous
call. Additionally note how text After async call is printed before Async Content: info . This means
anything asynchronous happens last. This is an important realization about asynchronous operations, they
may be non-blocking but they don't complete right away. So if the order is important, you should be looking at
constructs such as Promises and async/await.
For various reasons, you may want to list detailed information on a specific file/directory. For that we
have the stat() method. This also comes in asynchronous and synchronous versions.
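A sketch using the synchronous version, producing output like the below:
const stats = fs.statSync('info.txt')
console.log(`Size ${stats.size}`)
console.log(`Mode ${stats.mode}`)
console.log(`MTime ${stats.mtime}`)
console.log(`Is directory ${stats.isDirectory()}`)
console.log(`Is file ${stats.isFile()}`)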
Size 4
Mode 33188
MTime Mon Mar 16 2020 19:04:31 GMT+0100 (Central European Standard Time)
Is directory false
Is file true
Results above may vary depending on what content you have in your file info.txt and when it was
created.
Lastly, we will open up a directory using the method readdir() . This will produce an array of files/directories
contained within the specified directory:
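A sketch of that call:
fs.readdir(path.join(__dirname, 'sub'), (err, files) => {
  if (err) throw err
  console.log(files) // [ 'a.txt', 'b.txt', 'c.txt' ]
})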
Above we are constructing a directory path using the method join() from the path module, like so:
path.join(__dirname, 'sub')
__dirname is a built-in variable and simply means the executing directory. The method call means we will
look into a directory sub relative to where we are executing the code.
Now run this code with the following command:
node app.js
Summary
Paths, we've looked at how we can work with paths using the built-in path module
Files & Directories, we've learned how we can use the fs module to create, update, remove, and move
files & directories.
There is lots more to learn in this area and I highly recommend looking at the reference section of this article
to learn more.
ZetCode
JSON Server tutorial introduces the JavaScript json-server library, which can be used to create fake REST API.
JSON server
The json-server is a JavaScript library to create a fake REST API for testing.
$ mkdir json-server-lib
$ cd json-server-lib
$ npm init -y
$ npm i -g json-server
In addition, we install the axios module, which is a promise-based JavaScript HTTP client.
$ cat package.json
{
"name": "json-server-lib",
"version": "1.0.0",
"description": "",
"main": "index.js",
"dependencies": {
"axios": "^0.18.0"
},
"devDependencies": {},
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
The --watch option is used to specify the data file for the server (for example: $ json-server --watch users.json , where users.json holds the test data).
$ curl localhost:3000/users/3/
{
"id": 3,
"first_name": "Anna",
"last_name": "Smith",
"email": "[email protected]"
}
get_request.js
const axios = require('axios');
axios.get('http://localhost:3000/users')
.then(resp => {
const data = resp.data;
data.forEach(e => {
console.log(`${e.first_name}, ${e.last_name}, ${e.email}`);
});
})
.catch(error => {
console.log(error);
});
With the axios module, we get all users as a JSON array and loop through it with forEach().
$ node get_request.js
Robert, Schwartz, [email protected]
Lucy, Ballmer, [email protected]
Anna, Smith, [email protected]
Robert, Brown, [email protected]
Roger, Bacon, [email protected]
This is the output of the example. We get all users and print their full names and emails.
JSON Server POST request
With a POST request, we create a new user.
post_request.js
const axios = require('axios');
axios.post('http://localhost:3000/users', {
id: 6,
first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]'
}).then(resp => {
console.log(resp.data);
}).catch(error => {
console.log(error);
});
$ node post_request.js
{ id: 6,
first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]' }
$ curl localhost:3000/users/6/
{
"id": 6,
"first_name": "Fred",
"last_name": "Blair",
"email": "[email protected]"
}
We verify the newly created user with the curl command.
put_request.js
const axios = require('axios');
axios.put('http://localhost:3000/users/6/', {
first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]'
}).then(resp => {
console.log(resp.data);
}).catch(error => {
console.log(error);
});
$ node put_request.js
{ first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]',
id: 6 }
delete_request.js
const axios = require('axios');
axios.delete('http://localhost:3000/users/1/')
.then(resp => {
console.log(resp.data)
}).catch(error => {
console.log(error);
});
In the example, we delete the user with Id 1.
$ node delete_request.js
{}
sort_data.js
const axios = require('axios');
axios.get('http://localhost:3000/users?_sort=last_name&_order=asc')
.then(resp => {
const data = resp.data;
data.forEach(e => {
console.log(`${e.first_name}, ${e.last_name}, ${e.email}`)
});
}).catch(error => {
console.log(error);
});
The code example sorts data by the users' last name in ascending order. We use the _sort and _order query parameters.
$ node sort_data.js
Roger, Bacon, [email protected]
Lucy, Ballmer, [email protected]
Fred, Blair, [email protected]
Robert, Brown, [email protected]
Robert, Schwartz, [email protected]
Anna, Smith, [email protected]
operators.js
const axios = require('axios');
axios.get('http://localhost:3000/users?id_gte=4')
.then(resp => {
console.log(resp.data)
}).catch(error => {
console.log(error);
});
$ node operators.js
[ { id: 4,
first_name: 'Robert',
last_name: 'Brown',
email: '[email protected]' },
{ id: '5',
first_name: 'Roger',
last_name: 'Bacon',
email: '[email protected]' },
{ first_name: 'Fred',
last_name: 'Blair',
email: '[email protected]',
id: 6 } ]
full_text_search.js
const axios = require('axios');
axios.get('http://localhost:3000/users?q=yahoo')
.then(resp => {
console.log(resp.data)
}).catch(error => {
console.log(error);
});
TUTORIAL
By Cooper Makhijani
Published on July 10, 2019
Many people forget about one of Node’s most useful built-in modules, the path module. It’s a module with methods that
help you deal with file and directory path names on the machine’s filesystem. In this article, we’re going to look at five of the
tools path provides.
Before we can start using the path module, we have to require it:
Something of note: path works a little bit differently depending on your OS, but that’s beyond the scope of this article. To read
more about the differences in the way path works on POSIX systems and Windows, see the path documentation.
Now that that’s out of the way, let’s look at all the things we can use path for.
path.join
One of the most commonly used path methods is path.join . The join method takes two or more parts of a file path and
joins them into one string that can be used anywhere that requires a file path. For this example, let’s say that we need the
file path of an image, and we have the name of the image. For simplicity’s sake, we’ll assume it’s a png.
const path = require('path');
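A sketch of the idea (directory and image names are illustrative):
const imageName = 'penguin';
const imagePath = path.join('img', imageName + '.png');
console.log(imagePath); // img/penguin.png (on POSIX systems)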
→ path.join documentation
path.basename
According to the path docs, the path.basename method will give you the trailing part of a path. In layman’s terms, it returns
either the name of the file or directory that the file path refers to. For this example, let’s say we want to know the name of an
image, but we were passed the whole file path.
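A sketch (the file path is illustrative):
const path = require('path');
const filepath = '/home/user/photos/selfie.png';
console.log(path.basename(filepath)); // selfie.png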
Now this is cool and all, but what if we want it without the extension? Lucky for us, we just have to tell path.basename to
remove it.
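Continuing the sketch above, we pass the extension as the second argument:
console.log(path.basename(filepath, '.png')); // selfie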
→ path.basename documentation
path.dirname
Sometimes we need to know the directory that a file is in, but the file path we have leads to a file within that directory. The
path.dirname function is here for us. path.dirname returns the lowest level directory in a file path.
const path = require('path');
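For example, with an illustrative path:
console.log(path.dirname('/home/user/photos/selfie.png')); // /home/user/photos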
→ path.dirname documentation
path.extname
Say we need to know what the extension of a file is. For our example we’re going to make a function that tells us if a file is
an image. For simplicity’s sake, we’ll only be checking against the most common image types. We use path.extname to get
the extension of a file.
const path = require('path');
// common image extensions (this list is illustrative)
const imageTypes = ['.png', '.jpg', '.jpeg', '.gif'];

function isImage(filepath) {
let filetype = path.extname(filepath);
if(imageTypes.includes(filetype)) {
return true;
} else {
return false;
}
}
isImage('picture.png'); // true
isImage('myProgram.exe'); // false
isImage('pictures/selfie.jpeg'); // true
→ path.extname documentation
path.normalize
Many file systems allow the use of shortcuts and references to make navigation easier, such as .. and . , meaning up one
directory and the current directory respectively. These are great for quick navigation and testing, but it’s a good idea to have our
paths a little more readable. With path.normalize , we can convert a path containing these shortcuts to the actual path it
represents. path.normalize can handle even the most convoluted paths, as our example shows.
path.normalize('/hello/world/lets/go/deeper/./wait/this/is/too/deep/lets/go/back/some/../../../../../../../../..');
// returns: /hello/world/lets/go/deeper
→ path.normalize documentation
🎉 We’re done! That’s all we’re going to cover in this article. Keep in mind that there’s way more to path than what’s covered
here, so I encourage you to check out the official path documentation. Hopefully you learned something, and thanks for
reading!
TUTORIAL
By William Le
Last Validated on October 20, 2020 · Originally Published on May 23, 2019
Introduction
__dirname is a variable, available in every Node.js module, that tells you the absolute path of the directory containing the currently executing file.
In this article, you will explore how to implement __dirname in your Node.js project.
Prerequisites
To complete this tutorial, you will need:
A general knowledge of Node.js. To learn more about Node.js, check out our How To Code in Node.js series.
node-app
├──index.js
├──public
├──src
│ ├──helpers.js
│ └──api
│ └──controller.js
├──cronjobs
│ ├──pictures
│ └──hello.js
└──package.json
You can use __dirname to check on which directories your files live:
controller.js
console.log(__dirname) // "/Users/Sam/node-app/src/api"
console.log(process.cwd()) // "/Users/Sam/node-app"
hello.js
console.log(__dirname) // "/Users/Sam/node-app/cronjobs"
console.log(process.cwd()) // "/Users/Sam/node-app"
Notice that __dirname has a different value depending on which file you log it from. The process.cwd() method also
returns an absolute path, but of the project directory instead. The __dirname variable always returns the absolute path of where your
files live.
index.js
const fs = require('fs');
const path = require('path');
const dirPath = path.join(__dirname, '/pictures');
fs.mkdirSync(dirPath);
Now you’ve created a new directory, pictures , after calling the mkdirSync() method, which received __dirname as part of the
absolute path.
Pointing to Directories
Another unique feature is its ability to point to directories. In your index.js file, declare a variable and pass in the value of
__dirname as the first argument in path.join() , and your directory containing static files as the second:
index.js
express.static(path.join(__dirname, '/public'));
Here, you’re telling Node.js to use __dirname to point to the public directory that contains static files.
index.js
const fs = require('fs');
const path = require('path');
const filePath = path.join(__dirname, '/pictures/hello.jpeg');
fs.openSync(filePath, 'w');
Using the openSync() method will add the file if it does not exist within your directory.
Conclusion
Node.js provides a way for you to make and point to directories, and add files to existing directories with a modular
environment variable.
For further reading, check out the Node.js documentation for __dirname , and the tutorial on using __dirname in the
Express.js framework.
Every file in the system has a path. On Linux and macOS, a path might look like:
/users/joe/file.txt
while Windows computers are different, and have a structure such as:
C:\users\joe\file.txt
You need to pay attention when using paths in your applications, as this difference must be taken into
account.
Example:
const path = require('path')
const notes = '/users/joe/notes.txt'
path.dirname(notes) // /users/joe
path.basename(notes) // notes.txt
path.extname(notes) // .txt
You can get the file name without the extension by specifying a second argument to basename :
path.basename(notes, path.extname(notes)) // notes
You can get the absolute path calculation of a relative path using path.resolve() :
path.resolve('joe.txt') // '/Users/joe/joe.txt' if run from my home folder
If the first parameter starts with a slash, that means it's an absolute path:
path.resolve('/etc', 'joe.txt') // '/etc/joe.txt'
path.normalize() is another useful function that will try to calculate the actual path when it contains
relative specifiers like . or .. , or double slashes:
path.normalize('/users/joe/..//test.txt') //'/users/test.txt'
Neither resolve nor normalize will check if the path exists. They just calculate a path based on the
information they got.
Node.js file stats
Every file comes with a set of details that we can inspect using Node.js. In particular, using the stat() method provided by the fs module.
You call it passing a file path, and once Node.js gets the file details it will call the callback function you
pass, with 2 parameters: an error message, and the file stats:
const fs = require('fs')
fs.stat('/Users/joe/test.txt', (err, stats) => {
if (err) {
console.error(err)
return
}
//we have access to the file stats in `stats`
})
Node.js also provides a sync method, which blocks the thread until the file stats are ready:
const fs = require('fs')
try {
const stats = fs.statSync('/Users/joe/test.txt')
} catch (err) {
console.error(err)
}
The file information is included in the stats variable. What kind of information can we extract using the
stats?
A lot, including:
if the file is a directory or a file, using stats.isFile() and stats.isDirectory()
if the file is a symbolic link, using stats.isSymbolicLink()
the file size in bytes, using stats.size
There are other advanced methods, but the bulk of what you'll use in your day-to-day programming is this.
const fs = require('fs')
fs.stat('/Users/joe/test.txt', (err, stats) => {
if (err) {
console.error(err)
return
}
stats.isFile() //true
stats.isDirectory() //false
stats.isSymbolicLink() //false
stats.size //1024000 //= 1MB
})
The Node.js fs core module provides many handy methods you can use to work with folders.
Use fs.mkdir() or fs.mkdirSync() to create a new folder:
const fs = require('fs')
const folderName = '/Users/joe/test'
try {
if (!fs.existsSync(folderName)) {
fs.mkdirSync(folderName)
}
} catch (err) {
console.error(err)
}
Read the content of a directory
Use fs.readdir() or fs.readdirSync() to read the contents of a directory.
This piece of code reads the content of a folder, both files and subfolders, and returns their relative path:
const fs = require('fs')
const path = require('path')
const folderPath = '/Users/joe'
fs.readdirSync(folderPath)
To get the full path:
fs.readdirSync(folderPath).map(fileName => {
return path.join(folderPath, fileName)
})
You can also filter the results to only return the files, and exclude the folders:
const isFile = fileName => {
  return fs.lstatSync(fileName).isFile()
}
fs.readdirSync(folderPath).map(fileName => {
  return path.join(folderPath, fileName)
})
.filter(isFile)
Rename a folder
Use fs.rename() or fs.renameSync() to rename a folder. The first parameter is the current path, the
second the new path:
const fs = require('fs')
fs.rename('/Users/joe', '/Users/roger', err => {
  if (err) {
    console.error(err)
  }
})
fs.renameSync() is the synchronous version:
const fs = require('fs')
try {
fs.renameSync('/Users/joe', '/Users/roger')
} catch (err) {
console.error(err)
}
Remove a folder
Use fs.rmdir() or fs.rmdirSync() to remove a folder.
Removing a folder that has content can be more complicated than you need.
In this case it's best to install the fs-extra module, which is very popular and well maintained. It's a
drop-in replacement of the fs module, which provides more features on top of it.
In this case the remove() method is what you want.
Install it using npm install fs-extra and use it like this:
const fs = require('fs-extra')
fs.remove(folder)
.then(() => {
//done
})
.catch(err => {
console.error(err)
})
or with async/await:
async function removeFolder(folder) {
try {
await fs.remove(folder)
//done
} catch (err) {
console.error(err)
}
}
The easiest way to write to files in Node.js is to use the fs.writeFile() API.
Example:
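const fs = require('fs')
const content = 'Some content!'

fs.writeFile('/Users/joe/test.txt', content, err => {
  if (err) {
    console.error(err)
    return
  }
  //file written successfully
})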
Alternatively, you can use the synchronous version fs.writeFileSync() :
const fs = require('fs')
const content = 'Some content!'

try {
  const data = fs.writeFileSync('/Users/joe/test.txt', content)
  //file written successfully
} catch (err) {
  console.error(err)
}
By default, this API will replace the contents of the file if it does already exist.
Append to a file
A handy method to append content to the end of a file is fs.appendFile() (and its
fs.appendFileSync() counterpart):
const content = 'Some content!'

fs.appendFile('file.log', content, err => {
  if (err) {
    console.error(err)
    return
  }
  //done!
})
Using streams
All those methods write the full content to the le before returning the control back to your program (in
the async version, this means executing the callback)
The Node.js fs module
The fs module provides a lot of very useful functionality to access and interact with the file system.
There is no need to install it. Being part of the Node.js core, it can be used by simply requiring it:
const fs = require('fs')
Once you do so, you have access to all its methods, which include:
fs.access() : check if the file exists and Node.js can access it with its permissions
fs.appendFile() : append data to a file. If the file does not exist, it's created
fs.chmod() : change the permissions of a file specified by the filename passed. Related:
fs.lchmod() , fs.fchmod()
fs.chown() : change the owner and group of a file specified by the filename passed. Related:
fs.fchown() , fs.lchown()
fs.close() : close a file descriptor
fs.copyFile() : copies a file
fs.createReadStream() : create a readable file stream
fs.createWriteStream() : create a writable file stream
fs.link() : create a new hard link to a file
fs.mkdir() : create a new folder
One peculiar thing about the fs module is that all the methods are asynchronous by default, but they
can also work synchronously by appending Sync .
For example:
fs.rename()
fs.renameSync()
fs.write()
fs.writeSync()
For example let's examine the fs.rename() method. The asynchronous API is used with a callback:
const fs = require('fs')

fs.rename('before.json', 'after.json', err => {
  if (err) {
    return console.error(err)
  }
  //done
})
A synchronous API can be used like this, with a try/catch block to handle errors:
const fs = require('fs')
try {
fs.renameSync('before.json', 'after.json')
//done
} catch (err) {
console.error(err)
}
The key difference here is that the execution of your script will block in the second example, until the file
operation succeeds.
Error handling in Node.js
TABLE OF CONTENTS
Creating exceptions
An exception is created using the throw keyword:
throw value
As soon as JavaScript executes this line, the normal program ow is halted and the control is held back to
the nearest exception handler.
Usually in client-side code value can be any JavaScript value including a string, a number or an object.
Error objects
An error object is an object that is either an instance of the Error object, or extends the Error class,
provided in the Error core module:
throw new Error('Ran out of coffee')
or
class NotEnoughCoffeeError extends Error {
//...
}
throw new NotEnoughCoffeeError()
Handling exceptions
An exception handler is a try / catch statement.
Any exception raised in the lines of code included in the try block is handled in the corresponding
catch block:
try {
//lines of code
} catch (e) {}
You can add multiple handlers, which can catch different kinds of errors.
If an exception is never handled, the program will crash. To solve this, you listen for the uncaughtException event on the process object:
process.on('uncaughtException', err => {
console.error('There was an uncaught error', err)
process.exit(1) //mandatory (as per the Node.js docs)
})
You don't need to import the process core module for this, as it's automatically injected.
Using promises you can chain different operations and handle errors at the end of the chain:
doSomething1()
.then(doSomething2)
.then(doSomething3)
.catch(err => console.error(err))
How do you know where the error occurred? You don't really know, but you can handle errors in each of the functions you call ( doSomethingX ), and inside the error handler throw a new error that's going to call the outside catch handler.
To be able to handle errors locally without handling them in the function we call, we can break the chain: create a function in each then() and process the exception there:
doSomething1()
.then(() => {
return doSomething2().catch(err => {
//handle error
throw err //break the chain!
})
})
.then(() => {
return doSomething3().catch(err => {
//handle error
throw err //break the chain!
})
})
.catch(err => console.error(err))
Node.js Streams
Streams are a way to handle reading/writing files, network communications, or any kind of end-to-end information exchange in an efficient way.
Streams are not a concept unique to Node.js. They were introduced in the Unix operating system decades
ago, and programs can interact with each other passing streams through the pipe operator ( | ).
For example, in the traditional way, when you tell the program to read a file, the file is read into memory, from start to finish, and then you process it.
Using streams you read it piece by piece, processing its content without keeping it all in memory.
The Node.js stream module provides the foundation upon which all streaming APIs are built. All streams are instances of EventEmitter.
Why streams
Streams basically provide two major advantages over using other data handling methods:
Memory efficiency: you don't need to load large amounts of data in memory before you are able to process it
Time efficiency: it takes way less time to start processing data, since you can start processing as soon as you have it, rather than waiting till the whole data payload is available
An example of a stream
A typical example is reading files from a disk.
Using the Node.js fs module, you can read a file, and serve it over HTTP when a new connection is established to your HTTP server:
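A minimal sketch of such a server (the data.txt file and port 3000 are arbitrary choices):
const http = require('http')
const fs = require('fs')

const server = http.createServer(function (req, res) {
  fs.readFile(__dirname + '/data.txt', (err, data) => {
    res.end(data)
  })
})
server.listen(3000)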
readFile() reads the full contents of the file, and invokes the callback function when it's done.
res.end(data) in the callback will return the file contents to the HTTP client.
If the file is big, the operation will take quite a bit of time. Here is the same thing written using streams:
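A sketch of the streaming version, under the same assumptions:
const http = require('http')
const fs = require('fs')

const server = http.createServer((req, res) => {
  const stream = fs.createReadStream(__dirname + '/data.txt')
  stream.pipe(res)
})
server.listen(3000)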
Instead of waiting until the file is fully read, we start streaming it to the HTTP client as soon as we have a
chunk of data ready to be sent.
pipe()
The above example uses the line stream.pipe(res) : the pipe() method is called on the file stream.
What does this code do? It takes the source, and pipes it into a destination.
You call it on the source stream, so in this case, the file stream is piped to the HTTP response.
The return value of the pipe() method is the destination stream, which is a very convenient thing that
lets us chain multiple pipe() calls, like this:
src.pipe(dest1).pipe(dest2)
which is the same as doing:
src.pipe(dest1)
dest1.pipe(dest2)
There are four types of streams:
Readable : a stream you can pipe from, but not pipe into (you can receive data, but not send data to it). When you push data into a readable stream, it is buffered, until a consumer starts to read the data.
Writable : a stream you can pipe into, but not pipe from (you can send data, but not receive from it)
Duplex : a stream you can both pipe into and pipe from, basically a combination of a Readable and Writable stream
Transform : a Transform stream is similar to a Duplex, but the output is a transform of its input
To create a readable stream, you implement the _read() method and push data into the stream:
readableStream._read = () => {}
readableStream.push('hi!')
readableStream.push('ho!')
To send data to a writable stream, you can pipe a readable stream into it, for example process.stdin :
process.stdin.pipe(writableStream)
readableStream.pipe(writableStream)
readableStream.push('hi!')
readableStream.push('ho!')
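Putting those fragments together, a self-contained sketch (the writable stream here simply logs what it receives):
const Stream = require('stream')

const readableStream = new Stream.Readable({
  read() {}
})

const writableStream = new Stream.Writable({
  write(chunk, encoding, next) {
    console.log(chunk.toString())
    next()
  }
})

readableStream.pipe(writableStream)
readableStream.push('hi!')
readableStream.push('ho!')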
You can also consume a readable stream directly, using the readable event:
readableStream.on('readable', () => {
console.log(readableStream.read())
})
You can write data directly to a writable stream using the write() method:
writableStream.write('hey!\n')
To signal a writable stream that you have ended writing, call the end() method:
readableStream.pipe(writableStream)
readableStream.push('hi!')
readableStream.push('ho!')
writableStream.end()
Node.js Buffers
What is a buffer?
A buffer is an area of memory. JavaScript developers are typically far less familiar with this concept than C, C++ or Go developers (or any programmer using a systems programming language), who interact with memory every day.
It represents a fixed-size chunk of memory (it can't be resized) allocated outside of the V8 JavaScript engine.
You can think of a buffer like an array of integers, which each represent a byte of data.
Buffers are deeply linked with streams. When a stream processor receives data faster than it can digest, it puts the data in a buffer.
A simple visualization of a buffer is when you are watching a YouTube video and the red line goes beyond your visualization point: you are downloading data faster than you're viewing it, and your browser buffers it.
How to create a buffer
A buffer is created using the Buffer.from() , Buffer.alloc() , and Buffer.allocUnsafe() methods.
Buffer.from(array)
Buffer.from(arrayBuffer[, byteOffset[, length]])
Buffer.from(buffer)
Buffer.from(string[, encoding])
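For example, creating a buffer from a string:
const buf = Buffer.from('Hey!')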
You can also just initialize the buffer by passing the size. This creates a 1KB buffer:
const buf = Buffer.alloc(1024)
While both alloc and allocUnsafe allocate a Buffer of the specified size in bytes, the Buffer created by alloc will be initialized with zeroes and the one created by allocUnsafe will be uninitialized.
This means that while allocUnsafe would be quite fast in comparison to alloc , the allocated segment of memory may contain old data which could potentially be sensitive.
Older data, if present in the memory, can be accessed or leaked when the Buffer memory is read. This is what really makes allocUnsafe unsafe, and extra care must be taken while using it.
Using a buffer
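A buffer, being an array of bytes, can be accessed like an array. A minimal sketch:
const buf = Buffer.from('Hey!')

console.log(buf[0]) // 72
console.log(buf[1]) // 101
console.log(buf[2]) // 121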
Those numbers are the Unicode code points that identify the character in each buffer position (H => 72, e => 101, y => 121).
You can print the full content of the buffer using the toString() method:
console.log(buf.toString())
Notice that if you initialize a buffer with a number that sets its size, you'll get access to pre-
initialized memory that will contain random data, not an empty buffer!
You can write a whole string of data to a buffer using the write() method:
const buf = Buffer.alloc(4)
buf.write('Hey!')
Just like you can access a buffer with an array syntax, you can also set the contents of the buffer in the same way:
buf[1] = 111 //o
console.log(buf.toString()) //Hoy!
Copy a buffer
Copying a buffer is possible using the copy() method. By default you copy the whole buffer. 3 more parameters let you define the target buffer starting position to copy to, the source buffer starting position to copy from, and the new buffer length:
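A minimal sketch of both forms, using the copy() method on a small buffer:
const buf = Buffer.from('Hey!')

const bufcopy = Buffer.alloc(4)
buf.copy(bufcopy) // copies the whole buffer by default
bufcopy.toString() // 'Hey!'

const bufcopy2 = Buffer.alloc(2)
buf.copy(bufcopy2, 0, 0, 2) // copy bytes 0-1 to position 0
bufcopy2.toString() // 'He'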
Slice a buffer
If you want to create a partial visualization of a buffer, you can create a slice. A slice is not a copy: the original buffer is still the source of truth. If that changes, your slice changes.
Use the slice() method to create it. The first parameter is the starting position, and you can specify an optional second parameter with the end position:
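For example, slicing the first 2 bytes of a small buffer:
const buf = Buffer.from('Hey!')

const slice = buf.slice(0, 2)
slice.toString() // 'He'
buf[0] = 111 // 'o'
slice.toString() // 'oe', the slice sees the change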
The Node.js http module
The http module provides some properties and methods, and some classes.
Properties
http.METHODS
> require('http').METHODS
[ 'ACL',
'BIND',
'CHECKOUT',
'CONNECT',
'COPY',
'DELETE',
'GET',
'HEAD',
'LINK',
'LOCK',
'M-SEARCH',
'MERGE',
'MKACTIVITY',
'MKCALENDAR',
'MKCOL',
'MOVE',
'NOTIFY',
'OPTIONS',
'PATCH',
'POST',
'PROPFIND',
'PROPPATCH',
'PURGE',
'PUT',
'REBIND',
'REPORT',
'SEARCH',
'SUBSCRIBE',
'TRACE',
'UNBIND',
'UNLINK',
'UNLOCK',
'UNSUBSCRIBE' ]
http.STATUS_CODES
This property lists all the HTTP status codes and their description:
http.globalAgent
Points to the global instance of the Agent object, which is an instance of the http.Agent class.
It's used to manage connection persistence and reuse for HTTP clients, and it's a key component of Node.js HTTP networking.
http.createServer()
Returns a new instance of the http.Server class.
Usage:
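A minimal usage sketch (the port number is an arbitrary choice):
const http = require('http')

const server = http.createServer((req, res) => {
  // handle every request with this callback
  res.end('ok')
})
server.listen(3000)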
http.request()
Makes an HTTP request to a server, creating an instance of the http.ClientRequest class.
http.get()
Similar to http.request() , but automatically sets the HTTP method to GET, and calls req.end()
automatically.
Classes
The HTTP module provides 5 classes:
http.Agent
http.ClientRequest
http.Server
http.ServerResponse
http.IncomingMessage
http.Agent
Node.js creates a global instance of the http.Agent class to manage connections persistence and reuse
for HTTP clients, a key component of Node.js HTTP networking.
This object makes sure that every request made to a server is queued and a single socket is reused.
http.ClientRequest
An http.ClientRequest object is created when http.request() or http.get() is called.
When a response is received, the response event is fired with the response, an http.IncomingMessage instance, as its argument.
http.Server
This class is commonly instantiated and returned when creating a new server using
http.createServer() .
Once you have a server object, you have access to its methods:
close() stops the server from accepting new connections
http.ServerResponse
Created by an http.Server and passed as the second parameter to the request event it fires.
The method you'll always call in the handler is end() , which closes the response; once the message is complete the server can send it to the client. It must be called on each response.
getHeaderNames() get the list of the names of the HTTP headers already set
getHeaders() get a copy of the HTTP headers already set
setHeader('headername', value) sets an HTTP header value
getHeader('headername') gets an HTTP header already set
removeHeader('headername') removes an HTTP header already set
hasHeader('headername') returns true if the response has that header set
headersSent() returns true if the headers have already been sent to the client
After processing the headers you can send them to the client by calling response.writeHead() , which accepts the statusCode as the first parameter, the optional status message, and the headers object.
To send data to the client in the response body, you use write() . It will send buffered data to the HTTP response stream.
If the headers were not sent yet using response.writeHead() , it will send the headers first, with the status code and message that's set in the request, which you can edit by setting the statusCode and statusMessage property values:
response.statusCode = 500
response.statusMessage = 'Internal Server Error'
http.IncomingMessage
The data is accessed using streams, since http.IncomingMessage implements the Readable Stream
interface.
The events module provides us the EventEmitter class, which is key to working with events in Node.js.
The event listener eats its own dog food and uses these events:
newListener when a listener is added
removeListener when a listener is removed
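All the door examples in this section assume an emitter created like this:
const EventEmitter = require('events')
const door = new EventEmitter()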
emitter.addListener()
Alias for emitter.on() .
emitter.emit()
Emits an event. It synchronously calls every event listener in the order they were registered.
door.emit("slam") // emitting the event "slam"
emitter.eventNames()
Return an array of strings that represent the events registered on the current EventEmitter object:
door.eventNames()
emitter.getMaxListeners()
Get the maximum amount of listeners one can add to an EventEmitter object, which defaults to 10 but
can be increased or lowered by using setMaxListeners()
door.getMaxListeners()
emitter.listenerCount()
Get the count of listeners of the event passed as parameter:
door.listenerCount('open')
emitter.listeners()
Gets an array of the listeners of the event passed as parameter:
door.listeners('open')
emitter.off()
Alias for emitter.removeListener() , added in Node.js 10.
emitter.on()
Adds a callback function that's called when an event is emitted.
Usage:
door.on('open', () => {
console.log('Door was opened')
})
emitter.once()
Adds a callback function that's called when an event is emitted for the first time after registering this. This callback is only going to be called once, never again.
ee.once('my-event', () => {
//call callback function once
})
emitter.prependListener()
When you add a listener using on or addListener , it's added last in the queue of listeners, and called
last. Using prependListener it's added, and called, before other listeners.
emitter.prependOnceListener()
When you add a listener using once , it's added last in the queue of listeners, and called last. Using
prependOnceListener it's added, and called, before other listeners.
emitter.removeAllListeners()
Removes all listeners of an EventEmitter object listening to a specific event:
door.removeAllListeners('open')
emitter.removeListener()
Remove a specific listener. You can do this by saving the callback function to a variable, when added, so you can reference it later:
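A minimal sketch, reusing the door emitter:
const doSomething = () => {}
door.on('open', doSomething)
door.removeListener('open', doSomething)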
emitter.setMaxListeners()
Sets the maximum amount of listeners one can add to an EventEmitter object, which defaults to 10 but can be increased or lowered.
door.setMaxListeners(50)
The Node.js os module
This module provides many functions that you can use to retrieve information from the underlying operating system and the computer the program runs on, and interact with it.
const os = require('os')
There are a few useful properties that tell us some key things related to handling files:
os.EOL gives the line delimiter sequence. It's \n on Linux and macOS, and \r\n on Windows.
os.constants.signals tells us all the constants related to handling process signals, like SIGHUP,
SIGKILL and so on.
os.constants.errno sets the constants for error reporting, like EADDRINUSE, EOVERFLOW and more.
os.arch()
Return the string that identifies the underlying architecture, like arm , x64 , arm64 .
os.cpus()
Return information on the CPUs available on your system.
Example:
[
{
model: 'Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz',
speed: 2400,
times: {
user: 281685380,
nice: 0,
sys: 187986530,
idle: 685833750,
irq: 0
}
},
{
model: 'Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz',
speed: 2400,
times: {
user: 282348700,
nice: 0,
sys: 161800480,
idle: 703509470,
irq: 0
}
}
]
os.endianness()
Return BE or LE depending on whether Node.js was compiled with Big Endian or Little Endian.
os.freemem()
Return the number of bytes that represent the free memory in the system.
os.homedir()
Return the path to the home directory of the current user.
Example:
'/Users/joe'
os.hostname()
Return the host name.
os.loadavg()
Return the calculation made by the operating system on the load average. It only returns a meaningful value on Linux and macOS.
os.networkInterfaces()
Returns the details of the network interfaces available on your system.
Example:
{ lo0:
[ { address: '127.0.0.1',
netmask: '255.0.0.0',
family: 'IPv4',
mac: 'fe:82:00:00:00:00',
internal: true },
{ address: '::1',
netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
family: 'IPv6',
mac: 'fe:82:00:00:00:00',
scopeid: 0,
internal: true },
{ address: 'fe80::1',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'fe:82:00:00:00:00',
scopeid: 1,
internal: true } ],
en1:
[ { address: 'fe82::9b:8282:d7e6:496e',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: '06:00:00:02:0e:00',
scopeid: 5,
internal: false },
{ address: '192.168.1.38',
netmask: '255.255.255.0',
family: 'IPv4',
mac: '06:00:00:02:0e:00',
internal: false } ],
utun0:
[ { address: 'fe80::2513:72bc:f405:61d0',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'fe:80:00:20:00:00',
scopeid: 8,
internal: false } ] }
os.platform()
Return the platform that Node.js was compiled for. Some of the possible values:
darwin
freebsd
linux
openbsd
win32
...more
os.release()
Returns a string that identifies the operating system release number.
os.tmpdir()
Returns the path to the assigned temp folder.
os.totalmem()
Returns the number of bytes that represent the total memory available in the system.
os.type()
Identifies the operating system:
Linux on Linux
Darwin on macOS
Windows_NT on Windows
os.uptime()
Returns the number of seconds the computer has been running since it was last rebooted.
os.userInfo()
Returns an object that contains the current username , uid , gid , shell , and homedir
The Node.js path module
The path module provides a lot of very useful functionality to access and interact with the file system.
There is no need to install it. Being part of the Node.js core, it can be used by simply requiring it:
const path = require('path')
This module provides path.sep which provides the path segment separator ( \ on Windows, and / on
Linux / macOS), and path.delimiter which provides the path delimiter ( ; on Windows, and : on
Linux / macOS).
path.basename()
Return the last portion of a path. A second parameter can filter out the file extension:
require('path').basename('/test/something') //something
require('path').basename('/test/something.txt') //something.txt
require('path').basename('/test/something.txt', '.txt') //something
path.dirname()
Return the directory part of a path:
require('path').dirname('/test/something') // /test
require('path').dirname('/test/something/file.txt') // /test/something
path.extname()
Return the extension part of a path:
require('path').extname('/test/something') // ''
require('path').extname('/test/something/file.txt') // '.txt'
path.format()
Composes a path string from an object. Example:
// WINDOWS
require('path').format({ dir: 'C:\\Users\\joe', base: 'test.txt' }) // 'C:\\Users\\joe\\test.txt'
path.isAbsolute()
require('path').isAbsolute('/test/something') // true
require('path').isAbsolute('./test/something') // false
path.join()
Joins two or more parts of a path:
require('path').join('/users', 'joe', 'notes.txt') // '/users/joe/notes.txt'
path.normalize()
Tries to calculate the actual path when it contains relative specifiers like . or .. , or double slashes:
require('path').normalize('/users/joe/..//test.txt') //'/users/test.txt'
path.parse()
Example:
require('path').parse('/users/test.txt')
results in
{
root: '/',
dir: '/users',
base: 'test.txt',
ext: '.txt',
name: 'test'
}
path.relative()
Accepts 2 paths as arguments. Returns the relative path from the first path to the second, based on the
current working directory.
Example:
require('path').relative('/Users/joe', '/Users/joe/test.txt') // 'test.txt'
require('path').relative('/Users/joe', '/Users/joe/something/test.txt') // 'something/test.txt'
path.resolve()
You can get the absolute path calculation of a relative path using path.resolve() :
path.resolve('joe.txt') // '/Users/joe/joe.txt' if run from my home folder
By specifying a second parameter, resolve will use the first as a base for the second:
path.resolve('tmp', 'joe.txt') // '/Users/joe/tmp/joe.txt' if run from my home folder
If the first parameter starts with a slash, that means it's an absolute path:
path.resolve('/etc', 'joe.txt') // '/etc/joe.txt'
Working with folders in Node.js
The Node.js fs core module provides many handy methods you can use to work with folders.
Create a new folder
Use fs.mkdir() or fs.mkdirSync() to create a new folder. This example first checks, with fs.existsSync() , that the folder doesn't already exist:
const fs = require('fs')
const folderName = '/Users/joe/test'
try {
if (!fs.existsSync(folderName)) {
fs.mkdirSync(folderName)
}
} catch (err) {
console.error(err)
}
Read the content of a directory
Use fs.readdir() or fs.readdirSync() to read the contents of a directory.
This piece of code reads the content of a folder, both files and subfolders, and returns their relative path:
const fs = require('fs')
const path = require('path')
const folderPath = '/Users/joe'
fs.readdirSync(folderPath)
To get the full path, map each name onto path.join() :
fs.readdirSync(folderPath).map(fileName => {
return path.join(folderPath, fileName)
})
You can also filter the results to only return the files, and exclude the folders:
const isFile = fileName => {
return fs.lstatSync(fileName).isFile()
}
fs.readdirSync(folderPath).map(fileName => {
return path.join(folderPath, fileName)
}).filter(isFile)
Rename a folder
Use fs.rename() or fs.renameSync() to rename a folder. The first parameter is the current path, the second the new path:
const fs = require('fs')
try {
fs.renameSync('/Users/joe', '/Users/roger')
} catch (err) {
console.error(err)
}
Reading files with Node.js
You can read files using fs.readFile() or, synchronously, fs.readFileSync() . For example:
const fs = require('fs')
try {
const data = fs.readFileSync('/Users/joe/test.txt', 'utf8')
console.log(data)
} catch (err) {
console.error(err)
}
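For comparison, the asynchronous fs.readFile() version of the same read, as a minimal sketch:
const fs = require('fs')

fs.readFile('/Users/joe/test.txt', 'utf8', (err, data) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(data)
})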
Both fs.readFile() and fs.readFileSync() read the full content of the file in memory before returning the data.
This means that big files are going to have a major impact on your memory consumption and speed of execution of the program.
Node.js File Paths
Every file in the system has a path. On Linux and macOS, a path might look like:
/users/joe/file.txt
while Windows computers are different, and have a structure such as:
C:\users\joe\file.txt
You need to pay attention when using paths in your applications, as this difference must be taken into account.
Getting information out of a path
Given a path, you can extract information out of it using these methods:
dirname : get the parent folder of a file
basename : get the filename part
extname : get the file extension
Example:
const path = require('path')
const notes = '/users/joe/notes.txt'
path.dirname(notes) // /users/joe
path.basename(notes) // notes.txt
path.extname(notes) // .txt
You can get the file name without the extension by specifying a second argument to basename :
path.basename(notes, path.extname(notes)) // notes
You can get the absolute path calculation of a relative path using path.resolve() :
path.resolve('joe.txt') // '/Users/joe/joe.txt' if run from my home folder
If the first parameter starts with a slash, that means it's an absolute path:
path.resolve('/etc', 'joe.txt') // '/etc/joe.txt'
path.normalize() is another useful function, that will try and calculate the actual path, when it contains relative specifiers like . or .. , or double slashes:
path.normalize('/users/joe/..//test.txt') //'/users/test.txt'
Neither resolve nor normalize will check if the path exists. They just calculate a path based on the
information they got.
Node.js file stats
Every file comes with a set of details that we can inspect using Node.js, in particular using the stat() method provided by the fs module.
You call it passing a file path, and once Node.js gets the file details it will call the callback function you pass, with 2 parameters: an error message, and the file stats:
const fs = require('fs')
fs.stat('/Users/joe/test.txt', (err, stats) => {
if (err) {
console.error(err)
return
}
//we have access to the file stats in `stats`
})
Node.js also provides a sync method, which blocks the thread until the file stats are ready:
const fs = require('fs')
try {
const stats = fs.statSync('/Users/joe/test.txt')
} catch (err) {
console.error(err)
}
The file information is included in the stats variable. What kind of information can we extract using the stats?
A lot, including:
if the file is a directory or a file, using stats.isFile() and stats.isDirectory()
if the file is a symbolic link, using stats.isSymbolicLink()
the file size in bytes, using stats.size
There are other advanced methods, but the bulk of what you'll use in your day-to-day programming is this.
const fs = require('fs')
fs.stat('/Users/joe/test.txt', (err, stats) => {
if (err) {
console.error(err)
return
}
stats.isFile() //true
stats.isDirectory() //false
stats.isSymbolicLink() //false
stats.size //1024000 //= 1MB
})
Working with file descriptors in Node.js
A file descriptor is what's returned by opening the file using the open() method offered by the fs module:
const fs = require('fs')
try {
const fd = fs.openSync('/Users/joe/test.txt', 'r') // 'r' means: open the file for reading
} catch (err) {
console.error(err)
}
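The callback-based fs.open() works the same way, handing the descriptor to its callback; a minimal sketch:
const fs = require('fs')

fs.open('/Users/joe/test.txt', 'r', (err, fd) => {
  if (err) {
    console.error(err)
    return
  }
  // fd is our file descriptor
})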
Once you get the file descriptor, in whatever way you choose, you can perform all the operations that require it, like calling fs.close() and many other operations that interact with the filesystem.
npm global or local packages
The main difference between local and global packages is this:
local packages are installed in the directory where you run npm install <package-name> , and
they are put in the node_modules folder under this directory
global packages are all put in a single place in your system (exactly where depends on your setup),
regardless of where you run npm install -g <package-name>
In your code you can only require local packages:
require('package-name')
This makes sure you can have dozens of applications on your computer, all running a different version of each package if needed.
Updating a global package would make all your projects use the new release, and as you can imagine this
might cause nightmares in terms of maintenance, as some packages might break compatibility with
further dependencies, and so on.
All projects have their own local version of a package. Even if this might appear like a waste of resources, it's minimal compared to the possible negative consequences.
A package should be installed globally when it provides an executable command that you run from the
shell (CLI), and it's reused across projects.
You can also install executable commands locally and run them using npx, but some packages are just
better installed globally.
Great examples of popular global packages which you might know are
npm
create-react-app
vue-cli
grunt-cli
mocha
react-native-cli
gatsby-cli
forever
nodemon
You probably have some packages installed globally already on your system. You can see them by running:
npm list -g --depth 0
Uninstalling npm packages
To uninstall a package you have previously installed locally, run npm uninstall <package-name>
from the project root folder (the folder that contains the node_modules folder).
Using the -S flag, or --save , this operation will also remove the reference in the package.json file.
If the package was a development dependency, listed in the devDependencies of the package.json file, you must use the -D / --save-dev flag to remove it from the file:
npm uninstall -D <package-name>
If the package is installed globally, you need to add the -g / --global flag:
for example:
npm uninstall -g webpack
and you can run this command from anywhere you want on your system because the folder where you
currently are does not matter.
If you work with JavaScript, or you've ever interacted with a JavaScript project, Node.js or a frontend project, you have surely met the package.json file.
What's that for? What should you know about it, and what are some of the cool things you can do with it?
The package.json file is kind of a manifest for your project. It can do a lot of things, completely unrelated to each other. It's a central repository of configuration for tools, for example. It's also where npm and yarn store the names and versions for all the installed packages.
The file structure
Here's an example package.json file:
{}
It's empty! There are no fixed requirements of what should be in a package.json file, for an application. The only requirement is that it respects the JSON format, otherwise it cannot be read by programs that try to access its properties programmatically.
If you're building a Node.js package that you want to distribute over npm things change radically, and you
must have a set of properties that will help other people use it. We'll see more about this later on.
This is another package.json:
{
"name": "test-project"
}
It defines a name property, which tells the name of the app, or package, that's contained in the same folder where this file lives.
Here's a much more complex example, which was extracted from a sample Vue.js application:
{
"name": "test-project",
"version": "1.0.0",
"description": "A Vue.js project",
"main": "src/main.js",
"private": true,
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"unit": "jest --config test/unit/jest.conf.js --coverage",
"test": "npm run unit",
"lint": "eslint --ext .js,.vue src test/unit",
"build": "node build/build.js"
},
"dependencies": {
"vue": "^2.5.2"
},
"devDependencies": {
"autoprefixer": "^7.1.2",
"babel-core": "^6.22.1",
"babel-eslint": "^8.2.1",
"babel-helper-vue-jsx-merge-props": "^2.0.3",
"babel-jest": "^21.0.2",
"babel-loader": "^7.1.1",
"babel-plugin-dynamic-import-node": "^1.2.0",
"babel-plugin-syntax-jsx": "^6.18.0",
"babel-plugin-transform-es2015-modules-commonjs": "^6.26.0",
"babel-plugin-transform-runtime": "^6.22.0",
"babel-plugin-transform-vue-jsx": "^3.5.0",
"babel-preset-env": "^1.3.2",
"babel-preset-stage-2": "^6.22.0",
"chalk": "^2.0.1",
"copy-webpack-plugin": "^4.0.1",
"css-loader": "^0.28.0",
"eslint": "^4.15.0",
"eslint-config-airbnb-base": "^11.3.0",
"eslint-friendly-formatter": "^3.0.0",
"eslint-import-resolver-webpack": "^0.8.3",
"eslint-loader": "^1.7.1",
"eslint-plugin-import": "^2.7.0",
"eslint-plugin-vue": "^4.0.0",
"extract-text-webpack-plugin": "^3.0.0",
"file-loader": "^1.1.4",
"friendly-errors-webpack-plugin": "^1.6.1",
"html-webpack-plugin": "^2.30.1",
"jest": "^22.0.4",
"jest-serializer-vue": "^0.3.0",
"node-notifier": "^5.1.2",
"optimize-css-assets-webpack-plugin": "^3.2.0",
"ora": "^1.2.0",
"portfinder": "^1.0.13",
"postcss-import": "^11.0.0",
"postcss-loader": "^2.0.8",
"postcss-url": "^7.2.1",
"rimraf": "^2.6.0",
"semver": "^5.3.0",
"shelljs": "^0.7.6",
"uglifyjs-webpack-plugin": "^1.1.1",
"url-loader": "^0.5.8",
"vue-jest": "^1.0.2",
"vue-loader": "^13.3.0",
"vue-style-loader": "^3.0.1",
"vue-template-compiler": "^2.5.2",
"webpack": "^3.6.0",
"webpack-bundle-analyzer": "^2.9.0",
"webpack-dev-server": "^2.9.1",
"webpack-merge": "^4.1.0"
},
"engines": {
"node": ">= 6.0.0",
"npm": ">= 3.0.0"
},
"browserslist": ["> 1%", "last 2 versions", "not ie <= 8"]
}
All those properties are used by either npm or other tools that we can use.
Properties breakdown
This section describes the properties you can use in detail. We refer to "package" but the same thing
applies to local applications which you do not use as packages.
Most of those properties are only used on https://www.npmjs.com/ , others by scripts that interact with your code, like npm or others.
name
Example:
"name": "test-project"
The name must be less than 214 characters, must not have spaces, and can only contain lowercase letters, hyphens ( - ) or underscores ( _ ).
This is because when a package is published on npm , it gets its own URL based on this property.
If you published this package publicly on GitHub, a good value for this property is the GitHub repository
name.
author
Lists the package author name
Example:
{
"author": "Joe <[email protected]> (https://whatever.com)"
}
{
"author": {
"name": "Joe",
"email": "[email protected]",
"url": "https://whatever.com"
}
}
contributors
As well as the author, the project can have one or more contributors. This property is an array that lists
them.
Example:
{
"contributors": ["Joe <[email protected]> (https://whatever.com)"]
}
Can also be used with this format:
{
"contributors": [
{
"name": "Joe",
"email": "[email protected]",
"url": "https://whatever.com"
}
]
}
bugs
Links to the package issue tracker, most likely a GitHub issues page
Example:
{
"bugs": "https://github.com/whatever/package/issues"
}
homepage
Sets the package homepage.
Example:
{
"homepage": "https://whatever.com/package"
}
version
Indicates the current version of the package.
Example:
"version": "1.0.0"
This property follows the semantic versioning (semver) notation for versions, which means the version is always expressed with 3 numbers: x.x.x .
The first number is the major version, the second the minor version and the third is the patch version.
There is a meaning in these numbers: a release that only fixes bugs is a patch release, a release that introduces backward-compatible changes is a minor release, and a major release can have breaking changes.
license
Indicates the license of the package.
Example:
"license": "MIT"
keywords
This property contains an array of keywords that associate with what your package does.
Example:
"keywords": [
"email",
"machine learning",
"ai"
]
This helps people find your package when navigating similar packages, or when browsing the https://www.npmjs.com/ website.
description
This property contains a brief description of the package.
Example:
"description": "A package to work with strings"
This is especially useful if you decide to publish your package to npm so that people can find out what the package is about.
repository
Specifies where this package repository is located.
Example:
"repository": "github:whatever/testing",
Notice the github prefix. There are other popular services baked in:
"repository": "gitlab:whatever/testing",
"repository": "bitbucket:whatever/testing",
"repository": {
"type": "git",
"url": "https://github.com/whatever/testing.git"
}
"repository": {
"type": "svn",
"url": "..."
}
main
Sets the entry point for the package. When you import this package in an application, that's where the application will search for the module exports.
Example:
"main": "src/main.js"
private
If set to true , prevents the app/package from being accidentally published on npm .
Example:
"private": true
scripts
Defines a set of node scripts you can run.
Example:
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"unit": "jest --config test/unit/jest.conf.js --coverage",
"test": "npm run unit",
"lint": "eslint --ext .js,.vue src test/unit",
"build": "node build/build.js"
}
These scripts are command line applications. You can run them by calling npm run XXXX or yarn XXXX ,
where XXXX is the command name. Example: npm run dev .
You can use any name you want for a command, and scripts can do literally anything you want.
dependencies
Sets a list of npm packages installed as dependencies. When you install a package using npm install <package-name> (or yarn add <package-name> ), that package is automatically inserted in this list.
Example:
"dependencies": {
"vue": "^2.5.2"
}
devDependencies
Sets a list of npm packages installed as development dependencies.
They differ from dependencies because they are meant to be installed only on a development machine, not needed to run the code in production.
When you install such a package using npm install <package-name> -D (or yarn add --dev <package-name> ), it is added to this list.
Example:
"devDependencies": {
"autoprefixer": "^7.1.2",
"babel-core": "^6.22.1"
}
engines
Sets which versions of Node.js and other commands this package/app works on.
Example:
"engines": {
"node": ">= 6.0.0",
"npm": ">= 3.0.0",
"yarn": "^0.13.0"
}
browserslist
Is used to tell which browsers (and their versions) you want to support. It's referenced by Babel, Autoprefixer, and other tools, to only add the polyfills and fallbacks needed for the browsers you target.
Example:
"browserslist": [
"> 1%",
"last 2 versions",
"not ie <= 8"
]
This configuration means you want to support the last 2 major versions of all browsers with at least 1% of usage (from the CanIUse.com stats), except IE8 and lower.
Command-specific properties
The package.json file can also host command-specific configuration, for example for Babel, ESLint, and more.
Each has a specific property, like eslintConfig , babel and others. Those are command-specific, and you can find how to use those in the respective command/project documentation.
Package versions
You have seen in the description above version numbers like these: ~3.0.0 or ^0.13.0 . What do they mean, and which other version specifiers can you use?
That symbol speci es which updates your package accepts, from that dependency.
Given that using semver (semantic versioning) all versions have 3 digits, the first being the major release, the second the minor release and the third the patch release, you have these rules:
~ : accepts only patch releases ( ~0.13.0 matches 0.13.1 but not 0.14.0 )
^ : accepts changes that do not modify the left-most non-zero digit ( ^1.13.0 matches 1.13.1 and 1.14.0 , but not 2.0.0 )
no symbol: only the exact version specified is accepted
You can combine most of the versions in ranges, like this: 1.0.0 || >=1.1.0 <1.2.0 , to either use 1.0.0
or one release from 1.1.0 up, but lower than 1.2.0.
The package-lock.json file
What's that? You probably know about the package.json file, which is much more common and has
been around for much longer.
The goal of the package-lock.json file is to keep track of the exact version of every package that is installed, so that a product is 100% reproducible in the same way even if packages are updated by their maintainers.
This solves a very specific problem that package.json left unsolved. In package.json you can set which versions you want to upgrade to (patch or minor), using the semver notation, for example:
if you write ~0.13.0 , you want to only update patch releases: 0.13.1 is ok, but 0.14.0 is not.
if you write ^0.13.0 , you want to update patch and minor releases; note that with a leading 0 the caret actually only allows patch updates ( 0.13.1 is ok, but 0.14.0 is not), while with ^1.13.0 minor releases such as 1.14.0 would be accepted too.
if you write 0.13.0 , that is the exact version that will be used, always
You don't commit to Git your node_modules folder, which is generally huge, and when you try to replicate
the project on another machine by using the npm install command, if you specified the ~ syntax and
a patch release of a package has been released, that one is going to be installed. Same for ^ and minor
releases.
If you specify exact versions, like 0.13.0 in the example, you are not affected by this
problem.
It could be you, or another person trying to initialize the project on the other side of the world by running
npm install .
So your original project and the newly initialized project are actually different. Even if a patch or minor
release should not introduce breaking changes, we all know bugs can (and so, they will) slide in.
The package-lock.json sets your currently installed version of each package in stone, and npm will
use those exact versions when running npm install .
This concept is not new, and other programming languages' package managers (like Composer in PHP) have used a similar system for years.
The package-lock.json file needs to be committed to your Git repository, so it can be fetched by other
people, if the project is public or you have collaborators, or if you use Git as a source for deployments.
The dependencies versions will be updated in the package-lock.json file when you run npm update .
An example
This is an example structure of a package-lock.json file we get when we run npm install cowsay in
an empty folder:
{
  "requires": true,
  "lockfileVersion": 1,
  "dependencies": {
    "ansi-regex": {
      "version": "3.0.0",
      "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz",
      "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg="
    },
    "cowsay": {
      "version": "1.3.1",
      "resolved": "https://registry.npmjs.org/cowsay/-/cowsay-1.3.1.tgz",
      "integrity": "sha512-3PVFe6FePVtPj1HTeLin9v8WyLl+VmM1l1H/5P+BTTDkMAjufp+0F9eLjzRnOHzVAYeIYFF5po5NjRrgefnRMQ==",
      "requires": {
        "get-stdin": "^5.0.1",
        "optimist": "~0.6.1",
        "string-width": "~2.1.1",
        "strip-eof": "^1.0.0"
      }
    },
    "get-stdin": {
      "version": "5.0.1",
      "resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-5.0.1.tgz",
      "integrity": "sha1-Ei4WFZHiH/TFJTAwVpPyDmOTo5g="
    },
    "is-fullwidth-code-point": {
      "version": "2.0.0",
      "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-2.0.0.tgz",
      "integrity": "sha1-o7MKXE8ZkYMWeqq5O+764937ZU8="
    },
    "minimist": {
      "version": "0.0.10",
      "resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz",
      "integrity": "sha1-3j+YVD2/lggr5IrRoMfNqDYwHc8="
    },
    "optimist": {
      "version": "0.6.1",
      "resolved": "https://registry.npmjs.org/optimist/-/optimist-0.6.1.tgz",
      "integrity": "sha1-2j6nRob6IaGaERwybpDrFaAZZoY=",
      "requires": {
        "minimist": "~0.0.1",
        "wordwrap": "~0.0.2"
      }
    },
    "string-width": {
      "version": "2.1.1",
      "resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz",
      "integrity": "sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLNIOlqg4F2yrT1MVa",
      "requires": {
        "is-fullwidth-code-point": "^2.0.0",
        "strip-ansi": "^4.0.0"
      }
    },
    "strip-ansi": {
      "version": "4.0.0",
      "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz",
      "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=",
      "requires": {
        "ansi-regex": "^3.0.0"
      }
    },
    "strip-eof": {
      "version": "1.0.0",
      "resolved": "https://registry.npmjs.org/strip-eof/-/strip-eof-1.0.0.tgz",
      "integrity": "sha1-u0P/VZim6wXYm1n80SnJgzE2Br8="
    },
    "wordwrap": {
      "version": "0.0.3",
      "resolved": "https://registry.npmjs.org/wordwrap/-/wordwrap-0.0.3.tgz",
      "integrity": "sha1-o9XabNXAvAAI03I0u68b7WMFkQc="
    }
  }
}
The cowsay package we installed depends on 4 packages:
get-stdin
optimist
string-width
strip-eof
In turn, those packages require other packages, as we can see from the requires property that some
have:
ansi-regex
is-fullwidth-code-point
minimist
wordwrap
strip-eof
They are added in alphabetical order into the file, and each one has a version field, a resolved field that points to the package location, and an integrity string that we can use to verify the package.
Expose functionality from a Node.js file using exports
A Node.js file can import functionality that another file exposes. Use
const library = require('./library')
to import the functionality exposed in the library.js file that resides in the current file folder.
In this file, functionality must be exposed before it can be imported by other files.
Any other object or variable defined in the file is by default private and not exposed to the outer world.
This is what the module.exports API offered by the module system allows us to do.
When you assign an object or a function as a new exports property, that is the thing that's being
exposed, and as such, it can be imported in other parts of your app, or in other apps as well.
You can do this in two ways. The first is to assign an object to module.exports , which is an object provided out of the box by the module system, and this will make your file export just that object:
// car.js
const car = {
brand: 'Ford',
model: 'Fiesta'
}
module.exports = car
// index.js
const car = require('./car')
The second way is to add the exported object as a property of exports . This way allows you to export
multiple objects, functions or data:
const car = {
brand: 'Ford',
model: 'Fiesta'
}
exports.car = car
or directly
exports.car = {
brand: 'Ford',
model: 'Fiesta'
}
And in the other file, you'll use it by referencing a property of your import:
const items = require('./items') // assuming the exports.car assignment above lives in items.js
const car = items.car
or
const car = require('./items').car
The first approach exposes the object module.exports points to. The latter exposes the properties of the object it points to.
Accept input from the command line in Node.js
How do you make a Node.js CLI program interactive? Node.js since version 7 provides the readline module to perform exactly this: get input from a readable stream such as the process.stdin stream, which during the execution of a Node.js program is the terminal input, one line at a time.
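A minimal sketch of such a program:
const readline = require('readline').createInterface({
  input: process.stdin,
  output: process.stdout
})

readline.question(`What's your name?`, name => {
  console.log(`Hi ${name}!`)
  readline.close()
})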
This piece of code asks for the username, and once the text is entered and the user presses enter, we send a greeting.
The question() method shows the first parameter (a question) and waits for the user input. It calls the callback function once enter is pressed.
readline offers several other methods, and I'll let you check them out on the package documentation linked above.
If you need to require a password, it's best not to echo it back, but instead show a * symbol.
The simplest way is to use the readline-sync package which is very similar in terms of the API and
handles this out of the box.
A more abstract and complete solution is provided by the Inquirer.js package. You can install it using npm install inquirer , and then you can replicate the above code like this:
const inquirer = require('inquirer')
var questions = [
{
type: 'input',
name: 'name',
message: "What's your name?"
}
]
inquirer.prompt(questions).then(answers => {
console.log(`Hi ${answers['name']}!`)
})
Inquirer.js lets you do many things like asking multiple choices, having radio buttons, confirmations, and
more.
It's worth knowing all the alternatives, especially the built-in ones provided by Node.js, but if you plan to
take CLI input to the next level, Inquirer.js is an optimal choice.
Output to the command line using Node.js
The most basic and most used method is console.log() , which prints the string you pass to it to the
console.
You can pass multiple variables to console.log , for example:
const x = 'x'
const y = 'y'
console.log(x, y)
We can also format pretty phrases by passing variables and a format specifier.
For example:
console.log('My %s has %d years', 'cat', 2)
For example, the %o specifier prints an object representation:
console.log('%o', Number)
Counting elements
console.count() is a handy method.
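For example, counting how many times each string is printed (a small sketch):
const oranges = ['orange', 'orange']
const apples = ['just one apple']

oranges.forEach(fruit => {
  console.count(fruit)
})
apples.forEach(fruit => {
  console.count(fruit)
})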
Print the stack trace
console.trace() prints the stack trace of where it is called. For example:
const function2 = () => console.trace()
const function1 = () => function2()
function1()
This is what's printed if we try this in the Node.js REPL:
Trace
at function2 (repl:1:33)
at function1 (repl:1:25)
at repl:1:1
at ContextifyScript.Script.runInThisContext (vm.js:44:33)
at REPLServer.defaultEval (repl.js:239:29)
at bound (domain.js:301:14)
at REPLServer.runBound [as eval] (domain.js:314:12)
at REPLServer.onLine (repl.js:440:10)
at emitOne (events.js:120:20)
at REPLServer.emit (events.js:210:7)
console.error prints to the stderr stream instead. It will not appear in the console, but it will appear in the error log.
You can color the output of your text in the console by using escape sequences. Example:
console.log('\x1b[33m%s\x1b[0m', 'hi!')
You can try that in the Node.js REPL, and it will print hi! in yellow.
However, this is the low-level way to do this. The simplest way to go about coloring the console output is
by using a library. Chalk is such a library, and in addition to coloring it also helps with other styling
facilities, like making text bold, italic or underlined.
You install it with npm install chalk , then you can use it:
Using chalk.yellow is much more convenient than trying to remember the escape codes, and the code
is much more readable.
Check the Chalk project documentation for more usage examples.
Create a progress bar
You can create a progress bar in the console using a package such as progress . The snippet sketched below creates a 10-step progress bar, and every 100ms one step is completed. When the bar completes we clear the interval:
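A sketch of that snippet, assuming the progress package is installed ( npm install progress ):
const ProgressBar = require('progress')

const bar = new ProgressBar(':bar', { total: 10 })
const timer = setInterval(() => {
  bar.tick()
  if (bar.complete) {
    clearInterval(timer)
  }
}, 100)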
How to read environment variables from Node.js
The process core module of Node.js provides the env property, which hosts all the environment variables that were set at the moment the process was started.
Here is an example that accesses the NODE_ENV environment variable, which is set to development by default.
process.env.NODE_ENV // "development"
Setting it to "production" before the script runs will tell Node.js that this is a production environment.
In the same way you can access any custom environment variable you set.
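For example, with a hypothetical USER_ID variable set when launching the program:
// launched as: USER_ID=239482 node app.js
process.env.USER_ID // "239482"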
Get HTTP request body data using Node.js
Here is how you can extract the data that was sent as JSON in the request body.
If you are using Express, that's quite simple: use the express.json() middleware (which is built on the body-parser Node.js module).
For example, say you send this request with axios:
axios.post('https://whatever.com/todos', {
todo: 'Buy the milk'
})
On the server side, the matching Express code looks like this:
const express = require('express')
const app = express()

app.use(
  express.urlencoded({
    extended: true
  })
)
app.use(express.json())
app.post('/todos', (req, res) => {
console.log(req.body.todo)
})
If you're not using Express and you want to do this in vanilla Node.js, you need to do a bit more work, of
course, as Express abstracts a lot of this for you.
The key thing to understand is that when you initialize the HTTP server using http.createServer() , the
callback is called when the server got all the HTTP headers, but not the request body.
So, we must listen for the body content to be processed, and it's processed in chunks.
We first get the data by listening to the stream data events, and when the data ends, the stream end
event is called, once:
So to access the data, assuming we expect to receive a string, we must concatenate the chunks into a
string when listening to the stream data , and when the stream end , we parse the string to JSON:
const server = http.createServer((req, res) => {
let data = '';
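A minimal completion of that snippet (assuming, as in the axios example above, a JSON body with a todo property):
const http = require('http')

const server = http.createServer((req, res) => {
  let data = ''
  req.on('data', chunk => {
    data += chunk
  })
  req.on('end', () => {
    console.log(JSON.parse(data).todo) // 'Buy the milk'
    res.end()
  })
})
server.listen(3000)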
Node.js v15.13.0 documentation
Path
Stability: 2 - Stable
The path module provides utilities for working with file and directory paths. It can be accessed using:
const path = require('path');
The default operation of the path module varies based on the operating system on which a Node.js application is running. For example, on POSIX:
path.basename('C:\\temp\\myfile.html');
// Returns: 'C:\\temp\\myfile.html'
On Windows:
path.basename('C:\\temp\\myfile.html');
// Returns: 'myfile.html'
To achieve consistent results when working with Windows file paths on any operating system, use path.win32 :
path.win32.basename('C:\\temp\\myfile.html');
// Returns: 'myfile.html'
To achieve consistent results when working with POSIX file paths on any operating system, use path.posix :
path.posix.basename('/tmp/myfile.html');
// Returns: 'myfile.html'
On Windows Node.js follows the concept of per-drive working directory. This behavior can be observed when using a drive path without a backslash. For example, path.resolve('C:\\')
can potentially return a different result than path.resolve('C:') . For more information, see this MSDN page .
path.basename(path[, ext])
path <string>
ext <string> An optional file extension
Returns: <string>
The path.basename() method returns the last portion of a path , similar to the Unix basename command. Trailing directory separators are ignored, see path.sep .
path.basename('/foo/bar/baz/asdf/quux.html');
// Returns: 'quux.html'
path.basename('/foo/bar/baz/asdf/quux.html', '.html');
// Returns: 'quux'
Although Windows usually treats file names, including file extensions, in a case-insensitive manner, this function does not. For example, C:\\foo.html and C:\\foo.HTML refer to the same
file, but basename treats the extension as a case-sensitive string:
path.win32.basename('C:\\foo.html', '.html');
// Returns: 'foo'
path.win32.basename('C:\\foo.HTML', '.html');
// Returns: 'foo.HTML'
A TypeError is thrown if path is not a string or if ext is given and is not a string.
path.delimiter
<string>
Provides the platform-specific path delimiter:
; for Windows
: for POSIX
For example, on POSIX:
console.log(process.env.PATH);
// Prints: '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'
process.env.PATH.split(path.delimiter);
// Returns: ['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']
On Windows:
console.log(process.env.PATH);
// Prints: 'C:\Windows\system32;C:\Windows;C:\Program Files\node\'
process.env.PATH.split(path.delimiter);
// Returns ['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\node\\']
path.dirname(path)
path <string>
Returns: <string>
The path.dirname() method returns the directory name of a path , similar to the Unix dirname command. Trailing directory separators are ignored, see path.sep .
path.dirname('/foo/bar/baz/asdf/quux');
// Returns: '/foo/bar/baz/asdf'
path.extname(path)
path <string>
Returns: <string>
The path.extname() method returns the extension of the path , from the last occurrence of the . (period) character to end of string in the last portion of the path . If there is no . in the
last portion of the path , or if there are no . characters other than the first character of the basename of path (see path.basename() ) , an empty string is returned.
path.extname('index.html');
// Returns: '.html'
path.extname('index.coffee.md');
// Returns: '.md'
path.extname('index.');
// Returns: '.'
path.extname('index');
// Returns: ''
path.extname('.index');
// Returns: ''
path.extname('.index.md');
// Returns: '.md'
path.format(pathObject)
pathObject <Object>
dir <string>
root <string>
base <string>
name <string>
ext <string>
Returns: <string>
The path.format() method returns a path string from an object. This is the opposite of path.parse() .
When providing properties to the pathObject remember that there are combinations where one property has priority over another:
pathObject.root is ignored if pathObject.dir is provided
pathObject.ext and pathObject.name are ignored if pathObject.base exists
On Windows:
path.format({
dir: 'C:\\path\\dir',
base: 'file.txt'
});
// Returns: 'C:\\path\\dir\\file.txt'
path.isAbsolute(path)
path <string>
Returns: <boolean>
The path.isAbsolute() method determines if path is an absolute path. If the given path is a zero-length string, false will be returned.
For example, on POSIX:
path.isAbsolute('/foo/bar'); // true
path.isAbsolute('/baz/..'); // true
path.isAbsolute('qux/'); // false
path.isAbsolute('.'); // false
On Windows:
path.isAbsolute('//server'); // true
path.isAbsolute('\\\\server'); // true
path.isAbsolute('C:/foo/..'); // true
path.isAbsolute('C:\\foo\\..'); // true
path.isAbsolute('bar\\baz'); // false
path.isAbsolute('bar/baz'); // false
path.isAbsolute('.'); // false
path.join([...paths])
...paths <string> A sequence of path segments
Returns: <string>
The path.join() method joins all given path segments together using the platform-specific separator as a delimiter, then normalizes the resulting path.
Zero-length path segments are ignored. If the joined path string is a zero-length string then '.' will be returned, representing the current working directory.
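For example, joining several segments and letting join() normalize the '..' away:
path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');
// Returns: '/foo/bar/baz/asdf'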
path.normalize(path)
path <string>
Returns: <string>
The path.normalize() method normalizes the given path , resolving '..' and '.' segments.
When multiple, sequential path segment separation characters are found (e.g. / on POSIX and either \ or / on Windows), they are replaced by a single instance of the platform-specific
path segment separator ( / on POSIX and \ on Windows). Trailing separators are preserved.
If the path is a zero-length string, '.' is returned, representing the current working directory.
path.normalize('/foo/bar//baz/asdf/quux/..');
// Returns: '/foo/bar/baz/asdf'
On Windows:
path.normalize('C:\\temp\\\\foo\\bar\\..\\');
// Returns: 'C:\\temp\\foo\\'
Since Windows recognizes multiple path separators, both separators will be replaced by instances of the Windows preferred separator ( \ ):
path.win32.normalize('C:////temp\\\\/\\/\\/foo/bar');
// Returns: 'C:\\temp\\foo\\bar'
path.parse(path)
path <string>
Returns: <Object>
The path.parse() method returns an object whose properties represent significant elements of the path . Trailing directory separators are ignored, see path.sep .
dir <string>
root <string>
base <string>
name <string>
ext <string>
path.parse('/home/user/dir/file.txt');
// Returns:
// { root: '/',
// dir: '/home/user/dir',
// base: 'file.txt',
// ext: '.txt',
// name: 'file' }
┌─────────────────────┬────────────┐
│ dir │ base │
├──────┬ ├──────┬─────┤
│ root │ │ name │ ext │
" / home/user/dir / file .txt "
└──────┴──────────────┴──────┴─────┘
(All spaces in the "" line should be ignored. They are purely for formatting.)
On Windows:
path.parse('C:\\path\\dir\\file.txt');
// Returns:
// { root: 'C:\\',
// dir: 'C:\\path\\dir',
// base: 'file.txt',
// ext: '.txt',
// name: 'file' }
┌─────────────────────┬────────────┐
│ dir │ base │
├──────┬ ├──────┬─────┤
│ root │ │ name │ ext │
" C:\ path\dir \ file .txt "
└──────┴──────────────┴──────┴─────┘
(All spaces in the "" line should be ignored. They are purely for formatting.)
path.posix
<Object>
The path.posix property provides access to POSIX specific implementations of the path methods.
path.relative(from, to)
from <string>
to <string>
Returns: <string>
The path.relative() method returns the relative path from from to to based on the current working directory. If from and to each resolve to the same path (after calling
path.resolve() on each), a zero-length string is returned.
If a zero-length string is passed as from or to , the current working directory will be used instead of the zero-length strings.
For example, on POSIX:
path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb');
// Returns: '../../impl/bbb'
On Windows:
path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb');
// Returns: '..\\..\\impl\\bbb'
path.resolve([...paths])
...paths <string> A sequence of paths or path segments
Returns: <string>
The path.resolve() method resolves a sequence of paths or path segments into an absolute path.
The given sequence of paths is processed from right to left, with each subsequent path prepended until an absolute path is constructed. For instance, given the sequence of path segments:
/foo , /bar , baz , calling path.resolve('/foo', '/bar', 'baz') would return /bar/baz because 'baz' is not an absolute path but '/bar' + '/' + 'baz' is.
If, after processing all given path segments, an absolute path has not yet been generated, the current working directory is used.
The resulting path is normalized and trailing slashes are removed unless the path is resolved to the root directory.
If no path segments are passed, path.resolve() will return the absolute path of the current working directory.
path.resolve('/foo/bar', './baz');
// Returns: '/foo/bar/baz'
path.resolve('/foo/bar', '/tmp/file/');
// Returns: '/tmp/file'
path.sep
<string>
Provides the platform-specific path segment separator:
\ on Windows
/ on POSIX
For example, on POSIX:
'foo/bar/baz'.split(path.sep);
// Returns: ['foo', 'bar', 'baz']
On Windows:
'foo\\bar\\baz'.split(path.sep);
// Returns: ['foo', 'bar', 'baz']
On Windows, both the forward slash ( / ) and backward slash ( \ ) are accepted as path segment separators; however, the path methods only add backward slashes ( \ ).
path.toNamespacedPath(path)
path <string>
Returns: <string>
On Windows systems only, returns an equivalent namespace-prefixed path for the given path . If path is not a string, path will be returned without modifications.
This method is meaningful only on Windows systems. On POSIX systems, the method is non-operational and always returns path without modifications.
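For example (a sketch using path.win32 so it runs on any platform; the prefix follows the Windows long-path convention):
path.win32.toNamespacedPath('C:\\foo\\bar');
// Returns: '\\\\?\\C:\\foo\\bar'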
path.win32
<Object>
The path.win32 property provides access to Windows-specific implementations of the path methods.
I know there's process.cwd , but that only refers to the directory where the script was called, not the directory of the script itself. For instance, say
I'm in /home/kyle/ and I run the following command:
node /home/kyle/some/dir/file.js
If I call process.cwd() , I get /home/kyle/ , not /home/kyle/some/dir/ . Is there a way to get that directory?
nodejs.org/docs/latest/api/globals.html the documentation link of the accepted answer. – allenhwkim Apr 12 '13 at 15:41
I found it after looking through the documentation again. What I was looking for were the __filename and __dirname module-level
variables.
__filename is the file name of the current module. This is the resolved absolute path of the current module file.
(ex: /home/kyle/some/dir/file.js )
__dirname is the directory name of the current module. (ex: /home/kyle/some/dir )
If you want only the directory name and not the full path, you might do something like this: function getCurrentDirectoryName() { var fullPath = __dirname; var path = fullPath.split('/'); var cwd = path[path.length-1]; return cwd; } – Anthony Martin Oct 30 '13 at 20:34
For those trying @apx solution (like I did:), this solution does not work on Windows. – Laoujin May 7 '15 at 19:33
Use resolve() instead of concatenating with '/' or '\' else you will run into cross-platform issues.
Note: __dirname is the local path of the module or included script. If you are writing a plugin which needs to know the path of the main
script it is:
require.main.filename
require('path').dirname(require.main.filename)
If your goal is just to parse and interact with the json file, you can often do this more easily via var settings = require('./settings.json') . Of course, it's synchronous fs IO, so don't do it at run-time, but at startup time it's fine, and once it's loaded, it'll be cached. – isaacs May 9 '12 at 18:26
@Marc Thanks! For a while now I was hacking my way around the fact that __dirname is local to each module. I have a nested structure in my library and need to know in several places the root of my app. Glad I know how to do this now :D – Thijs Koerselman Feb 28 '13 at 14:34
If you don't consider windows to be a real platform, can we skip resolve? BSD, Macos, linux, tizen, symbian, Solaris, android, flutter, webos all use /
right? – Ray Foss Feb 27 '19 at 18:18
This no longer works with ES modules. – Dan Dascalescu Apr 25 '19 at 0:38
var fs = require('fs');
fs.readFile(process.cwd() + "\\text.txt", function(err, data)
{
if(err)
console.log(err)
else
console.log(data.toString());
});
For those who didn't understand Asynchronous and Synchronous, see this link... stackoverflow.com/a/748235/5287072 – DarckBlezzer Feb 3 '17
at 17:33
this is exactly what the OP doesn't want... the request is for the path of the executable script! – caesarsol Mar 29 '18 at 9:10
Current directory is a very different thing. If you run something like cd /foo; node bar/test.js , current directory would be /foo , but the script is located in /foo/bar/test.js . – rjmunro Jul 5 '18 at 11:20
It's not a good answer. It messes up the logic, because this can be a much shorter path than you expect. – kris_IV Apr 9 '19 at 11:31
Why would you ever do this; if the file were relative to the current directory you could just read text.txt and it would work, you don't need to
construct the absolute path – Michael Mrozek Oct 3 '19 at 3:40
Use __dirname!!
__dirname
The directory name of the current module. This is the same as the path.dirname() of the __filename .
console.log(__dirname);
// Prints: /Users/mjr
console.log(path.dirname(__filename));
// Prints: /Users/mjr
https://nodejs.org/api/modules.html#modules_dirname
This survives symlinks too. So if you create a bin and need to find a file, e.g. path.join(__dirname, "../example.json"); it will still work when your binary is linked in node_modules/.bin – Jason Apr 17 '18 at 17:12
Not only was this answer given years earlier, it also no longer works with ES modules. – Dan Dascalescu Apr 25 '19 at 0:39
process.argv
An array containing the command line arguments. The first element will be 'node', the second element will be the path to
the JavaScript file. The next elements will be any additional command line arguments.
If you need to know the path of a module file then use __filename.
Could the downvoter please explain why this is not recommended? – Tamlyn Jan 15 '16 at 16:57
@Tamlyn Maybe because process.argv[1] applies only to the main script while __filename points to the module file being executed. I updated my answer to emphasize the difference. Still, I see nothing wrong in using process.argv[1] . Depends on one's requirements. – Lukasz Wiktor Jan 16 '16 at 6:40
If the main script was launched with a node process manager like pm2, process.argv[1] will point to the executable of the process manager /usr/local/lib/node_modules/pm2/lib/ProcessContainerFork.js – user3002996 Mar 1 '17 at 11:28
@LukaszWiktor Thanks a lot! Works perfectly with a custom Node.js CLI :-) – bgrand-ch Mar 31 at 16:06
Node.js 10 supports ECMAScript modules, where __dirname and __filename are no longer available.
Then to get the path to the current ES module one has to use:
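import path from 'path';
import { fileURLToPath } from 'url';

// a common sketch: derive __filename/__dirname equivalents from
// import.meta.url via the standard url.fileURLToPath helper
// (available since Node.js v10.12)
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);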
How would I know if I'm writing an ES module or not? Is it just a matter of which Node version I'm running, or if I'm using import/export keywords? –
Ed Brannin Apr 18 '19 at 19:42
ES modules are available only with the --experimental-modules flag. – Nickensoul May 7 '19 at 16:01
--experimental-modules is only required if your node version is < 13.2; just name the file .mjs rather than .js – Brent Apr 12 '20 at 19:56
Thanks, that solved it for me! It looks to me that it'd be great for back-compatibility support. – Gal Grünfeld Dec 1 '20 at 10:27
var settings =
  JSON.parse(
    require('fs').readFileSync(
      require('path').resolve(
        __dirname,
        'settings.json'),
      'utf8'));
Just a note, as of node 0.5 you can just require a JSON file. Of course that wouldn't answer the question. – Kevin Cox Apr 9 '13 at 21:18
__dirname no longer works with ES modules. – Dan Dascalescu Apr 25 '19 at 0:40
Every Node.js program has some global variables in its environment, which represent some information about your process; one of them is __dirname .
Not only was this answer given years earlier, __dirname no longer works with ES modules. – Dan Dascalescu Apr 25 '19 at 0:40
It's about NodeJs 10, but this answer was published in 2016. – Hazarapet Tunanyan May 3 '19 at 7:59
I know this is pretty old, and the original question I was responding to is marked as duplicate and directed here, but I ran into an issue trying to get jasmine-reporters to work and didn't like the idea that I had to downgrade in order for it to work. I found out that jasmine-reporters wasn't resolving the savePath correctly and was actually putting the reports folder output in the jasmine-reporters directory instead of the root directory of where I ran gulp. In order to make this work correctly, I ended up using process.env.INIT_CWD to get the initial Current Working Directory, which should be the directory where you ran gulp. Hope this helps someone.
You can use process.env.PWD to get the current app folder path.
If you are using pkg to package your app, you'll find this expression useful:
require.main.filename holds the full path of the main script, but it's empty when Node runs in interactive mode.
__dirname holds the full path of the current script, so I'm not using it (although it may be what the OP asks; in that case better use
appDirectory = process.pkg ? require('path').dirname(process.execPath) : (__dirname || require('path').dirname(process.argv[0]));
noting that in interactive mode __dirname is empty).
For interactive mode, use either process.argv[0] to get the path to the Node executable or process.cwd() to get the current
directory.
module.exports = entries;
This will find all files in the root of the current directory, require and export every file present with the same export name as the
filename stem.
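A minimal sketch of what such an index file might look like (a hypothetical reconstruction, assuming CommonJS and .js files only):

// index.js - require every .js file in this directory and re-export it
// under its filename stem (hypothetical reconstruction of the answer)
const fs = require('fs');
const path = require('path');

const entries = {};
fs.readdirSync(__dirname).forEach(file => {
  // skip this index file itself and anything that isn't a .js module
  if (path.extname(file) === '.js' && file !== path.basename(__filename)) {
    entries[path.basename(file, '.js')] = require(path.join(__dirname, file));
  }
});

module.exports = entries;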
var path = require('path');
function getCurrentScriptPath () {
  // Relative path from current working directory to the location of this script
  var pathToScript = path.relative(process.cwd(), __filename);
  return pathToScript;
}
__dirname and __filename are no longer available with ES modules. – Dan Dascalescu Apr 25 '19 at 0:41
Create A REST API With JSON Server
Sebastian Eschweiler
Feb 26, 2017
This post has been published first on CodingTheSmartWay.com.
Of course you can set up a full backend server, e.g. by using Node.js, Express and MongoDB. However, this takes some time, and a much simpler approach can help to speed up front-end development time.
JSON Server is a simple project that helps you to set up a REST API with CRUD operations very fast. The project website can be found at https://github.com/typicode/json-server.
In the following you'll learn how to set up JSON Server and publish a sample REST API. Furthermore you'll see how to use another library, Faker.js, to generate fake data for the REST API exposed by JSON Server.
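First, JSON Server is installed globally via npm:

$ npm install -g json-server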
By adding the -g option we make sure that the package is installed globally
on your system.
JSON File
Now let's create a new JSON file named db.json. This file contains the data which should be exposed by the REST API. For objects contained in the JSON structure, CRUD endpoints are created automatically. Take a look at the following sample db.json file:
{
"employees": [
{
"id": 1,
"first_name": "Sebastian",
"last_name": "Eschweiler",
"email": "[email protected]"
},
{
"id": 2,
"first_name": "Steve",
"last_name": "Palmer",
"email": "[email protected]"
},
{
"id": 3,
"first_name": "Ann",
"last_name": "Smith",
"email": "[email protected]"
}
]
}
The JSON structure consists of one employees collection which has three data sets assigned. Each employee object consists of four properties: id, first_name, last_name and email.
As a parameter we need to pass over the file containing our JSON structure (db.json). Furthermore we're using the --watch parameter. By using this parameter we're making sure that the server is started in watch mode, which means that it watches for file changes and updates the exposed API accordingly.
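The resulting command is:

$ json-server --watch db.json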
Now we can open URL http://localhost:3000/employees in the browser and we'll get the following result:
From the output you can see that the employees resource has been recognized correctly. Now you can click on the employees link, and an HTTP GET request to http://localhost:3000/employees shows the following result:
The following HTTP endpoints are created automatically by JSON server:
GET /employees
GET /employees/{id}
POST /employees
PUT /employees/{id}
PATCH /employees/{id}
DELETE /employees/{id}
It's possible to extend URLs with further parameters. E.g. you can apply filtering by using URL parameters, as you can see in the following:
http://localhost:3000/employees?first_name=Sebastian
This returns just one employee object as a result. Or just perform a full-text search over all properties:
http://localhost:3000/employees?q=codingthesmartway
For a full list of available URL parameters take a look at the JSON server documentation: https://github.com/typicode/json-server
POST REQUEST
To create a new employee we need to perform a post request and set the
body content type to JSON (application/json). The new employee object is
entered in JSON format in the body data section:
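For example, with curl (the employee values here are illustrative):

$ curl -X POST -H "Content-Type: application/json" \
  -d '{"id": 4, "first_name": "John", "last_name": "Doe", "email": "john@doe.com"}' \
  http://localhost:3000/employees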
PUT REQUEST
If you want to update or change an existing employee record you can use an HTTP PUT request:
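Again with curl, updating the employee with id 1 (illustrative values):

$ curl -X PUT -H "Content-Type: application/json" \
  -d '{"id": 1, "first_name": "Sebastian", "last_name": "Eschweiler", "email": "new@codingthesmartway.com"}' \
  http://localhost:3000/employees/1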
Mocking Data with Faker.js
So far we’ve entered data exposed by the API manually in a JSON file.
However, if you need a larger amount of data the manual way can be
cumbersome. An easy solution to this problem is to use the Faker.js
(https://github.com/marak/Faker.js/) library to generate fake data.
Integration of Faker.js into JSON server is easy. Just follow the steps below:
$ npm init
// employees.js
var faker = require('faker')
function generateEmployees () {
var employees = []
for (var id = 0; id < 50; id++) {
var firstName = faker.name.firstName()
var lastName = faker.name.lastName()
var email = faker.internet.email()
employees.push({
"id": id,
"first_name": firstName,
"last_name": lastName,
"email": email
})
}
return { "employees": employees }
}
module.exports = generateEmployees
Fake data is generated by using the following Faker.js methods:
faker.name.firstName()
faker.name.lastName()
faker.internet.email()
JSON server requires that we finally export the generateEmployees() function which is responsible for fake data generation. This is done by using the following line of code:
module.exports = generateEmployees
Having added that export, we're able to pass the file employees.js directly to the json-server command:
$ json-server employees.js
Now the exposed REST API gives you access to all 50 employee data sets
created with Faker.js.
json-server-init
Generate JSON database for JSON server using Filltext.com as random JSON data source.
Install
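Presumably via npm (assuming the package is published under the same name as its command):

$ npm install -g json-server-init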
Options
Possible options are:
--name, -n - Specify name of the database JSON file to create (in case of create command) or use (collection command).
Default name if not provided is "db.json".
--help, -h - Show help.
--version, -v - Show version number.
Commands overview
create
Command produces several prompts.
Collection prompt
Prompt for collection name and number of rows renders something like this:
> Collection name and number of rows, 5 if omitted (ex: posts 10):
Valid input would be a new collection name with an optional number separated by a space, indicating how many rows should be generated for this collection. For example, users 10 will generate collection "users" with 10 records in it, sessions will result in collection "sessions" with the default 5 records, etc.
Fields prompt
After the collection name is entered, you need to configure which fields the collection should have:
For example, to generate a users collection with four fields: id, username, name and age, one could enter this command:
Add another
You can add as many collections as necessary: after the fields prompt there is a confirmation asking whether more collections need to be created:
If "y" is entered flow repeats "Collection prompt" step, otherwise it fetches JSON data and saves it to the file.
collection
TODO...
Example
Here is how a typical workflow looks with the create command:
$ json-server-init create
> Collection name and number of rows, 5 if omitted (ex: posts 10): users 2
>> What fields should "users" have?
Comma-separated fieldname:fieldtype pairs (ex: id:index, username:username)
id:index, username:username, motto:lorem|5
> Add another collection? (y/n) n
db.json saved.
{
"users": [
{
"id": 1,
"username": "RGershowitz",
"motto": "curabitur et magna placerat tellus"
},
{
"id": 2,
"username": "NMuroski",
"motto": "ante nullam dolor sit placerat"
}
]
}
Now you can start json-server:
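For example, with the generated db.json:

$ json-server db.json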
License
MIT License © Aliaksandr Astashenkau
json-server-extension
json-server is great for stub server usage, but in my opinion there were some caveats that I tried to solve in this package.
Example
A full example can be found here: https://github.com/maty21/json-server-extension-example
Install
npm i json-server-extension
init example
// options:
// filePath: full path for the combined object
// generatedPath: the path where the generated files will be found
// staticPath: the path where the static files will be found
// (imports added for completeness, assuming the package's default exports)
const jsonServer = require('json-server')
const _jsonExtender = require('json-server-extension')

const jsonExtender = new _jsonExtender({filePath: './db_extends.json',
                                        generatedPath: './generated',
                                        staticPath: './static'})
// register accepts an array of generators or a path to the generator scripts
// const funcs = Object.keys(generators).map(key => generators[key])
jsonExtender.register('../../../generators')
jsonExtender.generate().then((data) => {
  console.log(`wow ${data}`)
  var server = jsonServer.create()
  var router = jsonServer.router('./db_extends.json')
  var middlewares = jsonServer.defaults()
  server.use(middlewares)
  server.use(router)
  server.listen(4000, function () {
    console.log('JSON Server is running')
  })
}).catch((err) => { console.log(err) })
generator Example
// a sketch based on the api notes below: a generator is a curried
// function that receives next, then create
const func = next => create => {
  // call create first (synchronous), then next
}
module.exports = func;
api
constructor
constructor({filePath:'string', generatedPath:'string', staticPath:'string'})
register
register('path name') - a path where the generator scripts will be found; the package will instantiate the scripts automatically
register([...generator scripts]) - an array of your generators after requiring them manually
generate
isRun - there is the ability to not generate the db.json each time; good when you want to save the state after you close the process. The promise will receive the same data, so you will not have to change the code.
promise
resolve -{files:array of combined files, filePath:the combined file path }
reject - error
generator
const func = next => create => {} - the generator should be initiated as follows: first you will have to call create (this is a sync function) and then next.
json-server
0.16.3 • MIT License
Homepage / Repository: github.com/typicode/json-server
Install
npm i json-server
Get a full fake REST API with zero coding in less than 30 seconds (seriously)
Created with <3 for front-end developers who need a quick back-end for prototyping and mocking.
Table of contents
Getting started
Routes
Plural routes
Singular routes
Filter
Paginate
Sort
Slice
Operators
Full-text search
Relationships
Database
Homepage
Extras
Static file server
Alternative port
Getting started
{
"posts": [
{ "id": 1, "title": "json-server", "author": "typicode" }
],
"comments": [
{ "id": 1, "body": "some comment", "postId": 1 }
],
"profile": { "name": "typicode" }
}
Routes
Based on the previous db.json file, here are all the default routes. You can also add other routes using --routes .
Plural routes
GET /posts
GET /posts/1
POST /posts
PUT /posts/1
PATCH /posts/1
DELETE /posts/1
Singular routes
GET /profile
POST /profile
PUT /profile
PATCH /profile
Filter
GET /posts?title=json-server&author=typicode
GET /posts?id=1&id=2
Use . to access deep properties:
GET /comments?author.name=typicode
Paginate
Use _page and optionally _limit to paginate returned data.
In the Link header you'll get first , prev , next and last links.
GET /posts?_page=7
GET /posts?_page=7&_limit=20
Sort
Add _sort and _order (ascending order by default)
GET /posts?_sort=views&_order=asc
GET /posts/1/comments?_sort=votes&_order=asc
GET /posts?_sort=user,views&_order=desc,asc
Slice
Add _start and _end or _limit (an X-Total-Count header is included in the response)
GET /posts?_start=20&_end=30
GET /posts/1/comments?_start=20&_end=30
GET /posts/1/comments?_start=20&_limit=10
Operators
Add _gte or _lte for getting a range
GET /posts?views_gte=10&views_lte=20
Add _ne to exclude a value
GET /posts?id_ne=1
Add _like to filter (RegExp supported)
GET /posts?title_like=server
Full-text search
Add q
GET /posts?q=internet
Relationships
To include children resources, add _embed
GET /posts?_embed=comments
GET /posts/1?_embed=comments
To include parent resource, add _expand
GET /comments?_expand=post
GET /comments/1?_expand=post
To get or create nested resources (by default one level, add custom routes for more)
GET /posts/1/comments
POST /posts/1/comments
Database
GET /db
Homepage
Returns default index file or serves ./public directory
GET /
Extras
Static file server
You can use JSON Server to serve your HTML, JS and CSS, simply create a ./public directory or use --static to set a
different static files directory.
mkdir public
echo 'hello world' > public/index.html
json-server db.json
Alternative port
You can start JSON Server on other ports with the --port flag:
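For example:

$ json-server --watch db.json --port 3004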
Remote schema
You can also load remote schemas:
$ json-server http://example.com/file.json
$ json-server http://jsonplaceholder.typicode.com/db
Generate random data
Using JS instead of a JSON file, you can create data programmatically:
// index.js
module.exports = () => {
const data = { users: [] }
// Create 1000 users
for (let i = 0; i < 1000; i++) {
data.users.push({ id: i, name: `user${i}` })
}
return data
}
$ json-server index.js
Tip: use modules like Faker, Casual, Chance or JSON Schema Faker.
HTTPS
There are many ways to set up SSL in development. One simple way is to use hotel.
Add custom routes
Create a routes.json file. Pay attention to start every route with / .
{
"/api/*": "/$1",
"/:resource/:id/show": "/:resource/:id",
"/posts/:category": "/posts?category=:category",
"/articles\\?id=:id": "/posts/:id"
}
Now you can access resources using additional routes:
/api/posts # → /posts
/api/posts/1 # → /posts/1
/posts/1/show # → /posts/1
/posts/javascript # → /posts?category=javascript
/articles?id=1 # → /posts/1
Add middlewares
You can add your middlewares from the CLI using --middlewares option:
// hello.js
module.exports = (req, res, next) => {
res.header('X-Hello', 'World')
next()
}
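For example:

$ json-server db.json --middlewares ./hello.js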
CLI usage
Options:
--config, -c Path to config file [default: "json-server.json"]
--port, -p Set port [default: 3000]
--host, -H Set host [default: "localhost"]
--watch, -w Watch file(s) [boolean]
--routes, -r Path to routes file
--middlewares, -m Paths to middleware files [array]
--static, -s Set static files directory
--read-only, --ro Allow only GET requests [boolean]
--no-cors, --nc Disable Cross-Origin Resource Sharing [boolean]
--no-gzip, --ng Disable GZIP Content-Encoding [boolean]
--snapshots, -S Set snapshots directory [default: "."]
--delay, -d Add delay to responses (ms)
--id, -i Set database id property (e.g. _id) [default: "id"]
--foreignKeySuffix, --fks Set foreign key suffix, (e.g. _id as in post_id)
[default: "Id"]
--quiet, -q Suppress log messages from output [boolean]
--help, -h Show help [boolean]
--version, -v Show version number [boolean]
Examples:
json-server db.json
json-server file.js
json-server http://example.com/db.json
Options can also be set in a json-server.json configuration file:
{
"port": 3000
}
Module
If you need to add authentication, validation, or any behavior, you can use the project as a module in combination with
other Express middlewares.
Simple example
$ npm install json-server --save-dev
// server.js
const jsonServer = require('json-server')
const server = jsonServer.create()
const router = jsonServer.router('db.json')
const middlewares = jsonServer.defaults()
server.use(middlewares)
server.use(router)
server.listen(3000, () => {
console.log('JSON Server is running')
})
$ node server.js
The path you provide to the jsonServer.router function is relative to the directory from where you launch your node
process. If you run the above code from another directory, it’s better to use an absolute path:
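For example:

// server.js
const path = require('path')
const router = jsonServer.router(path.join(__dirname, 'db.json'))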
Please note also that jsonServer.router() can be used in existing Express projects.
Let's say you want a route that echoes query parameters and another one that sets a timestamp on every resource created, as sketched below.
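A sketch along the lines of the json-server README's custom-routes example:

// Add custom routes before JSON Server router
server.get('/echo', (req, res) => {
  res.jsonp(req.query)
})

// To handle POST, PUT and PATCH you need to use a body-parser;
// you can use the one used by JSON Server
server.use(jsonServer.bodyParser)
server.use((req, res, next) => {
  if (req.method === 'POST') {
    req.body.createdAt = Date.now()
  }
  // Continue to JSON Server router
  next()
})

And here is an access-control example that guards the router: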
server.use(middlewares)
server.use((req, res, next) => {
if (isAuthorized(req)) { // add your authorization logic here
next() // continue to JSON Server router
} else {
res.sendStatus(401)
}
})
server.use(router)
server.listen(3000, () => {
console.log('JSON Server is running')
})
You can set your own status code for the response:
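A sketch using router.render (part of json-server's documented router API):

router.render = (req, res) => {
  res.status(500).jsonp({
    error: "error message here"
  })
}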
Rewriter example
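A minimal rewriter sketch, added before server.use(router):

server.use(jsonServer.rewriter({
  '/api/*': '/$1'
}))

Alternatively, you can mount the router on another endpoint: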
server.use('/api', router)
API
jsonServer.create()
jsonServer.defaults([options])
options
static path to static files
logger enable logger middleware (default: true)
bodyParser enable body-parser middleware (default: true)
noCors disable CORS (default: false)
readOnly accept only GET requests (default: false)
jsonServer.router([path|object])
Deployment
You can deploy JSON Server. For example, JSONPlaceholder is an online fake API powered by JSON Server and running on
Heroku.
License
MIT
Simple Configuration
The easiest way to tweak the webpack config is providing an object to the configureWebpack option in vue.config.js :
// vue.config.js
module.exports = {
configureWebpack: {
plugins: [
new MyAwesomeWebpackPlugin()
]
}
}
The object will be merged into the final webpack config using webpack-merge .
WARNING
Some webpack options are set based on values in vue.config.js and should not be mutated directly. For example, instead of
modifying output.path , you should use the outputDir option in vue.config.js ; instead of modifying output.publicPath ,
you should use the publicPath option in vue.config.js . This is because the values in vue.config.js will be used in multiple
places inside the config to ensure everything works properly together.
If you need conditional behavior based on the environment, or want to directly mutate the config, use a function (which will be lazily evaluated after the env variables are set). The function receives the resolved config as the argument. Inside the function, you can either mutate the config directly, OR return an object which will be merged:
// vue.config.js
module.exports = {
configureWebpack: config => {
if (process.env.NODE_ENV === 'production') {
// mutate config for production...
} else {
// mutate for development...
}
}
}
Chaining (Advanced)
The internal webpack config is maintained using webpack-chain . The library provides an abstraction over the raw webpack config, with
the ability to define named loader rules and named plugins, and later "tap" into those rules and modify their options.
This allows us finer-grained control over the internal config. Below you will see some examples of common modifications done via
the chainWebpack option in vue.config.js .
TIP
vue inspect will be extremely helpful when you are trying to access specific loaders via chaining.
// vue.config.js
module.exports = {
chainWebpack: config => {
config.module
.rule('vue')
.use('vue-loader')
.tap(options => {
// modify the options...
return options
})
}
}
TIP
For CSS related loaders, it's recommended to use css.loaderOptions instead of directly targeting loaders via chaining. This is
because there are multiple rules for each CSS file type and css.loaderOptions ensures you can affect all rules in one single
place.
// vue.config.js
module.exports = {
chainWebpack: config => {
// GraphQL Loader
config.module
.rule('graphql')
.test(/\.graphql$/)
.use('graphql-tag/loader')
.loader('graphql-tag/loader')
.end()
// Add another loader
.use('other-loader')
.loader('other-loader')
.end()
}
}
Replacing Loaders of a Rule
If you want to replace an existing Base Loader , for example using vue-svg-loader to inline SVG files instead of loading the file:
// vue.config.js
module.exports = {
chainWebpack: config => {
const svgRule = config.module.rule('svg')
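      // clear all existing loaders; if you don't do this, the loader
      // below will be appended to the rule's existing loaders
      // (a sketch following the standard vue-svg-loader recipe)
      svgRule.uses.clear()

      // add the replacement loader
      svgRule
        .use('vue-svg-loader')
          .loader('vue-svg-loader')
  }
}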
// vue.config.js
module.exports = {
chainWebpack: config => {
config
.plugin('html')
.tap(args => {
return [/* new args to pass to html-webpack-plugin's constructor */]
})
}
}
You will need to familiarize yourself with webpack-chain's API and read some source code in order to understand how to leverage the full power of this option, but it gives you a more expressive and safer way to modify the webpack config than directly mutating values.
// vue.config.js
module.exports = {
chainWebpack: config => {
config
.plugin('html')
.tap(args => {
args[0].template = '/Users/username/proj/app/templates/index.html'
return args
})
}
}
You can confirm that this change has taken place by examining the vue webpack config with the vue inspect utility, which we will
discuss next.
vue-cli-service exposes the inspect command for inspecting the resolved webpack config. The global vue binary also provides
the inspect command, and it simply proxies to vue-cli-service inspect in your project.
The command will print the resolved webpack config to stdout, which also contains hints on how to access rules and plugins via
chaining.
You can redirect the output into a file for easier inspection:
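vue inspect > output.js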
By default, the inspect command will show the output for the development config. To see the production configuration, you need to run:
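vue inspect --mode production > output.prod.js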
Note the output is not a valid webpack config file, it's a serialized format only meant for inspection.
<projectRoot>/node_modules/@vue/cli-service/webpack.config.js
This file dynamically resolves and exports the exact same webpack config used in vue-cli-service commands, including those from
plugins and even your custom configurations.
Modes and Environment Variables
Modes
Mode is an important concept in Vue CLI projects. By default, there are three modes:
You can overwrite the default mode used for a command by passing the --mode option flag. For example, if you want to use
development variables in the build command:
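vue-cli-service build --mode development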
When running vue-cli-service , environment variables are loaded from all corresponding files. If they don't contain
a NODE_ENV variable, it will be set accordingly. For example, NODE_ENV will be set to "production" in production mode, "test" in
test mode, and defaults to "development" otherwise.
Then NODE_ENV will determine the primary mode your app is running in - development, production or test - and consequently, what
kind of webpack config will be created.
With NODE_ENV set to "test" for example, Vue CLI creates a webpack config that is intended to be used and optimized for unit tests. It
doesn't process images and other assets that are unnecessary for unit tests.
Similarly, NODE_ENV=development creates a webpack configuration which enables HMR, doesn't hash assets or create vendor bundles in
order to allow for fast re-builds when running a dev server.
When you are running vue-cli-service build , your NODE_ENV should always be set to "production" to obtain an app ready for
deployment, regardless of the environment you're deploying to.
NODE_ENV
If you have a default NODE_ENV in your environment, you should either remove it or explicitly set NODE_ENV when running vue-
cli-service commands.
Environment Variables
You can specify env variables by placing the following files in your project root:
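.env                # loaded in all cases
.env.local          # loaded in all cases, ignored by git
.env.[mode]         # only loaded in specified mode
.env.[mode].local   # only loaded in specified mode, ignored by git

An env file simply contains key=value pairs of environment variables: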
FOO=bar
VUE_APP_NOT_SECRET_CODE=some_value
WARNING
Do not store any secrets (such as private API keys) in your app!
Environment variables are embedded into the build, meaning anyone can view them by inspecting your app's files.
Note that only NODE_ENV , BASE_URL , and variables that start with VUE_APP_ will be statically embedded into the client bundle with webpack.DefinePlugin . This is to avoid accidentally exposing a private key on the machine that could have the same name.
For more detailed env parsing rules, please refer to the documentation of dotenv . We also use dotenv-expand for variable
expansion (available in Vue CLI 3.5+). For example:
FOO=foo
BAR=bar
CONCAT=$FOO$BAR # CONCAT=foobar
Loaded variables will become available to all vue-cli-service commands, plugins and dependencies.
An env file for a specific mode (e.g. .env.production ) will take higher priority than a generic one (e.g. .env ).
In addition, environment variables that already exist when Vue CLI is executed have the highest priority and will not be
overwritten by .env files.
.env files are loaded at the start of vue-cli-service . Restart the service after making changes.
Example: Staging Mode
Assuming we have an app with the following .env file:
VUE_APP_TITLE=My App
And the following .env.staging file:
NODE_ENV=production
VUE_APP_TITLE=My App (staging)
vue-cli-service build builds a production app, loading .env , .env.production and .env.production.local if they are present;
vue-cli-service build --mode staging builds a production app in staging mode, loading .env , .env.staging and .env.staging.local if they are present.
In both cases, the app is built as a production app because of the NODE_ENV , but in the staging version, process.env.VUE_APP_TITLE is
overwritten with a different value.
You can access env variables in your application code:
console.log(process.env.VUE_APP_NOT_SECRET_CODE)
During build, process.env.VUE_APP_NOT_SECRET_CODE will be replaced by the corresponding value. In the case
of VUE_APP_NOT_SECRET_CODE=some_value , it will be replaced by "some_value" .
In addition to VUE_APP_* variables, there are also two special variables that will always be available in your app code:
NODE_ENV - this will be one of "development" , "production" or "test" depending on the mode the app is running in.
BASE_URL - this corresponds to the publicPath option in vue.config.js and is the base path your app is deployed at.
All resolved env variables will be available inside public/index.html as discussed in HTML - Interpolation.
TIP
You can have computed env vars in your vue.config.js file. They still need to be prefixed with VUE_APP_ . This is useful for version info:
process.env.VUE_APP_VERSION = require('./package.json').version
module.exports = {
// config
}
.local can also be appended to mode-specific env files, for example .env.development.local will be loaded during development,
and is ignored by git.
Cross-Origin Resource Sharing (CORS)
Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to
indicate any other origins (domain, scheme, or port) than its own from which a browser should permit
loading of resources. CORS also relies on a mechanism by which browsers make a “preflight” request to
the server hosting the cross-origin resource, in order to check that the server will permit the actual request.
In that preflight, the browser sends headers that indicate the HTTP method and headers that will be used in
the actual request.
An example of a cross-origin request: the front-end JavaScript code served from https://domain-a.com uses XMLHttpRequest to make a request for https://domain-b.com/data.json .
For security reasons, browsers restrict cross-origin HTTP requests initiated from scripts. For example,
XMLHttpRequest and the Fetch API follow the same-origin policy. This means that a web application using
those APIs can only request resources from the same origin the application was loaded from unless the
response from other origins includes the right CORS headers.
The CORS mechanism supports secure cross-origin requests and data transfers between browsers and
servers. Modern browsers use CORS in APIs such as XMLHttpRequest or Fetch to mitigate the risks of
cross-origin HTTP requests.
More specifically, this article is for web administrators, server developers, and front-end developers.
Modern browsers handle the client side of cross-origin sharing, including headers and policy enforcement.
But the CORS standard means servers have to handle new request and response headers.
The cross-origin sharing standard can enable cross-origin HTTP requests for resources such as:
WebGL textures.
Images/video frames drawn to a canvas using drawImage() .
CSS Shapes from images.
This article is a general discussion of Cross-Origin Resource Sharing and includes a discussion of the
necessary HTTP headers.
Functional overview
The Cross-Origin Resource Sharing standard works by adding new HTTP headers that let servers describe
which origins are permitted to read that information from a web browser. Additionally, for HTTP request
methods that can cause side-effects on server data (in particular, HTTP methods other than GET , or POST with certain MIME types), the specification mandates that browsers "preflight" the request, soliciting
supported methods from the server with the HTTP OPTIONS request method, and then, upon "approval"
from the server, sending the actual request. Servers can also inform clients whether "credentials" (such as
Cookies and HTTP Authentication) should be sent with requests.
CORS failures result in errors, but for security reasons, specifics about the error are not available to
JavaScript. All the code knows is that an error occurred. The only way to determine what specifically went
wrong is to look at the browser's console for details.
Subsequent sections discuss scenarios, as well as provide a breakdown of the HTTP headers used.
Simple requests
Some requests don’t trigger a CORS preflight. Those are called “simple requests” in this article, though the
Fetch spec (which defines CORS) doesn’t use that term. A “simple request” is one that meets all the
following conditions:
One of the allowed methods: GET , HEAD , POST
Apart from the headers automatically set by the user agent, the only headers which are allowed to be manually set are the CORS-safelisted request headers:
Accept
Accept-Language
Content-Language
Content-Type (but note the additional requirements below)
The only allowed values for the Content-Type header are:
application/x-www-form-urlencoded
multipart/form-data
text/plain
If the request is made using an XMLHttpRequest object, no event listeners are registered on the
object returned by the XMLHttpRequest.upload property used in the request; that is, given an
XMLHttpRequest instance xhr , no code has called xhr.upload.addEventListener() to add an
event listener to monitor the upload.
No ReadableStream object is used in the request.
Note
These are the same kinds of cross-site requests that web content can already issue, and no response data
is released to the requester unless the server sends an appropriate header. Therefore, sites that prevent
cross-site request forgery have nothing new to fear from HTTP access control.
Note
WebKit Nightly and Safari Technology Preview place additional restrictions on the values allowed in the
Accept , Accept-Language , and Content-Language headers. If any of those headers have
”nonstandard” values, WebKit/Safari does not consider the request to be a “simple request”. What values
WebKit/Safari consider “nonstandard” is not documented, except in the following WebKit bugs:
Require preflight for non-standard CORS-safelisted request headers Accept, Accept-Language, and
Content-Language
Allow commas in Accept, Accept-Language, and Content-Language request headers for simple
CORS
Switch to a blacklist model for restricted Accept headers in simple CORS requests
No other browsers implement these extra restrictions, because they’re not part of the spec.
For example, suppose web content at https://foo.example wishes to invoke content on domain https://bar.other . Code of this sort might be used in JavaScript deployed on foo.example :
const xhr = new XMLHttpRequest();
const url = 'https://bar.other/resources/public-data/';
xhr.open('GET', url);
xhr.onreadystatechange = someHandler;
xhr.send();
This performs a simple exchange between the client and the server, using CORS headers to handle the
privileges:
Let's look at what the browser will send to the server in this case, and let's see how the server responds:
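The request of interest is, abridged, the following; note the Origin header:

GET /resources/public-data/ HTTP/1.1
Host: bar.other
Origin: https://foo.example

And the server responds: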
HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 00:23:53 GMT
Server: Apache/2
Access-Control-Allow-Origin: *
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/xml
[…XML Data…]
The key line in the response is:
Access-Control-Allow-Origin: *
which means that the resource can be accessed by any origin.
This pattern of the Origin and Access-Control-Allow-Origin headers is the simplest use of the
access control protocol. If the resource owners at https://bar.other wished to restrict access to the resource to requests only from https://foo.example (i.e. no domain other than https://foo.example can access the resource in a cross-site manner) they would send:
Access-Control-Allow-Origin: https://foo.example
Note
When responding to a credentialed request, the server must specify an origin in the value of the
Access-Control-Allow-Origin header, instead of specifying the " * " wildcard.
Preflighted requests
Unlike “simple requests” (discussed above), for "preflighted" requests the browser first sends an HTTP
request using the OPTIONS method to the resource on the other origin, in order to determine if the actual
request is safe to send. Cross-site requests are preflighted like this since they may have implications to
user data.
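A sketch of such a request, following the MDN example (the URL and handler names are illustrative):

const xhr = new XMLHttpRequest();
// a non-standard X-PINGOTHER header plus an application/xml body
// makes this request preflighted
xhr.open('POST', 'https://bar.other/resources/post-here/');
xhr.setRequestHeader('X-PINGOTHER', 'pingpong');
xhr.setRequestHeader('Content-Type', 'application/xml');
xhr.onreadystatechange = handler;
xhr.send('<person><name>Arun</name></person>');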
The example above creates an XML body to send with the POST request. Also, a non-standard HTTP X-
PINGOTHER request header is set. Such headers are not part of HTTP/1.1, but are generally useful to web
applications. Since the request uses a Content-Type of application/xml , and since a custom header
is set, this request is preflighted.
Note
As described below, the actual POST request does not include the Access-Control-Request-* headers;
they are needed only for the OPTIONS request.
Let's look at the full exchange between client and server. The first exchange is the preflight
request/response:
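A reconstruction of that exchange, based on the MDN example this text refers to (User-Agent and dates are illustrative):

OPTIONS /doc HTTP/1.1
Host: bar.other
User-Agent: Mozilla/5.0 (Macintosh; ...) Gecko/20081130 Minefield/3.1b3pre
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Connection: keep-alive
Origin: https://foo.example
Access-Control-Request-Method: POST
Access-Control-Request-Headers: X-PINGOTHER, Content-Type

HTTP/1.1 204 No Content
Date: Mon, 01 Dec 2008 01:15:39 GMT
Server: Apache/2
Access-Control-Allow-Origin: https://foo.example
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: X-PINGOTHER, Content-Type
Access-Control-Max-Age: 86400
Vary: Accept-Encoding, Origin
Connection: Keep-Alive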
Lines 1 - 10 above represent the preflight request with the OPTIONS method. The browser determines that
it needs to send this based on the request parameters that the JavaScript code snippet above was using,
so that the server can respond whether it is acceptable to send the request with the actual request
parameters. OPTIONS is an HTTP/1.1 method that is used to determine further information from servers,
and is a safe method, meaning that it can't be used to change the resource. Note that along with the
OPTIONS request, two other request headers are sent (lines 9 and 10 respectively):
Access-Control-Request-Method: POST
Access-Control-Request-Headers: X-PINGOTHER, Content-Type
The Access-Control-Request-Method header notifies the server as part of a preflight request that when
the actual request is sent, it will be sent with a POST request method. The Access-Control-Request-
Headers header notifies the server that when the actual request is sent, it will be sent with a X-PINGOTHER
and Content-Type custom headers. The server now has an opportunity to determine whether it wishes to
accept a request under these circumstances.
Lines 13 - 22 above are the response that the server sends back, which indicate that the request method
( POST ) and request headers ( X-PINGOTHER ) are acceptable. In particular, let's look at lines 16-19:
Access-Control-Allow-Origin: http://foo.example
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: X-PINGOTHER, Content-Type
Access-Control-Max-Age: 86400
The server responds with Access-Control-Allow-Methods and says that POST and GET are viable methods to query the resource in question (this header is similar to the Allow response header, but used strictly within the context of access control).
The server also sends Access-Control-Allow-Headers with a value of " X-PINGOTHER, Content-
Type ", confirming that these are permitted headers to be used with the actual request. Like Access-
Control-Allow-Methods , Access-Control-Allow-Headers is a comma separated list of acceptable
headers.
Finally, Access-Control-Max-Age gives the value in seconds for how long the response to the preflight
request can be cached for without sending another preflight request. In this case, 86400 seconds is 24
hours. Note that each browser has a maximum internal value that takes precedence when the Access-
Control-Max-Age is greater.
Once the preflight request is complete, the real request is sent; it carries the XML body:
<person><name>Arun</name></person>
The server then responds:
HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 01:15:40 GMT
Server: Apache/2
Access-Control-Allow-Origin: https://foo.example
Vary: Accept-Encoding, Origin
Content-Encoding: gzip
Content-Length: 235
Keep-Alive: timeout=2, max=99
Connection: Keep-Alive
Content-Type: text/plain
Preflighted requests and redirects
Not all browsers currently support following redirects after a preflighted request. If a redirect occurs after such a request, some browsers will report an error message such as:
The request was redirected to 'https://example.com/foo', which is disallowed for cross-origin requests that require preflight
The CORS protocol originally required that behavior, but was subsequently changed to no longer require it. However, not all browsers have implemented the change, and so still exhibit the behavior that was originally required.
Until browsers catch up with the spec, you may be able to work around this limitation by doing one or both
of the following:
Change the server-side behavior to avoid the preflight and/or to avoid the redirect
Change the request such that it is a simple request that doesn’t cause a preflight
If that's not possible, then another way is to:
1. Make a simple request (using Response.url for the Fetch API, or XMLHttpRequest.responseURL )
to determine what URL the real preflighted request would end up at.
2. Make another request (the “real” request) using the URL you obtained from Response.url or
XMLHttpRequest.responseURL in the first step.
However, if the request is one that triggers a preflight due to the presence of the Authorization header in
the request, you won’t be able to work around the limitation using the steps above. And you won’t be able
to work around it at all unless you have control over the server the request is being made to.
Note
When making credentialed requests to a different domain, third-party cookie policies will still apply. The
policy is always enforced independent of any setup on the server and the client, as described in this
chapter.
The most interesting capability exposed by both XMLHttpRequest or Fetch and CORS is the ability to
make "credentialed" requests that are aware of HTTP cookies and HTTP Authentication information. By
default, in cross-site XMLHttpRequest or Fetch invocations, browsers will not send credentials. A specific
flag has to be set on the XMLHttpRequest object or the Request constructor when it is invoked.
In this example, content originally loaded from http://foo.example makes a simple GET request to a resource on http://bar.other which sets Cookies. Content on foo.example might contain JavaScript like this:
const invocation = new XMLHttpRequest();
const url = 'https://bar.other/resources/credentialed-content/';

function callOtherDomain() {
if (invocation) {
invocation.open('GET', url, true);
invocation.withCredentials = true;
invocation.onreadystatechange = handler;
invocation.send();
}
}
Line 7 shows the flag on XMLHttpRequest that has to be set in order to make the invocation with Cookies,
namely the withCredentials boolean value. By default, the invocation is made without Cookies. Since
this is a simple GET request, it is not preflighted, but the browser will reject any response that does not
have the Access-Control-Allow-Credentials : true header, and not make the response available to
the invoking web content.
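A sample exchange, abridged from the MDN example; the browser sends the cookie with the request:

GET /resources/credentialed-content/ HTTP/1.1
Host: bar.other
Referer: https://foo.example/examples/credential.html
Origin: https://foo.example
Cookie: pageAccess=2

And the server responds: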
HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 01:34:52 GMT
Server: Apache/2
Access-Control-Allow-Origin: https://foo.example
Access-Control-Allow-Credentials: true
Cache-Control: no-cache
Pragma: no-cache
Set-Cookie: pageAccess=3; expires=Wed, 31-Dec-2008 01:34:53 GMT
Vary: Accept-Encoding, Origin
Content-Encoding: gzip
Content-Length: 106
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Content-Type: text/plain
[text/plain payload]
Although line 10 contains the Cookie destined for the content on http://bar.other , if bar.other did not
respond with an Access-Control-Allow-Credentials : true (line 17) the response would be ignored
and not made available to web content.
Note
Some enterprise authentication services require TLS client certificates be sent in preflight requests, in
contravention of the Fetch specification.
Because the request headers in the above example include a Cookie header, the request would fail if the
value of the Access-Control-Allow-Origin header was "*". But it does not fail: Because the value of
the Access-Control-Allow-Origin header is " http://foo.example " (an actual origin) rather than the
" * " wildcard, the credential-cognizant content is returned to the invoking web content.
Note that the Set-Cookie response header in the example above also sets a further cookie. In case of
failure, an exception—depending on the API used—is raised.
Third-party cookies
Note that cookies set in CORS responses are subject to normal third-party cookie policies. In the example
above, the page is loaded from foo.example , but the cookie on line 20 is sent by bar.other , and would
thus not be saved if the user has configured their browser to reject all third-party cookies.
The Cookie in the request may also be suppressed by normal third-party cookie policies. The enforced cookie policy may therefore nullify the capability described in this chapter, effectively preventing you from making credentialed requests whatsoever.
Access-Control-Allow-Origin
A returned resource may have one Access-Control-Allow-Origin header, with the following syntax:
Access-Control-Allow-Origin: <origin> | *
Access-Control-Allow-Origin specifies either a single origin, which tells browsers to allow that origin
to access the resource; or else — for requests without credentials — the " * " wildcard, to tell browsers to
allow any origin to access the resource.
For example, to allow code from the origin https://mozilla.org to access the resource, you can specify:
Access-Control-Allow-Origin: https://mozilla.org
Vary: Origin
If the server specifies a single origin (that may dynamically change based on the requesting origin as part of a white-list) rather than the " * " wildcard, then the server should also include Origin in the Vary response header, to indicate to clients that server responses will differ based on the value of the Origin request header.
Access-Control-Expose-Headers
The Access-Control-Expose-Headers header lets a server whitelist headers that JavaScript in browsers (for example via getResponseHeader() ) is allowed to access.
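For example:
Access-Control-Expose-Headers: X-My-Custom-Header, X-Another-Custom-Header
This allows the X-My-Custom-Header and X-Another-Custom-Header headers to be exposed to the browser.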
Access-Control-Max-Age
The Access-Control-Max-Age header indicates how long the results of a preflight request can be
cached. For an example of a preflight request, see the above examples.
Access-Control-Max-Age: <delta-seconds>
The delta-seconds parameter indicates the number of seconds the results can be cached.
Access-Control-Allow-Credentials
The Access-Control-Allow-Credentials header indicates whether or not the response to the request
can be exposed when the credentials flag is true. When used as part of a response to a preflight
request, this indicates whether or not the actual request can be made using credentials. Note that simple
GET requests are not preflighted, and so if a request is made for a resource with credentials, if this header
is not returned with the resource, the response is ignored by the browser and not returned to web content.
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods
The Access-Control-Allow-Methods header specifies the method or methods allowed when accessing
the resource. This is used in response to a preflight request. The conditions under which a request is
preflighted are discussed above.
An example of a preflight request is given above, including an example which sends this header to the
browser.
Access-Control-Allow-Headers
The Access-Control-Allow-Headers header is used in response to a preflight request to indicate which HTTP
headers can be used when making the actual request.
The HTTP request headers
The headers below are set for you by the browser when invoking servers cross-origin; developers using
the cross-site XMLHttpRequest capability do not have to set any cross-origin sharing request headers
programmatically.
Origin
The Origin header indicates the origin of the cross-site access request or preflight request.
Origin: <origin>
The origin is a URL indicating the server from which the request initiated. It does not include any path
information, but only the server name.
Note
The origin value can be null .
Note that in any access control request, the Origin header is always sent.
Access-Control-Request-Method
The Access-Control-Request-Method is used when issuing a preflight request to let the server know
what HTTP method will be used when the actual request is made.
Access-Control-Request-Method: <method>
Access-Control-Request-Headers
The Access-Control-Request-Headers header is used when issuing a preflight request to let the server
know what HTTP headers will be used when the actual request is made (such as
with setRequestHeader() ). This browser side header will be answered by the complementary server
side header of Access-Control-Allow-Headers .
Specifications
Fetch Living Standard — the definition of 'CORS' in that specification. This new definition supplants the
W3C CORS specification.
Browser compatibility
Access-Control-Allow-Origin — full support since Chrome 4, Edge 12, Firefox 3.5, Internet Explorer 10,
Opera 12, Safari 4, WebView Android 2, and Opera Android 12.
Compatibility notes
Internet Explorer 8 and 9 expose CORS via the XDomainRequest object; a full implementation arrived in
IE 10.
See also
CORS
CORS errors
Enable CORS: I want to add CORS support to my server
XMLHttpRequest
Fetch API
When Postman recently hosted the Postman Galaxy virtual conference with attendees from
around the world, I needed to find out what people were saying about the event. And more
importantly, who was saying these things. As a new Postman developer advocate, I was assigned
to “figure out a way to have all the tweets with the #PostmanGalaxy hashtag automatically sent
into a Slack channel.”
After brainstorming some different approaches, I was able to use a combination of the Twitter
API, the Postman API, and Slack Incoming Webhooks to achieve exactly that. Below is a result of
the final integration in action. Look at all those kind tweets about Postman Galaxy!
Twitter hashtag search bot for Postman Galaxy in action showing results in Slack
While the Twitter API can obtain a large amount of user data for a specific tweet, depending on
which product track you are using, you may have some limited access to certain endpoints. Since
I used the basic “Standard” track, I was restricted from getting everything I needed in one shot.
Instead of upgrading my product track, I thought to myself: “I’m a developer for the people! I’m
going to find a way for even Standard track devs to do this.” I was also on a deadline and didn’t
have time to apply and wait for my access to be upgraded.
This limited access meant I had to get creative with gathering all the necessary data. Instead of
doing a single search and getting everything I desired (username, user handle, date, source,
body of tweet, etc.), I had to chain requests together using data from my initial search. Because
of the limited access inherent to the Standard product track, when I sent a search query for the
string %23PostmanGalaxy to the Twitter API, it would return only the author_id, but not the
user’s Twitter handle or Twitter name.
Now I don’t know about you, but I don’t have other peoples’ Twitter ID numbers memorized. In
fact, I couldn’t even tell you what mine is. And since we wanted to know who was using the
#PostmanGalaxy hashtag, I needed to find a way to tie these author_ids to an actual Twitter
handle that humans would understand. So I captured the IDs of the users in a comma separated
string and saved it as an environment variable called allUserIdsString.
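As a rough sketch, a Tests-tab script doing that could look something like this (the response shape follows the Twitter v2 search payload; the variable names are my own):

const twitterResponse = pm.response.json();

// Tweets arrive in the `data` array of the v2 search payload, each with an author_id
const userIds = twitterResponse.data.map(tweet => tweet.author_id);

// De-duplicate and store as a comma-separated string for the "Users by ID" request
pm.environment.set('allUserIdsString', [...new Set(userIds)].join(','));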
Thankfully, the Twitter API has a “Users by ID” request that takes the string of IDs as a parameter.
I copied this request from the Twitter API v2 Postman Collection into my own folder and entered
the environment variable as the query parameter value as shown below. Upon a successful
request, I used some JavaScript code in the Tests tab to match the newly acquired usernames
with the corresponding author_ids to our saved data.
Sending a second request to Twitter to obtain more user information
With all of our tweets and necessary information in place, it’s time to send these tweets
somewhere the rest of our team can see them: a Slack channel.
Using Slack’s nifty Block Kit Builder tool, you can get quite a bit of flexibility and customization on
how you want the message to go through. Here’s a picture of that JSON body below. You can see
that it sends only one tweet at a time, called current_tweet, which is in the double curly braces to
reference the variable named “current_tweet.”
This is what the JSON body looks like for a POST request to a Slack webhook URL
I learned the hard way that Slack has a character limit on the body of a request. When I tried to
send the content for 20 tweets to post to Slack in one shot, I continuously got an error message.
After banging my head against a wall for a while, an experienced co-worker suggested that the
character limit may be the reason (thank you, Arlemi Turpault, for that key insight!).
Because of the character limit, I reused this request multiple times. If there are 20 tweets we
want to send to Slack, we end up looping through the array of 20 tweets and calling this request
20 times. Thankfully, Postman makes it really easy to specify the order of your workflow based
on certain conditions.
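A minimal sketch of that conditional loop, in the Tests tab of the Slack request (the request name and the tweets variable are my own, not from the actual collection):

// Pull the remaining tweets out of the environment (stored as a JSON string)
const tweets = JSON.parse(pm.environment.get('tweets') || '[]');

if (tweets.length > 0) {
  // Stash the next tweet for the request body's {{current_tweet}} variable
  pm.environment.set('current_tweet', tweets.shift());
  pm.environment.set('tweets', JSON.stringify(tweets));
  // Loop back to this same request until no tweets remain
  postman.setNextRequest('Post tweet to Slack');
}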
Automating the process with Postman monitors
If you planned on running this collection manually in Postman every time you wanted to see the
new Tweets, we’d be done. But then you’d have to come into Postman every so often and use
the Collection Runner to run through the entire collection. Not too bad if you only need to do it
once or twice, but for a multi-day event like Postman Galaxy you’ll want a more efficient solution.
Since I didn’t want to wake up every 10 minutes in the middle of the night to update my team
with the new #PostmanGalaxy tweets, I found a way to automate this process. By taking
advantage of Postman’s monitors, we can run this collection automatically at set intervals.
The last step of this collection involves a pair of requests that work together to track the most
recent tweet’s ID number, which is saved as an environment variable highest_tweet_id.
Because we’re automating this process with monitors, it’s important to note that
global/environment variables are not persisted across collection runs using a monitor.
To get around this, we can actually use Postman to help us use Postman. While Postman is an
API collaboration platform, we also have the Postman API, which you can use to ensure only the
newest, unseen tweets get pushed to Slack each time the monitor is run. All we’re really doing in
the final two requests is making sure that the environment variable for the highest tweet ID gets
updated and safely tracked for future monitor runs.
Here’s a visual overview of everything in this workflow. You can see that POST request to Slack
getting called again so long as there are tweets to send to Slack:
3. Enter the missing auth credentials in the environment (Twitter bearer token, Postman API
key, and Slack webhook URL).
Stay up to date with your favorite hashtagged tweets about anything you find interesting (weird
animals, anyone?). Seriously, there are some adorable and strange hashtags worth following—
the Twitter world of helpful and entertaining hashtags is endless.
Sean Keegan
Sean Keegan is a developer advocate at Postman.
Disclaimer: This article doesn’t claim to replace the official documentation but rather to elaborate on it - you definitely should go over it in
order to be aligned with the most updated API specification.
How to Install
To begin with, we’ll have to install one of Puppeteer’s packages.
Library Package
A lightweight package, called puppeteer-core , which is a library that interacts with any browser that’s based on DevTools protocol -
without actually installing Chromium. It comes in handy mainly when we don’t need a downloaded version of Chromium, for instance,
bundling this library within a project that interacts with a browser remotely.
Product Package
The main package, called puppeteer , which is actually a full product for browser automation on top of puppeteer-core . Once it’s
installed, the most recent version of Chromium is placed inside node_modules , which guarantees that the downloaded version is
compatible with the host operating system.
Interacting with the Browser
As mentioned before, Puppeteer is just an API over the Chrome DevTools Protocol. Naturally, it should have a Chromium instance to
interact with. This is the reason why Puppeteer’s ecosystem provides methods both to launch a new Chromium instance and to connect
to an existing instance.
Launching Chromium
Launching Chromium
The easiest way to interact with the browser is by launching a Chromium instance using Puppeteer:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  console.info(browser);
  await browser.close();
})();
Connecting Chromium
Sometimes we want to interact with an existing Chromium instance - whether using puppeteer-core or just attaching a remote
instance:
const puppeteer = require('puppeteer');
const chromeLauncher = require('chrome-launcher');
const axios = require('axios');

(async () => {
  // Initializing a Chrome instance manually
  const chrome = await chromeLauncher.launch({
    chromeFlags: ['--headless']
  });
  // Fetches the WebSocket endpoint of the running instance
  const response = await axios.get(`http://localhost:${chrome.port}/json/version`);
  const { webSocketDebuggerUrl } = response.data;
  // Attaches Puppeteer to the manually launched instance
  const browser = await puppeteer.connect({ browserWSEndpoint: webSocketDebuggerUrl });
  await browser.close();
  await chrome.kill();
})();
The connect method attaches the instance we just created to Puppeteer. All we have to do is supply the WebSocket endpoint of our
instance.
Note: Of course, chrome-launcher is only to demonstrate an instance creation. We absolutely could connect an instance in other ways,
as long as we have the appropriate WebSocket endpoint.
Launching Firefox
Some of you might wonder - can Puppeteer interact with other browsers besides Chromium? 🤔
Although there are projects that claim to support a variety of browsers - the official team has started to maintain an experimental
project that interacts with Firefox, specifically:
Update: puppeteer-firefox was an experimental package to examine communication with an outdated Firefox fork; however, this
project is no longer maintained. Presently, the way to go is setting the PUPPETEER_PRODUCT environment variable to firefox and thereby
fetching the binary of Firefox Nightly.
Once we have the binary, we merely need to change the product to “firefox” while the rest of the lines remain the same - which means
we’re already familiar with how to launch the browser:
// Deprecated package
// const puppeteer = require('puppeteer-firefox');
const puppeteer = require('puppeteer');
(async () => {
// Firefox's binary needs to be fetched beforehand
const browser = await puppeteer.launch({ product: 'firefox' });
console.info(browser);
await browser.close();
})();
⚠ Pay attention - the API integration isn’t totally ready yet and is being implemented progressively. It’s best to check out the
implementation status here.
Browser Context
Imagine that instead of recreating a browser instance each time, which is a pretty expensive operation, we could use the same instance
but separate it into different individual sessions which belong to this shared browser.
It’s actually possible, and these sessions are known as Browser Contexts.
A default browser context is created as soon as creating a browser instance, but we can create additional browser contexts as
necessary:
(async () => {
  const browser = await puppeteer.launch();
  // A reference for the default browser context
  const defaultContext = browser.defaultBrowserContext();
  console.info(defaultContext.isIncognito()); // False
  // Creates an additional, incognito browser context
  const context = await browser.createIncognitoBrowserContext();
  console.info(context.isIncognito()); // True
  await browser.close();
})();
Apart from the fact that we demonstrate how to access each context, we need to know that the only way to terminate the default context
is by closing the browser instance - which, in fact, terminates all the contexts that belong to the browser.
Better yet, the browser context also comes in handy when we want to apply a specific configuration on a session in isolation - for
instance, granting additional permissions.
Headful Mode
As opposed to headless mode - which merely uses the command line - headful mode opens the browser with a graphical user
interface during execution:
(async () => {
// Launches the browser in a headful way
const browser = await puppeteer.launch({ headless: false });
console.info(browser);
await browser.close();
})();
Because the browser is launched in headless mode by default, we demonstrate how to launch it in a headful way.
In case you wonder - headless mode is mostly useful for environments that don’t really need the UI, or don’t support such an interface at all.
The cool thing is that we can do almost everything headlessly in Puppeteer. 💪
Note: We’re going to launch the browser in a headful mode for most of the upcoming examples, which will allow us to notice the result
clearly.
Debugging
When writing code, we should be aware of what kinds of ways are available to debug our program. The documentation lists
several tips about debugging Puppeteer.
It’s fairly probable that, at some point, we would like to see how our script instructs the browser and what’s actually displayed.
The headful mode, which we’re already familiar with, helps us practically do that:
(async () => {
  // slowMo slows Puppeteer down by 250ms before each operation
  const browser = await puppeteer.launch({ headless: false, slowMo: 250 });
  // Browser operations
  await browser.close();
})();
Beyond the fact that the browser is truly opened, we can now clearly notice the performed instructions - due to slowMo , which slows down
Puppeteer when performing each operation.
In case we want to debug the application itself in the opened browser - it basically means to open the DevTools and start debugging as
usual:
(async () => {
  const browser = await puppeteer.launch({ devtools: true });
  // Browser operations
  // Holds the browser process open until we terminate it explicitly
  await browser.waitForTarget(() => false, { timeout: 0 });
  await browser.close();
})();
Notice that we use devtools which launches the browser in a headful mode by default and opens the DevTools automatically. On top
of that, we utilize waitForTarget in order to hold the browser process until we terminate it explicitly.
Apparently - some of you may wonder if it’s possible to sleep the browser for a specified time period, so:
(async () => {
  const browser = await puppeteer.launch({ devtools: true });
  const page = await browser.newPage();
  // First approach: a promise that resolves when setTimeout finishes
  await new Promise(resolve => setTimeout(resolve, 3000));
  // Second approach: simpler, but demands having a page instance
  await page.waitForTimeout(3000);
  await browser.close();
})();
The first approach is merely a function that resolves a promise when setTimeout finishes. The second approach, however, is much
simpler but demands having a page instance (we’ll get to that later).
As we know, Puppeteer is executed in a Node.js process - which is absolutely separate from the browser process. Hence, in this case,
we should treat it just as we would debug a regular Node.js application.
Whether we connect an inspector client or prefer using ndb - it’s all about placing the breakpoints right before Puppeteer’s operations.
Adding them programmatically is also possible, simply by inserting the debugger; statement.
Interacting with a Page
Now that Puppeteer is attached to a browser instance - which, as we already mentioned, represents our browser instance (Chromium,
Firefox, whatever) - we can easily create a page (or multiple pages):
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await browser.close();
})();
In the code example above we plainly create a new page by invoking the newPage method. Notice it’s created on the default browser
context.
Basically, Page is a class that represents a single tab in the browser (or an extension background). As you guess, this class provides
handy methods and events in order to interact with the page (such as selecting elements, retrieving information, waiting for elements,
etc.).
Well, it’s about time to present a list of practical examples, as promised. To do this, we’re going to scrape data from the official
Puppeteer website and operate on it.🕵
Navigating by URL
One of the earliest things is, intuitively, instructing the blank page to navigate to a specified URL:
(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');
  console.info(await page.title());
  await browser.close();
})();
We use goto to drive the created page to navigate to Puppeteer’s website. Afterward, we just take the title of the page’s main frame, print it,
and expect to get that as an output:
Navigating by a URL and scraping the title
This example shows us that there’s no guarantee our page will render the selected element at the right moment, if at
all. To clarify - possible reasons could be that the page loads slowly, part of the page is lazy-loaded, or perhaps it navigates
immediately to another page.
That’s exactly why Puppeteer provides methods to wait for stuff like elements, navigation, functions, requests, responses or simply a
certain predicate - mainly to deal with an asynchronous flow.
Anyway, it turns out that Puppeteer’s website has an entry page, which immediately redirects us to the well-known website’s index page.
The thing is, that entry page in question doesn’t render a title meta element:
Evaluating the title meta element
When navigating to Puppeteer’s website, the title element is evaluated as an empty string. However, a few moments later, the page
is really navigated to the website’s index page and rendered with a title.
This means that the invoked title method is actually applied too early, on the entry page, instead of the website’s index page. Thus,
the entry page is considered as the first main frame, and eventually its title, which is an empty string, is returned.
(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');
  await page.waitForSelector('title');
  console.info(await page.title());
  await browser.close();
})();
All we do, is instructing Puppeteer to wait until the page renders a title meta element, which is achieved by
invoking waitForSelector . This method basically waits until the selected element is rendered within the page.
In that way - we can easily deal with asynchronous rendering and ensure that elements are visible on the page.
Emulating Devices
Puppeteer’s library provides tools for approximating how the page looks and behaves on various devices, which are pretty useful when
testing a website’s responsiveness.
(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
// Emulates an iPhone X
await page.setUserAgent('Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1');
await page.setViewport({ width: 375, height: 812 });
await page.goto('https://pptr.dev');
await browser.close();
})();
We choose to emulate an iPhone X - which means changing the user agent appropriately. Furthermore, we adjust the viewport size
according to the display points that appear here.
It’s easy to understand that setUserAgent defines a specific user agent for the page, whereas setViewport modifies the viewport
definition of the page. In case of multiple pages, each one has its own user agent and viewport definition.
Indeed, the console panel shows us that the page is opened with the right user agent and viewport size.
The truth is that we don’t have to specify the iPhone X’s descriptions explicitly, because the library arrives with a built-in list of device
descriptors. On top of that, it provides a method called emulate which is practically a shortcut for
invoking setUserAgent and setViewport , one after another.
Let’s use that:
const devices = require('puppeteer/DeviceDescriptors');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  // Emulates an iPhone X using the built-in device descriptor
  await page.emulate(devices['iPhone X']);
  await page.goto('https://pptr.dev');
  await browser.close();
})();
It’s merely changed to pass the built-in descriptor to emulate (instead of declaring it explicitly). Notice we import the descriptors
out of puppeteer/DeviceDescriptors .
Handling Events
The Page class supports emitting various events, by actually extending Node.js’s EventEmitter . This means we can use
the natively supported methods in order to handle these events - such as on , once , removeListener and so on.
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Emitted when the DOM is parsed and ready (without waiting for resources)
page.once('domcontentloaded', () => console.info('✅ DOM is ready'));
// Emitted when the page emits an error event (for example, the page crashes)
page.on('error', error => console.error(`❌ ${error}`));
// Emitted when a script within the page uses `alert`, `prompt`, `confirm` or `beforeunload`
page.on('dialog', async dialog => {
console.info(`👉 ${dialog.message()}`);
await dialog.dismiss();
});
// Emitted when a new page, that belongs to the browser context, is opened
page.on('popup', () => console.info('👉 New page is opened'));
await page.goto('https://pptr.dev');
await browser.close();
})();
Let’s simulate and trigger part of the events by adding this script:
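Something along these lines would do it (a sketch; it would be placed before browser.close() in the example above):

await page.evaluate(() => {
  // Triggers the 'dialog' event
  alert('Hello puppeteer!');
  // Triggers the 'popup' event
  window.open('');
});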
As we probably know, evaluate just executes the supplied script within the page context.
In case you wonder - it’s possible to listen for custom events that are triggered in the page. Basically, it means defining the event
handler on the page’s window using the exposeFunction method. Check out this example to understand exactly how to implement it.
Operating Mouse
In general, the mouse controls the motion of a pointer in two dimensions within a viewport. Unsurprisingly, Puppeteer represents the
mouse by a class called Mouse .
Moreover, every Page instance has a Mouse - which allows performing operations such as changing its position and clicking within the
viewport.
(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.setViewport({ width: 1920, height: 1080 });
  await page.goto('https://pptr.dev');
  await page.waitForSelector('sidebar-component');
  // Moves the mouse to the center of the second sidebar link
  await page.mouse.move(40, 150);
  await browser.close();
})();
The scenario we simulate is moving the mouse over the second link of the left API sidebar. We set a viewport size and wait explicitly for
the sidebar component to ensure it’s really rendered.
Then, we invoke move in order to position the mouse with appropriate coordinates, that actually represent the center of the second link.
The next step is simply clicking on the link by the respective coordinates:
(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
await page.setViewport({ width: 1920, height: 1080 });
await page.goto('https://pptr.dev');
await page.waitForSelector('sidebar-component');
// Clicks the second link and triggers `mouseup` event after 1000ms
await page.mouse.click(40, 150, { delay: 1000 });
await browser.close();
})();
Instead of changing the position explicitly, we just use click - which basically triggers mousemove , mousedown and mouseup events,
one after another.
Note: We delay the pressing in order to demonstrate how to modify the click behavior, nothing more. It’s worth pointing out that we can
also control the mouse buttons (left, center, right) and the number of clicks.
Another nice thing is the ability to simulate a drag and drop behavior easily:
(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  // Grabs the mouse at one position, drags it to another, and releases
  await page.mouse.move(0, 0);
  await page.mouse.down();
  await page.mouse.move(100, 100);
  await page.mouse.up();
  await browser.close();
})();
All we do is use the Mouse methods to grab the mouse at one position, drag it to another, and afterward release it.
Operating Keyboard
The keyboard is another way to interact with the page, mostly for input purposes.
Similar to the mouse, Puppeteer represents the keyboard by a class called Keyboard - and every Page instance holds such an
instance.
(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');
  // Waits for the toolbar's search input to be rendered
  await page.waitForSelector('[type="search"]');
  await page.focus('[type="search"]');
  // Types an arbitrary text into the focused element
  await page.keyboard.type('cool!', { delay: 100 });
  await browser.close();
})();
Notice that we wait for the toolbar’s search input (instead of the API sidebar). Then, we focus the search input element and simply type
text into it.
(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');
  await page.focus('[type="search"]');
  await page.keyboard.type('Keyboard', { delay: 100 });
  // Presses ArrowDown twice and Enter to choose the third search result
  for (const key of ['ArrowDown', 'ArrowDown', 'Enter']) await page.keyboard.press(key);
  await browser.close();
})();
Basically, we press ArrowDown twice and Enter in order to choose the third search result.
By the way, it’s nice to know that there is a list of the key codes.
Taking Screenshots
Taking screenshots through Puppeteer is quite an easy mission.
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');
  await page.screenshot({ path: 'screenshot.png' });
  await browser.close();
})();
As we see, the screenshot method does all the charm - we just have to supply a path for the output.
Moreover, it’s also possible to control the type and quality, and even clip the image:
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1920, height: 1080 });
  await page.goto('https://pptr.dev');
  await page.waitForSelector('title');
  // Controls the type, quality and clipping of the image (illustrative values)
  await page.screenshot({
    path: 'screenshot.jpeg',
    type: 'jpeg',
    quality: 50,
    clip: { x: 0, y: 0, width: 960, height: 540 }
  });
  await browser.close();
})();
Generating PDF
Puppeteer is also useful for generating a PDF file from the page content.
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Navigates to the project README file
  await page.goto('https://github.com/GoogleChrome/puppeteer/blob/master/README.md');
  await page.pdf({ path: 'readme.pdf', format: 'A4' });
  await browser.close();
})();
Geolocation
(async () => {
  const browser = await puppeteer.launch({ devtools: true });
  const page = await browser.newPage();
  // Grants the geolocation permission to the default browser context
  const context = browser.defaultBrowserContext();
  await context.overridePermissions('https://pptr.dev', ['geolocation']);
  await page.goto('https://pptr.dev');
  // Overrides the current geolocation with the north pole's coordinates
  await page.setGeolocation({ latitude: 90, longitude: 0 });
  await page.waitForSelector('title');
  await browser.close();
})();
First, we grant the browser context the appropriate permissions. Then, we use setGeolocation to override the current geolocation
with the coordinates of the north pole.
Accessibility
The accessibility tree is a subset of the DOM that includes only elements with relevant information for assistive technologies such as
screen readers, voice controls and so on. Having the accessibility tree means we can analyze and test the accessibility support in the
page.
When it comes to Puppeteer, it enables capturing the current state of the tree:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');
  await page.waitForSelector('title');
  // Captures the current state of the accessibility tree
  const snapshot = await page.accessibility.snapshot();
  console.info(snapshot);
  await browser.close();
})();
The snapshot doesn’t pretend to be the full tree, but rather includes just the interesting nodes (those which are acceptable by most
assistive technologies).
Note: We can obtain the full tree through setting interestingOnly to false.
Code Coverage
The code coverage feature was introduced officially as part of Chrome v59 - and provides the ability to measure how much code is
being used, compared to the code that is actually loaded. In this manner, we can reduce the dead code and eventually speed up the
loading time of the pages.
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Gathers coverage information for JavaScript and CSS files
  await Promise.all([
    page.coverage.startJSCoverage(),
    page.coverage.startCSSCoverage()
  ]);
  await page.goto('https://pptr.dev');
  await page.waitForSelector('title');
  const [jsCoverage, cssCoverage] = await Promise.all([
    page.coverage.stopJSCoverage(),
    page.coverage.stopCSSCoverage()
  ]);
  // Calculates how many bytes are being used based on the coverage
  const calculateUsedBytes = (type, coverage) =>
    coverage.map(({ url, ranges, text }) => {
      let usedBytes = 0;
      ranges.forEach(range => (usedBytes += range.end - range.start - 1));
      return {
        url,
        type,
        usedBytes,
        totalBytes: text.length
      };
    });
  console.info([
    ...calculateUsedBytes('js', jsCoverage),
    ...calculateUsedBytes('css', cssCoverage)
  ]);
  await browser.close();
})();
We instruct Puppeteer to gather coverage information for JavaScript and CSS files until the page is loaded. Thereafter, we
define calculateUsedBytes which goes through the collected coverage data and calculates how many bytes are being used (based on
the coverage). At last, we merely invoke the created function on both coverages.
[
{
url: 'https://pptr.dev/',
type: 'js',
usedBytes: 149,
totalBytes: 150
},
{
url: 'https://www.googletagmanager.com/gtag/js?id=UA-106086244-2',
type: 'js',
usedBytes: 21018,
totalBytes: 66959
},
{
url: 'https://pptr.dev/index.js',
type: 'js',
usedBytes: 108922,
totalBytes: 141703
},
{
url: 'https://www.google-analytics.com/analytics.js',
type: 'js',
usedBytes: 19665,
totalBytes: 44287
},
{
url: 'https://pptr.dev/style.css',
type: 'css',
usedBytes: 5135,
totalBytes: 14326
}
]
As expected, the output contains usedBytes and totalBytes for each file.
Measuring Performance
One objective of measuring performance in terms of websites is to analyze how a page performs, during load and runtime - intending to
make it faster.
Navigation Timing is a Web API that provides information and metrics relating to page navigation and load events, and accessible
by window.performance .
In order to benefit from it, we should evaluate this API within the page context:
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');
  await page.waitForSelector('title');
  // Evaluates the Navigation Timing API within the page context
  const performanceTiming = JSON.parse(
    await page.evaluate(() => JSON.stringify(window.performance))
  );
  console.info(performanceTiming);
  await browser.close();
})();
The result is transformed into a comfy object, which looks like the following:
{
timeOrigin: 1562785571340.2559,
timing: {
navigationStart: 1562785571340,
unloadEventStart: 0,
unloadEventEnd: 0,
redirectStart: 0,
redirectEnd: 0,
fetchStart: 1562785571340,
domainLookupStart: 1562785571347,
domainLookupEnd: 1562785571348,
connectStart: 1562785571348,
connectEnd: 1562785571528,
secureConnectionStart: 1562785571425,
requestStart: 1562785571529,
responseStart: 1562785571607,
responseEnd: 1562785571608,
domLoading: 1562785571615,
domInteractive: 1562785571621,
domContentLoadedEventStart: 1562785571918,
domContentLoadedEventEnd: 1562785571926,
domComplete: 1562785572538,
loadEventStart: 1562785572538,
loadEventEnd: 1562785572538
},
navigation: {
type: 0,
redirectCount: 0
}
}
Now we can simply combine these metrics and calculate different load times over the loading timeline. For instance, loadEventEnd -
navigationStart represents the time since the navigation started until the page is loaded.
Note: All explanations about the different timings above are available here.
As for the runtime metrics - unlike load time - Puppeteer provides a neat API:
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');
  await page.waitForSelector('title');
  // Fetches the runtime metrics through the Chrome DevTools Protocol
  console.info(await page.metrics());
  await browser.close();
})();
The interesting metric above is apparently JSHeapUsedSize which represents, in other words, the actual memory usage of the page.
Notice that the result is actually the output of Performance.getMetrics , which is part of Chrome DevTools Protocol.
Chromium Tracing is a profiling tool that allows recording what the browser is really doing under the hood - with an emphasis on every
thread, tab, and process. It’s reflected in Chrome DevTools as part of the Timeline panel.
Furthermore, this tracing ability is also possible with Puppeteer - which, as we might guess, practically uses the Chrome DevTools
Protocol.
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Records a trace of the page load into trace.json
  await page.tracing.start({ path: 'trace.json' });
  await page.goto('https://pptr.dev');
  await page.waitForSelector('title');
  await page.tracing.stop();
  await browser.close();
})();
When the recording is stopped, a file called trace.json is created and contains the output that looks like:
{
"traceEvents":[
{
"pid": 21975,
"tid": 38147,
"ts": 17376402124,
"ph": "X",
"cat": "toplevel",
"name": "MessageLoop::RunTask",
"args": {
"src_file": "../../mojo/public/cpp/system/simple_watcher.cc",
"src_func": "Notify"
},
"dur": 68,
"tdur": 56,
"tts": 26330
},
// More trace events
]
}
Now that we have the trace file, we can open it using Chrome DevTools, chrome://tracing or Timeline Viewer.
Here’s the Performance panel after importing the trace file into the DevTools:
Puppeteer is a Node.js library for automating, testing and scraping web pages on top of the Chrome DevTools Protocol.
Puppeteer’s ecosystem provides a lightweight package, puppeteer-core , which is a library for browser automation that interacts
with any browser based on the DevTools protocol, without installing Chromium.
Puppeteer’s ecosystem provides a package, which is actually the full product, that installs Chromium in addition to the browser
automation library.
Puppeteer provides the ability to launch a Chromium browser instance or just connect an existing instance.
Puppeteer’s ecosystem provides an experimental package, puppeteer-firefox , that interacts with Firefox.
The browser context allows separating different sessions for a single browser instance.
Puppeteer launches the browser in a headless mode by default, which merely uses the command line. A headful mode, for
opening the browser with a GUI, is also supported.
Puppeteer provides several ways to debug our application in the browser, while debugging the process that executes
Puppeteer is the same as debugging a regular Node.js process.
Puppeteer allows navigating to a page by a URL and operating the page through the mouse and keyboard.
Puppeteer allows examining a page’s visibility, behavior and responsiveness on various devices.
Puppeteer allows taking screenshots of the page and generating PDFs from the content, easily.
Puppeteer allows analyzing and testing the accessibility support in the page.
Puppeteer allows speeding up the page performance by providing information about dead code, handy metrics and a manual
tracing ability.
And finally, Puppeteer is a powerful browser automation tool with a pretty simple API. A decent number of capabilities are supported,
including some we haven’t covered at all - and that’s why your next step could definitely be the official documentation. 😉
For this guide, we're going to assume you're interested in scraping keywords from a specific list of websites you're interested in.
To make the API calls, we'll use the request module for Node, which you can install from your shell with npm install request .
To get your API token, go to the 80legs Web Portal (https://portal.80legs.com), login, and click on your account name at the top-right. From
there, you'll see a link to the "My Account" page, which will take you to a page showing your token. Your API token will be a long string of
letters and numbers. Copy the API token or store it somewhere you can easily reference.
📘 For the rest of this document, we'll use AAAXXXXXXXXXXXX as a substitute example for your actual API token when showing
example API calls.
var request = require('request');

var request_options = {
  url: 'https://' + API_token + ':@api.80legs.com/v2/urllists/' + urllist_name,
  method: 'PUT',
  json: [
    'https://www.80legs.com',
    'https://www.datafiniti.co'
  ],
  headers: {
    'Content-Type': 'application/json'
  }
}

request(request_options, function(error, response, body) {
  if (error) {
    console.log(error);
  } else {
    console.log(body);
  }
});
This creates a URL list containing https://www.80legs.com and https://www.datafiniti.co . Any crawl using this URL list will start crawling from these two URLs.
You should get a response similar to this (although it may not look as pretty in your terminal):
JSON
{
location: 'urllists/AAAXXXXXXXXXXXX/urlList1',
name: 'urlList1js2',
user: 'AAAXXXXXXXXXXXX',
date_created: '2018-07-24T00:30:43.991Z',
date_updated: '2018-07-24T00:30:43.991Z',
id: '5b5673331141d3e8f728dde6'
}
You can read more about 80apps here. You can also view sample 80app code here. For now, we'll just use the code from the KeywordCollector
80app, since we're interested in scraping keywords for this example. Copy the code and save it to your local system as
keywordCollector.js .
Write the following code in your code editor (replace the dummy API token with your real API token and
/path/to/keywordCollector.js
with the actual path to this file on your local system):
JavaScript
var request = require('request');
var fs = require('fs');

// Reads the 80app code to upload
var app_content = fs.readFileSync('/path/to/keywordCollector.js');

var request_options = {
  url: 'https://' + API_token + ':@api.80legs.com/v2/apps/' + app_name,
  method: 'PUT',
  body: app_content,
  headers: {
    'Content-Type': 'application/octet-stream'
  }
}

request(request_options, function(error, response, body) {
  if (error) {
    console.log(error);
  } else {
    console.log(body);
  }
});
You should get a response similar to this (although it may not look as pretty in your terminal):
JSON
{
"location":"80apps/AAAXXXXXXXXXXXX/keywordCollector.js",
"name":"app1",
"user":"AAAXXXXXXXXXXXX",
"date_created":"2018-07-24T00:41:29.598Z",
"date_updated":"2018-07-24T00:41:29.598Z",
"id":"5b5675b91141d3e8f76d4fc7"
}
Write the following code in your code editor (replace the dummy API token with your real API token):
JavaScript
var request = require('request');
var request_options = {
  url: 'https://' + API_token + ':@api.80legs.com/v2/crawls/' + crawl_name,
  method: 'PUT',
  json: {
    "urllist": url_list,
    "app": app,
    "max_depth": max_depth,
    "max_urls": max_urls
  },
  headers: {
    'Content-Type': 'application/json'
  }
}

request(request_options, function(error, response, body) {
  if (error) {
    console.log(error);
  } else {
    console.log(body);
  }
});
You should get a response similar to this (although it may not look as pretty in your terminal):
JSON
{
date_updated: '2018-07-24T00:57:47.445Z',
date_created: '2018-07-24T00:57:47.245Z',
user: 'AAAXXXXXXXXXXXX',
name: 'crawl1',
urllist: 'urlList1',
max_urls: 1000,
date_started: '2018-07-24T00:57:47.444Z',
format: 'json',
urls_crawled: 0,
max_depth: 10,
depth: 0,
status: 'STARTED',
app: 'keywordCollector.js',
id: 1568124
}
max_depth — The maximum depth level for this crawl. Learn more about crawl depth here.
status — The current status of the crawl. Check the possible values here.
date_completed — The date the crawl completed. This will be empty until the crawl completes or is canceled.
date_started — The date the crawl started running. This can be different than date_created when the crawl starts off as queued.
6. Check on crawl status
As mentioned, there is a status field in the response body above. This field shows us the crawl has started, which means it's running.
Web crawls typically do not complete instantaneously, since they need to spend time requesting URLs and crawling links. In order to
tell if the crawl has finished, we can check on its status on a periodic basis.
Write the following code in your code editor (replace the dummy API token with your real API token):
JavaScript
var request = require('request');

var request_options = {
  url: 'https://' + API_token + ':@api.80legs.com/v2/crawls/' + crawl_name,
  method: 'GET'
}

request(request_options, function(error, response, body) {
  console.log(error || body);
});
JSON
{
date_updated: '2018-07-24T00:57:47.445Z',
date_created: '2018-07-24T00:57:47.245Z',
user: 'AAAXXXXXXXXXXXX',
name: 'crawl1',
urllist: 'urlList1',
max_urls: 1000,
date_started: '2018-07-24T00:57:47.444Z',
format: 'json',
urls_crawled: 1,
max_depth: 10,
depth: 0,
status: 'STARTED',
app: 'keywordCollector.js',
id: 1568124
}
If you keep checking, you'll see depth and urls_crawled gradually increasing. At some point, status will change to COMPLETED .
That's how you know the crawl has finished running.
7. Download results
After the crawl finishes, you'll want to download the result files. Result files are logs of all the data scraped during the crawl.
Once you see a status of COMPLETED for your crawl, use the following code to get the results (replace the dummy API token with your real API token):
JavaScript
var request = require('request');

var request_options = {
  url: 'https://' + API_token + ':@api.80legs.com/v2/results/' + crawl_name,
  method: 'GET'
}

request(request_options, function(error, response, body) {
  if (error) {
    console.log(response);
  } else {
    console.log(body);
  }
});
You should get a response similar to this (although it may not look as pretty in your terminal):
JSON
[
"http://datafiniti-voltron-results.s3.amazonaws.com/abcdefghijklmnopqrstuvwxyz012345/123456_1.txt?AWSAccessKeyId=AKIAIELL2XADVPVJZ4
]
Depending on how many URLs you crawl, and how much data you scrape from each URL, you'll see one or more links to result files in your
results response. 80legs will create a results file for every 100 MB of data you scrape, which means result files can post while your crawl is
running.
For very large crawls that take more than 7 days to run, we recommend checking your available results on a weekly basis. Result files will expire
7 days after they are created.
To download the result files, you can run code like this:
JavaScript
var request = require('request');
var fs = require('fs');
var request_options = {
url: 'https://' + API_token + ':@api.80legs.com/v2/results/' + crawl_name,
method: 'GET'
}
JSON
[
{
"url": "https://www.80legs.com",
"result": "...."
},
{
"url": "https://www.datafiniti.co",
"result": "...."
},
...
]
Note that the file is a large JSON object. Specifically, it's an array of objects, where each object consists of a url field and a result
field. The result field will contain a string related to the data you've scraped, which, if you remember, is determined by your 80app.
In order to process these results files, you can use code similar to this:
JavaScript
var fs = require('fs');

function processData(result) {
  // Edit these lines to do more with the data.
  console.log(result);
}

// Reads a downloaded result file (file name is arbitrary) and processes each scraped record
JSON.parse(fs.readFileSync('results_0.txt')).forEach(function(record) {
  processData(record.result);
});

Edit the processData function above to do whatever you'd like with the data, such as store the data in a database, write it out to your console, etc.
📘 For this guide, we have created separate code files or blocks for each step of the crawl creation process. We've done this so you can
understand the process better. In practice, it's probably best to combine the code into a single application to improve
maintainability and usability.
Creating Sequelize Associations with the Sequelize CLI
Bruno Galvao
Apr 16, 2020
Sequelize is a popular, easy-to-use JavaScript object relational mapping
(ORM) tool that works with SQL databases. It’s fairly straightforward to
start a new project using the Sequelize CLI, but to truly take advantage of
Sequelize’s capabilities, you’ll want to define relationships between your
models.
Let’s start by installing Postgres, Sequelize, and the Sequelize CLI in a new
project folder:
mkdir sequelize-associations
cd sequelize-associations
npm init -y
npm install sequelize pg
npm install --save-dev sequelize-cli
Next, let’s initialize a Sequelize project, then open the whole directory in
our code editor:
npx sequelize-cli init
code .
To learn more about any of the Sequelize CLI commands below, see:
Getting Started with Sequelize CLI
Find config/config.json and edit it so each environment connects to a Postgres database:
{
"development": {
"database": "sequelize_associations_development",
"host": "127.0.0.1",
"dialect": "postgres"
},
"test": {
"database": "sequelize_associations_test",
"host": "127.0.0.1",
"dialect": "postgres"
},
"production": {
"database": "sequelize_associations_production",
"host": "127.0.0.1",
"dialect": "postgres"
}
}
Cool, now we can tell Sequelize to create the database:
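npx sequelize-cli db:create
If you haven't generated the User model yet, a command along these lines (with the attributes used by the seed data below) creates both the model and its migration:
npx sequelize-cli model:generate --name User --attributes firstName:string,lastName:string,email:string,password:string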
Now we’ll execute our migration to create the Users table in our database:
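npx sequelize-cli db:migrate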
Next, generate a seed file (for example, npx sequelize-cli seed:generate --name demo-user ). You will see a new file in /seeders . In that file, paste the following code to create our first user:
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.bulkInsert('Users', [{
      firstName: 'John',
      lastName: 'Doe',
      email: '[email protected]',
      password: '$321!pass!123$',
      createdAt: new Date(),
      updatedAt: new Date()
    }], {});
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.bulkDelete('Users', null, {});
  }
};
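Now run the seed:
npx sequelize-cli db:seed:all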
Drop into psql and query the database to see the Users table:
psql sequelize_associations_development
SELECT * FROM "Users";
Defining associations
Great! We’ve got a working User model, but our John Doe seems a little
bored. Let’s give John something to do by creating a Task model:
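npx sequelize-cli model:generate --name Task --attributes title:string,userId:integer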
Just as with the User model above, this Sequelize CLI command will create
both a model file and a migration based on the attributes we specified. But
this time, we’ll need to edit both in order to tie our models together.
First, find task.js in the /models subdirectory within your project
directory. This is the Sequelize model for tasks, and you’ll see that the
sequelize.define() method sets up title and userId as attributes, just as
we specified above.
Below that, you’ll see Task.associate . It’s currently empty, but this is where
we’ll actually tie each task to a userId . Edit your file to look like this:
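Here's a sketch of the edited model, assuming the standard CLI-generated boilerplate:

'use strict';
module.exports = (sequelize, DataTypes) => {
  const Task = sequelize.define('Task', {
    title: DataTypes.STRING,
    userId: DataTypes.INTEGER
  }, {});
  Task.associate = function(models) {
    // Every task belongs to exactly one user
    Task.belongsTo(models.User, { foreignKey: 'userId', onDelete: 'CASCADE' });
  };
  return Task;
};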
(Setting onDelete to 'CASCADE' configures our model so that if a user is deleted, the user’s tasks
will be deleted too.)
We also need to change our User model to reflect the other side of this
relationship. Find user.js and change the section under User.associate so
that your file looks like this:
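A matching sketch for user.js, again assuming the CLI-generated boilerplate:

'use strict';
module.exports = (sequelize, DataTypes) => {
  const User = sequelize.define('User', {
    firstName: DataTypes.STRING,
    lastName: DataTypes.STRING,
    email: DataTypes.STRING,
    password: DataTypes.STRING
  }, {});
  User.associate = function(models) {
    // A user can have many tasks
    User.hasMany(models.Task, { foreignKey: 'userId' });
  };
  return User;
};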
We still have to make one more change to set up our relationship in the
database. In your project’s /migrations folder, you should see a file whose
name ends with create-task.js . Change the object labeled userId so that
your file looks like the code below:
module.exports = {
up: (queryInterface, Sequelize) => {
return queryInterface.createTable('Tasks', {
id: {
allowNull: false,
autoIncrement: true,
primaryKey: true,
type: Sequelize.INTEGER
},
title: {
type: Sequelize.STRING
},
userId: {
type: Sequelize.INTEGER,
onDelete: 'CASCADE',
references: {
model: 'Users',
key: 'id',
as: 'userId',
}
},
createdAt: {
allowNull: false,
type: Sequelize.DATE
},
updatedAt: {
allowNull: false,
type: Sequelize.DATE
}
});
},
down: (queryInterface, Sequelize) => {
return queryInterface.dropTable('Tasks');
}
};
The references section will set up the Tasks table in our database to reflect
the same relationships we described above. Now we can run our migration:
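npx sequelize-cli db:migrate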
Now our John Doe is ready to take on tasks — but John still doesn’t have
any actual tasks assigned. Let’s create a task seed file:
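npx sequelize-cli seed:generate --name demo-task
(The seed file's name is arbitrary.)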
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.bulkInsert('Tasks', [{
      title: 'Build an app',
      userId: 1,
      createdAt: new Date(),
      updatedAt: new Date()
    }], {});
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.bulkDelete('Tasks', null, {});
  }
};
We’ll set userId to 1 so that the task will belong to the user we created
earlier. Now we can populate the database.
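npx sequelize-cli db:seed:all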
touch query.js
The top of query.js imports our User and Task models, along with Sequelize. After that, we
include a query function that returns every User with its associated Tasks, and we finish by calling run() .
Now it’s clear that our John Doe has a project to work on! We can use the
same method to include the User when our query finds a Task . Paste the following into query.js and call it from run() :
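// Returns every Task together with its owning User (sketch; the function name is my own)
const findAllTasksWithUser = async () => {
  const tasks = await Task.findAll({ include: [User] });
  console.log(JSON.stringify(tasks, null, 2));
};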
You can also include other options alongside include to make more specific
queries. For example, below we’ll use the where option to find only the
users named John while still returning the associated tasks for each:
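A sketch of that query (the function name matches the call below):

const findAllJohnsWithTasks = async () => {
  const users = await User.findAll({
    where: { firstName: 'John' },
    include: [Task]
  });
  console.log(JSON.stringify(users, null, 2));
};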
Paste the above into your query.js and change const run to call
findAllJohnsWithTasks() to try it out.
Now that you know how to use model associations in Sequelize, you can
design your application to deliver the nested data you need. For your next
step, you might decide to include more robust seed data using Faker or
integrate your Sequelize application with Express to create a Node.js
server!
This article was co-authored with Jeremy Rose, a software engineer, editor,
and writer based in New York City.
Resources
https://sequelize.org/master/manual/associations.html
https://sequelize.org/master/manual/querying.html
The console module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers.
The module exports two specific components:
A Console class with methods such as console.log() , console.error() and console.warn() that can be used to write to any Node.js stream.
A global console instance configured to write to process.stdout and process.stderr . The global console can be used without calling require('console') .
Warning: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like
all other Node.js streams. See the note on process I/O for more information.
console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(new Error('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
// Error: Whoops, something bad happened
// at [eval]:5:15
// at Script.runInThisContext (node:vm:132:18)
// at Object.runInThisContext (node:vm:309:38)
// at node:internal/process/execution:77:19
// at [eval]-wrapper:6:22
// at evalScript (node:internal/process/execution:76:60)
// at node:internal/main/eval_string:23:3
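The myConsole in the next example assumes a custom Console instance writing to files, created along these lines:

const fs = require('fs');
const out = fs.createWriteStream('./stdout.log');
const err = fs.createWriteStream('./stderr.log');
// Custom simple logger that writes to files instead of the terminal
const myConsole = new console.Console({ stdout: out, stderr: err });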
myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(new Error('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err
Class: Console
The Console class can be used to create a simple logger with configurable output streams and can be accessed using require('console').Console or console.Console . Its constructor accepts an options object with the following properties:
stdout <stream.Writable>
stderr <stream.Writable>
ignoreErrors <boolean> Ignore errors when writing to the underlying streams. Default: true .
colorMode <boolean> | <string> Set color support for this Console instance. Setting to true enables coloring while inspecting values. Setting
to false disables coloring while inspecting values. Setting to 'auto' makes color support depend on the value of the isTTY property and the value
returned by getColorDepth() on the respective stream. This option can not be used, if inspectOptions.colors is set as well. Default: 'auto' .
inspectOptions <Object> Specifies options that are passed along to util.inspect() .
Creates a new Console with one or two writable stream instances. stdout is a writable stream to print log or info output. stderr is used for warning or error output.
If stderr is not provided, stdout is used for stderr .
const count = 5;
myConsole.log('count: %d', count);
// In stdout.log: count 5
The global console is a special Console whose output is sent to process.stdout and process.stderr . It is equivalent to calling:
new Console({ stdout: process.stdout, stderr: process.stderr });
console.assert(value[, ...message])
value <any> The value tested for being truthy.
...message <any> All arguments besides value are used as error message.
console.assert() writes a message if value is falsy or omitted. It only writes a message and does not otherwise affect execution. The output always starts with "Assertion failed" . If provided, message is formatted using util.format() .
console.assert();
// Assertion failed
console.clear()
When stdout is a TTY, calling console.clear() will attempt to clear the TTY. When stdout is not a TTY, this method does nothing.
The specific operation of console.clear() can vary across operating systems and terminal types. For most Linux operating systems, console.clear() operates
similarly to the clear shell command. On Windows, console.clear() will clear only the output in the current terminal viewport for the Node.js binary.
console.count([label])
label <string> The display label for the counter. Default: 'default' .
Maintains an internal counter specific to label and outputs to stdout the number of times console.count() has been called with the given label .
> console.count()
default: 1
undefined
> console.count('default')
default: 2
undefined
> console.count('abc')
abc: 1
undefined
> console.count('xyz')
xyz: 1
undefined
> console.count('abc')
abc: 2
undefined
> console.count()
default: 3
undefined
>
console.countReset([label])
label <string> The display label for the counter. Default: 'default' .
Resets the internal counter specific to label .
> console.count('abc');
abc: 1
undefined
> console.countReset('abc');
undefined
> console.count('abc');
abc: 1
undefined
>
console.debug(data[, ...args])
data <any>
...args <any>
The console.debug() function is an alias for console.log() .
console.dir(obj[, options])
obj <any>
options <Object>
showHidden <boolean> If true then the object's non-enumerable and symbol properties will be shown too. Default: false .
depth <number> Tells util.inspect() how many times to recurse while formatting the object. This is useful for inspecting large complicated objects. To make it recurse indefinitely, pass null . Default: 2 .
colors <boolean> If true , then the output will be styled with ANSI color codes. Colors are customizable. Default: false .
Uses util.inspect() on obj and prints the resulting string to stdout . This function bypasses any custom inspect() function defined on obj .
console.dirxml(...data)
...data <any>
This method calls console.log() passing it the arguments received. This method does not produce any XML formatting.
console.error([data][, ...args])
data <any>
...args <any>
Prints to stderr with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar
to printf(3) (the arguments are all passed to util.format() ).
const code = 5;
console.error('error', code);
// Prints: error 5, to stderr
If formatting elements (e.g. %d ) are not found in the first string then util.inspect() is called on each argument and the resulting string values are concatenated.
See util.format() for more information.
console.group([...label])
...label <any>
Increases indentation of subsequent lines by spaces for groupIndentation length.
If one or more label s are provided, those are printed first without the additional indentation.
console.groupCollapsed()
An alias for console.group() .
console.groupEnd()
Decreases indentation of subsequent lines by spaces for groupIndentation length.
console.info([data][, ...args])
data <any>
...args <any>
The console.info() function is an alias for console.log() .
console.log([data][, ...args])
data <any>
...args <any>
Prints to stdout with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar
to printf(3) (the arguments are all passed to util.format() ).
const count = 5;
console.log('count:', count);
// Prints: count: 5, to stdout
console.table(tabularData[, properties])
tabularData <any>
Try to construct a table with the columns of the properties of tabularData (or use properties ) and rows of tabularData and log it. Falls back to just logging the
argument if it can’t be parsed as tabular.
// These can't be parsed as tabular data
console.table(Symbol());
// Symbol()
console.table(undefined);
// undefined
console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }]);
// ┌─────────┬─────┬─────┐
// │ (index) │ a │ b │
// ├─────────┼─────┼─────┤
// │ 0 │ 1 │ 'Y' │
// │ 1 │ 'Z' │ 2 │
// └─────────┴─────┴─────┘
console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }], ['a']);
// ┌─────────┬─────┐
// │ (index) │ a │
// ├─────────┼─────┤
// │ 0 │ 1 │
// │ 1 │ 'Z' │
// └─────────┴─────┘
console.time([label])
label <string> Default: 'default'
Starts a timer that can be used to compute the duration of an operation. Timers are identified by a unique label . Use the same label when
calling console.timeEnd() to stop the timer and output the elapsed time in suitable time units to stdout . For example, if the elapsed time is
3869ms, console.timeEnd() displays "3.87s".
console.timeEnd([label])
label <string> Default: 'default'
Stops a timer that was previously started by calling console.time() and prints the result to stdout :
console.time('100-elements');
for (let i = 0; i < 100; i++) {}
console.timeEnd('100-elements');
// prints 100-elements: 225.438ms
console.timeLog([label][, ...data])
label <string> Default: 'default'
...data <any>
For a timer that was previously started by calling console.time() , prints the elapsed time and other data arguments to stdout :
console.time('process');
const value = expensiveProcess1(); // Returns 42
console.timeLog('process', value);
// Prints "process: 365.227ms 42".
doExpensiveProcess2(value);
console.timeEnd('process');
console.trace([message][, ...args])
message <any>
...args <any>
Prints to stderr the string 'Trace: ' , followed by the util.format() formatted message and stack trace to the current position in the code.
console.trace('Show me');
// Trace: Show me
// at repl:2:9
// at REPLServer.defaultEval (repl.js:248:27)
// at bound (domain.js:287:14)
// at REPLServer.<anonymous> (repl.js:412:12)
// at emitOne (events.js:82:20)
// at REPLServer.emit (events.js:169:7)
// at REPLServer.Interface._onLine (readline.js:210:10)
// at REPLServer.Interface._line (readline.js:549:8)
// at REPLServer.Interface._ttyWrite (readline.js:826:14)
console.warn([data][, ...args])
data <any>
...args <any>
The console.warn() function is an alias for console.error() .
console.profile([label])
label <string>
This method does not display anything unless used in the inspector. The console.profile() method starts a JavaScript CPU profile with an optional label
until console.profileEnd() is called. The profile is then added to the Profile panel of the inspector.
console.profile('MyLabel');
// Some code
console.profileEnd('MyLabel');
console.profileEnd([label])
label <string>
This method does not display anything unless used in the inspector. Stops the current JavaScript CPU profiling session if one has been started and prints the report
to the Profiles panel of the inspector. See console.profile() for an example.
If this method is called without a label, the most recently started profile is stopped.
console.timeStamp([label])
label <string>
This method does not display anything unless used in the inspector. The console.timeStamp() method adds an event with the label 'label' to the Timeline panel of
the inspector.
Path
Stability: 2 - Stable
The path module provides utilities for working with file and directory paths. It can be accessed using:
const path = require('path');
Windows vs. POSIX
The default operation of the path module varies based on the operating system on which a Node.js application is running. Specifically, when running on a Windows
operating system, the path module will assume that Windows-style paths are being used.
For example, using path.basename() might yield different results on POSIX and Windows:
On POSIX:
path.basename('C:\\temp\\myfile.html');
// Returns: 'C:\\temp\\myfile.html'
On Windows:
path.basename('C:\\temp\\myfile.html');
// Returns: 'myfile.html'
To achieve consistent results when working with Windows file paths on any operating system, use path.win32 :
path.win32.basename('C:\\temp\\myfile.html');
// Returns: 'myfile.html'
To achieve consistent results when working with POSIX file paths on any operating system, use path.posix :
path.posix.basename('/tmp/myfile.html');
// Returns: 'myfile.html'
On Windows Node.js follows the concept of per-drive working directory. This behavior can be observed when using a drive path without a backslash. For
example, path.resolve('C:\\') can potentially return a different result than path.resolve('C:') . For more information, see this MSDN page .
path.basename(path[, ext])
path <string>
Returns: <string>
The path.basename() method returns the last portion of a path , similar to the Unix basename command. Trailing directory separators are ignored, see path.sep .
path.basename('/foo/bar/baz/asdf/quux.html');
// Returns: 'quux.html'
path.basename('/foo/bar/baz/asdf/quux.html', '.html');
// Returns: 'quux'
Although Windows usually treats file names, including file extensions, in a case-insensitive manner, this function does not. For
example, C:\\foo.html and C:\\foo.HTML refer to the same file, but basename treats the extension as a case-sensitive string:
path.win32.basename('C:\\foo.html', '.html');
// Returns: 'foo'
path.win32.basename('C:\\foo.HTML', '.html');
// Returns: 'foo.HTML'
A TypeError is thrown if path is not a string or if ext is given and is not a string.
path.delimiter
<string>
Provides the platform-specific path delimiter:
; for Windows
: for POSIX
For example, on POSIX:
console.log(process.env.PATH);
// Prints: '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'
process.env.PATH.split(path.delimiter);
// Returns: ['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']
On Windows:
console.log(process.env.PATH);
// Prints: 'C:\Windows\system32;C:\Windows;C:\Program Files\node\'
process.env.PATH.split(path.delimiter);
// Returns ['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\node\\']
path.dirname(path)
path <string>
Returns: <string>
The path.dirname() method returns the directory name of a path , similar to the Unix dirname command. Trailing directory separators are ignored, see path.sep .
path.dirname('/foo/bar/baz/asdf/quux');
// Returns: '/foo/bar/baz/asdf'
path.extname(path)
path <string>
Returns: <string>
The path.extname() method returns the extension of the path , from the last occurrence of the . (period) character to end of string in the last portion of the path .
If there is no . in the last portion of the path , or if there are no . characters other than the first character of the basename of path (see path.basename() ) , an
empty string is returned.
path.extname('index.html');
// Returns: '.html'
path.extname('index.coffee.md');
// Returns: '.md'
path.extname('index.');
// Returns: '.'
path.extname('index');
// Returns: ''
path.extname('.index');
// Returns: ''
path.extname('.index.md');
// Returns: '.md'
path.format(pathObject)
pathObject <Object>
dir <string>
root <string>
base <string>
name <string>
ext <string>
Returns: <string>
The path.format() method returns a path string from an object. This is the opposite of path.parse() .
When providing properties to the pathObject remember that there are combinations where one property has priority over another:
pathObject.root is ignored if pathObject.dir is provided
pathObject.ext and pathObject.name are ignored if pathObject.base exists
For example, on POSIX:
// If `dir`, `root` and `base` are provided,
// `${dir}${path.sep}${base}`
// will be returned. `root` is ignored.
path.format({
root: '/ignored',
dir: '/home/user/dir',
base: 'file.txt'
});
// Returns: '/home/user/dir/file.txt'
// `root` will be used if `dir` is not specified.
// `ext` will be ignored if `base` exists.
path.format({
root: '/',
base: 'file.txt',
ext: 'ignored'
});
// Returns: '/file.txt'
// `name` + `ext` will be used if `base` is not specified.
path.format({
root: '/',
name: 'file',
ext: '.txt'
});
// Returns: '/file.txt'
On Windows:
path.format({
dir: 'C:\\path\\dir',
base: 'file.txt'
});
// Returns: 'C:\\path\\dir\\file.txt'
path.isAbsolute(path)
path <string>
Returns: <boolean>
The path.isAbsolute() method determines if path is an absolute path.
If the given path is a zero-length string, false will be returned.
For example, on POSIX:
path.isAbsolute('/baz/..'); // true
path.isAbsolute('qux/'); // false
path.isAbsolute('.'); // false
On Windows:
path.isAbsolute('//server'); // true
path.isAbsolute('\\\\server'); // true
path.isAbsolute('C:/foo/..'); // true
path.isAbsolute('C:\\foo\\..'); // true
path.isAbsolute('bar\\baz'); // false
path.isAbsolute('bar/baz'); // false
path.isAbsolute('.'); // false
path.join([...paths])
...paths <string> A sequence of path segments
Returns: <string>
The path.join() method joins all given path segments together using the platform-specific separator as a delimiter, then normalizes the resulting path.
Zero-length path segments are ignored. If the joined path string is a zero-length string then '.' will be returned, representing the current working directory.
path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');
// Returns: '/foo/bar/baz/asdf'
path.join('foo', {}, 'bar');
// Throws 'TypeError: Path must be a string. Received {}'
A TypeError is thrown if any of the path segments is not a string.
path.normalize(path)
path <string>
Returns: <string>
The path.normalize() method normalizes the given path , resolving '..' and '.' segments.
When multiple, sequential path segment separation characters are found (e.g. / on POSIX and either \ or / on Windows), they are replaced by a single instance of
the platform-specific path segment separator ( / on POSIX and \ on Windows). Trailing separators are preserved.
If the path is a zero-length string, '.' is returned, representing the current working directory.
path.normalize('/foo/bar//baz/asdf/quux/..');
// Returns: '/foo/bar/baz/asdf'
On Windows:
path.normalize('C:\\temp\\\\foo\\bar\\..\\');
// Returns: 'C:\\temp\\foo\\'
Since Windows recognizes multiple path separators, both separators will be replaced by instances of the Windows preferred separator ( \ ):
path.win32.normalize('C:////temp\\\\/\\/\\/foo/bar');
// Returns: 'C:\\temp\\foo\\bar'
path.parse(path)
path <string>
Returns: <Object>
The path.parse() method returns an object whose properties represent significant elements of the path . Trailing directory separators are ignored, see path.sep .
dir <string>
root <string>
base <string>
name <string>
ext <string>
path.parse('/home/user/dir/file.txt');
// Returns:
// { root: '/',
// dir: '/home/user/dir',
// base: 'file.txt',
// ext: '.txt',
// name: 'file' }
┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
"  /    home/user/dir / file  .txt "
└──────┴──────────────┴──────┴─────┘
(All spaces in the "" line should be ignored. They are purely for formatting.)
On Windows:
path.parse('C:\\path\\dir\\file.txt');
// Returns:
// { root: 'C:\\',
// dir: 'C:\\path\\dir',
// base: 'file.txt',
// ext: '.txt',
// name: 'file' }
┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
" C:\      path\dir   \ file  .txt "
└──────┴──────────────┴──────┴─────┘
(All spaces in the "" line should be ignored. They are purely for formatting.)
path.posix
<Object>
The path.posix property provides access to POSIX specific implementations of the path methods.
path.relative(from, to)
from <string>
to <string>
Returns: <string>
The path.relative() method returns the relative path from from to to based on the current working directory. If from and to each resolve to the same path (after
calling path.resolve() on each), a zero-length string is returned.
If a zero-length string is passed as from or to , the current working directory will be used instead of the zero-length strings.
path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb');
// Returns: '../../impl/bbb'
On Windows:
path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb');
// Returns: '..\\..\\impl\\bbb'
A TypeError is thrown if either from or to is not a string.
path.resolve([...paths])
...paths <string> A sequence of paths or path segments
Returns: <string>
The path.resolve() method resolves a sequence of paths or path segments into an absolute path.
The given sequence of paths is processed from right to left, with each subsequent path prepended until an absolute path is constructed. For instance, given the
sequence of path segments: /foo , /bar , baz , calling path.resolve('/foo', '/bar', 'baz') would return /bar/baz because 'baz' is not an absolute path but '/bar' + '/' + 'baz' is.
If, after processing all given path segments, an absolute path has not yet been generated, the current working directory is used.
The resulting path is normalized and trailing slashes are removed unless the path is resolved to the root directory.
If no path segments are passed, path.resolve() will return the absolute path of the current working directory.
path.resolve('/foo/bar', './baz');
// Returns: '/foo/bar/baz'
path.resolve('/foo/bar', '/tmp/file/');
// Returns: '/tmp/file'
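If relative segments remain, the result is based on the current working directory:
path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif');
// If the current working directory is /home/myself/node,
// this returns '/home/myself/node/wwwroot/static_files/gif/image.gif'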
path.sep
<string>
Provides the platform-specific path segment separator:
\ on Windows
/ on POSIX
For example, on POSIX:
'foo/bar/baz'.split(path.sep);
// Returns: ['foo', 'bar', 'baz']
On Windows:
'foo\\bar\\baz'.split(path.sep);
// Returns: ['foo', 'bar', 'baz']
On Windows, both the forward slash ( / ) and backward slash ( \ ) are accepted as path segment separators; however, the path methods only add backward slashes
( \ ).
path.toNamespacedPath(path)
path <string>
Returns: <string>
On Windows systems only, returns an equivalent namespace-prefixed path for the given path . If path is not a string, path will be returned without modifications.
This method is meaningful only on Windows systems. On POSIX systems, the method is non-operational and always returns path without modifications.
path.win32
<Object>
The path.win32 property provides access to Windows-specific implementations of the path methods.
Timers
Stability: 2 - Stable
The timer module exposes a global API for scheduling functions to be called at some future period of time. Because the timer functions are globals, there is no need to call
require('timers') to use the API.
The timer functions within Node.js implement a similar API as the timers API provided by Web Browsers but use a different internal implementation that is built around the Node.js Event
Loop .
Class: Immediate
This object is created internally and is returned from setImmediate() . It can be passed to clearImmediate() in order to cancel the scheduled actions.
By default, when an immediate is scheduled, the Node.js event loop will continue running as long as the immediate is active. The Immediate object returned by setImmediate() exports
both immediate.ref() and immediate.unref() functions that can be used to control this default behavior.
immediate.hasRef()
Returns: <boolean>
If true, the Immediate object will keep the Node.js event loop active.
immediate.ref()
Returns: <Immediate> a reference to immediate
When called, requests that the Node.js event loop not exit so long as the Immediate is active. Calling immediate.ref() multiple times will have no effect.
By default, all Immediate objects are "ref'ed", making it normally unnecessary to call immediate.ref() unless immediate.unref() had been called previously.
immediate.unref()
Returns: <Immediate> a reference to immediate
When called, the active Immediate object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit before the
Immediate object's callback is invoked. Calling immediate.unref() multiple times will have no effect.
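For example (a minimal sketch):
const im = setImmediate(() => console.log('may never run'));
im.unref(); // the process may now exit before the callback fires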
Class: Timeout
This object is created internally and is returned from setTimeout() and setInterval() . It can be passed to either clearTimeout() or clearInterval() in order to cancel the scheduled
actions.
By default, when a timer is scheduled using either setTimeout() or setInterval() , the Node.js event loop will continue running as long as the timer is active. Each of the Timeout objects
returned by these functions export both timeout.ref() and timeout.unref() functions that can be used to control this default behavior.
timeout.hasRef()
Returns: <boolean>
If true, the Timeout object will keep the Node.js event loop active.
timeout.ref()
Returns: <Timeout> a reference to timeout
When called, requests that the Node.js event loop not exit so long as the Timeout is active. Calling timeout.ref() multiple times will have no effect.
By default, all Timeout objects are "ref'ed", making it normally unnecessary to call timeout.ref() unless timeout.unref() had been called previously.
timeout.refresh()
Returns: <Timeout> a reference to timeout
Sets the timer's start time to the current time, and reschedules the timer to call its callback at the previously specified duration adjusted to the current time. This is useful for refreshing a
timer without allocating a new JavaScript object.
Using this on a timer that has already called its callback will reactivate the timer.
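For example (a minimal sketch): restarting an idle timeout on activity without allocating a new Timeout object:
const t = setTimeout(() => console.log('timed out'), 5000);
// ...later, on some activity:
t.refresh(); // the 5000 ms countdown starts over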
timeout.unref()
Returns: <Timeout> a reference to timeout
When called, the active Timeout object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit before the
Timeout object's callback is invoked. Calling timeout.unref() multiple times will have no effect.
Calling timeout.unref() creates an internal timer that will wake the Node.js event loop. Creating too many of these can adversely impact performance of the Node.js application.
timeout[Symbol.toPrimitive]()
Returns: <integer> a number that can be used to reference this timeout
Coerce a Timeout to a primitive. The primitive can be used to clear the Timeout . The primitive can only be used in the same thread where the timeout was created. Therefore, to use it
across worker_threads it must first be passed to the correct thread. This allows enhanced compatibility with browser setTimeout() and setInterval() implementations.
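For example (a minimal sketch):
const t = setTimeout(() => console.log('tick'), 1000);
const id = +t; // coerce the Timeout to its primitive id
clearTimeout(id); // the primitive can be used to clear the timeout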
Scheduling timers
A timer in Node.js is an internal construct that calls a given function after a certain period of time. When a timer's function is called varies depending on which method was used to create the
timer and what other work the Node.js event loop is doing.
setImmediate(callback[, ...args])
callback <Function> The function to call at the end of this turn of the Node.js Event Loop
...args <any> Optional arguments to pass when the callback is called.
Returns: <Immediate> for use with clearImmediate()
Schedules the "immediate" execution of the callback after I/O events' callbacks.
When multiple calls to setImmediate() are made, the callback functions are queued for execution in the order in which they are created. The entire callback queue is processed every
event loop iteration. If an immediate timer is queued from inside an executing callback, that timer will not be triggered until the next event loop iteration.
This method has a custom variant for promises that is available using util.promisify() :
const util = require('util');
const setImmediatePromise = util.promisify(setImmediate);
setImmediatePromise('foobar').then((value) => {
// value === 'foobar' (passing values is optional)
// This is executed after all I/O callbacks.
});
setInterval(callback[, delay[, ...args]])
callback <Function> The function to call when the timer elapses.
delay <number> The number of milliseconds to wait before calling the callback . Default: 1 .
...args <any> Optional arguments to pass when the callback is called.
Returns: <Timeout> for use with clearInterval()
Schedules repeated execution of callback every delay milliseconds.
When delay is larger than 2147483647 or less than 1 , the delay will be set to 1 . Non-integer delays are truncated to an integer.
setTimeout(callback[, delay[, ...args]])
callback <Function> The function to call when the timer elapses.
delay <number> The number of milliseconds to wait before calling the callback . Default: 1 .
...args <any> Optional arguments to pass when the callback is called.
Returns: <Timeout> for use with clearTimeout()
Schedules execution of a one-time callback after delay milliseconds.
The callback will likely not be invoked in precisely delay milliseconds. Node.js makes no guarantees about the exact timing of when callbacks will fire, nor of their ordering. The callback
will be called as close as possible to the time specified.
When delay is larger than 2147483647 or less than 1 , the delay will be set to 1 . Non-integer delays are truncated to an integer.
This method has a custom variant for promises that is available using util.promisify() :
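For example (mirroring the setImmediate sketch above):
const util = require('util');
const setTimeoutPromise = util.promisify(setTimeout);
setTimeoutPromise(40, 'foobar').then((value) => {
// value === 'foobar' (passing values is optional)
// This is executed after about 40 milliseconds.
});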
Cancelling timers
The setImmediate() , setInterval() , and setTimeout() methods each return objects that represent the scheduled timers. These can be used to cancel the timer and prevent it from
triggering.
For the promisified variants of setImmediate() and setTimeout() , an AbortController may be used to cancel the timer. When canceled, the returned Promises will be rejected with an
'AbortError' .
For setImmediate() :
const { setImmediate: setImmediatePromise } = require('timers/promises');
const ac = new AbortController();
const signal = ac.signal;
setImmediatePromise('foobar', { signal })
.then(console.log)
.catch((err) => {
if (err.name === 'AbortError')
console.log('The immediate was aborted');
});
ac.abort();
For setTimeout() :
const { setTimeout: setTimeoutPromise } = require('timers/promises');
const ac = new AbortController();
const signal = ac.signal;
setTimeoutPromise(1000, 'foobar', { signal })
.then(console.log)
.catch((err) => {
if (err.name === 'AbortError')
console.log('The timeout was aborted');
});
ac.abort();
clearImmediate(immediate)
immediate <Immediate> An Immediate object as returned by setImmediate() .
Cancels an Immediate object created by setImmediate() .
clearInterval(timeout)
timeout <Timeout> A Timeout object as returned by setInterval() .
Cancels a Timeout object created by setInterval() .
clearTimeout(timeout)
timeout <Timeout> A Timeout object as returned by setTimeout() .
Cancels a Timeout object created by setTimeout() .
Timers Promises API
Stability: 1 - Experimental
The timers/promises API provides an alternative set of timer functions that return Promise objects. The API is accessible via require('timers/promises') .
timersPromises.setTimeout([delay[, value[, options]]])
delay <number> The number of milliseconds to wait before fulfilling the promise. Default: 1 .
value <any> A value with which the promise is fulfilled.
options <Object>
ref <boolean> Set to false to indicate that the scheduled Timeout should not require the Node.js event loop to remain active. Default: true .
signal <AbortSignal> An optional AbortSignal that can be used to cancel the scheduled Timeout .
timersPromises.setImmediate([value[, options]])
value <any> A value with which the Promise is resolved.
options <Object>
ref <boolean> Set to false to indicate that the scheduled Immediate should not require the Node.js event loop to remain active. Default: true .
signal <AbortSignal> An optional AbortSignal that can be used to cancel the scheduled Immediate .
timersPromises.setInterval([delay[, value[, options]]])
Returns an async iterator that generates values in an interval of delay ms.
delay <number> The number of milliseconds to wait between iterations. Default: 1 .
value <any> A value with which the iterator returns.
options <Object>
ref <boolean> Set to false to indicate that the scheduled Timeout between iterations should not require the Node.js event loop to remain active. Default: true .
signal <AbortSignal> An optional AbortSignal that can be used to cancel the scheduled Timeout between operations.
(async function() {
const { setInterval } = require('timers/promises');
const interval = 100;
for await (const startTime of setInterval(interval, Date.now())) {
const now = Date.now();
console.log(now);
if ((now - startTime) > 1000)
break;
}
console.log(Date.now());
})();
Stream
Stability: 2 - Stable
A stream is an abstract interface for working with streaming data in Node.js. The stream module provides an API for implementing the stream interface.
There are many stream objects provided by Node.js. For instance, a request to an HTTP server and process.stdout are both stream instances.
Streams can be readable, writable, or both. All streams are instances of EventEmitter .
The stream module is useful for creating new types of stream instances. It is usually not necessary to use the stream module to consume streams.
Types of streams
There are four fundamental stream types within Node.js:
Writable : streams to which data can be written (for example, fs.createWriteStream() ).
Readable : streams from which data can be read (for example, fs.createReadStream() ).
Duplex : streams that are both Readable and Writable (for example, net.Socket ).
Transform : Duplex streams that can modify or transform the data as it is written and read (for example, zlib.createDeflate() ).
Additionally, this module includes the utility functions stream.pipeline() , stream.finished() , stream.Readable.from() and stream.addAbortSignal() .
Object mode
All streams created by Node.js APIs operate exclusively on strings and Buffer (or Uint8Array ) objects. It is possible, however, for stream implementations to work with other types of
JavaScript values (with the exception of null , which serves a special purpose within streams). Such streams are considered to operate in "object mode".
Stream instances are switched into object mode using the objectMode option when the stream is created. Attempting to switch an existing stream into object mode is not safe.
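For example (a minimal sketch): a Readable created in object mode can push arbitrary JavaScript values:
const { Readable } = require('stream');
const r = new Readable({ objectMode: true, read() {} });
r.push({ answer: 42 }); // any value other than null is allowed
r.on('data', (obj) => console.log(obj.answer)); // 42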
Buffering
Both Writable and Readable streams will store data in an internal buffer.
The amount of data potentially buffered depends on the highWaterMark option passed into the stream's constructor. For normal streams, the highWaterMark option specifies a total
number of bytes . For streams operating in object mode, the highWaterMark specifies a total number of objects.
Data is buffered in Readable streams when the implementation calls stream.push(chunk) . If the consumer of the Stream does not call stream.read() , the data will sit in the internal queue
until it is consumed.
Once the total size of the internal read buffer reaches the threshold specified by highWaterMark , the stream will temporarily stop reading data from the underlying resource until the data
currently buffered can be consumed (that is, the stream will stop calling the internal readable._read() method that is used to fill the read buffer).
Data is buffered in Writable streams when the writable.write(chunk) method is called repeatedly. While the total size of the internal write buffer is below the threshold set by
highWaterMark , calls to writable.write() will return true . Once the size of the internal buffer reaches or exceeds the highWaterMark , false will be returned.
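For example (a hypothetical sketch; chunk , nextChunk and writable stand in for real data and a real stream):
if (!writable.write(chunk)) {
// The internal buffer reached highWaterMark; wait before writing more.
writable.once('drain', () => writable.write(nextChunk));
}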
A key goal of the stream API, particularly the stream.pipe() method, is to limit the buffering of data to acceptable levels such that sources and destinations of differing speeds will not
overwhelm the available memory.
The highWaterMark option is a threshold, not a limit: it dictates the amount of data that a stream buffers before it stops asking for more data. It does not enforce a strict memory limitation in
general. Specific stream implementations may choose to enforce stricter limits but doing so is optional.
Because Duplex and Transform streams are both Readable and Writable , each maintains two separate internal buffers used for reading and writing, allowing each side to operate
independently of the other while maintaining an appropriate and efficient flow of data. For example, net.Socket instances are Duplex streams whose Readable side allows consumption of
data received from the socket and whose Writable side allows writing data to the socket. Because data may be written to the socket at a faster or slower rate than data is received, each side
should operate (and buffer) independently of the other.
The mechanics of the internal buffering are an internal implementation detail and may be changed at any time. However, for certain advanced implementations, the internal buffers can be
retrieved using writable.writableBuffer or readable.readableBuffer . Use of these undocumented properties is discouraged.
API for stream consumers
Almost all Node.js applications, no matter how simple, use streams in some manner. The following is an example of using streams in a Node.js application that
implements an HTTP server:
const http = require('http');
const server = http.createServer((req, res) => {
// `req` is an http.IncomingMessage, which is a readable stream.
// `res` is an http.ServerResponse, which is a writable stream.
let body = '';
// Get the data as utf8 strings.
// If an encoding is not set, Buffer objects will be received.
req.setEncoding('utf8');
// Readable streams emit 'data' events once a listener is added.
req.on('data', (chunk) => {
body += chunk;
});
// The 'end' event indicates that the entire body has been received.
req.on('end', () => {
try {
const data = JSON.parse(body);
// Write back something interesting to the user:
res.write(typeof data);
res.end();
} catch (er) {
// uh oh! bad json!
res.statusCode = 400;
return res.end(`error: ${er.message}`);
}
});
});
server.listen(1337);
// $ curl localhost:1337 -d "{}"
// object
// $ curl localhost:1337 -d "\"foo\""
// string
// $ curl localhost:1337 -d "not json"
// error: Unexpected token o in JSON at position 1
Writable streams (such as res in the example) expose methods such as write() and end() that are used to write data onto the stream.
Readable streams use the EventEmitter API for notifying application code when data is available to be read off the stream. That available data can be read from the stream in multiple
ways.
Both Writable and Readable streams use the EventEmitter API in various ways to communicate the current state of the stream.
Applications that are either writing data to or consuming data from a stream are not required to implement the stream interfaces directly and will generally have no reason to call
require('stream') .
Developers wishing to implement new types of streams should refer to the section API for stream implementers .
Writable streams
Writable streams are an abstraction for a destination to which data is written.
Examples of Writable streams include:
HTTP requests, on the client
HTTP responses, on the server
fs write streams
zlib streams
crypto streams
TCP sockets
child process stdin
process.stdout , process.stderr
Some of these examples are actually Duplex streams that implement the Writable interface.
All Writable streams implement the interface defined by the stream.Writable class.
While specific instances of Writable streams may differ in various ways, all Writable streams follow the same fundamental usage pattern as illustrated in the example below:
const myStream = getWritableStreamSomehow();
myStream.write('some data');
myStream.write('some more data');
myStream.end('done writing data');
Class: stream.Writable
Event: 'close'
The 'close' event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted,
and no further computation will occur.
A Writable stream will always emit the 'close' event if it is created with the emitClose option.
Event: 'drain'
If a call to stream.write(chunk) returns false , the 'drain' event will be emitted when it is appropriate to resume writing data to the stream.
// Write the data to the supplied writable stream one million times.
// Be attentive to back-pressure.
function writeOneMillionTimes(writer, data, encoding, callback) {
let i = 1000000;
write();
function write() {
let ok = true;
do {
i--;
if (i === 0) {
// Last time!
writer.write(data, encoding, callback);
} else {
// See if we should continue, or wait.
// Don't pass the callback, because we're not done yet.
ok = writer.write(data, encoding);
}
} while (i > 0 && ok);
if (i > 0) {
// Had to stop early!
// Write some more once it drains.
writer.once('drain', write);
}
}
}
Event: 'error'
<Error>
The 'error' event is emitted if an error occurred while writing or piping data. The listener callback is passed a single Error argument when called.
The stream is closed when the 'error' event is emitted unless the autoDestroy option was set to false when creating the stream.
After 'error' , no further events other than 'close' should be emitted (including 'error' events).
Event: 'finish'
The 'finish' event is emitted after the stream.end() method has been called, and all data has been flushed to the underlying system.
Event: 'pipe'
src <stream.Readable> source stream that is piping to this writable
The 'pipe' event is emitted when the stream.pipe() method is called on a readable stream, adding this writable to its set of destinations.
Event: 'unpipe'
src <stream.Readable> The source stream that unpiped this writable
The 'unpipe' event is emitted when the stream.unpipe() method is called on a Readable stream, removing this Writable from its set of destinations.
This is also emitted in case this Writable stream emits an error when a Readable stream pipes into it.
writable.cork()
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the stream.uncork() or stream.end() methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them
to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev() , if present. This prevents a head-
of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing
writable._writev() may have an adverse effect on throughput.
writable.destroy([error])
error <Error> Optional, an error to emit with 'error' event.
Returns: <this>
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false ). After this call, the writable stream has ended and subsequent calls to
write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may
trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error' .
Implementors should not override this method, but instead implement writable._destroy() .
writable.destroyed
<boolean>
Is true after writable.destroy() has been called.
writable.end([chunk[, encoding]][, callback])
chunk <string> | <Buffer> | <Uint8Array> | <any> Optional data to write. For streams not operating in object mode, chunk must be a string, Buffer or
Uint8Array . For object mode streams, chunk may be any JavaScript value other than null .
encoding <string> The encoding if chunk is a string
callback <Function> Callback for when the stream is finished.
Returns: <this>
Calling the writable.end() method signals that no more data will be written to the Writable . The optional chunk and encoding arguments allow one final additional chunk of data to be
written immediately before closing the stream.
Calling the stream.write() method after calling stream.end() will raise an error.
writable.setDefaultEncoding(encoding)
encoding <string> The new default encoding
Returns: <this>
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.
writable.uncork()
The writable.uncork() method flushes all data buffered since stream.cork() was called.
When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, it is recommended that calls to writable.uncork() be deferred using
process.nextTick() . Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
stream.uncork();
// The data will not be flushed until uncork() is called a second time.
stream.uncork();
});
writable.writable
<boolean>
Is true if it is safe to call writable.write() , which means the stream has not been destroyed, errored or ended.
writable.writableEnded
<boolean>
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.
writable.writableCorked
<integer>
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
writable.writableFinished
<boolean>
Is set to true immediately before the 'finish' event is emitted.
writable.writableHighWaterMark
<number>
Return the value of highWaterMark passed when creating this Writable .
writable.writableLength
<number>
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark .
writable.writableNeedDrain
<boolean>
Is true if the stream's buffer has been full and stream will emit 'drain' .
writable.writableObjectMode
<boolean>
Getter for the property objectMode of a given Writable stream.
writable.write(chunk[, encoding][, callback])
chunk <string> | <Buffer> | <Uint8Array> | <any> Optional data to write. For streams not operating in object mode, chunk must be a string, Buffer or
Uint8Array . For object mode streams, chunk may be any JavaScript value other than null .
encoding <string> | <null> The encoding, if chunk is a string. Default: 'utf8'
callback <Function> Callback for when this chunk of data is flushed.
Returns: <boolean> false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true .
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback may or may not be
called with the error as its first argument. To reliably detect write errors, add a listener for the 'error' event. The callback is called asynchronously and before 'error' is emitted.
The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk . If false is returned, further attempts to
write data to the stream should stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk , and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the
'drain' event will be emitted. It is recommended that once write() returns false, no more chunks be written until the 'drain' event is emitted. While calling write() on a stream that is
not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will
cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain
if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform , because the Transform streams are paused by default until they are piped or a 'data' or
'readable' event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use stream.pipe() . However, if calling write() is
preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:
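For example (following the upstream docs' sketch; stream stands in for a real Writable ):
function write(data, cb) {
if (!stream.write(data)) {
stream.once('drain', cb);
} else {
process.nextTick(cb);
}
}
// Wait for cb to be called before doing any other write.
write('hello', () => {
console.log('Write completed, do more writes now.');
});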
A Writable stream in object mode will always ignore the encoding argument.
Readable streams
Readable streams are an abstraction for a source from which data is consumed.
All Readable streams implement the interface defined by the stream.Readable class.
Two reading modes
Readable streams effectively operate in one of two modes: flowing and paused. These modes are separate from object mode.
In flowing mode, data is read from the underlying system automatically and provided to an application as quickly as possible using events via the EventEmitter
interface.
In paused mode, the stream.read() method must be called explicitly to read chunks of data from the stream.
All Readable streams begin in paused mode but can be switched to flowing mode in one of the following ways:
Adding a 'data' event handler.
Calling the stream.resume() method.
Calling the stream.pipe() method to send the data to a Writable .
The Readable can switch back to paused mode using one of the following:
If there are no pipe destinations, by calling the stream.pause() method.
If there are pipe destinations, by removing all pipe destinations. Multiple pipe destinations may be removed by calling the stream.unpipe() method.
The important concept to remember is that a Readable will not generate data until a mechanism for either consuming or ignoring that data is provided. If the consuming mechanism is
disabled or taken away, the Readable will attempt to stop generating the data.
For backward compatibility reasons, removing 'data' event handlers will not automatically pause the stream. Also, if there are piped destinations, then calling stream.pause() will not
guarantee that the stream will remain paused once those destinations drain and ask for more data.
If a Readable is switched into flowing mode and there are no consumers available to handle the data, that data will be lost. This can occur, for instance, when the readable.resume()
method is called without a listener attached to the 'data' event, or when a 'data' event handler is removed from the stream.
Adding a 'readable' event handler automatically makes the stream stop flowing, and the data has to be consumed via readable.read() . If the 'readable' event handler is removed, then
the stream will start flowing again if there is a 'data' event handler.
Three states
The "two modes" of operation for a Readable stream are a simplified abstraction for the more complicated internal state management that is happening within the Readable stream
implementation.
Specifically, at any given point in time, every Readable is in one of three possible states:
readable.readableFlowing === null
readable.readableFlowing === false
readable.readableFlowing === true
When readable.readableFlowing is null , no mechanism for consuming the stream's data is provided. Therefore, the stream will not generate data. While in this state, attaching a listener
for the 'data' event, calling the readable.pipe() method, or calling the readable.resume() method will switch readable.readableFlowing to true , causing the Readable to begin
actively emitting events as data is generated.
Calling readable.pause() , readable.unpipe() , or receiving backpressure will cause the readable.readableFlowing to be set as false , temporarily halting the flowing of events but not
halting the generation of data. While in this state, attaching a listener for the 'data' event will not switch readable.readableFlowing to true .
const { PassThrough, Writable } = require('stream');
const pass = new PassThrough();
const writable = new Writable();
pass.pipe(writable);
pass.unpipe(writable);
// readableFlowing is now false.
pass.on('data', (chunk) => { console.log(chunk.toString()); });
pass.write('ok'); // Will not emit 'data'.
pass.resume(); // Must be called to make stream emit 'data'.
While readable.readableFlowing is false , data may be accumulating within the stream's internal buffer.
Use of the readable.pipe() method is recommended for most users as it has been implemented to provide the easiest way of consuming stream data. Developers that require more fine-
grained control over the transfer and generation of data can use the EventEmitter and readable.on('readable') / readable.read() or the readable.pause() / readable.resume() APIs.
Class: stream.Readable
Event: 'close'
The 'close' event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted,
and no further computation will occur.
A Readable stream will always emit the 'close' event if it is created with the emitClose option.
Event: 'data'
chunk <Buffer> | <string> | <any> The chunk of data. For streams that are not operating in object mode, the chunk will be either a string or Buffer . For streams that are in object
mode, the chunk can be any JavaScript value other than null .
The 'data' event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer. This may occur whenever the stream is switched in flowing mode by calling
readable.pipe() , readable.resume() , or by attaching a listener callback to the 'data' event. The 'data' event will also be emitted whenever the readable.read() method is called and
a chunk of data is available to be returned.
Attaching a 'data' event listener to a stream that has not been explicitly paused will switch the stream into flowing mode. Data will then be passed as soon as it is available.
The listener callback will be passed the chunk of data as a string if a default encoding has been specified for the stream using the readable.setEncoding() method; otherwise the data will
be passed as a Buffer .
Event: 'end'
The 'end' event is emitted when there is no more data to be consumed from the stream.
The 'end' event will not be emitted unless the data is completely consumed. This can be accomplished by switching the stream into flowing mode, or by calling stream.read() repeatedly
until all data has been consumed.
Event: 'error'
<Error>
The 'error' event may be emitted by a Readable implementation at any time. Typically, this may occur if the underlying stream is unable to generate data due to an underlying internal
failure, or when a stream implementation attempts to push an invalid chunk of data.
Event: 'pause'
The 'pause' event is emitted when stream.pause() is called and readableFlowing is not false .
Event: 'readable'
The 'readable' event is emitted when there is data available to be read from the stream. In some cases, attaching a listener for the 'readable' event will cause some amount of data to be
read into an internal buffer.
The 'readable' event will also be emitted once the end of the stream data has been reached but before the 'end' event is emitted.
Effectively, the 'readable' event indicates that the stream has new information: either new data is available or the end of the stream has been reached. In the former case, stream.read()
will return the available data. In the latter case, stream.read() will return null . For instance, in the following example, foo.txt is an empty file:
const fs = require('fs');
const rr = fs.createReadStream('foo.txt');
rr.on('readable', () => {
console.log(`readable: ${rr.read()}`);
});
rr.on('end', () => {
console.log('end');
});
$ node test.js
readable: null
end
In general, the readable.pipe() and 'data' event mechanisms are easier to understand than the 'readable' event. However, handling 'readable' might result in increased throughput.
If both 'readable' and 'data' are used at the same time, 'readable' takes precedence in controlling the flow, i.e. 'data' will be emitted only when stream.read() is called. The
readableFlowing property would become false . If there are 'data' listeners when 'readable' is removed, the stream will start flowing, i.e. 'data' events will be emitted without
calling .resume() .
Event: 'resume'
The 'resume' event is emitted when stream.resume() is called and readableFlowing is not true .
readable.destroy([error])
error <Error> Error which will be passed as payload in 'error' event
Returns: <this>
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false ). After this call, the readable stream will release any internal resources
and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error' .
Implementors should not override this method, but instead implement readable._destroy() .
readable.destroyed
<boolean>
Is true after readable.destroy() has been called.
readable.isPaused()
Returns: <boolean>
The readable.isPaused() method returns the current operating state of the Readable . This is used primarily by the mechanism that underlies the readable.pipe() method. In most
typical cases, there will be no reason to use this method directly.
readable.pause()
Returns: <this>
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the
internal buffer.
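For example (the classic pause/resume pattern, assuming getReadableStreamSomehow() returns some Readable ):
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
console.log(`Received ${chunk.length} bytes of data.`);
readable.pause();
console.log('There will be no additional data for 1 second.');
setTimeout(() => {
console.log('Now data will start flowing again.');
readable.resume();
}, 1000);
});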
readable.pipe(destination[, options])
destination <stream.Writable> The destination for writing data
Returns: <stream.Writable> The destination, allowing for a chain of pipes if it is a Duplex or a Transform stream
The readable.pipe() method attaches a Writable stream to the readable , causing it to switch automatically into flowing mode and push all of its data to the attached Writable . The flow
of data will be automatically managed so that the destination Writable stream is not overwhelmed by a faster Readable stream.
The following example pipes all of the data from the readable into a file named file.txt :
const fs = require('fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt'.
readable.pipe(writable);
The readable.pipe() method returns a reference to the destination stream making it possible to set up chains of piped streams:
const fs = require('fs');
const r = fs.createReadStream('file.txt');
const z = zlib.createGzip();
const w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);
By default, stream.end() is called on the destination Writable stream when the source Readable stream emits 'end' , so that the destination is no longer writable. To disable this default
behavior, the end option can be passed as false , causing the destination stream to remain open:
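For example (a minimal sketch with hypothetical reader and writer streams):
reader.pipe(writer, { end: false });
reader.on('end', () => {
writer.end('Goodbye\n');
});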
One important caveat is that if the Readable stream emits an error during processing, the Writable destination is not closed automatically. If an error occurs, it will be necessary to manually
close each stream in order to prevent memory leaks.
The process.stderr and process.stdout Writable streams are never closed until the Node.js process exits, regardless of the specified options.
readable.read([size])
size <number> Optional argument to specify how much data to read.
The readable.read() method pulls some data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data will be returned as a Buffer
object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of
the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is
fully drained.
Each call to readable.read() returns a chunk of data, or null . The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a
large file .read() may return null , having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted
when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable , it is necessary to collect chunks across multiple 'readable' events:
const chunks = [];
readable.on('readable', () => {
let chunk;
while (null !== (chunk = readable.read())) {
chunks.push(chunk);
}
});
readable.on('end', () => {
const content = chunks.join('');
});
A Readable stream in object mode will always return a single item from a call to readable.read(size) , regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling stream.read([size]) after the 'end' event has been emitted will return null . No runtime error will be raised.
readable.readable
<boolean>
Is true if it is safe to call readable.read() , which means the stream has not been destroyed or emitted 'error' or 'end' .
readable.readableEncoding
<null> | <string>
Getter for the property encoding of a given Readable stream. The encoding property can be set using the readable.setEncoding() method.
readable.readableEnded
<boolean>
readable.readableFlowing
<boolean>
This property reflects the current state of a Readable stream as described in the Three states section.
readable.readableHighWaterMark
<number>
readable.readableLength
<number>
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark .
readable.readableObjectMode
<boolean>
readable.resume()
Returns: <this>
The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.
The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:
getReadableStreamSomehow()
.resume()
.on('end', () => {
console.log('Reached the end, but did not read anything.');
});
readable.setEncoding(encoding)
encoding <string> The encoding to use.
Returns: <this>
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather
than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling
readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as
Buffer objects.
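For example (a minimal sketch, assuming getReadableStreamSomehow() returns some Readable ):
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
console.log('Got %d characters of string data:', chunk.length);
});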
readable.unpipe([destination])
destination <stream.Writable> Optional specific stream to unpipe
Returns: <this>
The readable.unpipe() method detaches a Writable stream previously attached using the stream.pipe() method.
If the destination is specified, but no pipe is set up for it, then the method does nothing.
const fs = require('fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
console.log('Stop writing to file.txt.');
readable.unpipe(writable);
console.log('Manually close the file stream.');
writable.end();
}, 1000);
readable.unshift(chunk[, encoding])
chunk <Buffer> | <Uint8Array> | <string> | <null> | <any> Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a string,
Buffer , Uint8Array or null . For object mode streams, chunk may be any JavaScript value.
encoding <string> Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii' .
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null) , after which no more data can be written. The EOF signal is put at the end of the
buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-
consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.
Unlike stream.push(chunk) , stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if
readable.unshift() is called during a read (i.e. from within a stream._read() implementation on a custom stream). Following the call to readable.unshift() with an immediate
stream.push('') will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.
readable.wrap(stream)
stream <Stream> An "old style" readable stream
Returns: <this>
Prior to Node.js 0.10, streams did not implement the entire stream module API as it is currently defined. (See Compatibility for more information.)
When using an older Node.js library that emits 'data' events and has a stream.pause() method that is advisory only, the readable.wrap() method can be used to create a Readable
stream that uses the old stream as its data source.
It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
const { OldReader } = require('./old-api-module.js');
const { Readable } = require('stream');
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);
myReader.on('readable', () => {
myReader.read(); // etc.
});
readable[Symbol.asyncIterator]()
Returns: <AsyncIterator> to fully consume the stream.
const fs = require('fs');
async function print(readable) {
readable.setEncoding('utf8');
let data = '';
for await (const chunk of readable) {
data += chunk;
}
console.log(data);
}
print(fs.createReadStream('file')).catch(console.error);
If the loop terminates with a break or a throw , the stream will be destroyed. In other terms, iterating over a stream will consume the stream fully. The stream will be read in chunks of size
equal to the highWaterMark option. In the code example above, data will be in a single chunk if the file has less than 64KB of data because no highWaterMark option is provided to
fs.createReadStream() .
Class: stream.Duplex
Duplex streams are streams that implement both the Readable and Writable interfaces.
Examples of Duplex streams include:
TCP sockets
zlib streams
crypto streams
Class: stream.Transform
Transform streams are Duplex streams where the output is in some way related to the input. Like all Duplex streams, Transform streams implement both the Readable and Writable
interfaces.
Examples of Transform streams include:
zlib streams
crypto streams
transform.destroy([error])
error <Error>
Returns: <this>
Destroy the stream, and optionally emit an 'error' event. After this call, the transform stream would release any internal resources. Implementors should not override this method, but
instead implement readable._destroy() . The default implementation of _destroy() for Transform also emits 'close' unless emitClose is set to false .
Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error' .
stream.finished(stream[, options], callback)
stream <Stream> A readable and/or writable stream.
options <Object>
error <boolean> If set to false , then a call to emit('error', err) is not treated as finished. Default: true .
readable <boolean> When set to false , the callback will be called when the stream ends even though the stream might still be readable. Default: true .
writable <boolean> When set to false , the callback will be called when the stream ends even though the stream might still be writable. Default: true .
signal <AbortSignal> allows aborting the wait for the stream finish. The underlying stream will not be aborted if the signal is aborted. The callback will get called with an
AbortError . All registered listeners added by this function will also be removed.
callback <Function> A callback function that takes an optional error argument.
Returns: <Function> A cleanup function which removes all registered listeners.
A function to get notified when a stream is no longer readable, writable or has experienced an error or a premature close event.
const { finished } = require('stream');
const rs = fs.createReadStream('archive.tar');
finished(rs, (err) => {
if (err) {
console.error('Stream failed.', err);
} else {
console.log('Stream is done reading.');
}
});
rs.resume(); // Drain the stream.
Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit 'end' or 'finish' .
The finished API provides a promise version:
const finished = util.promisify(stream.finished);
const rs = fs.createReadStream('archive.tar');
async function run() {
await finished(rs);
console.log('Stream is done reading.');
}
run().catch(console.error);
rs.resume(); // Drain the stream.
stream.finished() leaves dangling event listeners (in particular 'error' , 'end' , 'finish' and 'close' ) after callback has been invoked. The reason for this is so that unexpected
'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then the returned cleanup function needs to be invoked in the
callback:
const cleanup = finished(rs, (err) => {
cleanup();
// ...
});
stream.pipeline(source[, ...transforms], destination[, callback])
stream.pipeline(streams[, callback])
streams <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]>
source <Stream> | <Iterable> | <AsyncIterable> | <Function>
Returns: <Iterable> | <AsyncIterable>
...transforms <Stream> | <Function>
source <AsyncIterable>
Returns: <AsyncIterable>
destination <Stream> | <Function>
source <AsyncIterable>
Returns: <AsyncIterable> | <Promise>
callback <Function> Called when the pipeline is fully done.
err <Error>
val Resolved value of Promise returned by destination .
Returns: <Stream>
A module method to pipe between streams and generators forwarding errors and properly cleaning up and provide a callback when the pipeline is complete.
const { pipeline } = require('stream');
const fs = require('fs');
const zlib = require('zlib');
// Use the pipeline API to easily pipe a series of streams
// together and get notified when the pipeline is fully done.
// A pipeline to gzip a potentially huge tar file efficiently:
pipeline(
fs.createReadStream('archive.tar'),
zlib.createGzip(),
fs.createWriteStream('archive.tar.gz'),
(err) => {
if (err) {
console.error('Pipeline failed.', err);
} else {
console.log('Pipeline succeeded.');
}
}
);
The pipeline API provides a promise version, which can also receive an options argument as the last parameter with a signal <AbortSignal> property. When the signal is aborted,
destroy will be called on the underlying pipeline, with an AbortError .
const { pipeline } = require('stream/promises');
async function run() {
await pipeline(
fs.createReadStream('archive.tar'),
zlib.createGzip(),
fs.createWriteStream('archive.tar.gz')
);
console.log('Pipeline succeeded.');
}
run().catch(console.error);
To use an AbortSignal , pass it inside an options object as the last argument:
const { pipeline } = require('stream/promises');
async function run() {
const ac = new AbortController();
const options = {
signal: ac.signal,
};
setImmediate(() => ac.abort());
await pipeline(
fs.createReadStream('archive.tar'),
zlib.createGzip(),
fs.createWriteStream('archive.tar.gz'),
options,
);
}
run().catch(console.error); // AbortError
stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and
swallowed errors.
stream.Readable.from(iterable, [options])
iterable <Iterable> Object implementing the Symbol.asyncIterator or Symbol.iterator iterable protocol. Emits an 'error' event if a null value is passed.
options <Object> Options provided to new stream.Readable([options]) . By default, Readable.from() will set options.objectMode to true , unless this is explicitly opted out by
setting options.objectMode to false .
Returns: <stream.Readable>
Calling Readable.from(string) or Readable.from(buffer) will not have the strings or buffers be iterated to match the other streams semantics for performance reasons.
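For example:
const { Readable } = require('stream');
async function * generate() {
yield 'hello';
yield 'streams';
}
const readable = Readable.from(generate());
readable.on('data', (chunk) => {
console.log(chunk);
});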
stream.addAbortSignal(signal, stream)
signal <AbortSignal> A signal representing possible cancellation
Attaches an AbortSignal to a readable or writable stream. This lets code control stream destruction using an AbortController .
Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the stream.
const fs = require('fs');
const { addAbortSignal } = require('stream');
const controller = new AbortController();
const read = addAbortSignal(
controller.signal,
fs.createReadStream('object.json')
);
// Later, abort the operation closing the stream
controller.abort();
API for stream implementers
The stream module API has been designed to make it possible to easily implement streams using JavaScript's prototypal inheritance model.
First, a stream developer would declare a new JavaScript class that extends one of the four basic stream classes ( stream.Writable , stream.Readable , stream.Duplex , or
stream.Transform ), making sure they call the appropriate parent class constructor:
When extending streams, keep in mind what options the user can and should provide before forwarding these to the base constructor. For example, if the implementation makes
assumptions in regard to the autoDestroy and emitClose options, do not allow the user to override these. Be explicit about what options are forwarded instead of implicitly forwarding all
options.
The new stream class must then implement one or more specific methods, depending on the type of stream being created, as detailed in the chart below:
Use-case | Class | Method(s) to implement
Reading only | Readable | _read()
Writing only | Writable | _write() , _writev() , _final()
Reading and writing | Duplex | _read() , _write() , _writev() , _final()
Operate on written data, then read the result | Transform | _transform() , _flush() , _final()
The implementation code for a stream should never call the "public" methods of a stream that are intended for use by consumers (as described in the API for stream consumers section).
Doing so may lead to adverse side effects in application code consuming the stream.
Avoid overriding public methods such as write() , end() , cork() , uncork() , read() and destroy() , or emitting internal events such as 'error' , 'data' , 'end' , 'finish' and
'close' through .emit() . Doing so can break current and future stream invariants leading to behavior and/or compatibility issues with other streams, stream utilities, and user
expectations.
Simplified construction
For many simple cases, it is possible to create a stream without relying on inheritance. This can be accomplished by directly creating instances of the stream.Writable , stream.Readable ,
stream.Duplex or stream.Transform objects and passing appropriate methods as constructor options.
Implementing a writable stream
The stream.Writable class is extended to implement a Writable stream.
Custom Writable streams must call the new stream.Writable([options]) constructor and implement the writable._write() and/or writable._writev() method.
new stream.Writable([options])
options <Object>
highWaterMark <number> Buffer level when stream.write() starts returning false . Default: 16384 (16KB), or 16 for objectMode streams.
decodeStrings <boolean> Whether to encode string s passed to stream.write() to Buffer s (with the encoding specified in the stream.write() call) before passing them to
stream._write() . Other types of data are not converted (i.e. Buffer s are not decoded into string s). Setting to false will prevent string s from being converted. Default: true .
defaultEncoding <string> The default encoding that is used when no encoding is specified as an argument to stream.write() . Default: 'utf8' .
objectMode <boolean> Whether or not the stream.write(anyObj) is a valid operation. When set, it becomes possible to write JavaScript values other than string, Buffer or
Uint8Array if supported by the stream implementation. Default: false .
emitClose <boolean> Whether or not the stream should emit 'close' after it has been destroyed. Default: true .
autoDestroy <boolean> Whether this stream should automatically call .destroy() on itself after ending. Default: true .
const { Writable } = require('stream');
const util = require('util');

function MyWritable(options) {
  if (!(this instanceof MyWritable))
    return new MyWritable(options);
  Writable.call(this, options);
}
util.inherits(MyWritable, Writable);
Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the writable stream.
writable._construct(callback)
callback <Function> Call this function (optionally with an error argument) when the stream has finished initializing.
The _construct() method MUST NOT be called directly. It may be implemented by child classes, and if so, will be called by the internal Writable class methods only.
This optional function will be called in a tick after the stream constructor has returned, delaying any _write() , _final() and _destroy() calls until callback is called. This is useful to
initialize state or asynchronously initialize resources before the stream can be used.
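For example, a sketch of a Writable that opens its file before the first write is dispatched (the class name and the 'w' open flag are illustrative choices):

const { Writable } = require('stream');
const fs = require('fs');

class WriteStream extends Writable {
  constructor(filename) {
    super();
    this.filename = filename;
    this.fd = null;
  }
  _construct(callback) {
    // Open the file before any _write() call is dispatched.
    fs.open(this.filename, 'w', (err, fd) => {
      if (err) {
        callback(err);
      } else {
        this.fd = fd;
        callback();
      }
    });
  }
  _write(chunk, encoding, callback) {
    fs.write(this.fd, chunk, callback);
  }
  _destroy(err, callback) {
    if (this.fd) {
      fs.close(this.fd, (er) => callback(er || err));
    } else {
      callback(err);
    }
  }
}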
All Writable stream implementations must provide a writable._write() and/or writable._writev() method to send data to the underlying resource.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Writable class methods only.
The callback function must be called synchronously inside of writable._write() or asynchronously (i.e. different tick) to signal either that the write completed successfully or failed with
an error. The first argument passed to the callback must be the Error object if the call failed or null if the write succeeded.
All calls to writable.write() that occur between the time writable._write() is called and the callback is called will cause the written data to be buffered. When the callback is
invoked, the stream might emit a 'drain' event. If a stream implementation is capable of processing multiple chunks of data at once, the writable._writev() method should be
implemented.
If the decodeStrings property is explicitly set to false in the constructor options, then chunk will remain the same object that is passed to .write() , and may be a string rather than a
Buffer . This is to support implementations that have an optimized handling for certain string data encodings. In that case, the encoding argument will indicate the character encoding of
the string. Otherwise, the encoding argument can be safely ignored.
The writable._write() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
writable._writev(chunks, callback)
chunks <Object[]> The data to be written. The value is an array of <Object> that each represent a discrete chunk of data to write. The properties of these objects are:
chunk <Buffer> | <string> A buffer instance or string containing the data to be written. The chunk will be a string if the Writable was created with the decodeStrings option
set to false and a string was passed to write() .
encoding <string> The character encoding of the chunk . If chunk is a Buffer , the encoding will be 'buffer' .
callback <Function> A callback function (optionally with an error argument) to be invoked when processing is complete for the supplied chunks.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Writable class methods only.
The writable._writev() method may be implemented in addition or alternatively to writable._write() in stream implementations that are capable of processing multiple chunks of data
at once. If implemented and if there is buffered data from previous writes, _writev() will be called instead of _write() .
The writable._writev() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
writable._destroy(err, callback)
err <Error> A possible error.
The _destroy() method is called by writable.destroy() . It can be overridden by child classes but it must not be called directly.
writable._final(callback)
callback <Function> Call this function (optionally with an error argument) when finished writing any remaining data.
The _final() method must not be called directly. It may be implemented by child classes, and if so, will be called by the internal Writable class methods only.
This optional function will be called before the stream closes, delaying the 'finish' event until callback is called. This is useful to close resources or write buffered data before a stream
ends.
If a Readable stream pipes into a Writable stream when Writable emits an error, the Readable stream will be unpiped.
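The four lines below are the tail of a buffer-decoding example; a sketch of the setup they rely on, assuming a StringWritable that decodes incoming buffers with StringDecoder :

const { Writable } = require('stream');
const { StringDecoder } = require('string_decoder');

class StringWritable extends Writable {
  constructor(options) {
    super(options);
    this._decoder = new StringDecoder(options && options.defaultEncoding);
    this.data = '';
  }
  _write(chunk, encoding, callback) {
    if (encoding === 'buffer') {
      chunk = this._decoder.write(chunk);
    }
    this.data += chunk;
    callback();
  }
  _final(callback) {
    // Flush any bytes the decoder is still holding.
    this.data += this._decoder.end();
    callback();
  }
}

// The euro sign (€) is split across two buffers to show decoding.
const euro = [[0xE2, 0x82], [0xAC]].map(Buffer.from);
const w = new StringWritable();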
w.write('currency: ');
w.write(euro[0]);
w.end(euro[1]);
console.log(w.data); // currency: €
Implementing a readable stream
The stream.Readable class is extended to implement a Readable stream.
Custom Readable streams must call the new stream.Readable([options]) constructor and implement the readable._read() method.
new stream.Readable([options])
options <Object>
highWaterMark <number> The maximum number of bytes to store in the internal buffer before ceasing to read from the underlying resource. Default: 16384 (16KB), or 16 for
objectMode streams.
encoding <string> If specified, then buffers will be decoded to strings using the specified encoding. Default: null .
objectMode <boolean> Whether this stream should behave as a stream of objects. Meaning that stream.read(n) returns a single value instead of a Buffer of size n . Default:
false .
emitClose <boolean> Whether or not the stream should emit 'close' after it has been destroyed. Default: true .
autoDestroy <boolean> Whether this stream should automatically call .destroy() on itself after ending. Default: true .
const { Readable } = require('stream');
const util = require('util');

function MyReadable(options) {
  if (!(this instanceof MyReadable))
    return new MyReadable(options);
  Readable.call(this, options);
}
util.inherits(MyReadable, Readable);
Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the readable created.
readable._construct(callback)
callback <Function> Call this function (optionally with an error argument) when the stream has finished initializing.
The _construct() method MUST NOT be called directly. It may be implemented by child classes, and if so, will be called by the internal Readable class methods only.
This optional function will be scheduled in the next tick by the stream constructor, delaying any _read() and _destroy() calls until callback is called. This is useful to initialize state or
asynchronously initialize resources before the stream can be used.
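For example, a sketch of a Readable that opens a file during _construct() and reads from it in _read() (the class name and error handling are illustrative):

const { Readable } = require('stream');
const fs = require('fs');

class ReadStream extends Readable {
  constructor(filename) {
    super();
    this.filename = filename;
    this.fd = null;
  }
  _construct(callback) {
    // Open the file before the first _read() call is dispatched.
    fs.open(this.filename, (err, fd) => {
      if (err) {
        callback(err);
      } else {
        this.fd = fd;
        callback();
      }
    });
  }
  _read(n) {
    const buf = Buffer.alloc(n);
    fs.read(this.fd, buf, 0, n, null, (err, bytesRead) => {
      if (err) {
        this.destroy(err);
      } else {
        // Push null at EOF to end the stream.
        this.push(bytesRead > 0 ? buf.slice(0, bytesRead) : null);
      }
    });
  }
  _destroy(err, callback) {
    if (this.fd) {
      fs.close(this.fd, (er) => callback(er || err));
    } else {
      callback(err);
    }
  }
}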
readable._read(size)
size <number> Number of bytes to read asynchronously
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.
All Readable stream implementations must provide an implementation of the readable._read() method to fetch data from the underlying resource.
When readable._read() is called, if data is available from the resource, the implementation should begin pushing that data into the read queue using the this.push(dataChunk) method.
_read() should continue reading from the resource and pushing data until readable.push() returns false . Only when _read() is called again after it has stopped should it resume
pushing additional data onto the queue.
Once the readable._read() method has been called, it will not be called again until more data is pushed through the readable.push() method. Empty data such as empty buffers and
strings will not cause readable._read() to be called.
The size argument is advisory. Implementations where a "read" is a single operation that returns data can use the size argument to determine how much data to fetch. Other implementations may ignore this argument and simply provide data whenever it becomes available. There is no need to "wait" until size bytes are available before calling stream.push(chunk) .
The readable._read() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
readable._destroy(err, callback)
err <Error> A possible error.
The _destroy() method is called by readable.destroy() . It can be overridden by child classes but it must not be called directly.
readable.push(chunk[, encoding])
chunk <Buffer> | <Uint8Array> | <string> | <null> | <any> Chunk of data to push into the read queue. For streams not operating in object mode, chunk must be a string, Buffer
or Uint8Array . For object mode streams, chunk may be any JavaScript value.
encoding <string> Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii' .
Returns: <boolean> true if additional chunks of data may continue to be pushed; false otherwise.
When chunk is a Buffer , Uint8Array or string , the chunk of data will be added to the internal queue for users of the stream to consume. Passing chunk as null signals the end of the
stream (EOF), after which no more data can be written.
When the Readable is operating in paused mode, the data added with readable.push() can be read out by calling the readable.read() method when the 'readable' event is emitted.
When the Readable is operating in flowing mode, the data added with readable.push() will be delivered by emitting a 'data' event.
The readable.push() method is designed to be as flexible as possible. For example, when wrapping a lower-level source that provides some form of pause/resume mechanism, and a data
callback, the low-level source can be wrapped by the custom Readable instance:
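A sketch of such a wrapper, assuming a hypothetical getLowLevelSourceObject() that returns a source with the readStop() / readStart() / ondata / onend interface described in the comments:

const { Readable } = require('stream');

// `_source` is an object with readStop() and readStart() methods,
// and an `ondata` member that gets called when it has data, and
// an `onend` member that gets called when the data is over.
class SourceWrapper extends Readable {
  constructor(options) {
    super(options);

    this._source = getLowLevelSourceObject();

    // Every time there's data, push it into the internal buffer.
    this._source.ondata = (chunk) => {
      // If push() returns false, then stop reading from source.
      if (!this.push(chunk))
        this._source.readStop();
    };

    // When the source ends, push the EOF-signaling `null` chunk.
    this._source.onend = () => {
      this.push(null);
    };
  }
  // _read() will be called when the stream wants to pull more data in.
  // The advisory size argument is ignored in this case.
  _read(size) {
    this._source.readStart();
  }
}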
The readable.push() method is used to push the content into the internal buffer. It can be driven by the readable._read() method.
For streams not operating in object mode, if the chunk parameter of readable.push() is undefined , it will be treated as empty string or buffer. See readable.push('') for more
information.
For example, the following counting stream (a reconstruction of the example this fragment belongs to) emits the numbers from 1 to 1,000,000 in ascending order:

const { Readable } = require('stream');

class Counter extends Readable {
  constructor(opt) {
    super(opt);
    this._max = 1000000;
    this._index = 1;
  }

  _read() {
    const i = this._index++;
    if (i > this._max)
      this.push(null);
    else {
      const str = String(i);
      const buf = Buffer.from(str, 'ascii');
      this.push(buf);
    }
  }
}
Implementing a duplex stream
A Duplex stream is one that implements both Readable and Writable , such as a TCP socket connection.
Because JavaScript does not have support for multiple inheritance, the stream.Duplex class is extended to implement a Duplex stream (as opposed to extending the stream.Readable and
stream.Writable classes).
The stream.Duplex class prototypically inherits from stream.Readable and parasitically from stream.Writable , but instanceof will work properly for both base classes due to overriding
Symbol.hasInstance on stream.Writable .
Custom Duplex streams must call the new stream.Duplex([options]) constructor and implement both the readable._read() and writable._write() methods.
new stream.Duplex(options)
options <Object> Passed to both Writable and Readable constructors. Also has the following fields:
allowHalfOpen <boolean> If set to false , then the stream will automatically end the writable side when the readable side ends. Default: true .
readable <boolean> Sets whether the Duplex should be readable. Default: true .
writable <boolean> Sets whether the Duplex should be writable. Default: true .
readableObjectMode <boolean> Sets objectMode for readable side of the stream. Has no effect if objectMode is true . Default: false .
writableObjectMode <boolean> Sets objectMode for writable side of the stream. Has no effect if objectMode is true . Default: false .
readableHighWaterMark <number> Sets highWaterMark for the readable side of the stream. Has no effect if highWaterMark is provided.
writableHighWaterMark <number> Sets highWaterMark for the writable side of the stream. Has no effect if highWaterMark is provided.
const { Duplex } = require('stream');
const util = require('util');

function MyDuplex(options) {
  if (!(this instanceof MyDuplex))
    return new MyDuplex(options);
  Duplex.call(this, options);
}
util.inherits(MyDuplex, Duplex);
const { Transform, pipeline } = require('stream');
const fs = require('fs');

pipeline(
  fs.createReadStream('object.json')
    .setEncoding('utf8'),
  new Transform({
    decodeStrings: false, // Accept string input rather than Buffers
    construct(callback) {
      this.data = '';
      callback();
    },
    transform(chunk, encoding, callback) {
      this.data += chunk;
      callback();
    },
    flush(callback) {
      try {
        // Make sure it is valid JSON.
        JSON.parse(this.data);
        this.push(this.data);
        callback();
      } catch (err) {
        callback(err);
      }
    }
  }),
  fs.createWriteStream('valid-object.json'),
  (err) => {
    if (err) {
      console.error('failed', err);
    } else {
      console.log('completed');
    }
  }
);
The most important aspect of a Duplex stream is that the Readable and Writable sides operate independently of one another despite co-existing within a single object instance.
In the following example, for instance, a new Transform stream (which is a type of Duplex stream) is created that has an object mode Writable side that accepts JavaScript numbers that
are converted to hexadecimal strings on the Readable side.
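A sketch of the transform being exercised below ( writableObjectMode plus a hex-converting transform function):

const { Transform } = require('stream');

const myTransform = new Transform({
  writableObjectMode: true,

  transform(chunk, encoding, callback) {
    // Coerce the chunk to a number if necessary.
    chunk |= 0;

    // Transform the chunk into something else.
    const data = chunk.toString(16);

    // Push the data onto the readable queue, zero-padded to even length.
    callback(null, '0'.repeat(data.length % 2) + data);
  }
});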
myTransform.setEncoding('ascii');
myTransform.on('data', (chunk) => console.log(chunk));
myTransform.write(1);
// Prints: 01
myTransform.write(10);
// Prints: 0a
myTransform.write(100);
// Prints: 64
There is no requirement that the output be the same size as the input, the same number of chunks, or arrive at the same time. For example, a Hash stream will only ever have a single chunk
of output which is provided when the input is ended. A zlib stream will produce output that is either much smaller or much larger than its input.
The stream.Transform class prototypically inherits from stream.Duplex and implements its own versions of the writable._write() and readable._read() methods. Custom Transform
implementations must implement the transform._transform() method and may also implement the transform._flush() method.
Care must be taken when using Transform streams in that data written to the stream can cause the Writable side of the stream to become paused if the output on the Readable side is not
consumed.
new stream.Transform([options])
options <Object> Passed to both Writable and Readable constructors. Also has the following fields:
transform <Function> Implementation for the stream._transform() method.
flush <Function> Implementation for the stream._flush() method.
const { Transform } = require('stream');
const util = require('util');

function MyTransform(options) {
  if (!(this instanceof MyTransform))
    return new MyTransform(options);
  Transform.call(this, options);
}
util.inherits(MyTransform, Transform);
Event: 'end'
The 'end' event is from the stream.Readable class. The 'end' event is emitted after all data has been output, which occurs after the callback in transform._flush() has been called. In
the case of an error, 'end' should not be emitted.
Event: 'finish'
The 'finish' event is from the stream.Writable class. The 'finish' event is emitted after stream.end() is called and all chunks have been processed by stream._transform() . In the
case of an error, 'finish' should not be emitted.
transform._flush(callback)
callback <Function> A callback function (optionally with an error argument and data) to be called when remaining data has been flushed.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.
In some cases, a transform operation may need to emit an additional bit of data at the end of the stream. For example, a zlib compression stream will store an amount of internal state used
to optimally compress the output. When the stream ends, however, that additional data needs to be flushed so that the compressed data will be complete.
Custom Transform implementations may implement the transform._flush() method. This will be called when there is no more written data to be consumed, but before the 'end' event is
emitted signaling the end of the Readable stream.
Within the transform._flush() implementation, the transform.push() method may be called zero or more times, as appropriate. The callback function must be called when the flush
operation is complete.
The transform._flush() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
transform._transform(chunk, encoding, callback)
chunk <Buffer> | <string> | <any> The chunk to be transformed, converted from the string passed to stream.write() . If the stream's decodeStrings option is false or the stream is operating in object mode, the chunk will not be converted and will be whatever was passed to stream.write() .
encoding <string> If the chunk is a string, then this is the encoding type. If chunk is a buffer, then this is the special value 'buffer' . Ignore it in that case.
callback <Function> A callback function (optionally with an error argument and data) to be called after the supplied chunk has been processed.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.
All Transform stream implementations must provide a _transform() method to accept input and produce output. The transform._transform() implementation handles the bytes being
written, computes an output, then passes that output off to the readable portion using the transform.push() method.
The transform.push() method may be called zero or more times to generate output from a single input chunk, depending on how much is to be output as a result of the chunk.
It is possible that no output is generated from any given chunk of input data.
The callback function must be called only when the current chunk is completely consumed. The first argument passed to the callback must be an Error object if an error occurred while
processing the input or null otherwise. If a second argument is passed to the callback , it will be forwarded on to the transform.push() method. In other words, the following are
equivalent:
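For example (the prototype assignments here are purely for illustration):

transform.prototype._transform = function(data, encoding, callback) {
  this.push(data);
  callback();
};

transform.prototype._transform = function(data, encoding, callback) {
  callback(null, data);
};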
The transform._transform() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
transform._transform() is never called in parallel; streams implement a queue mechanism, and to receive the next chunk, callback must be called, either synchronously or
asynchronously.
Class: stream.PassThrough
The stream.PassThrough class is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is primarily for examples and testing,
but there are some use cases where stream.PassThrough is useful as a building block for novel sorts of streams.
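For example, a minimal sketch:

const { PassThrough } = require('stream');

const pass = new PassThrough();
pass.on('data', (chunk) => console.log(chunk.toString()));
pass.write('hello'); // Printed unchanged by the 'data' listener
pass.end();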
Additional notes
Streams compatibility with async generators and async iterators
With the support of async generators and iterators in JavaScript, async generators are effectively a first-class language-level stream construct at this point.
Some common interop cases of using Node.js streams with async generators and async iterators are provided below.
const fs = require('fs');
// Any Readable works here; a file stream is used for illustration.
const readable = fs.createReadStream('file.txt');

(async function() {
  for await (const chunk of readable) {
    console.log(chunk);
  }
})();
Async iterators register a permanent error handler on the stream to prevent any unhandled post-destroy errors.
const fs = require('fs');
const { pipeline } = require('stream');
const { pipeline: pipelinePromise } = require('stream/promises');

// `iterator` stands for any async iterable (for example, an async
// generator); `writable` is the destination stream.
const writable = fs.createWriteStream('./file');
// Callback Pattern
pipeline(iterator, writable, (err, value) => {
if (err) {
console.error(err);
} else {
console.log(value, 'value returned');
}
});
// Promise Pattern
pipelinePromise(iterator, writable)
.then((value) => {
console.log(value, 'value returned');
})
.catch(console.error);
In versions of Node.js prior to 0.10, the Readable stream interface was simpler, but also less powerful and less useful. Rather than waiting for calls to the stream.read() method, 'data' events would begin emitting immediately. Applications that needed to perform some amount of work to decide how to handle data were required to store read data into buffers so the data would not be lost.
The stream.pause() method was advisory, rather than guaranteed. This meant that it was still necessary to be prepared to receive 'data' events even when the stream was in a paused
state.
In Node.js 0.10, the Readable class was added. For backward compatibility with older Node.js programs, Readable streams switch into "flowing mode" when a 'data' event handler is
added, or when the stream.resume() method is called. The effect is that, even when not using the new stream.read() method and 'readable' event, it is no longer necessary to worry
about losing 'data' chunks.
While most applications will continue to function normally, this introduces an edge case when all of the following conditions hold:
No 'data' event handler is added.
The stream.resume() method is never called.
The stream is not piped to any writable destination.
const net = require('net');

// WARNING! BROKEN! An 'end' listener alone never consumes the data,
// so the 'end' event never fires and the socket stays paused.
net.createServer((socket) => {
  socket.on('end', () => {
    socket.end('The message was received but was not processed.\n');
  });
}).listen(1337);
Prior to Node.js 0.10, the incoming message data would be simply discarded. However, in Node.js 0.10 and beyond, the socket remains paused forever.
The workaround in this situation is to call the stream.resume() method to begin the flow of data:
const net = require('net');

// Workaround.
net.createServer((socket) => {
  socket.on('end', () => {
    socket.end('The message was received but was not processed.\n');
  });

  // Start the flow of data, discarding it.
  socket.resume();
}).listen(1337);
In addition to new Readable streams switching into flowing mode, pre-0.10 style streams can be wrapped in a Readable class using the readable.wrap() method.
readable.read(0)
There are some cases where it is necessary to trigger a refresh of the underlying readable stream mechanisms, without actually consuming any data. In such cases, it is possible to call
readable.read(0) , which will always return null .
If the internal read buffer is below the highWaterMark , and the stream is not currently reading, then calling stream.read(0) will trigger a low-level stream._read() call.
While most applications will almost never need to do this, there are situations within Node.js where this is done, particularly in the Readable stream class internals.
readable.push('')
Use of readable.push('') is not recommended.
Pushing a zero-byte string, Buffer or Uint8Array to a stream that is not in object mode has an interesting side effect. Because it is a call to readable.push() , the call will end the reading
process. However, because the argument is an empty string, no data is added to the readable buffer so there is nothing for a user to consume.
Typically, the size of the current buffer is measured against the highWaterMark in bytes. However, after setEncoding() is called, the comparison function will begin to measure the buffer's
size in characters.
This is not a problem in common cases with latin1 or ascii . But it is advised to be mindful about this behavior when working with strings that could contain multi-byte characters.
Node.js v15.12.0 Documentation
Events
Stability: 2 - Stable
Much of the Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain kinds of objects (called "emitters") emit named events that cause
Function objects ("listeners") to be called.
For instance: a net.Server object emits an event each time a peer connects to it; a fs.ReadStream emits an event when the file is opened; a stream emits an event whenever data is
available to be read.
All objects that emit events are instances of the EventEmitter class. These objects expose an eventEmitter.on() function that allows one or more functions to be attached to named
events emitted by the object. Typically, event names are camel-cased strings but any valid JavaScript property key can be used.
When the EventEmitter object emits an event, all of the functions attached to that specific event are called synchronously. Any values returned by the called listeners are ignored and
discarded.
The following example shows a simple EventEmitter instance with a single listener. The eventEmitter.on() method is used to register listeners, while the eventEmitter.emit() method is
used to trigger the event.
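A minimal sketch:

const EventEmitter = require('events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
  console.log('an event occurred!');
});
myEmitter.emit('event');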
It is possible to use ES6 Arrow Functions as listeners, however, when doing so, the this keyword will no longer reference the EventEmitter instance:
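For example:

const EventEmitter = require('events');

const myEmitter = new EventEmitter();
myEmitter.on('event', (a, b) => {
  console.log(a, b, this);
  // Prints: a b {} -- `this` is the module scope, not myEmitter
});
myEmitter.emit('event', 'a', 'b');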
Using the eventEmitter.once() method, it is possible to register a listener that is called at most once for a particular event. Once the event is emitted, the listener is unregistered and then
called.
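For example:

const EventEmitter = require('events');

const myEmitter = new EventEmitter();
let m = 0;
myEmitter.once('event', () => {
  console.log(++m);
});
myEmitter.emit('event'); // Prints: 1
myEmitter.emit('event'); // Ignored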
Error events
When an error occurs within an EventEmitter instance, the typical action is for an 'error' event to be emitted. These are treated as special cases within Node.js.
If an EventEmitter does not have at least one listener registered for the 'error' event, and an 'error' event is emitted, the error is thrown, a stack trace is printed, and the Node.js
process exits.
As a best practice, listeners should always be added for 'error' events.
It is possible to monitor 'error' events without consuming the emitted error by installing a listener using the symbol events.errorMonitor .
Using async functions with event handlers is problematic, because it can lead to an unhandled rejection in case of a thrown exception:
The captureRejections option in the EventEmitter constructor, or the global setting, changes this behavior, installing a .then(undefined, handler) handler on the Promise . This handler routes the exception asynchronously to the Symbol.for('nodejs.rejection') method if there is one, or to the 'error' event handler if there is none.
const { EventEmitter } = require('events');

const ee1 = new EventEmitter({ captureRejections: true });
ee1.on('something', async (value) => {
  throw new Error('kaboom');
});

ee1.on('error', console.log);

const ee2 = new EventEmitter({ captureRejections: true });
ee2.on('something', async (value) => {
  throw new Error('kaboom');
});

ee2[Symbol.for('nodejs.rejection')] = console.log;
Setting events.captureRejections = true will change the default for all new instances of EventEmitter .
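For example:

const events = require('events');
events.captureRejections = true;
const ee1 = new events.EventEmitter();
ee1.on('something', async (value) => {
  throw new Error('kaboom');
});

ee1.on('error', console.log);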
The 'error' events that are generated by the captureRejections behavior do not have a catch handler to avoid infinite error loops: the recommendation is to not use async functions as
'error' event handlers.
Class: EventEmitter
The EventEmitter class is defined and exposed by the events module:
All EventEmitter s emit the event 'newListener' when new listeners are added and 'removeListener' when existing listeners are removed.
Event: 'newListener'
eventName <string> | <symbol> The name of the event being listened for
The EventEmitter instance will emit its own 'newListener' event before a listener is added to its internal array of listeners.
Listeners registered for the 'newListener' event are passed the event name and a reference to the listener being added.
The fact that the event is triggered before adding the listener has a subtle but important side effect: any additional listeners registered to the same name within the 'newListener' callback
are inserted before the listener that is in the process of being added.
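For example:

const EventEmitter = require('events');

class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

// Only do this once so we don't loop forever.
myEmitter.once('newListener', (event, listener) => {
  if (event === 'event') {
    // Insert a new listener in front.
    myEmitter.on('event', () => {
      console.log('B');
    });
  }
});
myEmitter.on('event', () => {
  console.log('A');
});
myEmitter.emit('event');
// Prints:
//   B
//   A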
Event: 'removeListener'
eventName <string> | <symbol> The event name
listener <Function> The event handler function
The 'removeListener' event is emitted after the listener is removed.
emitter.emit(eventName[, ...args])
eventName <string> | <symbol>
...args <any>
Returns: <boolean>
Synchronously calls each of the listeners registered for the event named eventName , in the order they were registered, passing the supplied arguments to each. Returns true if the event had listeners, false otherwise.
const EventEmitter = require('events');
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
const parameters = args.join(', ');
console.log(`event with parameters ${parameters} in third listener`);
});
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [
// [Function: firstListener],
// [Function: secondListener],
// [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
emitter.eventNames()
Returns: <Array>
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbol s.
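A sketch of the setup assumed by the console.log() call below:

const EventEmitter = require('events');
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});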
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
emitter.getMaxListeners()
Returns: <integer>
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to events.defaultMaxListeners .
emitter.listenerCount(eventName)
eventName <string> | <symbol> The name of the event being listened for
Returns: <integer>
Returns the number of listeners listening to the event named eventName .
emitter.listeners(eventName)
eventName <string> | <symbol>
Returns: <Function[]>
Returns a copy of the array of listeners for the event named eventName .
emitter.off(eventName, listener)
eventName <string> | <symbol>
listener <Function>
Returns: <EventEmitter>
Alias for emitter.removeListener() .
emitter.on(eventName, listener)
eventName <string> | <symbol> The name of the event.
listener <Function> The callback function
Returns: <EventEmitter>
Adds the listener function to the end of the listeners array for the event named eventName . No checks are made to see if the listener has already been added. Multiple calls passing the
same combination of eventName and listener will result in the listener being added, and called, multiple times.
By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the
listeners array.
emitter.once(eventName, listener)
eventName <string> | <symbol> The name of the event.
listener <Function> The callback function
Returns: <EventEmitter>
Adds a one-time listener function for the event named eventName . The next time eventName is triggered, this listener is removed and then invoked.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of
the listeners array.
emitter.prependListener(eventName, listener)
eventName <string> | <symbol> The name of the event.
listener <Function> The callback function
Returns: <EventEmitter>
Adds the listener function to the beginning of the listeners array for the event named eventName . No checks are made to see if the listener has already been added. Multiple calls passing
the same combination of eventName and listener will result in the listener being added, and called, multiple times.
emitter.prependOnceListener(eventName, listener)
eventName <string> | <symbol> The name of the event.
listener <Function> The callback function
Returns: <EventEmitter>
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.
emitter.removeAllListeners([eventName])
eventName <string> | <symbol>
Returns: <EventEmitter>
Removes all listeners, or those of the specified eventName . Returns a reference to the EventEmitter , so that calls can be chained.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
emitter.removeListener(eventName, listener)
eventName <string> | <symbol>
listener <Function>
Returns: <EventEmitter>
Removes the specified listener from the listener array for the event named eventName .
removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified
eventName , then removeListener() must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and
before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.
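For example:

const EventEmitter = require('events');
const myEmitter = new EventEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A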
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in
which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the
example the once('ping') listener is removed:
const EventEmitter = require('events');
const ee = new EventEmitter();

function pong() {
console.log('pong');
}
ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);
ee.emit('ping');
ee.emit('ping');
emitter.setMaxListeners(n)
n <integer>
Returns: <EventEmitter>
By default EventEmitter s will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The
emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0 ) to indicate an unlimited number of
listeners.
emitter.rawListeners(eventName)
eventName <string> | <symbol>
Returns: <Function[]>
Returns a copy of the array of listeners for the event named eventName , including any wrappers (such as those created by .once() ).
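A sketch of the setup assumed by the logFnWrapper.listener() call below:

const EventEmitter = require('events');
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above.
const logFnWrapper = emitter.rawListeners('log')[0];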
// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();
emitter[Symbol.for('nodejs.rejection')](err, eventName[, ...args])
err <Error>
eventName <string> | <symbol>
...args <any>
The Symbol.for('nodejs.rejection') method is called in case a promise rejection happens when emitting an event and captureRejections is enabled on the emitter. It is possible to use
events.captureRejectionSymbol in place of Symbol.for('nodejs.rejection') .
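For example, a sketch of a class installing its own rejection handler ( destroy() is a stand-in for real cleanup):

const { EventEmitter, captureRejectionSymbol } = require('events');

class MyClass extends EventEmitter {
  constructor() {
    super({ captureRejections: true });
  }

  [captureRejectionSymbol](err, event, ...args) {
    console.log('rejection happened for', event, 'with', err, ...args);
    this.destroy(err);
  }

  destroy(err) {
    // Tear the resource down here.
  }
}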
events.defaultMaxListeners
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n)
method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.
Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling
emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners .
This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has
been detected. For any single EventEmitter , the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
// do stuff
emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
The --trace-warnings command-line flag can be used to display the stack trace for such warnings.
The emitted warning can be inspected with process.on('warning') and will have the additional emitter , type and count properties, referring to the event emitter instance, the event’s
name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning' .
events.errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.
Installing a listener using this symbol does not change the behavior once an 'error' event is emitted, therefore the process will still crash if no regular 'error' listener is installed.
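For example, a sketch ( console.error stands in for a real monitoring hook):

const { EventEmitter, errorMonitor } = require('events');

const myEmitter = new EventEmitter();
myEmitter.on(errorMonitor, (err) => {
  console.error('monitored:', err);
});
myEmitter.emit('error', new Error('whoops!'));
// Still throws and crashes Node.js, since no regular 'error' listener exists.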
events.getEventListeners(emitterOrTarget, eventName)
emitterOrTarget <EventEmitter> | <EventTarget>
Returns: <Function[]>
Returns a copy of the array of listeners for the event named eventName .
For EventEmitter s this behaves exactly the same as calling .listeners on the emitter.
For EventTarget s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.
const { getEventListeners, EventEmitter } = require('events');

{
const ee = new EventEmitter();
const listener = () => console.log('Events are fun');
ee.on('foo', listener);
getEventListeners(ee, 'foo'); // [listener]
}
{
const et = new EventTarget();
const listener = () => console.log('Events are fun');
et.addEventListener('foo', listener);
getEventListeners(et, 'foo'); // [listener]
}
events.once(emitter, name[, options])
emitter <EventEmitter>
name <string>
options <Object>
signal <AbortSignal> Can be used to cancel waiting for the event.
Returns: <Promise>
Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an
array of all the arguments emitted to the given event.
This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.
const { once, EventEmitter } = require('events');

async function run() {
  const ee = new EventEmitter();

  process.nextTick(() => {
    ee.emit('myevent', 42);
  });

  const [value] = await once(ee, 'myevent');
  console.log(value); // Prints: 42

  const err = new Error('kaboom');
  process.nextTick(() => {
    ee.emit('error', err);
  });

  try {
    await once(ee, 'myevent');
  } catch (err) {
    console.log('error happened', err);
  }
}

run();
The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:
const { EventEmitter, once } = require('events');

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.log('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom
There is an edge case worth noting when using events.once() to await multiple events emitted in the same batch of process.nextTick() operations, or whenever multiple events are emitted synchronously:

const { EventEmitter, once } = require('events');

const myEE = new EventEmitter();

async function foo() {
  await once(myEE, 'bar');
  console.log('bar');

  // This Promise will never resolve because the 'foo' event will
  // have already been emitted before the Promise is created.
  await once(myEE, 'foo');
  console.log('foo');
}

process.nextTick(() => {
  myEE.emit('bar');
  myEE.emit('foo');
});

foo();
To catch both events, create each of the Promises before awaiting either of them, then it becomes possible to use Promise.all() , Promise.race() , or Promise.allSettled() :
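For example, creating both Promises up front (the process.nextTick() block that follows emits the events they wait for):

const { EventEmitter, once } = require('events');

const myEE = new EventEmitter();

// Both Promises exist before either event fires, so neither is missed.
Promise.all([once(myEE, 'bar'), once(myEE, 'foo')])
  .then(() => console.log('foo', 'bar'));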
process.nextTick(() => {
myEE.emit('bar');
myEE.emit('foo');
});
events.captureRejections
Stability: 1 - captureRejections is experimental.
Value: <boolean>
Change the default captureRejections option on all new EventEmitter objects.
events.captureRejectionSymbol
Stability: 1 - captureRejections is experimental.
Value: Symbol.for('nodejs.rejection')
events.listenerCount(emitter, eventName)
Stability: 0 - Deprecated: Use emitter.listenerCount() instead.
A class method that returns the number of listeners for the given eventName registered on the given emitter .
events.on(emitter, eventName[, options])
emitter <EventEmitter>
eventName <string> | <symbol> The name of the event being listened for
options <Object>
signal <AbortSignal> Can be used to cancel awaiting events.
Returns: <AsyncIterator>
const { on, EventEmitter } = require('events');

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo')) {
    console.log(event); // Prints ['bar'], then [42]
  }
  // Unreachable here: the loop above never exits on its own.
})();
Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error' . It removes all listeners when exiting the loop. The value returned by each
iteration is an array composed of the emitted event arguments.
An AbortSignal can be used to stop the iteration:

const { on, EventEmitter } = require('events');
const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    console.log(event); // Prints ['bar'], then [42]
  }
})();

process.once('SIGINT', () => ac.abort());
events.setMaxListeners(n[, ...eventTargets])
n <number> A non-negative number. The maximum number of listeners per EventTarget event.
...eventTargets <EventTarget[]> | <EventEmitter[]> Zero or more <EventTarget> or <EventEmitter> instances. If none are specified, n is set as the default max for all newly created <EventTarget> and <EventEmitter> objects.
const {
  setMaxListeners,
  EventEmitter
} = require('events');

const target = new EventTarget();
const emitter = new EventEmitter();

setMaxListeners(5, target, emitter);
1. Whereas DOM EventTarget instances may be hierarchical, there is no concept of hierarchy and event propagation in Node.js. That is, an event dispatched to an EventTarget does not
propagate through a hierarchy of nested target objects that may each have their own set of handlers for the event.
2. In the Node.js EventTarget , if an event listener is an async function or returns a Promise , and the returned Promise rejects, the rejection is automatically captured and handled the
same way as a listener that throws synchronously (see EventTarget error handling for details).
1. Unlike EventEmitter , any given listener can be registered at most once per event type . Attempts to register a listener multiple times are ignored.
2. The NodeEventTarget does not emulate the full EventEmitter API. Specifically the prependListener() , prependOnceListener() , rawListeners() , setMaxListeners() ,
getMaxListeners() , and errorMonitor APIs are not emulated. The 'newListener' and 'removeListener' events will also not be emitted.
3. The NodeEventTarget does not implement any special default behavior for events with type 'error' .
4. The NodeEventTarget supports EventListener objects as well as functions as handlers for all event types.
Event listener
Event listeners registered for an event type may either be JavaScript functions or objects with a handleEvent property whose value is a function.
In either case, the handler function is invoked with the event argument passed to the eventTarget.dispatchEvent() function.
Async functions may be used as event listeners. If an async handler function rejects, the rejection is captured and handled as described in EventTarget error handling .
An error thrown by one handler function does not prevent the other handlers from being invoked.
function handler1(event) {
  console.log(event.type); // Prints 'foo'
  event.a = 1;
}

async function handler2(event) {
  console.log(event.type); // Prints 'foo'
  console.log(event.a); // Prints 1
}

const handler3 = {
  handleEvent(event) {
    console.log(event.type); // Prints 'foo'
  }
};

const handler4 = {
  async handleEvent(event) {
    console.log(event.type); // Prints 'foo'
  }
};

const target = new EventTarget();

target.addEventListener('foo', handler1);
target.addEventListener('foo', handler2);
target.addEventListener('foo', handler3);
target.addEventListener('foo', handler4, { once: true });
Throwing within an event listener will not stop the other registered handlers from being invoked.
The EventTarget does not implement any special default handling for 'error' type events like EventEmitter .
Currently errors are first forwarded to the process.on('error') event before reaching process.on('uncaughtException') . This behavior is deprecated and will change in a future release
to align EventTarget with other Node.js APIs. Any code relying on the process.on('error') event should be aligned with the new behavior.
Class: Event
The Event object is an adaptation of the Event Web API . Instances are created internally by Node.js.
event.bubbles
Type: <boolean> Always returns false .
event.cancelBubble()
Alias for event.stopPropagation() . This is not used in Node.js and is provided purely for completeness.
event.cancelable
Type: <boolean> True if the event was created with the cancelable option.
event.composed
Type: <boolean> Always returns false .
event.composedPath()
Returns an array containing the current EventTarget as the only entry or empty if the event is not being dispatched. This is not used in Node.js and is provided purely for completeness.
event.currentTarget
Type: <EventTarget> The EventTarget dispatching the event.
event.defaultPrevented
Type: <boolean>
event.eventPhase
Type: <number> Returns 0 while an event is not being dispatched, 2 while it is being dispatched.
event.isTrusted
Type: <boolean>
The <AbortSignal> "abort" event is emitted with isTrusted set to true . The value is false in all other cases.
event.preventDefault()
Sets the defaultPrevented property to true if cancelable is true .
event.returnValue
Type: <boolean> True if the event has not been canceled.
event.srcElement
Type: <EventTarget> The EventTarget dispatching the event.
event.stopImmediatePropagation()
Stops the invocation of event listeners after the current one completes.
event.stopPropagation()
This is not used in Node.js and is provided purely for completeness.
event.target
Type: <EventTarget> The EventTarget dispatching the event.
event.timeStamp
Type: <number>
event.type
Type: <string>
Class: EventTarget
eventTarget.addEventListener(type, listener[, options])
type <string>
listener <Function> | <EventListener>
options <Object>
once <boolean> When true , the listener is automatically removed when it is first invoked. Default: false .
passive <boolean> When true , serves as a hint that the listener will not call the Event object's preventDefault() method. Default: false .
capture <boolean> Not directly used by Node.js. Added for API completeness. Default: false .
Adds a new handler for the type event. Any given listener is added only once per type and per capture option value.
If the once option is true , the listener is removed after the next time a type event is dispatched.
The capture option is not used by Node.js in any functional way other than tracking registered event listeners per the EventTarget specification. Specifically, the capture option is used as
part of the key when registering a listener . Any individual listener may be added once with capture = false , and once with capture = true .
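For example:

function handler(event) {}

const target = new EventTarget();
target.addEventListener('foo', handler, { capture: true }); // first
target.addEventListener('foo', handler, { capture: false }); // second

// Removes the second instance of handler.
target.removeEventListener('foo', handler);

// Removes the first instance of handler.
target.removeEventListener('foo', handler, { capture: true });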
eventTarget.dispatchEvent(event)
event <Object> | <Event>
Dispatches the event to the list of handlers for event.type . The event may be an Event object or any object with a type property whose value is a string .
The registered event listeners are synchronously invoked in the order they were registered.
eventTarget.removeEventListener(type, listener)
type <string>
listener <Function> | <EventListener>
options <Object>
capture <boolean>
Removes the listener from the list of handlers for event type .
Class: NodeEventTarget
Extends: <EventTarget>
The NodeEventTarget is a Node.js-specific extension to EventTarget that emulates a subset of the EventEmitter API.
nodeEventTarget.addListener(type, listener[, options])
type <string>
listener <Function> | <EventListener>
options <Object>
once <boolean>
Returns: <EventTarget> this
Node.js-specific extension to the EventTarget class that emulates the equivalent EventEmitter API. The only difference between addListener() and addEventListener() is that
addListener() will return a reference to the EventTarget .
nodeEventTarget.eventNames()
Returns: <string[]>
Node.js-specific extension to the EventTarget class that returns an array of event type names for which event listeners are registered.
nodeEventTarget.listenerCount(type)
type <string>
Returns: <number>
Node.js-specific extension to the EventTarget class that returns the number of event listeners registered for the type .
nodeEventTarget.off(type, listener)
type <string>
listener <Function> | <EventListener>
Node.js-specific extension to the EventTarget class that is an alias for eventTarget.removeListener() .
nodeEventTarget.on(type, listener[, options])
type <string>
listener <Function> | <EventListener>
options <Object>
once <boolean>
Node.js-specific extension to the EventTarget class that is an alias for eventTarget.addEventListener() .
nodeEventTarget.once(type, listener[, options])
type <string>
listener <Function> | <EventListener>
options <Object>
Node.js-specific extension to the EventTarget class that adds a once listener for the given event type . This is equivalent to calling on with the once option set to true .
nodeEventTarget.removeAllListeners([type])
type <string>
Node.js-specific extension to the EventTarget class. If type is specified, removes all registered listeners for type , otherwise removes all registered listeners.
nodeEventTarget.removeListener(type, listener)
type <string>
Node.js-specific extension to the EventTarget class that removes the listener for the given type . The only difference between removeListener() and removeEventListener() is that
removeListener() will return a reference to the EventTarget .
Node.js v15.12.0 Documentation
File system
Stability: 2 - Stable
The fs module enables interacting with the file system in a way modeled on standard POSIX functions.
All file system operations have synchronous, callback, and promise-based forms, and are accessible using both CommonJS syntax and ES6 Modules (ESM).
Promise example
Promise-based operations return a promise that is fulfilled when the asynchronous operation is complete.
const { unlink } = require('fs/promises');

(async function(path) {
try {
await unlink(path);
console.log(`successfully deleted ${path}`);
} catch (error) {
console.error('there was an error:', error.message);
}
})('/tmp/hello');
Callback example
The callback form takes a completion callback function as its last argument and invokes the operation asynchronously. The arguments passed to the completion callback depend on the
method, but the first argument is always reserved for an exception. If the operation is completed successfully, then the first argument is null or undefined .
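For example, deleting a file with the callback API:

const { unlink } = require('fs');

unlink('/tmp/hello', (err) => {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});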
The callback-based versions of the fs module APIs are preferable over the use of the promise APIs when maximal performance (both in terms of execution time and memory allocation) is required.
Synchronous example
The synchronous APIs block the Node.js event loop and further JavaScript execution until the operation is complete. Exceptions are thrown immediately and can be handled using try…
catch , or can be allowed to bubble up.
const { unlinkSync } = require('fs');

try {
unlinkSync('/tmp/hello');
console.log('successfully deleted /tmp/hello');
} catch (err) {
// handle the error
}
Promises API
The fs/promises API provides asynchronous file system methods that return promises.
The promise APIs use the underlying Node.js threadpool to perform file system operations off the event loop thread. These operations are not synchronized or threadsafe. Care must be
taken when performing multiple concurrent modifications on the same file or data corruption may occur.
Class: FileHandle
A <FileHandle> object is an object wrapper for a numeric file descriptor.
If a <FileHandle> is not closed using the filehandle.close() method, it will try to automatically close the file descriptor and emit a process warning, helping to prevent memory leaks.
Please do not rely on this behavior because it can be unreliable and the file may not be closed. Instead, always explicitly close <FileHandle> s. Node.js may change this behavior in the future.
Event: 'close'
The 'close' event is emitted when the <FileHandle> has been closed and can no longer be used.
filehandle.appendFile(data[, options])
data <string> | <Buffer> | <TypedArray> | <DataView>
options <Object> | <string>
encoding <string> | <null> Default: 'utf8'
Returns: <Promise> Fulfills with undefined upon success.
Alias of filehandle.writeFile() .
When operating on file handles, the mode cannot be changed from what it was set to with fsPromises.open() . Therefore, this is equivalent to filehandle.writeFile() .
filehandle.chmod(mode)
mode <integer> the file mode bit mask.
Returns: <Promise> Fulfills with undefined upon success.
Modifies the permissions on the file.
filehandle.chown(uid, gid)
uid <integer> The file's new owner's user id.
gid <integer> The file's new group's group id.
Returns: <Promise> Fulfills with undefined upon success.
Changes the ownership of the file.
filehandle.close()
Returns: <Promise> Fulfills with undefined upon success.
Closes the file handle after waiting for any pending operation on the handle to complete.
const { open } = require('fs/promises');

let filehandle;
try {
filehandle = await open('thefile.txt', 'r');
} finally {
await filehandle?.close();
}
filehandle.datasync()
Returns: <Promise> Fulfills with undefined upon success.
Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for
details.
filehandle.fd
<number> The numeric file descriptor managed by the <FileHandle> object.
filehandle.read(buffer, offset, length, position)
buffer <Buffer> | <Uint8Array> A buffer that will be filled with the file data read.
offset <integer> The location in the buffer at which to start filling. Default: 0
length <integer> The number of bytes to read. Default: buffer.byteLength
position <integer> The location where to begin reading data from the file. If null , data will be read from the current file position, and the position will be updated. If position is an
integer, the current file position will remain unchanged.
Returns: <Promise> Fulfills upon success with an object with two properties:
bytesRead <integer> The number of bytes read
buffer <Buffer> | <Uint8Array> A reference to the passed in buffer argument.
Reads data from the file and stores that in the given buffer.
If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.
filehandle.read(options)
options <Object>
buffer <Buffer> | <Uint8Array> A buffer that will be filled with the file data read. Default: Buffer.alloc(16384)
offset <integer> The location in the buffer at which to start filling. Default: 0
length <integer> The number of bytes to read. Default: buffer.byteLength
position <integer> The location where to begin reading data from the file. If null , data will be read from the current file position, and the position will be updated. If position
is an integer, the current file position will remain unchanged. Default:: null
Returns: <Promise> Fulfills upon success with an object with two properties:
bytesRead <integer> The number of bytes read
buffer <Buffer> | <Uint8Array> A reference to the passed in buffer argument.
If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.
filehandle.readFile(options)
options <Object> | <string>
encoding <string> | <null> Default: null
Returns: <Promise> Fulfills upon a successful read with the contents of the file. If no encoding is specified (using options.encoding ), the data is returned as a <Buffer> object.
Otherwise, the data will be a string.
If one or more filehandle.read() calls are made on a file handle and then a filehandle.readFile() call is made, the data will be read from the current position till the end of the file. It
doesn't always read from the beginning of the file.
filehandle.readv(buffers[, position])
buffers <Buffer[]> | <TypedArray[]> | <DataView[]>
position <integer> The offset from the beginning of the file where the data should be read from. If position is not a number , the data will be read from the current position.
Returns: <Promise> Fulfills upon success with an object containing two properties:
bytesRead <integer> The number of bytes read
buffers <Buffer[]> | <TypedArray[]> | <DataView[]> Property containing a reference to the buffers input.
Read from a file and write to an array of <ArrayBufferView> s.
filehandle.stat([options])
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
Returns: <Promise> Fulfills with an <fs.Stats> object for the file.
filehandle.sync()
Returns: <Promise> Fulfills with undefined upon success.
Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2)
documentation for more detail.
filehandle.truncate(len)
len <integer> Default: 0
Returns: <Promise> Fulfills with undefined upon success.
Truncates the file.
If the file was larger than len bytes, only the first len bytes will be retained in the file.
The following example retains only the first four bytes of the file:
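A sketch, assuming temp.txt initially contains 'Node.js' (run inside an async context):

const { open } = require('fs/promises');

let filehandle = null;
try {
  filehandle = await open('temp.txt', 'r+');
  await filehandle.truncate(4);
} finally {
  await filehandle?.close(); // Close the file if it was opened.
}
// 'temp.txt' now contains 'Node'.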
If the file previously was shorter than len bytes, it is extended, and the extended part is filled with null bytes ( '\0' ):
filehandle.utimes(atime, mtime)
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
Returns: <Promise>
Change the file system timestamps of the object referenced by the <FileHandle> then resolves the promise with no arguments upon success.
This function does not work on AIX versions before 7.1; it will reject the promise with an error using code UV_ENOSYS .
filehandle.write(buffer[, offset[, length[, position]]])
buffer <Buffer> | <Uint8Array>
offset <integer> The start position from within buffer where the data to write begins. Default: 0
length <integer> The number of bytes from buffer to write. Default: buffer.byteLength
position <integer> The offset from the beginning of the file where the data from buffer should be written. If position is not a number , the data will be written at the current
position. See the POSIX pwrite(2) documentation for more detail.
Returns: <Promise> Fulfills upon success with an object containing two properties:
bytesWritten <integer> the number of bytes written
buffer <Buffer> | <Uint8Array> a reference to the buffer written.
It is unsafe to use filehandle.write() multiple times on the same file without waiting for the promise to be resolved (or rejected). For this scenario, use fs.createWriteStream() .
On Linux, positional writes do not work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
filehandle.write(string[, position[, encoding]])
string <string>
position <integer> The offset from the beginning of the file where the data from string should be written. If position is not a number the data will be written at the current position.
encoding <string> The expected string encoding. Default: 'utf8'
Returns: <Promise>
Write string to the file. If string is not a string, or an object with an own toString function property, the promise is rejected with an error.
It is unsafe to use filehandle.write() multiple times on the same file without waiting for the promise to be resolved (or rejected). For this scenario, use fs.createWriteStream() .
On Linux, positional writes do not work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
filehandle.writeFile(data, options)
data <string> | <Buffer> | <Uint8Array> | <Object>
options <Object> | <string>
encoding <string> | <null> The expected character encoding when data is a string. Default: 'utf8'
Returns: <Promise>
Asynchronously writes data to a file, replacing the file if it already exists. data can be a string, a buffer, or an object with an own toString function property. The promise is resolved with no
arguments upon success.
It is unsafe to use filehandle.writeFile() multiple times on the same file without waiting for the promise to be resolved (or rejected).
If one or more filehandle.write() calls are made on a file handle and then a filehandle.writeFile() call is made, the data will be written from the current position till the end of the file.
It doesn't always write from the beginning of the file.
filehandle.writev(buffers[, position])
buffers <Buffer[]> | <TypedArray[]> | <DataView[]>
position <integer> The offset from the beginning of the file where the data from buffers should be written. If position is not a number , the data will be written at the current
position.
Returns: <Promise>
It is unsafe to call writev() multiple times on the same file without waiting for the promise to be resolved (or rejected).
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
fsPromises.access(path[, mode])
path <string> | <Buffer> | <URL>
mode <integer> Default: fs.constants.F_OK
Returns: <Promise> Fulfills with undefined upon success.
Tests a user's permissions for the file or directory specified by path . The mode argument is an optional integer that specifies the accessibility checks to be performed. Check File access
constants for possible values of mode . It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.W_OK | fs.constants.R_OK ).
If the accessibility check is successful, the promise is resolved with no value. If any of the accessibility checks fail, the promise is rejected with an <Error> object. The following example
checks if the file /etc/passwd can be read and written by the current process.
import { access } from 'fs/promises';
import { constants } from 'fs';

try {
await access('/etc/passwd', constants.R_OK | constants.W_OK);
console.log('can access');
} catch {
console.error('cannot access');
}
Using fsPromises.access() to check for the accessibility of a file before calling fsPromises.open() is not recommended. Doing so introduces a race condition, since other processes may
change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file is not accessible.
fsPromises.appendFile(path, data[, options])
path <string> | <Buffer> | <URL> | <FileHandle> filename or <FileHandle>
data <string> | <Buffer>
Returns: <Promise> Fulfills with undefined upon success.
Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a <Buffer> .
The path may be specified as a <FileHandle> that has been opened for appending (using fsPromises.open() ).
fsPromises.chmod(path, mode)
path <string> | <Buffer> | <URL>
mode <string> | <integer>
Changes the permissions of a file.
fsPromises.chown(path, uid, gid)
path <string> | <Buffer> | <URL>
uid <integer>
gid <integer>
Changes the ownership of a file.
fsPromises.copyFile(src, dest[, mode])
src <string> | <Buffer> | <URL> source filename to copy
dest <string> | <Buffer> | <URL> destination filename of the copy operation
mode <integer> Optional modifiers that specify the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g.
fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE ) Default: 0 .
fs.constants.COPYFILE_EXCL : The copy operation will fail if dest already exists.
fs.constants.COPYFILE_FICLONE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy
mechanism is used.
fs.constants.COPYFILE_FICLONE_FORCE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will
fail.
Returns: <Promise> Fulfills with undefined upon success.
Asynchronously copies src to dest . By default, dest is overwritten if it already exists.
No guarantees are made about the atomicity of the copy operation. If an error occurs after the destination file has been opened for writing, an attempt will be made to remove the
destination.
import { copyFile } from 'fs/promises';

try {
await copyFile('source.txt', 'destination.txt');
console.log('source.txt was copied to destination.txt');
} catch {
console.log('The file could not be copied');
}
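To fail instead of overwriting when the destination already exists, the COPYFILE_EXCL modifier can be passed; a minimal sketch reusing the filenames above:
import { copyFile } from 'fs/promises';
import { constants } from 'fs';

try {
  // COPYFILE_EXCL: the copy fails if destination.txt already exists.
  await copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL);
} catch {
  console.log('The file could not be copied');
}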
fsPromises.lchmod(path, mode)
path <string> | <Buffer> | <URL>
mode <integer>
Changes the permissions on a symbolic link. This method is only implemented on macOS.
fsPromises.lchown(path, uid, gid)
path <string> | <Buffer> | <URL>
uid <integer>
gid <integer>
Changes the ownership on a symbolic link.
fsPromises.lutimes(path, atime, mtime)
path <string> | <Buffer> | <URL>
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
Changes the access and modification times of a file in the same way as fsPromises.utimes() , with the difference that if the path refers to a symbolic link, then the link is not dereferenced:
instead, the timestamps of the symbolic link itself are changed.
fsPromises.link(existingPath, newPath)
existingPath <string> | <Buffer> | <URL>
newPath <string> | <Buffer> | <URL>
Creates a new link from the existingPath to the newPath . See the POSIX link(2) documentation for more detail.
fsPromises.lstat(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
Returns: <Promise> Fulfills with the <fs.Stats> object for the given symbolic link path .
Equivalent to fsPromises.stat() unless path refers to a symbolic link, in which case the link itself is stat-ed, not the file that it refers to. Refer to the POSIX lstat(2) documentation for more detail.
fsPromises.mkdir(path[, options])
path <string> | <Buffer> | <URL>
Returns: <Promise> Upon success, fulfills with undefined if recursive is false , or the first directory path created if recursive is true .
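As a minimal sketch of the recursive case (the directory names are hypothetical):
import { mkdir } from 'fs/promises';

// Creates /tmp/a and /tmp/a/b as needed. Resolves with the first
// directory actually created (e.g. '/tmp/a'), or undefined if none was.
const first = await mkdir('/tmp/a/b', { recursive: true });
console.log(first);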
fsPromises.mkdtemp(prefix[, options])
prefix <string>
Returns: <Promise> Fulfills with a string containing the filesystem path of the newly created temporary directory.
Creates a unique temporary directory. A unique directory name is generated by appending six random characters to the end of the provided prefix . Due to platform inconsistencies, avoid
trailing X characters in prefix . Some platforms, notably the BSDs, can return more than six random characters, and replace trailing X characters in prefix with random characters.
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.
import { mkdtemp } from 'fs/promises';
import path from 'path';
import os from 'os';

try {
await mkdtemp(path.join(os.tmpdir(), 'foo-'));
} catch (err) {
console.error(err);
}
The fsPromises.mkdtemp() method will append the six randomly selected characters directly to the prefix string. For instance, given a directory /tmp , if the intention is to create a
temporary directory within /tmp , the prefix must end with a trailing platform-specific path separator ( require('path').sep ).
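A short sketch of why the trailing separator matters (the prefix foo- is hypothetical):
import { mkdtemp } from 'fs/promises';
import { sep } from 'path';
import { tmpdir } from 'os';

// INCORRECT: mkdtemp(tmpdir() + 'foo-') would create '/tmpfoo-abc123'
// at the file system root rather than inside the tmp directory.

// CORRECT: include the platform separator in the prefix.
await mkdtemp(`${tmpdir()}${sep}foo-`);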
fsPromises.open(path, flags[, mode])
path <string> | <Buffer> | <URL>
flags <string> | <number> See support of file system flags . Default: 'r' .
mode <string> | <integer> Sets the file mode (permission and sticky bits) if the file is created. Default: 0o666 (readable and writable)
Opens a <FileHandle> .
Some characters ( < > : " / \ | ? * ) are reserved under Windows as documented by Naming Files, Paths, and Namespaces . Under NTFS, if the filename contains a colon, Node.js will
open a file system stream, as described by this MSDN page .
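A minimal sketch of opening a handle and ensuring it is closed (the filename is hypothetical):
import { open } from 'fs/promises';

let filehandle;
try {
  filehandle = await open('thefile.txt', 'r');
  // ... use filehandle.read() / filehandle.stat() here ...
} finally {
  // Always close the handle to avoid leaking file descriptors.
  await filehandle?.close();
}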
fsPromises.opendir(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
encoding <string> | <null> Default: 'utf8'
bufferSize <number> Number of directory entries that are buffered internally when reading from the directory. Higher values lead to better performance but higher memory
usage. Default: 32
Returns: <Promise> Fulfills with an <fs.Dir> .
Asynchronously open a directory for iterative scanning. See the POSIX opendir(3) documentation for more detail.
Creates an <fs.Dir> , which contains all further functions for reading from and cleaning up the directory.
The encoding option sets the encoding for the path while opening the directory and subsequent read operations.
import { opendir } from 'fs/promises';

try {
const dir = await opendir('./');
for await (const dirent of dir)
console.log(dirent.name);
} catch (err) {
console.error(err);
}
fsPromises.readdir(path[, options])
path <string> | <Buffer> | <URL>
Returns: <Promise> Fulfills with an array of the names of the files in the directory excluding '.' and '..' .
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames. If the encoding is
set to 'buffer' , the filenames returned will be passed as <Buffer> objects.
If options.withFileTypes is set to true , the resolved array will contain <fs.Dirent> objects.
import { readdir } from 'fs/promises';

try {
  const files = await readdir(path);
  for (const file of files)
    console.log(file);
} catch (err) {
console.error(err);
}
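With options.withFileTypes each entry is an <fs.Dirent> , so entries can be filtered without extra stat calls; a minimal sketch:
import { readdir } from 'fs/promises';

const entries = await readdir('./', { withFileTypes: true });
for (const entry of entries) {
  if (entry.isDirectory())
    console.log(entry.name);
}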
fsPromises.readFile(path[, options])
path <string> | <Buffer> | <URL> | <FileHandle> filename or FileHandle
If no encoding is specified (using options.encoding ), the data is returned as a <Buffer> object. Otherwise, the data will be a string.
When the path is a directory, the behavior of fsPromises.readFile() is platform-specific. On macOS, Linux, and Windows, the promise will be rejected with an error. On FreeBSD, a
representation of the directory's contents will be returned.
It is possible to abort an ongoing readFile using an <AbortSignal> . If a request is aborted the promise returned is rejected with an AbortError :
import { readFile } from 'fs/promises';

try {
  const controller = new AbortController();
  const signal = controller.signal;
  const promise = readFile(fileName, { signal });

  // Abort the request before the promise settles.
  controller.abort();

  await promise;
} catch (err) {
  // When a request is aborted - err is an AbortError
  console.error(err);
}
Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.readFile performs.
fsPromises.readlink(path[, options])
path <string> | <Buffer> | <URL>
Reads the contents of the symbolic link referred to by path . See the POSIX readlink(2) documentation for more detail. The promise is resolved with the linkString upon success.
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path returned. If the
encoding is set to 'buffer' , the link path returned will be passed as a <Buffer> object.
fsPromises.realpath(path[, options])
path <string> | <Buffer> | <URL>
Determines the actual location of path using the same semantics as the fs.realpath.native() function.
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path. If the encoding is set
to 'buffer' , the path returned will be passed as a <Buffer> object.
On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.
fsPromises.rename(oldPath, newPath)
oldPath <string> | <Buffer> | <URL>
newPath <string> | <Buffer> | <URL>
Returns: <Promise> Fulfills with undefined upon success.
Renames oldPath to newPath .
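A minimal sketch (the filenames are hypothetical):
import { rename } from 'fs/promises';

try {
  await rename('oldFile.txt', 'newFile.txt');
  console.log('Rename complete!');
} catch (err) {
  console.error(err);
}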
fsPromises.rmdir(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js retries the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .
recursive <boolean> If true , perform a recursive directory removal. In recursive mode, errors are not reported if path does not exist, and operations are retried on failure.
Default: false .
retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .
Using fsPromises.rmdir() on a file (not a directory) results in the promise being rejected with an ENOENT error on Windows and an ENOTDIR error on POSIX.
Setting recursive to true results in behavior similar to the Unix command rm -rf : an error will not be raised for paths that do not exist, and paths that represent files will be deleted. The
permissive behavior of the recursive option is deprecated; ENOTDIR and ENOENT will be thrown in the future.
fsPromises.rm(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
force <boolean> When true , exceptions will be ignored if path does not exist. Default: false .
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .
recursive <boolean> If true , perform a recursive directory removal. In recursive mode operations are retried on failure. Default: false .
retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .
Returns: <Promise> Fulfills with undefined upon success.
Removes files and directories (modeled on the standard POSIX rm utility).
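A minimal sketch of a forced recursive removal (the directory name is hypothetical):
import { rm } from 'fs/promises';

// force: true suppresses the error if the path is already gone;
// recursive: true removes the directory and everything inside it.
await rm('./build', { recursive: true, force: true });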
fsPromises.stat(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
Returns: <Promise> Fulfills with the <fs.Stats> object for the given path .
fsPromises.symlink(target, path[, type])
target <string> | <Buffer> | <URL>
path <string> | <Buffer> | <URL>
type <string> Default: 'file'
Returns: <Promise> Fulfills with undefined upon success.
Creates a symbolic link.
The type argument is only used on Windows platforms and can be one of 'dir' , 'file' , or 'junction' . Windows junction points require the destination path to be absolute. When using
'junction' , the target argument will automatically be normalized to absolute path.
fsPromises.truncate(path[, len])
path <string> | <Buffer> | <URL>
Truncates (shortens or extends the length of) the content at path to len bytes.
fsPromises.unlink(path)
path <string> | <Buffer> | <URL>
If path refers to a symbolic link, then the link is removed without affecting the file or directory to which that link refers. If the path refers to a file path that is not a symbolic link, the file is
deleted. See the POSIX unlink(2) documentation for more detail.
fsPromises.utimes(path, atime, mtime)
path <string> | <Buffer> | <URL>
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
Returns: <Promise> Fulfills with undefined upon success.
Changes the file system timestamps of the object referenced by path .
Values can be either numbers representing Unix epoch time, Date s, or a numeric string like '123456789.0' .
If the value can not be converted to a number, or is NaN , Infinity or -Infinity , an Error will be thrown.
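A minimal sketch that stamps both timestamps to now (the filename is hypothetical):
import { utimes } from 'fs/promises';

const now = new Date();
// Sets both the access time and the modification time to the current time.
await utimes('thefile.txt', now, now);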
fsPromises.watch(filename[, options])
filename <string> | <Buffer> | <URL>
recursive <boolean> Indicates whether all subdirectories should be watched, or only the current directory. This applies when a directory is specified, and only on supported
platforms (See caveats ). Default: false .
encoding <string> Specifies the character encoding to be used for the filename passed to the listener. Default: 'utf8' .
signal <AbortSignal> An <AbortSignal> used to signal when the watcher should stop.
Returns an async iterator that watches for changes on filename , where filename is either a file or a directory.
import { watch } from 'fs/promises';

const ac = new AbortController();
const { signal } = ac;
setTimeout(() => ac.abort(), 10000);

(async () => {
try {
const watcher = watch(__filename, { signal });
for await (const event of watcher)
console.log(event);
} catch (err) {
if (err.name === 'AbortError')
return;
throw err;
}
})();
On most platforms, 'rename' is emitted whenever a filename appears or disappears in the directory.
fsPromises.writeFile(file, data[, options])
file <string> | <Buffer> | <URL> | <FileHandle> filename or FileHandle
data <string> | <Buffer> | <Uint8Array> | <Object>
Returns: <Promise> Fulfills with undefined upon success.
Asynchronously writes data to a file, replacing the file if it already exists. data can be a string, a <Buffer> , or an object with an own toString function property.
It is unsafe to use fsPromises.writeFile() multiple times on the same file without waiting for the promise to be settled.
Similarly to fsPromises.readFile - fsPromises.writeFile is a convenience method that performs multiple write calls internally to write the buffer passed to it. For performance
sensitive code consider using fs.createWriteStream() .
It is possible to use an <AbortSignal> to cancel an fsPromises.writeFile() . Cancelation is "best effort", and some amount of data is likely still to be written.
import { writeFile } from 'fs/promises';
import { Buffer } from 'buffer';

try {
  const controller = new AbortController();
  const { signal } = controller;
  const data = new Uint8Array(Buffer.from('Hello Node.js'));
  const promise = writeFile('message.txt', data, { signal });

  // Abort the request before the promise settles.
  controller.abort();

  await promise;
} catch (err) {
  // When a request is aborted - err is an AbortError
  console.error(err);
}
Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.writeFile performs.
Callback API
The callback APIs perform all operations asynchronously, without blocking the event loop, then invoke a callback function upon completion or error.
The callback APIs use the underlying Node.js threadpool to perform file system operations off the event loop thread. These operations are not synchronized or threadsafe. Care must be
taken when performing multiple concurrent modifications on the same file or data corruption may occur.
fs.access(path[, mode], callback)
path <string> | <Buffer> | <URL>
mode <integer> Default: fs.constants.F_OK
callback <Function>
err <Error>
Tests a user's permissions for the file or directory specified by path . The mode argument is an optional integer that specifies the accessibility checks to be performed. Check File access
constants for possible values of mode . It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.W_OK | fs.constants.R_OK ).
The final argument, callback , is a callback function that is invoked with a possible error argument. If any of the accessibility checks fail, the error argument will be an Error object. The
following examples check if package.json exists, and if it is readable or writable.
import { access, constants } from 'fs';

const file = 'package.json';

// Check if the file exists in the current directory.
access(file, constants.F_OK, (err) => {
  console.log(`${file} ${err ? 'does not exist' : 'exists'}`);
});

// Check if the file is readable.
access(file, constants.R_OK, (err) => {
  console.log(`${file} ${err ? 'is not readable' : 'is readable'}`);
});

// Check if the file is writable.
access(file, constants.W_OK, (err) => {
  console.log(`${file} ${err ? 'is not writable' : 'is writable'}`);
});
Do not use fs.access() to check for the accessibility of a file before calling fs.open() , fs.readFile() or fs.writeFile() . Doing so introduces a race condition, since other processes
may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file is not accessible.
write (NOT RECOMMENDED)
import { access, open, close } from 'fs';

access('myfile', (err) => {
  if (!err) {
    console.error('myfile already exists');
    return;
  }

  open('myfile', 'wx', (err, fd) => {
    if (err) throw err;

    try {
      writeMyData(fd);
    } finally {
      close(fd, (err) => {
        if (err) throw err;
      });
    }
  });
});
write (RECOMMENDED)
import { open, close } from 'fs';

open('myfile', 'wx', (err, fd) => {
  if (err) {
    if (err.code === 'EEXIST') {
      console.error('myfile already exists');
      return;
    }
    throw err;
  }

  try {
    writeMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});
read (NOT RECOMMENDED)
import { access, open, close } from 'fs';

access('myfile', (err) => {
  if (err) {
    console.error('myfile does not exist');
    return;
  }

  open('myfile', 'r', (err, fd) => {
    if (err) throw err;

    try {
      readMyData(fd);
    } finally {
      close(fd, (err) => {
        if (err) throw err;
      });
    }
  });
});
read (RECOMMENDED)
import { open, close } from 'fs';

open('myfile', 'r', (err, fd) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.error('myfile does not exist');
      return;
    }
    throw err;
  }

  try {
    readMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});
The "not recommended" examples above check for accessibility and then use the file; the "recommended" examples are better because they use the file directly and handle the error, if any.
In general, check for the accessibility of a file only if the file will not be used directly, for example when its accessibility is a signal from another process.
On Windows, access-control policies (ACLs) on a directory may limit access to a file or directory. The fs.access() function, however, does not check the ACL and therefore may report that
a path is accessible even if the ACL restricts the user from reading or writing to it.
fs.appendFile(path, data[, options], callback)
path <string> | <Buffer> | <URL> | <number> filename or file descriptor
callback <Function>
err <Error>
Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a <Buffer> .
The path may be specified as a numeric file descriptor that has been opened for appending (using fs.open() or fs.openSync() ). The file descriptor will not be closed automatically.
import { close } from 'fs';

// Helper for the file-descriptor examples: closes fd and surfaces errors.
function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}
fs.chmod(path, mode, callback)
path <string> | <Buffer> | <URL>
mode <string> | <integer>
callback <Function>
err <Error>
Asynchronously changes the permissions of a file. No arguments other than a possible exception are given to the completion callback.
File modes
The mode argument used in both the fs.chmod() and fs.chmodSync() methods is a numeric bitmask created using a logical OR of the constants defined in fs.constants .
An easier method of constructing the mode is to use a sequence of three octal digits (e.g. 765 ). The left-most digit ( 7 in the example), specifies the permissions for the file owner. The middle
digit ( 6 in the example), specifies permissions for the group. The right-most digit ( 5 in the example), specifies the permissions for others.
Number Description
4 read only
2 write only
1 execute only
0 no permission
Caveats: on Windows only the write permission can be changed, and the distinction among the permissions of group, owner or others is not implemented.
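A minimal sketch using an octal mode (the filename is hypothetical):
import { chmod } from 'fs';

// 0o775: owner rwx, group rwx, others r-x.
chmod('my_file.txt', 0o775, (err) => {
  if (err) throw err;
  console.log('The permissions for "my_file.txt" have been changed!');
});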
fs.chown(path, uid, gid, callback)
path <string> | <Buffer> | <URL>
uid <integer>
gid <integer>
callback <Function>
err <Error>
Asynchronously changes owner and group of a file. No arguments other than a possible exception are given to the completion callback.
fs.close(fd[, callback])
fd <integer>
callback <Function>
err <Error>
Closes the file descriptor. No arguments other than a possible exception are given to the completion callback.
Calling fs.close() on any file descriptor ( fd ) that is currently in use through any other fs operation may lead to undefined behavior.
fs.copyFile(src, dest[, mode], callback)
src <string> | <Buffer> | <URL> source filename to copy
dest <string> | <Buffer> | <URL> destination filename of the copy operation
mode <integer> modifiers for copy operation. Default: 0 .
callback <Function>
Asynchronously copies src to dest . By default, dest is overwritten if it already exists. No arguments other than a possible exception are given to the callback function. Node.js makes no
guarantees about the atomicity of the copy operation. If an error occurs after the destination file has been opened for writing, Node.js will attempt to remove the destination.
mode is an optional integer that specifies the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g.
fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE ).
fs.constants.COPYFILE_EXCL : The copy operation will fail if dest already exists.
fs.constants.COPYFILE_FICLONE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy mechanism is
used.
fs.constants.COPYFILE_FICLONE_FORCE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will fail.
import { copyFile } from 'fs';

function callback(err) {
  if (err) throw err;
  console.log('source.txt was copied to destination.txt');
}

// destination.txt will be created or overwritten by default.
copyFile('source.txt', 'destination.txt', callback);
fs.createReadStream(path[, options])
path <string> | <Buffer> | <URL>
start <integer>
Unlike the 16 kb default highWaterMark for a readable stream, the stream returned by this method has a default highWaterMark of 64 kb.
options can include start and end values to read a range of bytes from the file instead of the entire file. Both start and end are inclusive and start counting at 0, allowed values are in
the [0, Number.MAX_SAFE_INTEGER ] range. If fd is specified and start is omitted or undefined , fs.createReadStream() reads sequentially from the current file position. The encoding
can be any one of those accepted by <Buffer> .
If fd is specified, ReadStream will ignore the path argument and will use the specified file descriptor. This means that no 'open' event will be emitted. fd should be blocking; non-
blocking fd s should be passed to <net.Socket> .
If fd points to a character device that only supports blocking reads (such as keyboard or sound card), read operations do not finish until data is available. This can prevent the process from
exiting and the stream from closing naturally.
By default, the stream will emit a 'close' event after it has been destroyed, like most Readable streams. Set the emitClose option to false to change this behavior.
By providing the fs option, it is possible to override the corresponding fs implementations for open , read , and close . When providing the fs option, overrides for open , read , and
close are required.
If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is the application's responsibility to close it and make sure there's no file descriptor leak. If autoClose
is set to true (default behavior), on 'error' or 'end' the file descriptor will be closed automatically.
mode sets the file mode (permission and sticky bits), but only if the file was created.
An example to read the last 10 bytes of a file which is 100 bytes long:
import { createReadStream } from 'fs';

// start and end are inclusive byte offsets.
createReadStream('sample.txt', { start: 90, end: 99 });
fs.createWriteStream(path[, options])
path <string> | <Buffer> | <URL>
start <integer>
options may also include a start option to allow writing data at some position past the beginning of the file, allowed values are in the [0, Number.MAX_SAFE_INTEGER ] range. Modifying a
file rather than replacing it may require the flags option to be set to r+ rather than the default w . The encoding can be any one of those accepted by <Buffer> .
If autoClose is set to true (default behavior) on 'error' or 'finish' the file descriptor will be closed automatically. If autoClose is false, then the file descriptor won't be closed, even if
there's an error. It is the application's responsibility to close it and make sure there's no file descriptor leak.
By default, the stream will emit a 'close' event after it has been destroyed, like most Writable streams. Set the emitClose option to false to change this behavior.
By providing the fs option it is possible to override the corresponding fs implementations for open , write , writev and close . Overriding write() without writev() can reduce
performance as some optimizations ( _writev() ) will be disabled. When providing the fs option, overrides for open , close , and at least one of write and writev are required.
Like <fs.ReadStream> , if fd is specified, <fs.WriteStream> will ignore the path argument and will use the specified file descriptor. This means that no 'open' event will be emitted. fd
should be blocking; non-blocking fd s should be passed to <net.Socket> .
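As a minimal sketch (the filename is hypothetical), the 'a' flag appends to the file instead of replacing it:
import { createWriteStream } from 'fs';

// flags: 'a' opens for appending; the default 'w' would truncate the file.
const out = createWriteStream('app.log', { flags: 'a' });
out.write('one line per call\n');
out.end();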
fs.exists(path, callback)
Stability: 0 - Deprecated: Use fs.stat() or fs.access() instead.
Test whether or not the given path exists by checking with the file system. Then call the callback argument with either true or false:
import { exists } from 'fs';

exists('/etc/passwd', (e) => {
  console.log(e ? 'it exists' : 'no passwd!');
});
The parameters for this callback are not consistent with other Node.js callbacks. Normally, the first parameter to a Node.js callback is an err parameter, optionally followed by other
parameters. The fs.exists() callback has only one boolean parameter. This is one reason fs.access() is recommended instead of fs.exists() .
Using fs.exists() to check for the existence of a file before calling fs.open() , fs.readFile() or fs.writeFile() is not recommended. Doing so introduces a race condition, since other
processes may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file does not exist.
write (NOT RECOMMENDED)
import { exists, open, close } from 'fs';

exists('myfile', (e) => {
  if (e) {
    console.error('myfile already exists');
  } else {
    open('myfile', 'wx', (err, fd) => {
      if (err) throw err;

      try {
        writeMyData(fd);
      } finally {
        close(fd, (err) => {
          if (err) throw err;
        });
      }
    });
  }
});
write (RECOMMENDED)
import { open, close } from 'fs';
open('myfile', 'wx', (err, fd) => {
if (err) {
if (err.code === 'EEXIST') {
console.error('myfile already exists');
return;
}
throw err;
}
try {
writeMyData(fd);
} finally {
close(fd, (err) => {
if (err) throw err;
});
}
});
read (NOT RECOMMENDED)
import { exists, open, close } from 'fs';

exists('myfile', (e) => {
  if (e) {
    open('myfile', 'r', (err, fd) => {
      if (err) throw err;

      try {
        readMyData(fd);
      } finally {
        close(fd, (err) => {
          if (err) throw err;
        });
      }
    });
  } else {
    console.error('myfile does not exist');
  }
});
read (RECOMMENDED)
import { open, close } from 'fs';

open('myfile', 'r', (err, fd) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.error('myfile does not exist');
      return;
    }
    throw err;
  }

  try {
    readMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});
The "not recommended" examples above check for existence and then use the file; the "recommended" examples are better because they use the file directly and handle the error, if any.
In general, check for the existence of a file only if the file won’t be used directly, for example when its existence is a signal from another process.
fs.fchmod(fd, mode, callback)
fd <integer>
mode <string> | <integer>
callback <Function>
err <Error>
Sets the permissions on the file. No arguments other than a possible exception are given to the completion callback.
fs.fchown(fd, uid, gid, callback)
fd <integer>
uid <integer>
gid <integer>
callback <Function>
err <Error>
Sets the owner of the file. No arguments other than a possible exception are given to the completion callback.
fs.fdatasync(fd, callback)
fd <integer>
callback <Function>
err <Error>
Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for
details. No arguments other than a possible exception are given to the completion callback.
fs.fstat(fd[, options], callback)
fd <integer>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
callback <Function>
err <Error>
stats <fs.Stats>
Invokes the callback with the <fs.Stats> for the file descriptor.
fs.fsync(fd, callback)
fd <integer>
callback <Function>
err <Error>
Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2)
documentation for more detail. No arguments other than a possible exception are given to the completion callback.
fs.ftruncate(fd[, len], callback)
fd <integer>
len <integer> Default: 0
callback <Function>
err <Error>
Truncates the file descriptor. No arguments other than a possible exception are given to the completion callback.
If the file referred to by the file descriptor was larger than len bytes, only the first len bytes will be retained in the file.
For example, the following program retains only the first four bytes of the file:
import { open, close, ftruncate } from 'fs';

function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}

open('temp.txt', 'r+', (err, fd) => {
  if (err) throw err;

  try {
    ftruncate(fd, 4, (err) => {
      closeFd(fd);
      if (err) throw err;
    });
  } catch (err) {
    closeFd(fd);
    if (err) throw err;
  }
});
If the file previously was shorter than len bytes, it is extended, and the extended part is filled with null bytes ( '\0' ):
fs.futimes(fd, atime, mtime, callback)
fd <integer>
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
callback <Function>
err <Error>
Change the file system timestamps of the object referenced by the supplied file descriptor. See fs.utimes() .
This function does not work on AIX versions before 7.1; it will return the error UV_ENOSYS .
fs.lchmod(path, mode, callback)
path <string> | <Buffer> | <URL>
mode <integer>
callback <Function>
err <Error>
Changes the permissions on a symbolic link. No arguments other than a possible exception are given to the completion callback.
fs.lchown(path, uid, gid, callback)
path <string> | <Buffer> | <URL>
uid <integer>
gid <integer>
callback <Function>
err <Error>
Set the owner of the symbolic link. No arguments other than a possible exception are given to the completion callback.
fs.lutimes(path, atime, mtime, callback)
path <string> | <Buffer> | <URL>
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
callback <Function>
err <Error>
Changes the access and modification times of a file in the same way as fs.utimes() , with the difference that if the path refers to a symbolic link, then the link is not dereferenced: instead,
the timestamps of the symbolic link itself are changed.
No arguments other than a possible exception are given to the completion callback.
fs.link(existingPath, newPath, callback)
existingPath <string> | <Buffer> | <URL>
newPath <string> | <Buffer> | <URL>
callback <Function>
err <Error>
Creates a new link from the existingPath to the newPath . See the POSIX link(2) documentation for more detail. No arguments other than a possible exception are given to the
completion callback.
fs.lstat(path[, options], callback)
path <string> | <Buffer> | <URL>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
callback <Function>
err <Error>
stats <fs.Stats>
Retrieves the <fs.Stats> for the symbolic link referred to by the path. The callback gets two arguments (err, stats) where stats is an <fs.Stats> object. lstat() is identical
to stat() , except that if path is a symbolic link, then the link itself is stat-ed, not the file that it refers to.
fs.mkdir(path[, options], callback)
path <string> | <Buffer> | <URL>
callback <Function>
err <Error>
The callback is given a possible exception and, if recursive is true , the first directory path created, (err, [path]) . path can still be undefined when recursive is true , if no directory
was created.
The optional options argument can be an integer specifying mode (permission and sticky bits), or an object with a mode property and a recursive property indicating whether parent
directories should be created. Calling fs.mkdir() when path is a directory that exists results in an error only when recursive is false.
On Windows, using fs.mkdir() on the root directory even with recursion will result in an error:
import { mkdir } from 'fs';

mkdir('/', { recursive: true }, (err) => {
  // => [Error: EPERM: operation not permitted, mkdir 'C:\']
});
fs.mkdtemp(prefix[, options], callback)
prefix <string>
callback <Function>
err <Error>
directory <string>
Generates six random characters to be appended behind a required prefix to create a unique temporary directory. Due to platform inconsistencies, avoid trailing X characters in prefix .
Some platforms, notably the BSDs, can return more than six random characters, and replace trailing X characters in prefix with random characters.
The created directory path is passed as a string to the callback's second parameter.
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.
The fs.mkdtemp() method will append the six randomly selected characters directly to the prefix string. For instance, given a directory /tmp , if the intention is to create a temporary
directory within /tmp , the prefix must end with a trailing platform-specific path separator ( require('path').sep ).
fs.open(path[, flags[, mode]], callback)
path <string> | <Buffer> | <URL>
flags <string> | <number> See support of file system flags . Default: 'r' .
mode <string> | <integer> Default: 0o666 (readable and writable)
callback <Function>
err <Error>
fd <integer>
Asynchronous file open. See the POSIX open(2) documentation for more details.
mode sets the file mode (permission and sticky bits), but only if the file was created. On Windows, only the write permission can be manipulated; see fs.chmod() .
Some characters ( < > : " / \ | ? * ) are reserved under Windows as documented by Naming Files, Paths, and Namespaces . Under NTFS, if the filename contains a colon, Node.js will
open a file system stream, as described by this MSDN page .
Functions based on fs.open() exhibit this behavior as well: fs.writeFile() , fs.readFile() , etc.
options <Object>
encoding <string> | <null> Default: 'utf8'
bufferSize <number> Number of directory entries that are buffered internally when reading from the directory. Higher values lead to better performance but higher memory
usage. Default: 32
callback <Function>
err <Error>
dir <fs.Dir>
Asynchronously open a directory. See the POSIX opendir(3) documentation for more details.
Creates an <fs.Dir> , which contains all further functions for reading from and cleaning up the directory.
The encoding option sets the encoding for the path while opening the directory and subsequent read operations.
fs.read(fd, buffer, offset, length, position, callback)
fd <integer>
buffer <Buffer> | <TypedArray> | <DataView> The buffer that the data will be written to.
offset <integer> The position in buffer to write the data to.
length <integer> The number of bytes to read.
position <integer> | <bigint> Specifies where to begin reading from in the file. If position is null or -1 , data will be read from the current file position, and the file position will
be updated. If position is an integer, the file position will be unchanged.
callback <Function>
err <Error>
bytesRead <integer>
buffer <Buffer>
If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.
If this method is invoked as its util.promisify() ed version, it returns a promise for an Object with bytesRead and buffer properties.
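A minimal sketch of a single positional read into a buffer (the filename is hypothetical):
import { open, read, close } from 'fs';

open('thefile.txt', 'r', (err, fd) => {
  if (err) throw err;
  const buffer = Buffer.alloc(16);
  // Read up to 16 bytes starting at the beginning of the file (position 0).
  read(fd, buffer, 0, buffer.length, 0, (err, bytesRead, buf) => {
    if (err) throw err;
    console.log(buf.toString('utf8', 0, bytesRead));
    close(fd, (err) => {
      if (err) throw err;
    });
  });
});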
fs.read(fd, [options,] callback)
fd <integer>
options <Object>
buffer <Buffer> | <TypedArray> | <DataView> Default: Buffer.alloc(16384)
callback <Function>
err <Error>
bytesRead <integer>
buffer <Buffer>
Similar to the fs.read() function, this version takes an optional options object. If no options object is specified, it will default with the above values.
fs.readdir(path[, options], callback)
path <string> | <Buffer> | <URL>
callback <Function>
err <Error>
files <string[]> | <Buffer[]> | <fs.Dirent[]>
Reads the contents of a directory. The callback gets two arguments (err, files) where files is an array of the names of the files in the directory excluding '.' and '..' .
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames passed to the
callback. If the encoding is set to 'buffer' , the filenames returned will be passed as <Buffer> objects.
If options.withFileTypes is set to true , the files array will contain <fs.Dirent> objects.
fs.readFile(path[, options], callback)
path <string> | <Buffer> | <URL> | <integer> filename or file descriptor
callback <Function>
err <Error>
data <string> | <Buffer>
The callback is passed two arguments (err, data) , where data is the contents of the file.
When the path is a directory, the behavior of fs.readFile() and fs.readFileSync() is platform-specific. On macOS, Linux, and Windows, an error will be returned. On FreeBSD, a
representation of the directory's contents will be returned.
// FreeBSD
readFile('<directory>', (err, data) => {
// => null, <data>
});
It is possible to abort an ongoing request using an AbortSignal . If a request is aborted the callback is called with an AbortError :
import { readFile } from 'fs';

const controller = new AbortController();
const signal = controller.signal;
readFile(fileName, { signal }, (err, buf) => {
  // ...
});
// When you want to abort the request
controller.abort();
Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.readFile performs.
File descriptors
1. Any specified file descriptor has to support reading.
2. If a file descriptor is specified as the path , it will not be closed automatically.
3. The reading will begin at the current position. For example, if the file already had 'Hello World' and six bytes are read with the file descriptor, the call to fs.readFile() with the same
file descriptor would give 'World' , rather than 'Hello World' .
Performance Considerations
The fs.readFile() method asynchronously reads the contents of a file into memory one chunk at a time, allowing the event loop to turn between each chunk. This allows the read
operation to have less impact on other activity that may be using the underlying libuv thread pool but means that it will take longer to read a complete file into memory.
The additional read overhead can vary broadly on different systems and depends on the type of file being read. If the file type is not a regular file (a pipe for instance) and Node.js is unable to
determine an actual file size, each read operation will load 64 KB of data. For regular files, each read will process 512 KB of data.
For applications that require as-fast-as-possible reading of file contents, it is better to use fs.read() directly and for application code to manage reading the full contents of the file itself.
The Node.js GitHub issue #25741 provides more information and a detailed analysis on the performance of fs.readFile() for multiple file sizes in different Node.js versions.
fs.readlink(path[, options], callback)
path <string> | <Buffer> | <URL>
callback <Function>
err <Error>
linkString <string> | <Buffer>
Reads the contents of the symbolic link referred to by path . The callback gets two arguments (err, linkString) .
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path passed to the
callback. If the encoding is set to 'buffer' , the link path returned will be passed as a <Buffer> object.
fs.readv(fd, buffers[, position], callback)
fd <integer>
buffers <ArrayBufferView[]>
position <integer>
callback <Function>
err <Error>
bytesRead <integer>
buffers <ArrayBufferView[]>
Read from a file specified by fd and write to an array of ArrayBufferView s using readv() .
position is the offset from the beginning of the file from where data should be read. If typeof position !== 'number' , the data will be read from the current position.
The callback will be given three arguments: err , bytesRead , and buffers . bytesRead is how many bytes were read from the file.
If this method is invoked as its util.promisify() ed version, it returns a promise for an Object with bytesRead and buffers properties.
fs.realpath(path[, options], callback)
path <string> | <Buffer> | <URL>
callback <Function>
err <Error>
resolvedPath <string> | <Buffer>
Asynchronously computes the canonical pathname by resolving . , .. , and symbolic links.
A canonical pathname is not necessarily unique. Hard links and bind mounts can expose a file system entity through many pathnames.
This function behaves like realpath(3) , with some exceptions:
1. No case conversion is performed on case-insensitive file systems.
2. The maximum number of symbolic links is platform-independent and generally (much) higher than what the native realpath(3) implementation supports.
The callback gets two arguments (err, resolvedPath) . May use process.cwd to resolve relative paths.
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path passed to the callback.
If the encoding is set to 'buffer' , the path returned will be passed as a <Buffer> object.
If path resolves to a socket or a pipe, the function will return a system dependent name for that object.
fs.realpath.native(path[, options], callback)
path <string> | <Buffer> | <URL>
callback <Function>
err <Error>
Asynchronous realpath(3) .
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path passed to the callback.
If the encoding is set to 'buffer' , the path returned will be passed as a <Buffer> object.
On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.
fs.rename(oldPath, newPath, callback)
oldPath <string> | <Buffer> | <URL>
newPath <string> | <Buffer> | <URL>
callback <Function>
err <Error>
Asynchronously rename file at oldPath to the pathname provided as newPath . In the case that newPath already exists, it will be overwritten. If there is a directory at newPath , an error will
be raised instead. No arguments other than a possible exception are given to the completion callback.
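A minimal sketch (the filenames are hypothetical):
import { rename } from 'fs';

rename('oldFile.txt', 'newFile.txt', (err) => {
  if (err) throw err;
  console.log('Rename complete!');
});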
fs.rmdir(path[, options], callback)
path <string> | <Buffer> | <URL>
options <Object>
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js retries the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .
recursive <boolean> If true , perform a recursive directory removal. In recursive mode, errors are not reported if path does not exist, and operations are retried on failure.
Default: false .
retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .
callback <Function>
err <Error>
Asynchronous rmdir(2) . No arguments other than a possible exception are given to the completion callback.
Using fs.rmdir() on a file (not a directory) results in an ENOENT error on Windows and an ENOTDIR error on POSIX.
Setting recursive to true results in behavior similar to the Unix command rm -rf : an error will not be raised for paths that do not exist, and paths that represent files will be deleted. The
permissive behavior of the recursive option is deprecated; ENOTDIR and ENOENT will be thrown in the future.
fs.rm(path[, options], callback)
path <string> | <Buffer> | <URL>
options <Object>
force <boolean> When true , exceptions will be ignored if path does not exist. Default: false .
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .
recursive <boolean> If true , perform a recursive removal. In recursive mode operations are retried on failure. Default: false .
retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .
callback <Function>
err <Error>
Asynchronously removes files and directories (modeled on the standard POSIX rm utility). No arguments other than a possible exception are given to the completion callback.
fs.stat(path[, options], callback)
path <string> | <Buffer> | <URL>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
callback <Function>
err <Error>
stats <fs.Stats>
Asynchronous stat(2) . The callback gets two arguments (err, stats) where stats is an <fs.Stats> object.
Using fs.stat() to check for the existence of a file before calling fs.open() , fs.readFile() or fs.writeFile() is not recommended. Instead, user code should open/read/write the file
directly and handle the error raised if the file is not available.
For example, given the following directory structure:
- txtDir
-- file.txt
- app.js
The next program will check for the stats of the given paths:
import { stat } from 'fs';

const pathsToCheck = ['./txtDir', './txtDir/file.txt'];

for (let i = 0; i < pathsToCheck.length; i++) {
  stat(pathsToCheck[i], (err, stats) => {
    console.log(stats.isDirectory());
    console.log(stats);
  });
}
The resulting output will resemble:
true
Stats {
dev: 16777220,
mode: 16877,
nlink: 3,
uid: 501,
gid: 20,
rdev: 0,
blksize: 4096,
ino: 14214262,
size: 96,
blocks: 0,
atimeMs: 1561174653071.963,
mtimeMs: 1561174614583.3518,
ctimeMs: 1561174626623.5366,
birthtimeMs: 1561174126937.2893,
atime: 2019-06-22T03:37:33.072Z,
mtime: 2019-06-22T03:36:54.583Z,
ctime: 2019-06-22T03:37:06.624Z,
birthtime: 2019-06-22T03:28:46.937Z
}
false
Stats {
dev: 16777220,
mode: 33188,
nlink: 1,
uid: 501,
gid: 20,
rdev: 0,
blksize: 4096,
ino: 14214074,
size: 8,
blocks: 8,
atimeMs: 1561174616618.8555,
mtimeMs: 1561174614584,
ctimeMs: 1561174614583.8145,
birthtimeMs: 1561174007710.7478,
atime: 2019-06-22T03:36:56.619Z,
mtime: 2019-06-22T03:36:54.584Z,
ctime: 2019-06-22T03:36:54.584Z,
birthtime: 2019-06-22T03:26:47.711Z
}
fs.symlink(target, path[, type], callback)
target <string> | <Buffer> | <URL>
path <string> | <Buffer> | <URL>
type <string>
callback <Function>
err <Error>
Creates the link called path pointing to target . No arguments other than a possible exception are given to the completion callback.
The type argument is only available on Windows and ignored on other platforms. It can be set to 'dir' , 'file' , or 'junction' . If the type argument is not set, Node.js will autodetect
target type and use 'file' or 'dir' . If the target does not exist, 'file' will be used. Windows junction points require the destination path to be absolute. When using 'junction' ,
the target argument will automatically be normalized to absolute path.
import { symlink } from 'fs';

symlink('./mew', './example/mewtwo', callback);
The above example creates a symbolic link mewtwo in the example directory which points to mew in the same directory:
$ tree example/
example/
├── mew
└── mewtwo -> ./mew
fs.truncate(path[, len], callback)
path <string> | <Buffer> | <URL>
len <integer> Default: 0
callback <Function>
err <Error>
Truncates the file. No arguments other than a possible exception are given to the completion callback. A file descriptor can also be passed as the first argument. In this case, fs.ftruncate()
is called.
Passing a file descriptor is deprecated and may result in an error being thrown in the future.
fs.unlink(path, callback)
path <string> | <Buffer> | <URL>
callback <Function>
err <Error>
Asynchronously removes a file or symbolic link. No arguments other than a possible exception are given to the completion callback.
fs.unlink() will not work on a directory, empty or otherwise. To remove a directory, use fs.rmdir() .
fs.unwatchFile(filename[, listener])
filename <string> | <Buffer> | <URL>
Stop watching for changes on filename . If listener is specified, only that particular listener is removed. Otherwise, all listeners are removed, effectively stopping watching of filename .
Calling fs.unwatchFile() with a filename that is not being watched is a no-op, not an error.
Using fs.watch() is more efficient than fs.watchFile() and fs.unwatchFile() . fs.watch() should be used instead of fs.watchFile() and fs.unwatchFile() when possible.
fs.utimes(path, atime, mtime, callback)
path <string> | <Buffer> | <URL>
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
callback <Function>
err <Error>
Change the file system timestamps of the object referenced by path .
fs.watch(filename[, options][, listener])
filename <string> | <Buffer> | <URL>
options <string> | <Object>
persistent <boolean> Indicates whether the process should continue to run as long as files are being watched. Default: true .
recursive <boolean> Indicates whether all subdirectories should be watched, or only the current directory. This applies when a directory is specified, and only on supported
platforms (See caveats ). Default: false .
encoding <string> Specifies the character encoding to be used for the filename passed to the listener. Default: 'utf8' .
Returns: <fs.FSWatcher>
The second argument is optional. If options is provided as a string, it specifies the encoding . Otherwise options should be passed as an object.
The listener callback gets two arguments (eventType, filename) . eventType is either 'rename' or 'change' , and filename is the name of the file which triggered the event.
On most platforms, 'rename' is emitted whenever a filename appears or disappears in the directory.
The listener callback is attached to the 'change' event fired by <fs.FSWatcher> , but it is not the same thing as the 'change' value of eventType .
If a signal is passed, aborting the corresponding AbortController will close the returned <fs.FSWatcher> .
Caveats
The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.
The recursive option is only supported on macOS and Windows. An ERR_FEATURE_UNAVAILABLE_ON_PLATFORM exception will be thrown when the option is used on a platform that does not
support it.
On Windows, no events will be emitted if the watched directory is moved or renamed. An EPERM error is reported when the watched directory is deleted.
Availability
This feature depends on the underlying operating system providing a way to be notified of filesystem changes.
On Linux systems, this uses inotify(7) .
On macOS, this uses kqueue(2) for files and FSEvents for directories.
On SunOS systems (including Solaris and SmartOS), this uses event ports .
On Windows systems, this feature depends on ReadDirectoryChangesW .
It is still possible to use fs.watchFile() , which uses stat polling, but this method is slower and less reliable.
Inodes
On Linux and macOS systems, fs.watch() resolves the path to an inode and watches the inode. If the watched path is deleted and recreated, it is assigned a new inode. The watch will emit
an event for the delete but will continue watching the original inode. Events for the new inode will not be emitted. This is expected behavior.
AIX files retain the same inode for the lifetime of a file. Saving and closing a watched file on AIX will result in two notifications (one for adding new content, and one for truncation).
Filename argument
Providing filename argument in the callback is only supported on Linux, macOS, Windows, and AIX. Even on supported platforms, filename is not always guaranteed to be provided.
Therefore, don't assume that filename argument is always provided in the callback, and have some fallback logic if it is null .
fs.watchFile(filename[, options], listener)
filename <string> | <Buffer> | <URL>
options <Object>
bigint <boolean> Default: false
listener <Function>
current <fs.Stats>
previous <fs.Stats>
Returns: <fs.StatWatcher>
Watch for changes on filename . The callback listener will be called each time the file is accessed.
The options argument may be omitted. If provided, it should be an object. The options object may contain a boolean named persistent that indicates whether the process should
continue to run as long as files are being watched. The options object may specify an interval property indicating how often the target should be polled in milliseconds.
The listener gets two arguments: the current stat object and the previous stat object:
import { watchFile } from 'fs';

watchFile('message.text', (curr, prev) => {
  console.log(`the current mtime is: ${curr.mtime}`);
  console.log(`the previous mtime was: ${prev.mtime}`);
});
These stat objects are instances of fs.Stats . If the bigint option is true , the numeric values in these objects are specified as BigInt s.
To be notified when the file was modified, not just accessed, it is necessary to compare curr.mtime and prev.mtime .
When an fs.watchFile operation results in an ENOENT error, it will invoke the listener once, with all the fields zeroed (or, for dates, the Unix Epoch). If the file is created later on, the listener
will be called again, with the latest stat objects. This is a change in functionality since v0.10.
Using fs.watch() is more efficient than fs.watchFile and fs.unwatchFile . fs.watch should be used instead of fs.watchFile and fs.unwatchFile when possible.
When a file being watched by fs.watchFile() disappears and reappears, then the contents of previous in the second callback event (the file's reappearance) will be the same as the
contents of previous in the first callback event (its disappearance).
fs.write(fd, buffer[, offset[, length[, position]]], callback)
fd <integer>
buffer <Buffer> | <TypedArray> | <DataView> | <string> | <Object>
offset <integer>
length <integer>
position <integer>
callback <Function>
err <Error>
bytesWritten <integer>
Write buffer to the file specified by fd . If buffer is a normal object, it must have an own toString function property.
offset determines the part of the buffer to be written, and length is an integer specifying the number of bytes to write.
position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number' , the data will be written at the current position. See
pwrite(2) .
The callback will be given three arguments (err, bytesWritten, buffer) where bytesWritten specifies how many bytes were written from buffer .
If this method is invoked as its util.promisify() ed version, it returns a promise for an Object with bytesWritten and buffer properties.
It is unsafe to use fs.write() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
fs.write(fd, string[, position[, encoding]], callback)
fd <integer>
string <string> | <Object>
position <integer>
encoding <string> Default: 'utf8'
callback <Function>
err <Error>
written <integer>
string <string>
Write string to the file specified by fd . If string is not a string, or an object with an own toString function property, then an exception is thrown.
position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number' the data will be written at the current position. See
pwrite(2) .
The callback will receive the arguments (err, written, string) where written specifies how many bytes the passed string required to be written. Bytes written is not necessarily the
same as string characters written. See Buffer.byteLength .
It is unsafe to use fs.write() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
On Windows, if the file descriptor is connected to the console (e.g. fd == 1 or stdout ) a string containing non-ASCII characters will not be rendered properly by default, regardless of the
encoding used. It is possible to configure the console to render UTF-8 properly by changing the active codepage with the chcp 65001 command. See the chcp docs for more details.
fs.writeFile(file, data[, options], callback)
file <string> | <Buffer> | <URL> | <integer> filename or file descriptor
data <string> | <Buffer> | <TypedArray> | <DataView> | <Object>
callback <Function>
err <Error>
When file is a filename, asynchronously writes data to the file, replacing the file if it already exists. data can be a string or a buffer.
When file is a file descriptor, the behavior is similar to calling fs.write() directly (which is recommended). See the notes below on using a file descriptor.
The encoding option is ignored if data is a buffer. If data is a normal object, it must have an own toString function property.
It is unsafe to use fs.writeFile() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.
Similarly to fs.readFile - fs.writeFile is a convenience method that performs multiple write calls internally to write the buffer passed to it. For performance sensitive code consider
using fs.createWriteStream() .
It is possible to use an <AbortSignal> to cancel an fs.writeFile() . Cancelation is "best effort", and some amount of data is likely still to be written.
import { writeFile } from 'fs';
import { Buffer } from 'buffer';

const controller = new AbortController();
const { signal } = controller;
const data = new Uint8Array(Buffer.from('Hello Node.js'));
writeFile('message.txt', data, { signal }, (err) => {
  // When a request is aborted - the callback is called with an AbortError
});
// When the request should be aborted
controller.abort();
Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.writeFile performs.
Using fs.writeFile() with file descriptors
The difference from directly calling fs.write() is that under some unusual conditions, fs.write() might write only part of the buffer and need to be retried to write the remaining data,
whereas fs.writeFile() retries until the data is entirely written (or an error occurs).
The implications of this are a common source of confusion. In the file descriptor case, the file is not replaced! The data is not necessarily written to the beginning of the file, and the file's
original data may remain before and/or after the newly written data.
For example, if fs.writeFile() is called twice in a row, first to write the string 'Hello' , then to write the string ', World' , the file would contain 'Hello, World' , and might contain
some of the file's original data (depending on the size of the original file, and the position of the file descriptor). If a file name had been used instead of a descriptor, the file would be
guaranteed to contain only ', World' .
fs.writev(fd, buffers[, position], callback)
fd <integer>
buffers <ArrayBufferView[]>
position <integer>
callback <Function>
err <Error>
bytesWritten <integer>
buffers <ArrayBufferView[]>
Write an array of ArrayBufferView s to the file specified by fd using writev() .
position is the offset from the beginning of the file where this data should be written. If typeof position !== 'number' , the data will be written at the current position.
The callback will be given three arguments: err , bytesWritten , and buffers . bytesWritten is how many bytes were written from buffers .
If this method is util.promisify() ed, it returns a promise for an Object with bytesWritten and buffers properties.
It is unsafe to use fs.writev() multiple times on the same file without waiting for the callback. For this scenario, use fs.createWriteStream() .
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
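A minimal sketch gathering two buffers into one write (the filename is hypothetical):
import { open, writev, close } from 'fs';

open('gather.txt', 'w', (err, fd) => {
  if (err) throw err;
  const buffers = [Buffer.from('Hello '), Buffer.from('Node.js\n')];
  // Both buffers are written with a single underlying writev call.
  writev(fd, buffers, (err, bytesWritten) => {
    if (err) throw err;
    console.log(`wrote ${bytesWritten} bytes`);
    close(fd, (err) => {
      if (err) throw err;
    });
  });
});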
Synchronous API
The synchronous APIs perform all operations synchronously, blocking the event loop until the operation completes or fails.
fs.accessSync(path[, mode])
path <string> | <Buffer> | <URL>
Synchronously tests a user's permissions for the file or directory specified by path . The mode argument is an optional integer that specifies the accessibility checks to be performed. Check
File access constants for possible values of mode . It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.W_OK | fs.constants.R_OK ).
If any of the accessibility checks fail, an Error will be thrown. Otherwise, the method will return undefined .
import { accessSync, constants } from 'fs';

try {
  accessSync('/etc/passwd', constants.R_OK | constants.W_OK);
  console.log('can read/write');
} catch (err) {
  console.error('no access!');
}
fs.appendFileSync(path, data[, options])
path <string> | <Buffer> | <URL> | <number> filename or file descriptor
data <string> | <Buffer>
Synchronously append data to a file, creating the file if it does not yet exist. data can be a string or a <Buffer> .
import { appendFileSync } from 'fs';

try {
appendFileSync('message.txt', 'data to append');
console.log('The "data to append" was appended to file!');
} catch (err) {
/* Handle the error */
}
The path may be specified as a numeric file descriptor that has been opened for appending (using fs.openSync() ). The file descriptor will not be closed automatically.
import { openSync, closeSync, appendFileSync } from 'fs';

let fd;
try {
fd = openSync('message.txt', 'a');
appendFileSync(fd, 'data to append', 'utf8');
} catch (err) {
/* Handle the error */
} finally {
if (fd !== undefined)
closeSync(fd);
}
fs.chmodSync(path, mode)
path <string> | <Buffer> | <URL>
mode <string> | <integer>
For detailed information, see the documentation of the asynchronous version of this API: fs.chmod() .
fs.chownSync(path, uid, gid)
path <string> | <Buffer> | <URL>
uid <integer>
gid <integer>
Synchronously changes owner and group of a file. Returns undefined . This is the synchronous version of fs.chown() .
fs.closeSync(fd)
fd <integer>
Closes the file descriptor. Returns undefined .
fs.copyFileSync(src, dest[, mode])
src <string> | <Buffer> | <URL> source filename to copy
dest <string> | <Buffer> | <URL> destination filename of the copy operation
mode <integer> modifiers for copy operation. Default: 0 .
Synchronously copies src to dest . By default, dest is overwritten if it already exists. Returns undefined . Node.js makes no guarantees about the atomicity of the copy operation. If an
error occurs after the destination file has been opened for writing, Node.js will attempt to remove the destination.
mode is an optional integer that specifies the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g.
fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE ).
fs.constants.COPYFILE_EXCL : The copy operation will fail if dest already exists.
fs.constants.COPYFILE_FICLONE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy mechanism is
used.
fs.constants.COPYFILE_FICLONE_FORCE : The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will fail.
fs.existsSync(path)
path <string> | <Buffer> | <URL>
Returns: <boolean>
For detailed information, see the documentation of the asynchronous version of this API: fs.exists() .
fs.exists() is deprecated, but fs.existsSync() is not. The callback parameter to fs.exists() accepts parameters that are inconsistent with other Node.js callbacks.
fs.existsSync() does not use a callback.
import { existsSync } from 'fs';

if (existsSync('/etc/passwd'))
console.log('The path exists.');
fs.fchmodSync(fd, mode)
fd <integer>
mode <string> | <integer>
Sets the permissions on the file. Returns undefined .
fs.fdatasyncSync(fd)
fd <integer>
Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for
details. Returns undefined .
fs.fstatSync(fd[, options])
fd <integer>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
Returns: <fs.Stats>
Retrieves the <fs.Stats> for the file descriptor.
fs.fsyncSync(fd)
fd <integer>
Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2)
documentation for more detail. Returns undefined .
fs.ftruncateSync(fd[, len])
fd <integer>
len <integer> Default: 0
For detailed information, see the documentation of the asynchronous version of this API: fs.ftruncate() .
fs.lchmodSync(path, mode)
path <string> | <Buffer> | <URL>
mode <integer>
Changes the permissions on a symbolic link. This method is only implemented on macOS. Returns undefined .
fs.lutimesSync(path, atime, mtime)
path <string> | <Buffer> | <URL>
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
Changes the permissions on a symbolic link. Returns undefined . This method is only implemented on macOS.
fs.lutimesSync(path, atime, mtime)
path <string> | <Buffer> | <URL>
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
Change the file system timestamps of the symbolic link referenced by path . Returns undefined , or throws an exception when parameters are incorrect or the operation fails. This is the synchronous version of fs.lutimes() .
fs.linkSync(existingPath, newPath)
existingPath <string> | <Buffer> | <URL>
newPath <string> | <Buffer> | <URL>
Creates a new link from the existingPath to the newPath . See the POSIX link(2) documentation for more detail. Returns undefined .
fs.lstatSync(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
throwIfNoEntry <boolean> Whether an exception will be thrown if no file system entry exists, rather than returning undefined . Default: true .
Returns: <fs.Stats>
Retrieves the <fs.Stats> for the symbolic link referred to by path .
fs.mkdirSync(path[, options])
path <string> | <Buffer> | <URL>
options <Object> | <integer>
recursive <boolean> Default: false
mode <string> | <integer> Not supported on Windows. Default: 0o777 .
Returns: <string> | <undefined>
Synchronously creates a directory. Returns undefined , or if recursive is true , the first directory path created. This is the synchronous version of fs.mkdir() .
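A minimal sketch of recursive creation (the path is a placeholder):
const { mkdirSync } = require('fs');

// Creates ./tmp/a/apple, creating ./tmp and ./tmp/a first if they do not exist.
mkdirSync('./tmp/a/apple', { recursive: true });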
fs.mkdtempSync(prefix[, options])
prefix <string>
Returns: <string>
For detailed information, see the documentation of the asynchronous version of this API: fs.mkdtemp() .
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.
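For example, to create a unique temporary directory inside the platform's tmp location (the 'foo-' prefix is a placeholder):
const { mkdtempSync } = require('fs');
const { tmpdir } = require('os');
const { join } = require('path');

// Returns a path such as /tmp/foo-itXde2 (platform-dependent).
const dir = mkdtempSync(join(tmpdir(), 'foo-'));
console.log(dir);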
fs.opendirSync(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
encoding <string> | <null> Default: 'utf8'
bufferSize <number> Number of directory entries that are buffered internally when reading from the directory. Higher values lead to better performance but higher memory
usage. Default: 32
Returns: <fs.Dir>
Creates an <fs.Dir> , which contains all further functions for reading from and cleaning up the directory.
The encoding option sets the encoding for the path while opening the directory and subsequent read operations.
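A minimal sketch of iterating a directory synchronously:
const { opendirSync } = require('fs');

const dir = opendirSync('./');
let dirent;
while ((dirent = dir.readSync()) !== null) {
  console.log(dirent.name);
}
dir.closeSync();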
fs.openSync(path[, flags[, mode]])
path <string> | <Buffer> | <URL>
flags <string> | <number> Default: 'r' . See support of file system flags .
mode <string> | <integer> Default: 0o666
Returns: <number>
Returns an integer representing the file descriptor. For detailed information, see the documentation of the asynchronous version of this API: fs.open() .
fs.readdirSync(path[, options])
path <string> | <Buffer> | <URL>
Returns: <string[]> | <Buffer[]> | <fs.Dirent[]>
Reads the contents of the directory. See the POSIX readdir(3) documentation for more detail.
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames returned. If the encoding is set to 'buffer' , the filenames returned will be passed as <Buffer> objects.
If options.withFileTypes is set to true , the result will contain <fs.Dirent> objects.
fs.readFileSync(path[, options])
path <string> | <Buffer> | <URL> | <integer> filename or file descriptor
For detailed information, see the documentation of the asynchronous version of this API: fs.readFile() .
If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.
Similar to fs.readFile() , when the path is a directory, the behavior of fs.readFileSync() is platform-specific.
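A minimal sketch (message.txt is a placeholder file):
const { readFileSync } = require('fs');

// With an encoding the result is a string; without one it is a Buffer.
const text = readFileSync('message.txt', 'utf8');
const buf = readFileSync('message.txt');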
fs.readlinkSync(path[, options])
path <string> | <Buffer> | <URL>
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path returned. If the
encoding is set to 'buffer' , the link path returned will be passed as a <Buffer> object.
fs.readSync(fd, buffer, offset, length, position)
fd <integer>
buffer <Buffer> | <TypedArray> | <DataView>
offset <integer>
length <integer>
position <integer> | <bigint>
Returns: <number>
Returns the number of bytes read.
For detailed information, see the documentation of the asynchronous version of this API: fs.read() .
fs.readSync(fd, buffer[, options])
fd <integer>
buffer <Buffer> | <TypedArray> | <DataView>
options <Object>
offset <integer> Default: 0
length <integer> Default: buffer.byteLength
position <integer> | <bigint> Default: null
Returns: <number>
Similar to the above fs.readSync function, this version takes an optional options object. If no options object is specified, it will default with the above values.
For detailed information, see the documentation of the asynchronous version of this API: fs.read() .
fs.readvSync(fd, buffers[, position])
fd <integer>
buffers <ArrayBufferView[]>
position <integer>
Returns: <number> The number of bytes read.
For detailed information, see the documentation of the asynchronous version of this API: fs.readv() .
fs.realpathSync(path[, options])
path <string> | <Buffer> | <URL>
options <string> | <Object>
Returns: <string> | <Buffer>
Returns the resolved pathname.
For detailed information, see the documentation of the asynchronous version of this API: fs.realpath() .
fs.realpathSync.native(path[, options])
path <string> | <Buffer> | <URL>
Synchronous realpath(3) .
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path returned. If the
encoding is set to 'buffer' , the path returned will be passed as a <Buffer> object.
On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.
fs.renameSync(oldPath, newPath)
oldPath <string> | <Buffer> | <URL>
newPath <string> | <Buffer> | <URL>
Renames the file from oldPath to newPath . Returns undefined .
fs.rmdirSync(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js retries the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .
recursive <boolean> If true , perform a recursive directory removal. In recursive mode, errors are not reported if path does not exist, and operations are retried on failure.
Default: false .
retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .
Using fs.rmdirSync() on a file (not a directory) results in an ENOENT error on Windows and an ENOTDIR error on POSIX.
Setting recursive to true results in behavior similar to the Unix command rm -rf : an error will not be raised for paths that do not exist, and paths that represent files will be deleted. The permissive behavior of the recursive option is deprecated; ENOTDIR and ENOENT will be thrown in the future.
fs.rmSync(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
force <boolean> When true , exceptions will be ignored if path does not exist. Default: false .
maxRetries <integer> If an EBUSY , EMFILE , ENFILE , ENOTEMPTY , or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay
milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true . Default: 0 .
recursive <boolean> If true , perform a recursive directory removal. In recursive mode operations are retried on failure. Default: false .
retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true . Default: 100 .
Synchronously removes files and directories (modeled on the standard POSIX rm utility). Returns undefined .
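A minimal sketch (the path ./tmp-dir is a placeholder):
const { rmSync } = require('fs');

// Removes ./tmp-dir and everything inside it; force suppresses the error
// raised when the path does not exist.
rmSync('./tmp-dir', { recursive: true, force: true });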
fs.statSync(path[, options])
path <string> | <Buffer> | <URL>
options <Object>
bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint . Default: false .
throwIfNoEntry <boolean> Whether an exception will be thrown if no file system entry exists, rather than returning undefined . Default: true .
Returns: <fs.Stats>
Retrieves the <fs.Stats> for the path.
fs.symlinkSync(target, path[, type])
target <string> | <Buffer> | <URL>
path <string> | <Buffer> | <URL>
type <string>
Returns undefined .
For detailed information, see the documentation of the asynchronous version of this API: fs.symlink() .
fs.truncateSync(path[, len])
path <string> | <Buffer> | <URL>
len <integer> Default: 0
Truncates the file. Returns undefined . A file descriptor can also be passed as the first argument. In this case, fs.ftruncateSync() is called.
Passing a file descriptor is deprecated and may result in an error being thrown in the future.
fs.unlinkSync(path)
path <string> | <Buffer> | <URL>
Synchronous unlink(2) . Returns undefined .
fs.utimesSync(path, atime, mtime)
path <string> | <Buffer> | <URL>
atime <number> | <string> | <Date>
mtime <number> | <string> | <Date>
Returns undefined .
For detailed information, see the documentation of the asynchronous version of this API: fs.utimes() .
fs.writeFileSync(file, data[, options])
file <string> | <Buffer> | <URL> | <integer> filename or file descriptor
data <string> | <Buffer> | <TypedArray> | <DataView> | <Object>
Returns undefined .
For detailed information, see the documentation of the asynchronous version of this API: fs.writeFile() .
fs.writeSync(fd, buffer[, offset[, length[, position]]])
fd <integer>
buffer <Buffer> | <TypedArray> | <DataView>
offset <integer>
length <integer>
position <integer>
Returns: <number> The number of bytes written.
For detailed information, see the documentation of the asynchronous version of this API: fs.write(fd, buffer...) .
fs.writeSync(fd, string[, position[, encoding]])
fd <integer>
string <string> | <Object>
position <integer>
encoding <string>
Returns: <number> The number of bytes written.
For detailed information, see the documentation of the asynchronous version of this API: fs.write(fd, string...) .
fs.writevSync(fd, buffers[, position])
fd <integer>
buffers <ArrayBufferView[]>
position <integer>
Returns: <number> The number of bytes written.
For detailed information, see the documentation of the asynchronous version of this API: fs.writev() .
Common Objects
The common objects are shared by all of the file system API variants (promise, callback, and synchronous).
Class: fs.Dir
A class representing a directory stream.
import { opendir } from 'fs/promises';

try {
  const dir = await opendir('./');
  for await (const dirent of dir)
    console.log(dirent.name);
} catch (err) {
  console.error(err);
}
dir.close()
Returns: <Promise>
Asynchronously close the directory's underlying resource handle. Subsequent reads will result in errors.
A promise is returned that will be resolved after the resource has been closed.
dir.close(callback)
callback <Function>
err <Error>
Asynchronously close the directory's underlying resource handle. Subsequent reads will result in errors.
The callback will be called after the resource handle has been closed.
dir.closeSync()
Synchronously close the directory's underlying resource handle. Subsequent reads will result in errors.
dir.path
<string>
The read-only path of this directory as was provided to fs.opendir() , fs.opendirSync() , or fsPromises.opendir() .
dir.read()
Returns: <Promise> containing <fs.Dirent> | <null>
A promise is returned that will be resolved with an <fs.Dirent> , or null if there are no more directory entries to read.
Directory entries returned by this function are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over
the directory might not be included in the iteration results.
dir.read(callback)
callback <Function>
err <Error>
After the read is completed, the callback will be called with an <fs.Dirent> , or null if there are no more directory entries to read.
Directory entries returned by this function are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over
the directory might not be included in the iteration results.
dir.readSync()
Returns: <fs.Dirent> | <null>
Synchronously read the next directory entry as an <fs.Dirent> . See the POSIX readdir(3) documentation for more detail.
dir[Symbol.asyncIterator]()
Returns: <AsyncIterator> of <fs.Dirent>
Asynchronously iterates over the directory until all entries have been read. Refer to the POSIX readdir(3) documentation for more detail.
Entries returned by the async iterator are always an <fs.Dirent> . The null case from dir.read() is handled internally.
Directory entries returned by this iterator are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over
the directory might not be included in the iteration results.
Class: fs.Dirent
A representation of a directory entry, which can be a file or a subdirectory within the directory, as returned by reading from an <fs.Dir> . The directory entry is a combination of the file name and file type.
Additionally, when fs.readdir() or fs.readdirSync() is called with the withFileTypes option set to true , the resulting array is filled with <fs.Dirent> objects, rather than strings or
<Buffer> s.
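A minimal sketch of the withFileTypes option:
const { readdirSync } = require('fs');

for (const dirent of readdirSync('.', { withFileTypes: true })) {
  console.log(dirent.name, dirent.isDirectory() ? '(directory)' : '(file)');
}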
dirent.isBlockDevice()
Returns: <boolean>
dirent.isCharacterDevice()
Returns: <boolean>
dirent.isDirectory()
Returns: <boolean>
dirent.isFIFO()
Returns: <boolean>
Returns true if the <fs.Dirent> object describes a first-in-first-out (FIFO) pipe.
dirent.isFile()
Returns: <boolean>
dirent.isSocket()
Returns: <boolean>
dirent.isSymbolicLink()
Returns: <boolean>
dirent.name
<string> | <Buffer>
The file name that this <fs.Dirent> object refers to. The type of this value is determined by the options.encoding passed to fs.readdir() or fs.readdirSync() .
Class: fs.FSWatcher
Extends <EventEmitter>
All <fs.FSWatcher> objects emit a 'change' event whenever a specific watched file is modified.
Event: 'change'
eventType <string> The type of change event that has occurred
filename <string> | <Buffer> The filename that changed (if relevant/available)
Emitted when something changes in a watched directory or file. See more details in fs.watch() .
The filename argument may not be provided depending on operating system support. If filename is provided, it will be provided as a <Buffer> if fs.watch() is called with its encoding
option set to 'buffer' , otherwise filename will be a UTF-8 string.
import { watch } from 'fs';
// Example when handled through fs.watch() listener
watch('./tmp', { encoding: 'buffer' }, (eventType, filename) => {
if (filename) {
console.log(filename);
// Prints: <Buffer ...>
}
});
Event: 'close'
Emitted when the watcher stops watching for changes. The closed <fs.FSWatcher> object is no longer usable in the event handler.
Event: 'error'
error <Error>
Emitted when an error occurs while watching the file. The errored <fs.FSWatcher> object is no longer usable in the event handler.
watcher.close()
Stop watching for changes on the given <fs.FSWatcher> . Once stopped, the <fs.FSWatcher> object is no longer usable.
watcher.ref()
Returns: <fs.FSWatcher>
When called, requests that the Node.js event loop not exit so long as the <fs.FSWatcher> is active. Calling watcher.ref() multiple times will have no effect.
By default, all <fs.FSWatcher> objects are "ref'ed", making it normally unnecessary to call watcher.ref() unless watcher.unref() had been called previously.
watcher.unref()
Returns: <fs.FSWatcher>
When called, the active <fs.FSWatcher> object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit
before the <fs.FSWatcher> object's callback is invoked. Calling watcher.unref() multiple times will have no effect.
Class: fs.StatWatcher
Extends <EventEmitter>
watcher.ref()
Returns: <fs.StatWatcher>
When called, requests that the Node.js event loop not exit so long as the <fs.StatWatcher> is active. Calling watcher.ref() multiple times will have no effect.
By default, all <fs.StatWatcher> objects are "ref'ed", making it normally unnecessary to call watcher.ref() unless watcher.unref() had been called previously.
watcher.unref()
Returns: <fs.StatWatcher>
When called, the active <fs.StatWatcher> object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit
before the <fs.StatWatcher> object's callback is invoked. Calling watcher.unref() multiple times will have no effect.
Class: fs.ReadStream
Extends: <stream.Readable>
Instances of <fs.ReadStream> are created and returned using the fs.createReadStream() function.
Event: 'close'
Emitted when the <fs.ReadStream> 's underlying file descriptor has been closed.
Event: 'open'
fd <integer> Integer file descriptor used by the <fs.ReadStream> .
Emitted when the <fs.ReadStream> 's file descriptor has been opened.
Event: 'ready'
Emitted when the <fs.ReadStream> is ready to be used.
readStream.bytesRead
<number>
The number of bytes that have been read so far.
readStream.path
<string> | <Buffer>
The path to the file the stream is reading from as specified in the first argument to fs.createReadStream() . If path is passed as a string, then readStream.path will be a string. If path is
passed as a <Buffer> , then readStream.path will be a <Buffer> .
readStream.pending
<boolean>
This property is true if the underlying file has not been opened yet, i.e. before the 'ready' event is emitted.
Class: fs.Stats
A <fs.Stats> object provides information about a file.
Objects returned from fs.stat() , fs.lstat() and fs.fstat() and their synchronous counterparts are of this type. If bigint in the options passed to those methods is true, the
numeric values will be bigint instead of number , and the object will contain additional nanosecond-precision properties suffixed with Ns .
Stats {
dev: 2114,
ino: 48064969,
mode: 33188,
nlink: 1,
uid: 85,
gid: 100,
rdev: 0,
size: 527,
blksize: 4096,
blocks: 8,
atimeMs: 1318289051000.1,
mtimeMs: 1318289051000.1,
ctimeMs: 1318289051000.1,
birthtimeMs: 1318289051000.1,
atime: Mon, 10 Oct 2011 23:24:11 GMT,
mtime: Mon, 10 Oct 2011 23:24:11 GMT,
ctime: Mon, 10 Oct 2011 23:24:11 GMT,
birthtime: Mon, 10 Oct 2011 23:24:11 GMT }
bigint version:
BigIntStats {
dev: 2114n,
ino: 48064969n,
mode: 33188n,
nlink: 1n,
uid: 85n,
gid: 100n,
rdev: 0n,
size: 527n,
blksize: 4096n,
blocks: 8n,
atimeMs: 1318289051000n,
mtimeMs: 1318289051000n,
ctimeMs: 1318289051000n,
birthtimeMs: 1318289051000n,
atimeNs: 1318289051000000000n,
mtimeNs: 1318289051000000000n,
ctimeNs: 1318289051000000000n,
birthtimeNs: 1318289051000000000n,
atime: Mon, 10 Oct 2011 23:24:11 GMT,
mtime: Mon, 10 Oct 2011 23:24:11 GMT,
ctime: Mon, 10 Oct 2011 23:24:11 GMT,
birthtime: Mon, 10 Oct 2011 23:24:11 GMT }
stats.isBlockDevice()
Returns: <boolean>
stats.isCharacterDevice()
Returns: <boolean>
stats.isDirectory()
Returns: <boolean>
If the <fs.Stats> object was obtained from fs.lstat() , this method will always return false . This is because fs.lstat() returns information about a symbolic link itself and not the
path it resolves to.
stats.isFIFO()
Returns: <boolean>
stats.isFile()
Returns: <boolean>
stats.isSocket()
Returns: <boolean>
stats.isSymbolicLink()
Returns: <boolean>
stats.dev
<number> | <bigint>
stats.ino
<number> | <bigint>
stats.mode
<number> | <bigint>
stats.nlink
<number> | <bigint>
The number of hard-links that exist for the file.
stats.uid
<number> | <bigint>
The numeric user identifier of the user that owns the file (POSIX).
stats.gid
<number> | <bigint>
The numeric group identifier of the group that owns the file (POSIX).
stats.rdev
<number> | <bigint>
stats.size
<number> | <bigint>
stats.blksize
<number> | <bigint>
stats.blocks
<number> | <bigint>
stats.atimeMs
<number> | <bigint>
The timestamp indicating the last time this file was accessed expressed in milliseconds since the POSIX Epoch.
stats.mtimeMs
<number> | <bigint>
The timestamp indicating the last time this file was modified expressed in milliseconds since the POSIX Epoch.
stats.ctimeMs
<number> | <bigint>
The timestamp indicating the last time the file status was changed expressed in milliseconds since the POSIX Epoch.
stats.birthtimeMs
<number> | <bigint>
The timestamp indicating the creation time of this file expressed in milliseconds since the POSIX Epoch.
stats.atimeNs
<bigint>
Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time this file was accessed expressed in nanoseconds since the
POSIX Epoch.
stats.mtimeNs
<bigint>
Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time this file was modified expressed in nanoseconds since the
POSIX Epoch.
stats.ctimeNs
<bigint>
Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time the file status was changed expressed in nanoseconds since
the POSIX Epoch.
stats.birthtimeNs
<bigint>
Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the creation time of this file expressed in nanoseconds since the POSIX
Epoch.
stats.atime
<Date>
The timestamp indicating the last time this file was accessed.
stats.mtime
<Date>
The timestamp indicating the last time this file was modified.
stats.ctime
<Date>
The timestamp indicating the last time the file status was changed.
stats.birthtime
<Date>
The timestamp indicating the creation time of this file.
Stat time values
The atimeNs , mtimeNs , ctimeNs , birthtimeNs properties are bigints that hold the corresponding times in nanoseconds. They are only present when bigint: true is passed into the
method that generates the object. Their precision is platform specific.
atime , mtime , ctime , and birthtime are Date object alternate representations of the various times. The Date and number values are not connected. Assigning a new number value, or
mutating the Date value, will not be reflected in the corresponding alternate representation.
atime "Access Time": Time when file data last accessed. Changed by the mknod(2) , utimes(2) , and read(2) system calls.
mtime "Modified Time": Time when file data last modified. Changed by the mknod(2) , utimes(2) , and write(2) system calls.
ctime "Change Time": Time when file status was last changed (inode data modification). Changed by the chmod(2) , chown(2) , link(2) , mknod(2) , rename(2) , unlink(2) ,
utimes(2) , read(2) , and write(2) system calls.
birthtime "Birth Time": Time of file creation. Set once when the file is created. On filesystems where birthtime is not available, this field may instead hold either the ctime or 1970-01-01T00:00Z (i.e., Unix epoch timestamp 0 ). This value may be greater than atime or mtime in this case. On Darwin and other FreeBSD variants, also set if the atime is explicitly set to an earlier value than the current birthtime using the utimes(2) system call.
Prior to Node.js 0.12, the ctime held the birthtime on Windows systems. As of 0.12, ctime is not "creation time", and on Unix systems, it never was.
Class: fs.WriteStream
Extends <stream.Writable>
Instances of <fs.WriteStream> are created and returned using the fs.createWriteStream() function.
Event: 'close'
Emitted when the <fs.WriteStream> 's underlying file descriptor has been closed.
Event: 'open'
fd <integer> Integer file descriptor used by the <fs.WriteStream> .
Event: 'ready'
Emitted when the <fs.WriteStream> is ready to be used.
writeStream.bytesWritten
<number>
The number of bytes written so far. Does not include data that is still queued for writing.
writeStream.path
The path to the file the stream is writing to as specified in the first argument to fs.createWriteStream() . If path is passed as a string, then writeStream.path will be a string. If path is
passed as a <Buffer> , then writeStream.path will be a <Buffer> .
writeStream.pending
<boolean>
This property is true if the underlying file has not been opened yet, i.e. before the 'ready' event is emitted.
fs.constants
<Object>
Returns an object containing commonly used constants for file system operations.
FS constants
The following constants are exported by fs.constants .
Example:
const { constants } = require('fs');

const {
O_RDWR,
O_CREAT,
O_EXCL
} = constants;
File access constants
The following constants are meant for use with fs.access() .
Constant Description
F_OK Flag indicating that the file is visible to the calling process. This is useful for determining if a file exists, but says nothing about rwx permissions. Default if no mode is specified.
R_OK Flag indicating that the file can be read by the calling process.
W_OK Flag indicating that the file can be written by the calling process.
X_OK Flag indicating that the file can be executed by the calling process. This has no effect on Windows (will behave like fs.constants.F_OK ).
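For illustration, a minimal sketch using the access constants (package.json is a placeholder file):
const { access, constants } = require('fs');

const file = 'package.json';

// Check that the file is readable and writable by the calling process.
access(file, constants.R_OK | constants.W_OK, (err) => {
  console.log(`${file} ${err ? 'is not' : 'is'} readable and writable`);
});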
File copy constants
The following constants are meant for use with fs.copyFile() .
Constant Description
COPYFILE_EXCL If present, the copy operation will fail with an error if the destination path already exists.
COPYFILE_FICLONE If present, the copy operation will attempt to create a copy-on-write reflink. If the underlying platform does not support copy-on-write, then a fallback copy
mechanism is used.
COPYFILE_FICLONE_FORCE If present, the copy operation will attempt to create a copy-on-write reflink. If the underlying platform does not support copy-on-write, then the operation will fail with an error.
File open constants
The following constants are meant for use with fs.open() .
Constant Description
O_CREAT Flag indicating to create the file if it does not already exist.
O_EXCL Flag indicating that opening a file should fail if the O_CREAT flag is set and the file already exists.
O_NOCTTY Flag indicating that if path identifies a terminal device, opening the path shall not cause that terminal to become the controlling terminal for the process (if the process
does not already have one).
O_TRUNC Flag indicating that if the file exists and is a regular file, and the file is opened successfully for write access, its length shall be truncated to zero.
O_APPEND Flag indicating that data will be appended to the end of the file.
O_DIRECTORY Flag indicating that the open should fail if the path is not a directory.
O_NOATIME Flag indicating reading accesses to the file system will no longer result in an update to the atime information associated with the file. This flag is available on Linux
operating systems only.
O_NOFOLLOW Flag indicating that the open should fail if the path is a symbolic link.
O_SYNC Flag indicating that the file is opened for synchronized I/O with write operations waiting for file integrity.
O_DSYNC Flag indicating that the file is opened for synchronized I/O with write operations waiting for data integrity.
O_SYMLINK Flag indicating to open the symbolic link itself rather than the resource it is pointing to.
O_DIRECT When set, an attempt will be made to minimize caching effects of file I/O.
O_NONBLOCK Flag indicating to open the file in nonblocking mode when possible.
UV_FS_O_FILEMAP When set, a memory file mapping is used to access the file. This flag is available on Windows operating systems only. On other operating systems, this flag is ignored.
File type constants
The following constants are meant for use with the <fs.Stats> object's mode property for determining a file's type.
Constant Description
S_IFMT Bit mask used to extract the file type code.
S_IFREG File type constant for a regular file.
S_IFDIR File type constant for a directory.
S_IFCHR File type constant for a character-oriented device file.
S_IFBLK File type constant for a block-oriented device file.
S_IFIFO File type constant for a FIFO/pipe.
S_IFLNK File type constant for a symbolic link.
S_IFSOCK File type constant for a socket.
File mode constants
The following constants are meant for use with the <fs.Stats> object's mode property for determining the access permissions for a file.
Constant Description
S_IRWXU File mode indicating readable, writable, and executable by owner.
S_IRUSR File mode indicating readable by owner.
S_IWUSR File mode indicating writable by owner.
S_IXUSR File mode indicating executable by owner.
S_IRWXG File mode indicating readable, writable, and executable by group.
S_IRGRP File mode indicating readable by group.
S_IWGRP File mode indicating writable by group.
S_IXGRP File mode indicating executable by group.
S_IRWXO File mode indicating readable, writable, and executable by others.
S_IROTH File mode indicating readable by others.
S_IWOTH File mode indicating writable by others.
S_IXOTH File mode indicating executable by others.
Notes
Ordering of callback and promise-based operations
Because they are executed asynchronously by the underlying thread pool, there is no guaranteed ordering when using either the callback or promise-based methods.
For example, the following is prone to error because the fs.stat() operation might complete before the fs.rename() operation:
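A minimal sketch of the race (the paths /tmp/hello and /tmp/world are placeholders):
const fs = require('fs');

// fs.stat() may run before fs.rename() has completed.
fs.rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  console.log('renamed complete');
});
fs.stat('/tmp/world', (err, stats) => {
  if (err) throw err;
  console.log(`stats: ${JSON.stringify(stats)}`);
});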
It is important to correctly order the operations by awaiting the results of one before invoking the other:
import { rename, stat } from 'fs/promises';
// Using CommonJS syntax: const { rename, stat } = require('fs/promises');

const from = '/tmp/hello'; // placeholder paths
const to = '/tmp/world';

try {
  await rename(from, to);
  const stats = await stat(to);
  console.log(`stats: ${JSON.stringify(stats)}`);
} catch (error) {
  console.error('there was an error:', error.message);
}
Or, when using the callback APIs, move the fs.stat() call into the callback of the fs.rename() operation:
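For example, a sketch with the stat nested inside the rename callback:
const fs = require('fs');

fs.rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  fs.stat('/tmp/world', (err, stats) => {
    if (err) throw err;
    console.log(`stats: ${JSON.stringify(stats)}`);
  });
});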
String paths
String form paths are interpreted as UTF-8 character sequences identifying the absolute or relative filename. Relative paths will be resolved relative to the current working directory as
determined by calling process.cwd() .
import { open } from 'fs/promises';

// Example using an absolute path on POSIX:
let fd;
try {
  fd = await open('/open/some/file.txt', 'r');
  // Do something with the file
} finally {
  await fd?.close();
}

// Example using a relative path on POSIX (relative to process.cwd()):
fd = undefined;
try {
  fd = await open('file.txt', 'r');
  // Do something with the file
} finally {
  await fd?.close();
}
Platform-specific considerations
On Windows, file: <URL> s with a host name convert to UNC paths, while file: <URL> s with drive letters convert to local absolute paths. file: <URL> s with neither a host name nor a drive letter will result in an error.
file: <URL> s with drive letters must use : as a separator just after the drive letter. Using another separator will result in an error.
On all other platforms, file: <URL> s with a host name are unsupported and will result in an error.
file: <URL> s with encoded slash or backslash characters will also result in an error, as the examples below show:
// On Windows
readFileSync(new URL('file:///C:/p/a/t/h/%2F'));
readFileSync(new URL('file:///C:/p/a/t/h/%2f'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
\ or / characters */
// On POSIX
readFileSync(new URL('file:///p/a/t/h/%2F'));
readFileSync(new URL('file:///p/a/t/h/%2f'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
/ characters */
// On Windows
readFileSync(new URL('file:///C:/path/%5C'));
readFileSync(new URL('file:///C:/path/%5c'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
\ or / characters */
Buffer paths
Paths specified using a <Buffer> are useful primarily on certain POSIX operating systems that treat file paths as opaque byte sequences. On such systems, it is possible for a single file path
to contain sub-sequences that use multiple character encodings. As with string paths, <Buffer> paths may be relative or absolute:
let fd;
try {
fd = await open(Buffer.from('/open/some/file.txt'), 'r');
// Do something with the file
} finally {
await fd.close();
}
File descriptors
On POSIX systems, for every process, the kernel maintains a table of currently open files and resources. Each open file is assigned a simple numeric identifier called a file descriptor. At the
system-level, all file system operations use these file descriptors to identify and track each specific file. Windows systems use a different but conceptually similar mechanism for tracking
resources. To simplify things for users, Node.js abstracts away the differences between operating systems and assigns all open files a numeric file descriptor.
The callback-based fs.open() and synchronous fs.openSync() methods open a file and allocate a new file descriptor. Once allocated, the file descriptor may be used to read data from, write data to, or request information about the file.
Operating systems limit the number of file descriptors that may be open at any given time so it is critical to close the descriptor when operations are completed. Failure to do so will result in
a memory leak that will eventually cause an application to crash.
const { open, fstat, close } = require('fs');

function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}

open('/open/some/file.txt', 'r', (err, fd) => {
  if (err) throw err;
  try {
    fstat(fd, (err, stat) => {
      if (err) {
        closeFd(fd);
        throw err;
      }
      // Use stat, then close the descriptor when done.
      closeFd(fd);
    });
  } catch (err) {
    closeFd(fd);
    throw err;
  }
});
The promise-based APIs use a <FileHandle> object in place of the numeric file descriptor. These objects are better managed by the system to ensure that resources are not leaked.
However, it is still required that they are closed when operations are completed:
let file;
try {
file = await open('/open/some/file.txt', 'r');
const stat = await file.stat();
// use stat
} finally {
await file.close();
}
Threadpool usage
All callback and promise-based file system APIs ( with the exception of fs.FSWatcher() ) use libuv's threadpool. This can have surprising and negative performance implications for some
applications. See the UV_THREADPOOL_SIZE documentation for more information.
File system flags
The following flags are available wherever the flag option takes a string:
'a' : Open file for appending. The file is created if it does not exist.
'ax' : Like 'a' but fails if the path exists.
'a+' : Open file for reading and appending. The file is created if it does not exist.
'ax+' : Like 'a+' but fails if the path exists.
'as' : Open file for appending in synchronous mode. The file is created if it does not exist.
'as+' : Open file for reading and appending in synchronous mode. The file is created if it does not exist.
'r' : Open file for reading. An exception occurs if the file does not exist.
'r+' : Open file for reading and writing. An exception occurs if the file does not exist.
'rs+' : Open file for reading and writing in synchronous mode. Instructs the operating system to bypass the local file system cache.
This is primarily useful for opening files on NFS mounts as it allows skipping the potentially stale local cache. It has a very real impact on I/O performance so using this flag is not
recommended unless it is needed.
This doesn't turn fs.open() or fsPromises.open() into a synchronous blocking call. If synchronous operation is desired, something like fs.openSync() should be used.
'w' : Open file for writing. The file is created (if it does not exist) or truncated (if it exists).
'wx' : Like 'w' but fails if the path exists.
'w+' : Open file for reading and writing. The file is created (if it does not exist) or truncated (if it exists).
'wx+' : Like 'w+' but fails if the path exists.
flag can also be a number as documented by open(2) ; commonly used constants are available from fs.constants . On Windows, flags are translated to their equivalent ones where
applicable, e.g. O_WRONLY to FILE_GENERIC_WRITE , or O_EXCL|O_CREAT to CREATE_NEW , as accepted by CreateFileW .
The exclusive flag 'x' ( O_EXCL flag in open(2) ) causes the operation to return an error if the path already exists. On POSIX, if the path is a symbolic link, using O_EXCL returns an error
even if the link is to a path that does not exist. The exclusive flag might not work with network file systems.
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
Modifying a file rather than replacing it may require the flag option to be set to 'r+' rather than the default 'w' .
The behavior of some flags is platform-specific. As such, opening a directory on macOS and Linux with the 'a+' flag, as in the example below, will return an error. In contrast, on Windows and FreeBSD, a file descriptor or a FileHandle will be returned.
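A minimal sketch of the platform difference ( <directory> is a placeholder path):
const fs = require('fs');

fs.open('<directory>', 'a+', (err, fd) => {
  // macOS and Linux: err is [Error: EISDIR: illegal operation on a directory, ...]
  // Windows and FreeBSD: err is null and fd is a file descriptor.
});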
On Windows, opening an existing hidden file using the 'w' flag (either through fs.open() or fs.writeFile() or fsPromises.open() ) will fail with EPERM . Existing hidden files can be
opened for writing with the 'r+' flag.
Child process
Stability: 2 - Stable
The child_process module provides the ability to spawn subprocesses in a manner that is similar, but not identical, to popen(3) . This capability is primarily provided by the
child_process.spawn() function:
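For example, a minimal sketch spawning ls and streaming its output:
const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});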
By default, pipes for stdin , stdout , and stderr are established between the parent Node.js process and the spawned subprocess. These pipes have limited (and platform-specific)
capacity. If the subprocess writes to stdout in excess of that limit without the output being captured, the subprocess blocks waiting for the pipe buffer to accept more data. This is identical to
the behavior of pipes in the shell. Use the { stdio: 'ignore' } option if the output will not be consumed.
The command lookup is performed using the options.env.PATH environment variable if env is in the options object. Otherwise, process.env.PATH is used.
On Windows, environment variables are case-insensitive. Node.js lexicographically sorts the env keys and uses the first one that case-insensitively matches. Only first (in lexicographic
order) entry will be passed to the subprocess. This might lead to issues on Windows when passing objects to the env option that have multiple variants of the same key, such as PATH and
Path .
The child_process.spawn() method spawns the child process asynchronously, without blocking the Node.js event loop. The child_process.spawnSync() function provides equivalent
functionality in a synchronous manner that blocks the event loop until the spawned process either exits or is terminated.
For convenience, the child_process module provides a handful of synchronous and asynchronous alternatives to child_process.spawn() and child_process.spawnSync() . Each of these
alternatives are implemented on top of child_process.spawn() or child_process.spawnSync() .
child_process.exec() : spawns a shell and runs a command within that shell, passing the stdout and stderr to a callback function when complete.
child_process.execFile() : similar to child_process.exec() except that it spawns the command directly without first spawning a shell by default.
child_process.fork() : spawns a new Node.js process and invokes a specified module with an IPC communication channel established that allows sending messages between parent
and child.
child_process.execSync() : a synchronous version of child_process.exec() that will block the Node.js event loop.
child_process.execFileSync() : a synchronous version of child_process.execFile() that will block the Node.js event loop.
For certain use cases, such as automating shell scripts, the synchronous counterparts may be more convenient. In many cases, however, the synchronous methods can have significant
impact on performance due to stalling the event loop while spawned processes complete.
Each of the methods returns a ChildProcess instance. These objects implement the Node.js EventEmitter API, allowing the parent process to register listener functions that are called
when certain events occur during the life cycle of the child process.
The child_process.exec() and child_process.execFile() methods additionally allow for an optional callback function to be specified that is invoked when the child process
terminates.
// On Windows Only...
const { spawn } = require('child_process');
const bat = spawn('cmd.exe', ['/c', 'my.bat']);
bat.stdout.on('data', (data) => {
console.log(data.toString());
});
// OR...
const { exec, spawn } = require('child_process');
exec('my.bat', (err, stdout, stderr) => {
if (err) {
console.error(err);
return;
}
console.log(stdout);
});
child_process.exec(command[, options][, callback])
command <string> The command to run, with space-separated arguments.
options <Object>
cwd <string> Current working directory of the child process. Default: process.cwd() .
maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer
and Unicode . Default: 1024 * 1024 .
uid <number> Sets the user identity of the process (see setuid(2) ).
gid <number> Sets the group identity of the process (see setgid(2) ).
windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .
Returns: <ChildProcess>
Spawns a shell then executes the command within that shell, buffering any generated output. The command string passed to the exec function is processed directly by the shell, and special characters (which vary based on the shell ) need to be dealt with accordingly:
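For example (the script path is a placeholder):
const { exec } = require('child_process');

exec('"/path/to/test file/test.sh" arg1 arg2');
// Double quotes are used so that the space in the path is not interpreted as
// a delimiter of multiple arguments.

exec('echo "The \\$HOME variable is $HOME"');
// The $HOME variable is escaped in the first instance, but not in the second.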
Never pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
If a callback function is provided, it is called with the arguments (error, stdout, stderr) . On success, error will be null . On error, error will be an instance of Error . The
error.code property will be the exit code of the process. By convention, any exit code other than 0 indicates an error. error.signal will be the signal that terminated the process.
The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings
to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer' , or an unrecognized character
encoding, Buffer objects will be passed to the callback instead.
const { exec } = require('child_process');
exec('cat *.js missing_file | wc -l', (error, stdout, stderr) => {
if (error) {
console.error(`exec error: ${error}`);
return;
}
console.log(`stdout: ${stdout}`);
console.error(`stderr: ${stderr}`);
});
If timeout is greater than 0 , the parent will send the signal identified by the killSignal property (the default is 'SIGTERM' ) if the child runs longer than timeout milliseconds.
Unlike the exec(3) POSIX system call, child_process.exec() does not replace the existing process and uses a shell to execute the command.
If this method is invoked as its util.promisify() ed version, it returns a Promise for an Object with stdout and stderr properties. The returned ChildProcess instance is attached to
the Promise as a child property. In case of an error (including any error resulting in an exit code other than 0), a rejected promise is returned, with the same error object given in the
callback, but with two additional properties stdout and stderr .
If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an
AbortError :
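A minimal sketch ( grep ssh is a placeholder command):
const { exec } = require('child_process');
const controller = new AbortController();
const { signal } = controller;
const child = exec('grep ssh', { signal }, (error) => {
  console.log(error); // an AbortError
});
controller.abort();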
child_process.execFile(file[, args][, options][, callback])
file <string> The name or path of the executable file to run.
args <string[]> List of string arguments.
options <Object>
cwd <string> Current working directory of the child process.
maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer
and Unicode . Default: 1024 * 1024 .
uid <number> Sets the user identity of the process (see setuid(2) ).
gid <number> Sets the group identity of the process (see setgid(2) ).
windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .
windowsVerbatimArguments <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. Default: false .
shell <boolean> | <string> If true , runs command inside of a shell. Uses '/bin/sh' on Unix, and process.env.ComSpec on Windows. A different shell can be specified as a
string. See Shell requirements and Default Windows shell . Default: false (no shell).
Returns: <ChildProcess>
The child_process.execFile() function is similar to child_process.exec() except that it does not spawn a shell by default. Rather, the specified executable file is spawned directly as a
new process making it slightly more efficient than child_process.exec() .
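For example, a minimal sketch running the node binary directly:
const { execFile } = require('child_process');
const child = execFile('node', ['--version'], (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  console.log(stdout);
});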
The same options as child_process.exec() are supported. Since a shell is not spawned, behaviors such as I/O redirection and file globbing are not supported.
The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings
to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer' , or an unrecognized character
encoding, Buffer objects will be passed to the callback instead.
If this method is invoked as its util.promisify() ed version, it returns a Promise for an Object with stdout and stderr properties. The returned ChildProcess instance is attached to
the Promise as a child property. In case of an error (including any error resulting in an exit code other than 0), a rejected promise is returned, with the same error object given in the
callback, but with two additional properties stdout and stderr .
If the shell option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an
AbortError :
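A minimal sketch:
const { execFile } = require('child_process');
const controller = new AbortController();
const { signal } = controller;
const child = execFile('node', ['--version'], { signal }, (error) => {
  console.log(error); // an AbortError
});
controller.abort();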
child_process.fork(modulePath[, args][, options])
modulePath <string> The module to run in the child.
args <string[]> List of string arguments.
options <Object>
cwd <string> Current working directory of the child process.
detached <boolean> Prepare child to run independently of its parent process. Specific behavior depends on the platform, see options.detached .
execArgv <string[]> List of string arguments passed to the executable. Default: process.execArgv .
gid <number> Sets the group identity of the process (see setgid(2) ).
serialization <string> Specify the kind of serialization used for sending messages between processes. Possible values are 'json' and 'advanced' . See Advanced
serialization for more details. Default: 'json' .
signal <AbortSignal> Allows closing the child process using an AbortSignal.
killSignal <string> The signal value to be used when the spawned process will be killed by the abort signal. Default: 'SIGTERM' .
silent <boolean> If true , stdin, stdout, and stderr of the child will be piped to the parent, otherwise they will be inherited from the parent, see the 'pipe' and 'inherit'
options for child_process.spawn() 's stdio for more details. Default: false .
stdio <Array> | <string> See child_process.spawn() 's stdio . When this option is provided, it overrides silent . If the array variant is used, it must contain exactly one item
with value 'ipc' or an error will be thrown. For instance [0, 1, 2, 'ipc'] .
uid <number> Sets the user identity of the process (see setuid(2) ).
windowsVerbatimArguments <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. Default: false .
Returns: <ChildProcess>
The child_process.fork() method is a special case of child_process.spawn() used specifically to spawn new Node.js processes. Like child_process.spawn() , a ChildProcess object is
returned. The returned ChildProcess will have an additional communication channel built-in that allows messages to be passed back and forth between the parent and child. See
subprocess.send() for details.
Keep in mind that spawned Node.js child processes are independent of the parent with exception of the IPC communication channel that is established between the two. Each process has its
own memory, with their own V8 instances. Because of the additional resource allocations required, spawning a large number of child Node.js processes is not recommended.
By default, child_process.fork() will spawn new Node.js instances using the process.execPath of the parent process. The execPath property in the options object allows for an
alternative execution path to be used.
Node.js processes launched with a custom execPath will communicate with the parent process using the file descriptor (fd) identified using the environment variable NODE_CHANNEL_FD on
the child process.
Unlike the fork(2) POSIX system call, child_process.fork() does not clone the current process.
The shell option available in child_process.spawn() is not supported by child_process.fork() and will be ignored if set.
If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an
AbortError :
if (process.argv[2] === 'child') {
  setTimeout(() => {
    console.log(`Hello from ${process.argv[2]}!`);
  }, 1_000);
} else {
  const { fork } = require('child_process');
  const controller = new AbortController();
  const { signal } = controller;
  const child = fork(__filename, ['child'], { signal });
  child.on('error', (err) => {
    // This will be called with err being an AbortError if the controller aborts
  });
  controller.abort(); // Stops the child process
}
child_process.spawn(command[, args][, options])
command <string> The command to run.
args <string[]> List of string arguments.
options <Object>
argv0 <string> Explicitly set the value of argv[0] sent to the child process. This will be set to command if not specified.
detached <boolean> Prepare child to run independently of its parent process. Specific behavior depends on the platform, see options.detached .
uid <number> Sets the user identity of the process (see setuid(2) ).
gid <number> Sets the group identity of the process (see setgid(2) ).
serialization <string> Specify the kind of serialization used for sending messages between processes. Possible values are 'json' and 'advanced' . See Advanced
serialization for more details. Default: 'json' .
shell <boolean> | <string> If true , runs command inside of a shell. Uses '/bin/sh' on Unix, and process.env.ComSpec on Windows. A different shell can be specified as a
string. See Shell requirements and Default Windows shell . Default: false (no shell).
windowsVerbatimArguments <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. This is set to true automatically when shell is specified and
is CMD. Default: false .
windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .
killSignal <string> The signal value to be used when the spawned process will be killed by the abort signal. Default: 'SIGTERM' .
Returns: <ChildProcess>
The child_process.spawn() method spawns a new process using the given command , with command-line arguments in args . If omitted, args defaults to an empty array.
If the shell option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
A third argument may be used to specify additional options, with these defaults:
const defaults = {
cwd: undefined,
env: process.env
};
Use cwd to specify the working directory from which the process is spawned. If not given, the default is to inherit the current working directory. If given, but the path does not exist, the child
process emits an ENOENT error and exits immediately. ENOENT is also emitted when the command does not exist.
Use env to specify environment variables that will be visible to the new process, the default is process.env .
Example of running ls -lh /usr , capturing stdout , stderr , and the exit code:
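A minimal sketch:
const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});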
Certain platforms (macOS, Linux) will use the value of argv[0] for the process title while others (Windows, SunOS) will use command .
Node.js currently overwrites argv[0] with process.execPath on startup, so process.argv[0] in a Node.js child process will not match the argv0 parameter passed to spawn from the parent; retrieve it with the process.argv0 property instead.
If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an
AbortError :
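A minimal sketch ( grep ssh is a placeholder command):
const { spawn } = require('child_process');
const controller = new AbortController();
const { signal } = controller;
const grep = spawn('grep', ['ssh'], { signal });
grep.on('error', (err) => {
  // This will be called with err being an AbortError if the controller aborts
});
controller.abort(); // Stops the child process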
options.detached
On Windows, setting options.detached to true makes it possible for the child process to continue running after the parent exits. The child will have its own console window. Once enabled
for a child process, it cannot be disabled.
On non-Windows platforms, if options.detached is set to true , the child process will be made the leader of a new process group and session. Child processes may continue running after
the parent exits regardless of whether they are detached or not. See setsid(2) for more information.
By default, the parent will wait for the detached child to exit. To prevent the parent from waiting for a given subprocess to exit, use the subprocess.unref() method. Doing so will cause
the parent's event loop to not include the child in its reference count, allowing the parent to exit independently of the child, unless there is an established IPC channel between the child and
the parent.
When using the detached option to start a long-running process, the process will not stay running in the background after the parent exits unless it is provided with a stdio configuration
that is not connected to the parent. If the parent's stdio is inherited, the child will remain attached to the controlling terminal.
Example of a long-running process, by detaching and also ignoring its parent stdio file descriptors, in order to ignore the parent's termination:
const { spawn } = require('child_process');

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore'
});

subprocess.unref();
Alternatively one can redirect the child process' output into files:
const fs = require('fs');
const { spawn } = require('child_process');
const out = fs.openSync('./out.log', 'a');
const err = fs.openSync('./out.log', 'a');

const subprocess = spawn('prg', [], {
  detached: true,
  stdio: ['ignore', out, err]
});

subprocess.unref();
options.stdio
The options.stdio option is used to configure the pipes that are established between the parent and child process. By default, the child's stdin, stdout, and stderr are redirected to
corresponding subprocess.stdin , subprocess.stdout , and subprocess.stderr streams on the ChildProcess object. This is equivalent to setting the options.stdio equal to ['pipe',
'pipe', 'pipe'] .
Otherwise, the value of options.stdio is an array where each index corresponds to an fd in the child, as the sketch after this list illustrates. The fds 0, 1, and 2 correspond to stdin, stdout, and stderr, respectively. Additional fds can be specified to create additional pipes between the parent and child. The value is one of the following:
1. 'pipe' : Create a pipe between the child process and the parent process. The parent end of the pipe is exposed to the parent as a property on the child_process object as
subprocess.stdio[fd] . Pipes created for fds 0, 1, and 2 are also available as subprocess.stdin , subprocess.stdout and subprocess.stderr , respectively.
2. 'overlapped' : Same as 'pipe' except that the FILE_FLAG_OVERLAPPED flag is set on the handle. This is necessary for overlapped I/O on the child process's stdio handles. See the docs
for more details. This is exactly the same as 'pipe' on non-Windows systems.
3. 'ipc' : Create an IPC channel for passing messages/file descriptors between parent and child. A ChildProcess may have at most one IPC stdio file descriptor. Setting this option
enables the subprocess.send() method. If the child is a Node.js process, the presence of an IPC channel will enable process.send() and process.disconnect() methods, as well as
'disconnect' and 'message' events within the child.
Accessing the IPC channel fd in any way other than process.send() or using the IPC channel with a child process that is not a Node.js instance is not supported.
4. 'ignore' : Instructs Node.js to ignore the fd in the child. While Node.js will always open fds 0, 1, and 2 for the processes it spawns, setting the fd to 'ignore' will cause Node.js to open
/dev/null and attach it to the child's fd.
5. 'inherit' : Pass through the corresponding stdio stream to/from the parent process. In the first three positions, this is equivalent to process.stdin , process.stdout , and
process.stderr , respectively. In any other position, equivalent to 'ignore' .
6. <Stream> object: Share a readable or writable stream that refers to a tty, file, socket, or a pipe with the child process. The stream's underlying file descriptor is duplicated in the child
process to the fd that corresponds to the index in the stdio array. The stream must have an underlying descriptor (file streams do not until the 'open' event has occurred).
7. Positive integer: The integer value is interpreted as a file descriptor that is currently open in the parent process. It is shared with the child process, similar to how <Stream> objects can
be shared. Passing sockets is not supported on Windows.
8. null , undefined : Use default value. For stdio fds 0, 1, and 2 (in other words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the default is 'ignore' .
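As a sketch of common configurations ( prg is a placeholder program name):
const { spawn } = require('child_process');

// Child will use parent's stdios.
spawn('prg', [], { stdio: 'inherit' });

// Spawn child sharing only stderr.
spawn('prg', [], { stdio: ['pipe', 'pipe', process.stderr] });

// Open an extra fd=4, to interact with programs presenting a startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });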
It is worth noting that when an IPC channel is established between the parent and child processes, and the child is a Node.js process, the child is launched with the IPC channel unreferenced (using
unref() ) until the child registers an event handler for the 'disconnect' event or the 'message' event. This allows the child to exit normally without the process being held open by the open IPC
channel.
On Unix-like operating systems, the child_process.spawn() method performs memory operations synchronously before decoupling the event loop from the child. Applications with a large
memory footprint may find frequent child_process.spawn() calls to be a bottleneck. For more information, see V8 issue 7381 .
Synchronous process creation
The child_process.spawnSync() , child_process.execSync() , and child_process.execFileSync() methods are synchronous and will block the Node.js event loop, pausing execution of any additional code until the spawned process exits.
Blocking calls like these are mostly useful for simplifying general-purpose scripting tasks and for simplifying the loading/processing of application configuration at startup.
child_process.execFileSync(file[, args][, options])
file <string> The name or path of the executable file to run.
args <string[]> List of string arguments.
options <Object>
cwd <string> Current working directory of the child process.
input <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. Supplying this value will override stdio[0] .
stdio <string> | <Array> Child's stdio configuration. stderr by default will be output to the parent process' stderr unless stdio is specified. Default: 'pipe' .
uid <number> Sets the user identity of the process (see setuid(2) ).
gid <number> Sets the group identity of the process (see setgid(2) ).
timeout <number> In milliseconds the maximum amount of time the process is allowed to run. Default: undefined .
killSignal <string> | <integer> The signal value to be used when the spawned process will be killed. Default: 'SIGTERM' .
maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated. See caveat at maxBuffer and Unicode . Default:
1024 * 1024 .
encoding <string> The encoding used for all stdio inputs and outputs. Default: 'buffer' .
windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .
shell <boolean> | <string> If true , runs command inside of a shell. Uses '/bin/sh' on Unix, and process.env.ComSpec on Windows. A different shell can be specified as a
string. See Shell requirements and Default Windows shell . Default: false (no shell).
The child_process.execFileSync() method is generally identical to child_process.execFile() with the exception that the method will not return until the child process has fully closed.
When a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited.
If the child process intercepts and handles the SIGTERM signal and does not exit, the parent process will still wait until the child process has exited.
If the process times out or has a non-zero exit code, this method will throw an Error that will include the full result of the underlying child_process.spawnSync() .
If the shell option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
child_process.execSync(command[, options])
command <string> The command to run.
options <Object>
cwd <string> Current working directory of the child process.
input <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. Supplying this value will override stdio[0] .
stdio <string> | <Array> Child's stdio configuration. stderr by default will be output to the parent process' stderr unless stdio is specified. Default: 'pipe' .
shell <string> Shell to execute the command with. See Shell requirements and Default Windows shell . Default: '/bin/sh' on Unix, process.env.ComSpec on Windows.
uid <number> Sets the user identity of the process. (See setuid(2) ).
gid <number> Sets the group identity of the process. (See setgid(2) ).
timeout <number> In milliseconds the maximum amount of time the process is allowed to run. Default: undefined .
killSignal <string> | <integer> The signal value to be used when the spawned process will be killed. Default: 'SIGTERM' .
maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer
and Unicode . Default: 1024 * 1024 .
encoding <string> The encoding used for all stdio inputs and outputs. Default: 'buffer' .
windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .
The child_process.execSync() method is generally identical to child_process.exec() with the exception that the method will not return until the child process has fully closed. When a
timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. If the child process intercepts and handles the SIGTERM signal and
doesn't exit, the parent process will wait until the child process has exited.
If the process times out or has a non-zero exit code, this method will throw. The Error object will contain the entire result from child_process.spawnSync() .
Never pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
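A minimal sketch of typical usage (the command is only illustrative):
const { execSync } = require('child_process');
// The command string is interpreted by the shell; throws on non-zero exit.
const out = execSync('echo hello', { encoding: 'utf8', timeout: 5000 });
console.log(out);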
child_process.spawnSync(command[, args][, options])
command <string> The command to run.
args <string[]> List of string arguments.
options <Object>
input <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. Supplying this value will override stdio[0] .
argv0 <string> Explicitly set the value of argv[0] sent to the child process. This will be set to command if not specified.
uid <number> Sets the user identity of the process (see setuid(2) ).
gid <number> Sets the group identity of the process (see setgid(2) ).
timeout <number> In milliseconds the maximum amount of time the process is allowed to run. Default: undefined .
killSignal <string> | <integer> The signal value to be used when the spawned process will be killed. Default: 'SIGTERM' .
maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer
and Unicode . Default: 1024 * 1024 .
encoding <string> The encoding used for all stdio inputs and outputs. Default: 'buffer' .
shell <boolean> | <string> If true , runs command inside of a shell. Uses '/bin/sh' on Unix, and process.env.ComSpec on Windows. A different shell can be specified as a
string. See Shell requirements and Default Windows shell . Default: false (no shell).
windowsVerbatimArguments <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. This is set to true automatically when shell is specified and
is CMD. Default: false .
windowsHide <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: false .
Returns: <Object>
pid <number> Pid of the child process.
output <Array> Array of results from stdio output.
stdout <Buffer> | <string> The contents of output[1] .
stderr <Buffer> | <string> The contents of output[2] .
status <number> | <null> The exit code of the subprocess, or null if the subprocess terminated due to a signal.
signal <string> | <null> The signal used to kill the subprocess, or null if the subprocess did not terminate due to a signal.
error <Error> The error object if the child process failed or timed out.
The child_process.spawnSync() method is generally identical to child_process.spawn() with the exception that the function will not return until the child process has fully closed. When
a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. If the process intercepts and handles the SIGTERM signal and
doesn't exit, the parent process will wait until the child process has exited.
If the shell option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
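A minimal sketch of the returned object (the command is only illustrative):
const { spawnSync } = require('child_process');
const result = spawnSync('node', ['--version'], { encoding: 'utf8' });
console.log(result.pid, result.status);
console.log(result.stdout.trim()); // e.g. 'v15.12.0'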
Class: ChildProcess
Extends: <EventEmitter>
Instances of ChildProcess are not intended to be created directly. Rather, use the child_process.spawn() , child_process.exec() , child_process.execFile() , or
child_process.fork() methods to create instances of ChildProcess .
Event: 'close'
code <number> The exit code if the child exited on its own.
signal <string> The signal by which the child process was terminated.
The 'close' event is emitted when the stdio streams of a child process have been closed. This is distinct from the 'exit' event, since multiple processes might share the same stdio
streams.
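A sketch showing 'close' alongside 'exit' (the ls command is illustrative):
const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});
ls.on('close', (code) => {
  console.log(`child process closed all stdio with code ${code}`);
});
ls.on('exit', (code) => {
  console.log(`child process exited with code ${code}`);
});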
Event: 'disconnect'
The 'disconnect' event is emitted after calling the subprocess.disconnect() method in the parent process or process.disconnect() in the child process. After disconnecting it is no longer
possible to send or receive messages, and the subprocess.connected property is false .
Event: 'error'
err <Error> The error.
The 'error' event is emitted whenever:
1. The process could not be spawned, or
2. The process could not be killed, or
3. Sending a message to the child process failed.
Event: 'exit'
code <number> The exit code if the child exited on its own.
signal <string> The signal by which the child process was terminated.
The 'exit' event is emitted after the child process ends. If the process exited, code is the final exit code of the process, otherwise null . If the process terminated due to receipt of a signal,
signal is the string name of the signal, otherwise null . One of the two will always be non- null .
When the 'exit' event is triggered, child process stdio streams might still be open.
Node.js establishes signal handlers for SIGINT and SIGTERM and Node.js processes will not terminate immediately due to receipt of those signals. Rather, Node.js will perform a sequence of
cleanup actions and then will re-raise the handled signal.
See waitpid(2) .
Event: 'message'
message <Object> A parsed JSON object or primitive value.
The 'message' event is triggered when a child process uses process.send() to send messages.
The message goes through serialization and parsing. The resulting message might not be the same as what is originally sent.
If the serialization option used when spawning the child process was set to 'advanced' , the message argument can contain data that JSON is not able to represent. See Advanced
serialization for more details.
Event: 'spawn'
The 'spawn' event is emitted once the child process has spawned successfully.
If emitted, the 'spawn' event comes before all other events and before any data is received via stdout or stderr .
The 'spawn' event will fire regardless of whether an error occurs within the spawned process. For example, if bash some-command spawns successfully, the 'spawn' event will fire, though
bash may fail to spawn some-command . This caveat also applies when using { shell: true } .
subprocess.channel
<Object> A pipe representing the IPC channel to the child process.
The subprocess.channel property is a reference to the child's IPC channel. If no IPC channel currently exists, this property is undefined .
subprocess.channel.ref()
This method makes the IPC channel keep the event loop of the parent process running if .unref() has been called before.
subprocess.channel.unref()
This method makes the IPC channel not keep the event loop of the parent process running, and lets it finish even while the channel is open.
subprocess.connected
<boolean> Set to false after subprocess.disconnect() is called.
The subprocess.connected property indicates whether it is still possible to send and receive messages from a child process. When subprocess.connected is false , it is no longer possible
to send or receive messages.
subprocess.disconnect()
Closes the IPC channel between parent and child, allowing the child to exit gracefully once there are no other connections keeping it alive. After calling this method the
subprocess.connected and process.connected properties in both the parent and child (respectively) will be set to false , and it will no longer be possible to pass messages between the
processes.
The 'disconnect' event will be emitted when there are no messages in the process of being received. This will most often be triggered immediately after calling subprocess.disconnect() .
When the child process is a Node.js instance (e.g. spawned using child_process.fork() ), the process.disconnect() method can be invoked within the child process to close the IPC
channel as well.
subprocess.exitCode
<integer>
The subprocess.exitCode property indicates the exit code of the child process. If the child process is still running, the field will be null .
subprocess.kill([signal])
signal <number> | <string>
Returns: <boolean>
The subprocess.kill() method sends a signal to the child process. If no argument is given, the process will be sent the 'SIGTERM' signal. See signal(7) for a list of available signals. This
function returns true if kill(2) succeeds, and false otherwise.
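For example (the grep command is illustrative):
const { spawn } = require('child_process');
const grep = spawn('grep', ['ssh']);
grep.on('close', (code, signal) => {
  console.log(`child process terminated due to receipt of signal ${signal}`);
});
// Send SIGHUP instead of the default 'SIGTERM'.
grep.kill('SIGHUP');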
The ChildProcess object may emit an 'error' event if the signal cannot be delivered. Sending a signal to a child process that has already exited is not an error but may have unforeseen
consequences. Specifically, if the process identifier (PID) has been reassigned to another process, the signal will be delivered to that process instead which can have unexpected results.
While the function is called kill , the signal delivered to the child process may not actually terminate the process.
On Linux, child processes of child processes will not be terminated when attempting to kill their parent. This is likely to happen when running a new process in a shell or with the use of the
shell option of ChildProcess :
'use strict';
const { spawn } = require('child_process');
// Illustrative: the shell starts a grandchild ('sleep 100') in the background;
// killing the shell does not terminate that grandchild.
const subprocess = spawn('sh', ['-c', 'sleep 100 & sleep 200'], { stdio: 'inherit' });
setTimeout(() => subprocess.kill(), 2000);
subprocess.killed
<boolean> Set to true after subprocess.kill() is used to successfully send a signal to the child process.
The subprocess.killed property indicates whether the child process successfully received a signal from subprocess.kill() . The killed property does not indicate that the child process
has been terminated.
subprocess.pid
<integer> | <undefined>
Returns the process identifier (PID) of the child process. If the child process fails to spawn due to errors, then the value is undefined and error is emitted.
subprocess.ref()
Calling subprocess.ref() after making a call to subprocess.unref() will restore the removed reference count for the child process, forcing the parent to wait for the child to exit before
exiting itself.
const { spawn } = require('child_process');
// 'child_program.js' is a placeholder for a detached child script.
const subprocess = spawn(process.argv[0], ['child_program.js'], { detached: true, stdio: 'ignore' });
subprocess.unref();
subprocess.ref();
subprocess.send(message[, sendHandle[, options]][, callback])
message <Object>
sendHandle <Handle>
options <Object> The options argument, if present, is an object used to parameterize the sending of certain types of handles. options supports the following properties:
keepOpen <boolean> A value that can be used when passing instances of net.Socket . When true , the socket is kept open in the sending process. Default: false .
callback <Function>
Returns: <boolean>
When an IPC channel has been established between the parent and child ( i.e. when using child_process.fork() ), the subprocess.send() method can be used to send messages to the
child process. When the child process is a Node.js instance, these messages can be received via the 'message' event.
The message goes through serialization and parsing. The resulting message might not be the same as what is originally sent.
const cp = require('child_process');
const n = cp.fork(`${__dirname}/sub.js`);
n.on('message', (m) => console.log('PARENT got message:', m));
// Causes the child to print: CHILD got message: { hello: 'world' }
n.send({ hello: 'world' });
And then the child script, 'sub.js' might look like this:
process.on('message', (m) => console.log('CHILD got message:', m));
// Causes the parent to print: PARENT got message: { foo: 'bar', baz: null }
process.send({ foo: 'bar', baz: NaN });
Child Node.js processes will have a process.send() method of their own that allows the child to send messages back to the parent.
There is a special case when sending a {cmd: 'NODE_foo'} message. Messages containing a NODE_ prefix in the cmd property are reserved for use within Node.js core and will not be
emitted in the child's 'message' event. Rather, such messages are emitted using the 'internalMessage' event and are consumed internally by Node.js. Applications should avoid using such
messages or listening for 'internalMessage' events as it is subject to change without notice.
The optional sendHandle argument that may be passed to subprocess.send() is for passing a TCP server or socket object to the child process. The child will receive the object as the
second argument passed to the callback function registered on the 'message' event. Any data that is received and buffered in the socket will not be sent to the child.
The optional callback is a function that is invoked after the message is sent but before the child may have received it. The function is called with a single argument: null on success, or an
Error object on failure.
If no callback function is provided and the message cannot be sent, an 'error' event will be emitted by the ChildProcess object. This can happen, for instance, when the child process
has already exited.
subprocess.send() will return false if the channel has closed or when the backlog of unsent messages exceeds a threshold that makes it unwise to send more. Otherwise, the method
returns true . The callback function can be used to implement flow control.
Once the server is shared between the parent and child, some connections can be handled by the parent and some by the child.
While the example above uses a server created using the net module, dgram module servers use exactly the same workflow with the exceptions of listening on a 'message' event instead
of 'connection' and using server.bind() instead of server.listen() . This is, however, currently only supported on Unix platforms.
const subprocess = require('child_process').fork('subprocess.js');
// Open up the server and send sockets to child. Use pauseOnConnect to prevent
// the sockets from being read before they are sent to the child process.
const server = require('net').createServer({ pauseOnConnect: true });
server.on('connection', (socket) => {
  subprocess.send('socket', socket);
});
server.listen(1337);
The subprocess.js would receive the socket handle as the second argument passed to the event callback function:
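A sketch of such a handler in the child:
process.on('message', (m, socket) => {
  if (m === 'socket' && socket) {
    // The connection may have closed while the handle was in transit.
    socket.end('handled by child');
  }
});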
Any 'message' handlers in the subprocess should verify that socket exists, as the connection may have been closed during the time it takes to send the connection to the child.
subprocess.signalCode
<string> | <null>
The subprocess.signalCode property indicates the signal received by the child process if any, else null .
subprocess.spawnargs
<Array>
The subprocess.spawnargs property represents the full list of command-line arguments the child process was launched with.
subprocess.spawnfile
<string>
The subprocess.spawnfile property indicates the executable file name of the child process that is launched.
For child_process.fork() , its value will be equal to process.execPath . For child_process.spawn() , its value will be the name of the executable file. For child_process.exec() , its value
will be the name of the shell in which the child process is launched.
subprocess.stderr
<stream.Readable>
A Readable Stream that represents the child process's stderr .
If the child was spawned with stdio[2] set to anything other than 'pipe' , then this will be null .
subprocess.stderr is an alias for subprocess.stdio[2] . Both properties will refer to the same value.
The subprocess.stderr property can be null if the child process could not be successfully spawned.
subprocess.stdin
<stream.Writable>
If a child process waits to read all of its input, the child will not continue until this stream has been closed via end() .
If the child was spawned with stdio[0] set to anything other than 'pipe' , then this will be null .
subprocess.stdin is an alias for subprocess.stdio[0] . Both properties will refer to the same value.
The subprocess.stdin property can be undefined if the child process could not be successfully spawned.
subprocess.stdio
<Array>
A sparse array of pipes to the child process, corresponding with positions in the stdio option passed to child_process.spawn() that have been set to the value 'pipe' .
subprocess.stdio[0] , subprocess.stdio[1] , and subprocess.stdio[2] are also available as subprocess.stdin , subprocess.stdout , and subprocess.stderr , respectively.
In the following example, only the child's fd 1 (stdout) is configured as a pipe, so only the parent's subprocess.stdio[1] is a stream, all other values in the array are null .
const assert = require('assert');
const fs = require('fs');
const { spawn } = require('child_process');
const subprocess = spawn('ls', {
  stdio: [0, 'pipe', fs.openSync('err.output', 'w')],
});
assert.strictEqual(subprocess.stdio[0], null);
assert.strictEqual(subprocess.stdio[0], subprocess.stdin);
assert(subprocess.stdout);
assert.strictEqual(subprocess.stdio[1], subprocess.stdout);
assert.strictEqual(subprocess.stdio[2], null);
assert.strictEqual(subprocess.stdio[2], subprocess.stderr);
The subprocess.stdio property can be undefined if the child process could not be successfully spawned.
subprocess.stdout
<stream.Readable>
A Readable Stream that represents the child process's stdout .
If the child was spawned with stdio[1] set to anything other than 'pipe' , then this will be null .
subprocess.stdout is an alias for subprocess.stdio[1] . Both properties will refer to the same value.
The subprocess.stdout property can be null if the child process could not be successfully spawned.
subprocess.unref()
By default, the parent will wait for the detached child to exit. To prevent the parent from waiting for a given subprocess to exit, use the subprocess.unref() method. Doing so will cause
the parent's event loop to not include the child in its reference count, allowing the parent to exit independently of the child, unless there is an established IPC channel between the child and
the parent.
const { spawn } = require('child_process');
// 'child_program.js' is a placeholder for a long-running child script.
const subprocess = spawn(process.argv[0], ['child_program.js'], { detached: true, stdio: 'ignore' });
subprocess.unref();
Shell requirements
The shell should understand the -c switch. If the shell is 'cmd.exe' , it should understand the /d /s /c switches and command-line parsing should be compatible.
Advanced serialization
Child processes support a serialization mechanism for IPC that is based on the serialization API of the v8 module , which is in turn based on the HTML structured clone algorithm . This is generally more
powerful and supports more built-in JavaScript object types, such as BigInt , Map and Set , ArrayBuffer and TypedArray , Buffer , Error , RegExp etc.
However, this format is not a full superset of JSON, and e.g. properties set on objects of such built-in types will not be passed on through the serialization step. Additionally, performance may
not be equivalent to that of JSON, depending on the structure of the passed data. Therefore, this feature requires opting in by setting the serialization option to 'advanced' when calling
child_process.spawn() or child_process.fork() .
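A minimal sketch of opting in (the child script name 'sub.js' is a placeholder):
const { fork } = require('child_process');
const child = fork('sub.js', { serialization: 'advanced' });
// With advanced serialization, values like Map and BigInt survive the IPC round trip.
child.send(new Map([['answer', 42n]]));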
Node.js v15.12.0 Documentation
Buffer
Stability: 2 - Stable
Buffer objects are used to represent a fixed-length sequence of bytes. Many Node.js APIs support Buffer s.
The Buffer class is a subclass of JavaScript's Uint8Array class and extends it with methods that cover additional use cases. Node.js APIs accept plain Uint8Array s wherever Buffer s are
supported as well.
The Buffer class is within the global scope, making it unlikely that one would need to ever use require('buffer').Buffer .
// Creates a Buffer containing the Latin-1 bytes [0x74, 0xe9, 0x73, 0x74].
const buf7 = Buffer.from('tést', 'latin1');
const buf = Buffer.from('hello world', 'utf8');
console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=
console.log(Buffer.from('fhqwhgads', 'utf8'));
// Prints: <Buffer 66 68 71 77 68 67 61 64 73>
console.log(Buffer.from('fhqwhgads', 'utf16le'));
// Prints: <Buffer 66 00 68 00 71 00 77 00 68 00 67 00 61 00 64 00 73 00>
'utf8' : Multi-byte encoded Unicode characters. Many web pages and other document formats use UTF-8 . This is the default character encoding. When decoding a Buffer into a
string that does not exclusively contain valid UTF-8 data, the Unicode replacement character U+FFFD � will be used to represent those errors.
'utf16le' : Multi-byte encoded Unicode characters. Unlike 'utf8' , each character in the string will be encoded using either 2 or 4 bytes. Node.js only supports the little-endian
variant of UTF-16 .
'latin1' : Latin-1 stands for ISO-8859-1 . This character encoding only supports the Unicode characters from U+0000 to U+00FF . Each character is encoded using a single byte.
Characters that do not fit into that range are truncated and will be mapped to characters in that range.
Converting a Buffer into a string using one of the above is referred to as decoding, and converting a string into a Buffer is referred to as encoding.
Node.js also supports the following binary-to-text encodings. For binary-to-text encodings, the naming convention is reversed: Converting a Buffer into a string is typically referred to as
encoding, and converting a string into a Buffer as decoding.
'base64' : Base64 encoding. When creating a Buffer from a string, this encoding will also correctly accept "URL and Filename Safe Alphabet" as specified in RFC 4648, Section 5 .
Whitespace characters such as spaces, tabs, and new lines contained within the base64-encoded string are ignored.
'base64url' : base64url encoding as specified in RFC 4648, Section 5 . When creating a Buffer from a string, this encoding will also correctly accept regular base64-encoded strings.
When encoding a Buffer to a string, this encoding will omit padding.
'hex' : Encode each byte as two hexadecimal characters. Data truncation may occur when decoding strings that do not exclusively contain valid hexadecimal characters. See below for an
example.
'ascii' : For 7-bit ASCII data only. When encoding a string into a Buffer , this is equivalent to using 'latin1' . When decoding a Buffer into a string, using this encoding will
additionally unset the highest bit of each byte before decoding as 'latin1' . Generally, there should be no reason to use this encoding, as 'utf8' (or, if the data is known to always be
ASCII-only, 'latin1' ) will be a better choice when encoding or decoding ASCII-only text. It is only provided for legacy compatibility.
'binary' : Alias for 'latin1' . See binary strings for more background on this topic. The name of this encoding can be very misleading, as all of the encodings listed here convert
between strings and binary data. For converting between strings and Buffer s, typically 'utf-8' is the right choice.
'ucs2' : Alias of 'utf16le' . UCS-2 used to refer to a variant of UTF-16 that did not support characters that had code points larger than U+FFFF. In Node.js, these code points are
always supported.
Buffer.from('1ag', 'hex');
// Prints <Buffer 1a>, data truncated when first non-hexadecimal value
// ('g') encountered.
Buffer.from('1a7g', 'hex');
// Prints <Buffer 1a>, data truncated when data ends in single digit ('7').
Buffer.from('1634', 'hex');
// Prints <Buffer 16 34>, all data represented.
Modern Web browsers follow the WHATWG Encoding Standard which aliases both 'latin1' and 'ISO-8859-1' to 'win-1252' . This means that while doing something like http.get() ,
if the returned charset is one of those listed in the WHATWG specification it is possible that the server actually returned 'win-1252' -encoded data, and using 'latin1' encoding may
incorrectly decode the characters.
In particular:
While TypedArray#slice() creates a copy of part of the TypedArray , Buffer#slice() creates a view over the existing Buffer without copying. This behavior can be surprising, and
only exists for legacy compatibility. TypedArray#subarray() can be used to achieve the behavior of Buffer#slice() on both Buffer s and other TypedArray s.
There are two ways to create new TypedArray instances from a Buffer :
Passing a Buffer to a TypedArray constructor will copy the Buffer 's contents, interpreted as an array of integers, and not as a byte sequence of the target type.
const buf = Buffer.from([1, 2, 3, 4]);
const uint32array = new Uint32Array(buf);
console.log(uint32array);
// Prints: Uint32Array(4) [ 1, 2, 3, 4 ]
Passing the Buffer 's underlying ArrayBuffer will create a TypedArray that shares its memory with the Buffer .
const buf = Buffer.from('hello', 'utf16le');
const uint16array = new Uint16Array(buf.buffer, buf.byteOffset, buf.length / 2);
console.log(uint16array);
// Prints: Uint16Array(5) [ 104, 101, 108, 108, 111 ]
It is possible to create a new Buffer that shares the same allocated memory as a TypedArray instance by using the TypedArray object’s .buffer property in the same way. Buffer.from()
behaves like new Uint8Array() in this context.
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
// Copies the contents of `arr`, entry by entry, truncated to bytes.
const buf1 = Buffer.from(arr);
// Shares memory with `arr`.
const buf2 = Buffer.from(arr.buffer);
console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 a0 0f>
arr[1] = 6000;
console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 70 17>
When creating a Buffer using a TypedArray 's .buffer , it is possible to use only a portion of the underlying ArrayBuffer by passing in byteOffset and length parameters.
const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);
console.log(buf.length);
// Prints: 16
The Buffer.from() and TypedArray.from() methods have different signatures and implementations. Specifically, the TypedArray variants accept a second argument that is a mapping function that
is invoked on every element of the typed array:
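const typedArray = Uint8Array.from([1, 2, 3, 4], (v) => v * 2);
console.log(typedArray);
// Prints: Uint8Array(4) [ 2, 4, 6, 8 ]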
The Buffer.from() method, however, does not support the use of a mapping function:
Buffer.from(array)
Buffer.from(buffer)
Buffer.from(arrayBuffer[, byteOffset[, length]])
Buffer.from(string[, encoding])
Buffers and iteration
Buffer instances can be iterated over using for..of syntax:
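const buf = Buffer.from([1, 2, 3]);
for (const b of buf) {
  console.log(b);
}
// Prints: 1 2 3 (one value per line)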
Additionally, the buf.values() , buf.keys() , and buf.entries() methods can be used to create iterators.
Class: Blob
Stability: 1 - Experimental
A Blob encapsulates immutable, raw data that can be safely shared across multiple worker threads.
new buffer.Blob([sources[, options]])
sources <string[]> | <ArrayBuffer[]> | <TypedArray[]> | <DataView[]> | <Blob[]> An array of string, ArrayBuffer , TypedArray , DataView , or Blob objects, or any mix of such objects, that will be stored within the Blob .
options <Object>
encoding <string> The character encoding to use for string sources. Default: 'utf8' .
type <string> The Blob content-type. The intent is for type to convey the MIME media type of the data, however no validation of the type format is performed.
<ArrayBuffer> , <TypedArray> , <DataView> , and <Buffer> sources are copied into the 'Blob' and can therefore be safely modified after the 'Blob' is created.
blob.arrayBuffer()
Returns: <Promise>
Returns a promise that fulfills with an <ArrayBuffer> containing a copy of the Blob data.
blob.size
The total size of the Blob in bytes.
blob.slice([start, [end, [type]]])
Creates and returns a new Blob containing a subset of this Blob object's data. The original Blob is not altered.
blob.text()
Returns: <Promise>
Returns a promise that resolves with the contents of the Blob decoded as a UTF-8 string.
blob.type
Type: <string>
For example, a Blob can be posted over multiple MessageChannel ports:
const { Blob } = require('buffer');
const blob = new Blob(['hello there']);
const mc1 = new MessageChannel();
const mc2 = new MessageChannel();
mc1.port2.postMessage(blob);
mc2.port2.postMessage(blob);
Class: Buffer
The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.
Static method: Buffer.alloc(size[, fill[, encoding]])
size <integer> The desired length of the new Buffer .
fill <string> | <Buffer> | <Uint8Array> | <integer> A value to pre-fill the new Buffer with. Default: 0 .
encoding <string> If fill is a string, this is its encoding. Default: 'utf8' .
Allocates a new Buffer of size bytes. If fill is undefined , the Buffer will be zero-filled.
const buf = Buffer.alloc(5);
console.log(buf);
// Prints: <Buffer 00 00 00 00 00>
If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding) .
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// Prints: <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
Calling Buffer.alloc() can be measurably slower than the alternative Buffer.allocUnsafe() but ensures that the newly created Buffer instance contents will never contain sensitive
data from previous allocations, including data that might not have been allocated for Buffer s.
Static method: Buffer.allocUnsafe(size)
size <integer> The desired length of the new Buffer .
Allocates a new Buffer of size bytes. If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_INVALID_ARG_VALUE is thrown.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use
Buffer.alloc() instead to initialize Buffer instances with zeroes.
const buf = Buffer.allocUnsafe(10);
console.log(buf);
// Prints (contents may vary): <Buffer a0 8b 28 3f 01 00 00 00 50 32>
buf.fill(0);
console.log(buf);
// Prints: <Buffer 00 00 00 00 00 00 00 00 00 00>
The Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using
Buffer.allocUnsafe() , Buffer.from(array) , Buffer.concat() , and the deprecated new Buffer(size) constructor only when size is less than or equal to Buffer.poolSize >> 1 (floor
of Buffer.poolSize divided by two).
Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill) . Specifically,
Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half
Buffer.poolSize . The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe() provides.
Static method: Buffer.allocUnsafeSlow(size)
size <integer> The desired length of the new Buffer .
Allocates a new Buffer of size bytes. If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_INVALID_ARG_VALUE is thrown. A zero-length Buffer is created if size
is 0.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0)
to initialize such Buffer instances with zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are sliced from a single pre-allocated Buffer . This allows applications to avoid the garbage
collection overhead of creating many individually allocated Buffer instances. This approach improves both performance and memory usage by eliminating the need to track and clean up as
many individual ArrayBuffer objects.
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer
instance using Buffer.allocUnsafeSlow() and then copying out the relevant bits.
// Need to keep around a few small chunks of memory.
const store = [];
socket.on('readable', () => {
  let data;
  while (null !== (data = socket.read())) {
    // Allocate for retained data.
    const sb = Buffer.allocUnsafeSlow(10);
    // Copy the data into the new allocation.
    data.copy(sb, 0, 0, 10);
    store.push(sb);
  }
});
Static method: Buffer.byteLength(string[, encoding])
string <string> | <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <SharedArrayBuffer> A value to calculate the length of.
encoding <string> If string is a string, this is its encoding. Default: 'utf8' .
Returns: <integer> The number of bytes contained within string .
Returns the byte length of a string when encoded using encoding . This is not the same as String.prototype.length , which does not account for the encoding that is used to convert the
string into bytes.
For 'base64' , 'base64url' , and 'hex' , this function assumes valid input. For strings that contain non-base64/hex-encoded data (e.g. whitespace), the return value might be greater than
the length of a Buffer created from the string.
When string is a Buffer / DataView / TypedArray / ArrayBuffer / SharedArrayBuffer , the byte length as reported by .byteLength is returned.
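For example:
const str = '\u00bd + \u00bc = \u00be';
console.log(`${str}: ${str.length} characters, ` +
  `${Buffer.byteLength(str, 'utf8')} bytes`);
// Prints: ½ + ¼ = ¾: 9 characters, 12 bytes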
Static method: Buffer.compare(buf1, buf2)
buf1 <Buffer> | <Uint8Array>
buf2 <Buffer> | <Uint8Array>
Returns: <integer> Either -1 , 0 , or 1 , depending on the result of the comparison. See buf.compare() for details.
Compares buf1 to buf2 , typically for the purpose of sorting arrays of Buffer instances. This is equivalent to calling buf1.compare(buf2) .
const buf1 = Buffer.from('1234');
const buf2 = Buffer.from('0123');
const arr = [buf1, buf2];
console.log(arr.sort(Buffer.compare));
// Prints: [ <Buffer 30 31 32 33>, <Buffer 31 32 33 34> ]
// (This result is equal to: [buf2, buf1].)
Static method: Buffer.concat(list[, totalLength])
list <Buffer[]> | <Uint8Array[]> List of Buffer or Uint8Array instances to concatenate.
totalLength <integer> Total length of the Buffer instances in list when concatenated.
Returns: <Buffer>
Returns a new Buffer which is the result of concatenating all the Buffer instances in the list together.
If the list has no items, or if the totalLength is 0, then a new zero-length Buffer is returned.
If totalLength is not provided, it is calculated from the Buffer instances in list by adding their lengths.
If totalLength is provided, it is coerced to an unsigned integer. If the combined length of the Buffer s in list exceeds totalLength , the result is truncated to totalLength .
// Create a single `Buffer` from a list of three `Buffer` instances.
const buf1 = Buffer.alloc(10);
const buf2 = Buffer.alloc(14);
const buf3 = Buffer.alloc(18);
const totalLength = buf1.length + buf2.length + buf3.length;
console.log(totalLength);
// Prints: 42
const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);
console.log(bufA);
// Prints: <Buffer 00 00 00 00 ...>
console.log(bufA.length);
// Prints: 42
Buffer.concat() may also use the internal Buffer pool like Buffer.allocUnsafe() does.
Static method: Buffer.from(array)
array <integer[]>
Allocates a new Buffer using an array of bytes in the range 0 – 255 . Array entries outside that range will be truncated to fit into it.
// Creates a new Buffer containing the UTF-8 bytes of the string 'buffer'.
const buf = Buffer.from([0x62, 0x75, 0x66, 0x66, 0x65, 0x72]);
A TypeError will be thrown if array is not an Array or another type appropriate for Buffer.from() variants.
Buffer.from(array) and Buffer.from(string) may also use the internal Buffer pool like Buffer.allocUnsafe() does.
Static method: Buffer.from(arrayBuffer[, byteOffset[, length]])
arrayBuffer <ArrayBuffer> | <SharedArrayBuffer> An ArrayBuffer , SharedArrayBuffer , for example the .buffer property of a TypedArray .
This creates a view of the ArrayBuffer without copying the underlying memory. For example, when passed a reference to the .buffer property of a TypedArray instance, the newly
created Buffer will share the same allocated memory as the TypedArray 's underlying ArrayBuffer .
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
// Shares memory with `arr`.
const buf = Buffer.from(arr.buffer);
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// Changing the original Uint16Array changes the Buffer also.
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>
The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer .
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2
A TypeError will be thrown if arrayBuffer is not an ArrayBuffer or a SharedArrayBuffer or another type appropriate for Buffer.from() variants.
It is important to remember that a backing ArrayBuffer can cover a range of memory that extends beyond the bounds of a TypedArray view. A new Buffer created using the buffer
property of a TypedArray may extend beyond the range of the TypedArray :
const arrA = Uint8Array.from([0x63, 0x64, 0x65, 0x66]); // 4 elements
const arrB = new Uint8Array(arrA.buffer, 1, 2); // 2 elements
console.log(arrA.buffer === arrB.buffer); // true
const buf = Buffer.from(arrB.buffer);
console.log(buf);
// Prints: <Buffer 63 64 65 66> (covers all of arrA's memory, not just arrB's range)
Static method: Buffer.from(buffer)
buffer <Buffer> | <Uint8Array> An existing Buffer or Uint8Array from which to copy data.
Copies the passed buffer data onto a new Buffer instance.
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// Prints: auffer
console.log(buf2.toString());
// Prints: buffer
A TypeError will be thrown if buffer is not a Buffer or another type appropriate for Buffer.from() variants.
Static method: Buffer.from(object[, offsetOrEncoding[, length]])
object <Object> An object supporting Symbol.toPrimitive or valueOf() .
offsetOrEncoding <integer> | <string> A byte-offset or encoding.
length <integer> A length.
For objects whose valueOf() function returns a value not strictly equal to object , returns Buffer.from(object.valueOf(), offsetOrEncoding, length) .
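For example, new String('...') has a valueOf() that returns a primitive string:
const buf = Buffer.from(new String('this is a test'));
console.log(buf);
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>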
A TypeError will be thrown if object does not have the mentioned methods or is not of another type appropriate for Buffer.from() variants.
Static method: Buffer.from(string[, encoding])
string <string> A string to encode.
encoding <string> The encoding of string . Default: 'utf8' .
Creates a new Buffer containing string . The encoding parameter identifies the character encoding to be used when converting string into bytes.
const buf1 = Buffer.from('this is a tést');
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf1.toString());
// Prints: this is a tést
console.log(buf2.toString());
// Prints: this is a tést
console.log(buf1.toString('latin1'));
// Prints: this is a tést
A TypeError will be thrown if string is not a string or another type appropriate for Buffer.from() variants.
Static method: Buffer.isBuffer(obj)
obj <Object>
Returns: <boolean>
Returns true if obj is a Buffer , false otherwise.
Buffer.isBuffer(Buffer.alloc(10)); // true
Buffer.isBuffer(Buffer.from('foo')); // true
Buffer.isBuffer('a string'); // false
Buffer.isBuffer([]); // false
Buffer.isBuffer(new Uint8Array(1024)); // false
Static method: Buffer.isEncoding(encoding)
encoding <string> A character encoding name to check.
Returns: <boolean>
Returns true if encoding is the name of a supported character encoding, or false otherwise.
console.log(Buffer.isEncoding('utf-8'));
// Prints: true
console.log(Buffer.isEncoding('hex'));
// Prints: true
console.log(Buffer.isEncoding('utf/8'));
// Prints: false
console.log(Buffer.isEncoding(''));
// Prints: false
Class property: Buffer.poolSize
<integer> Default: 8192
This is the size (in bytes) of pre-allocated internal Buffer instances used for pooling. This value may be modified.
buf[index]
index <integer>
The index operator [index] can be used to get and set the octet at position index in buf . The values refer to individual bytes, so the legal value range is between 0x00 and 0xFF (hex) or
0 and 255 (decimal).
This operator is inherited from Uint8Array , so its behavior on out-of-bounds access is the same as Uint8Array . In other words, buf[index] returns undefined when index is negative or
greater or equal to buf.length , and buf[index] = value does not modify the buffer if index is negative or >= buf.length .
// Copy an ASCII string into a `Buffer` one byte at a time.
// (This only works for ASCII-only strings. In general, one should use
// `Buffer.from()` to perform this conversion.)
const str = 'Node.js';
const buf = Buffer.allocUnsafe(str.length);
for (let i = 0; i < str.length; i++) {
  buf[i] = str.charCodeAt(i);
}
console.log(buf.toString('utf8'));
// Prints: Node.js
buf.buffer
<ArrayBuffer> The underlying ArrayBuffer object based on which this Buffer object is created.
This ArrayBuffer is not guaranteed to correspond exactly to the original Buffer . See the notes on buf.byteOffset for details.
buf.byteOffset
<integer> The byteOffset of the Buffer 's underlying ArrayBuffer object.
When setting byteOffset in Buffer.from(ArrayBuffer, byteOffset, length) , or sometimes when allocating a Buffer smaller than Buffer.poolSize , the buffer does not start from a
zero offset on the underlying ArrayBuffer .
This can cause problems when accessing the underlying ArrayBuffer directly using buf.buffer , as other parts of the ArrayBuffer may be unrelated to the Buffer object itself.
A common issue when creating a TypedArray object that shares its memory with a Buffer is that in this case one needs to specify the byteOffset correctly:
// Create a buffer smaller than `Buffer.poolSize`.
const nodeBuffer = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
// Use `byteOffset` so the view covers only the memory that backs `nodeBuffer`.
new Int8Array(nodeBuffer.buffer, nodeBuffer.byteOffset, nodeBuffer.length);
buf.compare(target[, targetStart[, targetEnd[, sourceStart[, sourceEnd]]]])
target <Buffer> | <Uint8Array> A Buffer or Uint8Array with which to compare buf .
targetStart <integer> The offset within target at which to begin comparison. Default: 0 .
targetEnd <integer> The offset within target at which to end comparison (not inclusive). Default: target.length .
sourceStart <integer> The offset within buf at which to begin comparison. Default: 0 .
sourceEnd <integer> The offset within buf at which to end comparison (not inclusive). Default: buf.length .
Returns: <integer>
Compares buf with target and returns a number indicating whether buf comes before, after, or is the same as target in sort order. Comparison is based on the actual sequence of bytes
in each Buffer .
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('BCD');
const buf3 = Buffer.from('ABCD');
console.log(buf1.compare(buf1));
// Prints: 0
console.log(buf1.compare(buf2));
// Prints: -1
console.log(buf1.compare(buf3));
// Prints: -1
console.log(buf2.compare(buf1));
// Prints: 1
console.log(buf2.compare(buf3));
// Prints: 1
console.log([buf1, buf2, buf3].sort(Buffer.compare));
// Prints: [ <Buffer 41 42 43>, <Buffer 41 42 43 44>, <Buffer 42 43 44> ]
// (This result is equal to: [buf1, buf3, buf2].)
The optional targetStart , targetEnd , sourceStart , and sourceEnd arguments can be used to limit the comparison to specific ranges within target and buf respectively.
const buf1 = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const buf2 = Buffer.from([5, 6, 7, 8, 9, 1, 2, 3, 4]);
console.log(buf1.compare(buf2, 5, 9, 0, 4));
// Prints: 0
console.log(buf1.compare(buf2, 0, 6, 4));
// Prints: -1
console.log(buf1.compare(buf2, 5, 6, 5));
// Prints: 1
ERR_OUT_OF_RANGE is thrown if targetStart < 0 , sourceStart < 0 , targetEnd > target.byteLength , or sourceEnd > source.byteLength .
buf.copy(target[, targetStart[, sourceStart[, sourceEnd]]])
target <Buffer> | <Uint8Array> A Buffer or Uint8Array to copy into.
targetStart <integer> The offset within target at which to begin writing. Default: 0 .
sourceStart <integer> The offset within buf from which to begin copying. Default: 0 .
sourceEnd <integer> The offset within buf at which to stop copying (not inclusive). Default: buf.length .
Returns: <integer> The number of bytes copied.
Copies data from a region of buf to a region in target , even if the target memory region overlaps with buf .
TypedArray#set() performs the same operation, and is available for all TypedArrays, including Node.js Buffer s, although it takes different function arguments.
const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');
for (let i = 0; i < 26; i++) buf1[i] = i + 97; // Fill buf1 with 'a'..'z'.
// Copy bytes 16-19 of `buf1` into `buf2`, starting at offset 8 of `buf2`.
buf1.copy(buf2, 8, 16, 20);
console.log(buf2.toString('ascii', 0, 25));
// Prints: !!!!!!!!qrst!!!!!!!!!!!!!
// Create a `Buffer` and copy data from one region to an overlapping region
// within the same `Buffer`.
const buf = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) buf[i] = i + 97; // 'a'..'z'
buf.copy(buf, 0, 4, 10);
console.log(buf.toString());
// Prints: efghijghijklmnopqrstuvwxyz
buf.entries()
Returns: <Iterator>
Creates and returns an iterator of [index, byte] pairs from the contents of buf .
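For example:
const buf = Buffer.from('abc');
for (const pair of buf.entries()) {
  console.log(pair);
}
// Prints: [ 0, 97 ], [ 1, 98 ], [ 2, 99 ] (one pair per line)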
buf.equals(otherBuffer)
otherBuffer <Buffer> | <Uint8Array> A Buffer or Uint8Array with which to compare buf .
Returns: <boolean>
Returns true if both buf and otherBuffer have exactly the same bytes, false otherwise. Equivalent to buf.compare(otherBuffer) === 0 .
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');
console.log(buf1.equals(buf2));
// Prints: true
console.log(buf1.equals(buf3));
// Prints: false
buf.fill(value[, offset[, end]][, encoding])
value <string> | <Buffer> | <Uint8Array> | <integer> The value with which to fill buf .
offset <integer> Number of bytes to skip before starting to fill buf . Default: 0 .
end <integer> Where to stop filling buf (not inclusive). Default: buf.length .
encoding <string> The encoding for value if value is a string. Default: 'utf8' .
Returns: <Buffer> A reference to buf .
Fills buf with the specified value . If the offset and end are not given, the entire buf will be filled:
const b = Buffer.allocUnsafe(50).fill('h');
console.log(b.toString());
// Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
value is coerced to a uint32 value if it is not a string, Buffer , or integer. If the resulting integer is greater than 255 (decimal), buf will be filled with value & 255 .
If the final write of a fill() operation falls on a multi-byte character, then only the bytes of that character that fit into buf are written:
console.log(Buffer.allocUnsafe(5).fill('\u0222'));
// Prints: <Buffer c8 a2 c8 a2 c8>
If value contains invalid characters, it is truncated; if no valid fill data remains, an exception is thrown:
const buf = Buffer.allocUnsafe(5);
console.log(buf.fill('a'));
// Prints: <Buffer 61 61 61 61 61>
console.log(buf.fill('aazz', 'hex'));
// Prints: <Buffer aa aa aa aa aa>
console.log(buf.fill('zz', 'hex'));
// Throws an exception.
buf.includes(value[, byteOffset][, encoding])
value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
byteOffset <integer> Where to begin searching in buf . If negative, then offset is calculated from the end of buf . Default: 0 .
encoding <string> If value is a string, this is its encoding. Default: 'utf8' .
Returns: <boolean> true if value was found in buf , false otherwise.
Equivalent to buf.indexOf() !== -1 .
const buf = Buffer.from('this is a buffer');
console.log(buf.includes('this'));
// Prints: true
console.log(buf.includes('is'));
// Prints: true
console.log(buf.includes(Buffer.from('a buffer')));
// Prints: true
console.log(buf.includes(97));
// Prints: true (97 is the decimal ASCII value for 'a')
console.log(buf.includes(Buffer.from('a buffer example')));
// Prints: false
console.log(buf.includes(Buffer.from('a buffer example').slice(0, 8)));
// Prints: true
console.log(buf.includes('this', 4));
// Prints: false
buf.indexOf(value[, byteOffset][, encoding])
value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
byteOffset <integer> Where to begin searching in buf . If negative, then offset is calculated from the end of buf . Default: 0 .
encoding <string> If value is a string, this is the encoding used to determine the binary representation of the string that will be searched for in buf . Default: 'utf8' .
Returns: <integer> The index of the first occurrence of value in buf , or -1 if buf does not contain value .
If value is:
a Buffer or Uint8Array , value will be used in its entirety. To compare a partial Buffer , use buf.slice() .
a number, value will be interpreted as an unsigned 8-bit integer value between 0 and 255 .
const buf = Buffer.from('this is a buffer');
console.log(buf.indexOf('this'));
// Prints: 0
console.log(buf.indexOf('is'));
// Prints: 2
console.log(buf.indexOf(Buffer.from('a buffer')));
// Prints: 8
console.log(buf.indexOf(97));
// Prints: 8 (97 is the decimal ASCII value for 'a')
console.log(buf.indexOf(Buffer.from('a buffer example')));
// Prints: -1
console.log(buf.indexOf(Buffer.from('a buffer example').slice(0, 8)));
// Prints: 8
const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');
console.log(utf16Buffer.indexOf('\u03a3', 0, 'utf16le'));
// Prints: 4
console.log(utf16Buffer.indexOf('\u03a3', -4, 'utf16le'));
// Prints: 6
If value is not a string, number, or Buffer , this method will throw a TypeError . If value is a number, it will be coerced to a valid byte value, an integer between 0 and 255.
If byteOffset is not a number, it will be coerced to a number. If the result of coercion is NaN or 0 , then the entire buffer will be searched. This behavior matches String#indexOf() .
const b = Buffer.from('abcdef');
// 99.9 coerces to the byte value 99 ('c'): prints 2.
console.log(b.indexOf(99.9));
// A byteOffset of {} coerces to NaN, so the whole buffer is searched: prints 1.
console.log(b.indexOf('b', {}));
If value is an empty string or empty Buffer and byteOffset is less than buf.length , byteOffset will be returned. If value is empty and byteOffset is at least buf.length ,
buf.length will be returned.
buf.keys()
Returns: <Iterator>
Creates and returns an iterator of buf keys (indices).
buf.lastIndexOf(value[, byteOffset][, encoding])
value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
byteOffset <integer> Where to begin searching in buf . If negative, then offset is calculated from the end of buf . Default: buf.length - 1 .
encoding <string> If value is a string, this is the encoding used to determine the binary representation of the string that will be searched for in buf . Default: 'utf8' .
Returns: <integer> The index of the last occurrence of value in buf , or -1 if buf does not contain value .
Identical to buf.indexOf() , except the last occurrence of value is found rather than the first occurrence.
const buf = Buffer.from('this buffer is a buffer');
console.log(buf.lastIndexOf('this'));
// Prints: 0
console.log(buf.lastIndexOf('buffer'));
// Prints: 17
console.log(buf.lastIndexOf(Buffer.from('buffer')));
// Prints: 17
console.log(buf.lastIndexOf(97));
// Prints: 15 (97 is the decimal ASCII value for 'a')
console.log(buf.lastIndexOf(Buffer.from('yolo')));
// Prints: -1
console.log(buf.lastIndexOf('buffer', 5));
// Prints: 5
console.log(buf.lastIndexOf('buffer', 4));
// Prints: -1
If value is not a string, number, or Buffer , this method will throw a TypeError . If value is a number, it will be coerced to a valid byte value, an integer between 0 and 255.
If byteOffset is not a number, it will be coerced to a number. Any arguments that coerce to NaN , like {} or undefined , will search the whole buffer. This behavior matches
String#lastIndexOf() .
const b = Buffer.from('abcdef');
// undefined coerces to NaN, so the whole buffer is searched: prints 1.
console.log(b.lastIndexOf('b', undefined));
// null coerces to 0, so only index 0 is considered: prints -1.
console.log(b.lastIndexOf('b', null));
buf.length
<integer>
Returns the number of bytes in buf .
const buf = Buffer.alloc(1234);
console.log(buf.length);
// Prints: 1234
buf.write('some string', 0, 'utf8');
console.log(buf.length);
// Prints: 1234
buf.parent
Stability: 0 - Deprecated: Use buf.buffer instead.
buf.readBigInt64BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .
Returns: <bigint>
Reads a signed, big-endian 64-bit integer from buf at the specified offset .
Integers read from a Buffer are interpreted as two's complement signed values.
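For example:
const buf = Buffer.from([0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);
console.log(buf.readBigInt64BE(0));
// Prints: 72623859790382856n (0x0102030405060708)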
buf.readBigInt64LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .
Returns: <bigint>
Reads a signed, little-endian 64-bit integer from buf at the specified offset .
Integers read from a Buffer are interpreted as two's complement signed values.
buf.readBigUInt64BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .
Returns: <bigint>
Reads an unsigned, big-endian 64-bit integer from buf at the specified offset .
const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);
console.log(buf.readBigUInt64BE(0));
// Prints: 4294967295n
buf.readBigUInt64LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .
Returns: <bigint>
Reads an unsigned, little-endian 64-bit integer from buf at the specified offset .
const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);
console.log(buf.readBigUInt64LE(0));
// Prints: 18446744069414584320n
buf.readDoubleBE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 8 . Default: 0 .
Returns: <number>
Reads a 64-bit, big-endian double from buf at the specified offset .
const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
console.log(buf.readDoubleBE(0));
// Prints: 8.20788039913184e-304
buf.readDoubleLE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 8 . Default: 0 .
Returns: <number>
Reads a 64-bit, little-endian double from buf at the specified offset .
const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
console.log(buf.readDoubleLE(0));
// Prints: 5.447603722011605e-270
console.log(buf.readDoubleLE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readFloatBE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Returns: <number>
Reads a 32-bit, big-endian float from buf at the specified offset .
const buf = Buffer.from([1, 2, 3, 4]);
console.log(buf.readFloatBE(0));
// Prints: 2.387939260590663e-38
buf.readFloatLE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Returns: <number>
Reads a 32-bit, little-endian float from buf at the specified offset .
const buf = Buffer.from([1, 2, 3, 4]);
console.log(buf.readFloatLE(0));
// Prints: 1.539989614439558e-36
console.log(buf.readFloatLE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readInt8([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 1 . Default: 0 .
Returns: <integer>
Reads a signed 8-bit integer from buf at the specified offset .
Integers read from a Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([-1, 5]);
console.log(buf.readInt8(0));
// Prints: -1
console.log(buf.readInt8(1));
// Prints: 5
console.log(buf.readInt8(2));
// Throws ERR_OUT_OF_RANGE.
buf.readInt16BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .
Returns: <integer>
Reads a signed, big-endian 16-bit integer from buf at the specified offset .
Integers read from a Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([0, 5]);
console.log(buf.readInt16BE(0));
// Prints: 5
buf.readInt16LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .
Returns: <integer>
Reads a signed, little-endian 16-bit integer from buf at the specified offset .
Integers read from a Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([0, 5]);
console.log(buf.readInt16LE(0));
// Prints: 1280
console.log(buf.readInt16LE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readInt32BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Returns: <integer>
Reads a signed, big-endian 32-bit integer from buf at the specified offset .
Integers read from a Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([0, 0, 0, 5]);
console.log(buf.readInt32BE(0));
// Prints: 5
buf.readInt32LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Returns: <integer>
Reads a signed, little-endian 32-bit integer from buf at the specified offset .
Integers read from a Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([0, 0, 0, 5]);
console.log(buf.readInt32LE(0));
// Prints: 83886080
console.log(buf.readInt32LE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readIntBE(offset, byteLength)
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength .
byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6 .
Returns: <integer>
Reads byteLength number of bytes from buf at the specified offset and interprets the result as a big-endian, two's complement signed value supporting up to 48 bits of accuracy.
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
console.log(buf.readIntBE(1, 0).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readIntLE(offset, byteLength)
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength .
byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6 .
Returns: <integer>
Reads byteLength number of bytes from buf at the specified offset and interprets the result as a little-endian, two's complement signed value supporting up to 48 bits of accuracy.
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readIntLE(0, 6).toString(16));
// Prints: -546f87a9cbee
buf.readUInt8([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 1 . Default: 0 .
Returns: <integer>
Reads an unsigned 8-bit integer from buf at the specified offset .
const buf = Buffer.from([1, -2]);
console.log(buf.readUInt8(0));
// Prints: 1
console.log(buf.readUInt8(1));
// Prints: 254
console.log(buf.readUInt8(2));
// Throws ERR_OUT_OF_RANGE.
buf.readUInt16BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .
Returns: <integer>
Reads an unsigned, big-endian 16-bit integer from buf at the specified offset .
const buf = Buffer.from([0x12, 0x34, 0x56]);
console.log(buf.readUInt16BE(0).toString(16));
// Prints: 1234
console.log(buf.readUInt16BE(1).toString(16));
// Prints: 3456
buf.readUInt16LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .
Returns: <integer>
Reads an unsigned, little-endian 16-bit integer from buf at the specified offset .
const buf = Buffer.from([0x12, 0x34, 0x56]);
console.log(buf.readUInt16LE(0).toString(16));
// Prints: 3412
console.log(buf.readUInt16LE(1).toString(16));
// Prints: 5634
console.log(buf.readUInt16LE(2).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readUInt32BE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Returns: <integer>
Reads an unsigned, big-endian 32-bit integer from buf at the specified offset .
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);
console.log(buf.readUInt32BE(0).toString(16));
// Prints: 12345678
buf.readUInt32LE([offset])
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Returns: <integer>
Reads an unsigned, little-endian 32-bit integer from buf at the specified offset .
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);
console.log(buf.readUInt32LE(0).toString(16));
// Prints: 78563412
console.log(buf.readUInt32LE(1).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readUIntBE(offset, byteLength)
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength .
byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6 .
Returns: <integer>
Reads byteLength number of bytes from buf at the specified offset and interprets the result as an unsigned big-endian integer supporting up to 48 bits of accuracy.
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readUIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readUIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readUIntLE(offset, byteLength)
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength .
byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6 .
Returns: <integer>
Reads byteLength number of bytes from buf at the specified offset and interprets the result as an unsigned, little-endian integer supporting up to 48 bits of accuracy.
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readUIntLE(0, 6).toString(16));
// Prints: ab9078563412
buf.subarray([start[, end]])
start <integer> Where the new Buffer will start. Default: 0 .
end <integer> Where the new Buffer will end (not inclusive). Default: buf.length .
Returns: <Buffer>
Returns a new Buffer that references the same memory as the original, but offset and cropped by the start and end indices.
Specifying end greater than buf.length will return the same result as that of end equal to buf.length .
Modifying the new Buffer slice will modify the memory in the original Buffer because the allocated memory of the two objects overlap.
// Create a `Buffer` with the ASCII alphabet, take a slice, and modify one byte
// from the original `Buffer`.
const buf1 = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) buf1[i] = i + 97; // 97 is ASCII 'a'.
const buf2 = buf1.subarray(0, 3);
console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: abc
buf1[0] = 33;
console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: !bc
Specifying negative indexes causes the slice to be generated relative to the end of buf rather than the beginning.
const buf = Buffer.from('buffer');
console.log(buf.subarray(-6, -1).toString());
// Prints: buffe
// (Equivalent to buf.subarray(0, 5).)
console.log(buf.subarray(-6, -2).toString());
// Prints: buff
// (Equivalent to buf.subarray(0, 4).)
console.log(buf.subarray(-5, -2).toString());
// Prints: uff
// (Equivalent to buf.subarray(1, 4).)
buf.slice([start[, end]])
start <integer> Where the new Buffer will start. Default: 0 .
end <integer> Where the new Buffer will end (not inclusive). Default: buf.length .
Returns: <Buffer>
Returns a new Buffer that references the same memory as the original, but offset and cropped by the start and end indices.
This method is not compatible with Uint8Array.prototype.slice() , which is a superclass of Buffer . To copy the slice, use Uint8Array.prototype.slice() .
const buf = Buffer.from('buffer');
const copiedBuf = Uint8Array.prototype.slice.call(buf);
copiedBuf[0]++;
console.log(copiedBuf.toString());
// Prints: cuffer
console.log(buf.toString());
// Prints: buffer
buf.swap16()
Returns: <Buffer> A reference to buf .
Interprets buf as an array of unsigned 16-bit integers and swaps the byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 2.
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap16();
console.log(buf1);
// Prints: <Buffer 02 01 04 03 06 05 08 07>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap16();
// Throws ERR_INVALID_BUFFER_SIZE.
One convenient use of buf.swap16() is to perform a fast in-place conversion between UTF-16 little-endian and UTF-16 big-endian:
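const buf = Buffer.from('This is little-endian UTF-16', 'utf16le');
buf.swap16(); // Convert to big-endian UTF-16 text.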
buf.swap32()
Returns: <Buffer> A reference to buf .
Interprets buf as an array of unsigned 32-bit integers and swaps the byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 4.
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap32();
console.log(buf1);
// Prints: <Buffer 04 03 02 01 08 07 06 05>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap32();
// Throws ERR_INVALID_BUFFER_SIZE.
buf.swap64()
Returns: <Buffer> A reference to buf .
Interprets buf as an array of 64-bit numbers and swaps byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 8.
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap64();
console.log(buf1);
// Prints: <Buffer 08 07 06 05 04 03 02 01>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap64();
// Throws ERR_INVALID_BUFFER_SIZE.
buf.toJSON()
Returns: <Object>
Returns a JSON representation of buf . JSON.stringify() implicitly calls this function when stringifying a Buffer instance.
Buffer.from() accepts objects in the format returned from this method. In particular, Buffer.from(buf.toJSON()) works like Buffer.from(buf) .
const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5]);
const json = JSON.stringify(buf);
console.log(json);
// Prints: {"type":"Buffer","data":[1,2,3,4,5]}
const copy = JSON.parse(json, (key, value) =>
  value && value.type === 'Buffer' ? Buffer.from(value) : value);
console.log(copy);
// Prints: <Buffer 01 02 03 04 05>
buf.toString([encoding[, start[, end]]])
encoding <string> The character encoding to use. Default: 'utf8' .
start <integer> The byte offset to start decoding at. Default: 0 .
end <integer> The byte offset to stop decoding at (not inclusive). Default: buf.length .
Returns: <string>
Decodes buf to a string according to the specified character encoding in encoding . start and end may be passed to decode only a subset of buf .
If encoding is 'utf8' and a byte sequence in the input is not valid UTF-8, then each invalid byte is replaced with the replacement character U+FFFD .
The maximum length of a string instance (in UTF-16 code units) is available as buffer.constants.MAX_STRING_LENGTH .
const buf1 = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) {
  buf1[i] = i + 97; // 97 is the decimal ASCII value for 'a'.
}
console.log(buf1.toString('utf8'));
// Prints: abcdefghijklmnopqrstuvwxyz
console.log(buf1.toString('utf8', 0, 5));
// Prints: abcde
const buf2 = Buffer.from('tést');
console.log(buf2.toString('hex'));
// Prints: 74c3a97374
console.log(buf2.toString('utf8', 0, 3));
// Prints: té
console.log(buf2.toString(undefined, 0, 3));
// Prints: té
buf.values()
Returns: <Iterator>
Creates and returns an iterator for buf values (bytes). This function is called automatically when a Buffer is used in a for..of statement.
const buf = Buffer.from('buffer');
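For example, iterating the bytes logs each value in turn:
for (const value of buf.values()) {
  console.log(value);
}
// Prints: 98 117 102 102 101 114 (one value per line)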
buf.write(string[, offset[, length]][, encoding])
string <string> String to write to buf .
offset <integer> Number of bytes to skip before starting to write string . Default: 0 .
length <integer> Maximum number of bytes to write (written bytes will not exceed buf.length - offset ). Default: buf.length - offset .
encoding <string> The character encoding of string . Default: 'utf8' .
Returns: <integer> Number of bytes written.
Writes string to buf at offset according to the character encoding in encoding . The length parameter is the number of bytes to write. If buf did not contain enough space to fit the
entire string, only part of string will be written. However, partially encoded characters will not be written.
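For example (the byte count assumes UTF-8, the default encoding, where each fraction character takes two bytes):
const buf = Buffer.alloc(256);
const len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
// Prints: 12 bytes: ½ + ¼ = ¾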
buf.writeBigInt64BE(value[, offset])
value <bigint> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .
Writes value to buf at the specified offset as big-endian. value is interpreted and written as a two's complement signed integer.
const buf = Buffer.allocUnsafe(8);
buf.writeBigInt64BE(0x0102030405060708n, 0);
console.log(buf);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf.writeBigInt64LE(value[, offset])
value <bigint> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .
Writes value to buf at the specified offset as little-endian. value is interpreted and written as a two's complement signed integer.
const buf = Buffer.allocUnsafe(8);
buf.writeBigInt64LE(0x0102030405060708n, 0);
console.log(buf);
// Prints: <Buffer 08 07 06 05 04 03 02 01>
buf.writeBigUInt64BE(value[, offset])
value <bigint> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .
Writes value to buf at the specified offset as big-endian.
const buf = Buffer.allocUnsafe(8);
buf.writeBigUInt64BE(0xdecafafecacefaden, 0);
console.log(buf);
// Prints: <Buffer de ca fa fe ca ce fa de>
buf.writeBigUInt64LE(value[, offset])
value <bigint> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8 . Default: 0 .
Writes value to buf at the specified offset as little-endian.
const buf = Buffer.allocUnsafe(8);
buf.writeBigUInt64LE(0xdecafafecacefaden, 0);
console.log(buf);
// Prints: <Buffer de fa ce ca fe fa ca de>
buf.writeDoubleBE(value[, offset])
value <number> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 8 . Default: 0 .
Writes value to buf at the specified offset as big-endian. The value must be a JavaScript number. Behavior is undefined when value is anything other than a JavaScript number.
const buf = Buffer.allocUnsafe(8);
buf.writeDoubleBE(123.456, 0);
console.log(buf);
// Prints: <Buffer 40 5e dd 2f 1a 9f be 77>
buf.writeDoubleLE(value[, offset])
value <number> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 8 . Default: 0 .
Writes value to buf at the specified offset as little-endian. The value must be a JavaScript number. Behavior is undefined when value is anything other than a JavaScript number.
const buf = Buffer.allocUnsafe(8);
buf.writeDoubleLE(123.456, 0);
console.log(buf);
// Prints: <Buffer 77 be 9f 1a 2f dd 5e 40>
buf.writeFloatBE(value[, offset])
value <number> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Writes value to buf at the specified offset as big-endian. Behavior is undefined when value is anything other than a JavaScript number.
const buf = Buffer.allocUnsafe(4);
buf.writeFloatBE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer 4f 4a fe bb>
buf.writeFloatLE(value[, offset])
value <number> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Writes value to buf at the specified offset as little-endian. Behavior is undefined when value is anything other than a JavaScript number.
const buf = Buffer.allocUnsafe(4);
buf.writeFloatLE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer bb fe 4a 4f>
buf.writeInt8(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 1 . Default: 0 .
Writes value to buf at the specified offset . value must be a valid signed 8-bit integer. Behavior is undefined when value is anything other than a signed 8-bit integer.
const buf = Buffer.allocUnsafe(2);
buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);
console.log(buf);
// Prints: <Buffer 02 fe>
buf.writeInt16BE(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .
Writes value to buf at the specified offset as big-endian. The value must be a valid signed 16-bit integer. Behavior is undefined when value is anything other than a signed 16-bit
integer.
const buf = Buffer.allocUnsafe(2);
buf.writeInt16BE(0x0102, 0);
console.log(buf);
// Prints: <Buffer 01 02>
buf.writeInt16LE(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .
Writes value to buf at the specified offset as little-endian. The value must be a valid signed 16-bit integer. Behavior is undefined when value is anything other than a signed 16-bit
integer.
const buf = Buffer.allocUnsafe(2);
buf.writeInt16LE(0x0304, 0);
console.log(buf);
// Prints: <Buffer 04 03>
buf.writeInt32BE(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Writes value to buf at the specified offset as big-endian. The value must be a valid signed 32-bit integer. Behavior is undefined when value is anything other than a signed 32-bit
integer.
const buf = Buffer.allocUnsafe(4);
buf.writeInt32BE(0x01020304, 0);
console.log(buf);
// Prints: <Buffer 01 02 03 04>
buf.writeInt32LE(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Writes value to buf at the specified offset as little-endian. The value must be a valid signed 32-bit integer. Behavior is undefined when value is anything other than a signed 32-bit
integer.
const buf = Buffer.allocUnsafe(4);
buf.writeInt32LE(0x05060708, 0);
console.log(buf);
// Prints: <Buffer 08 07 06 05>
buf.writeIntBE(value, offset, byteLength)
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength .
byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6 .
Writes byteLength bytes of value to buf at the specified offset as big-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than a signed
integer.
const buf = Buffer.allocUnsafe(6);
buf.writeIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
buf.writeIntLE(value, offset, byteLength)
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength .
byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6 .
Writes byteLength bytes of value to buf at the specified offset as little-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than a signed
integer.
const buf = Buffer.allocUnsafe(6);
buf.writeIntLE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>
buf.writeUInt8(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 1 . Default: 0 .
Writes value to buf at the specified offset . value must be a valid unsigned 8-bit integer. Behavior is undefined when value is anything other than an unsigned 8-bit integer.
const buf = Buffer.allocUnsafe(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);
console.log(buf);
// Prints: <Buffer 03 04 23 42>
buf.writeUInt16BE(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .
Writes value to buf at the specified offset as big-endian. The value must be a valid unsigned 16-bit integer. Behavior is undefined when value is anything other than an unsigned 16-
bit integer.
const buf = Buffer.allocUnsafe(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer de ad be ef>
buf.writeUInt16LE(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2 . Default: 0 .
Writes value to buf at the specified offset as little-endian. The value must be a valid unsigned 16-bit integer. Behavior is undefined when value is anything other than an unsigned 16-
bit integer.
const buf = Buffer.allocUnsafe(4);
buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer ad de ef be>
buf.writeUInt32BE(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Writes value to buf at the specified offset as big-endian. The value must be a valid unsigned 32-bit integer. Behavior is undefined when value is anything other than an unsigned 32-
bit integer.
const buf = Buffer.allocUnsafe(4);
buf.writeUInt32BE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer fe ed fa ce>
buf.writeUInt32LE(value[, offset])
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4 . Default: 0 .
Writes value to buf at the specified offset as little-endian. The value must be a valid unsigned 32-bit integer. Behavior is undefined when value is anything other than an unsigned 32-
bit integer.
const buf = Buffer.allocUnsafe(4);
buf.writeUInt32LE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer ce fa ed fe>
buf.writeUIntBE(value, offset, byteLength)
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength .
byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6 .
Writes byteLength bytes of value to buf at the specified offset as big-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than an unsigned
integer.
const buf = Buffer.allocUnsafe(6);
buf.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
buf.writeUIntLE(value, offset, byteLength)
value <integer> Number to be written to buf .
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength .
byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6 .
Writes byteLength bytes of value to buf at the specified offset as little-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than an
unsigned integer.
const buf = Buffer.allocUnsafe(6);
buf.writeUIntLE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>
new Buffer(array)
array <integer[]> An array of bytes to copy from.
See Buffer.from(array) .
new Buffer(buffer)
buffer <Buffer> | <Uint8Array> An existing Buffer or Uint8Array from which to copy data.
See Buffer.from(buffer) .
new Buffer(size)
size <integer> The desired length of the new Buffer .
See Buffer.alloc() and Buffer.allocUnsafe() . Since Node.js 8.0.0, this variant of the constructor returns zero-filled memory, equivalent to Buffer.alloc(size) ; in earlier versions it behaved like Buffer.allocUnsafe(size) and could expose stale memory.
buffer.INSPECT_MAX_BYTES
<integer> Default: 50
Returns the maximum number of bytes that will be returned when buf.inspect() is called. This can be overridden by user modules. See util.inspect() for more details on buf.inspect() behavior.
buffer.kMaxLength
<integer> The largest size allowed for a single Buffer instance. An alias for buffer.constants.MAX_LENGTH .
buffer.transcode(source, fromEnc, toEnc)
source <Buffer> | <Uint8Array> A Buffer or Uint8Array instance.
fromEnc <string> The current encoding.
toEnc <string> The target encoding.
Returns: <Buffer>
Re-encodes the given Buffer or Uint8Array instance from one character encoding to another. Returns a new Buffer instance.
Throws if the fromEnc or toEnc specify invalid character encodings or if conversion from fromEnc to toEnc is not permitted.
Encodings supported by buffer.transcode() are: 'ascii' , 'utf8' , 'utf16le' , 'ucs2' , 'latin1' , and 'binary' .
The transcoding process will use substitution characters if a given byte sequence cannot be adequately represented in the target encoding. For instance:
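const buffer = require('buffer');
const newBuf = buffer.transcode(Buffer.from('€'), 'utf8', 'ascii');
console.log(newBuf.toString('ascii'));
// Prints: '?'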
Because the Euro ( € ) sign is not representable in US-ASCII, it is replaced with ? in the transcoded Buffer .
Class: SlowBuffer
new SlowBuffer(size)
See Buffer.allocUnsafeSlow() .
Buffer constants
buffer.constants.MAX_LENGTH
<integer> The largest size allowed for a single Buffer instance.
On 32-bit architectures, this value currently is 2^30 - 1 (~1 GB). On 64-bit architectures, this value currently is 2^31 - 1 (~2 GB).
buffer.constants.MAX_STRING_LENGTH
<integer> The largest length allowed for a single string instance.
Represents the largest length that a string primitive can have, counted in UTF-16 code units.
Passing a number as the first argument to Buffer() (e.g. new Buffer(10) ) allocates a new Buffer object of the specified size. Prior to Node.js 8.0.0, the memory allocated for such
Buffer instances is not initialized and can contain sensitive data. Such Buffer instances must be subsequently initialized by using either buf.fill(0) or by writing to the entire Buffer
before reading data from the Buffer . While this behavior is intentional to improve performance, development experience has demonstrated that a more explicit distinction is required
between creating a fast-but-uninitialized Buffer versus creating a slower-but-safer Buffer . Since Node.js 8.0.0, Buffer(num) and new Buffer(num) return a Buffer with initialized
memory.
Passing a string, array, or Buffer as the first argument copies the passed object's data into the Buffer .
Passing an ArrayBuffer or a SharedArrayBuffer returns a Buffer that shares allocated memory with the given array buffer.
Because the behavior of new Buffer() is different depending on the type of the first argument, security and reliability issues can be inadvertently introduced into applications when
argument validation or Buffer initialization is not performed.
For example, if an attacker can cause an application to receive a number where a string is expected, the application may call new Buffer(100) instead of new Buffer("100") , leading it to allocate a 100-byte buffer instead of a 3-byte buffer with the content "100" . This is commonly possible using JSON API calls: because JSON distinguishes between numeric and string types, it allows injection of numbers into a naively written application that does not validate its input sufficiently and expects to always receive a string. Before Node.js 8.0.0, the 100-byte buffer might contain arbitrary pre-existing in-memory data, and so could be used to expose in-memory secrets to a remote attacker. Since Node.js 8.0.0, such exposure cannot occur because the data is zero-filled. However, other attacks are still possible, such as causing the server to allocate very large buffers, leading to performance degradation or crashing on memory exhaustion.
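A minimal defensive sketch (the toBuffer helper and its input are illustrative, not part of the Buffer API): validate the type before constructing a Buffer, and use the explicit factory methods.
function toBuffer(input) {
  // Reject numbers (and anything else that is not a string) so that a
  // JSON payload cannot trigger a size-based allocation.
  if (typeof input !== 'string') {
    throw new TypeError('expected a string');
  }
  return Buffer.from(input, 'utf8');
}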
To make the creation of Buffer instances more reliable and less error-prone, the various forms of the new Buffer() constructor have been deprecated and replaced by separate
Buffer.from() , Buffer.alloc() , and Buffer.allocUnsafe() methods.
Developers should migrate all existing uses of the new Buffer() constructors to one of these new APIs.
Buffer.from(array) returns a new Buffer that contains a copy of the provided octets.
Buffer.from(arrayBuffer[, byteOffset[, length]]) returns a new Buffer that shares the same allocated memory as the given ArrayBuffer .
Buffer.from(buffer) returns a new Buffer that contains a copy of the contents of the given Buffer .
Buffer.from(string[, encoding]) returns a new Buffer that contains a copy of the provided string.
Buffer.alloc(size[, fill[, encoding]]) returns a new initialized Buffer of the specified size. This method is slower than Buffer.allocUnsafe(size) but guarantees that newly
created Buffer instances never contain old data that is potentially sensitive. A TypeError will be thrown if size is not a number.
Buffer.allocUnsafe(size) and Buffer.allocUnsafeSlow(size) each return a new uninitialized Buffer of the specified size . Because the Buffer is uninitialized, the allocated
segment of memory might contain old data that is potentially sensitive.
Buffer instances returned by Buffer.allocUnsafe() and Buffer.from(array) may be allocated off a shared internal memory pool if size is less than or equal to half Buffer.poolSize .
Instances returned by Buffer.allocUnsafeSlow() never use the shared internal memory pool.
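A minimal migration sketch, mapping each deprecated constructor form to its replacement:
const fromArray = Buffer.from([1, 2, 3]); // was: new Buffer([1, 2, 3])
const fromString = Buffer.from('hello', 'utf8'); // was: new Buffer('hello')
const zeroed = Buffer.alloc(10); // was: new Buffer(10); zero-filled, safe default
const fast = Buffer.allocUnsafe(10); // uninitialized; write every byte before reading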
Node.js can be started with the --zero-fill-buffers command-line option to force all newly allocated Buffer instances to be zero-filled upon creation:
$ node --zero-fill-buffers
> Buffer.allocUnsafe(5);
<Buffer 00 00 00 00 00>
While there are clear performance advantages to using Buffer.allocUnsafe() , extra care must be taken in order to avoid introducing security vulnerabilities into an application.
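In practice that care means overwriting every byte of an unsafely allocated Buffer before it is read or exposed. A minimal sketch (the payload here is illustrative):
const payload = Buffer.from('0123456789abcdef');
const out = Buffer.allocUnsafe(payload.length);
payload.copy(out, 0, 0, payload.length); // overwrites the entire allocation
// Only now is `out` safe to hand to other code.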