Class: stream.Readable#
Added in: v0.9.4
Event: 'close'#
The 'close' event is emitted when the stream and any of its underlying
resources (a file descriptor, for example) have been closed. The event indicates
that no more events will be emitted, and no further computation will occur.
A Readable stream will always emit the 'close' event if it is
created with the emitClose option.
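For illustration, a minimal sketch of observing the event (assuming getReadableStreamSomehow() returns a stream created with emitClose left enabled):
const readable = getReadableStreamSomehow();
readable.on('close', () => {
  // No further events will be emitted and no further computation will occur.
  console.log('Stream closed and underlying resources released.');
});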
Event: 'data'#
Added in: v0.9.4
chunk <Buffer> | <string> | <any> The chunk of data. For streams that are not
operating in object mode, the chunk will be either a string or Buffer.
For streams that are in object mode, the chunk can be any JavaScript value
other than null.
The 'data' event is emitted whenever the stream is relinquishing ownership of
a chunk of data to a consumer. This may occur whenever the stream is switched
into flowing mode by calling readable.pipe(), readable.resume(), or by
attaching a listener callback to the 'data' event. The 'data' event will
also be emitted whenever the readable.read() method is called and a chunk of
data is available to be returned.
Attaching a 'data' event listener to a stream that has not been explicitly
paused will switch the stream into flowing mode. Data will then be passed as
soon as it is available.
The listener callback will be passed the chunk of data as a string if a default
encoding has been specified for the stream using the
readable.setEncoding() method; otherwise the data will be passed as a
Buffer.
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
console.log(`Received ${chunk.length} bytes of data.`);
});
Event: 'end'#
Added in: v0.9.4
The 'end' event is emitted when there is no more data to be consumed from
the stream.
The 'end' event will not be emitted unless the data is completely
consumed. This can be accomplished by switching the stream into flowing mode,
or by calling stream.read() repeatedly until all data has been
consumed.
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
console.log(`Received ${chunk.length} bytes of data.`);
});
readable.on('end', () => {
console.log('There will be no more data.');
});
Event: 'error'#
Added in: v0.9.4
The 'error' event may be emitted by a Readable implementation at any time.
Typically, this may occur if the underlying stream is unable to generate data
due to an underlying internal failure, or when a stream implementation attempts
to push an invalid chunk of data.
The listener callback will be passed a single Error object.
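For illustration, a minimal sketch of an 'error' listener:
const readable = getReadableStreamSomehow();
readable.on('error', (err) => {
  // err is the Error object describing the failure.
  console.error('Stream error:', err.message);
});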
Event: 'pause'#
Added in: v0.9.4
The 'pause' event is emitted when stream.pause() is called
and readableFlowing is not false.
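A minimal sketch, assuming the stream is first switched into flowing mode so that readableFlowing is not false:
const readable = getReadableStreamSomehow();
readable.on('pause', () => {
  console.log('Stream is now paused.');
});
readable.resume(); // readableFlowing becomes true
readable.pause();  // emits 'pause'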
Event: 'readable'#
The 'readable' event is emitted when there is data available to be read from
the stream, up to the configured high water mark (state.highWaterMark). Effectively,
it indicates that the stream has new information within the buffer. If data is available
within this buffer, stream.read() can be called to retrieve that data.
Additionally, the 'readable' event may also be emitted when the end of the stream has been
reached.
const readable = getReadableStreamSomehow();
readable.on('readable', function() {
let data;
while ((data = this.read()) !== null) {
console.log(data);
}
});
If the end of the stream has been reached, calling
stream.read() will return null and trigger the 'end'
event. This is also true if there never was any data to be read. For instance,
in the following example, foo.txt is an empty file:
const fs = require('node:fs');
const rr = fs.createReadStream('foo.txt');
rr.on('readable', () => {
console.log(`readable: ${rr.read()}`);
});
rr.on('end', () => {
console.log('end');
});
The output of running this script is:
$ node test.js
readable: null
end
In some cases, attaching a listener for the 'readable' event will cause some
amount of data to be read into an internal buffer.
In general, the readable.pipe() and 'data' event mechanisms are easier to
understand than the 'readable' event. However, handling 'readable' might
result in increased throughput.
If both 'readable' and 'data' are used at the same time, 'readable'
takes precedence in controlling the flow, i.e. 'data' will be emitted
only when stream.read() is called. The
readableFlowing property would become false.
If there are 'data' listeners when 'readable' is removed, the stream
will start flowing, i.e. 'data' events will be emitted without calling
.resume().
Event: 'resume'#
Added in: v0.9.4
The 'resume' event is emitted when stream.resume() is
called and readableFlowing is not true.
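A minimal sketch; a newly created stream has readableFlowing set to null, so the first call to resume() emits the event:
const readable = getReadableStreamSomehow();
readable.on('resume', () => {
  console.log('Stream switched into flowing mode.');
});
readable.resume(); // emits 'resume'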
readable.destroy([error])#
error <Error> Error which will be passed as payload in 'error' event
- Returns: <this>
Destroy the stream. Optionally emit an 'error' event, and emit a 'close'
event (unless emitClose is set to false). After this call, the readable
stream will release any internal resources and subsequent calls to push()
will be ignored.
Once destroy() has been called any further calls will be a no-op and no
further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement
readable._destroy().
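As a sketch, destroying with an error emits 'error' followed by 'close' (assuming emitClose was not disabled):
const readable = getReadableStreamSomehow();
readable.on('error', (err) => {
  console.error('Destroyed with:', err.message);
});
readable.on('close', () => {
  console.log('Stream closed.');
});
readable.destroy(new Error('boom'));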
readable.closed#
Added in: v18.0.0
Is true after 'close' has been emitted.
readable.isPaused()#
Added in: v0.11.14
The readable.isPaused() method returns the current operating state of the
Readable. This is used primarily by the mechanism that underlies the
readable.pipe() method. In most typical cases, there will be no reason to
use this method directly.
const stream = require('node:stream');
const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
readable.pause()#
Added in: v0.9.4
The readable.pause() method will cause a stream in flowing mode to stop
emitting 'data' events, switching out of flowing mode. Any data that
becomes available will remain in the internal buffer.
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
The readable.pause() method has no effect if there is a 'readable'
event listener.
readable.pipe(destination[, options])#
Added in: v0.9.4
The readable.pipe() method attaches a Writable stream to the readable,
causing it to switch automatically into flowing mode and push all of its data
to the attached Writable. The flow of data will be automatically managed
so that the destination Writable stream is not overwhelmed by a faster
Readable stream.
The following example pipes all of the data from the readable into a file
named file.txt:
const fs = require('node:fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
readable.pipe(writable);
It is possible to attach multiple Writable streams to a single Readable
stream.
The readable.pipe() method returns a reference to the destination stream
making it possible to set up chains of piped streams:
const fs = require('node:fs');
const zlib = require('node:zlib');
const r = fs.createReadStream('file.txt');
const z = zlib.createGzip();
const w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);
By default, stream.end() is called on the destination Writable
stream when the source Readable stream emits 'end', so that the
destination is no longer writable. To disable this default behavior, the end
option can be passed as false, causing the destination stream to remain open:
reader.pipe(writer, { end: false });
reader.on('end', () => {
writer.end('Goodbye\n');
});
One important caveat is that if the Readable stream emits an error during
processing, the Writable destination is not closed automatically. If an
error occurs, it will be necessary to manually close each stream in order
to prevent memory leaks.
The process.stderr and process.stdout Writable streams are never
closed until the Node.js process exits, regardless of the specified options.
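One common way to avoid having to close each stream manually is stream.pipeline(), which forwards errors and cleans up all of the streams involved; a minimal sketch:
const { pipeline } = require('node:stream');
const fs = require('node:fs');
pipeline(
  getReadableStreamSomehow(),
  fs.createWriteStream('file.txt'),
  (err) => {
    if (err) {
      console.error('Pipeline failed:', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  },
);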
readable.read([size])#
Added in: v0.9.4
The readable.read() method reads data out of the internal buffer and
returns it. If no data is available to be read, null is returned. By default,
the data is returned as a Buffer object unless an encoding has been
specified using the readable.setEncoding() method or the stream is operating
in object mode.
The optional size argument specifies a specific number of bytes to read. If
size bytes are not available to be read, null will be returned unless
the stream has ended, in which case all of the data remaining in the internal
buffer will be returned.
If the size argument is not specified, all of the data contained in the
internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
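A minimal sketch of the size argument's behavior on a non-object-mode stream in paused mode:
const readable = getReadableStreamSomehow();
readable.once('readable', () => {
  // Returns a Buffer of exactly 16 bytes, or null if fewer than 16 bytes
  // are currently buffered and the stream has not yet ended.
  const chunk = readable.read(16);
  console.log(chunk);
});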
The readable.read() method should only be called on Readable streams
operating in paused mode. In flowing mode, readable.read() is called
automatically until the internal buffer is fully drained.
const readable = getReadableStreamSomehow();
readable.on('readable', () => {
let chunk;
console.log('Stream is readable (new data received in buffer)');
while (null !== (chunk = readable.read())) {
console.log(`Read ${chunk.length} bytes of data...`);
}
});
readable.on('end', () => {
console.log('Reached end of stream.');
});
Each call to readable.read() returns a chunk of data or null, signifying
that there's no more data to read at that moment. These chunks aren't automatically
concatenated. Because a single read() call may not return all of the data, using
a while loop may be necessary to continuously read chunks until all data is retrieved.
When reading a large file, .read() might return null temporarily, indicating
that it has consumed all buffered content but there may be more data yet to be
buffered. In such cases, a new 'readable' event is emitted once there's more
data in the buffer, and the 'end' event signifies the end of data transmission.
Therefore, to read a file's whole contents from a readable, it is necessary
to collect chunks across multiple 'readable' events:
const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  // join('') assumes string chunks (for example, after readable.setEncoding());
  // for Buffer chunks, use Buffer.concat(chunks) instead.
  const content = chunks.join('');
});
A Readable stream in object mode will always return a single item from
a call to readable.read(size), regardless of the value of the
size argument.
If the readable.read() method returns a chunk of data, a 'data' event will
also be emitted.
Calling stream.read([size]) after the 'end' event has
been emitted will return null. No runtime error will be raised.
readable.readable#
Added in: v11.4.0
Is true if it is safe to call readable.read(), which means
the stream has not been destroyed or emitted 'error' or 'end'.
readable.readableAborted#
Returns whether the stream was destroyed or errored before emitting 'end'.
readable.readableDidRead#
Returns whether 'data' has been emitted.
readable.readableEncoding#
Added in: v12.7.0
Getter for the property encoding of a given Readable stream. The encoding
property can be set using the readable.setEncoding() method.
readable.readableEnded#
Added in: v12.9.0
Becomes true when the 'end' event is emitted.
readable.errored#
Added in: v18.0.0
Returns error if the stream has been destroyed with an error.
readable.readableFlowing#
Added in: v9.4.0
This property reflects the current state of a Readable stream as described
in the Three states section.
readable.readableHighWaterMark#
Added in: v9.3.0
Returns the value of highWaterMark passed when creating this Readable.
readable.readableLength#
Added in: v9.4.0
This property contains the number of bytes (or objects) in the queue
ready to be read. The value provides introspection data regarding
the status of the highWaterMark.
readable.readableObjectMode#
Added in: v12.3.0
Getter for the property objectMode of a given Readable stream.
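Taken together, these getters allow inspecting a stream's buffering state. A minimal sketch (the exact values depend on how the stream was created):
const readable = getReadableStreamSomehow();
console.log(readable.readableHighWaterMark); // buffer capacity in bytes (or objects in object mode)
console.log(readable.readableLength);        // bytes (or objects) currently buffered
console.log(readable.readableFlowing);       // null, false, or true
console.log(readable.readableObjectMode);    // whether the stream is in object mode
console.log(readable.readableEnded);         // whether 'end' has been emitted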
readable.resume()#
The readable.resume() method causes an explicitly paused Readable stream to
resume emitting 'data' events, switching the stream into flowing mode.
The readable.resume() method can be used to fully consume the data from a
stream without actually processing any of that data:
getReadableStreamSomehow()
.resume()
.on('end', () => {
console.log('Reached the end, but did not read anything.');
});
The readable.resume() method has no effect if there is a 'readable'
event listener.
readable.setEncoding(encoding)#
Added in: v0.9.4
The readable.setEncoding() method sets the character encoding for
data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as
Buffer objects. Setting an encoding causes the stream data
to be returned as strings of the specified encoding rather than as Buffer
objects. For instance, calling readable.setEncoding('utf8') will cause the
output data to be interpreted as UTF-8 data, and passed as strings. Calling
readable.setEncoding('hex') will cause the data to be encoded in hexadecimal
string format.
The Readable stream will properly handle multi-byte characters delivered
through the stream that would otherwise become improperly decoded if simply
pulled from the stream as Buffer objects.
const assert = require('node:assert');
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});
readable.unpipe([destination])#
Added in: v0.9.4
The readable.unpipe() method detaches a Writable stream previously attached
using the stream.pipe() method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then
the method does nothing.
const fs = require('node:fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
readable.unshift(chunk[, encoding])#
Passing chunk as null signals the end of the stream (EOF) and behaves the
same as readable.push(null), after which no more data can be written. The EOF
signal is put at the end of the buffer and any buffered data will still be
flushed.
The readable.unshift() method pushes a chunk of data back into the internal
buffer. This is useful in certain situations where a stream is being consumed by
code that needs to "un-consume" some amount of data that it has optimistically
pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event
has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to
use of a Transform stream instead. See the API for stream implementers
section for more information.
const { StringDecoder } = require('node:string_decoder');
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
Unlike stream.push(chunk), stream.unshift(chunk) will not
end the reading process by resetting the internal reading state of the stream.
This can cause unexpected results if readable.unshift() is called during a
read (i.e. from within a stream._read() implementation on a
custom stream). Following the call to readable.unshift() with an immediate
stream.push('') will reset the reading state appropriately,
however it is best to simply avoid calling readable.unshift() while in the
process of performing a read.
readable.wrap(stream)#
Added in: v0.9.4
Prior to Node.js 0.10, streams did not implement the entire node:stream
module API as it is currently defined. (See Compatibility for more
information.)
When using an older Node.js library that emits 'data' events and has a
stream.pause() method that is advisory only, the
readable.wrap() method can be used to create a Readable stream that uses
the old stream as its data source.
It will rarely be necessary to use readable.wrap() but the method has been
provided as a convenience for interacting with older Node.js applications and
libraries.
const { OldReader } = require('./old-api-module.js');
const { Readable } = require('node:stream');
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);
myReader.on('readable', () => {
myReader.read();
});
readable[Symbol.asyncIterator]()#
const fs = require('node:fs');

async function print(readable) {
  readable.setEncoding('utf8');
  let data = '';
  for await (const chunk of readable) {
    data += chunk;
  }
  console.log(data);
}

print(fs.createReadStream('file')).catch(console.error);
If the loop terminates with a break, return, or a throw, the stream will
be destroyed. In other terms, iterating over a stream will consume the stream
fully. The stream will be read in chunks of size equal to the highWaterMark
option. In the code example above, data will be in a single chunk if the file
has less than 64 KiB of data because no highWaterMark option is provided to
fs.createReadStream().
readable[Symbol.asyncDispose]()#
Added in: v20.4.0, v18.18.0
Calls readable.destroy() with an AbortError and returns
a promise that fulfills when the stream is finished.
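A minimal sketch of invoking the method directly (it is also called implicitly by explicit resource management syntax such as await using, where available):
const readable = getReadableStreamSomehow();
await readable[Symbol.asyncDispose]();
console.log(readable.destroyed); // true; the stream was destroyed with an AbortError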
readable.compose(stream[, options])#
import { Readable } from 'node:stream';

async function* splitToWords(source) {
  for await (const chunk of source) {
    const words = String(chunk).split(' ');
    for (const word of words) {
      yield word;
    }
  }
}

const wordsStream = Readable.from(['this is', 'compose as operator']).compose(splitToWords);
const words = await wordsStream.toArray();
console.log(words); // prints ['this', 'is', 'compose', 'as', 'operator']
See stream.compose for more information.
readable.iterator([options])#
options <Object>
destroyOnReturn <boolean> When set to false, calling return on the
async iterator, or exiting a for await...of iteration using a break,
return, or throw will not destroy the stream. Default: true.
- Returns: <AsyncIterator> to consume the stream.
The iterator created by this method gives users the option to cancel the
destruction of the stream if the for await...of loop is exited by return,
break, or throw. The stream is still destroyed if it emits an error during
iteration.
const { Readable } = require('node:stream');

async function printIterator(readable) {
  for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
    console.log(chunk); // 1
    break;
  }

  console.log(readable.destroyed); // false

  for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
    console.log(chunk); // Will print 2 and then 3
  }

  console.log(readable.destroyed); // True, stream was totally consumed
}

async function printSymbolAsyncIterator(readable) {
  for await (const chunk of readable) {
    console.log(chunk); // 1
    break;
  }

  console.log(readable.destroyed); // true
}

async function showBoth() {
  await printIterator(Readable.from([1, 2, 3]));
  await printSymbolAsyncIterator(Readable.from([1, 2, 3]));
}

showBoth();
readable.map(fn[, options])#
fn <Function> | <AsyncFunction> a function to map over every chunk in the
stream.
  data <any> a chunk of data from the stream.
  options <Object>
    signal <AbortSignal> aborted if the stream is destroyed, allowing the
    fn call to be aborted early.
options <Object>
  concurrency <number> the maximum number of concurrent invocations of fn to
  call on the stream at once. Default: 1.
  highWaterMark <number> how many items to buffer while waiting for user
  consumption of the mapped items. Default: concurrency * 2 - 1.
  signal <AbortSignal> allows destroying the stream if the signal is
  aborted.
- Returns: <Readable> a stream mapped with the function fn.
This method allows mapping over the stream. The fn function will be called
for every chunk in the stream. If the fn function returns a promise - that
promise will be awaited before being passed to the result stream.
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous mapper.
for await (const chunk of Readable.from([1, 2, 3, 4]).map((x) => x * 2)) {
  console.log(chunk); // 2, 4, 6, 8
}
// With an asynchronous mapper, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).map((domain) => resolver.resolve4(domain), { concurrency: 2 });
for await (const result of dnsResults) {
  console.log(result);
}
readable.filter(fn[, options])#
fn <Function> | <AsyncFunction> a function to filter chunks from the stream.
  data <any> a chunk of data from the stream.
  options <Object>
    signal <AbortSignal> aborted if the stream is destroyed, allowing the
    fn call to be aborted early.
options <Object>
  concurrency <number> the maximum number of concurrent invocations of fn to
  call on the stream at once. Default: 1.
  highWaterMark <number> how many items to buffer while waiting for user
  consumption of the filtered items. Default: concurrency * 2 - 1.
  signal <AbortSignal> allows destroying the stream if the signal is
  aborted.
- Returns: <Readable> a stream filtered with the predicate fn.
This method allows filtering the stream. For each chunk in the stream the fn
function will be called and if it returns a truthy value, the chunk will be
passed to the result stream. If the fn function returns a promise - that
promise will be awaited.
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous predicate.
for await (const chunk of Readable.from([1, 2, 3, 4]).filter((x) => x > 2)) {
  console.log(chunk); // 3, 4
}
// With an asynchronous predicate, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).filter(async (domain) => {
  // resolve4() with { ttl: true } resolves to an array of { address, ttl } records.
  const [{ ttl }] = await resolver.resolve4(domain, { ttl: true });
  return ttl > 60;
}, { concurrency: 2 });
for await (const result of dnsResults) {
  console.log(result);
}
readable.forEach(fn[, options])#
Added in: v17.5.0, v16.15.0
fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  data <any> a chunk of data from the stream.
  options <Object>
    signal <AbortSignal> aborted if the stream is destroyed, allowing the
    fn call to be aborted early.
options <Object>
  concurrency <number> the maximum number of concurrent invocations of fn to
  call on the stream at once. Default: 1.
  signal <AbortSignal> allows destroying the stream if the signal is
  aborted.
- Returns: <Promise> a promise for when the stream has finished.
This method allows iterating a stream. For each chunk in the stream the
fn function will be called. If the fn function returns a promise - that
promise will be awaited.
This method is different from for await...of loops in that it can optionally
process chunks concurrently. In addition, a forEach iteration can only be
stopped by having passed a signal option and aborting the related
AbortController while for await...of can be stopped with break or
return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it
uses the readable event in the underlying machinery and can limit the
number of concurrent fn calls.
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous predicate.
for await (const chunk of Readable.from([1, 2, 3, 4]).filter((x) => x > 2)) {
  console.log(chunk); // 3, 4
}
// With an asynchronous mapper, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).map(async (domain) => {
  const [{ address }] = await resolver.resolve4(domain, { ttl: true });
  return address;
}, { concurrency: 2 });
await dnsResults.forEach((result) => {
  console.log(result); // Logs the resolved address of each domain.
});
console.log('done'); // Stream has finished
readable.toArray([options])#
Added in: v17.5.0, v16.15.0
options <Object>
signal <AbortSignal> allows cancelling the toArray operation if the
signal is aborted.
- Returns: <Promise> a promise containing an array with the contents of the
stream.
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of
streams. It's intended for interoperability and convenience, not as the primary
way to consume streams.
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

await Readable.from([1, 2, 3, 4]).toArray(); // [1, 2, 3, 4]

// Make DNS queries concurrently using .map and
// collect the results into an array using toArray.
const resolver = new Resolver();
const dnsResults = await Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).map(async (domain) => {
  const [{ address }] = await resolver.resolve4(domain, { ttl: true });
  return address;
}, { concurrency: 2 }).toArray();
readable.some(fn[, options])#
Added in: v17.5.0, v16.15.0
fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  data <any> a chunk of data from the stream.
  options <Object>
    signal <AbortSignal> aborted if the stream is destroyed, allowing the
    fn call to be aborted early.
options <Object>
  concurrency <number> the maximum number of concurrent invocations of fn to
  call on the stream at once. Default: 1.
  signal <AbortSignal> allows destroying the stream if the signal is
  aborted.
- Returns: <Promise> a promise evaluating to true if fn returned a truthy
  value for at least one of the chunks.
This method is similar to Array.prototype.some and calls fn on each chunk
in the stream until the awaited return value is true (or any truthy value).
Once an fn call on a chunk awaited return value is truthy, the stream is
destroyed and the promise is fulfilled with true. If none of the fn
calls on the chunks return a truthy value, the promise is fulfilled with
false.
import { Readable } from 'node:stream';
import { stat } from 'node:fs/promises';

// With a synchronous predicate.
await Readable.from([1, 2, 3, 4]).some((x) => x > 2); // true
await Readable.from([1, 2, 3, 4]).some((x) => x < 0); // false

// With an asynchronous predicate, making at most 2 file checks at a time.
const anyBigFile = await Readable.from([
  'file1',
  'file2',
  'file3',
]).some(async (fileName) => {
  const stats = await stat(fileName);
  return stats.size > 1024 * 1024;
}, { concurrency: 2 });
console.log(anyBigFile); // `true` if any file in the list is bigger than 1 MiB
console.log('done'); // Stream has finished
readable.find(fn[, options])#
Added in: v17.5.0, v16.17.0
fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  data <any> a chunk of data from the stream.
  options <Object>
    signal <AbortSignal> aborted if the stream is destroyed, allowing the
    fn call to be aborted early.
options <Object>
  concurrency <number> the maximum number of concurrent invocations of fn to
  call on the stream at once. Default: 1.
  signal <AbortSignal> allows destroying the stream if the signal is
  aborted.
- Returns: <Promise> a promise evaluating to the first chunk for which fn
  evaluated with a truthy value, or undefined if no element was found.
This method is similar to Array.prototype.find and calls fn on each chunk
in the stream to find a chunk with a truthy value for fn. Once an fn call's
awaited return value is truthy, the stream is destroyed and the promise is
fulfilled with value for which fn returned a truthy value. If all of the
fn calls on the chunks return a falsy value, the promise is fulfilled with
undefined.
import { Readable } from 'node:stream';
import { stat } from 'node:fs/promises';

// With a synchronous predicate.
await Readable.from([1, 2, 3, 4]).find((x) => x > 2); // 3
await Readable.from([1, 2, 3, 4]).find((x) => x > 0); // 1
await Readable.from([1, 2, 3, 4]).find((x) => x > 10); // undefined

// With an asynchronous predicate, making at most 2 file checks at a time.
const foundBigFile = await Readable.from([
  'file1',
  'file2',
  'file3',
]).find(async (fileName) => {
  const stats = await stat(fileName);
  return stats.size > 1024 * 1024;
}, { concurrency: 2 });
console.log(foundBigFile); // File name of the first file larger than 1 MiB, if any
console.log('done'); // Stream has finished
readable.every(fn[, options])#
Added in: v17.5.0, v16.15.0
fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  data <any> a chunk of data from the stream.
  options <Object>
    signal <AbortSignal> aborted if the stream is destroyed, allowing the
    fn call to be aborted early.
options <Object>
  concurrency <number> the maximum number of concurrent invocations of fn to
  call on the stream at once. Default: 1.
  signal <AbortSignal> allows destroying the stream if the signal is
  aborted.
- Returns: <Promise> a promise evaluating to true if fn returned a truthy
  value for all of the chunks.
This method is similar to Array.prototype.every and calls fn on each chunk
in the stream to check whether all awaited return values are truthy for fn.
Once an fn call on a chunk awaited return value is falsy, the stream is
destroyed and the promise is fulfilled with false. If all of the fn calls
on the chunks return a truthy value, the promise is fulfilled with true.
import { Readable } from 'node:stream';
import { stat } from 'node:fs/promises';

// With a synchronous predicate.
await Readable.from([1, 2, 3, 4]).every((x) => x > 2); // false
await Readable.from([1, 2, 3, 4]).every((x) => x > 0); // true

// With an asynchronous predicate, making at most 2 file checks at a time.
const allBigFiles = await Readable.from([
  'file1',
  'file2',
  'file3',
]).every(async (fileName) => {
  const stats = await stat(fileName);
  return stats.size > 1024 * 1024;
}, { concurrency: 2 });
console.log(allBigFiles); // `true` if all files in the list are bigger than 1 MiB
console.log('done'); // Stream has finished
readable.flatMap(fn[, options])#
Added in: v17.5.0, v16.15.0
This method returns a new stream by applying the given callback to each
chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from
fn and the result streams will be merged (flattened) into the returned
stream.
import { Readable } from 'node:stream';
import { createReadStream } from 'node:fs';

// With a synchronous mapper.
for await (const chunk of Readable.from([1, 2, 3, 4]).flatMap((x) => [x, x])) {
  console.log(chunk); // 1, 1, 2, 2, 3, 3, 4, 4
}
// With an asynchronous mapper, combine the contents of 4 files.
const concatResult = Readable.from([
  './1.mjs',
  './2.mjs',
  './3.mjs',
  './4.mjs',
]).flatMap((fileName) => createReadStream(fileName));
for await (const result of concatResult) {
  // This will contain the contents (all chunks) of all 4 files.
  console.log(result);
}
readable.drop(limit[, options])#
Added in: v17.5.0, v16.15.0
limit <number> the number of chunks to drop from the readable.
options <Object>
signal <AbortSignal> allows destroying the stream if the signal is
aborted.
- Returns: <Readable> a stream with
limit chunks dropped.
This method returns a new stream with the first limit chunks dropped.
import { Readable } from 'node:stream';

await Readable.from([1, 2, 3, 4]).drop(2).toArray(); // [3, 4]
readable.take(limit[, options])#
Added in: v17.5.0, v16.15.0
limit <number> the number of chunks to take from the readable.
options <Object>
signal <AbortSignal> allows destroying the stream if the signal is
aborted.
- Returns: <Readable> a stream with
limit chunks taken.
This method returns a new stream with the first limit chunks.
import { Readable } from 'node:stream';

await Readable.from([1, 2, 3, 4]).take(2).toArray(); // [1, 2]
readable.reduce(fn[, initial[, options]])#
Added in: v17.5.0, v16.15.0
fn <Function> | <AsyncFunction> a reducer function to call over every chunk
in the stream.
  previous <any> the value obtained from the last call to fn, or the
  initial value if specified, or the first chunk of the stream otherwise.
  data <any> a chunk of data from the stream.
  options <Object>
    signal <AbortSignal> aborted if the stream is destroyed, allowing the
    fn call to be aborted early.
initial <any> the initial value to use in the reduction.
options <Object>
  signal <AbortSignal> allows destroying the stream if the signal is
  aborted.
- Returns: <Promise> a promise for the final value of the reduction.
This method calls fn on each chunk of the stream in order, passing it the
result from the calculation on the previous element. It returns a promise for
the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the
initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property.
import { Readable } from 'node:stream';
import { readdir, stat } from 'node:fs/promises';
import { join } from 'node:path';
const directoryPath = './src';
const filesInDir = await readdir(directoryPath);
const folderSize = await Readable.from(filesInDir)
.reduce(async (totalSize, file) => {
const { size } = await stat(join(directoryPath, file));
return totalSize + size;
}, 0);
console.log(folderSize);
The reducer function iterates the stream element-by-element which means that
there is no concurrency parameter or parallelism. To perform a reduce
concurrently, you can extract the async function to the readable.map method.
import { Readable } from 'node:stream';
import { readdir, stat } from 'node:fs/promises';
import { join } from 'node:path';
const directoryPath = './src';
const filesInDir = await readdir(directoryPath);
const folderSize = await Readable.from(filesInDir)
.map((file) => stat(join(directoryPath, file)), { concurrency: 2 })
.reduce((totalSize, { size }) => totalSize + size, 0);
console.log(folderSize);