# Node.js Advanced Level Interview Questions and Answers for Experienced
Q. How does Node.js read the content of a file?
The "normal" way in Node.js is probably to read the content of a file in a non-blocking, asynchronous way. That is, to tell Node to read in the file, and then to get a callback when the file-reading has been finished. That would allow us to handle in several requests in parallel.
Common use for the File System module:
- Read files
- Create files
- Update files
- Delete files
- Rename files
Read Files
index.html
<html>
<body>
<h1>My Header</h1>
<p>My paragraph.</p>
</body>
</html>
// read_file.js
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
fs.readFile('index.html', function(err, data) {
  if (err) {
    // send a 404 if index.html could not be read
    res.writeHead(404, {'Content-Type': 'text/plain'});
    return res.end('File not found');
  }
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.write(data);
  res.end();
});
}).listen(8080);
Initiate read_file.js:
node read_file.js
Q. How many types of streams are present in node.js?
Streams are objects that let you read data from a source or write data to a destination in a continuous fashion. There are four types of streams:
- Readable − Stream which is used for read operation.
- Writable − Stream which is used for write operation.
- Duplex − Stream which can be used for both read and write operation.
- Transform − A type of duplex stream where the output is computed based on input.
Each type of stream is an EventEmitter instance and emits several events at different points in time.
Example:
- data − This event is fired when there is data available to read.
- end − This event is fired when there is no more data to read.
- error − This event is fired when there is any error receiving or writing data.
- finish − This event is fired when all the data has been flushed to underlying system.
Reading from a Stream:
var fs = require("fs");
var data = '';
// Create a readable stream
var readerStream = fs.createReadStream('input.txt');
// Set the encoding to be utf8.
readerStream.setEncoding('UTF8');
// Handle stream events --> data, end, and error
readerStream.on('data', function(chunk) {
data += chunk;
});
readerStream.on('end',function() {
console.log(data);
});
readerStream.on('error', function(err) {
console.log(err.stack);
});
console.log("Program Ended");
Writing to a Stream:
var fs = require("fs");
var data = 'Simply Easy Learning';
// Create a writable stream
var writerStream = fs.createWriteStream('output.txt');
// Write the data to stream with encoding to be utf8
writerStream.write(data,'UTF8');
// Mark the end of file
writerStream.end();
// Handle stream events --> finish, and error
writerStream.on('finish', function() {
console.log("Write completed.");
});
writerStream.on('error', function(err) {
console.log(err.stack);
});
console.log("Program Ended");
Piping the Streams:
Piping is a mechanism where we provide the output of one stream as the input to another stream. It is normally used to get data from one stream and to pass the output of that stream to another stream. There is no limit on piping operations.
var fs = require("fs");
// Create a readable stream
var readerStream = fs.createReadStream('input.txt');
// Create a writable stream
var writerStream = fs.createWriteStream('output.txt');
// Pipe the read and write operations
// read input.txt and write data to output.txt
readerStream.pipe(writerStream);
console.log("Program Ended");
Chaining the Streams:
Chaining is a mechanism to connect the output of one stream to another stream and create a chain of multiple stream operations. It is normally used with piping operations.
var fs = require("fs");
var zlib = require('zlib');
// Compress the file input.txt to input.txt.gz
fs.createReadStream('input.txt')
.pipe(zlib.createGzip())
.pipe(fs.createWriteStream('input.txt.gz'));
console.log("File Compressed.");
Q. Is Node.js entirely based on a single-thread?
Yes, Node.js processes all requests on a single thread. But that is only part of the theory behind the Node.js design: rather than relying on many threads, it makes use of events and callbacks to handle a large number of requests asynchronously.
Moreover, Node.js has an optimized design which utilizes both JavaScript and C++ to guarantee maximum performance. JavaScript executes on the server side in Google's V8 engine, and the C++ library libuv takes care of the non-sequential I/O via background workers.
To explain it practically, assume there are hundreds of requests lined up in the Node.js queue. As per the design, the event-loop thread receives all of them and forwards the I/O work to background workers for execution. Once the workers finish processing, the registered callbacks are invoked on the event-loop thread to pass the results back to the user.
Q. How does Node.js handle child threads?
Node.js is a single-threaded runtime which in the background uses multiple threads to execute asynchronous code. Node.js is non-blocking, which means that callbacks are delegated to the event loop and can be executed by other threads; this is handled by the Node.js runtime.
- The primary Node.js application runs in an event loop, which is on a single thread.
- Background I/O runs in a thread pool that is only accessible to C/C++ or other compiled/native modules and is mostly transparent to the JavaScript code.
- Node v11/12 added worker_threads (initially experimental), which is another option.
- Node.js does support forking multiple processes (which are executed on different cores).
- It is important to know that state is not shared between the parent and a forked process.
- We can pass messages to a forked process (which runs a different script) and back to the parent process with the send() function, as sketched below.
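A minimal sketch of this parent/child messaging with child_process.fork() (the file name sum_fork.js is illustrative):
// sum_fork.js -- run with: node sum_fork.js
const { fork } = require('child_process');
if (process.send === undefined) {
  // Parent: fork this same file as a child process
  const child = fork(__filename);
  child.on('message', (msg) => {
    console.log('Sum from child:', msg.sum);
    child.disconnect();
  });
  child.send({ numbers: [1, 2, 3, 4] }); // IPC message to the child
} else {
  // Child: receive numbers over IPC and reply with their sum
  process.on('message', (msg) => {
    process.send({ sum: msg.numbers.reduce((a, b) => a + b, 0) });
  });
}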
Q. How does Node.js support multi-processor platforms, and does it fully utilize all processor resources?
Since Node.js is by default a single-threaded application, it will run on a single processor core and will not take full advantage of multi-core resources. However, Node.js provides support for deployment on multi-core systems, to take greater advantage of the hardware. The Cluster module is one of the core Node.js modules, and it allows running multiple Node.js worker processes that share the same port.
The cluster module helps to spawn new processes on the operating system. Each process works independently, so you cannot use shared state between child processes. Each process communicates with the main process by IPC, and server handles are passed back and forth.
Cluster supports two types of load distribution (see the sketch after this list):
- The main process listens on a port, accepts new connections and assigns them to child processes in a round-robin fashion.
- The main process assigns the port to a child process, and the child process itself listens on the port.
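A minimal sketch of the cluster module (port 8000 is arbitrary):
// cluster_server.js
const cluster = require('cluster');
const http = require('http');
const os = require('os');
if (cluster.isMaster) {            // cluster.isPrimary in newer Node.js versions
  // Fork one worker per CPU core
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died, starting a new one`);
    cluster.fork();
  });
} else {
  // Every worker shares the same port
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}\n`);
  }).listen(8000);
}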
Q. If Node.js is single threaded then how it handles concurrency?
Despite being single-threaded, it is the asynchronous nature of Node.js that makes it possible to handle concurrency and perform multiple I/O operations at the same time. Node.js uses an event loop to maintain concurrency and perform non-blocking I/O operations.
As soon as Node.js starts, it initializes an event loop. The event loop works on a queue (called the event queue) and performs tasks in FIFO (First In, First Out) order. It executes a queued task only when the call stack is empty. The call stack works in LIFO (Last In, First Out) order. The event loop continuously checks the call stack to see whether there is any task that needs to be run, and whenever it finds a function to execute, it pushes it onto the stack and runs it.
function add(a,b){
return a+b;
}
function print(n){
console.log(`Two times the number ${n} is `+add(n,n));
}
print(5);
Here, when the code executes, print(5) is invoked and pushed onto the call stack. While building the string to log, it encounters the call add(n, n), suspends its current execution, and pushes add onto the top of the stack. add returns the sum a + b and is popped off the stack; the suspended print call then resumes, logs the output to the console, and is popped off as well, leaving the stack empty. This is how the call stack works.
Q. How to kill child processes that spawn their own child processes in Node.js?
If a child process in Node.js spawns its own child processes, the kill() method will not kill those grandchild processes. For example, if a process started via the child_process module spawns further processes of its own, killing that child process will not make the program quit.
var spawn = require('child_process').spawn;
var child = spawn('my-command');
child.kill();
The program above will not quit if my-command spins up some more processes.
PID range hack:
We can start child processes with the {detached: true} option, so those processes will not be attached to the main process but will go into a new process group. Then, using process.kill(-pid) from the main process, we can kill all processes that are in the same group as the child, since the group ID equals the child's pid. In this case, there is only one process in that group.
var spawn = require('child_process').spawn;
var child = spawn('my-command', {detached: true});
process.kill(-child.pid);
Note the minus sign before pid: passing a negative PID makes process.kill() send the signal to the whole process group.
# NODE.JS WEB MODULE
Q. Can you create http server in Node.js, explain the code used for it?
Yes, we can create an HTTP server in Node.js using the createServer() method of the built-in http module.
Following is the sample code.
var http = require('http');
var requestListener = function (request, response) {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('Welcome Viewers\n');
}
var server = http.createServer(requestListener);
server.listen(4200); // The port where you want to start with.
Q. How to load HTML in Node.js?
To load HTML in Node.js we have to set the "Content-Type" header in the response to text/html instead of text/plain.
fs.readFile(filename, "binary", function(err, file) {
if(err) {
response.writeHead(500, {"Content-Type": "text/plain"});
response.write(err + "\n");
response.end();
return;
}
response.writeHead(200);
response.write(file, "binary");
response.end();
});
Now we will modify this code to load an HTML page instead of plain text.
fs.readFile(filename, "binary", function(err, file) {
if(err) {
response.writeHead(500, {"Content-Type": "text/html"});
response.write(err + "\n");
response.end();
return;
}
response.writeHead(200, {"Content-Type": "text/html"});
response.write(file);
response.end();
});
Q. How can you listen on port 80 with Node?
Instead of running on port 80 we can redirect port 80 to your application's port (>1024) using
iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
Q. What are the middleware functions in Node.js?
Middleware functions are functions that have access to the request object (req), the response object (res), and the next function in the application's request-response cycle.
Middleware functions can perform the following tasks:
- Execute any code.
- Make changes to the request and the response objects.
- End the request-response cycle.
- Call the next middleware in the stack.
If the current middleware function does not end the request-response cycle, it must call next() to pass control to the next middleware function. Otherwise, the request will be left hanging.
Middleware functions that return a Promise will call next(value) when they reject or throw an error; next will be called with either the rejected value or the thrown Error.
Q. Explain the use of next in node.js with example?
The next function is a function in the Express router which, when invoked, executes the middleware succeeding the current middleware.
Example: Middleware function myLogger
To load the middleware function, call app.use(), specifying the middleware function. For example, the following code loads the myLogger middleware function before the route to the root path (/).
const express = require("express");
const app = express();
const myLogger = function (req, res, next) {
console.log("LOGGED");
next();
};
app.use(myLogger);
app.get("/", (req, res) => {
res.send("Hello World!");
});
app.listen(3000);
Note: The next() function is not a part of the Node.js or Express API, but is the third argument that is passed to the middleware function. The next() function could be named anything, but by convention it is always named "next". To avoid confusion, always use this convention.
Q. Why to use Express.js?
Express.js is a prebuilt Node.js framework that helps you create server-side web applications faster and smarter. Simplicity, minimalism, flexibility and scalability are some of its characteristics, and since it is written in Node.js itself, it inherits its performance as well.
Express.js is a light-weight web application framework that helps organize your web application into an MVC architecture on the server side. You can then use a database like MongoDB with Mongoose (for modeling) to provide a backend for your Node.js application. Express.js basically helps you manage everything, from routes to handling requests and views.
It has become the standard server framework for Node.js. Express is the backend part of the MEAN stack, a free and open-source JavaScript software stack for building dynamic web sites and web applications, which has the following components:
- MongoDB - The standard NoSQL database
- Express.js - The default web applications framework
- Angular.js - The JavaScript MVC framework used for web applications
- Node.js - The JavaScript runtime used for scalable server-side and networking applications.
The Express.js framework makes it very easy to develop an application which can handle multiple types of requests, like GET, PUT, POST, and DELETE.
Using Express:
var express=require('express');
var app=express();
app.get('/',function(req,res) {
res.send('Hello World!');
});
var server=app.listen(3000,function() {});
Q. Write the steps for setting up an Express JS application?
1. Install Express Generator:
C:\node>npm install -g express-generator
2. Create an Express Project:
C:\node>express --view="ejs" nodetest1
3. Edit Dependencies:
Make sure to cd into your nodetest1 folder. OK, now we have some basic structure in there, but we're not quite done. You'll note that the express-generator routine created a file called package.json in your nodetest1 directory. Open this up in a text editor and it'll look like this:
// package.json
{
"name": "nodetest1",
"version": "0.0.0",
"private": true,
"scripts": {
"start": "node ./bin/www"
},
"dependencies": {
"cookie-parser": "~1.4.3",
"debug": "~2.6.9",
"ejs": "~2.5.7",
"express": "~4.16.0",
"http-errors": "~1.6.2",
"morgan": "~1.9.0"
}
}
This is a basic JSON file describing our app and its dependencies. We need to add a few things to it, specifically the packages for MongoDB and Monk.
C:\node\nodetest1>npm install --save monk@^6.0.6 mongodb@^3.1.13
4. Install Dependencies:
C:\node\nodetest1>npm install
C:\node\nodetest1>npm start
Node Console
> nodetest1@0.0.0 start C:\node\nodetest1
> node ./bin/www
Q. What is your favourite HTTP framework and why?
1. Express.js:
Express provides a thin layer on top of Node.js with web application features such as basic routing, middleware, template engine and static files serving, so the drastic I/O performance of Node.js doesn’t get compromised.
Express is a minimal, unopinionated framework: it doesn't impose any of the prevalent design patterns such as MVC, MVP, or MVVM out of the box. For fans of simplicity this is a big plus compared to other frameworks, because you can build your application to your own preference with no unnecessary learning curve. This is especially advantageous when creating a new personal project with no historical burden; but as the project or developing team grows, the lack of standardization may lead to extra work for project/code management, and in the worst case it may make the code unmaintainable.
2. Generator:
Even though the framework is unopinionated, it does provide a generator that creates a specific project folder structure. After installing the express-generator npm package and creating an application skeleton with the generator command, an application folder with a clear hierarchy is created to help you organize images, front-end static JavaScript, stylesheet files and HTML template files.
npm install express-generator -g
express helloapp
3. Middleware:
Middleware are basically just functions that have full access to both request and response objects.
var app = express();
app.use(cookieParser());
app.use(bodyParser());
app.use(logger());
app.use(authentication());
app.get('/', function (req, res) {
// ...
});
app.listen(3000);
An Express application is essentially Node.js with a host of middleware functions; whether you want to customize your own middleware or take advantage of the framework's built-in middleware, Express makes the process natural and intuitive.
4. Template Engine:
Template engines allow developers to embed backend variables into HTML files; when requested, the template file is rendered to plain HTML with the variables interpolated with their actual values. By default, express-generator uses the Pug (originally known as Jade) template engine, but other options like Mustache and EJS also work with Express seamlessly. A sketch of wiring up a view engine follows.
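A minimal sketch (assuming EJS is installed and a views/index.ejs template exists):
const express = require('express');
const app = express();
app.set('view engine', 'ejs');   // render templates from the ./views folder with EJS
app.get('/', (req, res) => {
  // the title variable is interpolated into views/index.ejs
  res.render('index', { title: 'Hello from EJS' });
});
app.listen(3000);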
5. Database Integration:
As a minimal framework, Express does not treat database integration as a required part of its package, so it leans toward no specific database at all. Adopting a particular data-storage technology, be it MySQL, MongoDB, PostgreSQL, Redis, Elasticsearch or something else, is just a matter of installing the corresponding npm package as the database driver. These third-party drivers do not share a unified syntax for CRUD operations, which makes switching databases a big hassle and error-prone.
Q. Why should you separate Express 'app' and 'server'?
Keeping the API declaration separated from the network-related configuration (port, protocol, etc.) allows testing the API in-process, without performing network calls, with all the benefits that brings: fast test execution and code-coverage metrics (an in-process test sketch follows the code below). It also allows deploying the same API under flexible and different network conditions. Bonus: better separation of concerns and cleaner code.
API declaration, should reside in app.js:
var app = express();
app.use(bodyParser.json());
app.use("/api/events", events.API);
app.use("/api/forms", forms);
Server network declaration, should reside in /bin/www:
var app = require('../app');
var http = require('http');
/**
* Get port from environment and store in Express.
*/
var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);
/**
* Create HTTP server.
*/
var server = http.createServer(app);
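A minimal in-process test sketch (assuming mocha and supertest are installed and app.js exports the Express app with module.exports = app):
// test/app.test.js -- run with: npx mocha
const request = require('supertest');
const app = require('../app');   // the exported Express app; no server or port needed
describe('GET /api/forms', function () {
  it('responds with 200', function (done) {
    request(app)                  // supertest binds the app to an ephemeral port itself
      .get('/api/forms')
      .expect(200, done);
  });
});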
Q. How can you make sure your dependencies are safe?
The only option is to automate the update / security audit of your dependencies. For that there are free and paid options:
- npm outdated
- Trace by RisingStack
- NSP
- GreenKeeper
- Snyk
- npm audit
- npm audit fix
Q. What is npm in Node.js?
NPM stands for Node Package Manager. It provides following two main functionalities.
- It works as an online repository for Node.js packages/modules, which can be browsed at npmjs.com (the npm registry).
- It works as a command-line utility to install packages and do version and dependency management of Node.js packages. npm comes bundled with the Node.js installer. We can verify its version using the following command:
npm --version
NPM helps to install any Node.js module using the following command.
npm install <Module Name>
For example, following is the command to install a famous Node.js web framework module called express-
npm install express
Q. Why npm shrinkwrap is useful?
npm shrinkwrap lets you lock down the versions of installed packages and their descendant packages. It helps you use the same package versions in all environments (development, staging, production) and also improves download and installation speed. Having the same package versions in all environments helps you test and deploy with confidence: if all tests pass on one machine, you can be sure they will pass on the others, because you know you are running the same code.
npm shrinkwrap
It creates a new npm-shrinkwrap.json file with information about all the packages you use.
Q. What is the difference between req.params and req.query?
Params are part of the URL path and are also known as URL variables. For example, if you have the route /books/:id, the id property will be available as req.params.id. req.params defaults to an empty object {}.
A query string is the part of a URL after the ? that assigns values to named parameters. It is commonly appended to a base URL by a web browser or other client application, for example when submitting an HTML form, and it is the last part of the URL. In Express, the parsed query-string values are available on req.query (see the sketch below).
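A minimal sketch contrasting the two (the route and field names are illustrative):
const express = require('express');
const app = express();
// GET /books/42?format=json
app.get('/books/:id', (req, res) => {
  res.json({
    id: req.params.id,        // "42"   -- captured from the :id path segment
    format: req.query.format  // "json" -- parsed from the query string
  });
});
app.listen(3000);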
Q. How to make post request in Node.js?
Following code snippet can be used to make a Post Request in Node.js.
var request = require('request');
request.post('http://www.example.com/action', { form: { key: 'value' } },
function (error, response, body) {
if (!error && response.statusCode == 200) {
console.log(body)
}
});
Q. What are Promises in Node.js?
A promise lets you associate handlers with an asynchronous action's eventual success value or failure reason. This lets asynchronous methods return values like synchronous methods: instead of the final value, the asynchronous method returns a promise for the value at some point in the future.
Early promise implementations in Node.js promised to do some work and then had separate callbacks that would be executed for success and failure, as well as for handling timeouts. Another way to think of promises is as emitters that can emit only two events: success and error. The cool thing about promises is that you can combine them into dependency chains (do Promise C only when Promise A and Promise B complete).
The core idea behind promises is that a promise represents the result of an asynchronous operation. A promise is in one of three different states:
- pending - The initial state of a promise.
- fulfilled - The state of a promise representing a successful operation.
- rejected - The state of a promise representing a failed operation. Once a promise is fulfilled or rejected, it is immutable (i.e. it can never change again).
Creating a Promise:
var myPromise = new Promise(function(resolve, reject) {
  // do the asynchronous work, then call resolve(value) on success
  setTimeout(function() { resolve('done'); }, 1000); // or reject(err) on failure
});
myPromise.then(function(value) { console.log(value); }); // 'done'
Q. How can you secure your HTTP cookies against XSS attacks?
1. When the web server sets cookies, it can provide some additional attributes to make sure the cookies won't be accessible to malicious JavaScript. One such attribute is HttpOnly.
Set-Cookie: [name]=[value]; HttpOnly
The HttpOnly attribute makes sure a cookie cannot be read from client-side JavaScript (e.g. via document.cookie), which prevents scripts injected through XSS from stealing it.
2. The "Secure" attribute can make sure the cookies are sent over secured channel only.
Set-Cookie: [name]=[value]; Secure
3. The web server can use X-XSS-Protection response header to make sure pages do not load when they detect reflected cross-site scripting (XSS) attacks.
X-XSS-Protection: 1; mode=block
4. The web server can use HTTP Content-Security-Policy response header to control what resources a user agent is allowed to load for a certain page. It can help to prevent various types of attacks like Cross Site Scripting (XSS) and data injection attacks.
Content-Security-Policy: default-src 'self' *.sometrustedwebsite.com
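Returning to points 1 and 2, a minimal Express sketch that sets such a cookie (the cookie name and value are illustrative):
const express = require('express');
const app = express();
app.get('/login', (req, res) => {
  // HttpOnly keeps the cookie away from client-side scripts,
  // Secure restricts it to HTTPS connections
  res.cookie('sessionId', 'abc123', { httpOnly: true, secure: true });
  res.send('Cookie set');
});
app.listen(3000);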
Q. How to make an HTTP POST request using Node.js?
const https = require('https')
const obj = {
"userId":1,
"id":1,
"title":"whatever",
"completed":false
}
const data = JSON.stringify(obj)
const options = {
hostname: 'jsonplaceholder.typicode.com',
port: 443,
path: '/todos',
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': data.length
}
}
const req = https.request(options, res => {
console.log(`statusCode: ${res.statusCode}`)
res.on('data', d => {
process.stdout.write(d)
})
})
req.on('error', error => {
console.error(error)
})
req.write(data)
req.end()
Q. What is asynchronous programming in Node.js?
Asynchronous programming is a form of parallel programming that allows a unit of work to run separately from the primary application thread. When the work is complete, it notifies the main thread whether it succeeded or failed. There are numerous benefits to using it, such as improved application performance and enhanced responsiveness.
Q. What is the difference between Asynchronous and Non-blocking?
1. Asynchronous:
In an asynchronous model, a message that is sent does not receive an immediate reply, much like sending an email and not getting an answer right away. There is no dependency or fixed ordering, which improves system efficiency and performance. The server stores the request information and notifies the caller when the action is done.
2. Non-Blocking:
A non-blocking call responds immediately with whatever data is available; it does not block execution, and processing keeps running as further requests come in. If an answer cannot be retrieved, the API returns immediately with an error. Non-blocking is mostly used with I/O, and Node.js itself is based on a non-blocking I/O model. There are a few ways to signal that a non-blocking I/O operation has completed; the usual one is a callback function that is invoked when the operation finishes.
Q. How node.js prevents blocking code?
Blocking vs Non-blocking
Blocking is when the execution of additional JavaScript in the Node.js process must wait until a non-JavaScript operation completes. This happens because the event loop is unable to continue running JavaScript while a blocking operation is occurring.
Synchronous methods in the Node.js standard library that use libuv are the most commonly used blocking operations. Native modules may also have blocking methods. Blocking methods execute synchronously and non-blocking methods execute asynchronously.
Example:
// Blocking
const fs = require('fs');
const data = fs.readFileSync('/file.md'); // blocks here until file is read
console.log(data);
moreWork(); // will run after console.log
// Non-blocking
const fs = require('fs');
fs.readFile('/file.md', (err, data) => {
if (err) throw err;
console.log(data);
});
moreWork(); // will run before console.log
Q. Name the types of API functions in Node.js?
There are two types of API functions in Node.js:
- Asynchronous, Non-blocking functions
- Synchronous, Blocking functions
1. Blocking functions:
In a blocking operation, all other code is blocked from executing until an I/O event that is being waited on occurs. Blocking functions execute synchronously.
Example:
const fs = require('fs');
const data = fs.readFileSync('/file.md'); // blocks here until file is read
console.log(data);
// moreWork(); will run after console.log
The second line of code blocks the execution of any additional JavaScript until the entire file is read; moreWork() will only be called after console.log.
2. Non-blocking functions:
In a non-blocking operation, multiple I/O calls can be performed without the execution of the program being halted. Non-blocking functions execute asynchronously.
Example:
const fs = require('fs');
fs.readFile('/file.md', (err, data) => {
if (err) throw err;
console.log(data);
});
// moreWork(); will run before console.log
Since fs.readFile() is non-blocking, moreWork() does not have to wait for the file read to complete before being called. This allows for higher throughput.
Q. What is difference between put and patch?
PUT and PATCH are HTTP verbs, and they both relate to updating a resource. The main difference between PUT and PATCH requests is in the way the server processes the enclosed entity to modify the resource identified by the Request-URI.
In a PUT request, the enclosed entity is considered to be a modified version of the resource stored on the origin server, and the client is requesting that the stored version be replaced.
With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.
Also, another difference is that when you want to update a resource with PUT request, you have to send the full payload as the request whereas with PATCH, you only send the parameters which you want to update.
The most commonly used HTTP verbs POST, GET, PUT, and DELETE correspond to the CRUD (Create, Read, Update and Delete) operations in a database. We write these HTTP verbs in upper case. Below is the comparison between them:
- POST - create
- GET - read
- PUT - update
- DELETE - delete
PATCH: Submits a partial modification to a resource. If you only need to update one field for the resource, you may want to use the PATCH method.
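A minimal Express sketch contrasting the two verbs (the /users/1 route and the in-memory object are illustrative):
const express = require('express');
const app = express();
app.use(express.json());
let user = { id: 1, name: 'Alice', email: 'alice@example.com' };
// PUT replaces the whole resource, so the client must send the full payload
app.put('/users/1', (req, res) => {
  user = { id: 1, ...req.body };
  res.json(user);
});
// PATCH applies a partial modification; only the supplied fields change
app.patch('/users/1', (req, res) => {
  user = { ...user, ...req.body };
  res.json(user);
});
app.listen(3000);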
Q. What is difference between promises and async-await in Node.js?
1. Promises:
A promise is used to handle the asynchronous result of an operation. JavaScript is designed to not wait for an asynchronous block of code to completely execute before other synchronous parts of the code can run. With Promises, we can defer the execution of a code block until an async request is completed. This way, other operations can keep running without interruption.
States of Promises:
- Pending: Initial state, before the Promise succeeds or fails.
- Resolved: Completed Promise.
- Rejected: Failed Promise; an error is thrown.
Example:
function logFetch(url) {
return fetch(url)
.then(response => {
console.log(response);
})
.catch(err => {
console.error('fetch failed', err);
});
}
2. Async-Await:
Await is basically syntactic sugar for Promises. It makes asynchronous code look more like synchronous/procedural code, which is easier for humans to understand.
Putting the keyword async before a function tells the function to return a Promise. If the code returns something that is not a Promise, then JavaScript automatically wraps it into a resolved promise with that value. The await keyword simply makes JavaScript wait until that Promise settles and then returns its result.
Example:
async function logFetch(url) {
try {
const response = await fetch(url);
console.log(response);
}
catch (err) {
console.log('fetch failed', err);
}
}
Q. Mention the steps by which you can use async in Node.js?
ES2017 introduced asynchronous functions. Async functions are essentially a cleaner way to work with asynchronous code in JavaScript.
1. Async/Await:
- The newest way to write asynchronous code in JavaScript.
- It is non-blocking (just like promises and callbacks).
- Async/Await was created to simplify the process of working with and writing chained promises.
- Async functions return a Promise. If the function throws an error, the Promise will be rejected. If the function returns a value, the Promise will be resolved.
Syntax
// Normal Function
function add(x,y){
return x + y;
}
// Async Function
async function add(x,y){
return x + y;
}
2. Await:
Async functions can make use of the await expression. This will pause the async function and wait for the Promise to resolve prior to moving on.
Example:
function doubleAfter2Seconds(x) {
return new Promise(resolve => {
setTimeout(() => {
resolve(x * 2);
}, 2000);
});
}
async function addAsync(x) {
const a = await doubleAfter2Seconds(10);
const b = await doubleAfter2Seconds(20);
const c = await doubleAfter2Seconds(30);
return x + a + b + c;
}
addAsync(10).then((sum) => {
console.log(sum);
});
Q. How to use Q promise in Node.js?
A promise is an object that represents the return value or the thrown exception that the function may eventually provide. A promise can also be used as a proxy for a remote object to overcome latency.
Promise is relatively an easy implementation for asynchronous operation. The promise object returned from the function represents an operation which is not completed yet, but it guarantees to the caller of the operation that the operation will be completed in future.
Promise has the following states:
- Pending - asynchronous operation is not yet completed.
- Fulfilled - asynchronous operation is completed successfully.
- Rejected - asynchronous operation is terminated with an error.
- Settled - asynchronous operation is either fulfilled or rejected.
- Callback - function that is executed if the promise is fulfilled with a value.
- Errback - function is executed if the promise is rejected.
Moving to Promises from Callback
On the first pass, promises can mitigate the Pyramid of Doom: the situation where code marches to the right faster than it marches forward.
step1(function (value1) {
step2(value1, function(value2) {
step3(value2, function(value3) {
step4(value3, function(value4) {
// Do something with value4
});
});
});
});
With a promise library such as Q, we can flatten the pyramid:
Q.fcall(promisedStep1)
.then(promisedStep2)
.then(promisedStep3)
.then(promisedStep4)
.then(function (value4) {
// Do something with value4
})
.catch(function (error) {
// Handle any error from all above steps
})
.done();
Q. What are async functions in Node?
Q. How do you convert an existing callback API to promises?
ToDo
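One common way to convert an existing callback API to promises is util.promisify; a minimal sketch using the built-in callback-style fs.readFile (assuming an input.txt file exists):
const util = require('util');
const fs = require('fs');
// util.promisify wraps any function with an (err, result) callback signature
const readFileAsync = util.promisify(fs.readFile);
readFileAsync('input.txt', 'utf8')
  .then((data) => console.log(data))
  .catch((err) => console.error(err));
// The same wrapping can also be written by hand:
function readFilePromise(path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, 'utf8', (err, data) => {
      if (err) return reject(err);
      resolve(data);
    });
  });
}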
# NODE.JS ROUTING
Q. How does routing work in Node.js?
Routing defines the way in which the client requests are handled by the application endpoints. We define routing using methods of the Express app object that correspond to HTTP methods: for example, app.get() to handle GET requests, app.post() to handle POST requests, app.all() to handle all HTTP methods, and app.use() to specify middleware as the callback function.
These routing methods "listen" for requests that match the specified route(s) and method(s), and when a match is detected, the specified callback function is called.
Syntax:
app.METHOD(PATH, HANDLER)
Where:
- app is an instance of express.
- METHOD is an HTTP request method.
- PATH is a path on the server.
- HANDLER is the function executed when the route is matched.
a) Route methods:
// GET method route
app.get('/', function (req, res) {
res.send('GET request')
})
// POST method route
app.post('/login', function (req, res) {
res.send('POST request')
})
// ALL method route
app.all('/secret', function (req, res, next) {
console.log('Accessing the secret section ...')
next() // pass control to the next handler
})
b) Route paths:
Route paths, in combination with a request method, define the endpoints at which requests can be made. Route paths can be strings, string patterns, or regular expressions.
The characters ?, +, *, and () are subsets of their regular expression counterparts. The hyphen (-) and the dot (.) are interpreted literally by string-based paths.
Example:
// This route path will match requests to /about.
app.get('/about', function (req, res) {
res.send('about')
})
// This route path will match acd and abcd.
app.get('/ab?cd', function (req, res) {
res.send('ab?cd')
})
// This route path will match butterfly and dragonfly
app.get(/.*fly$/, function (req, res) {
res.send('/.*fly$/')
})
c) Route parameters:
Route parameters are named URL segments that are used to capture the values specified at their position in the URL. The captured values are populated in the req.params object, with the name of the route parameter specified in the path as their respective keys.
Example:
app.get('/users/:userId', function (req, res) {
res.send(req.params)
})
Response methods:
Method | Description |
---|---|
res.download() | Prompt a file to be downloaded. |
res.end() | End the response process. |
res.json() | Send a JSON response. |
res.jsonp() | Send a JSON response with JSONP support. |
res.redirect() | Redirect a request. |
res.render() | Render a view template. |
res.send() | Send a response of various types. |
res.sendFile() | Send a file as an octet stream. |
res.sendStatus() | Set the response status code and send its string representation as the response body. |
d) Router method:
var express = require('express')
var router = express.Router()
// middleware that is specific to this router
router.use(function timeLog (req, res, next) {
console.log('Time: ', Date.now())
next()
})
// define the home page route
router.get('/', function (req, res) {
res.send('Birds home page')
})
// define the about route
router.get('/about', function (req, res) {
res.send('About birds')
})
module.exports = router
# NODE.JS ERROR HANDLING
Q. What is the preferred method of resolving unhandled exceptions in Node.js?
Unhandled exceptions in Node.js can be caught at the process level by attaching a handler for the uncaughtException event.
process.on('uncaughtException', function(err) {
console.log('Caught exception: ' + err);
});
process is a global object that provides information about, and control over, the current Node.js process. It is an instance of EventEmitter, so listener functions can be attached to the events it emits.
A few such events are (see the sketch below):
- exit
- disconnect
- uncaughtException
- rejectionHandled
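A minimal sketch of attaching handlers for some of these events:
process.on('uncaughtException', (err) => {
  console.error('Caught exception:', err);
  process.exit(1);  // the process is in an unknown state, so exit explicitly
});
process.on('rejectionHandled', (promise) => {
  console.log('A previously unhandled rejection now has a handler');
});
process.on('exit', (code) => {
  // only synchronous work is possible here
  console.log(`Process exiting with code ${code}`);
});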
# NODE.JS LOGGING
Q. How to debug an application in Node.js?
1. node-inspector:
npm install -g node-inspector
Run
node-debug app.js
2. Debugging:
- Debugger
- Node Inspector
- Visual Studio Code
- Cloud9
- Brackets
3. Profiling:
1. node --prof ./app.js
2. node --prof-process ./the-generated-log-file
4. Heapdumps:
- node-heapdump with Chrome Developer Tools
5. Tracing:
- Interactive Stack Traces with TraceGL
6. Logging:
Libraries that output debugging information
- Caterpillar
- Tracer
- scribbles
Libraries that enhance stack trace information
- Longjohn
# NODE.JS TESTING
Q. What is a stub?
Stubbing and verification for Node.js tests enables you to validate and override the behaviour of nested pieces of code such as methods, require() calls and npm modules, or even instances of classes. This style of library is inspired by node-gently, MockJS and mock-require.
Features of Stub:
- Produces simple, lightweight Objects capable of extending down their tree
- Compatible with Nodejs
- Easily extendable directly or through an ExtensionManager
- Comes with predefined, usable extensions
Stubs are functions/programs that simulate the behaviour of components/modules. Stubs provide canned answers to function calls made during test cases. You can also assert what these stubs were called with.
A use-case can be a file read, when you do not want to read an actual file:
var sinon = require('sinon');
var chai = require('chai');
chai.use(require('sinon-chai'));  // enables the .called assertion
var expect = chai.expect;
var fs = require('fs');
// stub fs.readFile so the test never touches the real file system
var readFileStub = sinon.stub(fs, 'readFile').callsFake(function (path, cb) {
  return cb(null, 'filecontent');
});
expect(readFileStub).to.be.called;
readFileStub.restore();
Q. What is a test pyramid?
The "Test Pyramid" is a metaphor that tells us to group software tests into buckets of different granularity. It also gives an idea of how many tests we should have in each of these groups. It shows which kinds of tests you should be looking for in the different levels of the pyramid and gives practical examples on how these can be implemented.
Mike Cohn's original test pyramid consists of three layers (bottom to top):
- Unit Tests
- Service Tests
- User Interface Tests
# NODE.JS MISCELLANEOUS
Q. What is crypto in Node.js?
The Node.js crypto module supports cryptography. It provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign and verify functions.
Hash: A hash is a fixed-length string of bits that is procedurally and deterministically generated from some arbitrary block of source data.
HMAC: HMAC stands for Hash-based Message Authentication Code. It is a process for applying a hash algorithm to both data and a secret key that results in a single final hash.
Hashing example using Hash and HMAC:
const crypto = require('crypto');
const secret = 'abcdefg';
const hash = crypto.createHmac('sha256', secret)
.update('Welcome to JavaTpoint')
.digest('hex');
console.log(hash);
- Encryption example using Cipher
const crypto = require('crypto');
const cipher = crypto.createCipher('aes192', 'a password');
var encrypted = cipher.update('Hello JavaTpoint', 'utf8', 'hex');
encrypted += cipher.final('hex');
console.log(encrypted);
- Decryption example using Decipher
const crypto = require('crypto');
const decipher = crypto.createDecipher('aes192', 'a password');
var encrypted = '4ce3b761d58398aed30d5af898a0656a3174d9c7d7502e781e83cf6b9fb836d5';
var decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
Q. How to execute an external program from within Node.js?
const { exec } = require('child_process');
exec('"/path/to/test file/test.sh" arg1 arg2');
exec('echo "The \\$HOME variable is $HOME"');
Q. What is REPL?
REPL (Read, Eval, Print, Loop) is a computer environment similar to a shell (Unix/Linux) or the command prompt. Node.js comes with the REPL environment when it is installed. The system interacts with the user through the output of the commands/expressions entered, which is useful for writing and debugging code. The working of the REPL can be understood from its full form:
- Read: It reads the input from the user and parses it into a JavaScript data structure, which is then stored in memory.
- Eval: The parsed JavaScript data structure is evaluated for the result.
- Print: The result is printed after the evaluation.
- Loop: The command loops until the user exits. To come out of the Node REPL, press Ctrl+C twice.
Simple Expression
$ node
> 10 + 20
30
> 10 + ( 20 * 30 ) - 40
570
>
Q. What does the runtime environment mean in Node.js?
The Node.js runtime is the software stack responsible for installing your web service's code and its dependencies and running your service.
The Node.js runtime for App Engine in the standard environment is declared in the app.yaml file:
runtime: nodejs10
The runtime environment is literally just the environment your application is running in. It can describe both the hardware and the software that run your application: how much RAM, which version of Node, which operating system, and how many CPU cores can all be referenced when talking about a runtime environment.
Q. Explain usage of NODE_ENV?
NODE_ENV is an environment variable made popular by the express web server framework. When a node application is run, it can check the value of the environment variable and do different things based on the value.
For example, when we work on a project there are usually production and development environments. We don't need to use caching in the development environment, so we set:
$ NODE_ENV=development
and use the code below
if (process.env.NODE_ENV === 'development')
useCaching = false;
With that, if the project runs in production it will use caching.
Q. How assert works in Node.js?
The assert module provides a way of testing expressions. If the expression evaluates to 0 or false, an AssertionError is thrown, and the program terminates if the error is not caught.
This module was built to be used internally by Node.js.
// Sample usage
const assert = require('assert');
assert(50 > 70, "50 is less than 70.");
Q. When should you use npm and when Yarn?
- npm
It is the default method for managing packages in the Node.js runtime environment. It relies upon a command-line client and a database of public and premium packages known as the npm registry. Users can access the registry via the client and browse the many packages available through the npm website. Both npm and its registry are managed by npm, Inc.
node -v
npm -v
- Yarn
Yarn was developed by Facebook in an attempt to resolve some of npm's shortcomings. Yarn isn't technically a replacement for npm, since it relies on modules from the npm registry. Think of Yarn as a new installer that still relies upon the same npm structure: the registry itself hasn't changed, but the installation method is different. Since Yarn gives you access to the same packages as npm, moving from npm to Yarn doesn't require you to make any changes to your workflow.
npm install yarn --global
Comparing Yarn vs npm
- Fast: Yarn caches every package it downloads so it never needs to again. It also parallelizes operations to maximize resource utilization so install times are faster than ever.
- Reliable: Using a detailed, but concise, lockfile format, and a deterministic algorithm for installs, Yarn is able to guarantee that an install that worked on one system will work exactly the same way on any other system.
- Secure: Yarn uses checksums to verify the integrity of every installed package before its code is executed.
- Offline Mode: If you've installed a package before, you can install it again without any internet connection.
- Deterministic: The same dependencies will be installed the same exact way across every machine regardless of install order.
- Network Performance: Yarn efficiently queues up requests and avoids request waterfalls in order to maximize network utilization.
- Multiple Registries: Install any package from either npm or Bower and keep your package workflow the same.
- Network Resilience: A single request failing won't cause an install to fail. Requests are retried upon failure.
- Flat Mode: Resolve mismatching versions of dependencies to a single version to avoid creating duplicates.
Q. What is the use of DNS module in Node.js?
The dns module provides name resolution, either using the facility provided by the operating system or by performing an actual DNS lookup. There is no need to memorise IP addresses: DNS servers provide a nifty solution by converting domain or subdomain names to IP addresses. This module provides an asynchronous network wrapper and can be imported using the following syntax:
const dns = require('dns');
Example: the dns.lookup() function
const dns = require('dns');
dns.lookup('www.google.com', (err, addresses, family) => {
console.log('addresses:', addresses);
console.log('family:',family);
});
Example: the resolve4() and reverse() functions
const dns = require('dns');
dns.resolve4('www.google.com', (err, addresses) => {
if (err) throw err;
console.log(`addresses: ${JSON.stringify(addresses)}`);
addresses.forEach((a) => {
dns.reverse(a, (err, hostnames) => {
if (err) {
throw err;
}
console.log(`reverse for ${a}: ${JSON.stringify(hostnames)}`);
});
});
});
Example: print the localhost name using the lookupService() function
const dns = require('dns');
dns.lookupService('127.0.0.1', 22, (err, hostname, service) => {
console.log(hostname, service);
// Prints: localhost
});
Q. What is JIT and how is it related to Node.js?
Node.js relies on the V8 JavaScript engine to execute JavaScript code. V8 is an open-source JavaScript engine developed by Google and written in C++.
It is used for both client side (Google Chrome) and server side (node.js) JavaScript applications. A central piece of the V8 engine that allows it to execute JavaScript at high speed is the JIT (Just In Time) compiler.
This is a dynamic compiler that can optimize code during runtime. When V8 was first built, the JIT compiler was dubbed Full-Codegen; the V8 team later implemented Crankshaft, which included many performance optimizations that Full-Codegen did not.
V8 was first designed to increase the performance of JavaScript execution inside web browsers. In order to obtain speed, V8 translates JavaScript code into more efficient machine code instead of using an interpreter. It compiles JavaScript into machine code at execution time by implementing a JIT (Just-In-Time) compiler, as many modern JavaScript engines such as SpiderMonkey or Rhino (Mozilla) do. The main difference was that the original V8 did not produce bytecode or any intermediate code (current V8 versions use the Ignition bytecode interpreter together with the TurboFan optimizing compiler).
Q. How to generate and verify the checksum of a given string in Node.js?
The checksum (aka hash sum) calculation is a one-way process of mapping an extensive data set of variable length (e.g., message, file), to a smaller data set of a fixed length (hash). The length depends on a hashing algorithm.
For checksum generation, we can use the built-in crypto module. The module's createHash(algorithm) method creates a checksum (hash) generator. The available algorithms depend on the version of OpenSSL on the platform.
Example:
const crypto = require('crypto');
// To get a list of all available hash algorithms
crypto.getHashes() // [ 'md5', 'sha1', 'sha3-256', ... ]
// Create hash of SHA1 type
const key = "MY_SECRET_KEY";
// 'digest' is the output of hash function containing
// only hexadecimal digits
hashPwd = crypto.createHash('sha1').update(key).digest('hex');
console.log(hashPwd); //ef5225a03e4f9cc953ab3c4dd41f5c4db7dc2e5b
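Verification is then a matter of recomputing the hash of the received string and comparing it with the expected checksum; a small sketch (the expected value is the output shown above):
const crypto = require('crypto');
function verifyChecksum(input, expectedHex) {
  const actualHex = crypto.createHash('sha1').update(input).digest('hex');
  // timingSafeEqual avoids leaking information through comparison timing
  return crypto.timingSafeEqual(Buffer.from(actualHex, 'hex'),
                                Buffer.from(expectedHex, 'hex'));
}
console.log(verifyChecksum('MY_SECRET_KEY',
  'ef5225a03e4f9cc953ab3c4dd41f5c4db7dc2e5b')); // true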