
MongoDB

What is MongoDB

• It is an open-source document database that provides high performance, high availability, and automatic scaling, originally developed and supported by a company named 10gen.
• MongoDB is available for free under a General Public License, and it is also available under a commercial license from the manufacturer.
• MongoDB was designed to work with commodity servers. It is now used by companies of all sizes, across all industries.
History of MongoDB

• The initial development of MongoDB began in 2007, when the company was building a platform as a service similar to Windows Azure.
• MongoDB was developed by a New York based organization named 10gen, which is now known as MongoDB Inc. It was initially developed as a PaaS (Platform as a Service).
• Later, in 2009, it was introduced to the market as an open-source database server, maintained and supported by MongoDB Inc.
• The first production-ready release is considered to be version 1.4, which was released in March 2010.
• MongoDB 7.0, released in 2023, was the latest stable version at the time of writing.
Purpose of Building MongoDB

• Modern applications require big data handling, fast feature development, and flexible deployment, and older database systems were not competent enough, so MongoDB was needed.
The primary purpose of building
MongoDB is:
• Scalability
• Performance
• High Availability
• Scaling from single server deployments to large, complex multi-site
architectures.
• Key points of MongoDB
• Develop Faster
• Deploy Easier
• Scale Bigger
Example of Document-Oriented
Database
• MongoDB is a document-oriented database; document-oriented storage is its key feature. It is very simple, and you can program against it easily.
• MongoDB stores data as documents, so it is known as a document-oriented database. For example, two documents in the same collection can have different fields:

{ FirstName: "John", Address: "Detroit", Spouse: [ { Name: "Angela" } ] }
{ FirstName: "John", Address: "Wick" }
Features of MongoDB
1. Support ad hoc queries
2. Indexing
3. Replication
4. Duplication of data
5. Load balancing
6. Supports MapReduce and aggregation tools.
7. Uses JavaScript instead of stored procedures.
8. It is a schema-less database written in C++.
9. Provides high performance.
10. Stores files of any size easily without complicating your
stack.
11. Easy to administer in the case of failures.
Indexing
• Indexes support efficient execution of queries in MongoDB. Without
indexes, MongoDB must scan every document in a collection to return
query results. If an appropriate index exists for a query, MongoDB
uses the index to limit the number of documents it must scan.
• Although indexes improve query performance, adding an index has
negative performance impact for write operations. For collections with
a high write-to-read ratio, indexes are expensive because each insert
must also update any indexes.
• The _id field is indexed by default in MongoDB.
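The scan-versus-index trade-off above can be sketched in plain JavaScript. This is a minimal illustration with names of our own choosing; a real MongoDB index is a B-tree, but a Map captures the "jump straight to matching documents" idea:

```javascript
// Illustrative only (all names are ours): contrast a full collection scan
// with a lookup through a prebuilt index structure.
const collection = [
  { _id: 1, name: "Amy" },
  { _id: 2, name: "Ben" },
  { _id: 3, name: "Amy" },
];

// Without an index: examine every document in the collection.
function scan(docs, field, value) {
  return docs.filter((d) => d[field] === value);
}

// Build an "index" on one field: maps each value to its documents.
function buildIndex(docs, field) {
  const idx = new Map();
  for (const d of docs) {
    if (!idx.has(d[field])) idx.set(d[field], []);
    idx.get(d[field]).push(d);
  }
  return idx;
}

const nameIndex = buildIndex(collection, "name");
console.log(scan(collection, "name", "Amy").length);  // 2 (3 docs examined)
console.log(nameIndex.get("Amy").length);             // 2 (direct lookup)
```

Maintaining `nameIndex` on every insert is exactly the write-side cost the text describes.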
Index Names

• The default name for an index is the concatenation of the indexed keys and each key's direction in the index (1 or -1), using underscores as a separator. For example, an index created on { item: 1, quantity: -1 } has the name item_1_quantity_-1.

• You cannot rename an index once created. Instead, you must drop
and recreate the index with a new name.
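The naming convention above is mechanical enough to express as a small helper. This is a sketch (the function name is ours, not a driver API) that reproduces the documented rule:

```javascript
// Sketch (helper name is ours): derive MongoDB's default index name from
// a key specification by joining each field and its direction with "_".
function defaultIndexName(keys) {
  return Object.entries(keys)
    .map(([field, direction]) => `${field}_${direction}`)
    .join("_");
}

console.log(defaultIndexName({ item: 1, quantity: -1 })); // "item_1_quantity_-1"
console.log(defaultIndexName({ name: -1 }));              // "name_-1"
```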
Create an Index

• Indexes support efficient execution of queries in MongoDB. If your


application is repeatedly running queries on the same fields, you can
create an index on those fields to improve performance for those
queries.

• To create an index, use the createIndex() shell method or equivalent


method for your driver. This page shows examples for the MongoDB
Shell and drivers.
Example

• This example creates a single-key descending index on the name field (Node.js driver, callback style):

collection.createIndex({ name: -1 }, function (err, result) {
  if (err) throw err;
  console.log(result);
});
Index Sort Order
For a single-field index, the sort order (ascending or descending) of the index key
does not matter because MongoDB can traverse the index in either direction.
Results
• To confirm that the index was created, use mongosh to run the db.collection.getIndexes() method:

• db.collection.getIndexes()

• Output:

• [
• { v: 2, key: { _id: 1 }, name: '_id_' },
• { v: 2, key: { name: -1 }, name: 'name_-1' }
• ]
Specify an Index Name
• When you create an index, you can give the index a custom name. Giving
your index a name helps distinguish different indexes on your collection. For
example, you can more easily identify the indexes used by a query in the
query plan's explain results if your indexes have distinct names.

• To specify the index name, include the name option when you create the
index:

• db.<collection>.createIndex(
• { <field>: <value> },
• { name: "<indexName>" }
•)
Drop an Index

• You can remove a specific index from a collection. You may need to drop an index if
you see a negative performance impact, want to replace it with a new index, or no
longer need the index.

• To drop an index, use one of the following shell methods:

• db.collection.dropIndex()
Drops a specific index from the collection.
• db.collection.dropIndexes()
Drops all removable indexes from the collection or an array of indexes, if specified.
NoSQL Database

• NoSQL Database refers to a non-SQL or non-relational database.
• It provides a mechanism for the storage and retrieval of data other than the tabular-relations model used in relational databases. A NoSQL database doesn't use tables for storing data. It is generally used for big data and real-time web applications.
NoSQL MongoDB alternatives

1. Apache Cassandra
2. Redis
3. OrientDB
4. DynamoDB
5. CouchDB
6. ArangoDB
7. RethinkDB
Redis

• Redis stores data in RAM, so you can access data directly from
memory. While this provides low-latency responses, it also
limits the volume of data you can store. Redis saves the dataset
to disk through snapshotting and append-only file (AOF)
logging, which provides data durability.
• Redis stores data as key-value pairs, where each data entry has a unique key. It supports various data types like sorted sets, hashes, sets, lists, and strings. Keys can be of any length.
Key differences: Redis vs. MongoDB
• Scaling
Horizontal scaling enables MongoDB to handle large volumes of data efficiently. It uses
sharding to distribute data across multiple regions and nodes. Redis doesn’t offer the same
degree of scalability as MongoDB. Redis only uses a single shard by default for primary
operations.
• Availability
Both MongoDB and Redis support availability through replication. However, MongoDB
supports a higher degree of availability by using replica sets. MongoDB can create up to 50
copies of your data. In contrast, Redis doesn’t provide automatic failover by default.
• Integrity
MongoDB supports multi-document atomic, consistent, isolated, and durable (ACID)
transactions. Conversely, Redis does not provide built-in ACID support.
• Query language
MongoDB provides a high level of flexibility in its querying, even performing complex
spatial computations and data analysis functions. In contrast, Redis is optimized for fast
key-value access operations rather than complex querying and searching capabilities.
CouchDB
• Apache CouchDB was developed by the Apache Software Foundation and initially released in 2005. CouchDB is written in Erlang. It is an open-source database that uses different formats and protocols to store, transfer, and process its data. Apache CouchDB uses JSON to store data and JavaScript as its query language, using MapReduce. Documents are the primary unit of data in CouchDB, and they also include metadata. Document fields are uniquely named and contain values of varying types, and there is no set limit on text size or element count.
CouchDB vs MongoDB
• Format: CouchDB stores data in JSON format; MongoDB stores data in BSON format.
• Structure: a CouchDB database contains documents; a MongoDB database contains collections.
• CAP trade-off: CouchDB favors availability; MongoDB favors consistency.
• Consistency model: CouchDB is eventually consistent; MongoDB is strongly consistent.
• Speed: MongoDB is faster than CouchDB and provides faster read speeds.
• Querying: CouchDB follows the Map/Reduce query method; MongoDB follows Map/Reduce along with a collection- and object-based query language.
• Interface: CouchDB uses an HTTP/REST-based interface; MongoDB uses a TCP/IP-based interface.
• Mobile: CouchDB provides support for mobile devices and can run on Apple iOS and Android devices; MongoDB provides no mobile support.
• Replication: CouchDB offers master-master and master-slave replication; MongoDB offers master-slave replication.
• Growth: CouchDB is not suitable for a rapidly growing database where the structure is not clearly defined from the beginning; MongoDB is an apt choice for a rapidly growing database.
• Learning curve: CouchDB uses map-reduce functions, which is difficult for users with a traditional SQL background; MongoDB is easier to learn as its syntax is closer to SQL.
• Concurrency: CouchDB follows MVCC (Multi-Version Concurrency Control); MongoDB follows update-in-place.
History behind the creation of NoSQL
Databases
• In the early 1970s, flat file systems were used. Data were stored in flat files, and the biggest problem with flat files was that each company implemented its own flat files with no standards. It was very difficult to store data in, and retrieve data from, such files because there was no standard way to store data.
• Then the relational database was created by E.F. Codd, and these databases answered the question of having no standard way to store data. But later, relational databases also ran into a problem: they could not handle big data. Due to this problem, there was a need for a database that could handle every type of problem, and so the NoSQL database was developed.
Advantages of NoSQL

• It supports query language.


• It provides fast performance.
• It provides horizontal scalability.
MongoDB Advantages
• MongoDB is schema-less. It is a document database in which one collection holds different documents.
• The number of fields, content, and size of documents may differ from one document to another.
• The structure of a single object is clear in MongoDB.
• There are no complex joins in MongoDB.
• MongoDB provides the facility of deep queries because it supports powerful dynamic queries on documents.
• It is very easy to scale.
• It keeps the working set in internal memory, which is the reason for its fast access.
• Easy to use
• Lightweight
• Considerably faster than an RDBMS
Where MongoDB should be used

• Big and complex data


• Mobile and social infrastructure
• Content management and delivery
• User data management
• Data hub
Performance analysis of MongoDB and
RDBMS
• In a relational database (RDBMS), tables are used as the storage elements, while in MongoDB a collection is used.
• In an RDBMS, we have multiple schemas, and in each schema we create tables to store data, while MongoDB is a document-oriented database in which data is written in BSON format, a JSON-like format.
• MongoDB is claimed to be up to 100 times faster than traditional database systems.
MongoDB Datatypes
Data Types and Descriptions
String: the most commonly used datatype. It is used to store text. A string must be valid UTF-8 in MongoDB.
Integer: used to store numeric values. It can be 32-bit or 64-bit, depending on the server you are using.
Boolean: used to store boolean (YES/NO) values.
Double: stores floating-point values.
Min/Max Keys: compares a value against the lowest and highest BSON elements.
Arrays: used to store a list of multiple values under a single key.
Object: used for embedded documents.
Null: used to store null values.
Symbol: generally used for languages that use a specific symbol type.
Date: stores a date or time in UNIX time format. It also makes it possible to specify your own date-time by creating a Date object and passing the day, month, and year into it.
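When working through the Node.js driver, these BSON types are mostly inferred from ordinary JavaScript values. The document below is a sketch (field names and values are illustrative) showing that mapping:

```javascript
// Illustrative document: each field corresponds to one of the BSON
// datatypes listed above; the driver infers the BSON type from the
// JavaScript value on insert.
const doc = {
  name: "Rajan",           // String
  age: 30,                 // Integer (32-bit for small whole numbers)
  price: 99.5,             // Double
  active: true,            // Boolean
  tags: ["db", "nosql"],   // Array
  details: { city: "NY" }, // Object (embedded document)
  spouse: null,            // Null
  created: new Date(),     // Date
};

console.log(typeof doc.name, typeof doc.age, Array.isArray(doc.tags));
```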
How to download MongoDB

• You can download an appropriate version of MongoDB that your system supports from the link "http://www.mongodb.org/downloads" to install MongoDB on Windows. You should choose the correct version of MongoDB according to your computer's Windows version. If you are not sure which Windows version you are using, open your command prompt and execute this command:
C:\> wmic os get osarchitecture
How to set up the MongoDB
environment

MongoDB requires a data directory to store all of its information. By default, the data directory path is \data\db. You can create this folder from the command prompt:
md \data\db
For example:
• If you want to start MongoDB, run mongod.exe.
• You can do it from the command prompt:
C:\Program Files\MongoDB\bin\mongod.exe
This will start the MongoDB database process. If you see the message "waiting for connections" in the console output, the mongod.exe process is running successfully.
When you connect to MongoDB through the mongo.exe shell, follow these steps:
Open another command prompt.
When connecting, specify the data directory if necessary.
Note: If you used the default data directory during MongoDB installation, there is no need to specify the data directory.
For example:
C:\mongodb\bin\mongo.exe
If you used a different data directory during MongoDB installation, specify the directory when connecting.
Which version of MongoDB is preferred?
• MongoDB takes advantage of memory-mapped files. Thus, the total
storage size of the server is limited to 2GB when a 32-bit build of
MongoDB is used.
• However, if you run a 64-bit build of MongoDB, you can access
virtually unlimited storage sizes. This makes the 64-bit build of
MongoDB the preferred option.
Data Modeling in MongoDB

In MongoDB, data has a flexible schema. This is totally different from SQL databases, where you have to determine and declare a table's schema before inserting data. MongoDB collections do not enforce document structure. The main challenge in data modeling is balancing the needs of the application, the performance characteristics of the database engine, and the data retrieval patterns.
Consider the following things while
designing the schema in MongoDB
• Always design the schema according to user requirements.
• Do joins on write operations, not on read operations.
• Objects that you want to use together should be combined into one document; otherwise, they should be separated (make sure there is no need for joins).
• Optimize your schema for the more frequent use cases.
• Do complex aggregation in the schema.
• You may duplicate data, but within limits, because disk space is cheaper than compute time.
Example for a Website

• Every post is distinct (contains a unique title, description, and URL).
• Every post can have one or more tags.
• Every post has the name of its publisher and the total number of likes.
• Each post can have zero or more comments, and each comment must contain a user name, message, date-time, and likes.
For the above requirements, a minimum of three tables would be required in an RDBMS.
MongoDB Schema
{
  _id: POST_ID,
  title: TITLE_OF_POST,
  description: POST_DESCRIPTION,
  by: POST_BY,
  url: URL_OF_POST,
  tags: [TAG1, TAG2, TAG3],
  likes: TOTAL_LIKES,
  comments: [
    {
      user: COMMENT_BY,
      message: TEXT,
      dateCreated: DATE_TIME,
      like: LIKES
    },
    {
      user: COMMENT_BY,
      message: TEXT,
      dateCreated: DATE_TIME,
      like: LIKES
    }
  ]
}
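The point of embedding comments in the post document is that no join is needed at read time. A minimal sketch (the data and helper are illustrative, not part of any API):

```javascript
// Illustrative post document following the schema above: comments are
// embedded, so one lookup returns everything needed to render the page.
const post = {
  _id: "p1",
  title: "Intro to MongoDB",
  tags: ["mongodb", "nosql"],
  likes: 10,
  comments: [
    { user: "alice", message: "Nice post", like: 2 },
    { user: "bob", message: "Thanks", like: 1 },
  ],
};

// No join across tables: the comments are already inside the document.
function commentCount(p) {
  return p.comments.length;
}

console.log(commentCount(post)); // 2
```

In an RDBMS, the same read would join the posts, tags, and comments tables.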
MongoDB Create Database
Use Database method:
• There is no create database command in MongoDB; MongoDB does not provide any command to create a database.
• It may look like a weird concept if you are from a traditional SQL background, where you need to create a database and tables and insert values into the tables manually.
• In MongoDB, you don't need to create a database manually, because MongoDB creates it automatically the first time you save a value into the defined collection.
• You also don't need to mention what you want to create; it is created automatically at the time you save the value into the defined collection.
How and when to create database
If there is no existing database, the following command is used to create a new database.
Syntax:
use DATABASE_NAME
If the database already exists, it will switch to the existing database.
Let's take an example to demonstrate how a database is created in MongoDB. In the following example, we are going to create a database "Rajandb".
See this example:
>use Rajandb
switched to db Rajandb
To check the currently selected database, use the command db:
>db
Rajandb
To check the database list, use the command show dbs:
>show dbs
local 0.078GB
Here, your created database "Rajandb" is not present in the list. Insert at least one document into it to display the database:
>db.movie.insert({"name": "RAJAN SALUJA"})
WriteResult({ "nInserted": 1 })
>show dbs
Rajandb 0.078GB
local 0.078GB

MongoDB Drop Database
The dropDatabase command is used to drop a database. It also deletes the
associated data files. It operates on the current database.
Syntax:
db.dropDatabase()
This syntax will delete the selected database. If you have not selected any database, it will delete the default "test" database.
To check the database list, use the command show dbs:
show dbs
Rajandb 0.078GB
local 0.078GB
If you want to delete the database “Rajandb", use the dropDatabase() command as follows:
>use Rajandb
switched to the db Rajandb
>db.dropDatabase()
{ "dropped": “Rajandb", "ok": 1}
Now check the list of databases:
>show dbs
MongoDB Create Collection
In MongoDB, db.createCollection(name, options) is used to create a collection. But usually you don't need to create a collection manually; MongoDB creates the collection automatically when you insert documents, as explained later. First, see how to create a collection:
Syntax:
db.createCollection(name, options)
Here,
• name: a string; specifies the name of the collection to be created.
>db.createCollection("SSSIT")
{ "ok" : 1 }
To check the created collection, use the command "show collections".
>show collections
SSSIT
How does MongoDB create collection
automatically
MongoDB creates collections automatically when you insert documents. For example, insert a document with the name "seomount" into a collection named SSSIT. The operation will create the collection if it does not currently exist.
>db.SSSIT.insert({"name" : "seomount"})
>show collections
SSSIT
If you want to see the inserted document, use the find() command.
Syntax:
db.collection_name.find()
MongoDB Drop collection

• In MongoDB, the db.collection.drop() method is used to drop a collection from a database. It completely removes the collection from the database, along with all indexes associated with the dropped collection.
• The db.collection.drop() method does not take any arguments and produces an error when it is called with an argument.
• Syntax:
db.COLLECTION_NAME.drop()
MongoDB insert documents

• In MongoDB, the db.collection.insert() method is used to add or insert new


documents into a collection in your database.
Upsert
• There are also two methods "db.collection.update()" method and
"db.collection.save()" method used for the same purpose. These methods add
new documents through an operation called upsert.
• Upsert is an operation that performs either an update of existing document or
an insert of new document if the document to modify does not exist.
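The upsert rule described above can be sketched in plain JavaScript over an in-memory array. This is an illustration of the semantics only (all names are ours), not the driver's implementation:

```javascript
// Sketch (all names are ours): update the matching document if it exists,
// otherwise insert a new one -- the essence of an upsert.
function upsert(docs, criteria, changes) {
  const match = docs.find((d) =>
    Object.entries(criteria).every(([k, v]) => d[k] === v)
  );
  if (match) {
    Object.assign(match, changes);          // update the existing document
  } else {
    docs.push({ ...criteria, ...changes }); // insert a new document
  }
  return docs;
}

const docs = [{ course: "java", duration: "6 months" }];
upsert(docs, { course: "java" }, { duration: "4 months" }); // updates in place
upsert(docs, { course: "node" }, { duration: "3 months" }); // inserts
console.log(docs.length); // 2
```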
Example
db.collection1.insert(
  {
    course: "java",
    details: {
      duration: "6 months",
      Trainer: "Rajan"
    },
    Batch: [ { size: "Small", qty: 15 }, { size: "Medium", qty: 25 } ],
    category: "Programming language"
  }
)
MongoDB insert multiple documents

var Allcourses = [
  {
    Course: "Web Designing",
    details: { Duration: "3 months", Trainer: "Rashmi Desai" },
    Batch: [ { size: "Medium", qty: 25 } ],
    category: "Programming Language"
  },
  {
    Course: "Java",
    details: { Duration: "6 months", Trainer: "Sonoo Jaiswal" },
    Batch: [ { size: "Small", qty: 5 }, { size: "Large", qty: 10 } ],
    category: "Programming Language"
  },
  {
    Course: ".Net",
    details: { Duration: "6 months", Trainer: "Prashant Verma" },
    Batch: [ { size: "Small", qty: 5 }, { size: "Medium", qty: 10 } ],
    category: "Programming Language"
  }
];
Insert the documents: pass this Allcourses array to the db.collection.insert() method to perform a bulk insert.
> db.Collection1.insert( Allcourses );
MongoDB update documents
In MongoDB, update() method is used to update or modify the existing documents of a collection.
Syntax:
db.COLLECTION_NAME.update(SELECTION_CRITERIA, UPDATED_DATA)
Example
Consider an example with a collection named collection1. Insert the following document into the collection:
db.collection1.insert(
  {
    course: "java",
    details: {
      duration: "6 months",
      Trainer: "Rajan"
    },
    Batch: [ { size: "Small", qty: 15 }, { size: "Medium", qty: 25 } ],
    category: "Programming language"
  }
)
>db.collection1.update({'course':'java'},{$set:{'course':'android'}})
• Check the updated document in the collection with db.collection1.find().
Updating a value inside an embedded array: consider a users collection with the document
{
  _id: '123',
  friends: [
    { name: 'allen', emails: [ { email: '11111', using: 'true' } ] }
  ]
}
To modify the email of the first friend's first email entry for the user whose _id is '123':
db.users.update({ _id: '123' }, { $set: { "friends.0.emails.0.email": '2222' } })


MongoDB Delete documents
• In MongoDB, the db.collection.remove() method is used to delete documents from a collection. The remove() method takes two parameters:
• 1. Deletion criteria: specifies which documents to remove from the collection.
• 2. justOne: removes only one document when set to true or 1.
• Syntax:
• db.collection_name.remove(DELETION_CRITERIA)
• Remove all documents:
db.collection1.remove({})
• Remove all documents that match a condition:
• The following example will remove all documents from the collection1 collection where the type field is equal to "programming language".
db.collection1.remove( { type : "programming language" } )
• Remove a single document that matches a condition:
• If you want to remove a single document that matches a specific condition, call the remove() method with the justOne parameter set to true or 1.
• The following example will remove a single document from the collection1 collection where the type field is equal to "programming language".
db.collection1.remove( { type : "programming language" }, 1 )
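The two behaviours of remove() can be sketched over an in-memory array. This is an illustration of the semantics only (all names are ours):

```javascript
// Sketch (all names are ours): delete every matching document, or only
// the first match when justOne is set -- mirroring remove()'s parameters.
function remove(docs, criteria, justOne = false) {
  const matches = (d) =>
    Object.entries(criteria).every(([k, v]) => d[k] === v);
  if (justOne) {
    const i = docs.findIndex(matches);
    if (i !== -1) docs.splice(i, 1);
  } else {
    // Iterate backwards so splicing doesn't skip elements.
    for (let i = docs.length - 1; i >= 0; i--) {
      if (matches(docs[i])) docs.splice(i, 1);
    }
  }
  return docs;
}

const a = [{ type: "lang" }, { type: "lang" }, { type: "db" }];
console.log(remove(a.map(d => ({ ...d })), { type: "lang" }).length);       // 1
console.log(remove(a.map(d => ({ ...d })), { type: "lang" }, true).length); // 2
```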
MongoDB Query documents
In MongoDB, the db.collection.find() method is used to retrieve documents from a collection. This method returns a cursor to the retrieved documents.
In the mongo shell, the db.collection.find() method performs a read operation and retrieves documents containing all of their fields.
• Note: You can also restrict the fields returned in the retrieved documents by using specific queries. For example, you can use the db.collection.findOne() method to return a single document; it works the same as the db.collection.find() method with a limit of 1.
• Syntax:
db.COLLECTION_NAME.find({})
• Select all documents in a collection:
To retrieve all documents from a collection, leave the query document ({}) empty, like this:
db.COLLECTION_NAME.find()
• For example: if you have a collection named "canteen" in your database with fields like foods, snacks, beverages, price, etc., then you should use the following query to select all documents in the collection "canteen":
db.canteen.find()
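The equality-matching behaviour of a query document can be sketched in plain JavaScript. This is an illustration only (all names are ours) of how `{}` matches everything while `{ foods: "pizza" }` narrows the result:

```javascript
// Sketch (all names are ours): every key in the query document must match
// the corresponding field; an empty query {} matches every document.
function find(docs, query = {}) {
  return docs.filter((d) =>
    Object.entries(query).every(([k, v]) => d[k] === v)
  );
}

const canteen = [
  { foods: "pizza", price: 5 },
  { foods: "pasta", price: 4 },
];
console.log(find(canteen).length);                     // 2 (empty query)
console.log(find(canteen, { foods: "pizza" }).length); // 1
```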
Node and MongoDB
Install MongoDB Driver
Let us try to access a MongoDB database with Node.js.
To download and install the official MongoDB driver, open the
Command Terminal and execute the following:
Download and install mongodb package:
npm install mongodb
Creating a Database
• To create a database in MongoDB, start by creating a MongoClient object, then specify a connection URL with the correct IP address and the name of the database you want to create.
• MongoDB will create the database if it does not exist, and make a connection to it.

const {MongoClient} = require('mongodb');


async function main(){
const url ="mongodb://localhost:27017/database1"
const client = new MongoClient(url);

try {
// Connect to the MongoDB cluster
await client.connect();
console.log("database created")
} catch (e) {
console.error(e);
} finally {
await client.close();
}
}
main().catch(console.error);
Creating a Collection
const {MongoClient} = require('mongodb');
async function main(){
  const url = "mongodb://localhost:27017";
  const client = new MongoClient(url);
  await client.connect();
  const dbo = client.db("ram");
  console.log(dbo.databaseName);
  await dbo.createCollection("customers1");
  console.log("Collection created!");
  await client.close();
}
main().catch(console.error);
Insert Into Collection
To insert a record, or document as it is called in MongoDB, into a collection, we use the
insertOne() method.
A document in MongoDB is the same as a record in MySQL.
The first parameter of the insertOne() method is an object containing the name(s) and
value(s) of each field in the document you want to insert.

const {MongoClient} = require('mongodb');

async function main(){
  const url = "mongodb://localhost:27017";
  const client = new MongoClient(url);
  await client.connect();
  const dbo = client.db("ram");
  console.log(dbo.databaseName);
  const myobj = { name: "Company Inc", address: "Highway 37" };
  await dbo.collection("customers1").insertOne(myobj);
  console.log("Data created");
  await client.close();
}
main().catch(console.error);
Insert Multiple Documents
• To insert multiple documents into a collection in MongoDB, we use the insertMany() method.
• The first parameter of the insertMany() method is an array of objects, containing the data you want to insert.
const {MongoClient} = require('mongodb');
async function main(){
  const url = "mongodb://localhost:27017";
  const client = new MongoClient(url);
  await client.connect();
  const dbo = client.db("ram");
  console.log(dbo.databaseName);
  const myobj = [
    { name: 'John', address: 'Highway 71' }, { name: 'Peter', address: 'Lowstreet 4' },
    { name: 'Amy', address: 'Apple st 652' }, { name: 'Hannah', address: 'Mountain 21' },
    { name: 'Michael', address: 'Valley 345' }, { name: 'Sandy', address: 'Ocean blvd 2' },
    { name: 'Betty', address: 'Green Grass 1' }, { name: 'Richard', address: 'Sky st 331' },
    { name: 'Susan', address: 'One way 98' }, { name: 'Vicky', address: 'Yellow Garden 2' },
    { name: 'Ben', address: 'Park Lane 38' }
  ];
  await dbo.collection("customers5").insertMany(myobj);
  console.log("multiple data inserted");
  await client.close();
}
main().catch(console.error);
The Result Object
• When executing the insertMany() method, a result object is returned.
• The result object contains information about how the insertion affected the database. The shape shown below comes from the legacy callback-style driver; newer driver versions return only acknowledged, insertedCount, and insertedIds. The object returned from the example above looked like this:
{
  result: { ok: 1, n: 11 },
  ops: [
    { name: 'John', address: 'Highway 71', _id: 58fdbf5c0ef8a50b4cdd9a84 },
    { name: 'Peter', address: 'Lowstreet 4', _id: 58fdbf5c0ef8a50b4cdd9a85 },
    { name: 'Amy', address: 'Apple st 652', _id: 58fdbf5c0ef8a50b4cdd9a86 },
    { name: 'Hannah', address: 'Mountain 21', _id: 58fdbf5c0ef8a50b4cdd9a87 },
    { name: 'Michael', address: 'Valley 345', _id: 58fdbf5c0ef8a50b4cdd9a88 },
    { name: 'Sandy', address: 'Ocean blvd 2', _id: 58fdbf5c0ef8a50b4cdd9a89 },
    { name: 'Betty', address: 'Green Grass 1', _id: 58fdbf5c0ef8a50b4cdd9a8a },
    { name: 'Richard', address: 'Sky st 331', _id: 58fdbf5c0ef8a50b4cdd9a8b },
    { name: 'Susan', address: 'One way 98', _id: 58fdbf5c0ef8a50b4cdd9a8c },
    { name: 'Vicky', address: 'Yellow Garden 2', _id: 58fdbf5c0ef8a50b4cdd9a8d },
    { name: 'Ben', address: 'Park Lane 38', _id: 58fdbf5c0ef8a50b4cdd9a8e }
  ],
  insertedCount: 11,
  insertedIds: [
    58fdbf5c0ef8a50b4cdd9a84,
    58fdbf5c0ef8a50b4cdd9a85,
    58fdbf5c0ef8a50b4cdd9a86,
    58fdbf5c0ef8a50b4cdd9a87,
    58fdbf5c0ef8a50b4cdd9a88,
    58fdbf5c0ef8a50b4cdd9a89,
    ...
The _id Field
• If you do not specify an _id field, then MongoDB will add one for you and
assign a unique id for each document.
• In the example above no _id field was specified, and as you can see from the
result object, MongoDB assigned a unique _id for each document.
• If you do specify the _id field, the value must be unique for each document:
var MongoClient = require('mongodb').MongoClient;
var url = "mongodb://localhost:27017/";

MongoClient.connect(url, function(err, db) {
  if (err) throw err;
  var dbo = db.db("mydb");
  var myobj = [
    { _id: 154, name: 'Chocolate Heaven' },
    { _id: 155, name: 'Tasty Lemon' },
    { _id: 156, name: 'Vanilla Dream' }
  ];
  dbo.collection("products").insertMany(myobj, function(err, res) {
    if (err) throw err;
    console.log(res);
    db.close();
  });
});
{
  result: { ok: 1, n: 3 },
  ops: [
    { _id: 154, name: 'Chocolate Heaven' },
    { _id: 155, name: 'Tasty Lemon' },
    { _id: 156, name: 'Vanilla Dream' }
  ],
  insertedCount: 3,
  insertedIds: [ 154, 155, 156 ]
}
Node.js MongoDB Find
• In MongoDB we use the find and findOne methods to find data in a collection, just like the SELECT statement is used to find data in a table in a MySQL database.

• Find One
To select data from a collection in MongoDB, we can use the
findOne() method.
The findOne() method returns the first occurrence in the selection.
The first parameter of the findOne() method is a query object. In this
example we use an empty query object, which selects all documents in
a collection (but returns only the first document).
Example
• const {MongoClient} = require('mongodb');
• async function main(){
• const url ="mongodb://localhost:27017"
• const client = new MongoClient(url);
• var dbo = client.db("ram");
• console.log(dbo.databaseName);
• await client.connect();
• const custom = dbo.collection("customers1");
• const query = { name: "Amy"};
• const cust1 = await custom.findOne(query);
• console.log(cust1);
• }
• main().catch(console.dir);
Node.js MongoDB Query
Filter the Result
When finding documents in a collection, you can filter the result by using a query object.
The first argument of the find() method is a query object, and is used to limit the search.
const {MongoClient} = require('mongodb');
async function main(){
const url ="mongodb://localhost:27017"
const client = new MongoClient(url);
var dbo = client.db("ram");
console.log(dbo.databaseName);
await client.connect();
const custom = dbo.collection("customers1");
const query={name:"Company Inc",address:"Highway 37"};
const cust1 = await custom.findOne(query);
console.log(cust1);
}
main().catch(console.dir);
Filter With Regular Expression
• You can write regular expressions to find exactly what you are searching for.
• Regular expressions can only be used to query strings.
• To find only the documents where the "address" field starts with the letter “H", use the
regular expression /^H/:
const {MongoClient} = require('mongodb');
async function main(){
  const url = "mongodb://localhost:27017";
  const client = new MongoClient(url);
  await client.connect();
  const dbo = client.db("ram");
  console.log(dbo.databaseName);
  const custom = dbo.collection("customers1");
  const query = { address: /^H/ };
  const cust1 = await custom.find(query).toArray();
  console.log(cust1);
}
main().catch(console.dir);
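The /^H/ filter is an ordinary JavaScript regular expression: it matches strings that start with "H". A standalone sketch of the same matching, with illustrative data:

```javascript
// Illustrative: the same /^H/ pattern applied to plain strings --
// only addresses beginning with "H" pass the filter.
const addresses = ["Highway 37", "Lowstreet 4", "Highway 71"];
const startsWithH = addresses.filter((a) => /^H/.test(a));
console.log(startsWithH.length); // 2
```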
MongoDB Covered Query

• A MongoDB covered query is one which uses an index and does not have to examine any documents. An index will cover a query if it satisfies the following conditions:
• All fields in the query are part of the index.
• All fields returned in the results are in the same index.
• Consider a document in examples collection.
•{
• "_id": ObjectId("53402597d852426020000002"),
• "contact": "1234567809",
• "dob": "01-01-1991",
• "gender": "M",
• "name": "ABC",
• "user_name": "abcuser"
•}
• >db.examples.createIndex({gender:1,user_name:1})
• >db.examples.find({gender:"M"},{user_name:1,_id:0})
• From the above example, we can say that MongoDB will not look into the database documents; it will fetch the required data directly from the indexed data.
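The two coverage conditions can be expressed as a small predicate. This is a sketch (the helper name is ours): a query is covered only when every queried field and every projected field is in the index:

```javascript
// Sketch (helper name is ours): check the covered-query conditions --
// all queried fields and all returned fields must belong to the index.
function isCovered(indexFields, queryFields, projectedFields) {
  const idx = new Set(indexFields);
  return (
    queryFields.every((f) => idx.has(f)) &&
    projectedFields.every((f) => idx.has(f))
  );
}

// Index { gender: 1, user_name: 1 }, query on gender, return user_name only:
console.log(isCovered(["gender", "user_name"], ["gender"], ["user_name"])); // true
// Returning "name" as well would force MongoDB to fetch the document:
console.log(isCovered(["gender", "user_name"], ["gender"], ["name"]));      // false
```

This is why the example above projects { user_name: 1, _id: 0 }: including _id, which is not in the index, would break coverage.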
Node.js MongoDB Sort
• Sort the Result
Use the sort() method to sort the result in ascending or descending order.
The sort() method takes one parameter, an object defining the sorting order.
const {MongoClient} = require('mongodb');
async function main(){
const url ="mongodb://localhost:27017"
const client = new MongoClient(url);
var dbo = client.db("ram");
console.log(dbo.databaseName);
await client.connect();
const custom = dbo.collection("customers1");
var mysort = { name: 1 };
const cust1 = await custom.find().sort(mysort).toArray();
console.log(cust1);
}
main().catch(console.dir);
{ name: -1 } // descending
Node.js MongoDB Delete
• Delete Document
To delete a record, or document as it is called in MongoDB, we use the deleteOne()
method.The first parameter of the deleteOne() method is a query object defining which
document to delete.
Note: If the query finds more than one document, only the first occurrence is deleted.
const {MongoClient} = require('mongodb');
async function main(){
const url ="mongodb://localhost:27017"
const client = new MongoClient(url);
var dbo = client.db("ram");
console.log(dbo.databaseName);
await client.connect();
const custom = dbo.collection("customers1");
const query={name:"Company Inc",address:"Highway 37"};
const cust1 = await custom.deleteOne(query);
console.log("One doc deleted");
}
main().catch(console.dir);
Delete Many
• To delete more than one document, use the deleteMany() method.
The first parameter of the deleteMany() method is a query object defining which
documents to delete.
Example
Delete all documents where the name starts with the letter "O":
const {MongoClient} = require('mongodb');
async function main(){
const url ="mongodb://localhost:27017"
const client = new MongoClient(url);
var dbo = client.db("ram");
console.log(dbo.databaseName);
await client.connect();
const custom = dbo.collection("customers1");
const query={name:/^O/};
const cust1 = await custom.deleteMany(query);
console.log("Customers having names starting with O deleted");
}
main().catch(console.dir);
Node.js MongoDB Drop
• Drop Collection
• You can delete a table, or collection as it is called in MongoDB, by using the drop()
method. The drop() method returns a promise that resolves to true if the collection
was dropped successfully; in the older callback form, it takes a callback function
containing the error object and a result parameter.
const {MongoClient} = require('mongodb');
async function main(){
const url ="mongodb://localhost:27017"
const client = new MongoClient(url);
var dbo = client.db("ram");
console.log(dbo.databaseName);
await client.connect();
const custom = dbo.collection("customers1");
const cust1 = await custom.drop();
console.log(cust1);
}
main().catch(console.dir);
Dropping all things from the database
• use [database];
• db.dropDatabase();
• And to remove the users:
• db.dropAllUsers();
Node.js MongoDB Update
• Update Document
You can update a record, or document as it is called in MongoDB, by using the
updateOne() method. The first parameter of the updateOne() method is a query object
defining which document to update.
• Note: If the query finds more than one record, only the first occurrence is updated. The
second parameter is an object defining the new values of the document.
const {MongoClient} = require('mongodb');
async function main(){
const url ="mongodb://localhost:27017"
const client = new MongoClient(url);
var dbo = client.db("ram");
console.log(dbo.databaseName);
await client.connect();
const custom = dbo.collection("customers2");
var myquery = { address: "Canyon 123" };
var newvalues = { $set: {address: "Kharar" } };
const cust1 = await custom.updateOne(myquery, newvalues);
console.log(cust1);
}
main().catch(console.dir);
Update Many Documents
To update all documents that meets the criteria of the query, use the updateMany() method.
Update all documents where the name starts with the letter "S":
const {MongoClient} = require('mongodb');
async function main(){
const url ="mongodb://localhost:27017"
const client = new MongoClient(url);
var dbo = client.db("ram");
console.log(dbo.databaseName);
await client.connect();
const custom = dbo.collection("customers2");
var myquery = { address: /^S/ };
var newvalues = { $set: {name: "Rajan" } };
const cust1 = await custom.updateMany(myquery, newvalues);
console.log(cust1);
}
main().catch(console.dir);
Node.js MongoDB Limit
To limit the result in MongoDB, we use the limit() method. The limit() method
takes one parameter, a number defining how many documents to return. Consider
you have a "customers" collection and you want to limit the result to 5 documents:
var MongoClient = require('mongodb').MongoClient;
var url = "mongodb://localhost:27017/";
MongoClient.connect(url, function(err, db) {
if (err) throw err;
var dbo = db.db("mydb");
dbo.collection("customers").find().limit(5).toArray(function(err, result) {
if (err) throw err;
console.log(result);
db.close();
});
});
Profiler in MongoDB
• The database profiler collects detailed information about Database
Commands executed against a running mongod instance. This includes
CRUD operations as well as configuration and administration
commands. The profiler writes all the data it collects to the system.profile collection.
Level  Description
0      The profiler is off and does not collect any data. This is the default profiler level.
1      The profiler collects data only for operations that take longer than the value of slowms.
2      The profiler collects data for all operations.
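As a sketch in the mongo shell (assuming a running mongod and a database already selected with `use`), profiling can be switched on and the collected data inspected like this:

```js
// Level 1: profile only operations slower than 100 ms
db.setProfilingLevel(1, { slowms: 100 })

// Check the current profiling level and slowms threshold
db.getProfilingStatus()

// Look at the five most recent profiled operations
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()
```

Setting the level back to 0 turns the profiler off again.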
Aggregation in MongoDB
• In MongoDB, aggregation operations process the data
records/documents and return computed results. They collect values from
various documents, group them together, and then perform
different types of operations on that grouped data, like sum, average,
minimum, maximum, etc., to return a computed result. This is similar to
the aggregate functions of SQL.
• MongoDB provides three ways to perform aggregation
• Aggregation pipeline
• Map-reduce function
• Single-purpose aggregation
Node.js MongoDB Join
MongoDB is not a relational database, but you can perform a left outer join by
using the $lookup stage. The $lookup stage lets you specify which collection you
want to join with the current collection, and which fields should match.
Consider you have an "orders" collection and a "products" collection:
• var MongoClient = require('mongodb').MongoClient;
var url = "mongodb://127.0.0.1:27017/";
MongoClient.connect(url, function(err, db) {
if (err) throw err;
var dbo = db.db("mydb");
dbo.collection('orders').aggregate([
{ $lookup:
{
from: 'products',
localField: 'product_id',
foreignField: '_id',
as: 'orderdetails'
}
}
]).toArray(function(err, res) {
if (err) throw err;
console.log(JSON.stringify(res));
db.close();
});
});
Aggregation pipeline
In MongoDB, the aggregation pipeline consists of stages, and each stage
transforms the documents. In other words, the aggregation pipeline is a
multi-stage pipeline: in each stage, documents are taken as input and a
resultant set of documents is produced; in the next stage (if available),
those resultant documents are taken as input and produce new output, and
this process continues until the last stage. The basic pipeline stages provide
filters that operate like queries, the document transformation stages modify the
resultant documents, and other pipeline stages provide tools for grouping and
sorting documents. You can also use the aggregation pipeline with sharded
collections.
• Consider a collection of train fares. In the first stage, the $match stage
filters the documents by the value in the class field, i.e. class: "first-class",
and passes the documents to the second stage. In the second stage, the $group
stage groups the documents by the id field to calculate the sum of the fare
for each unique id.
• Here, the aggregate() function is used to perform aggregation; it works with
three kinds of operators: stages, expressions and accumulators.
Stages
• Each stage starts from stage operators which are:
• $match: It is used for filtering the documents; it can reduce the number of
documents that are given as input to the next stage.
• $project: It is used to select some specific fields from a collection.
• $group: It is used to group documents based on some value.
• $sort: It is used to sort the documents, that is, to rearrange them.
• $skip: It is used to skip n documents and pass the remaining
documents on.
• $limit: It is used to pass only the first n documents, thus limiting the output.
• $unwind: It is used to unwind documents that contain arrays, i.e. it
deconstructs an array field in the documents to return a document for each
element.
• $out: It is used to write the resulting documents to a new collection.
Expressions
• It refers to the name of a field in the input documents, e.g. in { $group : {
_id : "$id", total: {$sum: "$fare"} } }, $id and $fare are expressions.
Accumulators
• These are basically used in the group stage
• $sum: It sums numeric values for the documents in each group
• $count: It counts the total number of documents
• $avg: It calculates the average of all given values from all
documents
• $min: It gets the minimum value from all the documents
• $max: It gets the maximum value from all the documents
• $first: It gets the first document from the grouping
• $last: It gets the last document from the grouping
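For illustration, assuming a hypothetical studentsMarks collection with age and marks fields, several accumulators can be combined in a single $group stage in the mongo shell:

```js
// Group by age and compute several accumulated values per group
db.studentsMarks.aggregate([
  { $group: {
      _id: "$age",
      total: { $sum: 1 },           // number of documents in the group
      avgMarks: { $avg: "$marks" },
      minMarks: { $min: "$marks" },
      maxMarks: { $max: "$marks" }
  } }
])
```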
Examples
• In the following examples, we are working with:
• Database: MCA
• Collection: students
• Documents: Seven documents that contain the details of the students
in the form of field-value pairs.
Displaying the total number of students in one city only
• const {MongoClient} = require('mongodb');
• async function main(){
• const url ="mongodb://localhost:27017"
• const client = new MongoClient(url);
• var dbo = client.db("MCA");
• console.log(dbo.databaseName);
• await client.connect();
• const custom = dbo.collection("students");
• const pipeline = [
• { $match: { address: "Chandigarh" } },
• { $count: "total students in Chandigarh" }
• ];
• // Execute the aggregation
• const aggCursor = custom.aggregate(pipeline);
• // Print the aggregated results
• for await (const doc of aggCursor) {
• console.log(doc);
• }
• }
• main().catch(console.dir);
Displaying the total number of students and the maximum age for each city
• const {MongoClient} = require('mongodb');
• async function main(){
• const url ="mongodb://localhost:27017"
• const client = new MongoClient(url);
• var dbo = client.db("MCA");
• console.log(dbo.databaseName);
• await client.connect();
• const custom = dbo.collection("students");
• const pipeline = [
• {$group: {_id:"$address", total_st: {$sum:1}, max_age:{$max:"$Age"} } }
• ];
• const aggCursor = custom.aggregate(pipeline);
• for await (const doc of aggCursor) {
• console.log(doc);
• }
• }
• main().catch(console.dir);
Displaying details of students whose age is greater than 30 using match stage
• const {MongoClient} = require('mongodb');
• async function main(){
• const url ="mongodb://localhost:27017"
• const client = new MongoClient(url);
• var dbo = client.db("MCA");
• console.log(dbo.databaseName);
• await client.connect();
• const custom = dbo.collection("students");
• const pipeline = [
• {$match:{Age:{$gt:30}}}
• ];
• const aggCursor = custom.aggregate(pipeline);
• for await (const doc of aggCursor) {
• console.log(doc);
• }
• }
• main().catch(console.dir);
Sorting the students on the basis of age
• const {MongoClient} = require('mongodb');
• async function main(){
• const url ="mongodb://localhost:27017"
• const client = new MongoClient(url);
• var dbo = client.db("MCA");
• console.log(dbo.databaseName);
• await client.connect();
• const custom = dbo.collection("students");
• const pipeline = [
• {$sort:{Age:1}}
• ];
• const aggCursor = custom.aggregate(pipeline);
• for await (const doc of aggCursor) {
• console.log(doc);
• }
• }
• main().catch(console.dir);
Map Reduce
• Map-reduce is used for aggregating results over large volumes of data. Map-reduce
has two main functions: one is a map that groups all the documents, and
the second one is the reduce, which performs operations on the grouped data.
(Note: map-reduce is deprecated since MongoDB 5.0 in favor of the aggregation
pipeline.)
• Syntax:
db.collectionName.mapReduce(mappingFunction, reduceFunction,
{out:'Result'});
var mapfunction = function(){emit(this.age, this.marks)}
var reducefunction = function(key, values){return Array.sum(values)}
db.studentsMarks.mapReduce(mapfunction, reducefunction, {'out':'Result'})
Single Purpose Aggregation
• It is used when we need simple access to documents, like counting the
number of documents or finding all distinct values in a field.
It simply provides access to the common aggregation processes through
the count(), distinct(), and estimatedDocumentCount() methods; due to
this, it lacks the flexibility and capabilities of the pipeline.
db.studentsMarks.distinct("name")
Concurrency
MongoDB allows multiple clients to read and write the same data. To
ensure consistency, MongoDB uses locking and concurrency control to
prevent clients from modifying the same data simultaneously. Writes to a
single document occur either in full or not at all, and clients always see
consistent data.
Concurrency
• MongoDB uses multi-granularity locking that allows operations to
lock at the global, database or collection level, and allows for
individual storage engines to implement their own concurrency
control below the collection.
• MongoDB uses reader-writer locks that allow concurrent readers
shared access to a resource, such as a database or collection.
• In addition to a shared (S) locking mode for reads and an exclusive
(X) locking mode for write operations, intent shared (IS) and intent
exclusive (IX) modes indicate an intent to read or write a resource
using a finer granularity lock. When locking at a certain granularity,
all higher levels are locked using an intent lock.
What is CRUD in MongoDB?
• CRUD operations describe the conventions of a user-interface that let users
view, search, and modify parts of the database.
• MongoDB documents are modified by connecting to a server, querying the
proper documents, and then changing the setting properties before sending the
data back to the database to be updated. CRUD is data-oriented, and it’s
standardized according to HTTP action verbs.
• When it comes to the individual CRUD operations:
• The Create operation is used to insert new documents in the MongoDB
database.
• The Read operation is used to query a document in the database.
• The Update operation is used to modify existing documents in the database.
• The Delete operation is used to remove documents in the database.
Primary-key, Foreign-key Relationship in
Mongo DB
By default, MongoDB doesn't support primary key-foreign key
relationships. Every document in MongoDB contains an _id key field
that uniquely identifies the document. However, this concept can be
implemented by embedding one copy inside another.
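As a sketch in the mongo shell (the collection and field names here are hypothetical), the relationship can be modelled either by embedding the related document or by manually storing its _id as a reference:

```js
// Embedding: the address lives inside the customer document
db.customers.insertOne({
  name: "Ravi",
  address: { city: "Chandigarh", pin: "160017" }
})

// Manual reference: store the _id of a document from another collection
var addrId = db.addresses.insertOne({ city: "Chandigarh" }).insertedId
db.customers.insertOne({ name: "Ravi", address_id: addrId })
```

With a manual reference, the application has to issue a second query (or a $lookup) to fetch the referenced address.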
Data Models in MongoDB
• Embedded data model: in this model, we store related pieces of
information in a single database record. As a result, applications will
need to issue fewer database calls to retrieve or update data. This
embedded document model is also known as a de-normalized data
model.
Accessing embedded/nested documents
• In MongoDB, you can access the fields of nested/embedded
documents of a collection using dot notation. When you are
using dot notation, the field and the nested field must be inside
quotation marks.
• Syntax:
• "field.nestedField": value
Example
• db.Courses.find({"name.first": "Rohit"}).pretty()
The name field has three nested fields: first, middle and last; the dot
notation "name.first" matches on the nested first field alone.
Normalized Data Models
Namespace in MongoDB
• The canonical name for a collection or index in MongoDB. The
namespace is a combination of the database name and the name of the
collection or index, like so: [database-name].[collection-or-index-
name]. All documents belong to a namespace.
• MongoDB stores BSON (Binary Interchange and Structure Object
Notation) objects in the collection. The concatenation of the collection
name and database name is called a namespace.
Sharding
• Sharding is the process of storing data records across multiple
machines and it is MongoDB's approach to meeting the demands of
data growth. As the size of the data increases, a single machine may
not be sufficient to store the data nor provide an acceptable read and
write throughput. Sharding solves the problem with horizontal scaling.
With sharding, you add more machines to support data growth and the
demands of read and write operations.
Why Sharding?
• In replication, all writes go to the master node
• Latency sensitive queries still go to master
• A single replica set has a limit of 50 members (12 in MongoDB versions before 3.0)
• Memory can't be large enough when active dataset is big
• Local disk is not big enough
• Vertical scaling is too expensive
Sharding in MongoDB
• Shards − Shards are used to store data. They provide high availability and
data consistency. In production environment, each shard is a separate replica
set.
• Config Servers − Config servers store the cluster's metadata. This data
contains a mapping of the cluster's data set to the shards. The query router uses
this metadata to target operations to specific shards. In production
environment, sharded clusters have exactly 3 config servers.
• Query Routers − Query routers are basically mongo instances, interface with
client applications and direct operations to the appropriate shard. The query
router processes and targets the operations to shards and then returns results to
the clients. A sharded cluster can contain more than one query router to divide
the client request load. A client sends requests to one query router. Generally, a
sharded cluster has many query routers.
Replication
The replication method of scaling horizontally creates multiple copies of
the same database on multiple machines. Usually, one machine is
designated as the primary machine (e.g., first machine where database
changes are made) and all database changes made to that database are
propagated to all other database replicas (e.g., the other machines with
the same database). This ensures that all instances of the database are
up-to-date.
The advantage of this horizontal scaling
• system availability
• fault tolerance is greatly improved.
• Specifically, if the primary machine has an outage, one of the other
existing machines can be promoted to the status of primary machine.
• since all machines have the same database with the same data stored,
the system can continue to operate without interruption.
• In addition, due to the existence of more machines, improved
performance can also occur as data requests can be distributed across
multiple machines.
MongoDB Backup and Restore
There are some best practices you should follow when using the MongoDB
backup and restore services for your MongoDB clusters.
1.MongoDB uses both regular JSON and Binary JSON (BSON) file formats.
It’s better to use BSON when backing up and restoring. While JSON is easy to
work with, it doesn’t support all of the data types that BSON supports, and it
may lead to the loss of fidelity.
2.You don’t need to explicitly create a MongoDB database, as it will be
automatically created when you specify a database to import from. Similarly, a
structure for a collection will be created whenever the first document is
inserted into the database.
3. When creating a new cluster, you have the option to turn on cloud
backup. While you can also enable cloud backups when modifying
an existing cluster, you should turn this feature on by default, as it
will prevent data loss.
4.If a snapshot fails, Atlas will automatically attempt to create
another snapshot. While you can use a fallback snapshot to restore a
cluster, it should only be done when absolutely necessary. Fallback
snapshots are created using a different process, and they may have
inconsistent data.
5.Use secondary servers for backups as this helps avoid degrading
the performance of the primary node.
6. Time the backup of data sets around periods of low
bandwidth/traffic. Backups can take a long time, especially if the
data sets are quite large.
7. Use a replica set connection string when using unsupervised
scripts. A standalone connection string will fail if the MongoDB
host proves unavailable.
Backup Types
• There are two types of backups in MongoDB:
• logical backups
• physical backups
Logical Backups
Logical backups dump data from databases into backup files, formatted
as a BSON file. During the logical backup process, client APIs are used
to get the data from the server. The data is encrypted, serialized, and
written as either a “.bson,” “.json,” or “.csv” file, depending on the
backup utility used. If you have enabled field level encryption, backing
up data will ensure that the field remains encrypted.
Logical Backups
• MongoDB supplies two utilities to manage logical backups: Mongodump
and Mongorestore.
• The Mongodump command dumps a backup of the database into the “.bson”
format, and this can be restored by providing the logical statements found in
the dump file to the databases.
• The Mongorestore command is used to restore the dump files created by
Mongodump. Index creation happens after the data is restored.
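A minimal command-line sketch of the two utilities (the database name and paths here are hypothetical):

```sh
# Dump the "bookstore" database as BSON files under ./backup
mongodump --db=bookstore --out=./backup

# Restore that dump back into a server (possibly a different one)
mongorestore --db=bookstore ./backup/bookstore
```

Both tools connect to localhost:27017 by default; a --uri option can point them at another server.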
Logical backups
• Logical backups copy the data itself. They don’t copy any of the
physical files relating to the data (like control files, log files,
executables, etc.). They are typically used to archive databases, verify
database structures, and move databases across different environments
and operating systems.
• If you have one server that contains a collection you need in another
server, you could use a MongoDB logical backup to migrate the
collection from the original server to the target server.
Physical Backups
• Physical backups are snapshots of the data files in MongoDB at a given
point in time. The snapshots can be used to cleanly recover the database,
as they include all the data found when the snapshot was created. Physical
backups are critical when attempting to back up large databases quickly.
• There are currently no provided, out-of-the-box solutions for creating
physical backups with MongoDB. While you can create physical backups
with LVM snapshots or block storage volume snapshots, it’s easier to use
MongoDB Atlas. When using MongoDB Atlas, you can utilize any disk
snapshot created by your cloud service provider. Alternatively, cloud
backups can be made using either the MongoDB Cloud Manager or
the Ops Manager.
Callback Hell
• Callbacks Hell is the situation in which we have complex nested
callbacks. As we have mentioned the term “Callback” so Before diving
into Callback Hell details, let’s know a little about what Callback is.
• A Callback is a function that is called automatically when a particular
task is completed. Basically, it allows the program to run other code until
a certain task is not completed. This function allows you to perform a
large number of I/O operations that can be handled by your OS without
waiting for any I/O operation to finish which makes nodeJs highly
scalable.
Example
• const fs = require("fs");
• fs.readFile("file_path", "utf8", function (err, data) {
• if (err) {
• // Handle the error
• } else {
• // Process the file text given with data
• }
• });
Callback Hell
• Callback hell in Node.js is the situation in which we have complex
nested callbacks. In this, each callback takes arguments that have been
obtained as a result of previous callbacks. In this manner, The code
structure looks like a pyramid, which leads to less readability and
difficulty in maintenance. Also, if there is an error in one function,
then all other functions get affected.
• const fs = require("fs");
• const textFile = "input.txt";
• fs.exists(textFile, function (exists) {
• if (exists) {
• fs.stat(textFile, function (err, res) {
• if (err) {
• throw err;
• }
• if (res.isFile()) {
• fs.readFile(textFile,
• "utf-8", function (err, data) {
• if (err) {
• throw err;
• }
• console.log(data);
• });
• }
• });
• }
• });
How can we avoid the “Callback Hell”?
• Promise: With the help of Promises callback hell can be avoided.
Promises in javascript are a way to handle asynchronous operations in
Node.js. It allows us to return a value from an asynchronous function
like a synchronous function. When we return something from an
asynchronous method it returns a promise which can be used to
consume the final value when it is available in the future with the help
of the then() method or awaits inside of async functions. The syntax to
create a promise is mentioned below.
const promise = new Promise(function (resolve, reject) {
// code logic
});
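As a minimal, self-contained sketch of this pattern (no I/O involved; the function name is made up for illustration), a function can wrap an asynchronous step in a Promise, and the caller can consume it with a flat then() chain instead of nested callbacks:

```javascript
// A function that returns a Promise instead of taking a callback
function delayedDouble(n) {
  return new Promise(function (resolve, reject) {
    if (typeof n !== "number") {
      reject(new Error("n must be a number"));
      return;
    }
    // setTimeout stands in for any asynchronous operation
    setTimeout(function () { resolve(n * 2); }, 10);
  });
}

// Consuming the promise with a flat then() chain instead of nesting
delayedDouble(5)
  .then(function (result) { return delayedDouble(result); })
  .then(function (result) { console.log(result); }) // prints 20
  .catch(function (error) { console.error(error); });
```

Each then() returns a new promise, so the chain stays flat no matter how many asynchronous steps follow each other.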
Example
In this example, we are
• const fs = require('fs');
• const fsPromises = require('fs').promises;
reading a textfile named
• fs.promises.readFile("input.txt") “input.txt” with the help
• .then(function (data) { of promises.
• console.log("" + data);
• })
• .catch(function (error) {
• console.log(error);
• })
Async.js
• Another way to avoid callback hell is, we have this npm module called
Async. Async is helpful in managing asynchronous JavaScript. Some
useful methods of Async are series, parallel, waterfall, etc. It works for
browsers as well.
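To illustrate the idea behind async.waterfall (each task passes its result to the next), here is a tiny hand-rolled version of the pattern, not the actual async library:

```javascript
// Minimal waterfall: runs tasks in order, passing each result forward
function waterfall(tasks, done) {
  function next(index, value) {
    if (index === tasks.length) return done(null, value);
    tasks[index](value, function (err, result) {
      if (err) return done(err); // stop the chain on the first error
      next(index + 1, result);
    });
  }
  next(0, undefined);
}

// Usage: three steps that each transform the previous result
waterfall([
  function (_, cb) { cb(null, 2); },
  function (n, cb) { cb(null, n + 3); },
  function (n, cb) { cb(null, n * 10); }
], function (err, result) {
  console.log(result); // prints 50
});
```

The real async module adds many more helpers (series, parallel, etc.) on top of this basic control-flow idea.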
Mongoose
Mongoose is an Object Data Modeling (ODM) library for MongoDB. It defines
a strongly-typed-schema, with default values and schema validations which are
later mapped to a MongoDB document.
Why Mongoose?
Mongoose provides an incredible amount of functionality around creating and
working with schemas. Mongoose currently contains eight Schema Types that a
property is saved as when it is persisted to MongoDB. It manages relationships
between data, provides schema validation, and is used to translate between
objects in code and the representation of those objects in MongoDB.
Mongoose Installation
• Step 1: You can visit the link Install mongoose to install the mongoose
module. You can install this package by using this command:
• npm install mongoose
• Step 2: Now you can import the mongoose module in your file using:
• const mongoose = require('mongoose');
Node.js Mongoose – Connect to MongoDB
• var mongoose = require('mongoose');
• mongoose.connect('mongodb://localhost:27017/tutorialkart');
• var db = mongoose.connection;
• db.on('error', console.error.bind(console, 'connection error:'));
• db.once('open', function() {
• console.log("Connection Successful!");
• });
How to recognize a Model
• Let us consider that we are operating a bookstore and we need to develop a
Node.js Application for maintaining the book store. Also we chose MongoDB
as the database for storing data regarding books. The simplest item of
transaction here is a book. Hence, we shall define a model called Book and
transact objects of Book Model between Node.js and MongoDB. Mongoose
helps us to abstract at Book level, during transactions with the database.
Derive a custom schema
var BookSchema = mongoose.Schema({
name: String,
price: Number,
quantity: Number
});
Compile Schema to Model
• Once we derive a custom schema, we could compile it to a model.
• Following is an example where we define a model named Book with the help
of BookSchema.
• var Book = mongoose.model('Book', BookSchema, <collection_name>);
• <collection_name> is the name of the collection you want the documents go
to.
Initialize a Document
• We may now use the model to initialize documents of the Model.
• var book1 = new Book({ name: 'Introduction to Mongoose', price: 10, quantity: 25 });
• book1 is a Document of model Book.
Mongoose – Insert Document to MongoDB
To insert a single document to MongoDB, call the save() method on the
document instance. A callback function(err, document) is an optional
argument to the save() method. Insertion happens asynchronously, and any
operations dependent on the inserted document have to happen in the
callback function for correctness.
var mongoose = require('mongoose');
// make a connection
mongoose.connect('mongodb://localhost:27017/BCA');
// get reference to database
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function() {
console.log("Connection Successful!");
// define Schema
var BookSchema = mongoose.Schema({
name: String,
price: Number,
quantity: Number
});
• // compile schema to model
• var Book = mongoose.model('Book', BookSchema,
'bookstore');
// a document instance
• var book1 = new Book({ name: 'Introduction to Mongoose',
price: 10, quantity: 25 });
• // save model to database
• book1.save().then(function () {console.log("document
inserted"); })
• .catch(function (error) {
• console.log(error);
• });
• });
Node.js Mongoose – Insert Multiple Documents
to MongoDB
• To insert multiple documents to MongoDB using Mongoose, use the
Model.collection.insert(docs_array, options, callback_function)
method. The callback function has error and inserted_documents as
arguments.
• Syntax of the insert() method:
• Model.collection.insert(docs, options, callback)
• Or with the help of insertMany():
• const mongoose = require('mongoose');
// Database connection
• mongoose.connect('mongodb://127.0.0.1:27017/MCA3A', {
• useNewUrlParser: true,
• useUnifiedTopology: true
• });
// User model
• const User = mongoose.model('User', {
• name: { type: String },
• age: { type: Number }
• });
// Function call
• User.insertMany([
• { name: 'Gourav', age: 20},
• { name: 'Kartik', age: 20},
• { name: 'Niharika', age: 20}
• ]).then(function(){
• console.log("Data inserted") // Success
• }).catch(function(error){
• console.log(error) // Failure
• });
Update in Mongoose
• The prototype.updateOne() method of the Mongoose API can be used to update
a document in a collection. It can be used on a document object present in
the collection to update or modify its field values. This method will update
the document and returns an object with various properties.
• Syntax:
• doc.updateOne(update, options, callback)
• The prototype.updateOne() method accepts three parameters:
• update: It is an object that contains key-value pairs where keys are the
columns/attributes in the document.
• options: It is an object with various properties.
• callback: It is a callback function that will run once execution is completed.
• Return Value: The prototype.updateOne() function returns a promise. The
result contains an object with various properties.
• // Require mongoose module
• const mongoose = require("mongoose");
// Set Up the Database connection
• mongoose.connect("mongodb://localhost:27017/MCA3A", {
• useNewUrlParser: true,
• useUnifiedTopology: true,
• });
const userSchema = new mongoose.Schema({
• name: String,
• age: Number,
• });
// Defining userSchema model
• const User = mongoose.model("User", userSchema);
const updateDoc = async () => {
// Finding document object using doc _id
• const doc = await User.findById("651670c61f019f149c0184c0");
const output = await doc.updateOne({ name: "User1 Updated" })
• console.log(output)
• }
• updateDoc();
Mongoose | deleteOne() Function
• The deleteOne() function is used to delete the first document that
matches the conditions from the collection. It behaves like the
remove() function but deletes at most one document regardless of the
single option.
• const mongoose = require('mongoose');
// Database connection
• mongoose.connect('mongodb://127.0.0.1:27017/MCA3A', {
• useNewUrlParser: true,
• useUnifiedTopology: true
• });
// User model
• const User = mongoose.model('User', {
• name: { type: String },
• age: { type: Number }
• });
// Function call
• // Delete first document that matches
• // the condition i.e age >= 20
• User.deleteOne({ age: { $gte: 20 } }).then(function(){
• console.log("Data deleted"); // Success
• }).catch(function(error){
• console.log(error); // Failure
• });
Mongoose Document
Model.deleteMany()
• The Model.deleteMany() method of the Mongoose API is used to delete multiple documents from a collection in one go. We can provide an object containing a condition to deleteMany() and execute it on the model object. It will delete all the documents in the collection that match the given condition.
• Syntax:
• Model.deleteMany()
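Since deleteMany() needs a live MongoDB instance, its filter semantics can be sketched with a plain-JavaScript stand-in. The deleteManyLike helper below is hypothetical (not a Mongoose API) and models only the $gte operator: every "document" matching the condition is removed in one call.

```javascript
// Hypothetical in-memory stand-in for Model.deleteMany():
// removes every document whose field satisfies the condition
// (only the $gte operator is modeled here).
function deleteManyLike(docs, condition) {
  const [field, ops] = Object.entries(condition)[0];
  const kept = docs.filter((doc) => !(doc[field] >= ops.$gte));
  return { deletedCount: docs.length - kept.length, remaining: kept };
}

// Sample in-memory "collection"
const users = [
  { name: 'A', age: 15 },
  { name: 'B', age: 25 },
  { name: 'C', age: 30 },
];

const result = deleteManyLike(users, { age: { $gte: 20 } });
console.log(result.deletedCount); // 2
```

With a real model, the equivalent call is User.deleteMany({ age: { $gte: 20 } }), which returns a promise whose result includes the same kind of deletedCount property.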
Reading Data From A CSV file
• const main = async () => {
• const csv = require('fast-csv'); // parses CSV files
• const fs = require('fs');
• const path = require('path');
• const mongoose = require('mongoose');
• mongoose.connect('mongodb://127.0.0.1:27017/PPP');
• const db = mongoose.connection;
• db.on('error', console.error.bind(console, 'MongoDB connection error:'));
• const PostSchema = mongoose.Schema({
• title: { type: Number, required: true },
• count: { type: Number, required: true }
• });
• var Post = mongoose.model('Post', PostSchema, 'bookstore');
• // read csv file
• const posts = [];
• fs.createReadStream(path.join(__dirname, '/posts.csv'))
• .pipe(csv.parse({ headers: true }))
• .on('error', error => console.error(error))
• .on('data', function(data) {
• data['_id'] = new mongoose.Types.ObjectId();
• data['title'] = Number(data.title); // cast to match the Number type in the schema
• posts.push(data);
• })
• .on('end', async function() {
• // insert posts into db
• await Post.insertMany(posts);
• console.log(`${posts.length} posts have been successfully uploaded.`);
• });
• }
• main().catch((error) => {
• console.error(error);
• process.exit();
• });
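fast-csv with { headers: true } maps each data row to an object keyed by the header row. That mapping can be sketched in plain JavaScript with a hypothetical parseCsv helper (not the fast-csv API; no quoting or escaping is handled), which shows what shape each row object will have:

```javascript
// Hypothetical minimal CSV-to-objects mapper mirroring fast-csv's
// { headers: true } behavior (no quoting/escaping handled).
function parseCsv(text) {
  const [headerLine, ...dataLines] = text.trim().split('\n');
  const headers = headerLine.split(',');
  return dataLines.map((line) => {
    const values = line.split(',');
    const obj = {};
    headers.forEach((h, i) => { obj[h] = values[i]; });
    return obj;
  });
}

const sample = 'title,count\n1.5,10\n2.5,20';
const rows = parseCsv(sample);
console.log(rows[0]); // { title: '1.5', count: '10' }
```

Note that every parsed field arrives as a string, which is why the script above has to cast title to a number before calling insertMany().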
Building a REST API with Node.js, Express.js, Mongoose.js, and MongoDB

• Node.js, Express.js, Mongoose.js, and MongoDB are a great combination for building an easy and fast REST API. This combination is faster than many existing frameworks because Node.js is a packaged compilation of Google's V8 JavaScript engine and works on non-blocking, event-driven I/O. Express.js is a JavaScript web server framework with the complete set of functions for web development, including REST APIs.
EXPRESS REST API Architecture
STEPS FOR REST API
• Step #1: Create Express.js Project and Install Required Modules
• Step #2: Add the Mongoose.js Module as the ODM (Object Data Modeling library) for MongoDB
• Step #3: Create Product Mongoose Model
• Step #4: Create Routes for the REST API endpoint
• Step #5: Test REST API Endpoints
Creating Application and installing Modules
• Create a Folder ExpressAPI
• Get VS Terminal in this folder
• Write npm init on terminal
• Write Package name, detail, keywords, authors etc
Creating Application and installing Modules
• npm i express --save
• npm i mongoose --save
• npm i http-status-codes --save
• npm i -g nodemon
• npm i dotenv --save
Creating Express Server
• //import express (after npm install express)
• const express = require('express');
• // create new express app and save it as "app"
• const app = express();
• // server configuration
• const PORT = 9000;
• // make the server listen to requests
• app.listen(PORT, () => {
• console.log(`My server running at: http://localhost:${PORT}/`);
• });
Defining Mongoose Schemas
• const mongoose=require('mongoose');
• const Schema = mongoose.Schema;
• const objectId = mongoose.Schema.ObjectId;
• const productSchema = new Schema({
• _id:{type:objectId,auto:true},
• name:{type:String,required:true},
• unit_price:{type:Number,required:true},
• category:{type:objectId,ref:'categories'}
• },
• {versionKey:false});
• const product=mongoose.model('products',productSchema);
• module.exports = product;
Multi Modeling in a Collection
• Mongoose's object inheritance and discriminator functions let you have multiple models with overlapping schemas on top of the same underlying MongoDB collection.
Example
• var mongoose = require('mongoose'),
• Schema = mongoose.Schema;

// ==== connect to database ====
• mongoose.connect('mongodb://127.0.0.1:27017/temp');

// ==== base object to be inherited ====
• var baseObj = {
• a : Number,
• b : String
• };
• // ==== inherit objects as needed at runtime ====

// inherited objects
• var objA = Object.create(baseObj);
• objA.c = [Number];

var objB = Object.create(baseObj);
• objB.c = String;

// create schema and model
• var SchemaA = new Schema(objA),
• modelA = mongoose.model('modelA', SchemaA, 'collectionAB');
• var SchemaB = new Schema(objB),
• modelB = mongoose.model('modelB', SchemaB, 'collectionAB');

// save a record
• var recordA = new modelA({
• a:1
• , b : 'a'
• , c : [1,2,3]
• });

var recordB = new modelB({
• a:2
• , b : 'z'
• , c : 'rajan'
• });

recordA.save();
• recordB.save();
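The Object.create() inheritance used above is plain JavaScript and can be checked without Mongoose: objA and objB each add their own c property while sharing a and b through the prototype chain.

```javascript
// Same inheritance pattern as in the schema example above
const baseObj = { a: Number, b: String };

const objA = Object.create(baseObj);
objA.c = [Number];

const objB = Object.create(baseObj);
objB.c = String;

// c is an own property; a and b are inherited from baseObj
console.log(Object.prototype.hasOwnProperty.call(objA, 'c')); // true
console.log(Object.prototype.hasOwnProperty.call(objA, 'a')); // false
console.log(objA.a === Number); // true
console.log(objB.c === String); // true
```

Because the overlapping keys live on the shared prototype, both schemas see a and b, while each model keeps its own definition of c.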
Difference between Promise and Async/Await in Node.js
• There are different ways to handle asynchronous operations in NodeJS or in JavaScript. With asynchronous execution, different processes run simultaneously, and each is handled once its result is available. The ways to handle asynchronous code in NodeJS or in JavaScript are:
• Callbacks
• Promises
• Async/Await
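The three styles can be compared on one tiny task. The fetchUser functions below are hypothetical stand-ins that "load" a user on the next tick of the event loop:

```javascript
// 1. Callback style: the caller passes a function to run on completion
function fetchUserCb(id, callback) {
  setImmediate(() => callback(null, { id, name: 'User' + id }));
}

// 2. Promise style: the function returns an object representing the result
function fetchUserP(id) {
  return new Promise((resolve) => {
    setImmediate(() => resolve({ id, name: 'User' + id }));
  });
}

// 3. Async/await style: promise consumption that reads like synchronous code
async function showUser(id) {
  const user = await fetchUserP(id);
  return user.name;
}

fetchUserCb(1, (err, user) => console.log(user.name)); // User1
showUser(2).then((name) => console.log(name)); // User2
```

All three produce the same result; they differ only in how the "later" part of the computation is expressed.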
Promises:
• A Promise in NodeJS is similar to a promise in real life. It is an assurance that
something will be done. Promise is used to keep track of whether the
asynchronous event has been executed or not and determines what happens
after the event has occurred. It is an object having 3 states namely:
• Pending: Initial State, before the event has happened.
• Resolved: After the operation completed successfully.
• Rejected: If the operation had an error during execution, the promise fails.
• Example: While requesting data from a database server, the Promise is in a
pending state until the data is received. If the data is received successfully,
then the Promise is in resolved state and if the data could not be received
successfully, the Promise is in the rejected state.
• const promise = new Promise(function (resolve, reject) {
• const string1 = "rajan";
• const string2 = "rajan";
• if (string1 === string2) {
• resolve();
• } else {
• reject();
• }
• });

• promise
• .then(function () {
• console.log("Promise resolved successfully");
• })
• .catch(function () {
• console.log("Promise is rejected");
• });
Async/Await:
• Async/Await is used to work with promises in asynchronous functions. It is
basically syntactic sugar for promises. It is just a wrapper to restyle code and
make promises easier to read and use. It makes asynchronous code look more
like synchronous/procedural code, which is easier to understand.
• await can only be used in async functions. It is used for calling an async
function and waits for it to resolve or reject. await blocks the execution of the
code within the async function in which it is located.
• Error Handling in Async/Await: For a successfully resolved promise, we use try, and for a rejected promise, we use catch. To run code after the promise has been handled by try or catch, we can use finally (a finally block, or the .finally() method on the promise). The code inside finally runs once regardless of the state of the promise.
const helperPromise = function () {
• const promise = new Promise(function (resolve, reject) {
• const x = "rajan";
• const y = "rajan";
• if (x === y) {
• resolve("Strings are same");
• } else {
• reject("Strings are not same");
• }
• });
• return promise;
• };
• async function demoPromise() {
• try {
• let message = await helperPromise();
• console.log(message);
• } catch (error) {
• console.log("Error: " + error);
• }
• }
• demoPromise();
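The example above covers try and catch; the sketch below (with a hypothetical checkStrings helper) adds the finally step, showing that it runs whether the promise resolves or rejects:

```javascript
// Resolves when the strings match, rejects otherwise
function checkStrings(x, y) {
  return x === y
    ? Promise.resolve('Strings are same')
    : Promise.reject(new Error('Strings are not same'));
}

async function demoFinally(x, y) {
  const log = [];
  try {
    log.push(await checkStrings(x, y));
  } catch (error) {
    log.push('Error: ' + error.message);
  } finally {
    log.push('finally ran'); // executes in both cases
  }
  return log;
}

demoFinally('rajan', 'rajan').then((log) => console.log(log));
// [ 'Strings are same', 'finally ran' ]
demoFinally('rajan', 'other').then((log) => console.log(log));
// [ 'Error: Strings are not same', 'finally ran' ]
```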
Promise vs Async/Await:
• 1. A Promise is an object representing the intermediate state of an operation which is guaranteed to complete its execution at some point in the future. Async/Await is syntactic sugar for promises, a wrapper making the code execute more synchronously.
• 2. A Promise has 3 states: pending, resolved and rejected. Async/Await does not have any states; it returns a promise that is either resolved or rejected.
• 3. If a function fxn1 is to be executed after the promise, then promise.then(fxn1) continues execution of the current function after adding the fxn1 call to the callback chain. With await X(), execution of the current function is suspended until X resolves, and then fxn1 is executed.
• 4. With promises, error handling is done using the .then() and .catch() methods. With async/await, error handling is done using try and catch blocks.
• 5. Promise chains can sometimes become difficult to understand. Async/Await makes it easier to read and understand the flow of the program compared to promise chains.
Promises chaining

• When we have a sequence of asynchronous tasks to be performed one after another (for instance, loading scripts), how can we code it well? Promises provide a couple of recipes to do that.
• Promise chaining is a simple concept by which we may initialize another promise inside our .then() method and process the results accordingly. The function inside then() captures the value returned by the previous promise.
Example
• new Promise(function(resolve, reject) {
• setTimeout(() => resolve(1), 1000); // (*)
• }).then(function(result) { // (**)
• alert(result); // 1
• return result * 2;
• }).then(function(result) { // (***)
• alert(result); // 2
• return result * 2;
• }).then(function(result) {
• alert(result); // 4
• return result * 2;
• });
• The idea is that the result is passed through the chain of .then handlers.
• Here the flow is:
• The initial promise resolves in 1 second (*),
• Then the .then handler is called (**), which in turn creates a new
promise (resolved with 2 value).
• The next then (***) gets the result of the previous one, processes it
(doubles) and passes it to the next handler.
• …and so on.
• As the result is passed along the chain of handlers, we can see a
sequence of alert calls: 1 → 2 → 4.
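The alert-based chain above is written for the browser; the same flow runs in Node.js with console.log, and collecting the intermediate values makes the 1 → 2 → 4 sequence explicit:

```javascript
// Node-friendly version of the chain: each .then() handler returns a
// value that becomes the input of the next handler.
const seen = [];

function runChain() {
  return new Promise((resolve) => {
    setTimeout(() => resolve(1), 100);
  })
    .then((result) => { seen.push(result); return result * 2; })
    .then((result) => { seen.push(result); return result * 2; })
    .then((result) => { seen.push(result); return result; });
}

runChain().then((final) => console.log(seen, final)); // [ 1, 2, 4 ] 4
```

Returning a value from a handler is what keeps the chain going; forgetting the return would pass undefined to the next .then().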
