DBMSL Manual Final Version
T.E. (Computer Engineering)
Lab Manual
Database Management System Laboratory
Prepared By:
Vision
We are committed to producing good human beings along with good engineers.
Mission
Holistic development of students and teachers is what we believe in and work for. We strive to achieve this by imbibing a unique value system, a transparent work culture, and an excellent academic and physical environment conducive to learning, creativity and technology transfer. Our mandate is to generate, preserve and share knowledge for developing a vibrant society.
This manual is intended for third-year students of Computer Science and Engineering for the subject of Database Management Systems. It contains practical/lab sessions related to Database Management Systems, covering various aspects of the subject to enhance understanding.
Students are advised to thoroughly go through this manual rather than only topics
mentioned in the syllabus as practical aspects are the key to understanding and conceptual
visualization of theoretical aspects covered in the books.
Good Luck
1. Students should be regular and come prepared for the lab practice.
2. In case a student misses a class, it is his/her responsibility to complete the missed experiment(s).
3. Students should bring the observation book, lab journal and lab manual. Prescribed
textbook and class notes can be kept ready for reference if required.
4. They should implement the given Program individually.
5. While conducting the experiments students should see that their programs would meet the
following criteria:
Programs should be interactive with appropriate prompt messages, error messages if any, and descriptive messages for outputs.
Programs should perform input validation (Data type, range error, etc.) and give
appropriate error messages and suggest corrective actions.
Comments should be used to give the statement of the problem and every function should
indicate the purpose of the function, inputs and outputs.
Statements within the program should be properly indented
Use meaningful names for variables and functions.
Make use of Constants and type definitions wherever needed.
6. Once the experiment(s) are executed, students should show the program and results to the instructors and copy the same in their observation book.
7. Questions for lab tests and exam need not necessarily be limited to the questions in the
manual, but could involve some variations and / or combinations of the questions
COURSE OBJECTIVES
COURSE OUTCOME
1. Study of Open Source NOSQL Database: MongoDB (Installation, Basic CRUD operations, Execution)
2. Design and Develop MongoDB Queries using CRUD operations. (Use CRUD operations, SAVE method, logical operators)
GROUP A
ASSIGNMENT NO: 1
Database Applications:
Banking: transactions
Airlines: reservations, schedules
Universities: registration, grades
Sales: customers, products, purchases
Online retailers: order tracking, customized recommendations
Manufacturing: production, inventory, orders, supply chain
Human resources: employee records, salaries, tax deductions
Data Models:
Relational model
Entity-Relationship data model (mainly for database design)
Object-based data models (Object-oriented and Object-relational)
Semi-structured data model (XML)
Other older models:
Network model
Hierarchical model
A database is a means of storing information in such a way that information can be retrieved
from it. In simplest terms, a relational database is one that presents information in tables with rows and
columns. A table is referred to as a relation in the sense that it is a collection of objects of the same type
(rows). Data in a table can be related according to common keys or concepts, and the ability to retrieve
related data from a table is the basis for the term relational database. A Database Management System
(DBMS) handles the way data is stored, maintained, and retrieved. In the case of a relational database,
a Relational Database Management System (RDBMS) performs these tasks.
A relational database management system (RDBMS) is a program that lets you create,
update, and administer a relational database. Most commercial RDBMS’s use the Structured Query
Language (SQL) to access the database, although SQL was invented after the development of the
relational model and is not necessary for its use.
MySQL open source RDBMS overview: MySQL is a popular open source relational database
management system (RDBMS) choice for web-based applications. Developers, database administrators
and DevOps teams use MySQL to build and manage next-generation web- and cloud-based
applications. As with most open source RDBMS options, MySQL is available in several different editions and runs on Windows, OS X, Solaris, FreeBSD and other variants of Unix and Linux:
MySQL Classic Edition, available only to independent software vendors, OEMs and value-added resellers, is designed to be an embeddable database for read-intensive applications.
MySQL Community Edition Is the free downloadable version of MySQL available under the
GNU General Public License (GPL).
MySQL Standard Edition Is the entry-level RDBMS offering for online transaction processing
MySQL Enterprise Edition Adds advanced features, management tools (including OEM for
MySQL) and technical support.
MySQL Cluster Carrier Grade Edition is designed for Web and cloud development.
Data types supported by MySQL open source RDBMS: MySQL data types include numeric types,
date and time types, string types (including binary, character and Binary Large Object), and spatial
types. Additionally, MySQL will map certain data types from other DBMS to MySQL data types for
easier portability.
Step 1 — Installing MySQL: There are two ways to install MySQL. You can either use one of the
versions included in the APT package repository by default (which are 5.5 and 5.6), or you can install
the latest version (currently 5.7) by manually adding MySQL’s repository first. You can just use the
mysql-server APT package, which simply installs the latest version packaged for your Linux distribution. To install MySQL this way, update the package index on your server and install the package with apt-get, as shown below.
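On an Ubuntu system this typically amounts to the following two commands (shown as a sketch; the package name may differ if you added MySQL's own repository first):
sudo apt-get update
sudo apt-get install mysql-server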
You’ll be prompted to create a root password during the installation. Choose a secure one and make
sure you remember it, because you’ll need it later. Move on to step two from here.
Step 2 — Configuring MySQL: First, you’ll want to run the included security script. This changes
some of the less secure default options for things like remote root logins and sample users.
sudo mysql_secure_installation
This will prompt you for the root password you created in step one. You can press ENTER to
accept the defaults for all the subsequent questions, with the exception of the one that asks if
you’d like to change the root password. You just set it in step one, so you don’t have to change
it now.
Next, we’ll initialize the MySQL data directory, which is where MySQL stores its data. How
you do this depends on which version of MySQL you’re running. You can check your version
of MySQL with the following command.
mysql --version
If you’re using a version of MySQL earlier than 5.7.6, you should initialize the data directory by
running mysql_install_db.
sudo mysql_install_db
Step 3 — Testing MySQL: Regardless of how you installed it, MySQL should have started running
automatically. To test this, check its status.
service mysql status
If MySQL isn’t running, you can start it with sudo service mysql start.
For an additional check, you can try connecting to the database using the mysqladmin tool, which is a
client that lets you run administrative commands. For example, this command says to connect to
MySQL as root (-u root), prompt for a password (-p), and return the version.
mysqladmin -p -u root version
Conclusion: Studied concept of relational databases, MySQL DBMS and steps to install MySQL on
Ubuntu O.S.
ASSIGNMENT NO: 2
Title: Design and Develop SQL DDL statements which demonstrate the use of SQL objects such as Table, View, Index, Sequence, Synonym.
Objectives: To learn SQL DDL statements which demonstrate the use of SQL objects such as Table, View, Index, Sequence, Synonym.
Outcomes: Students will be able to learn concepts of DDL commands.
Requirements:
Software Requirements: Maria DB, Fedora 20, MYSQL
Hardware Requirements: CPU: Intel Core or Xeon 3GHz (or Dual Core 2GHz) or equal AMD, Cores:
Single (Dual/Quad Core is recommended), RAM: 4 GB (6 GB recommended)
Theory:
1. Data Definition Language: DDL (Data Definition Language) statements are used to create, delete,
or change the objects of a database. Typically, a database administrator is responsible for using DDL statements on production databases in a large database system. It contains the commands used to create
and destroy databases and database objects. These commands will primarily be used by database
administrators during the setup and removal phases of a database project.
Table Creation:
Rules:
Reserved words cannot be used.
Underscore, numerals, letters are allowed but not blank space.
Maximum length for the table name is 30 characters.
Two different tables should not have the same name.
We should specify a unique column name.
We should specify a proper data type along with its width.
We can include a "not null" constraint when needed. By default, a column allows null values.
DDL Commands:
a) Create Table Command:
The CREATE TABLE statement is used to create a new table in a database.
Syntax:
CREATE TABLE <TABLE_NAME>
(
column_name1 datatype1(size),
column_name2 datatype2(size),
column_name3 datatype3(size),
column_name4 datatype4(size)
……
);
Example:
CREATE TABLE Student
( student_id INT,
name VARCHAR(100),
age INT);
The above example will create a new table named Student in the current database with 3 columns, namely student_id, name and age, where the column student_id will store only integers, name will hold up to 100 characters, and age will again store only integer values.
Example:
CREATE TABLE SALARY AS SELECT ID, SALARY FROM CUSTOMERS;
The above example creates a table SALARY using the CUSTOMERS table, containing the customer ID and customer SALARY fields.
b) Alter Table Command:
The ALTER TABLE statement is used to add, drop, or modify columns and constraints in an existing table.
Syntax:
The basic syntax of an ALTER TABLE command to add a "New Column" in an existing table is as follows:
ALTER TABLE table_name ADD column_name datatype;
The basic syntax of an ALTER TABLE command to “Drop Column” in an existing table is as
follows:
ALTER TABLE table_name DROP COLUMN column_name;
The basic syntax of an ALTER TABLE command to change the “Data Type” of a column in a
table is as follows:
ALTER TABLE table_name MODIFY COLUMN column_name datatype;
The basic syntax of an ALTER TABLE command to add a “Not Null” constraint to a column in
a table is as follows:
ALTER TABLE table_name MODIFY column_name datatype NOT NULL;
The basic syntax of ALTER TABLE to “Add Unique Constraint” to a table is as follows:
ALTER TABLE table_name
ADD CONSTRAINT MyUniqueConstraint UNIQUE(column1, column2...);
The basic syntax of an ALTER TABLE command to “Add Check Constraint” to a table is as
follows:
ALTER TABLE table_name ADD CONSTRAINT MyUniqueConstraint CHECK
(CONDITION);
The basic syntax of an ALTER TABLE command to “Add Primary Key” constraint to a table is
as follows:
ALTER TABLE table_name ADD CONSTRAINT MyPrimaryKey PRIMARY KEY
(column1, column2...);
The basic syntax of an ALTER TABLE command to “Drop Constraint” from a table is as
follows:
ALTER TABLE table_name DROP INDEX MyUniqueConstraint;
The basic syntax of an ALTER TABLE command to “Drop Primary Key” constraint from a
table is as follows.
ALTER TABLE table_name DROP PRIMARY KEY;
Example 1:
Following is the example to ADD a New Column to an existing table named Customers:
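A statement of the following form adds the column (a sketch; the original statement is not shown, and the Gender column's data type is an assumption):
ALTER TABLE Customers ADD Gender CHAR(1);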
Now, the Customers table is changed and a new column named Gender will be added to the Customers table.
Example 2:
Following is the example to DROP Gender column from the existing table.
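A statement of the following form drops the column (a sketch based on the surrounding text):
ALTER TABLE Customers DROP COLUMN Gender;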
Now, the Customers table is changed and the Gender column will be dropped from the
Customers table.
c) Renaming a table:
To rename a table, use the RENAME option of the ALTER TABLE statement. The following
example renames table test_tbl to alt_test:
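A statement of the following form performs the rename (a sketch using the table names mentioned above):
ALTER TABLE test_tbl RENAME TO alt_test;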
d) Drop Table Command:
The DROP TABLE statement is used to remove a table definition and all its data from the database.
Syntax:
DROP TABLE table_name;
Example:
DROP TABLE personal_info;
2. MySQL Views:
A view is a virtual table based on the result set of a SELECT statement.
Syntax:
CREATE [OR REPLACE] VIEW view_name AS
SELECT columns
FROM tables
[WHERE conditions];
The CREATE VIEW statement creates a new view, or replaces an existing view if
the OR REPLACE clause is given. If the view does not exist, CREATE OR REPLACE
VIEW is the same as CREATE VIEW. If the view does exist, CREATE OR REPLACE
VIEW replaces it.
The select_statement is a SELECT statement that provides the definition of the view.
The select_statement can select from base tables or other views.
The view definition is “frozen” at creation time and is not affected by subsequent changes to the
definitions of the underlying tables. For example, if a view is defined as SELECT * on a table,
new columns added to the table later do not become part of the view, and columns dropped
from the table will result in an error when selecting from the view.
A view belongs to a database. By default, a new view is created in the default database. To
create the view explicitly in a given database, use db_name.view_name syntax to qualify the
view name with the database name:
CREATE VIEW test.v AS SELECT * FROM t;
A view can be created from many kinds of SELECT statements. It can refer to base tables or
other views. It can use joins, UNION, and subqueries. The SELECT need not even refer to any
tables:
CREATE VIEW v_today (today) AS SELECT CURRENT_DATE;
The following example defines a view that selects two columns from another table as well as an
expression calculated from those columns:
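For instance, a view of the following form (a sketch; the table t and its qty and price columns are hypothetical) exposes two columns plus a value computed from them:
CREATE VIEW v_value AS SELECT qty, price, qty * price AS value FROM t;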
3. MySQL Index:
An index is used to speed up the retrieval of rows from a table. The CREATE INDEX statement creates an index on one or more columns of an existing table.
Example:
CREATE INDEX autid ON newauthor(aut_id);
The above MySQL statement will create an INDEX on 'aut_id' column for 'newauthor' table.
Example:
CREATE UNIQUE INDEX newautid ON newauthor(aut_id);
The above MySQL statement will create an UNIQUE INDEX on 'aut_id' column for
'newauthor' table.
4. MySQL Sequence:-
In MySQL, a sequence is a list of integers generated in the ascending order i.e., 1,2,3… Many
applications need sequences to generate unique numbers mainly for identification e.g., customer
ID in CRM, employee numbers in HR, equipment numbers in services management system, etc.
Example:
The following statement creates a table named employees in which the emp_no column is an AUTO_INCREMENT column:
CREATE TABLE employees(emp_no INT(4) AUTO_INCREMENT PRIMARY KEY,
first_name VARCHAR(50), last_name VARCHAR(50));
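Rows inserted without a value for emp_no then receive the next number in the sequence automatically. A minimal sketch (the sample names are hypothetical):
INSERT INTO employees(first_name, last_name) VALUES ('Asha', 'Kulkarni');
INSERT INTO employees(first_name, last_name) VALUES ('Ravi', 'Deshmukh');
SELECT * FROM employees; -- emp_no is generated automatically as 1, 2, ...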
5. MySQL Synonym:-
A synonym is an alias or alternate name for a table, view, sequence, or other schema object.
They are used mainly to make it easy for users to access database objects owned by other users.
Because a synonym is just an alternate name for an object, it requires no storage other than its
definition. When an application uses a synonym, the DBMS forwards the request to the
synonym's underlying base object.
There are two categories of synonyms, public and private. A public synonym can be used to allow easy access to an object for all system users. In fact, the individual creating a public synonym does not own the synonym; rather, it belongs to the PUBLIC user group that exists within Oracle. Private synonyms, on the other hand, belong to the system user that creates them and reside in that user's schema.
Syntax:
CREATE [PUBLIC|PRIVATE] SYNONYM [synonym name] FOR TABLE|VIEW;
Example:
CREATE SYNONYM emp_syn FOR employees;
The above statement will create a synonym named emp_syn of employees table.
Dropping a Synonym
A user can drop the synonym which it owns. A synonym can be dropped as follows:
DROP SYNONYM emp_syn;
Conclusion: Studied different SQL objects such as Table, View, Index, Sequence and Synonym and implemented them successfully.
ASSIGNMENT NO.: 3
Title: Design atleast 10 SQL queries for suitable database application using SQL DML
statements: Insert, Update, Delete with operators, functions, and set operator.
Objective: To understand the concept of DML statement like Insert, Select, Update, operators
and set operator.
Outcome: Students will be able to execute basic DML commands such as insert, select, update,
delete with operators, functions and set operators.
Requirements:
Software Requirements: Maria DB, Fedora 20, MYSQL
Hardware Requirements: CPU: Intel Core or Xeon 3GHz (or Dual Core 2GHz) or equal
AMD, Cores: Single (Dual/Quad Core is recommended), RAM: 4 GB (6 GB recommended)
Theory:
1. INSERT Command:
The INSERT command in MySQL is used to add new records to an existing table, as shown below.
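Syntax:
INSERT INTO table_name (column1, column2, ..., columnN) VALUES (value1, value2, ..., valueN);
Example (a sketch; it assumes the Customers table used in the later examples, with ID, Name and Address columns):
INSERT INTO Customers (ID, Name, Address) VALUES (6, 'Komal', 'Pune');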
2. SELECT Command:
The SELECT statement is used to fetch the data from a database table which returns this data in
the form of a result table. These result tables are called result-sets.
Syntax:
The basic syntax of the SELECT statement is as follows −
SELECT column1, column2, columnN FROM table_name;
Here, column1, column2... are the fields of a table whose values are to be fetched. To fetch all
the records of the table, the following syntax is used:
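SELECT * FROM table_name;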
3. UPDATE Command:
The UPDATE command is used to modify the existing records in a table. The WHERE clause
can be used with the UPDATE command to update the selected rows, otherwise all the rows
would be affected.
Syntax:
UPDATE table_name
SET column1 = value1, column2 = value2...., columnN = valueN
WHERE [condition];
Example:
The following query will update the Address for a customer whose ID number is 6 in the table.
UPDATE Customers SET Address = 'Pune' WHERE ID = 6;
4. DELETE Command:
The DELETE command is used to delete the existing records from a table. The WHERE clause
can be used with a DELETE command to delete the selected rows, otherwise all the records
would be deleted.
Syntax:
DELETE FROM table_name WHERE [condition];
Example 1:
The following command will DELETE a customer, whose ID is 6.
DELETE FROM customers WHERE ID = 6;
Example 2:
The following command will delete all the rows from the Customers table −
DELETE FROM Customers;
5. Aggregate Functions:
MySQL aggregate functions return a single value after performing a calculation on a set of values. In general, aggregate functions ignore NULL values. Often, aggregate functions are accompanied by the GROUP BY clause of the SELECT statement (see the example after the function list below).
Following are the MySQL Aggregate Functions:
1) AVG:
The AVG function is used to find the average value.
Syntax:
SELECT AVG(column_name) FROM table_name;
Example:
In the following example the average order amount is displayed from the orders table.
SELECT AVG(amount) FROM orders;
2) SUM:
The SUM function is used to find the sum or total.
Syntax:
SELECT SUM(column_name) FROM table_name;
Example:
In the following example, the sum i.e., total of the order amount will be displayed from the
orders table.
SELECT SUM(amount) FROM orders;
3) MIN:
The MIN function is used to find the minimum value.
Syntax:
SELECT MIN(column_name) FROM table_name;
Example:
In the following example, the minimum order amount will be displayed from the orders table.
SELECT MIN(amount) FROM orders;
4)MAX:
The MAX function is used to find the maximum value.
Syntax:
SELECT MAX(column_name) FROM table_name;
Example:
In the following example, the maximum order amount will be displayed from the orders table:
SELECT MAX(amount) FROM orders;
5)COUNT:
The COUNT function is used to find the number of rows matching the given condition.
Syntax:
SELECT COUNT (column_name) FROM table_name;
Example:
In the following example, we are listing total number of orders in the orders table.
SELECT COUNT(*) FROM orders;
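Aggregate functions are frequently combined with the GROUP BY clause to produce one value per group. A minimal sketch (assuming the orders table also has a customer_id column, which is not shown above):
SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id;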
6. Ordering Operation:
The MySQL ORDER BY clause is used to sort the records in your result set.
Syntax:
SELECT expressions FROM tables
[WHERE conditions]
ORDER BY expression [ ASC | DESC ];
Parameters or Arguments:
ASC - Optional. It sorts the result set in ascending order by expression (default, if no modifier is provided).
DESC - Optional. It sorts the result set in descending order by expression.
Example:
SELECT last_name, first_name, city FROM contacts WHERE last_name = 'Johnson'
ORDER BY city DESC;
The above MySQL ORDER BY example would return all records sorted by the city field in
descending order.
7. Numeric Functions:
MySQL numeric functions are used primarily for numeric manipulation and/or mathematical
calculations.
Function Description
ABS Returns the absolute value of a number
AVG Returns the average value of an expression
CEIL Returns the smallest integer value that is greater than or equal to a number
COS Returns the cosine of a number
COT Returns the cotangent of a number
COUNT Returns the number of records in a select query
DEGREES Converts a radian value into degrees
DIV Used for integer division
EXP Returns e raised to the power of number
FLOOR Returns the largest integer value that is less than or equal to a number
GREATEST Returns the greatest value in a list of expressions
LEAST Returns the smallest value in a list of expressions
LOG Returns the natural logarithm of a number or the logarithm of a number to
a specified base
LOG10 Returns the base-10 logarithm of a number
LOG2 Returns the base-2 logarithm of a number
MAX Returns the maximum value of an expression
MIN Returns the minimum value of an expression
MOD Returns the remainder of n divided by m
PI Returns the value of PI displayed with 6 decimal places
POW Returns m raised to the nth power
POWER Returns m raised to the nth power
RADIANS Converts a value in degrees to radians
ROUND Returns a number rounded to a certain number of decimal places
SIN Returns the sine of a number
SQRT Returns the square root of a number
SUM Returns the summed value of an expression
TAN Returns the tangent of a number
TRUNCATE Returns a number truncated to a certain number of decimal places
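A few of these functions can be tried directly, since MySQL allows a SELECT without a FROM clause; for example:
SELECT ABS(-7), CEIL(3.2), FLOOR(3.8), MOD(10, 3), POWER(2, 5), ROUND(123.456, 2), SQRT(16), TRUNCATE(123.456, 1);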
8. Date Functions:
MySQL provides many useful date functions that allow you to manipulate date effectively.
1) To get the current date and time, you use NOW() function:
SELECT NOW();
2) To get only date part of a DATETIME value, you use the DATE() function.
SELECT DATE(NOW());
3) To get the current system date, you use CURDATE() function as follows:
SELECT CURDATE();
4) To format a date value, you use DATE_FORMAT function. The following statement
formats the date asmm/dd/yyyy using the date format pattern %m/%d/%Y :
SELECT DATE_FORMAT(CURDATE(), '%m/%d/%Y') today;
9. Set Operations:
SQL supports a few set operations which can be performed on table data. These are used to get meaningful results from the data stored in the table, under different special conditions.
Types of SET operations:
1. Union
2. Union All
3. Intersect
4. Minus
1. Union
Union is used to combine the results of two or more SELECT statements. However, it eliminates duplicate rows from its result set. In the case of UNION, the number of columns and the data types must be the same in both tables on which the UNION operation is applied.
Fig: Union
Example of Union:
SELECT * FROM First UNION SELECT * FROM Second;
2. Union All
This operation is similar to Union. But it also shows the duplicate rows.
Fig:Union ALL
Example of Union All:
SELECT * FROM First UNION ALL SELECT * FROM Second;
3. Intersect
The Intersect operation is used to combine two SELECT statements, but it only returns the records which are common to both SELECT statements. In the case of Intersect, the number of columns and the data types must be the same.
NOTE: MySQL does not support INTERSECT operator.
Fig:Intersect
Example of Intersect:
SELECT * FROM First INTERSECT SELECT * FROM Second;
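Since MySQL does not support INTERSECT, the same result is commonly obtained with a join or an IN subquery. A sketch, assuming both tables share a key column named id:
SELECT * FROM First WHERE id IN (SELECT id FROM Second);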
4. Minus
The Minus operation combines the results of two SELECT statements and returns only those rows in the final result which belong to the first result set.
Fig:Minus
Example of Minus:
SELECT * FROM First MINUS SELECT * FROM Second;
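MySQL also does not support the MINUS operator; a common workaround is a NOT IN (or NOT EXISTS) subquery. A sketch, again assuming a shared key column named id:
SELECT * FROM First WHERE id NOT IN (SELECT id FROM Second);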
Conclusion: Implemented all SQL DML commands like Insert, Select, Update, Delete with operators,
functions and set operator.
ASSIGNMENT NO.: 4
Title: Design at least 10 SQL queries for suitable database application using SQL DML statements: all
types of Join, Sub-query and View.
Objective: To understand the concept of DML statements: all types of Join, Sub-query and View.
Outcome: Students will be able to execute basic DML commands: types of Joins, sub-queries and
view.
Requirements:
Software Requirements: Maria DB, Fedora 20, MYSQL
Hardware Requirements: CPU: Intel Core or Xeon 3GHz (or Dual Core 2GHz) or equal AMD,
Cores: Single (Dual/Quad Core is recommended), RAM: 4 GB (6 GB recommended)
Theory:
1. Joins:
SQL JOIN is used to fetch data from two or more tables, joined so that it appears as a single set of data. It is used for combining columns from two or more tables by using values common to both tables. The JOIN keyword is used in SQL queries for joining two or more tables. The minimum number of join conditions required for joining n tables is (n-1). A table can also be joined to itself, which is known as a Self Join.
Types of Joins:
Following are the types of JOIN:
1. Inner Join
2. Outer Join
3. Self Join
1) Inner Join:
Syntax:
SELECT columns FROM table1 INNER JOIN table2
ON table1.column = table2.column;
Visual Illustration:
In the following visual diagram, the MySQL INNER JOIN returns the shaded area:
The MySQL INNER JOIN would return the records where table1 and table2 intersect.
Example:
SELECT suppliers.supplier_id, suppliers.supplier_name, orders.order_date FROM suppliers
INNER JOIN orders ON suppliers.supplier_id = orders.supplier_id;
This MySQL INNER JOIN example would return all rows from the suppliers and orders tables
where there is a matching supplier_id value in both the suppliers and orders tables.
2) Outer Join:
a) Left Outer Join:
The LEFT JOIN keyword returns all records from the left table (table1), and the matched records from the right table (table2). The result is NULL from the right side if there is no match. In some databases LEFT JOIN is called LEFT OUTER JOIN.
Syntax:
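The syntax and first example below are a reconstruction in line with the RIGHT JOIN section that follows; the suppliers/orders example matches the tables described in Example 2.
SELECT column_name(s) FROM table1
LEFT JOIN table2 ON table1.column_name = table2.column_name;
Example 1:
SELECT suppliers.supplier_id, suppliers.supplier_name, orders.order_date FROM suppliers
LEFT JOIN orders ON suppliers.supplier_id = orders.supplier_id;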
The above LEFT OUTER JOIN example will return all rows from the suppliers table and only
those rows from the orders table where the joined fields are equal. If a supplier_id value in the
suppliers table does not exist in the orders table, all fields in the orders table will display as
<null>in the result set.
Example 2:
Consider a table called suppliers with two fields (supplier_id and supplier_name). It contains
the following data:
supplier_id supplier_name
10000 IBM
10001 Hewlett Packard
10002 Microsoft
10003 NVIDIA
Consider the second table called orders with three fields (order_id, supplier_id, and
order_date). It contains the following data:
If we run the SELECT statement (that contains a LEFT OUTER JOIN) below:
SELECT suppliers.supplier_id, suppliers.supplier_name, orders.order_date FROM suppliers
LEFT JOIN orders ON suppliers.supplier_id = orders.supplier_id;
The rows for Microsoft and NVIDIA will be included because a LEFT OUTER JOIN was used.
However, the order_date field for those records contains a <null> value.
b) Right Outer Join:
Syntax:
SELECT column_name(s) FROM table1
RIGHT JOIN table2 ON table1.column_name = table2.column_name;
Example 1:
SELECT orders.order_id, orders.order_date, suppliers.supplier_name FROM suppliers RIGHT
JOIN orders ON suppliers.supplier_id = orders.supplier_id;
This RIGHT OUTER JOIN example will return all rows from the orders table and only those
rows from the suppliers table where the joined fields are equal. If a supplier_id value in the
orders table does not exist in the suppliers table, all fields in the suppliers table will display as
<null> in the result set.
Example 2:
Consider a table called suppliers with two fields (supplier_id and supplier_name). It contains
the following data:
supplier_id supplier_name
10000 Apple
10001 Google
Consider the second table called orders with three fields (order_id, supplier_id, and
order_date). It contains the following data:
If we run the SELECT statement (that contains a RIGHT OUTER JOIN) below:
SELECT orders.order_id, orders.order_date, suppliers.supplier_name FROM suppliers RIGHT
JOIN orders ON suppliers.supplier_id = orders.supplier_id;
The row for 500127 (order_id) will be included because a RIGHT OUTER JOIN was used.
However, the supplier_name field for that record will contain a <null> value.
c) Full Outer Join:
Syntax:
SELECT column_name(s) FROM table1
FULL OUTER JOIN table2 ON table1.column_name = table2.column_name;
Example:
Consider the following two tables:
Table 1 − CUSTOMERS Table is as follows:
+----+----------+-----+-----------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+-----------+----------+
| 1 | Ramesh | 32 | Ahmedabad | 2000.00 |
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 8500.00 |
| 6 | Komal | 22 | MP | 4500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+-----------+----------+
Table 2 − ORDERS Table is as follows.
+-----+---------------------+-------------+--------+
|OID | DATE | CUSTOMER_ID | AMOUNT |
+-----+---------------------+-------------+--------+
| 102 | 2009-10-08 00:00:00 | 3 | 3000 |
| 100 | 2009-10-08 00:00:00 | 3 | 1500 |
| 101 | 2009-11-20 00:00:00 | 2 | 1560 |
| 103 | 2008-05-20 00:00:00 | 4 | 2060 |
+-----+---------------------+-------------+--------+
Now, let us join these two tables using FULL JOIN as follows.
SELECT ID, NAME, AMOUNT, DATE FROM CUSTOMERS FULL JOIN ORDERS ON
CUSTOMERS.ID = ORDERS.CUSTOMER_ID;
Note: MySQL does not support FULL JOIN. In this case, use UNION ALL clause to combine
these two JOINS as shown below:
SELECT ID, NAME, AMOUNT, DATE FROM CUSTOMERS LEFT JOIN ORDERS ON
CUSTOMERS.ID = ORDERS.CUSTOMER_ID UNION ALL SELECT ID, NAME, AMOUNT,
DATE FROM CUSTOMERS RIGHT JOIN ORDERS ON CUSTOMERS.ID =
ORDERS.CUSTOMER_ID
3) Self Join: A self JOIN is a regular join, but the table is joined with itself.
Syntax:
SELECT column_name(s) FROM table1 T1, table1 T2 WHERE condition;
Example:
Consider the following table.
CUSTOMERS Table is as follows:
+----+----------+-----+-----------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+-----------+----------+
| 1 | Ramesh | 32 | Ahmedabad | 2000.00 |
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 8500.00 |
| 6 | Komal | 22 | MP | 4500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+-----------+----------+
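A self-join query on this table might, for instance, list pairs of customers where the first earns less than the second (an illustrative sketch, not taken from the original text):
SELECT a.ID, b.ID, a.NAME, b.NAME, a.SALARY FROM CUSTOMERS a, CUSTOMERS b WHERE a.SALARY < b.SALARY;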
1.3. View:
A view is a data object which does not contain any data of its own. The contents of a view are derived from a base table. Views are operated on just like base tables, but they do not store any data of their own. The difference between a view and a table is that views are definitions built on top of other tables (or views). If data is changed in the underlying table, the same change is reflected in the view. A view can be built on top of a single table or multiple tables.
1) Creating View:
Syntax:
CREATE [OR REPLACE] VIEW view_name AS SELECT columns FROM tables [WHERE
conditions];
Example:
CREATE VIEW hardware_suppliers AS SELECT supplier_id, supplier_name FROM suppliers
WHERE category_type = 'Hardware';
This CREATE VIEW example would create a virtual table based on the result set of the SELECT
statement. You can now query the MySQL VIEW as follows:
SELECT * FROM hardware_suppliers;
2) View with a Subquery:
The statement shown below creates a view 'view_purchase' taking all the records of the invoice_no, book_name and cate_id columns of the purchase table, where the category id (cate_id) satisfies the condition defined within a subquery.
The subquery retrieves only the cate_ids from the book_mast table which contain books with 201 pages.
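A statement of the following form would produce such a view (a sketch; the page-count column name no_page in book_mast is an assumption, as the original statement is not shown):
CREATE VIEW view_purchase AS SELECT invoice_no, book_name, cate_id FROM purchase
WHERE cate_id = (SELECT cate_id FROM book_mast WHERE no_page = 201);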
3) Alter a View:
The definition of a VIEW in MySQL can be modified without dropping it by using the ALTER
VIEW statement.
Syntax:
ALTER VIEW view_name AS SELECT columns FROM table WHERE conditions;
Example:
ALTER VIEW hardware_suppliers AS SELECT supplier_id, supplier_name, address, city
FROM suppliers WHERE category_type = 'Hardware';
This ALTER VIEW example in MySQL would update the definition of the VIEW
called hardware_suppliers without dropping it. In this example, we are adding
the address and city columns to the VIEW.
4) Drop View:
Once a VIEW has been created in MySQL, it can be dropped or deleted with the DROP VIEW
statement.
Syntax:
DROP VIEW [IF EXISTS] view_name;
Example:
DROP VIEW hardware_suppliers;
This DROP VIEW example will drop/delete the MySQL VIEW called hardware_suppliers.
1.4. Sub-queries:
In MySQL, a subquery is a query within a query. You can create subqueries within your SQL
statements. These subqueries can reside in the WHERE clause, the FROM clause, or the SELECT
clause.
The main advantages of subqueries are:
They allow queries that are structured so that it is possible to isolate each part of a statement.
They provide alternative ways to perform operations that would otherwise require complex
joins and unions.
Many people find subqueries more readable than complex joins or unions. Indeed, it was the
innovation of subqueries that gave people the original idea of calling the early SQL “Structured
Query Language.”
Example:
SELECT * FROM t1 WHERE column1 = (SELECT column1 FROM t2);
In this example, SELECT * FROM t1 ... is the outer query (or outer statement), and (SELECT
column1 FROM t2) is the subquery. We say that the subquery is nested within the outer query, and
in fact it is possible to nest subqueries within other subqueries, to a considerable depth. A subquery
must always appear within parentheses.
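A query of the following form would do this (a sketch; the table and column names follow the description below):
SELECT * FROM employees
WHERE (City, Country) IN (SELECT City, Country FROM customers);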
This query finds all the employees who live in the same city and country as some customer. The subquery returns a result set of two columns (City and Country), and each row of the employees table is compared against the rows of that result set.
Conclusion: Thus have implemented SQL DML statements: all types of Joins, Sub-query and
View successfully.
ASSIGNMENT NO: 5
Title: Unnamed PL/SQL code block: Use of Control structure and Exception handling is mandatory.
Write a PL/SQL block of code for the following requirements: - Schema:
1. Borrower (Rollin, Name, DateofIssue, NameofBook, Status)
2. Fine (Roll_no,Date,Amt)
Accept roll_no & name of book from the user.
Check the number of days (from date of issue); if the days are between 15 and 30, then the fine amount will be Rs 5 per day.
If the number of days > 30, the per-day fine will be Rs 50 per day, and for the days less than 30, Rs 5 per day.
After submitting the book, status will change from I to R.
If condition of fine is true, then details will be stored into fine table.
Frame the problem statement for writing PL/SQL block in line with above statement.
Objective: Understand the concept of Unnamed PL/SQL code, different Control Structure and
exception handling.
Requirements:
Software Requirements: Maria DB, Fedora 20, MYSQL
Hardware Requirements: CPU: Intel Core or Xeon 3GHz (or Dual Core 2GHz) or equal AMD,
Cores: Single (Dual/Quad Core is recommended), RAM: 4 GB (6 GB recommended)
Theory:
The anonymous block has three basic sections that are the declaration, execution, and exception
handling. Only the execution section is mandatory and the others are optional.
The declaration section allows you to define data types, structures, and variables. You often
declare variables in the declaration section by giving those names, data types, and initial values.
The execution section is required in a block structure and it must have at least one statement.
The execution section is the place where you put the execution code or business logic code.
You can use both procedural and SQL statements inside the execution section.
The exception handling section starts with the EXCEPTION keyword. The exception section is the place where you put the code to handle exceptions. You can either catch or handle exceptions in the exception section.
Syntax:
IF condition THEN statement1;
ELSEIF condition2 THEN statement2;
END IF;
With each iteration of the loop, the sequence of statements is executed, then control resumes at the top
of the loop. You use an EXIT statement to stop looping and prevent an infinite loop. You can place one
or more EXIT statements anywhere inside a loop, but not outside a loop. There are two forms of EXIT statements: EXIT and EXIT WHEN.
Exception:
PL/SQL allows programmers to catch such conditions using an EXCEPTION block in the program so that an appropriate action is taken against the error condition. There are two types of exceptions:
1. System-defined exceptions
2. User-defined exceptions
Raising Exception:
Exceptions are raised automatically by the database server whenever there is an internal database error, but an exception can also be raised explicitly by the programmer using the RAISE command. Following is the syntax for raising an exception.
Syntax:
DECLARE
exception_name EXCEPTION;
BEGIN
IF condition THEN
RAISE exception_name;
END IF;
EXCEPTION
WHEN exception_name THEN
statement;
END;
User-defined Exceptions:
Steps to be followed to use user-defined exceptions:
• They should be explicitly declared in the declaration section.
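MySQL/MariaDB does not support truly anonymous blocks, so in practice the required logic is wrapped in a stored procedure. The sketch below follows the fine rules and the borrower/Fine tables used in the program transcript that follows; the exact day counting in the 15-30 band (days - 15) is an assumption inferred from the 15*5 term in the fragment shown later:
DELIMITER //
CREATE PROCEDURE calc_fine(IN p_roll INT, IN p_book VARCHAR(10))
BEGIN
DECLARE v_days INT;
DECLARE v_fine INT DEFAULT 0;
-- number of days since the book was issued
SELECT DATEDIFF(CURDATE(), dateofissue) INTO v_days
FROM borrower WHERE Roll = p_roll AND bookname = p_book;
IF v_days > 30 THEN
SET v_fine = (v_days - 30) * 50 + 15 * 5;  -- Rs 50/day beyond 30 days plus Rs 5/day for days 16-30
ELSEIF v_days >= 15 THEN
SET v_fine = (v_days - 15) * 5;            -- Rs 5/day within the 15-30 day band (assumed counting)
END IF;
IF v_fine > 0 THEN
INSERT INTO Fine VALUES (p_roll, CURDATE(), v_fine);
END IF;
UPDATE borrower SET status = 'R' WHERE Roll = p_roll AND bookname = p_book;  -- book returned
END //
DELIMITER ;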
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhostsinhgad]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 5.5.43-MariaDB MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help.Type '\c' to clear the current input statement.
MariaDB [(none)]> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [test]> create table Fine(Roll int(2), CurrentDate date, Amt int(3));
Query OK, 1 row affected (0.01 sec)
MariaDB [test]> create table borrower(Roll int(2), SName varchar(10), dateofissue date, bookname varchar(10), status varchar(4));
Query OK, 0 row affected (0.02 sec)
MariaDB [test]> insert into borrower values(1,'Maya','2018-06-09','DBMS','No');
Query OK, 1 row affected (0.01 sec)
-> set fine_amt=temp*5;
-> insert into Fine values(rollno,curdate(),fine_amt);
-> else
-> set temp=Days-30;
-> set fine_amt=(temp*50)+(15*5);
-> insert into Fine values(rollno,curdate(),fine_amt);
-> end if;
-> end;
-> #
Query OK, 0 rows affected (0.08 sec)
-> else
-> set temp=Days-30;
-> set fine_amt=(temp*50)+(15*5);
-> insert into Fine values(rollno,curdate(),fine_amt);
-> end if;
-> end;
-> @
Query OK, 0 rows affected (0.00 sec)
+------+---------+-------------+----------+--------+
| Roll | SName | dateofissue | bookname | status |
+------+---------+-------------+----------+--------+
| 1 | Maya | 2018-06-09 | DBMS | No |
| 2 | Kavya | 2018-01-09 | SEPM | R |
| 3 | RIYA | 2018-02-09 | ISEE | R |
| 4 | Priya | 2018-04-09 | TOC | R |
| 5 | Piya | 2018-08-22 | ADSL | NO |
| 6 | praniti| 2018-08-22 | MICRO | R |
+------+---------+-------------+----------+--------+
6 rows in set (0.00 sec)
Conclusion: Successfully implemented the PL/SQL code with proper understanding of different
control structure and exception handling.
ASSIGNMENT NO: 6
Title: Cursors: (All types: Implicit, Explicit, Cursor FOR Loop, Parameterized Cursor)
Write a PL/SQL block of code using parameterized Cursor that will merge the data available in the
newly created table N_RollCall with the data available in the table O_RollCall. If the data in the first
table already exist in the second table then that data should be skipped.
Objective: To study cursor PL/SQL block of code.
Requirements:
Software Requirements: Maria DB, Fedora 20, MYSQL
Hardware Requirements: CPU: Intel Core or Xeon 3GHz (or Dual Core 2GHz) or equal AMD,
Cores: Single (Dual/Quad Core is recommended), RAM: 4 GB (6 GB recommended)
Theory:
PL/SQL Cursor
When an SQL statement is processed, Oracle creates a memory area, known as the context area, for
processing an SQL statement, which contains all the information needed for processing the statement;
for example, the number of rows processed, etc.
A cursor is a pointer to this context area. PL/SQL controls the context area through a cursor. A
cursor holds the rows (one or more) returned by a SQL statement. The set of rows the cursor holds is
referred to as the active set. You can name a cursor so that it could be referred to in a program to fetch
and process the rows returned by the SQL statement, one at a time. There are two types of cursors
Declare a Cursor:
Description
A cursor is a SELECT statement that is defined within the declaration section of PL/SQL code. The
three different syntaxes to declare a cursor.
CURSOR cursor_name
IS
SELECT_statement;
CURSOR cursor_name
RETURN field%ROWTYPE
IS
SELECT_statement;
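The third form, a parameterized cursor (a sketch in Oracle PL/SQL syntax, in line with the parameterized cursor called for in the title), accepts arguments when it is opened:
CURSOR cursor_name (parameter_list)
IS
SELECT_statement;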
Implicit cursors
Explicit cursors
1. %FOUND - Returns TRUE if an INSERT, UPDATE, or DELETE statement affected one or more rows or a SELECT INTO statement returned one or more rows. Otherwise, it returns FALSE.
2. %NOTFOUND - The logical opposite of %FOUND. It returns TRUE if an INSERT, UPDATE, or DELETE statement affected no rows, or a SELECT INTO statement returned no rows. Otherwise, it returns FALSE.
3. %ISOPEN - Always returns FALSE for implicit cursors, because Oracle closes the SQL cursor automatically after executing its associated SQL statement.
4. %ROWCOUNT - Returns the number of rows affected by an INSERT, UPDATE, or DELETE statement, or returned by a SELECT INTO statement.
CURSOR C_name IS
SELECT statement;
OPEN C_name;
CLOSE C_name;
Working with MySQL cursor
The cursor declaration must come after any variable declarations. If you declare a cursor before the variable declarations, MySQL will issue an error. A cursor must always be associated with a SELECT statement.
Next, open the cursor by using the OPEN statement. The OPEN statement initializes the result set
for the cursor; therefore, call the OPEN statement before fetching rows from the result set.
OPEN cursor_name;
Then, use the FETCH statement to retrieve the next row pointed by the cursor and move the
cursor to the next row in the result set.
FETCH cursor_name INTO variables list;
After that, check to see if there is any row available before fetching it. Finally, call
the CLOSE statement to deactivate the cursor and release the memory associated with it as follows:
CLOSE cursor_name;
When the cursor is no longer used, close it.
When working with a MySQL cursor, you must also declare a NOT FOUND handler to handle the situation when the cursor cannot find any row. Each time you call the FETCH statement, the cursor attempts to read the next row in the result set. When the cursor reaches the end of the result set, it cannot get the data, and a condition is raised. The handler is used to handle this condition.
To declare a NOT FOUND handler, use the following syntax:
DECLARE CONTINUE HANDLER FOR NOT FOUND SET finished = 1;
The finished is a variable to indicate that the cursor has reached the end of the result set. Notice that the
handler declaration must appear after variable and cursor declaration inside the stored procedures.
The following diagram illustrates how MySQL cursor works.
To develop a stored procedure that builds an email list of all employees in the employees table in the sample database, first declare some variables, a cursor for looping over the emails of employees, and a NOT FOUND handler:
DELIMITER $$
CREATE PROCEDURE build_email_list (INOUT email_list varchar(4000))
BEGIN
DECLARE v_finished INTEGER DEFAULT 0;
DECLARE v_email varchar(100) DEFAULT "";
-- declare cursor for employee email
DEClARE email_cursor CURSOR FOR
SELECT email FROM employees;
-- declare NOT FOUND handler
DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_finished = 1;
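-- (sketch) Remainder of the procedure body, completing the truncated listing above:
-- open the cursor, loop over the rows concatenating each email into email_list, then close the cursor.
OPEN email_cursor;
get_email: LOOP
FETCH email_cursor INTO v_email;
IF v_finished = 1 THEN
LEAVE get_email;
END IF;
-- build the semicolon-separated email list
SET email_list = CONCAT(v_email, ';', email_list);
END LOOP get_email;
CLOSE email_cursor;
END$$
DELIMITER ;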
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhost sinhgad]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4
+--------+--------+-------------+
| rollno | name | total_marks |
+--------+--------+-------------+
| 1 | Ravi | 933 |
| 2 | sagar | 450 |
| 3 | sarita | 1300 |
| 4 | avi | 250 |
| 5 | raj | 675 |
+--------+--------+-------------+
5 rows in set (0.00 sec)
-> BEGIN
-> DECLARE rolln INT;
-> DECLARE stu_name CHAR(50);
-> DECLARE marks INT;
-> DECLARE done INT DEFAULT FALSE;
-> DECLARE cur1 CURSOR FOR SELECT rollno,name,total_marks FROM stud_marks;
-> DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=TRUE;
-> OPEN cur1;
-> read_loop:LOOP
-> FETCH cur1 INTO rolln,stu_name,marks;
-> IF done THEN
-> LEAVE read_loop;
-> END IF;
-> IF(marks>=990 AND marks<=1500)THEN
-> END //
ASSIGNMENT NO.: 7
Requirements:
Software Requirements: Maria DB, Fedora 20, MYSQL
Hardware Requirements: CPU: Intel Core or Xeon 3GHz (or Dual Core 2GHz) or equal AMD,
Cores: Single (Dual/Quad Core is recommended), RAM: 4 GB (6 GB recommended)
Theory:
Procedures and functions are subprograms which can be created and saved in the database as database objects. They can also be called or referred to inside other blocks.
Parameter:
The parameter is a variable or placeholder of any valid PL/SQL data type through which the PL/SQL subprogram exchanges values with the main code. Parameters allow you to give input to the subprograms and to extract output from them.
These parameters should be defined along with the subprograms at the time of creation.
These parameters are included in the calling statement of these subprograms to exchange values with the subprograms.
The datatype of the parameter in the subprogram and in the calling statement should be the same.
The size of the datatype should not be mentioned at the time of parameter declaration, as the size is dynamic for this type.
OUT Parameter:
This parameter is used for getting output from the subprograms.
It is a read-write variable inside the subprograms. Their values can be changed inside the
subprograms.
In the calling statement, these parameters should always be a variable to hold the value from the
current subprograms.
IN OUT Parameter:
This parameter is used for both giving input and for getting output from the subprograms.
It is a read-write variable inside the subprograms. Their values can be changed inside the
subprograms.
In the calling statement, these parameters should always be a variable to hold the value from the
subprograms.
These parameter types should be mentioned at the time of creating the subprograms.
RETURN
RETURN is the keyword that instructs the compiler to switch the control from the subprogram to the
calling statement. In subprogram RETURN simply means that the control needs to exit from the
subprogram. Once the controller finds RETURN keyword in the subprogram, the code after this will be
skipped.
Normally, parent or main block will call the subprograms, and then the control will shift from
those parent blocks to the called subprograms. RETURN in the subprogram will return the control back
to their parent block. In the case of functions RETURN statement also returns the value. The datatype
of this value is always mentioned at the time of function declaration. The datatype can be of any valid
PL/SQL data type.
Procedure in PL/SQL
A Procedure is a subprogram unit that consists of a group of PL/SQL statements. Each procedure in
Oracle has its own unique name by which it can be referred. This subprogram unit is stored as a
database object. Below are the characteristics of this subprogram unit.
Procedures are standalone blocks of a program that can be stored in the database.
Call to these procedures can be made by referring to their name, to execute the PL/SQL
statements.
It is mainly used to execute a process in PL/SQL.
It can have nested blocks, or it can be defined and nested inside the other blocks or packages.
It contains declaration part (optional), execution part, exception handling part (optional).
The values can be passed into the procedure or fetched from the procedure through parameters.
These parameters should be included in the calling statement.
Procedure can have a RETURN statement to return the control to the calling block, but it cannot
return any values through the RETURN statement.
Procedures cannot be called directly from SELECT statements. They can be called from another
block or through EXEC keyword.
Syntax:
CREATE OR REPLACE PROCEDURE
<procedure_name>
(
<parameter_name> IN/OUT <datatype>,
...
)
[ IS | AS ]
<Declaration part>
BEGIN
<Execution part>
EXCEPTION
<Exception handling part>
END;
CREATE PROCEDURE instructs the compiler to create a new procedure. The keyword 'OR REPLACE' instructs the compiler to replace the existing procedure (if any) with the current one.
Procedure name should be unique.
Keyword 'IS' will be used, when the procedure is nested into some other blocks. If the
procedure is standalone then 'AS' will be used. Other than this coding standard, both have the
same meaning.
Procedures: Example
The below example creates a procedure ‘employer_details’ which gives the details of the employee.
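A sketch of such a procedure (assuming an emp_tbl table with first_name, last_name and salary columns, consistent with the function example later in this assignment) is:
CREATE OR REPLACE PROCEDURE employer_details
IS
CURSOR emp_cur IS SELECT first_name, last_name, salary FROM emp_tbl;
BEGIN
FOR emp_rec IN emp_cur
LOOP
dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name || ' ' || emp_rec.salary);
END LOOP;
END;
/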
Function:
A function is a standalone PL/SQL subprogram. Like PL/SQL procedure, functions have a unique
name by which it can be referred. These are stored as PL/SQL database objects. Below are some of the
characteristics of functions.
Functions are a standalone block that is mainly used for calculation purpose.
Function use RETURN keyword to return the value, and the datatype of this is defined at the
time of creation.
A Function should either return a value or raise the exception, i.e. return is mandatory in
functions.
Function with no DML statements can be directly called in SELECT query whereas the
function with DML operation can only be called from other PL/SQL blocks.
It can have nested blocks, or it can be defined and nested inside the other blocks or packages.
It contains declaration part (optional), execution part, exception handling part (optional).
The values can be passed into the function or fetched from the function through the parameters.
These parameters should be included in the calling statement.
Function can also return the value through OUT parameters other than using RETURN.
Since it will always return the value, in calling statement it always accompanies with
assignment operator to populate the variables.
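The general shape of a function definition (a sketch mirroring the procedure syntax above) is:
CREATE OR REPLACE FUNCTION <function_name>
(
<parameter_name> IN <datatype>
)
RETURN <return_datatype>
[ IS | AS ]
<Declaration part>
BEGIN
<Execution part>
RETURN <value>;
EXCEPTION
<Exception handling part>
END;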
CREATE FUNCTION instructs the compiler to create a new function. The keyword 'OR REPLACE' instructs the compiler to replace the existing function (if any) with the current one.
The function name should be unique.
The RETURN datatype should be mentioned.
The keyword 'IS' will be used when the function is nested into some other block. If the function is standalone then 'AS' will be used. Other than this coding standard, both have the same meaning.
Function: Example
Let's create a function called 'employer_details_func', similar to the procedure created above.
CREATE OR REPLACE FUNCTION employer_details_func
RETURN VARCHAR2
IS
emp_name VARCHAR(20);
BEGIN
SELECT first_name INTO emp_name
FROM emp_tbl WHERE empID = '100';
RETURN emp_name;
END;
/
If the exception raised in the subprogram is not handled in the subprogram exception handling
section, then it will propagate to the calling block.
Both can have as many parameters as required.
Both are treated as database objects in PL/SQL.
Procedure:
- Uses OUT parameters to return values.
- RETURN simply exits the control from the subprogram.
- The return datatype is not specified at the time of creation.
Function:
- Uses RETURN to return the value.
- RETURN exits the control from the subprogram and also returns the value.
- The return datatype is mandatory at the time of creation.
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhost sinhgad]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 5.5.43-MariaDB MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
ASSIGNMENT NO.: 8
Title: Database Trigger (All Types: Row level and Statement level triggers, Before and After
Triggers). Write a database trigger on Library table. The System should keep track of the records that
are being updated or deleted. The old value of updated or deleted records should be added in
Library_Audit table.
Objectives: To understand the concept of MySQL database trigger.
Requirements:
Software Requirements: Maria DB, Fedora 20, MYSQL
Hardware Requirements: CPU: Intel Core or Xeon 3GHz (or Dual Core 2GHz) or equal AMD,
Cores: Single (Dual/Quad Core is recommended), RAM: 4 GB (6 GB recommended)
Theory:
2. Types of Triggers:
There are two types of triggers based on which level it is triggered.
1) Row level Trigger: An event is triggered for each row updated, inserted or deleted.
2) Statement level Trigger: An event is triggered for each SQL statement executed.
3. Syntax:
CREATE [OR REPLACE] TRIGGER trigger_name
{BEFORE | AFTER | INSTEAD OF}
{INSERT [OR] | UPDATE [OR] | DELETE}
[OF col_name]
ON table_name
[REFERENCING OLD AS o NEW AS n]
[FOR EACH ROW]
WHEN (condition)
BEGIN
--- sql statements
END;
4. Arguments or Parameters:
CREATE [OR REPLACE ] TRIGGER trigger_name - This clause creates a trigger with the
given name or overwrites an existing trigger with the same name.
{BEFORE | AFTER | INSTEAD OF} - This clause indicates when the trigger should get fired, for example before or after updating a table. INSTEAD OF is used to create a trigger on a view; BEFORE and AFTER cannot be used to create a trigger on a view.
{INSERT [OR] | UPDATE [OR] | DELETE} - This clause determines the triggering event. More than one triggering event can be used together, separated by the OR keyword. The trigger gets fired for all the specified triggering events.
[OF col_name] - This clause is used with update triggers. This clause is used when you want to
trigger an event only when a specific column is updated.
[ON table_name] - This clause identifies the name of the table or view to which the trigger is
associated.
[REFERENCING OLD AS o NEW AS n] - This clause is used to reference the old and new
values of the data being changed. By default, you reference the values as :old.column_name or
:new.column_name. The reference names can also be changed from old (or new) to any other
user-defined name. You cannot reference old values when inserting a record, or new values
when deleting a record, because they do not exist.
[FOR EACH ROW] - This clause is used to determine whether the trigger must fire for each row that gets affected (i.e. a Row Level Trigger) or just once when the entire SQL statement is executed (i.e. a Statement Level Trigger).
WHEN (condition) - This clause is valid only for row level triggers. The trigger is fired only for
rows that satisfy the condition specified.
5. Trigger Examples:
Example 1:
The price of a product changes constantly. It is important to maintain the history of the prices of the
products.
A trigger can be created to update the 'product_price_history' table when the price of the product is
updated in the 'product' table.
1) Create the 'product' table and 'product_price_history' table:
CREATE TABLE product_price_history
(product_id number(5),
product_name varchar2(32),
supplier_name varchar2(32),
unit_price number(7,2) );
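The statements for the product table, the trigger itself, and the price update are not shown in the text; a sketch consistent with the schema above (the trigger name price_history_trigger and the sample values are illustrative) is:
CREATE TABLE product
(product_id number(5),
product_name varchar2(32),
supplier_name varchar2(32),
unit_price number(7,2) );
2) Create the price_history_trigger on the product table:
CREATE OR REPLACE TRIGGER price_history_trigger
BEFORE UPDATE OF unit_price ON product
FOR EACH ROW
BEGIN
INSERT INTO product_price_history
VALUES (:old.product_id, :old.product_name, :old.supplier_name, :old.unit_price);
END;
/
3) Update the price of a product so that the trigger fires and the old row is copied into product_price_history:
UPDATE product SET unit_price = 800 WHERE product_id = 100;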
4) If we ROLLBACK the transaction before committing to the database, the data inserted into the table is also rolled back.
Example 2:
In the following example, we have two tables: emp_details and log_emp_details. To insert some information into the log_emp_details table (which has three fields: employee id, salary, and edttime) every time an INSERT happens on the emp_details table, we have used the following trigger:
DELIMITER
$$
USE `hr`
$$
CREATE
DEFINER=`root`@`127.0.0.1`
TRIGGER `hr`.`emp_details_AINS`
AFTER INSERT ON `hr`.`emp_details`
FOR EACH ROW
-- Edit trigger body code below this line. Do not edit lines above this one
BEGIN
INSERT INTO log_emp_details
VALUES(NEW.employee_id, NEW.salary, NOW());
END$$
Now insert one record in the emp_details table and see the records in both the emp_details and log_emp_details tables:
mysql> INSERT INTO emp_details VALUES(236, 'RABI', 'CHANDRA', 'RABI','590.423.45700',
'2013-01-12', 'AD_VP', 15000, .5);
+-------------+------------+-----------+---------+----------+----------------+
| EMPLOYEE_ID | FIRST_NAME | LAST_NAME | JOB_ID | SALARY | COMMISSION_PCT |
+-------------+------------+-----------+---------+----------+----------------+
| 100 | Steven | King | AD_PRES | 24000.00 | 0.10 |
| 101 | Neena | Kochhar | AD_VP | 17000.00 | 0.50 |
| 102 | Lex | De Haan | AD_VP | 17000.00 | 0.50 |
| 103 | Alexander | Hunold | IT_PROG | 9000.00 | 0.25 |
| 104 | Bruce | Ernst | IT_PROG | 6000.00 | 0.25 |
| 105 | David | Austin | IT_PROG | 4800.00 | 0.25 |
| 236 | RABI | CHANDRA | AD_VP | 15000.00 | 0.50 |
+-------------+------------+-----------+---------+----------+----------------+
+-------------+----------+---------------------+
| EMPLOYEE_ID | SALARY | EDTTIME |
+-------------+----------+---------------------+
| 100 | 24000.00 | 2011-01-15 00:00:00 |
| 101 | 17000.00 | 2010-01-12 00:00:00 |
| 102 | 17000.00 | 2010-09-22 00:00:00 |
| 103 | 9000.00 | 2011-06-21 00:00:00 |
| 104 | 6000.00 | 2012-07-05 00:00:00 |
| 105 | 4800.00 | 2011-06-21 00:00:00 |
| 236 | 15000.00 | 2013-07-15 16:52:24 |
+-------------+----------+---------------------+
Example 3:
In the following example, before a new record is inserted into the emp_details table, a trigger
checks the values of the FIRST_NAME, LAST_NAME and JOB_ID columns:
- If there are any spaces before or after FIRST_NAME or LAST_NAME, the TRIM() function removes them.
- The value of JOB_ID is converted to upper case by the UPPER() function.
-- trigger body sketched from the description above
DELIMITER $$
CREATE TRIGGER emp_details_BINS BEFORE INSERT ON emp_details
FOR EACH ROW
BEGIN
SET NEW.FIRST_NAME = TRIM(NEW.FIRST_NAME);
SET NEW.LAST_NAME = TRIM(NEW.LAST_NAME);
SET NEW.JOB_ID = UPPER(NEW.JOB_ID);
END$$
Dropping a Trigger:
Example:
DROP TRIGGER orders_before_insert;
This example uses the DROP TRIGGER statement to drop the trigger called orders_before_insert.
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhost sinhgad]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 5.5.43-MariaDB MariaDB Server
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [assign8]> create trigger update_lib after update on library for each row insert into
library_audit(accession,title,author,publisher,operation) values(old.accession,old.title,old.author,old.publisher,'UPDATE');
Query OK, 0 rows affected (0.07 sec)
MariaDB [assign8]> create trigger delete_lib after delete on library for each row insert into library_audit(accession,
title,author,publisher,operation) values(old.accession,old.title,old.author,old.publisher,'DELETE');
Query OK, 0 rows affected (0.09 sec)
MariaDB [assign8]> insert into library(accession,title,author,publisher) values(101,'OPERATING
SYSTEM','KORTH','MCG');
Query OK, 1 row affected (0.01 sec)
MariaDB [assign8]> insert into library(accession,title,author,publisher) values(102,'PROGRAMMING IN
C','BALAGURUSWAMY','BPB');
Query OK, 1 row affected (0.01 sec)
MariaDB [assign8]> insert into library(accession,title,author,publisher) values(103,'HOW TO LEARN
C','KANETKAR','TECHMAX');
Query OK, 1 row affected (0.01 sec)
GROUP B
ASSIGNMENT NO.: 1
Title: Study of Open Source NOSQL Database: MongoDB (Installation, Basic CRUD operations,
Execution)
Requirements:
Software Requirements: Fedora 20, MySQL, MongoDB, Java/Python
Hardware Requirements: CPU: Intel Core or Xeon 3 GHz (or Dual Core 2 GHz) or equivalent AMD,
Cores: Single (Dual/Quad Core recommended), RAM: 4 GB (6 GB recommended)
Theory:
Introduction to MongoDB:
MongoDB is a document database with the scalability and flexibility that you want with the
querying and indexing that you need
MongoDB stores data in flexible, JSON-like documents, meaning fields can vary from
document to document and data structure can be changed over time
The document model maps to the objects in your application code, making data easy to work
with
Ad hoc queries, indexing, and real time aggregation provide powerful ways to access and
analyze your data
MongoDB is a distributed database at its core, so high availability, horizontal scaling, and
geographic distribution are built in and easy to use
MongoDB is free and open-source, published under the GNU Affero General Public License.
1) Create Operations
Create or insert operations add new documents to a collection. If the collection does not currently exist,
insert operations will create the collection.
MongoDB provides the following methods to insert documents into a collection:
db.collection.insertOne()
db.collection.insertMany()
In MongoDB, insert operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
Example 1:
The following example inserts a new document into the inventory collection. If the document does not
specify an _id field, MongoDB adds the _id field with an ObjectId value to the new document.
db.inventory.insertOne(
{item: "canvas", qty: 100, tags: ["cotton"], size: { h: 28, w: 35.5, uom: "cm" } })
Example 2:
The following example inserts three new documents into the inventory collection. If the documents do
not specify an _id field, MongoDB adds the _id field with an ObjectId value to each document.
db.inventory.insertMany([
{item: "journal", qty: 25, tags: ["blank", "red"], size: { h: 14, w: 21, uom: "cm" } },
{item: "mat", qty: 85, tags: ["gray"], size: { h: 27.9, w: 35.5, uom: "cm" } },
{item: "mousepad", qty: 25, tags: ["gel", "blue"], size: { h: 19, w: 22.85, uom: "cm" } }
])
2) Read Operations:
Read operations retrieve documents from a collection; i.e. they query a collection for documents.
MongoDB provides the following methods to read documents from a collection:
Syntax:
db.collection.find()
Example:
The following example selects from the inventory collection all documents where
the status equals "D":
db.inventory.find( { status: "D" } )
3) Update Operation:
Update operations modify existing documents in a collection. MongoDB provides the following
methods to update documents of a collection:
db.collection.updateOne()
db.collection.updateMany()
db.collection.replaceOne()
In MongoDB, update operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
Example:
db.users.updateOne( { name: "John Doe" }, { $set: { age: 28 } } )
The $set operator updates only the listed fields; without an update operator, the matched document would be replaced entirely.
4) Delete Operations:
Delete operations remove documents from a collection. The remove() method is used to delete data or
sets of data from your database:
Example:
db.users.remove({name: "John Doe"})
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhost Documents]# ls
abhi.asm Dining.java mongo-java-driver-2.12.2.jar
abhishek EL-III Ass_A1 overlap
abhishek.asm jdk-8u161-nb-8_2-linux-x64.sh overlap.asm~
abhishek.o Link to sachin.cpp overlap.o
add.asm mongo overlapp.asm
a.out mongodb-linux-x86_64-2.6.3 S151024282.cpp
cloudsim-3.0.2 mongodb-linux-x86_64-3.0.2 sachin.cpp~
cloudsim-3.0.2.jar mongo-java-driver-2.12.2 vishal
[root@localhost Documents]# cd mongodb-linux-x86_64-3.0.2/bin/
> db.createCollection("Employee");
{ "ok" : 1 }
>
db.Employee.insert([{"empID":21,"Name":"Tejal","Salary":200},{"empID":22,"Name":"Harsh","Salary":180},{"empID":2
4,"Name":"Nayan","Salary":230},{"empID":38,"Name":"Rutuja","Salary":210}]);
BulkWriteResult({
"writeErrors" : [ ],
"writeConcernErrors" : [ ],
"nInserted" : 4,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
> db.Employee.find().pretty();
{
"_id" : ObjectId("5b486d9703c720f873a7839e"),
"empID" : 21,
"Name" : "Tejal",
"Salary" : 200
}
{
"_id" : ObjectId("5b486d9703c720f873a7839f"),
"empID" : 22,
"Name" : "Harsh",
"Salary" : 180
}
{
"_id" : ObjectId("5b486d9703c720f873a783a0"),
"empID" : 24,
"Name" : "Nayan",
"Salary" : 230
}
{
"_id" : ObjectId("5b486d9703c720f873a783a1"),
"empID" : 38,
"Name" : "Rutuja",
"Salary" : 210
}
> db.Employee.update({"Name":"Rutuja"},{$set:{"Salary":225}});
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.Employee.find().pretty();
{
"_id" : ObjectId("5b486d9703c720f873a7839e"),
"empID" : 21,
"Name" : "Tejal",
"Salary" : 200
}
{
"_id" : ObjectId("5b486d9703c720f873a7839f"),
"empID" : 22,
"Name" : "Harsh",
"Salary" : 180
}
{
"_id" : ObjectId("5b486d9703c720f873a783a0"),
"empID" : 24,
"Name" : "Nayan",
"Salary" : 230
}
{
"_id" : ObjectId("5b486d9703c720f873a783a1"),
"empID" : 38,
"Name" : "Rutuja",
"Salary" : 225
}
> db.Employee.remove({"Name":"Harsh"})
WriteResult({ "nRemoved" : 1 })
> db.Employee.find().pretty();
{
"_id" : ObjectId("5b486d9703c720f873a7839e"),
"empID" : 21,
"Name" : "Tejal",
"Salary" : 200
}
{
"_id" : ObjectId("5b486d9703c720f873a783a0"),
"empID" : 24,
"Name" : "Nayan",
"Salary" : 230
}
{
"_id" : ObjectId("5b486d9703c720f873a783a1"),
"empID" : 38,
"Name" : "Rutuja",
"Salary" : 225
}
ASSIGNMENT NO.: 2
Title: Design and Develop MongoDB Queries using CRUD operations. (Use CRUD operations, SAVE
method, logical operators).
Objective: To study and learn all the basic MongoDB CRUD operations as well as the SAVE method and
logical operators.
Requirements:
Software Requirements: Fedora 20, MySQL, MongoDB, Java/Python
Hardware Requirements: CPU: Intel Core or Xeon 3 GHz (or Dual Core 2 GHz) or equivalent AMD,
Cores: Single (Dual/Quad Core recommended), RAM: 4 GB (6 GB recommended)
Theory:
MongoDB
MongoDB is a document database with the scalability and flexibility that you want with the
querying and indexing that you need
MongoDB stores data in flexible, JSON-like documents, meaning fields can vary from
document to document and data structure can be changed over time
The document model maps to the objects in your application code, making data easy to work
with
Ad hoc queries, indexing, and real time aggregation provide powerful ways to access and
analyze your data
MongoDB is a distributed database at its core, so high availability, horizontal scaling, and
geographic distribution are built in and easy to use
MongoDB is free and open-source, published under the GNU Affero General Public License.
Create Operations
Create or insert operations add new documents to a collection. If the collection does not currently exist,
insert operations will create the collection.
MongoDB provides the following methods to insert documents into a collection:
db.collection.insertOne()
db.collection.insertMany()
In MongoDB, insert operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
Example 1:
The following example inserts a new document into the inventory collection. If the document does not
specify an _id field, MongoDB adds the _id field with an ObjectId value to the new document.
db.inventory.insertOne(
{item: "canvas", qty: 100, tags: ["cotton"], size: { h: 28, w: 35.5, uom: "cm" } })
Example 2:
The following example inserts three new documents into the inventory collection. If the documents do
not specify an _id field, MongoDB adds the _id field with an ObjectId value to each document.
db.inventory.insertMany([
{ item: "journal", qty: 25, tags: ["blank", "red"], size: { h: 14, w: 21, uom: "cm" } },
{ item: "mat", qty: 85, tags: ["gray"], size: { h: 27.9, w: 35.5, uom: "cm" } },
{ item: "mousepad", qty: 25, tags: ["gel", "blue"], size: { h: 19, w: 22.85, uom: "cm" } }
])
Read Operations:
Read operations retrieve documents from a collection; i.e. they query a collection for documents.
MongoDB provides the following methods to read documents from a collection:
Syntax:
db.collection.find()
Example:
The following example selects from the inventory collection all documents where
the status equals "D":
db.inventory.find( { status: "D" } )
Update Operation:
Update operations modify existing documents in a collection. MongoDB provides the following
methods to update documents of a collection:
db.collection.updateOne()
db.collection.updateMany()
db.collection.replaceOne()
In MongoDB, update operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
Example:
db.users.updateOne( { name: "John Doe" }, { $set: { age: 28 } } )
Delete Operations:
Delete operations remove documents from a collection. The remove() method is used to delete a
document or a set of documents from a collection:
Example:
db.mycol.remove( { "title": "New Topic" } )
SAVE Method:
The save() method replaces an existing document that has the same _id, or inserts a new document
if no matching _id is found:
Example:
>db.mycol.save(
{
"_id" : ObjectId("5983548781331adf45ec7"), "title":"New Topic",
"by":"Mongo"
}
)
Logical Operations:
1) AND Operation:
The MongoDB $and operator performs a logical AND operation on an array of two or more
expressions and retrieves the documents which satisfy all the expressions in the array. The $and
operator uses short-circuit evaluation. If the first expression (e.g. <expression1>) evaluates to false,
MongoDB will not evaluate the remaining expressions.
Syntax: { $and: [ { <exp1> }, { <exp2> } , ... , { <expN> } ] }
Example:
db.student.find({$and:[{"gender":"Male"},{"grd_point":{ $gte: 31 }},{"class":"VI"}]}).pretty();
2) OR operation:
The $or operator performs a logical OR operation on an array of two or more <expressions> and selects
the documents that satisfy at least one of the <expressions>.
Syntax:
{ $or: [ { <expression1> }, { <expression2> }, ... , { <expressionN> } ] }
Example:
db.inventory.find( { $or: [ { quantity: { $lt: 20 } }, { price: 10 } ] } )
This query will select all documents in the inventory collection where either the quantity field value is
less than 20 or the price field value equals 10.
3) NOT operation:
$not performs a logical NOT operation on the specified <operator-expression> and selects the
documents that do not match the <operator-expression>. This includes documents that do not contain
the field.
Syntax: { field: { $not: { <operator-expression> } } }
Example:
db.inventory.find( { price: { $not: { $gt: 1.99 } } } )
This query will select all documents in the inventory collection where:
the price field value is less than or equal to 1.99 or
the price field does not exist
Comparison Operators:
1) $gt
$gt selects those documents where the value of the field is greater than (i.e. >) the specified value.
Syntax: {field: {$gt: value} }
Example:
db.inventory.find( { qty: { $gt: 20 } } )
This query will select all documents in the inventory collection where the qty field value is greater
than 20.
2) $lt
$lt selects the documents where the value of the field is less than (i.e. <) the specified value.
Syntax: {field: {$lt: value} }
Example:
db.inventory.find( { qty: { $lt: 20 } } )
This query will select all documents in the inventory collection where the qty field value is less
than 20.
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhost sinhgad]# cd Documents
[root@localhost Documents]# ls
5970502.zip eclipse mongodb-linux-x86_64-3.0.2 mysql-connector-java-5.1.46 org report_template_final
tecompsyllabus.pdf
com META-INF mongodb-linux-x86_64-3.0.2.tgz mysql-connector-java-5.1.46.zip pymongo
Sample_PPT_Review1-1.pptx
[root@localhost Documents]# cd mongodb-linux-x86_64-3.0.2/bin/
[root@localhost bin]# ./mongod
---------------------------------------------------------------------------------------------------------------
(Open a new terminal tab)
[sinhgad@localhost bin]$ ./mongo
MongoDB shell version: 3.0.2
connecting to: test
Server has startup warnings:
2018-08-07T09:24:36.947+0530 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user,
which is not recommended.
2018-08-07T09:24:36.947+0530 I CONTROL [initandlisten]
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten]
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten] ** WARNING:
/sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten]
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag
is 'always'.
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten]
=======================================================================
> use office;
switched to db office
=======================================================================
> db.createCollection("Employee");
{ "ok" : 0, "errmsg" : "collection already exists", "code" : 48 }
> db.createCollection("Employee1");
{ "ok" : 1 }
>
db.Employee1.insert([{"empID":21,"Name":"John","Salary":200},{"empID":22,"Name":"cloria","Salary":300},{"empID":
23,"Name":"Rachel","Salary":400},{"empID":24,"Name":"Chandler","Salary":500}]);
BulkWriteResult({
"writeErrors" : [ ],
"writeConcernErrors" : [ ],
"nInserted" : 4,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
=======================================================================
> db.Employee1.find().pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 400
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
=======================================================================
> db.Employee1.update({"Name":"Rachel"},{$set:{"Salary":450}});
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.Employee1.find().pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 450
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
====================================================================
> db.Employee1.find({"Salary":{$gt:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a3"), "empID" : 23, "Name" : "Rachel", "Salary" : 450 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a4"), "empID" : 24, "Name" : "Chandler", "Salary" : 500 }
> db.Employee1.find({"Salary":{$gte:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a2"), "empID" : 22, "Name" : "cloria", "Salary" : 300 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a3"), "empID" : 23, "Name" : "Rachel", "Salary" : 450 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a4"), "empID" : 24, "Name" : "Chandler", "Salary" : 500 }
> db.Employee1.find({"Salary":{$lt:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a1"), "empID" : 21, "Name" : "John", "Salary" : 200 }
> db.Employee1.find({"Salary":{$lte:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a1"), "empID" : 21, "Name" : "John", "Salary" : 200 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a2"), "empID" : 22, "Name" : "cloria", "Salary" : 300 }
> db.Employee1.find({"Salary":{$ne:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a1"), "empID" : 21, "Name" : "John", "Salary" : 200 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a3"), "empID" : 23, "Name" : "Rachel", "Salary" : 450 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a4"), "empID" : 24, "Name" : "Chandler", "Salary" : 500 }
=====================================================================
> db.Employee1.find({$and:[{"Name":"cloria"},{"empID":22}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
=====================================================================
> db.Employee1.find({"Salary":{$gt:190},$or:[{"Name":"John"},{"Salary":200}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
> db.Employee1.find({"Salary":{$gt:190},$or:[{"Name":"cloria"},{"Salary":300}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
======================================================================
> db.Employee1.find({"Salary":{$gt:190},$or:[{"Name":"cloria"},{"Salary":200}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
=======================================================================
> db.Employee1.save({"_id":100,"Name":"Jesica","empID":24});
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : 100 })
> db.Employee1.find().pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 450
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
{ "_id" : 100, "Name" : "Jesica", "empID" : 24 }
> db.Employee1.remove({"Name":"cloria"})
WriteResult({ "nRemoved" : 1 })
> db.Employee1.find().pretty()
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 450
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
{ "_id" : 100, "Name" : "Jesica", "empID" : 24 }
=================================================================
> db.Employee1.find({$and:[{$or:[{"Name":"Chandler"},{"Name":"Rachel"}]},{empID:{$gt:21}}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 450
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
Conclusion:
Successfully designed and developed MongoDB Queries using CRUD operations, Save method and
Logical operations.
ASSIGNMENT No: 3
Title: Implement aggregation and indexing with suitable example using MongoDB
Objective: To study and learn aggregation and indexing with suitable example using MongoDB
Requirements:
Software Requirements: MongoDB, Fedora 20, MySQL
Hardware Requirements: CPU: Intel Core or Xeon 3 GHz (or Dual Core 2 GHz) or equivalent AMD,
Cores: Single (Dual/Quad Core recommended), RAM: 4 GB (6 GB recommended)
Theory:
Aggregation:
Operations that process data sets and return calculated results are called aggregations. MongoDB
provides aggregation operations that examine data sets and perform calculations on them. Aggregation
runs on the mongod instance, which simplifies application code and limits resource requirements.
Similar to queries, aggregation operations in MongoDB use collections of documents as an input and
return results in the form of one or more documents.
The aggregation framework in MongoDB is based on data processing pipelines. Documents pass
through multi-stage pipelines and get transformed into an aggregated result. The most basic pipeline
stage in the aggregation framework provides filters that function like queries. It also provides document
transformations that modify the output document. The pipeline operations group and sort documents by
defined field or fields. In addition, they perform aggregation on arrays.
Pipeline stages can use operators to perform tasks such as calculating an average or concatenating a
string. The pipeline uses native operations within MongoDB to allow efficient data aggregation, and it is
the preferred method for data aggregation.
Syntax: db.collection.aggregate(pipeline, options)
pipeline (array) - A sequence of data aggregation stages.
options (document) - Optional. Additional options that aggregate() passes to the aggregate command.
Error Handling
If an error occurs, the aggregate() helper throws an exception.
Cursor Behavior
In the mongo shell, if the cursor returned from db.collection.aggregate() is not assigned to a
variable using the var keyword, the mongo shell automatically iterates the cursor up to 20
times. Cursors returned from aggregation only support cursor methods that operate on evaluated
cursors (i.e. cursors whose first batch has been retrieved), such as the following methods:
cursor.map()
cursor.hasNext()
cursor.objsLeftInBatch()
cursor.next()
cursor.itcount()
cursor.toArray()
cursor.pretty()
cursor.forEach()
Example
{ _id: 1, cust_id: "abc1", ord_date: ISODate("2012-11-02T17:04:11.102Z"), status: "A", amount: 50 }
{ _id: 2, cust_id: "xyz1", ord_date: ISODate("2013-10-01T17:04:11.102Z"), status: "A", amount: 100 }
{ _id: 3, cust_id: "xyz1", ord_date: ISODate("2013-10-12T17:04:11.102Z"), status: "D", amount: 25 }
{ _id: 4, cust_id: "xyz1", ord_date: ISODate("2013-10-11T17:04:11.102Z"), status: "D", amount: 125 }
{ _id: 5, cust_id: "abc1", ord_date: ISODate("2013-11-12T17:04:11.102Z"), status: "A", amount: 25 }
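A minimal sketch of an aggregation pipeline over these sample documents (the collection name orders is assumed here; the field names follow the documents above):
db.orders.aggregate([
{ $match: { status: "A" } },                                  // keep only orders with status "A"
{ $group: { _id: "$cust_id", total: { $sum: "$amount" } } },  // total amount per customer
{ $sort: { total: -1 } }                                      // largest total first
])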
Indexing:
Indexes are data structures that store a portion of a collection's data set in a form that is easy to
traverse. Queries are executed efficiently with the help of indexes in MongoDB. Indexes help MongoDB find documents
that match the query criteria without performing a collection scan. If a query has an appropriate index,
MongoDB uses the index and limits the number of documents it examines.
Indexes store field values in the order of the value. The order in which the index entries are stored
supports operations such as equality matches and range-based queries. MongoDB sorts and returns
results by using the sequential order of the indexes. The indexes in MongoDB are similar to indexes
in other databases. MongoDB defines indexes at the collection level, and an index can be created on
any field or subfield.
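As a small illustration, assuming the inventory collection and qty field from the earlier CRUD examples, a single-field index could be created and then used by a range query roughly like this:
db.inventory.createIndex( { qty: 1 } )       // ascending index on qty
db.inventory.find( { qty: { $gt: 20 } } )    // this range query can now use the index instead of a collection scan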
Types of Index
MongoDB supports the following index types for querying.
1) Default _id: Each MongoDB collection contains an index on the default _id (read as
underscore id) field. If no value is specified for _id, the language driver or the mongod (read as
mongo D) creates an _id field and provides an ObjectId (read as Object ID) value.
2) Single Field: For a single-field index and sort operation, the sort order of the index keys does not
matter. MongoDB can traverse the index in either ascending or descending order.
3) Compound Index: For multiple fields, MongoDB supports user-defined indexes, such as
compound indexes. The sequential order of fields in a compound index is significant in
MongoDB.
4) Multikey Index: To index array data, MongoDB uses multikey indexes. When indexing a field
with an array value, MongoDB makes separate index entries for each array element.
5) Geospatial Index: To query geospatial data, MongoDB uses two types of indexes: 2d indexes
(read as two D indexes) and 2dsphere (read as two D sphere) indexes.
6) Text Indexes: These indexes support searching for string content in a collection.
7) Hashed Indexes: MongoDB supports hash-based sharding and provides hashed indexes, which
index the hash of the field value.
Name - Description
$first - Returns a value from the first document for each group. Order is only defined if the documents are in a defined order.
$last - Returns a value from the last document for each group. Order is only defined if the documents are in a defined order.
$addToSet - Returns an array of unique expression values for each group. Order of the array elements is undefined.
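A brief sketch of how these accumulators might be used in a $group stage, reusing the assumed orders collection from the aggregation example above:
db.orders.aggregate([
{ $sort: { ord_date: 1 } },                // give the documents a defined order so $first/$last are meaningful
{ $group: {
    _id: "$cust_id",
    firstAmount: { $first: "$amount" },    // amount of the earliest order per customer
    lastAmount:  { $last: "$amount" },     // amount of the latest order per customer
    statuses:    { $addToSet: "$status" }  // unique status values per customer
} }
])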
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhost sinhgad]# cd Documents
[root@localhost Documents]# ls
5970502.zip mongodb-linux-x86_64-3.0.2 pymongo
assignment 10.docx mongodb-linux-x86_64-3.0.2.tgz report_template_final
com mysql-connector-java-5.1.46 Sample_PPT_Review1-1.pptx
eclipse mysql-connector-java-5.1.46.zip tecompsyllabus.pdf
META-INF org
[root@localhost Documents]# cd mongodb-linux-x86_64-3.0.2/bin/
[root@localhost bin]# ./mongod
=================================================================
(Open a new terminal tab)
=======================================================================
> db.createCollection("music");
{ "ok" : 1 }
=======================================================================
> db.music.insert({"SrNo":1,"Name":"Akon","Song":"Lonely","Year":2005,"Status":"W"});
WriteResult({ "nInserted" : 1 })
> db.music.insert({"SrNo":2,"Name":"SeanPaul","Song":"Rockabye","Year":1996,"Status":"W"});
WriteResult({ "nInserted" : 1 })
> db.music.insert({"SrNo":3,"Name":"Pitbull","Song":"Temparature","Year":2000,"Status":"W"});
WriteResult({ "nInserted" : 1 })
> db.music.insert({"SrNo":4,"Name":"LuisFonsi","Song":"Despacito","Year":2010,"Status":"A"});
WriteResult({ "nInserted" : 1 })
> db.music.insert({"SrNo":5,"Name":"SeanPaul","Song":"Cheapthrills","Year":2015,"Status":"A"});
WriteResult({ "nInserted" : 1 })
=====================================================================
> db.music.find().pretty();
{
"_id" : ObjectId("5b692b7d0236dca616eb0ea9"),
"SrNo" : 1,
"Name" : "Akon",
"Song" : "Lonely",
"Year" : 2005,
"Status" : "W"
}
{
"_id" : ObjectId("5b692d3c0236dca616eb0eaa"),
"SrNo" : 2,
"Name" : "SeanPaul",
"Song" : "Rockabye",
"Year" : 1996,
"Status" : "W"
}
{
"_id" : ObjectId("5b692d670236dca616eb0eab"),
"SrNo" : 3,
"Name" : "Pitbull",
"Song" : "Temparature",
"Year" : 2000,
"Status" : "W"
}
{
"_id" : ObjectId("5b692da80236dca616eb0eac"),
"SrNo" : 4,
"Name" : "LuisFonsi",
"Song" : "Despacito",
"Year" : 2010,
"Status" : "A"
}
{
"_id" : ObjectId("5b692ddd0236dca616eb0ead"),
"SrNo" : 5,
"Name" : "SeanPaul",
"Song" : "Cheapthrills",
"Year" : 2015,
"Status" : "A"
}
=======================================================================
> db.music.aggregate([{$match:{Status:"W"}}])
{ "_id" : ObjectId("5b692b7d0236dca616eb0ea9"), "SrNo" : 1, "Name" : "Akon", "Song" : "Lonely", "Year" : 2005,
"Status" : "W" }
{ "_id" : ObjectId("5b692d3c0236dca616eb0eaa"), "SrNo" : 2, "Name" : "SeanPaul", "Song" : "Rockabye", "Year" : 1996,
"Status" : "W" }
{ "_id" : ObjectId("5b692d670236dca616eb0eab"), "SrNo" : 3, "Name" : "Pitbull", "Song" : "Temparature", "Year" : 2000,
"Status" : "W" }
=======================================================================
> db.music.aggregate([{$match:{Status:"W"}},{$group:{_id:"Status",total:{$sum:"$Year"}}}])
{ "_id" : "Status", "total" : 6001 }
=====================================================================
> db.music.aggregate([{$match:{Status:"W"}},{$group:{_id:"Status",total:{$avg:"$Year"}}}])
{ "_id" : "Status", "total" : 2000.3333333333333 }
======================================================================
> db.music.aggregate([{$match:{Status:"W"}},{$group:{_id:"Status",total:{$max:"$Year"}}}])
{ "_id" : "Status", "total" : 2005 }
=======================================================================
> db.music.aggregate([{$match:{Status:"W"}},{$group:{_id:"Status",total:{$min:"$Year"}}}])
{ "_id" : "Status", "total" : 1996 }
=======================================================================
> db.music.distinct("Year")
[ 2005, 1996, 2000, 2010, 2015 ]
=======================================================================
> use te;
switched to db te
=======================================================================
> db.te.insert({"Rollno":1,"Name":"a","Marks":54});
WriteResult({ "nInserted" : 1 })
> db.te.insert({"Rollno":2,"Name":"b","Marks":56});
WriteResult({ "nInserted" : 1 })
> db.te.insert({"Rollno":3,"Name":"c","Marks":64});
WriteResult({ "nInserted" : 1 })
=======================================================================
> db.te.find().pretty();
{
"_id" : ObjectId("5b6930a60236dca616eb0eae"),
"Rollno" : 1,
"Name" : "a",
"Marks" : 54
}
{
"_id" : ObjectId("5b6930b60236dca616eb0eaf"),
"Rollno" : 2,
"Name" : "b",
"Marks" : 56
}
{
"_id" : ObjectId("5b6930c50236dca616eb0eb0"),
"Rollno" : 3,
"Name" : "c",
"Marks" : 64
}
=======================================================================
> db.te.ensureIndex({"Name":1});
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
> db.te.ensureIndex({"Name":1,"Rollno":1});
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 2,
"numIndexesAfter" : 3,
"ok" : 1
}
=======================================================================
> db.te.getIndexes();
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "te.te"
},
{
"v" : 1,
"key" : {
"Name" : 1
},
"name" : "Name_1",
"ns" : "te.te"
},
{
"v" : 1,
"key" : {
"Name" : 1,
"Rollno" : 1
},
"name" : "Name_1_Rollno_1",
"ns" : "te.te"
}
]
=======================================================================
> db.te.dropIndexes();
{
"nIndexesWas" : 3,
"msg" : "non-_id indexes dropped for collection",
"ok" : 1
}
ASSIGNMENT No: 4
Title: Implement Map-Reduce operation with a suitable example using MongoDB
Objective: To study and learn the Map-Reduce operation in MongoDB
Requirements:
Software Requirements: MongoDB, Fedora 20, MySQL
Hardware Requirements: CPU: Intel Core or Xeon 3 GHz (or Dual Core 2 GHz) or equivalent AMD,
Cores: Single (Dual/Quad Core recommended), RAM: 4 GB (6 GB recommended)
Theory:
Map-Reduce:
Map-reduce is a data processing paradigm for condensing large volumes of data into useful aggregated
results. MongoDB uses the mapReduce command for map-reduce operations. MapReduce is generally
used for processing large data sets.
MapReduce Command
Following is the syntax of the basic mapReduce command −
>db.collection.mapReduce(
function() {emit(key,value);}, //map function
function(key,values) {return reduceFunction}, { //reduce function
out: collection,
query: document,
sort: document,
limit: number
}
)
The map-reduce function first queries the collection, then maps the result documents to emit key-value
pairs, which are then reduced based on the keys that have multiple values.
In the above syntax −
map is a javascript function that maps a value with a key and emits a key-value pair
reduce is a javascript function that reduces or groups all the documents having the same key
out specifies the location of the map-reduce query result
query specifies the optional selection criteria for selecting documents
sort specifies the optional sort criteria
limit specifies the optional maximum number of documents to be returned
Example:
Consider the following map-reduce operations on a collection orders that contains documents of the
following prototype:
{
_id: ObjectId("50a8240b927d5d8b5891743c"),
cust_id: "abc123",
ord_date: new Date("Oct 04, 2012"),
status: 'A',
price: 25,
items: [ { sku: "mmm", qty: 5, price: 2.5 },
{ sku: "nnn", qty: 5, price: 2.5 } ]
}
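A sketch of a map-reduce operation on this orders collection, totalling the order price per cust_id for orders with status 'A' (the output collection name order_totals is illustrative):
db.orders.mapReduce(
function() { emit(this.cust_id, this.price); },        // map: emit one (cust_id, price) pair per order
function(key, values) { return Array.sum(values); },   // reduce: sum the prices for each cust_id
{
query: { status: "A" },                                // only consider orders with status "A"
out: "order_totals"                                    // write the result to the order_totals collection
}
)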
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhost sinhgad]# cd Documents
[root@localhost Documents]# ls
5970502.zip META-INF org
assignment 10.docx mongodb-linux-x86_64-3.0.2 pymongo
Assignment11.docx mongodb-linux-x86_64-3.0.2.tgz report_template_final
com mysql-connector-java-5.1.46 Sample_PPT_Review1-1.pptx
eclipse mysql-connector-java-5.1.46.zip tecompsyllabus.pdf
[root@localhost Documents]# cd mongodb-linux-x86_64-3.0.2/bin/
[root@localhost bin]# ./mongod
=====================================================================
(Open a new terminal tab)
[sinhgad@localhost bin]$ ./mongo
MongoDB shell version: 3.0.2
connecting to: test
Server has startup warnings:
2018-08-07T11:43:41.841+0530 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user,
which is not recommended.
2018-08-07T11:43:41.841+0530 I CONTROL [initandlisten]
2018-08-07T11:43:41.841+0530 I CONTROL [initandlisten]
ASSIGNMENT No: 5
In MongoDB, insert operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
db.users.insertOne(      // collection
{
name: "sue",             // field: value
age: 26                  // field: value
}                        // document
)
Read Operations: Read operations retrieve documents from a collection; i.e. they query a collection for documents.
MongoDB provides the following methods to read documents from a collection:
db.users.find(              // collection
{ age: { $gt: 18 } },       // query criteria
{ name: 1, address: 1 }     // projection
).limit(5)                  // cursor modifier
Update Operations:
Update operations modify existing documents in a collection. MongoDB provides the following
methods to update documents of a collection:
db.collection.updateOne()
db.collection.updateMany()
db.collection.replaceOne()
In MongoDB, update operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
These filters use the same syntax as read operations.
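As an illustrative sketch (reusing the inventory collection from the read examples; the filter value "paper" and the new status "P" are made up for demonstration), an update with the $set operator looks like this:
db.inventory.updateOne(
{ item: "paper" },            // filter: the first document whose item field is "paper"
{ $set: { status: "P" } }     // update: set its status field to "P"
)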
Delete Operations
Delete operations remove documents from a collection. MongoDB provides the following methods to
delete documents of a collection:
db.collection.deleteOne()
db.collection.deleteMany()
In MongoDB, delete operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
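A brief sketch of the two delete helpers, again on the inventory collection used above (the filters are illustrative):
db.inventory.deleteOne( { status: "D" } )     // removes at most one matching document
db.inventory.deleteMany( { status: "A" } )    // removes every matching document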
Select All Documents in a Collection: To select all documents in the collection, pass an empty
document as the query filter parameter to the find method. The query filter parameter determines the
select criteria:
db.collection.find( {} )
To specify equality conditions, use <field>:<value> expressions in the query filter document:
The following example selects from the inventory collection all documents where
the status equals "D":
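The query described above can be written as:
db.inventory.find( { status: "D" } )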
Specify Conditions Using Query Operators: A query filter document can use query operators to
specify conditions in the following form: { <field1>: { <operator1>: <value1> }, ... }
The following example retrieves all documents from the inventory collection where status equals
either "A"or "D":
The following example retrieves all documents in the inventory collection where
the status equals "A" and qty is less than ($lt) 30:
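This compound (AND) condition can be written as:
db.inventory.find( { status: "A", qty: { $lt: 30 } } )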
Specify OR Conditions
Using the $or operator, you can specify a compound query that joins each clause with a
logical OR conjunction so that the query selects the documents in the collection that match at least one
condition.
The following example retrieves all documents in the collection where the status equals "A" or qty is
less than ($lt) 30:
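This OR condition can be written as:
db.inventory.find( { $or: [ { status: "A" }, { qty: { $lt: 30 } } ] } )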
In the following example, the compound query document selects all documents in the collection where
the status equals "A" and either qty is less than ($lt) 30 or item starts with the character p:
db.inventory.find( {
status: "A",
$or: [ { qty: { $lt: 30 } }, { item: /^p/ } ]
} )
This is equivalent to the following SQL statement:
SELECT * FROM inventory WHERE status = "A" AND (qty < 30 OR item LIKE "p%")
$lt (less than): to specify an AND condition, combine an equality match on one field with a
less-than ($lt) comparison on another field.
$gt (greater than): to specify OR conditions, select documents where, for example, the qty field has a
value greater than ($gt) 100 or the value of the price field is less than ($lt) 9.95.
Example: db.inventory.find( { type: "food", $or: [ { qty: { $gt: 100 } }, { price: { $lt: 9.95 } } ] } )
To remove users along with all their collections, you have to go through the admin database:
use admin
switched to db admin
Program:
[sinhgad@localhost ~]$ su
Password:
[root@localhost sinhgad]# cd Documents
[root@localhost Documents]# ls
5970502.zip eclipse mongodb-linux-x86_64-3.0.2 mysql-connector-java-5.1.46 org report_template_final
tecompsyllabus.pdf
com META-INF mongodb-linux-x86_64-3.0.2.tgz mysql-connector-java-5.1.46.zip pymongo
Sample_PPT_Review1-1.pptx
[root@localhost Documents]# cd mongodb-linux-x86_64-3.0.2/bin/
[root@localhost bin]# ./mongod
---------------------------------------------------------------------------------------------------------------
(Open a new terminal tab)
[sinhgad@localhost bin]$ ./mongo
MongoDB shell version: 3.0.2
connecting to: test
Server has startup warnings:
2018-08-07T09:24:36.947+0530 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user,
which is not recommended.
2018-08-07T09:24:36.947+0530 I CONTROL [initandlisten]
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten]
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten] ** WARNING:
/sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten]
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag
is 'always'.
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-07T09:24:36.961+0530 I CONTROL [initandlisten]
=======================================================================
> use office;
switched to db office
=======================================================================
> db.createCollection("Employee");
{ "ok" : 0, "errmsg" : "collection already exists", "code" : 48 }
> db.createCollection("Employee1");
{ "ok" : 1 }
=======================================================================
>
db.Employee1.insert([{"empID":21,"Name":"John","Salary":200},{"empID":22,"Name":"cloria","Salary":300},{"empID":
23,"Name":"Rachel","Salary":400},{"empID":24,"Name":"Chandler","Salary":500}]);
BulkWriteResult({
"writeErrors" : [ ],
"writeConcernErrors" : [ ],
"nInserted" : 4,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
=======================================================================
> db.Employee1.find().pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 400
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
=======================================================================
> db.Employee1.update({"Name":"Rachel"},{$set:{"Salary":450}});
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.Employee1.find().pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 450
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
====================================================================
> db.Employee1.find({"Salary":{$gt:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a3"), "empID" : 23, "Name" : "Rachel", "Salary" : 450 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a4"), "empID" : 24, "Name" : "Chandler", "Salary" : 500 }
> db.Employee1.find({"Salary":{$gte:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a2"), "empID" : 22, "Name" : "cloria", "Salary" : 300 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a3"), "empID" : 23, "Name" : "Rachel", "Salary" : 450 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a4"), "empID" : 24, "Name" : "Chandler", "Salary" : 500 }
> db.Employee1.find({"Salary":{$lt:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a1"), "empID" : 21, "Name" : "John", "Salary" : 200 }
> db.Employee1.find({"Salary":{$lte:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a1"), "empID" : 21, "Name" : "John", "Salary" : 200 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a2"), "empID" : 22, "Name" : "cloria", "Salary" : 300 }
> db.Employee1.find({"Salary":{$ne:300}});
{ "_id" : ObjectId("5b691a9835bf2db0d02756a1"), "empID" : 21, "Name" : "John", "Salary" : 200 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a3"), "empID" : 23, "Name" : "Rachel", "Salary" : 450 }
{ "_id" : ObjectId("5b691a9835bf2db0d02756a4"), "empID" : 24, "Name" : "Chandler", "Salary" : 500 }
=====================================================================
> db.Employee1.find({$and:[{"Name":"cloria"},{"empID":22}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
=====================================================================
> db.Employee1.find({"Salary":{$gt:190},$or:[{"Name":"John"},{"Salary":200}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
> db.Employee1.find({"Salary":{$gt:190},$or:[{"Name":"cloria"},{"Salary":300}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
======================================================================
> db.Employee1.find({"Salary":{$gt:190},$or:[{"Name":"cloria"},{"Salary":200}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
=======================================================================
> db.Employee1.save({"_id":100,"Name":"Jesica","empID":24});
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : 100 })
> db.Employee1.find().pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a2"),
"empID" : 22,
"Name" : "cloria",
"Salary" : 300
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 450
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
{ "_id" : 100, "Name" : "Jesica", "empID" : 24 }
> db.Employee1.remove({"Name":"cloria"})
WriteResult({ "nRemoved" : 1 })
> db.Employee1.find().pretty()
{
"_id" : ObjectId("5b691a9835bf2db0d02756a1"),
"empID" : 21,
"Name" : "John",
"Salary" : 200
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 450
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500
}
{ "_id" : 100, "Name" : "Jesica", "empID" : 24 }
=================================================================
> db.Employee1.find({$and:[{$or:[{"Name":"Chandler"},{"Name":"Rachel"}]},{empID:{$gt:21}}]}).pretty();
{
"_id" : ObjectId("5b691a9835bf2db0d02756a3"),
"empID" : 23,
"Name" : "Rachel",
"Salary" : 450
}
{
"_id" : ObjectId("5b691a9835bf2db0d02756a4"),
"empID" : 24,
"Name" : "Chandler",
"Salary" : 500}
ASSIGNMENT NO: 6
Theory:
Introduction to JSON:
JSON, or JavaScript Object Notation, is a lightweight, text-based open standard designed for
human-readable data interchange. JSON uses conventions that are familiar to programmers of many
languages, including C, C++, Java, Python, Perl, etc.
JSON stands for JavaScript Object Notation.
The format was specified by Douglas Crockford.
It was designed for human-readable data interchange.
It has been extended from the JavaScript scripting language.
The filename extension is .json.
JSON Internet Media type is application/json.
The Uniform Type Identifier is public.json.
Uses of JSON
It is used while writing JavaScript based applications that include browser extensions and
websites.
JSON format is used for serializing and transmitting structured data over network connection.
It is primarily used to transmit data between a server and web applications.
Web services and APIs use JSON format to provide public data.
It can be used with modern programming languages.
Characteristics of JSON
JSON is easy for humans to read and write, easy for machines to parse and generate, lightweight,
text-based, and language independent.
For example, a list of books might be represented in JSON along these lines:
{
"books": [
{
"language": "C++",
"edition": "second",
"author": "E.Balagurusamy"
}
]
}
After understanding the above program, we will try another example. Let's save the below code
as json.htm –
<html>
<head>
<title>JSON example</title>
<script language = "javascript">
// object2 holds the book details used in the writes below
var object2 = { "language": "C++", "edition": "second", "author": "E.Balagurusamy" };
document.write("<hr />");
document.write(object2.language + " programming language can be studied " + "from book written by " +
object2.author);
document.write("<hr />");
</script>
</head>
<body>
</body>
</html>
Now open json.htm using IE or any other JavaScript-enabled browser; it produces the following
result.
JSON supports the following data types:
1. Number - double-precision floating-point format in JavaScript
2. String - double-quoted Unicode with backslash escaping
3. Boolean - true or false
4. Array - an ordered sequence of values
5. Value - can be a string, a number, true or false, null, etc.
6. Object - an unordered collection of key:value pairs
7. Whitespace - can be used between any pair of tokens
8. null - empty
JSON objects can be created with JavaScript. Let us see the various ways of creating JSON objects
using JavaScript −
Creation of an empty Object −
var JSONObj = {};
Creation of a new Object −
var JSONObj = new Object();
Creation of an object with an attribute bookname holding a string value and an attribute price
holding a numeric value. An attribute is accessed using the '.' operator −
var JSONObj = { "bookname": "VB BLACK BOOK", "price": 500 };
This is an example that shows creation of an object in javascript using JSON, save the below code
as json_object.htm –
<html>
<head>
<title>Creating Object JSON with JavaScript</title>
<script language = "javascript">
var books = {
// the Pascal entries below are illustrative sample data
"Pascal" : [
{ "Name" : "Pascal Made Simple", "price" : 700 },
{ "Name" : "Guide to Pascal", "price" : 400 }],
"Scala" : [
{ "Name" : "Scala for the Impatient", "price" : 1000 },
{ "Name" : "Scala in Depth", "price" : 1300 }]
}
var i = 0
document.writeln("<table border = '2'><tr>");
for(i = 0;i<books.Pascal.length;i++){
document.writeln("<td>");
document.writeln("<table border = '1' width = 100 >");
document.writeln("<tr><td><b>Name</b></td><td width = 50>" + books.Pascal[i].Name+"</td></tr>");
document.writeln("<tr><td><b>Price</b></td><td width = 50>" + books.Pascal[i].price +"</td></tr>");
document.writeln("</table>");
document.writeln("</td>");
}
for(i = 0;i<books.Scala.length;i++){
document.writeln("<td>");
document.writeln("<table border = '1' width = 100 >");
document.writeln("<tr><td><b>Name</b></td><td width = 50>" + books.Scala[i].Name+"</td></tr>");
document.writeln("<tr><td><b>Price</b></td><td width = 50>" + books.Scala[i].price+"</td></tr>");
document.writeln("</table>");
document.writeln("</td>");
}
document.writeln("</tr></table>");
</script>
</head>
<body>
</body>
</html>
Now let's open json_object.htm using IE or any other JavaScript-enabled browser. It produces
the following result −
ASSIGNMENT NO: 7
Requirements:
Software Requirements: MongoDB, Fedora 20, MySQL
Hardware Requirements: CPU: Intel Core or Xeon 3 GHz (or Dual Core 2 GHz) or equivalent AMD,
Cores: Single (Dual/Quad Core recommended), RAM: 4 GB (6 GB recommended)
Theory:
Mapping between JSON and Java entities
JSON simply maps entities from the left side to the right side while decoding or parsing, and maps
entities from the right to the left while encoding.
JSON Java
string java.lang.String
number java.lang.Number
true|false java.lang.Boolean
null null
array java.util.List
object java.util.Map
On decoding, the default concrete class of java.util.List is org.json.simple.JSONArray and the default
concrete class of java.util.Map is org.json.simple.JSONObject.
Encoding JSON in Java: Following is a simple example to encode a JSON object using JSONObject,
which is a subclass of java.util.HashMap. No ordering is provided. If you need strict ordering of
elements, use the JSONValue.toJSONString(map) method with an ordered map implementation
such as java.util.LinkedHashMap.
import org.json.simple.JSONObject;
class JsonEncodeDemo {
public static void main(String[] args){
JSONObject obj = new JSONObject();
obj.put("name", "foo");
obj.put("num", new Integer(100));
obj.put("balance", new Double(1000.21));
obj.put("is_vip", new Boolean(true));
System.out.print(obj);
}
}
On compiling and executing the above program the following result will be generated −
{"balance": 1000.21, "num":100, "is_vip":true, "name":"foo"}
Following is another example that shows JSON object streaming using Java JSONObject −
import java.io.StringWriter;
import org.json.simple.JSONObject;
class JsonEncodeDemo {
public static void main(String[] args) throws Exception {
JSONObject obj = new JSONObject();
obj.put("name","foo");
obj.put("num",new Integer(100));
obj.put("balance",new Double(1000.21));
obj.put("is_vip",new Boolean(true));
StringWriter out = new StringWriter();
obj.writeJSONString(out);      // stream the JSON text to a Writer
System.out.print(out.toString());
}
}
Decoding JSON in Java: The following example uses JSONParser to decode JSON strings −
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;
class JsonDecodeDemo {
public static void main(String[] args){
JSONParser parser = new JSONParser();
// sample input inferred from the output shown below
String s = "[0,{\"1\":{\"2\":{\"3\":{\"4\":[5,{\"6\":7}]}}}}]";
try{
Object obj = parser.parse(s);
JSONArray array = (JSONArray)obj;
JSONObject obj2 = (JSONObject)array.get(1);
System.out.println("Field \"1\"");
System.out.println(obj2.get("1"));
s = "{}";
obj = parser.parse(s);
System.out.println(obj);
s = "[5,]";
obj = parser.parse(s);
System.out.println(obj);
s = "[5,,2]";
obj = parser.parse(s);
System.out.println(obj);
}catch(ParseException pe){
System.out.println(pe);
}
}
}
On compiling and executing the above program, the following result will be generated
Field "1"
{"2":{"3":{"4":[5,{"6":7}]}}}
{}
[5]
[5,2]
Conclusion: Successfully implemented encoding and decoding the JSON objects using Java
GROUP C
ASSIGNMENT NO.: 1
Title: Write a program to implement MongoDB database connectivity with PHP/ python/Java
Implement Database navigation operations (add, delete, edit etc. ) using ODBC/JDBC.
Objective: To study and perform MongoDB database connectivity with Java and perform database
navigation operations.
Requirements:
Software Requirements: Eclipse, JDK 1.6, MongoDB, MongoDB-Java-Driver, Fedora 20
Hardware Requirements: Minimum 2GB RAM.
Theory:
Introduction:
MongoDB is the leading NoSQL database system, which has become very popular in recent years due
to its dynamic schema and its big-data advantages such as high performance, horizontal
scalability, replication, etc. Unlike traditional relational database systems, which provide JDBC-
compliant drivers, MongoDB comes with its own non-JDBC driver called the Mongo Java Driver.
MongoDB can be used with Java programs by using:
1. JDBC API: It can be used to interact with MongoDB from Java, or
2. Mongo Java Driver API.
Steps for connecting MongoDB with Java and performing navigation operations:
1) Import package:
import com.mongodb.*;
2) Create connection:
MongoClient mongo = new MongoClient("localhost", 27017);
3) Create Database:
To connect to a database, specify the database name. If the database doesn't exist, MongoDB creates it
automatically.
DB db = mongo.getDB("database name");
4) Create Collection:
To create a collection, createCollection() method of com.mongodb.client.MongoDatabase class is used.
DBCollection coll = db.getCollection(“Collection Name");
5) Insert Document:
To insert a document into MongoDB, insert() method of com.mongodb.client.MongoCollection class is
used.
6) Display document:
To select all documents from the collection, find() method is used. This method returns a cursor, so you
need to iterate this cursor.
7) Update Document:
To update a document from the collection, updateOne() method is used.
8) Remove document:
BasicDBObject searchQuery = new BasicDBObject();
searchQuery.put("name", "Monika");
coll.remove(searchQuery);
Example:
import com.mongodb.*;
public class conmongo {
public static void main(String[] args) {
try {
// connection details assumed: local mongod on the default port
MongoClient mongoClient = new MongoClient("localhost", 27017);
DB db = mongoClient.getDB( "mydb" );
DBCollection coll = db.createCollection("Stud", null);
BasicDBObject doc1 = new BasicDBObject("rno","1").append("name","Mona");
BasicDBObject doc2 = new BasicDBObject("rno","2").append("name","swati");
coll.insert(doc1);
coll.insert(doc2);
// display every document in the collection
DBCursor cursor = coll.find();
while (cursor.hasNext())
{
System.out.println(cursor.next());
}
// update: change the name "Monika" to "Ragini"
BasicDBObject query = new BasicDBObject();
query.put("name", "Monika");
BasicDBObject N1 = new BasicDBObject();
N1.put("name", "Ragini");
BasicDBObject S1 = new BasicDBObject();
S1.put("$set", N1);
coll.update(query, S1);
// remove the document with name "Monika"
BasicDBObject R1 = new BasicDBObject();
R1.put("name", "Monika");
coll.remove(R1);
}
catch(Exception e)
{
e.printStackTrace();
}
}
}
Or:
MongoClient mongoClient = new MongoClient("db1.server.com");
After the connection is established, we can obtain a database and make authentication (if the server is
running in secure mode), for example:
MongoClient mongoClient = new MongoClient();
DB db = mongoClient.getDB("test");
char[] password = new char[] {'s', 'e', 'c', 'r', 'e', 't'};
boolean authenticated = db.authenticate("root", password);
if (authenticated) {
System.out.println("Successfully logged in to MongoDB!");
}
else
{
System.out.println("Invalid username/password");
}
By default, the MongoDB server runs in trusted mode, which does not require authentication. When
authentication is enabled, the credentials can also be supplied in the connection string, for example:
mongodb://root:secret@db1.server.com:27027
Connecting to the users database on server db2.server.com:
mongodb://db2.server.com/users
Connecting to the products database on a named MongoDB
server db3.server.com running on port 27027 with user tom and password secret:
mongodb://tom:secret@db3.server.com:27027/products
Connecting to a replica set of three servers:
mongodb://db1.server.com,db2.server.com,db3.server.com
Program:
Steps:
1. Open Eclipse.
2. File -> New -> Java Project -> Next -> Enter project name: assign16 -> Next -> Finish
3. If Package Explorer is not visible, follow these steps:
Window -> Show View -> select Package Explorer (the project name is now visible)
4. Right-click on the project name -> New -> Class -> give the class name Student (the class name
should match the table created in the terminal) -> OK
5. Download the mongo-java-driver-2.10.1.jar file
Procedure to add the jar file:
1. In the Package Explorer window, right-click on the project name -> Build Path -> Configure Build Path (the configuration
window opens) -> select the Libraries tab -> click the Add External JARs button -> select the mongo-java-driver-2.10.1.jar file.
6. Open a terminal -> create the database college in MongoDB -> create the table Student. The MongoDB code is as follows:
[sinhgad@localhost ~]$ su
Password:
[root@localhost sinhgad]# cd Documents
[root@localhost Documents]# ls
abhi.asm DBMS mongo-java-driver-2.12.2.jar
abhishek Dining.java overlap
abhishek.asm EL-III Ass_A1 overlap.asm~
abhishek.o jdk-8u161-nb-8_2-linux-x64.sh overlap.o
add.asm Link to sachin.cpp overlapp.asm
a.out mongo S151024282.cpp
Assignment 9.docx mongodb-linux-x86_64-2.6.3 sachin.cpp~
cloudsim-3.0.2 mongodb-linux-x86_64-3.0.2 vishal
cloudsim-3.0.2.jar mongo-java-driver-2.12.2
[root@localhost Documents]# cd mongodb-linux-x86_64-3.0.2/bin/
[root@localhost bin]# ./mongod
2018-08-17T08:50:19.338+0530 I JOURNAL [initandlisten] journal dir=/data/db/journal
2018-08-17T08:50:19.709+0530 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag
is 'always'.
2018-08-17T09:08:19.182+0530 I NETWORK [conn2] end connection 127.0.0.1:47754 (1 connection now open)
> db.createCollection("Student");
{ "ok" : 1 }
> db.Student.insert([{"rollno":1,"name":"abc"},{"rollno":2,"name":"def"}]);
BulkWriteResult({
"writeErrors" : [ ],
"writeConcernErrors" : [ ],
"nInserted" : 2,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
> db.Student.find().pretty();
{
"_id" : ObjectId("5b763fea9472e3780266978e"),
"rollno" : 1,
"name" : "abc"
}
{
"_id" : ObjectId("5b763fea9472e3780266978f"),
"rollno" : 2,
"name" : "def"
}
7. Open Eclipse (containing the project) and enter the code below in the class file Student.
Student.java file (Create in Eclipse)
package assign16;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.MongoClient;
import java.util.Set;
public class Student {
    public static void main(String[] args) throws Exception {
        MongoClient mongoClient = new MongoClient("localhost", 27017); // connect to the local mongod
        DB db = mongoClient.getDB("college");                          // database created in the terminal
        Set<String> names = db.getCollectionNames();                   // list the collection names
        for (String name : names)
            System.out.println(name);
        DBCollection coll = db.getCollection("Student");
        DBCursor cur = coll.find();                                    // print every Student document
        while (cur.hasNext())
            System.out.println(cur.next());
        mongoClient.close();
    }
}
Output:-
Student
system.indexes
{ "_id" : { "$oid" : "5b763fea9472e3780266978e"} , "rollno" : 1.0 , "name" : "abc"}
{ "_id" : { "$oid" : "5b763fea9472e3780266978f"} , "rollno" : 2.0 , "name" : "def"}
Conclusion: Successfully performed MongoDB database connectivity with Java and performed
database navigation operations.
ASSIGNMENT NO.: 2
Title: Implement MySQL/Oracle database connectivity with PHP/Python/Java. Implement database
navigation operations (add, delete, edit) using ODBC/JDBC.
Objective: To perform MySQL database connectivity with Java and perform database navigation
operations using JDBC (MySQL Connector/J driver).
Requirements:
Software Requirements: Eclipse, JDK 1.6, MySQL, Java-MySQL Connector, Fedora 20.
Hardware Requirements: Minimum 2GB RAM.
Theory:
Introduction to JDBC
Java Database Connectivity (JDBC) is an Application Programming Interface (API) used to connect
Java applications to databases. JDBC is used to interact with various types of databases such as Oracle,
MS Access, MySQL and SQL Server. JDBC can also be defined as the platform-independent interface
between a relational database and the Java programming language. It allows a Java program to execute
SQL statements and retrieve results from a database.
JDBC Driver:-
A JDBC driver is required to process SQL requests and generate results. The following are the different
types of drivers available in JDBC:
Type-1 Driver or JDBC-ODBC bridge
Type-2 Driver or Native API or Partly Java Driver
Type-3 Driver or Network Protocol Driver
Type-4 Driver or Thin Driver or Pure Java Driver
Installation Steps:-
In Eclipse perform following steps:
1. File - New – Java Project –Give Project Name – ok
2. In project Explorer window- right click on project name-new- class- give Class name- ok
3. In project Explorer window- right click on project name- Build path- Configure build path-
Libraries- Add External Jar - Java-MySQL Connector
4. In MySQL, create one database and in that database create a table.
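1) Register the Driver class
Before a connection can be created, the JDBC driver class is loaded so that it registers itself with DriverManager. A minimal sketch, assuming the MySQL Connector/J 5.x driver used in this assignment (for the Oracle thin driver the class name would be oracle.jdbc.driver.OracleDriver):
// Loads the driver class; the driver registers itself with DriverManager
Class.forName("com.mysql.jdbc.Driver");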
2) Create a Connection
The getConnection() method of the DriverManager class is used to create a connection.
Syntax:
getConnection(String url)
getConnection(String url, String username, String password)
getConnection(String url, Properties info)
Example:
Connection con =
DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:XE","username","password");
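Since this assignment connects to MySQL, the equivalent call with the MySQL Connector/J URL format would look roughly as follows; the database name office matches the steps later in this assignment, while the user root and the password are assumptions:
// jdbc:mysql://<host>:<port>/<database>
Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/office", "root", "root");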
Example of Connectivity:
import java.sql.*;
import java.util.*;
class Main
{
    public static void main(String a[])
    {
        try
        {
            // Creating the connection
            String url = "jdbc:oracle:thin:@localhost:1521:xe";
            String user = "system";
            String pass = "12345";
            Connection con = DriverManager.getConnection(url, user, pass);
            // Example INSERT statement (table and values are illustrative)
            String sql = "INSERT INTO Users (username, password, fullname, email) VALUES ('bill', 'secretpass', 'Bill Gates', 'bill@example.com')";
            Statement st = con.createStatement();
            int m = st.executeUpdate(sql);
            if (m == 1)
                System.out.println("inserted successfully : " + sql);
            else
                System.out.println("insertion failed");
            con.close();
        }
        catch (Exception ex)
{
System.err.println(ex);
}
}
}
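1) INSERT operation:
A minimal sketch of the PreparedStatement-based INSERT that the explanation below refers to; the column list matches the UPDATE example later in this section, and the literal values (including the e-mail address) are illustrative:
String sql = "INSERT INTO Users (username, password, fullname, email) VALUES (?, ?, ?, ?)";
PreparedStatement statement = con.prepareStatement(sql);
statement.setString(1, "bill");                 // parameter indexes are 1-based
statement.setString(2, "secretpass");
statement.setString(3, "Bill Gates");
statement.setString(4, "bill@example.com");     // illustrative value
int rowsInserted = statement.executeUpdate();   // number of rows affected
if (rowsInserted > 0) {
    System.out.println("A new user was inserted successfully!");
}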
In this code, we create a parameterized SQL INSERT statement and create a PreparedStatement from
the Connection object. To set values for the parameters in the INSERT statement, we use
the PreparedStatement's setString() methods because all these columns in the Users table are of type
VARCHAR, which is translated to the String type in Java. Note that the parameter index is 1-based (unlike
the 0-based index of Java arrays).
Finally, we call the PreparedStatement's executeUpdate() method to execute the INSERT
statement. This method returns an update count indicating how many rows in the table were affected by
the query, so checking this return value is necessary to ensure the query was executed successfully. In
this case, executeUpdate() should return 1 to indicate that one record was inserted.
2) SELECT operation:
The following code snippet queries all records from the Users table and prints out details for each
record:
String sql = "SELECT * FROM Users";
Statement statement = con.createStatement();
ResultSet result = statement.executeQuery(sql);
int count = 0;
while (result.next()) {
    String name = result.getString(2);  // value of the second column in the current row
    System.out.println(++count + ". " + name);
}
The while loop iterates over the rows contained in the result set by repeatedly checking the return value
of the ResultSet's next() method. The next() method moves a cursor forward in the result set and checks
whether there is any remaining record. For each iteration, the result set holds the data of the current row,
and we use the ResultSet's getXXX(column index / column name) methods to retrieve the value of a
specific column in the current row.
3) UPDATE operation:
The following code snippet will update the record of "Bill Gates" that we inserted previously:
String sql = "UPDATE Users SET password=?, fullname=?, email=? WHERE username=?";
4) DELETE operation:
The following code snippet will delete a record whose username field contains “bill”:
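A minimal sketch, again with a PreparedStatement; the table and column names follow the earlier examples:
String sql = "DELETE FROM Users WHERE username=?";
PreparedStatement statement = con.prepareStatement(sql);
statement.setString(1, "bill");                 // delete the record whose username equals 'bill'
int rowsDeleted = statement.executeUpdate();
if (rowsDeleted > 0) {
    System.out.println("A user was deleted successfully!");
}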
Program:
Steps:
1. Open Eclipse
2. File → New → Java Project → Next → Enter project name: assign17 → Next → Finish
3. If the Package Explorer is not visible, follow these steps:
Window → Show View → select Package Explorer (the project name can now be seen)
4. Right click on the project name → select New → Class → give the class name office (the name of the class should be the name of the database created in the terminal) → OK
5. Download the mysql-connector-java.jar file
Procedure to add the jar file:
1. In the Package Explorer window right click on the project name → select Build Path → Configure Build Path (the configuration window opens) → select the Libraries tab → click the Add External JARs button → select the mysql-connector-java.jar file.
6. Open a terminal → create the database office → create the table student
7. Open Eclipse (containing the project) and enter the code below in the class file office
office.java file (Create in Eclipse)
import java.sql.*;
public class office {
    // main method: see the sketch below
}
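One possible completion of the class body, given as a minimal sketch: it assumes a local MySQL server with user root and password root (assumptions), the MySQL Connector/J 5.x driver class com.mysql.jdbc.Driver, and the office database with a student(rollno, name) table created in step 6. The add/edit/delete operations would use PreparedStatements exactly as in the INSERT, UPDATE and DELETE snippets shown earlier in this assignment.
import java.sql.*;
public class office {
    public static void main(String[] args) {
        try {
            // Load the MySQL Connector/J driver and open a connection to the office database
            Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/office", "root", "root");
            // Navigate the student table and print rollno and name of every row
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT * FROM student");
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " " + rs.getString(2));
            }
            con.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}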
Output:
1 abc
2 def
Conclusion: Studied JDBC and the types of JDBC drivers, performed MySQL database connectivity with
Java, and performed database navigation operations.