Temporary Stored Procedures
SQL Server: T-SQL (Transact-SQL)
Ports
1. Which TCP/IP port does SQL Server run on? - SQL Server listens on port 1433 by default, but
the port can be changed for better security.
2. From where can you change the default port? - From the Network Utility, under the TCP/IP
properties.
What are the authentication modes in SQL Server? How can they be changed?
SQL Server supports two authentication modes: Windows Authentication mode and Mixed mode
(Windows Authentication plus SQL Server authentication).
To change authentication mode in SQL Server click Start, Programs, Microsoft SQL Server and
click SQL Enterprise Manager to run SQL Enterprise Manager from the Microsoft SQL Server
program group. Select the server then from the Tools menu select SQL Server Configuration
Properties, and choose the Security page.
+++++++++++++++++++++++++++++++++++++++=
SET NOCOUNT ON
GO
CREATE PROC #tempInsertProc
@id integer
AS
INSERT INTO foo (bar) VALUES (@id)
GO
EXEC #tempInsertProc 10
GO
EXEC #tempInsertProc 11
GO
EXEC #tempInsertProc 12
GO
DROP PROC #tempInsertProc
GO
SET NOCOUNT OFF
GO
Temporary stored procedures on Microsoft SQL Server are prefixed with a pound sign (#). One
pound sign means the procedure is temporary within the session; two pound signs (##) mean it is
a global temporary procedure, which can be called by any connection to the SQL Server during
its lifetime.
You're probably wondering why you would create temporary procedures when you could just create
a permanent stored procedure. In most cases it's probably better to use a permanent stored
procedure, but if you're like me and don't like putting too much logic in the database, yet need
to use a stored procedure, these are one way to go.
What is Normalization?
It is a set of rules that have been established to aid in the design of tables that are meant to
be connected through relationships. This set of rules is known as Normalization.
The database community has developed a series of guidelines for ensuring that databases are
normalized. These are referred to as normal forms and are numbered from one (the lowest form
of normalization, referred to as first normal form or 1NF) through five (fifth normal form or 5NF).
First normal form (1NF) sets the very basic rules for an organized database:
• Create separate tables for each group of related data and identify each row with a unique
column or set of columns (the primary key).
Second normal form (2NF) further addresses the concept of removing duplicative data:
• Remove subsets of data that apply to multiple rows of a table and place them in separate
tables.
• Create relationships between these new tables and their predecessors through the use of
foreign keys.
Third normal form (3NF) goes one step further:
• Remove columns that are not dependent upon the primary key.
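As a sketch of these rules, consider a hypothetical Orders table that repeats customer details on every row; 2NF moves the repeated customer subset into its own table (all table and column names here are illustrative, not from the original):

```sql
-- Unnormalized: customer details are repeated on every order row.
CREATE TABLE Orders_Unnormalized (
    OrderId      int PRIMARY KEY,   -- 1NF: a unique key identifies each row
    CustomerName varchar(50),
    CustomerCity varchar(50),
    OrderDate    datetime
);

-- 2NF: move the repeated customer subset into its own table ...
CREATE TABLE Customers (
    CustomerId   int PRIMARY KEY,
    CustomerName varchar(50),
    CustomerCity varchar(50)
);

-- ... and relate the new table to its predecessor through a foreign key.
CREATE TABLE Orders (
    OrderId    int PRIMARY KEY,
    CustomerId int FOREIGN KEY REFERENCES Customers(CustomerId),
    OrderDate  datetime
);
```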
What is denormalization?
Denormalization is the reverse process: redundant data is deliberately introduced into
normalized tables, usually to reduce joins and speed up read-heavy queries at the cost of some
update complexity.
What are the different types of keys?
If the table has more than one candidate key, one of them will become the primary key, and the
rest are called alternate keys.
A key formed by combining at least two or more columns is called composite key.
A primary key is the field(s) in a table that uniquely defines the row in the table; the values in the
primary key are always unique.
A foreign key is a constraint that establishes a relationship between two tables. This relationship
typically involves the primary key field(s) from one table with an adjoining set of field(s) in
another table (although it could be the same table). The adjoining field(s) is the foreign key.
1. A primary key doesn't allow NULLs, but a unique key allows one NULL only.
2. By default a primary key creates a clustered index on the column, whereas a unique key creates
a nonclustered index by default.
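A minimal sketch of the two constraints (the table and column names are illustrative):

```sql
CREATE TABLE Employees (
    EmployeeId int NOT NULL PRIMARY KEY,  -- no NULLs; clustered index by default
    Email      varchar(100) NULL UNIQUE   -- one NULL allowed; nonclustered index by default
);

INSERT INTO Employees (EmployeeId, Email) VALUES (1, NULL);  -- allowed
-- A second row with a NULL Email would violate the UNIQUE constraint in SQL Server.
```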
What is an Index?
An index is a data structure used to provide quick access to data in a database table or
view.
When queries are run against a database, an index helps by keeping the data sorted in a way that
lets the query be processed faster; data retrievals are much quicker when an index is available.
An index is a physical structure containing pointers to the data. Indices are created in an existing
table to locate rows more quickly and efficiently. It is possible to create an index on one or more
columns of a table, and each index is given a name. The users cannot see the indexes; they are
just used to speed up queries.
Effective indexes are one of the best ways to improve performance in a database application. A
table scan happens when there is no index available to help a query. In a table scan SQL Server
examines every row in the table to satisfy the query results. Table scans are sometimes
unavoidable, but on large tables, scans have a severe impact on performance.
Syntax
e.g. (the index name here is assumed, since the original listing was truncated):

CREATE TABLE Employees
(
    EmployeeNumber int,
    FirstName nvarchar(20),
    HourlySalary money
);
GO
CREATE INDEX IX_EmployeeNumber
ON Employees(EmployeeNumber);
GO

To Delete an Index, use DROP INDEX IndexName ON TableName.
In this formula, replace the TableName with the name of the table that contains the index.
Replace the IndexName with the name of the index you want to get rid of.
Here is an example:

USE Exercise;
GO
DROP INDEX IX_EmployeeNumber ON Employees;
GO
A clustered index is a special type of index that reorders the way records in the table are
physically stored. Because of this sorting, a table can have only one clustered index. The leaf
nodes of a clustered index contain the data pages.
Non-clustered indexes contain a row identifier at the leaf level of the index. This row identifier is a
pointer to a location of the data on the disk. This allows you to have more than one non-clustered
index per table.
A nonclustered index is a special type of index in which the logical order of the index does
not match the physical stored order of the rows on disk. The leaf nodes of a nonclustered index
do not consist of the data pages. Instead, the leaf nodes contain index rows.
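A short sketch of the two kinds of index (it assumes an Orders table with these columns):

```sql
-- Only one clustered index per table: it defines the physical row order.
CREATE CLUSTERED INDEX IX_Orders_OrderId
    ON Orders(OrderId);

-- Many nonclustered indexes are possible: their leaf level holds
-- row locators pointing back at the data rows.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON Orders(OrderDate);
```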
Explain about clustered and nonclustered indexes. How do you choose between a clustered index
and a nonclustered index? (See the descriptions above.)
What are User-Defined Functions?
User-Defined Functions allow defining your own T-SQL functions that can accept 0 or more
parameters and return a single scalar data value or a table data type.
3 types:
Scalar User-Defined Function: returns a single scalar data value.
Inline Table-Value User-Defined Function: returns a table and is an excellent
alternative to a view, as the user-defined function can pass parameters into a T-SQL select
command and in essence provide us with a parameterized, non-updateable view of the
underlying tables.
Multi‐statement Table‐Value User‐Defined Function
A Multi‐Statement Table‐Value user‐defined function returns a table and is also an
exceptional alternative to a view as the function can support multiple T‐SQL statements to
build the final result where the view is limited to a single SELECT statement. Also, the
ability to pass parameters into a TSQL select command or a group of them gives us the
capability to in essence create a parameterized, non‐updateable view of the data in the
underlying tables. Within the create function command you must define the table structure
that is being returned. After creating this type of user-defined function, it can be used in
the FROM clause of a T-SQL command, unlike the behavior found when using a stored
procedure, which can also return record sets.
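The three flavors above can be sketched as follows (the function names, the tax rate, and the Orders table are all assumed for illustration):

```sql
-- Scalar UDF: returns a single value.
CREATE FUNCTION dbo.GetTax (@Amount decimal(10,2))
RETURNS decimal(10,2)
AS
BEGIN
    RETURN @Amount * 0.1
END
GO

-- Inline table-valued UDF: a parameterized, non-updateable "view".
CREATE FUNCTION dbo.OrdersByCustomer (@CustomerId int)
RETURNS TABLE
AS
RETURN (SELECT OrderId, OrderDate FROM Orders WHERE CustomerId = @CustomerId);
GO

-- Usage: a scalar UDF in a select list, a table-valued UDF in FROM.
SELECT dbo.GetTax(100.00);
SELECT * FROM dbo.OrdersByCustomer(7);
```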
What is a sub-query?
Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed
arbitrarily within the body of another SQL statement. A sub‐query is executed by enclosing it in a
set of parentheses. Sub‐queries are generally used to return a single row as an atomic value,
though they may be used to compare values against multiple rows with the IN keyword.
A subquery is a SELECT statement that is nested within another T-SQL statement. A subquery
SELECT statement, if executed independently of the T-SQL statement in which it is nested, will
return a result set. This means a subquery SELECT statement can stand alone and is not dependent
on the statement in which it is nested. A subquery SELECT statement can return any number of
values, and can be found in the column list of a SELECT statement, or in the FROM, GROUP BY,
HAVING, and/or ORDER BY clauses of a T-SQL statement. A subquery can also be used as a parameter
to a function call. Basically, a subquery can be used anywhere an expression can be used.
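Two common shapes, sketched against hypothetical Orders and Customers tables:

```sql
-- Subquery returning a single atomic value:
SELECT * FROM Orders
WHERE OrderDate = (SELECT MAX(OrderDate) FROM Orders);

-- Subquery compared against multiple rows with the IN keyword:
SELECT * FROM Orders
WHERE CustomerId IN (SELECT CustomerId FROM Customers WHERE City = 'London');
```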
+++++++++++++++++++++++++++++++++++++++++++++
What are the ACID properties of a transaction?
A-Atomicity: Using Transactions either none or all the statements inside the transaction will
execute successfully.
C-Consistency: Using Transaction we can ensure that using the SQL statement we moved the
table/tables from one consistent state to another.
I-Isolation: By using Isolation levels along with transactions we can ensure that no other SQL
statement is using the table while transaction is in progress.
D-Durability: No data should be lost. One good thing with transactions is that we can roll back
if we feel there is some problem with the query. Moreover, we have logs from which we can restore
our data.
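A minimal transaction sketch illustrating atomicity and durability (the Accounts table and the transfer amounts are hypothetical):

```sql
BEGIN TRY
    BEGIN TRANSACTION;
        -- Both updates succeed together or not at all (atomicity),
        -- leaving the data in a consistent state (consistency).
        UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
        UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
    COMMIT TRANSACTION;   -- once committed, the change survives (durability)
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- on any error, neither statement takes effect
END CATCH
```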
There are many advantages to this approach: read-intensive applications typically want more index
structures, data redundancies, and even other views of data. Transaction processing systems want
the best write throughput while incurring only minimal overhead. The access patterns of readers
and writers typically differ: readers are more prone to larger, analysis-type queries and writers
are more prone to singleton inserts, updates, and deletes. When these activities are separated,
the administrator can focus on recovery strategies for a smaller, more manageable transaction
processing system. OLTP databases tend to be much smaller than data-redundant decision-support
or analysis-oriented databases.
1. READ UNCOMMITTED
2. READ COMMITTED
3. REPEATABLE READ
4. SERIALIZABLE
1. Read uncommitted
When it's used, SQL Server does not issue shared locks while reading data. So you can read an
uncommitted transaction that might get rolled back later. This isolation level is also called
dirty read. This is the lowest isolation level. It ensures only that physically corrupt data
will not be read.
2. Read committed
This is the default isolation level in SQL Server. When it's used, SQL Server will use shared
locks while reading data. It ensures that physically corrupt data will not be read, and it will
never read data that another application has changed and not yet committed, but it does not
ensure that the data will not be changed before the end of the transaction.
3. Repeatable read
When it's used, dirty reads and nonrepeatable reads cannot occur. It means that locks will be
placed on all data that is used in a query, and other transactions cannot update the data.
4. Serializable
This is the most restrictive isolation level. When it's used, phantom values cannot occur. It
prevents other users from updating or inserting rows into the data set until the transaction is
complete.
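The isolation level is set per connection before the transaction starts; a minimal sketch (the Orders table is assumed):

```sql
-- Set the isolation level for this connection.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

BEGIN TRANSACTION;
    SELECT COUNT(*) FROM Orders;   -- may see uncommitted ("dirty") rows
COMMIT TRANSACTION;
```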
++++++++++++++++++++++++++++++++++
What is the difference between the DELETE and TRUNCATE TABLE commands?
TRUNCATE TABLE is functionally identical to a DELETE statement with no WHERE clause: both
remove all rows in the table. However, the DELETE statement removes rows one at a time and
records an entry in the transaction log for each deleted row, while TRUNCATE TABLE removes the
data by deallocating the data pages used to store the table's data, and only the page
deallocations are recorded in the transaction log.
DELETE is a fully logged operation, so the deletion of each row gets logged in the transaction
log, which makes it slow. TRUNCATE TABLE also deletes all the rows in a table, but it won't log
the deletion of each row; instead it logs the deallocation of the data pages of the table, which
makes it faster. TRUNCATE TABLE can still be rolled back when used inside a transaction, but it
is faster and uses fewer system and transaction log resources than DELETE.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead,
use a DELETE statement without a WHERE clause. Because TRUNCATE TABLE does not log individual
row deletions, it cannot activate a DELETE trigger. TRUNCATE TABLE may not be used on tables
participating in an indexed view.
TRUNCATE TABLE removes all rows from a table, but the table structure and its columns,
constraints, indexes and so on remain. The counter used by an identity for new rows is reset to
the seed for the column. If you want to retain the identity counter, use DELETE instead. If you
want to remove table definition and its data, use the DROP TABLE statement.
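The identity-reset difference can be sketched with a throwaway table (the table name and seed are illustrative):

```sql
CREATE TABLE Demo (Id int IDENTITY(1,1), Name varchar(20));

INSERT INTO Demo (Name) VALUES ('A');  -- Id = 1
DELETE FROM Demo;
INSERT INTO Demo (Name) VALUES ('B');  -- Id = 2: DELETE kept the identity counter

TRUNCATE TABLE Demo;
INSERT INTO Demo (Name) VALUES ('C');  -- Id = 1 again: TRUNCATE reset it to the seed
```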
Some of the tools/ways that help you troubleshoot performance problems are: SET
SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO ON, SQL Server Profiler,
Windows NT/2000 Performance Monitor, and the graphical execution plan in Query Analyzer.
What is a deadlock and what is a live lock? How will you go about resolving deadlocks?
A deadlock is a situation where two processes, each holding a lock on one piece of data, attempt
to acquire a lock on the other's piece. Each process would wait indefinitely for the other to
release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks
and terminates one user's process.
A livelock is one where a request for an exclusive lock is repeatedly denied because a series of
overlapping shared locks keeps interfering. SQL Server detects the situation after four denials
and refuses further shared locks. A livelock also occurs when read transactions monopolize a
table or page, forcing a write transaction to wait indefinitely. Check out SET DEADLOCK_PRIORITY
and "Minimizing Deadlocks" in SQL Server Books Online.
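One common mitigation is to catch the deadlock-victim error (1205) and retry; a sketch, with hypothetical TableA/TableB names:

```sql
-- Optionally volunteer this session to be chosen as the deadlock victim.
SET DEADLOCK_PRIORITY LOW;

BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE TableA SET Col1 = 1 WHERE Id = 1;
        UPDATE TableB SET Col1 = 1 WHERE Id = 1;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1205       -- this session was the deadlock victim
        ROLLBACK TRANSACTION;      -- retry logic would go here
END CATCH
```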
What are joins? Explain the different types of joins.
A join puts data from two or more tables into a single result set.
Cross Join
A cross join that does not have a WHERE clause produces the Cartesian product of the tables
involved in the join. The size of a Cartesian product result set is the number of rows in the
first table multiplied by the number of rows in the second table. The common example is when a
company wants to combine each product with a pricing table to analyze each product at each
price.
Inner Join
A join that displays only the rows that have a match in both joined tables is known as an inner
join. This is the default type of join in the Query and View Designer.
Outer Join
A join that includes rows even if they do not have related rows in the joined table is an outer
join. You can create three different outer joins to specify the unmatched rows to be included:
Left Join/Left Outer Join: In a left outer join, all rows in the first-named table (the "left"
table, which appears leftmost in the JOIN clause) are included. Unmatched rows in the right
table do not appear.
Right Outer Join: In a right outer join, all rows in the second-named table (the "right" table,
which appears rightmost in the JOIN clause) are included. Unmatched rows in the left table are
not included.
Full Outer Join: In Full Outer Join all rows in all joined tables are included, whether they are
matched or not.
Self Join
This is a particular case when a table joins to itself, with one or two aliases to avoid
confusion. A self join can be of any type, as long as the joined tables are the same. A self
join is rather unique in that it involves a relationship with only one table. The common example
is when a company has a hierarchical reporting structure whereby one member of staff reports to
another. A self join can be an outer join or an inner join.
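The join types above can be sketched against hypothetical Customers, Orders, Products, Prices, and Staff tables:

```sql
-- Inner join: only rows that match in both tables.
SELECT c.Name, o.OrderId
FROM Customers c
INNER JOIN Orders o ON o.CustomerId = c.CustomerId;

-- Left outer join: all customers, even those without orders.
SELECT c.Name, o.OrderId
FROM Customers c
LEFT JOIN Orders o ON o.CustomerId = c.CustomerId;

-- Cross join: Cartesian product (every product at every price).
SELECT p.ProductName, pr.Price
FROM Products p
CROSS JOIN Prices pr;

-- Self join: staff reporting hierarchy, with aliases to avoid confusion.
SELECT e.Name AS Employee, m.Name AS Manager
FROM Staff e
LEFT JOIN Staff m ON e.ManagerId = m.StaffId;
```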
+++++++++++++++++++++++++++++++++++++++++++++++++++++++=
What is data binding? A dataset can be bound to controls on a web form; the values from the
dataset are automatically displayed in the controls without having to write separate code to
display them.
http://www.csharp-station.com/Tutorials/AdoDotNet/Lesson03.aspx
Data-Provider
Application ----------------------------------------------> Data-Source
Data-Source
A data source is usually a database, but it could also be a text file, an Excel spreadsheet, or
an XML file. There are many different types of databases available; for example, there is
Microsoft SQL Server, Microsoft Access, Oracle, Borland Interbase, and IBM DB2, just to name a
few.
ADO.NET is an object-oriented set of libraries that allows you to interact with data sources.
Data-Provider
Since different data sources expose different protocols, we need a way to communicate with the
right data source using the right protocol. Some older data sources use the ODBC protocol, many
newer data sources use the OleDb protocol, and there are more data sources every day that allow
you to communicate with them directly through .NET ADO.NET class libraries.
ADO.NET provides a relatively common way to interact with data sources, but comes in different
sets of libraries for each way you can talk to a data source. These libraries are called Data
Providers and are usually named for the protocol or data source type they allow you to interact
with.
Table 1. ADO.NET Data Providers are class libraries that allow a common way to interact with
specific data sources or protocols. The library APIs have prefixes that indicate which provider
they support.

Provider Name          API prefix  Data Source Description
ODBC Data Provider     Odbc        Data sources with an ODBC interface; normally older databases.
OleDb Data Provider    OleDb       Data sources that expose an OleDb interface, i.e. Access or Excel.
Oracle Data Provider   Oracle      For Oracle databases.
SQL Data Provider      Sql         For interacting with Microsoft SQL Server.
Borland Data Provider  Bdp         Generic access to many databases such as Interbase, SQL Server,
                                   IBM DB2, and Oracle.
An example may help you to understand the meaning of the API prefix. One of the first ADO.NET
objects you'll learn about is the connection object, which allows you to establish a connection
to a data source. If we were using the OleDb Data Provider to connect to a data source that
exposes an OleDb interface, we would use a connection object named OleDbConnection. Similarly,
the connection object name would be prefixed with Odbc or Sql for an OdbcConnection object on an
Odbc data source or a SqlConnection object on a SQL Server database, respectively. Since we are
using MSDE in this tutorial (a scaled-down version of SQL Server), all the API objects will have
the Sql prefix, i.e. SqlConnection.
ADO.NET Objects
ADO.NET includes many objects you can use to work with data.
================================================
Table 2. ADO.NET connection strings contain key/value pairs for specifying how to make a
database connection. They include the location, the name of the database, and security
credentials.

Parameter Name       Description
Data Source          Identifies the server. Could be the local machine, a machine domain name,
                     or an IP address.
Initial Catalog      Database name.
Integrated Security  Set to SSPI to make the connection with the user's Windows login.
User ID              Name of a user configured in SQL Server.
Password             Password matching the SQL Server User ID.
using System;
using System.Data;
using System.Data.SqlClient;

/// <summary>
/// Demonstrates how to work with SqlConnection objects
/// </summary>
class SqlConnectionDemo
{
    static void Main()
    {
        // 1. Instantiate the connection
        // (instead of Initial Catalog, AttachDbFilename can be used
        // to attach a database file)
        SqlConnection conn = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");

        try
        {
            // 2. Open the connection
            conn.Open();

            // 3. Use the connection (execute commands, fill datasets, ...)
        }
        finally
        {
            // 4. Close the connection
            conn.Close();
        }
    }
}

Example: DataSet and DataAdapter (binding the data of a table to a data grid view):

dataGridView1.DataSource = dsExercise;
dataGridView1.DataMember = dsExercise.Tables[0].TableName;
Connection-Pooling
-------------------------------
Connection pooling enables an application to use a connection from a pool of connections that do
not need to be re-established for each use. Once a connection has been created and placed in a
connection pool, an application can reuse that connection without performing the complete
connection creation process.
By default, the connection pool is created when the first connection with a unique connection
string connects to the database. The pool is populated with connections up to the minimum pool
size. Additional connections can be added until the pool reaches the maximum pool size.
When a user requests a connection, it is returned from the pool rather than a new connection
being established; and when a user releases a connection, it is returned to the pool rather than
being released. But be sure that your connections use the same connection string each time. Here
is the syntax (the pooling-related connection string keywords; the values are illustrative):

"Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI;
 Min Pool Size=5;Max Pool Size=100;Pooling=true"
---------------------------------
Usually, we have a configuration file specific to our application and keep static information
like the connection string in it. That in turn means that most of the time we want to connect to
the same database server, the same database, and with the same user name and password, for every
data request, small or big.
ADO.NET with IIS uses a technique called connection pooling, which is very helpful in applications
with such designs. What it does is, on first request to database, it serves the database call. Once
it is done and when the client application requests for closing the connection, ADO.NET does not
destroy the complete connection, rather it creates a connection pool and puts the released
connection object in the pool and holds the reference to it. And next time when the request to
execute any query/stored proc comes up, it bypasses the hefty process of establishing the
connection and just picks up the connection from the connection pool and uses that for this
database call. This way, it can return the results comparatively faster.
When any database request is made through ADO.NET, ADO.NET searches for the pool associated
with the exact match for the connection string, in the same app domain and process. If such a
pool is not found, ADO.NET creates a new one for it, however, if it is found, it tries to fetch the
usable connection from that pool. If no usable free connection is found in the pool, a new
connection is created and added to the pool. This way, new connections keep on adding to the
pool till Max Pool Size is reached, after that when ADO.NET gets request for further connections, it
waits for Connection Timeout time and then errors out.
We can explicitly close the connection by using the Close() or Dispose() method of the
connection object.
Data-set / Data-reader
A DataSet is disconnected architecture; a DataReader is connected architecture.
A DataReader is read-only and forward-only; a DataSet holds cached data and can modify the data.
A DataReader pulls data using SQL command operations; for a DataSet you need a DataAdapter to
get the data.
When an application needs to access data from more than one table, a DataSet is the best choice.
But one of the biggest drawbacks of the DataSet is speed. Because a DataSet carries considerable
overhead (relations, multiple tables, etc.), it is slower than a DataReader. Always try to use a
DataReader wherever possible, as it's meant specially for speed.
Data-adapter
A DataAdapter is a bridge between a DataSet and a data source. A DataAdapter has commands to act
on the data (update, insert, delete, fill).
DataAdapters provide the logic that gets data from the data store and populates the tables in
the DataSet, or pushes the changes in the DataSet back into the data store.
FillSchema :- Uses the SelectCommand to extract just the schema for a table from the data
source, and creates an empty table in the DataSet object with all the corresponding constraints.
eg.
SqlDataAdapter dAd = new SqlDataAdapter();
DataTable dTable = new DataTable();
DataSet dSet = new DataSet();
----
---
dAd.Fill(dTable); // will also work
dAd.Fill(dSet); // will also work
We should only use a DataSet as the parameter when we are expecting more than one result set to
be returned from the database.
How can we check that some changes have been made to dataset since it was loaded?
OR How can we cancel all changes done in dataset? OR How do we get values, which are
changed in a dataset?
For tracking down changes, the DataSet has two members that come to the rescue: GetChanges and
HasChanges.
GetChanges
Returns a dataset containing only the rows that have changed since it was loaded, or since
AcceptChanges was executed.
HasChanges
This property indicates whether any changes have been made since the dataset was loaded or
AcceptChanges was executed. To abandon all changes since the dataset was loaded, use
RejectChanges.
Note: One of the most misunderstood things about these members is the belief that they track
changes in the actual database. That is a fundamental mistake; the changes are related only to
the dataset and have nothing to do with changes happening in the actual database. As datasets
are disconnected, they do not know anything about the changes happening in the actual database.
Command objects sit between the connection object and a data reader or dataset. The following
are the methods provided by the command object:
• ExecuteNonQuery:
Executes the command defined in the CommandText property against the connection defined in
the Connection property, for a query that does not return any rows (an UPDATE, DELETE, or
INSERT). Returns an integer indicating the number of rows affected by the query.
• ExecuteReader:
Executes the command defined in the CommandText property against the connection defined in
the Connection property. Returns a "reader" object that is connected to the resulting row set
within the database, allowing the rows to be retrieved.
• ExecuteScalar:
Executes the command defined in the CommandText property against the connection defined in
the Connection property. Returns only a single value (effectively the first column of the first
row of the resulting row set); any other returned columns and rows are discarded. It is fast and
efficient when only a "singleton" value is required.
--------------------------------------------------------------------------------------------
First Method: Filtering and Sorting with the DataTable Select Method
a Filter Expression might look like this:
"OrderDate >= '01.03.1998' AND OrderDate <= '31.03.1998'"
A typical sort expression is simply the name of the column to sort, followed by an optional ASC
or DESC.
"OrderDate DESC"
The array of DataRow objects returned by the Select method cannot be bound directly to a
DataGrid or other data-bound controls. To accomplish this, use a DataView as shown later in this
article.
Default DataView
A filter expression on the default view might look like this:
"LastName = 'Smith'"
To return only those rows with null values, use an IS NULL comparison in the filter expression.
After you set the RowFilter Property, ADO.NET hides (but does not
eliminate) all rows in the associated DataTable object's Rows collection that
don't match the filter expression. The DataView.Count property returns the
number of rows remaining unhidden in the view.
Sorting the DataView
To sort a DataView, construct a sort expression string; note that the Sort property can accept
any number of columns on which to sort the Rows collection.
--------------------------------------------------------------------------------
// Filter and Sort with the DataTable Select Method
private void BtnFilterAndSort_Click(object sender, System.EventArgs e)
{
string strText;
string strExpr;
string strSort;
DataRow[] foundRows;
DataTable myTable;
myTable = ds.Tables["Orders"];
// Use the Select method to find all rows matching the filter.
foundRows = myTable.Select(strExpr, strSort);
+++++++++++++++++++++++++++++++++++++++++++++++++
// Apply Filter Expression
ds.Tables[0].DefaultView.RowFilter = strFilterExpression;
//SoRTING++++++++++++++++++++++++
// IF Radiobox "Ascending" is checked, then
// sort ascending ...
if (rbAsc.Checked)
{
    strSort = cmbSortArg.Text + " ASC"; // note the leading space before ASC
}
// ... else descending
else
{
    strSort = cmbSortArg.Text + " DESC"; // note the leading space before DESC
}
+++++++++++++++++++++++++++++++++++++++++++++++++
Filtering and Sorting with the DataViewManager (Third Method)
++++++++++++++++++++++++++++++++++++++++++++++++++++++
The basic formula to create a stored procedure is:

CREATE PROCEDURE ProcedureName
AS
    Body of procedure

To get the results of creating a stored procedure, you must execute it (in other words, to use a
stored procedure, you must call it). To execute a procedure, you use the EXECUTE keyword
followed by the name of the procedure. Although there are some other issues related to
executing a procedure, for now we will consider that the simplest syntax to call a procedure is:
EXECUTE ProcedureName
For example, to create a stored procedure that would hold a list of students from a table named
Students, you would create the procedure as follows (the procedure name is assumed, since it was
truncated in the original):

CREATE PROCEDURE GetStudents
AS
BEGIN
    SELECT *
    FROM Students
END
Function
Here is an example (the function name and body are assumed, as the original listing was
truncated):

CREATE FUNCTION GetMessage()
RETURNS varchar(100)
AS
BEGIN
    RETURN 'Hello'
END

Parameterised function:

CREATE FUNCTION Addition(@Number1 Decimal(6,2),
                         @Number2 Decimal(6,2))
RETURNS Decimal(6,2)
AS
BEGIN
    DECLARE @Result Decimal(6,2)
    SET @Result = @Number1 + @Number2
    RETURN @Result
END;
Function calling:
After a function has been created, you can use the value it returns. Using a function is also
referred to as calling it. To call a function, you must qualify its name. To do this, type the name
of the database in which it was created, followed by the period operator, followed by dbo,
followed by the period operator, followed by the name of the function, and its parentheses. The
formula to use is:
DatabaseName.dbo.FunctionName()
1. A procedure can return zero or n values, whereas a function must return one value, which is
mandatory.
2. Procedures can have input and output parameters, whereas functions can have only input
parameters.
3. Procedures allow SELECT as well as DML statements, whereas functions allow only SELECT
statements.
4. Functions can be called from a procedure, whereas procedures cannot be called from a
function.
5. Exceptions can be handled by a try-catch block in a procedure, whereas a try-catch block
cannot be used in a function.
6. Procedures cannot be utilized in a SELECT statement, whereas a function can be embedded in a
SELECT statement.
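The last difference can be sketched with a hypothetical scalar function dbo.Addition and a hypothetical procedure GetStudents:

```sql
-- A function can be embedded in a SELECT statement ...
SELECT dbo.Addition(2.50, 3.25);

-- ... whereas a procedure must be executed on its own.
EXECUTE GetStudents;
```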
+++++++++++++++++++++++++++++++++++=
A return parameter is always returned by a stored procedure, and it is meant to indicate the
success or failure of the stored procedure. The return parameter is always an INT data type.
An OUTPUT parameter is designated specifically by the developer, and it can return other types of
data, such as characters and numeric values. (There are some limitations on the data types that
can be used as output parameters.) You can use multiple OUTPUT parameters in a stored
procedure, whereas you can only use one return parameter.
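Both kinds can be sketched together; the procedure, table, and column names below are assumptions for illustration:

```sql
CREATE PROC GetEmployeeName
    @EmployeeId int,
    @Name       varchar(50) OUTPUT   -- OUTPUT parameter: other (allowed) data types
AS
BEGIN
    SELECT @Name = Name FROM Employees WHERE EmployeeId = @EmployeeId;
    IF @Name IS NULL
        RETURN 1;   -- return parameter: always an int, signals failure here
    RETURN 0;       -- success
END
GO

-- Caller captures both the return value and the OUTPUT parameter.
DECLARE @Result int, @EmpName varchar(50);
EXEC @Result = GetEmployeeName @EmployeeId = 1, @Name = @EmpName OUTPUT;
```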
Packages
One significant difference between stored procedures in PL/SQL and T-SQL is the Oracle package
construct used by PL/SQL. There is no equivalent in T-SQL. A package is a container for logically
related programming blocks such as stored procedures and functions. It has two parts:
• Specification (or spec): Defines the name of the package and supplies method signatures
(prototypes) for each stored procedure or function in the package. The header also defines any
global declarations. The style of the spec is similar to a C or C++ header file.
• Body: Contains code for the stored procedures and functions defined in the package header.
The parameters for each stored procedure or function appear within parentheses and are separated by
commas. Each parameter is optionally tagged with one of three identifiers:
• IN: The value is passed into the PL/SQL block from the calling application. IN is the default
direction if one is not specified.
• OUT: A value generated by the stored procedure and passed back to the calling application.
• INOUT: A value that is passed into the PL/SQL block, possibly modified within the block, and
returned to the calling application.
Each parameter is also tagged to indicate the data type.
The following package spec defines four procedures that create, retrieve, update, and delete data from
the LOCATIONS table in the HR schema.
CREATE OR REPLACE PACKAGE CRUD_LOCATIONS AS
TYPE T_CURSOR IS REF CURSOR;
PROCEDURE GetLocations (cur_Locations OUT T_CURSOR);
PROCEDURE UpdateLocations (p_location_id IN NUMBER,
p_street_address IN VARCHAR2,
p_postal_code IN VARCHAR2,
p_city IN VARCHAR2,
p_state_province IN VARCHAR2,
p_country_id IN CHAR);
PROCEDURE DeleteLocations (p_location_id IN NUMBER);
PROCEDURE InsertLocations (p_location_id OUT NUMBER,
p_street_address IN VARCHAR2,
p_postal_code IN VARCHAR2,
p_city IN VARCHAR2,
p_state_province IN VARCHAR2,
p_country_id IN CHAR);
END CRUD_LOCATIONS;
The following excerpt from the package body for the above package spec shows the skeleton of the
implementation; the bodies of GetLocations and the other procedures are elided here:

CREATE OR REPLACE PACKAGE BODY CRUD_LOCATIONS AS
   -- (procedure implementations elided)
END CRUD_LOCATIONS;
----------------------------------------------------------------------------------------------------
My stored procedure returns 3 result sets; can the DataReader handle that?
Working with Multiple Result Sets
Oracle does not support batch queries, so you cannot return multiple result sets from a command. With
a stored procedure, returning multiple result sets is similar to returning a single result set; you have to
use REF CURSOR output parameters. To return multiple result sets, use multiple REF CURSOR
output parameters.
The package spec that returns two result sets—all EMPLOYEES and JOBS records—follows:
CREATE OR REPLACE PACKAGE SELECT_EMPLOYEES_JOBS AS
TYPE T_CURSOR IS REF CURSOR;
PROCEDURE GetEmployeesAndJobs (
cur_Employees OUT T_CURSOR,
cur_Jobs OUT T_CURSOR
);
END SELECT_EMPLOYEES_JOBS;
The package body follows:
CREATE OR REPLACE PACKAGE BODY SELECT_EMPLOYEES_JOBS AS
PROCEDURE GetEmployeesAndJobs
(
cur_Employees OUT T_CURSOR,
cur_Jobs OUT T_CURSOR
)
IS
BEGIN
-- return all EMPLOYEES records
OPEN cur_Employees FOR
SELECT * FROM Employees;
-- return all JOBS records
OPEN cur_Jobs FOR
SELECT * FROM Jobs;
END GetEmployeesAndJobs;
END SELECT_EMPLOYEES_JOBS;
The following C# fragment calls the procedure and loads both result sets into a DataSet
(connection setup is omitted; conn is an open OracleConnection):
OracleCommand cmd = new OracleCommand();
cmd.Connection = conn;
cmd.CommandText = "SELECT_EMPLOYEES_JOBS.GetEmployeesAndJobs";
cmd.CommandType = CommandType.StoredProcedure;
// add the parameters, including the two REF CURSOR types used to retrieve
// the two result sets
cmd.Parameters.Add("cur_Employees", OracleType.Cursor).Direction =
ParameterDirection.Output;
cmd.Parameters.Add("cur_Jobs", OracleType.Cursor).Direction =
ParameterDirection.Output;
// fill a DataSet with both result sets
DataSet ds = new DataSet();
OracleDataAdapter da = new OracleDataAdapter(cmd);
da.TableMappings.Add("Table", "EMPLOYEES");
da.TableMappings.Add("Table1", "JOBS");
da.Fill(ds);
// create a relation between the two result sets on JOB_ID
ds.Relations.Add("EMPLOYEES_JOBS_RELATION",
ds.Tables["JOBS"].Columns["JOB_ID"],
ds.Tables["EMPLOYEES"].Columns["JOB_ID"]);
---------------------------------------------------------------------------------------------------
What is Replication?
Replication is a set of technologies for copying and distributing data and database objects from
one database to another and then synchronizing between databases to maintain consistency.
Using replication, you can distribute data to different locations and to remote or mobile
users over local and wide area networks, dial-up connections, wireless connections, and
the Internet.
What is Trigger?
A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE)
occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the
referential integrity of data by changing the data in a systematic fashion. A trigger cannot be
called or executed directly; the DBMS automatically fires the trigger as a result of a data
modification to the associated table.
Triggers can be viewed as similar to stored procedures in that both consist of procedural logic that
is stored at the database level. Stored procedures, however, are not event-driven and are not
attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking
a CALL to the procedure, while triggers are implicitly executed. In addition, triggers can also
execute stored procedures.
Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so
when the trigger is fired because of data modification it can also cause another data modification,
thereby firing another trigger. A trigger that contains data modification logic within itself is called
a nested trigger.
Syntax
CREATE TRIGGER <Schema_Name, sysname, Schema_Name>.<Trigger_Name, sysname, Trigger_Name>
ON <Table_Name, sysname, Table_Name>
AFTER <Data_Modification_Statements, , INSERT, DELETE, UPDATE>
AS
BEGIN
    -- trigger body
END
e.g.
-- =======================================================
-- Database: CeilInn4
-- (remainder of the original banner comment elided)
-- =======================================================
-- NOTE: the trigger name below is assumed; it was elided in the original
CREATE TRIGGER trgRoomsAfterInsert
ON Rooms
AFTER INSERT
AS
BEGIN
    -- trigger body (elided in the original)
END
Types of Triggers:
INSTEAD OF triggers are procedures that execute in place of a Data Manipulation Language (DML)
statement on a table. For example, if I have an INSTEAD OF UPDATE trigger on TableA and I
execute an UPDATE statement on that table, the code in the INSTEAD OF UPDATE trigger will
execute instead of the UPDATE statement that I executed.
An AFTER trigger executes after a DML statement has taken place in the database. These types of
triggers are very handy for auditing data changes that have occurred in your database tables.
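As a sketch of the auditing use case just described, the following AFTER UPDATE trigger copies
the old and new values of changed rows into an audit table. The table and column names
(Rooms, Rate, RoomsAudit) are assumptions for illustration only; this is not tested code.

```sql
-- Hypothetical audit table; names are illustrative only
CREATE TABLE RoomsAudit
(
    RoomID    int,
    OldRate   money,
    NewRate   money,
    ChangedAt datetime DEFAULT GETDATE()
);
GO
CREATE TRIGGER trgRoomsAudit
ON Rooms
AFTER UPDATE
AS
BEGIN
    -- inserted/deleted are the pseudo-tables SQL Server exposes inside a trigger:
    -- deleted holds the rows as they were before the UPDATE, inserted holds them after
    INSERT INTO RoomsAudit (RoomID, OldRate, NewRate)
    SELECT d.RoomID, d.Rate, i.Rate
    FROM deleted d
    JOIN inserted i ON i.RoomID = d.RoomID;
END
GO
```

Because the trigger reads from both pseudo-tables, every UPDATE against Rooms leaves a
before/after record in the audit table automatically, with no change to the application code.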
View
In data analysis, a query is a technique for isolating a series of columns and/or records of a
table. This is usually done for the purpose of data analysis, but it can also be done to create a
new list of items for a particular purpose. Most of the time, a query is created temporarily, such
as during data analysis while using a table, a form, or a web page; after such a temporary list
has been used, it is dismissed.
Many database applications, including Microsoft SQL Server, allow you to create a query and be
able to save it for later use, or even to use it as if it were its own table. This is the idea behind a
view.
Definition:
A view is a list of columns or a series of records retrieved from one or more existing tables, or
as a combination of one or more views and one or more tables. Based on this, before creating a
view, you must first decide where its columns and records would come from. Obviously the
easiest view is one whose columns and records come from one table.
Syntax
CREATE VIEW <View_Name>
AS
SELECT Statement
e.g.
-- NOTE: the view name below is assumed; it was elided in the original
CREATE VIEW dbo.PersonsByGender
AS
SELECT dbo.Genders.Gender,
       dbo.Persons.FirstName, dbo.Persons.LastName
FROM dbo.Persons
JOIN dbo.Genders
  ON dbo.Genders.GenderID = dbo.Persons.GenderID
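Once saved, a view can be queried as if it were its own table. A brief sketch (the view name
dbo.PersonsByGender is an assumption, since the name in the original example was elided):

```sql
-- query the saved view like a table
SELECT Gender, LastName
FROM dbo.PersonsByGender
WHERE Gender = 'Female'
ORDER BY LastName;
```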
++++++++++++++++++++++++++++++++++++++++++++==
Well, cursors help us perform an operation on a set of data that we retrieve with a command such
as SELECT columns FROM table, processing the rows one at a time. For example, if we have
duplicate records in a table, we can remove them by declaring a cursor that checks the records
one by one during retrieval and removes the rows that have duplicate values.
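A minimal, untested sketch of that duplicate-removal idea, assuming a table Foo with a single
column bar and no key (the names are hypothetical):

```sql
-- walk the rows in sorted order and delete each row whose value
-- matches the previous one, leaving one copy of each value
DECLARE @bar int, @prev int
SET @prev = NULL
DECLARE dup_cursor CURSOR FOR
    SELECT bar FROM Foo ORDER BY bar
    FOR UPDATE
OPEN dup_cursor
FETCH NEXT FROM dup_cursor INTO @bar
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @bar = @prev
        DELETE FROM Foo WHERE CURRENT OF dup_cursor  -- positioned delete
    ELSE
        SET @prev = @bar
    FETCH NEXT FROM dup_cursor INTO @bar
END
CLOSE dup_cursor
DEALLOCATE dup_cursor
```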
Collation is basically the sort order. There are three types of sort order: dictionary
case-sensitive, dictionary case-insensitive, and binary.
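The difference can be seen with SQL Server's COLLATE clause (a sketch; the CS/CI parts of the
collation names stand for case-sensitive/case-insensitive):

```sql
-- case-insensitive dictionary order: 'ABC' and 'abc' compare equal
SELECT CASE WHEN 'ABC' = 'abc' COLLATE SQL_Latin1_General_CP1_CI_AS
            THEN 'equal' ELSE 'different' END;

-- case-sensitive dictionary order: 'ABC' and 'abc' compare different
SELECT CASE WHEN 'ABC' = 'abc' COLLATE SQL_Latin1_General_CP1_CS_AS
            THEN 'equal' ELSE 'different' END;
```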