
Programming Microsoft
SQL Server Databases
Contents

1. An Introduction to Database Development


2. Designing and Implementing SQL Server Databases
3. Designing and Implementing Tables
4. Ensuring Data Integrity Through Constraints
5. Introduction to Indexes
6. Designing Optimized Index Strategies
7. Designing and Implementing Views
8. Designing and Implementing Stored Procedures
9. Designing and Implementing User-Defined Functions
10. Responding to Data Manipulation Via Triggers
11. Implementing Managed Code in SQL Server
12. SQL Server Concurrency
13. Performance and Monitoring
Module 1
An Introduction to Database
Development
Module Overview

• Introduction to the SQL Server Platform


• SQL Server Database Development Tasks
SQL Server Architecture

• SQL Server architecture:


• SQL Server Operating System (SQLOS)
• Database Engine
• Query processor

• Complete set of enterprise-ready technologies and tools
• Low total cost of ownership
• Highly integrated platform
SQL Server Components

• Components
• Database Engine
• SQL Server Analysis Services, Reporting Services, and
Integration Services.
• Master Data Services and Data Quality Services

• Tools
• SQL Server Management Studio and Data Tools
• Configuration Manager
• Profiler
• Tuning Advisor
• DQS Client
SQL Server Instances

• SQL Server components are instance-aware


• Allow for a degree of isolation
• Can assist in upgrade situations
• Two instance types:
• Default instance
• Named instance
SQL Server Database Development Tasks

• Common Database Development Tasks


• Development Tools for SQL Server
• Demonstration: Using SSMS and SSDT
Common Database Development Tasks
Development Tools for SQL Server
Demonstration: Using SSMS and SSDT

In this demonstration, you will see how to:


• Use SSMS to connect to an instance of SQL
Server
• Run a Transact-SQL script
• Open a SQL Server Management Studio project
• Connect to servers and databases
• Use SSDT to run a Transact-SQL script
Module 2
Designing and Implementing SQL Server Databases
How Data Is Stored in SQL Server

• Primary data file: .mdf
• Secondary data file: .ndf
• Transaction log file: .ldf
• Page: 8 KB
• Extent: eight contiguous 8 KB pages
Considerations for Disk Storage Devices

• Direct Attached Storage: disks connected by a RAID controller
• Storage Area Network: disks connected by a network and available to multiple servers
• Windows Storage Pools: commodity disk drives grouped together to create one large storage space
RAID Levels

• RAID 0: striping, data split across disks with no redundancy
• RAID 1: mirroring, identical copies on two disks
• RAID 5: striping with distributed parity
• RAID 10: mirrored sets of striped disks
Determining File Placement and Number of Files

• Isolate log and data files at the physical disk level


• Determine the number and location of data files
based on performance and maintenance
considerations
• Use additional files to spread data across storage
locations
• Use smaller data files when easier maintenance is
needed
• Use data files as units of backup and restore

• Determine log file requirements


• Use a single log file in most situations, because log files
are written sequentially
Ensuring Sufficient File Capacity

• Estimate the size of data, log files and tempdb:


• Perform load testing with the actual application
• Check with the application vendor

• Set files to a reasonable initial size:


• Leave enough space for new data, without the need to
regularly expand
• Monitor data and log file usage
• Plan for manual expansion
• Keep autogrowth enabled to allow for unexpected
growth
SQL Server System Databases

• master: stores all system-level configuration
• msdb: holds SQL Server Agent configuration data
• model: provides the template for new databases
• tempdb: holds temporary data
• resource: contains system objects that are mapped to the sys schema of databases
Creating User Databases

• Create databases:
• In SQL Server Management
Studio
• By using the CREATE
DATABASE statement

CREATE DATABASE Sales
ON
(NAME = Sales_dat, FILENAME = 'M:\Data\Sales.mdf', SIZE = 100MB,
 MAXSIZE = 500MB, FILEGROWTH = 20%)
LOG ON
(NAME = Sales_log, FILENAME = 'L:\Logs\Sales.ldf', SIZE = 20MB,
 MAXSIZE = UNLIMITED, FILEGROWTH = 10MB);
Demonstration: Creating Databases

In this demonstration, you will see how to:


• Create a database by using SQL Server
Management Studio
• Create a database by using the CREATE
DATABASE statement
Module 3
Designing and Implementing
Tables
Module Overview

• Designing Tables
• Data Types
• Working with Schemas
• Creating and Altering Tables
What Is a Table?

• Relational databases store data in tables (relations)


• Defined by a collection of columns (identified by name)
• Contain zero or more rows

• Tables typically represent a type of object or entity


• Employees, purchase orders, customers, and sales orders
are examples of entities
• Consistent naming convention for tables is important

• Tables are a security boundary


• Each row usually represents a single instance of the
object or entity
• One employee, or one purchase order, for example
• Rows of tables have no order
Normalizing Data

• Normalization is a process
• Ensures that database structures are appropriate
• Ensures that poor design characteristics are avoided

• Edgar F. Codd invented the relational model


• Introduced the concept of normalization
• Referred to the degrees of normalization as forms

• Database designs should initially be normalized


• Denormalization might be applied later to improve
performance or to make analysis of data more
straightforward
Common Normalization Forms

• First Normal Form


• Eliminate repeating groups in individual tables
• Create a separate table for each set of related data
• Identify each set of related data by using a primary key

• Second Normal Form


• Non-key columns should not be dependent on only
part of a primary key
• These columns should be in a separate table and
related by using a foreign key
• Third Normal Form
• Eliminate fields that do not depend on the key
Primary Keys

• The primary key uniquely identifies each row within a table
• Candidate key could be used to uniquely identify a row
• Must be unique and cannot be NULL (unknown)
• Can involve multiple columns
• Should not change
• Primary key is one candidate key
• Most tables will only have a single candidate key
• Debate surrounding natural vs. surrogate keys
• Natural key: formed from data related to the entity
• Surrogate key: usually codes or numbers
Foreign Keys

• Foreign keys are references between tables:


• Foreign key in one table holds the primary key from
another table
• Self-references are permitted

• Rows that do not exist in the referenced table cannot be inserted in a referencing table
• Rows cannot be deleted or updated without
cascading options
• Multiple foreign keys can exist in one table
Working with System Tables

• SQL Server provides a set of system tables


• Should not be directly modified or queried

• In SQL Server 2005, most system tables were replaced by a set of permission-based system views
• Some system tables in the msdb database are
still useful
• dbo.backupset
• dbo.restorehistory
• dbo.sysjobhistory
Designing for Concurrency

• SQL Server uses locking to manage concurrency:


• Locks can be made at the row, page, or table level
• Row locking increases concurrency, but there is an overhead of
maintaining many locks
• OLTP databases should be normalized to increase
concurrency:
• Transactions should be small and fast
• Smaller tables (fewer columns) less likely to cause locking issues
• Data warehouse tables should be denormalized—no
modifications to create locking problems
Implementing Surrogate Keys

• Use IDENTITY with CREATE or ALTER TABLE to create a unique, sequentially numbered column (see the sketch after this list)
• SET IDENTITY_INSERT ON allows explicit values to be inserted into a column with IDENTITY
• @@IDENTITY returns the last value created by an IDENTITY column in the current session, in any scope
• SCOPE_IDENTITY() returns the last value created by an IDENTITY column in the current session and scope
• SEQUENCE creates a numbered list that can be used by
all tables in the database
• The value increments each time the next value is requested
• Can be restarted
• MINVALUE and MAXVALUE set boundaries
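
A minimal sketch of an IDENTITY column (table and column names are illustrative):

CREATE TABLE dbo.Orders
(
    OrderID int IDENTITY(1,1) PRIMARY KEY,
    CustomerName nvarchar(50) NOT NULL
);

INSERT INTO dbo.Orders (CustomerName) VALUES (N'Contoso');
SELECT SCOPE_IDENTITY();  -- last identity value created in this session and scope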
Converting Data Between Data Types
Working with International Character Data

• Unicode
• Is a worldwide character-encoding standard
• Simplifies software localization
• Improves multilingual character processing
• Is implemented in SQL Server as two bytes per character for Unicode types
• Requires the N prefix on string literals
• Uses LEN() to return the number of characters
• Uses DATALENGTH() to return the number of bytes
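
For example, with an illustrative value:

DECLARE @city nvarchar(20) = N'München';  -- N prefix marks a Unicode literal
SELECT LEN(@city) AS Characters,          -- returns 7
       DATALENGTH(@city) AS Bytes;        -- returns 14 (two bytes per character)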
Working with Schemas

• What Is a Schema?
• Object Name Resolution
• Creating Schemas
• Demonstration: Working with Schemas
What Is a Schema?

• Schemas are containers for objects such as:


• Tables
• Stored procedures
• Functions
• Types
• Views

• Schemas are security boundaries


• Permissions can be granted at the schema level to
apply to all objects within a schema
• Simplifies security configuration
Object Name Resolution

• If the schema name is omitted, rules apply for name resolution
• Users can have a default schema assigned
• Users who have no default schema will have dbo as
their default schema
• Initially, the user’s default schema is searched
• If the object is not found in the default schema, the
dbo schema is also searched
• When referencing an object in a statement, users
should specify both the schema and the object
name
• SELECT ProductID FROM Production.Product;
Creating Schemas

• Schemas are created by using the CREATE SCHEMA command
• Schemas have owners
• Objects contained within schemas also have owners
• Specify objects and permissions when the
schema is created
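
A sketch of CREATE SCHEMA, using illustrative names; the second statement creates a schema together with a contained object and a permission:

CREATE SCHEMA Sales AUTHORIZATION dbo;
GO
CREATE SCHEMA Reporting AUTHORIZATION dbo
    CREATE TABLE RecentSales (OrderID int PRIMARY KEY)
    GRANT SELECT ON SCHEMA::Reporting TO public;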
Demonstration: Working with Schemas

In this demonstration, you will see how to:


• Create a schema
• Create a schema with an included object
• Drop a schema
Creating and Altering Tables

• Creating Tables
• Dropping Tables
• Altering Tables
• Demonstration: Working with Tables
• Temporary Tables
• Demonstration: Working with Temporary Tables
• Computed Columns
• Demonstration: Working with Computed Columns
Creating Tables

• Tables are created using the CREATE TABLE statement, as shown in the sketch below
• Specify column names and data types
• Specify NULL or NOT NULL
• Specify the primary key
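
A minimal example with illustrative names:

CREATE TABLE dbo.Customer
(
    CustomerID  int           NOT NULL PRIMARY KEY,
    FirstName   nvarchar(50)  NOT NULL,
    LastName    nvarchar(50)  NOT NULL,
    CreditLimit decimal(10,2) NULL
);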
Dropping Tables

• Tables are removed by using the DROP TABLE statement
• Tables that are referenced by foreign keys cannot be dropped
• All permissions, constraints, indexes, and triggers
are also dropped
• Code that references the table, such as a stored
procedure, is not dropped
Altering Tables

• Use the ALTER TABLE statement to modify tables


• ALTER TABLE retains permissions to the table
• ALTER TABLE retains the data in the table
• ALTER TABLE is used to:
• Add or drop columns and constraints
• Enable or disable constraints and triggers
Demonstration: Working with Tables

In this demonstration, you will see how to:


• Create tables and alter tables
• Drop tables
Temporary Tables

• Session temporary tables are only visible to their creators in the same session and same scope or subscope
• Created with # prefix
• Dropped when the user disconnects or when out of
scope
• Should be deleted in code rather than depending on
automatic drop
• Often created by using SELECT INTO statements

• Global temporary tables are visible to all users


• Created with ## prefix
• Deleted when all users referencing the table disconnect
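
A short sketch (table and column names are illustrative):

-- Session temporary table, often created with SELECT INTO:
SELECT ProductID, SUM(Quantity) AS Total
INTO #Totals
FROM dbo.OrderDetail
GROUP BY ProductID;

DROP TABLE #Totals;  -- delete in code rather than relying on automatic drop

-- Global temporary table, visible to all sessions:
CREATE TABLE ##SharedWork (ID int);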
Demonstration: Working with Temporary Tables

In this demonstration, you will see how to:


• Create local temporary tables
• Create global temporary tables
• Access a global temporary table from another session
Computed Columns

• Computed columns are derived from other columns or functions
• Computed columns are often used to provide
easier access to data without denormalizing it
• Persisted computed columns improve SELECT
performance of computed columns in some
situations
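
A sketch of a persisted computed column (illustrative names):

CREATE TABLE dbo.OrderLine
(
    OrderID   int   NOT NULL,
    Quantity  int   NOT NULL,
    UnitPrice money NOT NULL,
    LineTotal AS (Quantity * UnitPrice) PERSISTED  -- computed and stored on disk
);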
Demonstration: Working with Computed Columns

In this demonstration, you will see how to:


• Work with computed columns
• Use PERSISTED columns
Module 4
Ensuring Data Integrity Through
Constraints
Module Overview

• Enforcing Data Integrity


• Implementing Data Domain Integrity
• Implementing Entity and Referential Integrity
Enforcing Data Integrity

• Data Integrity Across Application Layers


• Types of Data Integrity
• Options for Enforcing Data Integrity
Data Integrity Across Application Layers

• Applications are often layered in a hierarchy

• Integrity can be enforced at each level:


• User interface tier
• Middle tier
• Data tier

• Integrity enforcement is likely to be found at all tiers:
• This leads to the complexity of keeping all the constraint management code, functional and nonfunctional, synchronized
Types of Data Integrity

• Domain Integrity
• Defines the allowed values in columns

• Entity Integrity
• Primary key uniquely identifies each row within a table

• Referential integrity
• Defines the relationship between tables
Options for Enforcing Data Integrity

• Data type: defines the type of data that can be


stored in a column
• Nullability: determines whether a value must be
present in a column
• Constraints: define rules that limit the values that can be stored in a column, or how values in different columns must be related; constraints can also supply default values
• Triggers: define code that is executed
automatically when data in a table is modified
Implementing Data Domain Integrity

• Data Types
• DEFAULT Constraints
• CHECK Constraints
• Demonstration: Data and Domain Integrity
Data Types

• Choosing data types is an important decision when designing tables
• They can be assigned by using:
• System data types
• Alias data types
• User-defined data types
DEFAULT Constraints

• Default constraints
• Provide default values for columns
• Used if INSERT provides no column value
• Must produce data compatible with the data type for
the column
CHECK Constraints

• Check constraints
• Limit the values that are accepted in a column
• Only rejects FALSE outcomes
• NULL evaluates to UNKNOWN and not FALSE
• Can be defined at table level to refer to multiple
columns
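
A sketch combining a DEFAULT and a CHECK constraint (illustrative names):

CREATE TABLE dbo.Employee
(
    EmployeeID int  NOT NULL PRIMARY KEY,
    HireDate   date NOT NULL
        CONSTRAINT DF_Employee_HireDate DEFAULT (GETDATE()),
    Salary     decimal(10,2) NOT NULL
        CONSTRAINT CK_Employee_Salary CHECK (Salary > 0)
);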
Demonstration: Data and Domain Integrity

In this demonstration, you will see how to:


• Enforce data and domain integrity
Implementing Entity and Referential Integrity

• PRIMARY KEY Constraints


• UNIQUE Constraints
• IDENTITY Constraints
• Working with Sequences
• Demonstration: Sequences Demonstration
• FOREIGN KEY Constraints
• Cascading Referential Integrity
• Considerations for Constraint Checking
• Demonstration: Entity and Referential Integrity
PRIMARY KEY Constraints

• Primary keys
• Are used to uniquely identify a row in a table
• Must be unique and not NULL
• May involve multiple columns that form a composite
key
UNIQUE Constraints

• Unique constraints
• Ensure that values in a column are unique
• One row may have a NULL value
• You can have multiple unique columns
IDENTITY Constraints

• IDENTITY property
• Automatically generates column values
• You can specify a seed (starting number) and an
increment
• Default seed and increment are both 1
• SCOPE_IDENTITY() and @@IDENTITY return the current identity value
Working with Sequences

• Sequence objects:
• Are user-defined, schema-bound objects
• Are not tied to any particular table
• Can be used to ease migration from other database
engines
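
A minimal sequence sketch (illustrative names and boundaries):

CREATE SEQUENCE dbo.OrderNumbers AS int
    START WITH 1000
    INCREMENT BY 1
    MINVALUE 1000
    MAXVALUE 999999;

SELECT NEXT VALUE FOR dbo.OrderNumbers;  -- increments each time the next value is requested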
Demonstration: Sequences Demonstration

• In this demonstration, you will see how to:

• Work with identity constraints


• Create a sequence, and use it to provide key values for
multiple tables
FOREIGN KEY Constraints

• Foreign key constraints:


• Are used to enforce relationships between tables
• Check the existence of parent when inserting child
• Check the existence of children when deleting parent
• Can be self-referencing
Cascading Referential Integrity

• Cascading referential integrity


• Is controlled by the CASCADE option of the FOREIGN
KEY constraint
• NO ACTION (default): return error and rollback operation
• CASCADE: update foreign keys in referencing tables. Delete
rows in referencing tables
• SET DEFAULT: set foreign keys in referencing tables to default
values
• SET NULL: set foreign keys in referencing tables to NULL
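
A sketch of a FOREIGN KEY with a cascading delete (illustrative names):

CREATE TABLE dbo.OrderHeader
(
    OrderID int NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.OrderDetailLine
(
    OrderDetailID int NOT NULL PRIMARY KEY,
    OrderID int NOT NULL
        CONSTRAINT FK_OrderDetailLine_OrderHeader
        FOREIGN KEY REFERENCES dbo.OrderHeader (OrderID)
        ON DELETE CASCADE  -- deleting an order also deletes its detail rows
);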
Considerations for Constraint Checking

• Give constraints meaningful names


• Constraints can be created, changed, and
dropped without having to drop and recreate the
table
• Perform error checking in your applications and
transactions
• Referential constraints can also be suspended
• To improve performance during large batch jobs
• To avoid checking existing data when you add new
constraints to a table containing valid data
• To suspend the checks, you must have the constraint name that you or the system supplied
Demonstration: Entity and Referential Integrity

In this demonstration, you will see how to:


• Define entity integrity for tables
• Define referential integrity for tables
• Define cascading actions to relax the default
referential integrity constraint
Module 5
Introduction to Indexes
Module Overview

• Core Indexing Concepts


• Data Types and Indexes
• Heaps, Clustered, and Nonclustered Indexes
• Single Column and Composite Indexes
Core Indexing Concepts

• How SQL Server Accesses Data


• The Need for Indexes
• Index Structures
• Selectivity, Density and Index Depth
• Index Fragmentation
• Demonstration: Viewing Index Fragmentation
How SQL Server Accesses Data

• Table Scan
• SQL Server reads all table pages
• Any query can be satisfied by a table scan
• Will result in the slowest response to a query
• A table without a clustered index is called a heap

• Index
• SQL Server uses index pages to find the desired rows
• Different types
• Clustered and nonclustered
• Rowstore and columnstore
The Need for Indexes
Index Structures
Selectivity, Density and Index Depth

• Selectivity
• A measure of how many rows are returned compared to the total
number of rows
• High selectivity means a small number of rows when related to the
total number of rows

• Density
• A measure of the lack of uniqueness of data in the table
• High density indicates a large number of duplicates

• Index Depth
• Number of levels within the index
• Common misconception that indexes are deep
Index Fragmentation

• How does fragmentation occur?


• Data modifications cause index pages to split, which leaves the index fragmented

• Types of fragmentation:
• Internal – pages are not full
• External – pages are out of logical sequence

• Detecting fragmentation
• SQL Server Management Studio – Index Properties
• System function – sys.dm_db_index_physical_stats
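
A sketch of detecting fragmentation with the system function named above; the 30 percent threshold is an illustrative cutoff:

SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;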
Demonstration: Viewing Index Fragmentation

In this demonstration, you will see how to:


• Identify fragmented indexes
• View the fragmentation of an index in SSMS
Data Types and Indexes

• Numeric Index Data


• Character Index Data
• Date-Related Index Data
• GUID Index Data
• BIT Index Data
• Indexing Computed Columns
Numeric Index Data

• Using numeric values in indexes


• Benefits
• Small in size
• More values can fit in a single page
• Faster to read

• Negatives
• Small data types will be more dense
Character Index Data

• Character data types in indexes


• Benefits
• Character data is often searched
• Better performance than a heap

• Negatives
• Slower to search than a numeric index
• Can become fragmented because data does not tend
to be sequential
Date-Related Index Data

• Using date data types in indexes


• Benefits
• Smaller in size
• More values can fit in a single page
• Faster to read

• Negatives
• Small data types will be more dense
GUID Index Data

• Using the GUID data type in indexes


• Benefits
• Highly selective
• Fast to read

• Negatives
• Updates and deletes do not perform as well
BIT Index Data

• Using BIT data type in indexes


• Benefits
• Extremely small in size
• More values can fit in a single page
• Fast to read; in some circumstances could be highly
selective
• Negatives
• The index will be very dense
Indexing Computed Columns

• Indexing computed columns


• Benefits
• Calculated values are stored in the index
• Values updated automatically

• Negatives
• Frequent changes can impair performance
• Computed columns must be deterministic
Heaps, Clustered, and Nonclustered Indexes

• Heaps
• Operations on a Heap
• Clustered Indexes
• Operations on a Clustered Index
• Primary Keys and Clustering Keys
• Nonclustered Indexes
• Operations on Nonclustered Indexes
• Demonstration: Working with Clustered and
Nonclustered Indexes
Heaps

A heap is a table with:


• No specified order for pages within the table

• No specified order for rows within each page

Data will be inserted in the first space in a page that is found
Operations on a Heap

• INSERT
• Each new row can be placed in the first available page with
sufficient space
• UPDATE
• The row can remain on the same page if it still fits; otherwise, it
can be removed from the current page and placed on the first
available page with sufficient space
• DELETE
• Frees up space on the current page
• Data is not overwritten, space is just flagged as available for reuse
• SELECT
• Entire table needs to be scanned
Clustered Indexes

A clustered index:
• Has pages that are logically ordered

• Has rows that are logically ordered, and where


possible, physically ordered within pages
• Can only be declared once on a table

The logical order is specified by a clustering key


Operations on a Clustered Index

• INSERT
• Each new row must be placed into the correct logical position
• May involve splitting pages of the table
• UPDATE
• The row can remain in the same place if it still fits and if the clustering key
value is still the same
• If the row no longer fits on the page, the page needs to be split
• If the clustering key has changed, the row needs to be removed and
placed in the correct logical position within the table
• DELETE
• Frees up space by flagging the data as unused
• SELECT
• Queries related to the clustering key can seek
• Queries related to the clustering key can scan and avoid sorts
Primary Keys and Clustering Keys

• Primary key
• Must be unique
• Cannot contain NULL values
• Only one per table
• Implemented as a constraint

• Clustering key
• Must be unique
• Specifies the logical ordering of rows
• Only one per table
• Can be automatically created
Nonclustered Indexes

A nonclustered index:
• Can be on a heap or a clustered index
• Will take up extra space
• Needs to be updated when the underlying data is modified

A table can have a maximum of 999 nonclustered indexes.
Operations on Nonclustered Indexes

• INSERT
• Each nonclustered index that is added to a table will decrease the
performance of inserts
• UPDATE
• The index will need to be kept up to date if the location of the
data changes
• DELETE
• Similar to updates, deleted data needs to be removed from the
index
• SELECT
• Performance improvements for queries that the index covers
Demonstration: Working with Clustered and
Nonclustered Indexes

In this demonstration, you will see how to:


• Create a clustered index
• Create a covering index
Single Column and Composite Indexes

• Single Column vs. Composite Indexes


• Ascending vs. Descending Indexes
• Index Statistics
Single Column vs. Composite Indexes

• Indexes are not always constructed on a single column
• Composite indexes tend to be more useful than
single column indexes:
• Having an index sorted first by customer, then by order
date, makes it easy to find orders for a particular
customer on a particular date
• Two columns together might be selective while neither
is selective on its own
• Index on A,B is not the same as an index on B,A
Ascending vs. Descending Indexes

• Indexes can be constructed in ascending or descending order
• In general, for single column indexes, both are
equally useful
• Each layer of a SQL Server index is double-linked (that
is, linked in both directions)
• SQL Server can start at either end and work towards
the other end
• Each component of a composite index can be
ascending or descending
• Might be useful for avoiding sort operations
Index Statistics

• SQL Server needs to have knowledge of the layout of the data in a table or index before it optimizes and executes queries
• Needs to create a reasonable plan for executing the query
• Important to know the usefulness of each index
• Selectivity is the most important metric
• By default, SQL Server automatically creates statistics on
indexes
• Can be disabled
• Recommendation is to leave auto-creation and auto-update
enabled
Module 6
Designing Optimized Index
Strategies
Module Overview

• Index Strategies
• Managing Indexes
• Execution Plans
• The Database Engine Tuning Advisor
Index Strategies

• Covering Indexes
• Using the INCLUDE Clause
• Heap vs. Clustered Index
• Filtered Index
Covering Indexes

• A covering index includes all the columns returned by a query
• No need for the query optimizer to look up the remaining columns in the clustered index or table
• Use covering indexes for:
• Frequently used queries
• Poorly performing queries

• When the query is no longer needed, drop the covering index
Using the INCLUDE Clause

• SQL Server indexes have a number of limitations


• Columns added with the INCLUDE clause are
nonkey columns
• The INCLUDE clause adds columns at the leaf
level of an index
• Use INCLUDE when:
• You would exceed the number of columns or max size
for an index
• You want to create a covering index
• You want to add columns with larger data types that
are only used in the SELECT statement
• Nonkey column data is stored twice
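
A sketch of a covering index built with INCLUDE (illustrative names):

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);  -- nonkey columns stored at the leaf level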
Heap vs. Clustered Index

• A heap is a table without a clustered index


• Rarely used
• Perhaps for very small tables
• Perhaps for tables that require data to be written to disk very quickly
• A clustered index defines the physical sequence
of table data
• Only one clustered index per table
• Often the primary key
• Tables normally have a clustered index
Filtered Index

• A filtered index is an index on a subset of a table


• Suitable for queries that access a defined subset
of data, such as:
• Sales for the northern region
• Only finished goods

• Filtered indexes can only be created on nonclustered indexes
• Not clustered indexes
• Not views
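
A sketch of a filtered index (illustrative names):

CREATE NONCLUSTERED INDEX IX_Product_FinishedGoods
ON dbo.Product (ProductID)
WHERE FinishedGoodsFlag = 1;  -- indexes only the finished goods subset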
Managing Indexes

• What Is Fill Factor?


• What Is Pad Index?
• Implementing Fill Factor and Padding
What Is Fill Factor?

• What is fill factor?


• A setting that defines spare space on the leaf level of index pages
• Set either for the server, or for individual indexes
• A fill factor of 75 fills each leaf page up to 75 percent full
• By leaving space, you reduce the need for page splits as
new data is added
• Provided data is added evenly between pages
• You can set fill factor
• For the SQL Server instance
• For each index
What Is Pad Index?

• WITH PAD_INDEX specifies that space should be left at the intermediate node-level pages of an index
• PAD_INDEX is used together with FILLFACTOR
• The fill factor percentage value is used by pad index
• PAD_INDEX is off by default
Implementing Fill Factor and Padding

• Set the default fill factor at the instance level


• Using SSMS Object Explorer, right-click the instance name, select Properties, and open the Database Settings page
• Type or select the Fill Factor value
• Or use Transact-SQL and sp_configure
• Using Transact-SQL, use CREATE INDEX or ALTER INDEX
to set fill factor and pad index
• Index properties can be set using SSMS and Object
Explorer
• Expand the tree to the relevant index
• Right-click and select Properties
• Options > Storage > set fill factor and pad index
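
A Transact-SQL sketch, reusing the illustrative index from earlier:

ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 75, PAD_INDEX = ON);  -- leaf pages 75 percent full; intermediate pages padded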
Managing Statistics

• Statistics are used by the query optimizer to estimate the cost of each execution plan
• Out-of-date or missing statistics lead to
suboptimal query performance
• Statistics are automatically created and updated
• Use sp_createstats to create single-column statistics across the database
• Regular maintenance on indexes does not affect
statistics
• Updating statistics causes cached query plans to
be recompiled
Execution Plans

• What Is an Execution Plan?


• Actual vs. Estimated Execution Plans
• Common Execution Plan Elements
• Methods for Capturing Plans
• Execution Plan Related DMVs
• Live Query Statistics
What Is an Execution Plan?

• Transact-SQL specifies what data is required; not how to retrieve it
• The query optimizer finds a good plan
• Not always the very best plan
• Good enough within a reasonable time
• Not necessary for some DDL statements
• Simple queries have a trivial plan
• It is a cost-based optimizer, and will compare a number
of plans for each query
• The query optimizer uses statistics
• DDL statements do not normally have alternate plans
Actual vs. Estimated Execution Plans

• Display an execution plan by:


• Clicking the icon on the toolbar or right-clicking the
query window
• Estimated execution plan (Ctrl-L)
• Creates a plan but does not execute the query
• Estimated number of rows based on statistics

• Actual execution plan (Ctrl-M)


• Mostly the same as the estimated plan
• Includes actual number of rows returned
• If different, statistics are out of date or missing

• Query plans may be cached and reused


Common Execution Plan Elements

• Data retrieval operators


• Scan: reads records sequentially
• Seek: finds the appropriate record in an index
• Join operators
• Nested Loop: the second input is searched once for each value in
the first input; the second input is inexpensive to search
• Merge Join: two sorted inputs are interleaved
• Hash Match: a hash table is built from the first input; this is
compared against hash values from the second input—typically,
large, unsorted inputs
• Parallel query plans have at least one instance of the Gather Streams operator
Methods for Capturing Plans

• Graphical plan
• Right-click and Save Execution Plan As
• Saved in XML format with a .sqlplan extension
• .sqlplan is associated with SSMS
Live Query Statistics

• Live Query Statistics gives you a real-time view of how a query is being executed
• New in SQL Server 2016, and also works with SQL
Server 2014 database engine
• Download the latest version of SSMS
• You cannot use Live Query Statistics with natively
compiled stored procedures
The Database Engine Tuning Advisor

• Introduction to the Database Engine Tuning Advisor
• Using the Database Engine Tuning Advisor
Introduction to the Database Engine Tuning Advisor

• The Database Engine Tuning Advisor helps you improve query performance
• It is a stand-alone tool
• It makes recommendations, such as:
• New indexes or indexed views
• Statistics that need to be updated
• Aligned or nonaligned partitions
• Make better use of existing indexes

• The Database Engine Tuning Advisor uses a saved workload
Using the Database Engine Tuning Advisor

• First-time use must be by a member of the sysadmin role
• Tables are created in the msdb database
• Accepts different workload formats:
• Plan cache
• SQL Profiler trace
• Transact-SQL script
• XML file
Improving Query Performance

• Use the Top Resource Consuming Queries view to find long-running queries
• View the execution plan for the query to find the
cause of low performance
• Force a query to use a plan, or:
• Rewrite the query until the duration decreases to an
acceptable time
• Query the data captured by the Query Store by using
the sys.query_store_plan view
Module 7
Designing and Implementing Views
Module Overview

• Introduction to Views
• Creating and Managing Views
• Performance Considerations for Views
What Is a View?

• A view is a stored query expression:


• It behaves like a table, but the data is stored in the
underlying tables
• It has a name
• It can be referenced by other queries

• Filter records by restricting the columns or rows that are returned
• Use views to:
• Simplify the complex underlying relationship between
tables
• Prevent unauthorized access to data
Types of Views

• User-defined views:
• Views (sometimes called standard views)
• Indexed views
• Partitioned views

• System views:
• System catalog views
• Dynamic management views (DMVs)
• Compatibility views
• Information schema views
Advantages of Views

• Views have many advantages:


• Create a simplified view of the underlying table
relationships
• Provide data security—users see only what they need
to see
• Can create an interface between underlying data
structures and external applications
• If changes are made to tables, you need to make sure the
views still work
• Simplify reporting by providing data in the correct
format
System Views

• Catalog views:
• Views onto internal system metadata
• Organized into categories, such as object views,
schema views, or linked server views
• Compatibility views:
• Provide backward compatibility for SQL Server 2000
system tables
• Do not use for new development work

• Information schema views:


• Comply with the ISO standard
Creating and Managing Views

• Create a View
• Drop a View
• Alter a View
• Ownership Chains and Views
• Sources of Information About Views
• Updateable Views
• Hide View Definitions
• Demonstration: Creating, Altering, and Dropping a
View
Create a View

• Use the CREATE VIEW statement to create a new view
• View attributes
• WITH ENCRYPTION
• WITH SCHEMABINDING
• WITH VIEW_METADATA

• WITH CHECK OPTION
• Ensures new records conform to the view definition (see the sketch below)
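
A sketch of a view with attributes (schema, table, and column names are illustrative):

CREATE VIEW Sales.vOpenOrders
WITH SCHEMABINDING
AS
SELECT OrderID, CustomerID, OrderDate
FROM Sales.Orders
WHERE Status = N'Open'
WITH CHECK OPTION;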
Drop a View

• To remove a view from the database, use the DROP VIEW statement
• Also drops associated permissions
• Multiple views can be dropped in a single
statement
• Comma-delimited list of views to be deleted
Alter a View

• To alter an existing view definition, use the ALTER VIEW statement
• Does not alter associated permissions
Ownership Chains and Views

• Views and tables that have the same owner:


• Another user can be given access to the view—even if
they don’t have permissions on the table
• This enables views to filter information from the
underlying table
• If a view and underlying tables have different
owners:
• Another user will not have access to the view
• Applies even if the owner of the view has access to the
tables
Sources of Information about Views

• Use SSMS to list all views in a database


• List columns, triggers, indexes, and statistics
• Script View As to create scripts for existing views

• Use Transact-SQL:
• sys.views – lists views in database
• OBJECT_DEFINITION() – returns the definition of non-
encrypted views
• sys.sql_expression_dependencies – lists objects,
including other views, that depend on an object
Updateable Views

• Data can be modified through a view, providing that:
• The view includes columns from only one table
• The columns directly reference the table columns
• No aggregations or computations
• The updates comply with the base table constraints
• NULL or NOT NULL
• Primary and foreign keys can be enforced
• WITH CHECK option prevents data being inserted that
does not comply with the view definition
Hide View Definitions

• Use the WITH ENCRYPTION option to obfuscate the view definition
• Limited protection
• Do not rely on WITH ENCRYPTION to protect the
view definition
Performance Considerations for Views

• Views and Dynamic Resolution


• Indexed Views
• Nested View Considerations
• Partitioned Views
Indexed Views

• An indexed view is materialized, and its data is stored on disk
• Nonindexed views store only the view definition
• An indexed view is not a table
• Two ways indexed views are used:
• Directly in a query; in some situations, the indexed view
will be faster
• Indirectly by the query optimizer; when it is beneficial
to use the indexed view instead of the underlying
tables
Nested View Considerations

• A nested view is one that calls another view; that view may call another view, and so on
• Disadvantages include:
• Broken ownership chains
• Poorly performing queries that are difficult to debug
• Problems maintaining tangled code

• Advantages include:
• Once a view has been written, tested, and documented,
it can be used just like a table
Partitioned Views

• A partitioned view is a view onto a partitioned table
• A partitioned table is a large table that has been
divided horizontally
• All tables have the same columns and data types
• Use a WITH CHECK constraint
• Update the underlying tables through the view
• A partitioned view may be local or distributed
• Performance benefits
• Faster querying
• Faster indexing
Module 8
Designing and Implementing
Stored Procedures
Introduction to Stored Procedures

• What Is a Stored Procedure?


• Benefits of Stored Procedures
• Working with System Stored Procedures
• Statements Not Permitted in Stored Procedures
What Is a Stored Procedure?

• When applications interact with SQL Server, there are two basic ways to execute Transact-SQL code
• Every statement can be issued directly by the application
• Groups of statements can be stored on the server as stored
procedures and given a name—the application then calls the
procedures by name
• Stored procedures
• Are similar to procedures or methods in other languages
• Can have input parameters
• Can have output parameters
• Can return sets of rows
• Are executed by the EXECUTE Transact-SQL statement
• Can be created in managed code or Transact-SQL
Benefits of Stored Procedures

• Can enhance the security of an application


• Users can be given permission to execute a stored procedure
without permission to the objects that it accesses
• Enables modular programming
• Create once, but call many times and from many applications
• Enables the delayed binding of objects
• Can create a stored procedure that references a database object
that does not exist yet
• Can avoid the need for ordering in object creation
• Can improve performance
• A single statement requested across the network can execute hundreds of lines of Transact-SQL code
• Better opportunities for execution plan reuse
Working with System Stored Procedures

• A large number of system stored procedures are supplied with SQL Server
• Two basic types of system stored procedure
• System stored procedures: typically used for administrative
purposes either to configure servers, databases, or objects, or to
view information about them
• System extended stored procedures: extend the functionality of
SQL Server
• Key difference is how they are coded
• System stored procedures are Transact-SQL code in the master
database
• System extended stored procedures are references to DLLs
Statements Not Permitted in Stored Procedures

• Some Transact-SQL statements are not allowed:


• CREATE AGGREGATE
• CREATE DEFAULT
• CREATE or ALTER FUNCTION
• CREATE or ALTER PROCEDURE
• SET PARSEONLY
• SET SHOWPLAN_TEXT
• USE databasename
• CREATE RULE
• CREATE SCHEMA
• CREATE or ALTER TRIGGER
• CREATE or ALTER VIEW
• SET SHOWPLAN_ALL or SET SHOWPLAN_XML
Working with Stored Procedures

• Creating a Stored Procedure


• Executing a Stored Procedure
• Altering a Stored Procedure
• Dropping a Stored Procedure
• Stored Procedures Error Handling
• Transaction Handling
• Stored Procedure Dependencies
• Guidelines for Creating Stored Procedures
• Obfuscating Stored Procedures
• Demonstration: Stored Procedures
Creating a Stored Procedure

• CREATE PROCEDURE is used to create new stored procedures
• The procedure must not already exist, otherwise ALTER
must be used or the procedure dropped first
• CREATE PROCEDURE must be the only statement in a
batch
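
A minimal sketch (procedure and table names are illustrative):

CREATE PROCEDURE Sales.uspGetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate
    FROM Sales.Orders
    WHERE CustomerID = @CustomerID;
END;
GO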
Executing a Stored Procedure

• EXECUTE statement
• Used to execute stored procedures and other objects
such as dynamic SQL statements stored in a string
• Use two- or three-part naming when executing
stored procedures to avoid SQL Server having to
carry out unnecessary searches
Altering a Stored Procedure

• ALTER PROCEDURE statement


• Used to replace a stored procedure
• Retains the existing permissions on the procedure
Dropping a Stored Procedure

• DROP PROCEDURE removes one or more stored procedures from the current database
• sys.procedures system view gives details on
stored procedures in the current database
• sp_dropextendedproc to drop system extended
stored procedures
Stored Procedures Error Handling

• Include error handling in your stored procedures


• Use the TRY … CATCH construct to handle errors
• BEGIN TRY <code> END TRY
• BEGIN CATCH <error handling code> END CATCH
• Error functions used within a CATCH block
• ERROR_NUMBER()
• ERROR_SEVERITY()
• ERROR_STATE()
• ERROR_PROCEDURE()
• ERROR_LINE()
• ERROR_MESSAGE()
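
A sketch of the construct (table and column names are illustrative):

BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;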
Stored Procedure Dependencies

New system views replace the use of sp_depends


• sys.sql_expression_dependencies
• Contains one row per dependency by name on user-
defined entities in the current database
• sys.dm_sql_referenced_entities
• Contains one row for each entity referenced by another
entity
• sys.dm_sql_referencing_entities
• Contains one row for each entity referencing another
entity
Guidelines for Creating Stored Procedures

• Qualify names inside stored procedures


• Keep consistent SET options
• SET NOCOUNT ON
• Apply consistent naming conventions (and no
sp_ prefix)
• Use @@nestlevel to see current nesting level (32
is the maximum number of levels)
• Use return codes to identify the reasons for various execution outcomes
• Keep to one procedure for each task
Obfuscating Stored Procedures

• WITH ENCRYPTION clause


• Encrypts stored procedure definition stored in SQL
Server
• Protects stored procedure creation logic to a limited
extent
• Is generally not recommended
Implementing Parameterized Stored Procedures

• Working with Parameterized Stored Procedures


• Using Input Parameters
• Using Output Parameters
• Parameter Sniffing and Performance
Working with Parameterized Stored Procedures

• Parameterized stored procedures contain three major components
• Input parameters
• Output parameters
• Return values
Using Input Parameters

• Parameters have the @ prefix, a data type, and optionally a default value
• Parameters can be passed in order, or by name
• Parameters should be validated early in
procedure code
Using Output Parameters

• OUTPUT must be specified


• When declaring the parameter
• When executing the stored procedure
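
A sketch showing OUTPUT in both places (names are illustrative):

CREATE PROCEDURE dbo.uspGetOrderCount
    @CustomerID int,
    @OrderCount int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT @OrderCount = COUNT(*) FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
GO

DECLARE @Count int;
EXECUTE dbo.uspGetOrderCount @CustomerID = 1, @OrderCount = @Count OUTPUT;
SELECT @Count AS OrderCount;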
Module 9
Designing and Implementing
User-Defined Functions
Overview of Functions

• Types of Functions
• System Functions
Types of Functions

• Types of functions:
• Scalar functions
• Table-valued functions
• Inline and multistatement functions
• System functions
• Functions cannot modify data
System Functions

• SQL Server includes a large number of built-in functions


• Rowset functions: for example, OPENQUERY, OPENROWSET
• Aggregate functions: for example, AVG, MAX, SUM
• Ranking functions: for example, RANK, ROW_NUMBER
• Scalar functions include:
• Configuration functions
• Conversion functions
• Cursor functions
• Date and time functions
• Mathematical functions
• Security functions
Designing and Implementing Scalar Functions

• What Is a Scalar Function?


• Creating Scalar Functions
• Deterministic and Nondeterministic Functions
• Demonstration: Working with Scalar Functions
What Is a Scalar Function?

Scalar functions:
• Return a single data value
• Can return any data type except rowversion,
cursor, and table when implemented in
Transact-SQL
• Can return any data type except for rowversion,
cursor, table, text, ntext, and image when
implemented in managed code
Creating Scalar Functions

• Scalar UDFs:
• Return a single data type from a database
• Usually include parameters
• Use two-part naming
• Stop on error
• CREATE FUNCTION must be the only statement in a
batch
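
A minimal scalar UDF sketch (names are illustrative):

CREATE FUNCTION dbo.fnLineTotal (@Quantity int, @UnitPrice money)
RETURNS money
AS
BEGIN
    RETURN @Quantity * @UnitPrice;
END;
GO

SELECT dbo.fnLineTotal(3, 9.99);  -- two-part naming is required when calling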
Deterministic and Nondeterministic Functions

• Deterministic functions
• Always return the same result given the same input
(and the same database state)
• Nondeterministic
• May return different results given a specific input

• Built-in functions
• Can be deterministic or nondeterministic
Demonstration: Working with Scalar Functions

In this demonstration, you will see how to:


• Work with scalar functions
Designing and Implementing Table-Valued
Functions

• What Are Table-Valued Functions?


• Inline Table-Valued Functions
• Multistatement Table-Valued Functions
• Demonstration: Implementing Table-Valued
Functions
What Are Table-Valued Functions?

• Table-valued functions
• TVFs return a TABLE data type
• Inline TVFs have a function body with only a single
SELECT statement
• Multistatement TVFs construct, populate, and return a
table within the function
• TVFs are queried like a table
• TVFs are often used like parameterized views
Inline Table-Valued Functions

• Inline table-valued functions


• Return a single result set
• There is no function body with BEGIN and END
• The returned table definition is taken from the SELECT statement
• Can be seen as a parameterized view
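
A sketch of an inline TVF (names are illustrative):

CREATE FUNCTION dbo.fnOrdersForCustomer (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO

SELECT * FROM dbo.fnOrdersForCustomer(1);  -- queried like a table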
Multistatement Table-Valued Functions

• Function body is enclosed by BEGIN and END


• Definition of returned table must be supplied
• Table variable is populated within function body
and then returned
Demonstration: Implementing Table-Valued Functions

In this demonstration, you will see how to:


• Implement TVFs
Considerations for Implementing Functions

• Performance Impacts of Scalar Functions


• Performance Impacts of Table-Valued Functions
Performance Impacts of Scalar Functions

• The code for scalar functions is not incorporated into the query
• Different from views, where the code is incorporated into the query
• Scalar functions used in SELECT lists or WHERE
clause predicates can impact performance
Performance Impacts of Table-Valued Functions

• The code for inline TVFs is incorporated into the surrounding query
• The code for multistatement TVFs is not incorporated into the surrounding query
• Performance can be poor except where executed
only once in a query
• Very common cause of performance problems
• CROSS APPLY can cause TVFs to be repeatedly
executed
Guidelines for Creating Functions

• Determine function type


• Create one function for one task
• Qualify object names inside function
• Consider the performance impacts of how you
intend to use the function
• Particular issues exist with the inability to index
function results
• Functions cannot contain structured exception
handling
Alternatives to Functions

• Comparing Table-Valued Functions and Stored Procedures
• Comparing Table-Valued Functions and Views
Comparing Table-Valued Functions and Stored
Procedures

• Both can often achieve similar outcomes


• Some source applications can only call one or the other
• Functions
• Can have their output consumed more easily in code
• Can return table output in a variable
• Cannot have data-related side effects
• Often cause significant performance issues when they are
multistatement functions
• Stored procedures
• Can alter the data
• Can execute dynamic SQL statements
• Can include detailed exception handling
• Can return multiple result sets
Comparing Table-Valued Functions and Views

• Both can often achieve similar outcomes


• Views
• Can be consumed by almost all applications
• Are very similar to tables
• Can be updatable
• Can have INSTEAD OF triggers associated with them
• TVFs
• Are like parameterized views
• Can often lead to significant performance problems
• Can be updatable when inline
• Avoid multistatement TVFs if there is any option
to apply the same logic inline
Module 10
Responding to Data Manipulation
Via Triggers
Module Overview

• Designing DML Triggers


• Implementing DML Triggers
• Advanced Trigger Concepts
Designing DML Triggers

• What Are DML Triggers?


• AFTER Triggers vs. INSTEAD OF Triggers
• Inserted and Deleted Virtual Tables
• SET NOCOUNT ON
• Considerations for Triggers
What Are DML Triggers?

• Triggers are special stored procedures which:


• Fire for INSERT, UPDATE, or DELETE DML operations
• Fire on DDL statements such as CREATE, ALTER, or
DROP
• Provide complex logic and meaningful error messages

• Multiple triggers can be fired


AFTER Triggers vs. INSTEAD OF Triggers

Two types of triggers can be implemented in managed code or Transact-SQL:
• AFTER triggers
• Fire after the event to which they relate
• Are treated as part of the same transaction as the statement that
triggered them
• Can roll back the statement that triggered them (and any
transaction of which that statement was part)
• INSTEAD OF triggers
• Make it possible to execute alternate code, unlike a BEFORE trigger
in other database engines
• Are often used to create updatable views with more than one base
table
Inserted and Deleted Virtual Tables

• Inserted and deleted virtual tables


• Provide access to the state of the data before and after the modification began
• Are often joined to the modified table data
• Are available in both AFTER and INSTEAD OF triggers
• Deleted table
• DELETE statements – rows just deleted
• UPDATE statements – original row contents
• Inserted table
• INSERT statements – rows just inserted
• UPDATE statements – modified row contents
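
A sketch of a trigger joining both virtual tables; the audit table and column names are illustrative:

CREATE TRIGGER trg_Orders_StatusAudit
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.OrderAudit (OrderID, OldStatus, NewStatus)
    SELECT d.OrderID, d.Status, i.Status
    FROM deleted AS d
    JOIN inserted AS i ON i.OrderID = d.OrderID;  -- original vs. modified row contents
END;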
SET NOCOUNT ON

• Triggers should not return rows of data


• Client applications often check the number of rows that
are affected by data modification statements
• Statements inside triggers can affect the row counts that client applications see
• SET NOCOUNT ON avoids affecting the outer statements
• Returning rowsets has been deprecated
• Use the configuration setting “disallow results from
triggers” to prevent triggers from returning resultsets
Considerations for Triggers

• Constraints:
• Are preferred to triggers
• Avoid data modification overhead on violation

• Triggers:
• Are complex to debug
• Use a row version store in the tempdb database
• Excessive usage can impact tempdb performance
• Can increase the duration of transactions

• Managing Trigger Security


Implementing DML Triggers

• AFTER INSERT Triggers


• Demonstration: Working with AFTER INSERT
Triggers
• AFTER DELETE Triggers
• Demonstration: Working with AFTER DELETE
Triggers
• AFTER UPDATE Triggers
• Demonstration: Working with AFTER UPDATE
Triggers
AFTER INSERT Triggers

• INSERT statement is executed


• AFTER INSERT trigger then fires
• Ensures that multirow inserts are supported
Demonstration: Working with AFTER INSERT Triggers

In this demonstration, you will see how to:


• Create an AFTER INSERT trigger
AFTER DELETE Triggers

• DELETE statement is executed


• AFTER DELETE trigger then fires
• Multirow deletes
• Truncate table
Demonstration: Working with AFTER DELETE Triggers

In this demonstration, you will see how to:


• Create and test AFTER DELETE triggers
AFTER UPDATE Triggers

• UPDATE statement is executed


• AFTER UPDATE trigger then fires
• Trigger processes updated rows at the same time
Demonstration: Working with AFTER UPDATE Triggers

In this demonstration, you will see how to:


• Create and test AFTER UPDATE triggers
Advanced Trigger Concepts

• INSTEAD OF Triggers
• Demonstration: Working with INSTEAD OF
Triggers
• How Nested Triggers Work
• Considerations for Recursive Triggers
• UPDATE Function
• Firing Order for Triggers
• Alternatives to Triggers
INSTEAD OF Triggers

• An INSERT, UPDATE, or DELETE statement is requested to be executed
• The statement does not execute
• INSTEAD OF trigger code executes instead
• Updatable views are a common use
Demonstration: Working with INSTEAD OF Triggers

In this demonstration, you will see how to:


• Create and test an INSTEAD OF DELETE trigger
How Nested Triggers Work

• Turned on at the server level


• Complex to debug
Considerations for Recursive Triggers

• Recursive triggers are disabled by default


• To turn them on:
• ALTER DATABASE db SET RECURSIVE_TRIGGERS ON
• Direct Recursion
• Indirect Recursion
• Considerations:
• Careful design and testing to ensure that the 32-level nesting limit
is not exceeded
• Difficult to control the order of table updates
• Can usually be replaced by nonrecursive logic
• The RECURSIVE_TRIGGERS option only affects direct recursion
UPDATE Function

• UPDATE function—is a column being updated?


• Used in AFTER INSERT and AFTER UPDATE
• COLUMNS_UPDATED function returns a bitmap of the columns being updated
Firing Order for Triggers

• Multiple triggers may be created for a single event
• You cannot specify the full order in which the triggers will fire
• With sp_settriggerorder, you can specify which
triggers will fire first and last
Alternatives to Triggers

Many developers use triggers in situations where alternatives would be preferable:
• Use constraints for checking values
• Use defaults for values not supplied during
inserts
• Use foreign key constraints to check for
referential integrity
• Use computed and persisted computed columns
• Use indexed views for precalculating aggregates
Module 11
Implementing Managed Code in
SQL Server
Module Overview

• Introduction to CLR Integration in SQL Server


• Implementing and Publishing CLR Assemblies
Introduction to CLR Integration in SQL Server

• Options for Extending SQL Server Functionality


• Introduction to the .NET Framework
• .NET Common Language Runtime
• Why Use Managed Code with SQL Server?
• Considerations When Using Managed Code
• Appropriate Use of Managed Code
Options for Extending SQL Server Functionality

• With CLR managed code, you can extend the functionality of SQL Server
• Executes under the management of the .NET Framework
CLR
• Written in Visual C# or Visual Basic
• Use it to create user-defined types, aggregates,
mathematical functions, and other functionality
• SQL Server components such as SSIS, SSAS, and
SSRS are also extensible
• Extended stored procedures are deprecated
• Prone to memory leaks and other performance issues
• Use CLR managed code instead
Introduction to the .NET Framework

• The Win32 and Win64 APIs evolved over time


• They are complex and inconsistent in their design

• The .NET Framework is an object-oriented development framework
• It is consistent and well-designed
• It includes thousands of class libraries
• It provides a layer of abstraction above the Windows
operating system
• Generally well-regarded by developers

• The .NET Framework provides a good basis for writing code to extend SQL Server functionality
.NET Common Language Runtime

• The CLR is the runtime environment for the .NET Framework; it provides a number of services, including:
• Running code
• Providing services
• Avoiding memory leaks through garbage collection
• Destroys objects that are no longer used
• Operating within other programming environments
• Enabling interoperability between languages through
the common language specification (CLS)
Why Use Managed Code with SQL Server?

• The .NET Framework provides a rich class library


• Managed code can be used to create objects that
you normally create using Transact-SQL:
• User-defined functions (scalar and table-valued)
• Stored procedures
• Triggers (DML and DDL)

• Managed code can be used to create new types of objects:
• User-defined data types
• User-defined aggregates
Considerations When Using Managed Code

• Considerations when using managed code:


• Portability—upgrading your database
• Maintainability—consider expertise required to maintain:
• The database
• Managed code components
• Consider using a three-tier architecture
• Database tier
• Mid tier
• Presentation tier
• Transact-SQL is well suited to working with SQL Server tables and
data
• Use managed code sparingly; otherwise consider a three-tier
architecture
Appropriate Use of Managed Code
Implementing and Publishing CLR Assemblies

• What Is an Assembly?
• Assembly Permission Sets
• SQL Server Data Tools
• Publishing a CLR Assembly
• Demonstration: Creating a User-Defined Function
What Is an Assembly?

• Managed code is deployed in SQL Server using an assembly
• A SQL Server assembly is:
• A specially structured .dll file
• Self-describing through its manifest

• Assemblies contain compiled executable managed code
• They might also contain other resources
• Use SQL Server Data Tools (SSDT) to create
managed code, and to publish it to SQL Server
SQL Server Data Tools

• SQL Server Data Tools (SSDT) was introduced with SQL Server 2012
• Integrates with Visual Studio
• Familiar development environment
• Use SSDT to develop and deploy CLR assemblies
• SSDT templates for:
• Aggregates
• Stored procedures
• Triggers
• User-defined functions
• User-defined types
• Permission level is SAFE by default
Publishing a CLR Assembly

• Visual Studio with SSDT is a familiar environment for .NET software developers
• SSDT contains a number of SQL CLR templates:
• Aggregate
• Stored procedure
• User-defined function
• User-defined type

• Build the assembly and publish to your database


Demonstration: Creating a User-Defined Function

In this demonstration, you will see how to:


• Develop a simple function using CLR C# managed code
• Publish an assembly
Module 12
SQL Server Concurrency
Module Overview

• Concurrency and Transactions


• Locking Internals
Concurrency and Transactions

• Concurrency Models
• Concurrency Problems
• Transaction Isolation Levels
• Working with Row Versioning Isolation Levels
• Transactions
• Working with Transactions
• Demonstration: Analyzing Concurrency Problems
Concurrency Models

• Pessimistic concurrency:
• Data integrity maintained using locks
• Only one user can access a data item at once
• Writers block readers and other writers; readers block
writers
• Optimistic concurrency:
• Data is checked for changes before update
• Minimal locking
Concurrency Problems

• Dirty read
• Uncommitted data is included in results

• Lost update
• Two concurrent updates; the first update is lost

• Non-repeatable read
• Data changes between two identical SELECT statements
within a transaction
• Phantom read
• Rows appear or disappear between two identical reads within a transaction

• Double read
• Data in a range is read twice because the range key
value changes
Transaction Isolation Levels

• READ UNCOMMITTED
• READ COMMITTED
• REPEATABLE READ
• SERIALIZABLE
• SNAPSHOT
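
A sketch of changing the isolation level for a session (table and column names are illustrative):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    SELECT Balance FROM dbo.Account WHERE AccountID = 1;
    -- rows read here cannot be changed by other sessions until this transaction ends
COMMIT TRANSACTION;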
Transactions

• A logical unit of work, made up of one or more Transact-SQL statements
• Atomicity
• Consistency
• Isolation
• Durability

• Transaction management modes:


• Auto-commit
• Explicit transactions
• Implicit transactions
• Batch-scoped transactions
Working with Transactions

• Naming Transactions:
• Label only; no effect on code

• Nesting Transactions:
• Only the state of the outer transaction has any effect
• @@TRANCOUNT tracks transaction nesting

• Terminating Transactions:
• Resource error
• SET XACT_ABORT
• Connection closure

• Transaction Best Practices:


• Keep transactions as short as possible
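
A sketch of an explicit transaction (table and column names are illustrative):

SET XACT_ABORT ON;  -- roll back automatically on most run-time errors
BEGIN TRANSACTION TransferFunds;  -- the name is a label only
    UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountID = 2;
COMMIT TRANSACTION;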
Demonstration: Analyzing Concurrency Problems

In this demonstration, you will see:


• Examples of concurrency problems
• How changes to transaction isolation levels
address concurrency problems
Locking Internals
Locking Architecture

• Locking architecture is designed as a balance between consistency and concurrency
Lock Granularity and Hierarchy

• RID, KEY
• PAGE
• TABLE
• DATABASE
Lock Escalation

• Reduces lock manager memory overhead by converting many fine-grained locks to a single coarser-grained lock
• Row and page locks escalate to table locks
Lock Modes

• Data Lock Modes:


• Shared
• Exclusive
• Update
• Intent

• Special Lock Modes:


• Schema
• Conversion
• Bulk update
The Data Modification Process

• Relevant data pages located in the Buffer Pool


• Locks before data modification:
• Update lock on affected rows
• Intent exclusive lock on pages
• Intent exclusive lock on table
• Shared lock on database
• Data modification locks:
• Update lock converted to an exclusive lock on affected rows
• Intent exclusive lock on pages
• Intent exclusive lock on table
• Shared lock on database
Deadlock Internals

• Deadlocks are resolved by the Lock Manager:


• Runs every five seconds by default; frequency increases
as deadlocks are detected
• Deadlock victim is selected and terminated

• Use SQL Server Profiler to analyze deadlocks


Module 13
Performance and Monitoring
SQL Server Profiler

• SQL Trace, and SQL Server Profiler


• Events, Filters and other properties
• Extended Events
SQL Trace, and SQL Server Profiler

• SQL Trace and SQL Server Profiler are tools for collecting trace information about activity on a SQL Server instance
• Extended Events is the successor to SQL Trace
Extended Events Architecture

• Extended Events engine provides capabilities


• User defines session
• Session collects event
• Event triggers action
• Event is filtered by predicate
• Session writes to target
• A package defines the objects available to a session
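
A sketch of an Extended Events session with an event, predicate, and target (session and file names are illustrative):

CREATE EVENT SESSION LongQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000)  -- predicate: statements longer than one second (microseconds)
ADD TARGET package0.event_file (SET filename = N'LongQueries.xel');
GO

ALTER EVENT SESSION LongQueries ON SERVER STATE = START;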
Demonstration: Creating a Profiler Session

In this demonstration, you will learn how to:


• Create a SQL Server Profiler session
Demonstration: Creating an Extended Events Session

In this demonstration, you will learn how to:


• Create an Extended Events session
Performance Monitor

• Windows Performance Monitor is a snap-in for Microsoft Management Console
• Displays real-time performance data
• Saves performance data to text files or a database
• Enables creation of custom data collector sets
• Can respond to alerts
• Start by typing Performance Monitor from the start screen
Performance Monitor Counters

• Performance Monitor allows you to:


• Monitor real-time system performance
• Collect data in response to events
• Collect scheduled data

• Performance counters include:


• CPU usage
• Memory usage
• Disk usage
• SQL Server statistics
