Entity Framework Core
Entity Framework Core
Entity Framework
Compare EF Core & EF6
EF6 and EF Core in the Same Application
Porting from EF6 to EF Core
Validate Requirements
Porting an EDMX-Based Model
Porting a Code-Based Model
Entity Framework Core
What's New
Roadmap
EF Core 3.0 (in preview)
New features
Breaking changes
EF Core 2.2 (latest release)
EF Core 2.1
EF Core 2.0
EF Core 1.1
EF Core 1.0
Upgrading from previous versions
From 1.0 RC1 to RC2
From 1.0 RC2 to RTM
From 1.x to 2.0
Get Started
Installing EF Core
.NET Core
New Database
ASP.NET Core
Interactive tutorial
New Database
Existing Database
EF Core and Razor Pages
Universal Windows Platform (UWP)
New Database
.NET Framework
New Database
Existing Database
Fundamentals
Connection Strings
Logging
Connection Resiliency
Testing
Testing with SQLite
Testing with InMemory
Configuring a DbContext
Creating a Model
Including & Excluding Types
Including & Excluding Properties
Keys (primary)
Generated Values
Required/optional properties
Maximum Length
Concurrency Tokens
Shadow Properties
Relationships
Indexes
Alternate Keys
Inheritance
Backing Fields
Value Conversions
Data Seeding
Entity Type Constructors
Table Splitting
Owned Entity Types
Query Types
Alternating models with same DbContext
Spatial Data (GIS)
Relational Database Modeling
Table Mapping
Column Mapping
Data Types
Primary Keys
Default Schema
Computed Columns
Sequences
Default Values
Indexes
Foreign Key Constraints
Alternate Keys (Unique Constraints)
Inheritance (Relational Database)
Managing Database Schemas
Migrations
Team Environments
Custom Operations
Using a Separate Project
Multiple Providers
Custom History Table
Create and Drop APIs
Reverse Engineering (Scaffolding)
Querying Data
Basic Query
Loading Related Data
Client vs. Server Evaluation
Tracking vs. No-Tracking
Raw SQL Queries
Asynchronous Queries
How Query Works
Global Query Filters
Query Tags
Saving Data
Basic Save
Related Data
Cascade Delete
Concurrency Conflicts
Transactions
Asynchronous Saving
Disconnected Entities
Explicit values for generated properties
Supported .NET Implementations
Database Providers
Microsoft SQL Server
Memory-Optimized Tables
SQLite
SQLite Limitations
InMemory (for Testing)
Writing a Database Provider
Provider-impacting changes
Tools & Extensions
Command-Line Reference
Package Manager Console (Visual Studio)
.NET Core CLI
Design-time DbContext Creation
Design-time Services
EF Core API Reference
Entity Framework 6
What's New
Roadmap
Past Releases
Upgrading To EF6
Visual Studio Releases
Get Started
Fundamentals
Get Entity Framework
Working with DbContext
Understanding Relationships
Async Query & Save
Configuration
Code-Based
Config File
Connection Strings
Dependency Resolution
Connection Management
Connection Resiliency
Retry Logic
Transaction Commit Failures
Databinding
WinForms
WPF
Disconnected Entities
Self Tracking Entities
Walkthrough
Logging & Interception
Performance
Performance Considerations (Whitepaper)
Using NGEN
Using Pre-Generated Views
Providers
EF6 Provider Model
Spatial Support in Providers
Using Proxies
Testing with EF6
Using Mocking
Writing Your Own Test Doubles
Testability with EF4 (Article)
Creating a model
Using Code First
Workflows
With a New Database
With an Existing Database
Data Annotations
DbSets
Data Types
Enums
Spatial
Conventions
Built-In Conventions
Custom Conventions
Model Conventions
Fluent Configuration
Relationships
Types and Properties
Using in Visual Basic
Stored Procedure Mapping
Migrations
Automatic Migrations
Working with Existing Databases
Customizing Migrations History
Using Migrate.exe
Migrations in Team Environments
Using EF Designer
Workflows
Model-First
Database-First
Data types
Complex Types
Enums
Spatial
Split Mappings
Entity Splitting
Table Splitting
Inheritance Mappings
Table per Hierarchy
Table per Type
Mapping Stored Procedures
Query
Update
Mapping Relationships
Multiple Diagrams
Selecting Runtime Version
Code Generation
Legacy ObjectContext
Advanced
EDMX File Format
Defining Query
Multiple Result Sets
Table-Valued Functions
Keyboard Shortcuts
Querying Data
Load Method
Local Data
Tracking and No Tracking Queries
Using Raw SQL Queries
Querying Related Data
Saving Data
Change Tracking
Auto Detect Changes
Entity State
Property Values
Handling Concurrency Conflicts
Using Transactions
Data Validation
Additional Resources
Blogs
Case Studies
Contribute
Getting Help
Glossary
School Sample Database
Tools & Extensions
Licenses
EF5
Chinese Simplified
Chinese Traditional
German
English
Spanish
French
Italian
Japanese
Korean
Russian
EF6
Prerelease
Chinese Simplified
Chinese Traditional
German
English
Spanish
French
Italian
Japanese
Korean
Russian
EF6 API Reference
Entity Framework Documentation
Entity Framework
Entity Framework 6
EF 6 is a tried and tested data access technology with many years of features and
stabilization.
Choosing
Find out which version of EF is right for you.
Port to EF Core
Guidance on porting an existing EF 6 application to EF Core.
EF Core
all
Get Started
Overview
Create a Model
Query Data
Save Data
Tutorials
.NET Framework
.NET Core
ASP.NET Core
UWP
more…
Database providers
SQL Server
MySQL
PostgreSQL
SQLite
more…
API Reference
DbContext
DbSet<TEntity>
more…
EF 6
EF 6 is a tried and tested data access technology with many years of features and
stabilization.
Get Started
Learn how to access data with Entity Framework 6.
API Reference
Browse the Entity Framework 6 API, organized by namespace.
Compare EF Core & EF6
11/15/2018
Entity Framework is an object-relational mapper (O/RM) for .NET. This article compares the two versions: Entity
Framework 6 and Entity Framework Core.
Entity Framework 6
Entity Framework 6 (EF6) is a tried and tested data access technology. It was first released in 2008, as part of .NET
Framework 3.5 SP1 and Visual Studio 2008 SP1. Starting with the 4.1 release it has shipped as the
EntityFramework NuGet package. EF6 runs on the .NET Framework 4.x, which means it runs only on Windows.
EF6 continues to be a supported product, and will continue to see bug fixes and minor improvements.
Feature comparison
EF Core offers new features that won't be implemented in EF6 (such as alternate keys, batch updates, and mixed
client/database evaluation in LINQ queries). But because it's a new code base, it also lacks some features that EF6
has.
The following tables compare the features available in EF Core and EF6. It's a high-level comparison and doesn't
list every feature or explain differences between the same feature in different EF versions.
The EF Core column indicates the product version in which the feature first appeared.
Creating a model
FEATURE EF 6 EF CORE
Querying data
FEATURE EF6 EF CORE
Saving data
FEATURE EF6 EF CORE
Database providers
FEATURE EF6 EF CORE
1 There is currently a paid provider available for Oracle. A free official provider for Oracle is being worked on.
2 The SQL Server Compact and Jet providers only work on .NET Framework (not on .NET Core).
.NET implementations
FEATURE EF6 EF CORE
Next steps
For more information, see the documentation:
Overview - EF Core
Overview - EF6
Using EF Core and EF6 in the Same Application
8/27/2018
It is possible to use EF Core and EF6 in the same .NET Framework application or library by installing both NuGet
packages.
Some types have the same names in EF Core and EF6 and differ only by namespace, which may complicate using
both EF Core and EF6 in the same code file. The ambiguity can be easily removed using namespace alias directives.
For example:
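The code example did not survive in this capture; a minimal sketch of such alias directives (the context names are illustrative):

```csharp
using Microsoft.EntityFrameworkCore;    // EF Core types
using EF6 = System.Data.Entity;         // alias for the EF6 namespace

namespace HybridApp
{
    // Both base types are named DbContext; the alias disambiguates them.
    public class CoreBloggingContext : DbContext
    {
    }

    public class LegacyBloggingContext : EF6.DbContext
    {
    }
}
```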
If you are porting an existing application that has multiple EF models, you can choose to selectively port some of
them to EF Core, and continue using EF6 for the others.
Porting from EF6 to EF Core
8/27/2018
Because of the fundamental changes in EF Core, we do not recommend attempting to move an EF6 application to
EF Core unless you have a compelling reason to make the change. You should view the move from EF6 to EF Core
as a port rather than an upgrade.
Before porting from EF6 to EF Core: Validate your Application's Requirements
8/27/2018
Before you start the porting process it is important to validate that EF Core meets the data access requirements for
your application.
Missing features
Make sure that EF Core has all the features you need to use in your application. See Feature Comparison for a
detailed comparison of how the feature set in EF Core compares to EF6. If any required features are missing,
ensure that you can compensate for the lack of these features before porting to EF Core.
Behavior changes
This is a non-exhaustive list of some changes in behavior between EF6 and EF Core. It is important to keep these in
mind as you port your application, as they may change the way your application behaves, but will not show up as
compilation errors after swapping to EF Core.
DbSet.Add/Attach and graph behavior
In EF6, calling DbSet.Add() on an entity results in a recursive search for all entities referenced in its navigation
properties. Any entities that are found, and are not already tracked by the context, are also marked as added.
DbSet.Attach() behaves the same, except that all entities are marked as unchanged.
EF Core performs a similar recursive search, but with some slightly different rules.
The root entity is always in the requested state (added for DbSet.Add and unchanged for DbSet.Attach).
For entities that are found during the recursive search of navigation properties:
  If the primary key of the entity is store generated:
    If the primary key is not set to a value, the state is set to added. The primary key value is
    considered "not set" if it is assigned the CLR default value for the property type (for example,
    0 for int, null for string, etc.).
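To illustrate the rules above, a sketch with hypothetical Blog/Post entities whose int keys are store generated:

```csharp
var blog = new Blog();                  // Id == 0, so the key counts as "not set"
blog.Posts.Add(new Post());             // reachable through a navigation property

context.Blogs.Add(blog);
// blog is marked Added (the root always takes the requested state), and the
// reachable Post also becomes Added because its store-generated key is unset.
```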
Porting an EF6 EDMX-Based Model to EF Core
EF Core does not support the EDMX file format for models. The best option to port these models is to generate a
new code-based model from the database for your application.
For example, here is the command to scaffold a model from the Blogging database on your SQL Server LocalDB
instance.
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer
TIP
See Getting Started with EF Core on ASP.NET Core with an Existing Database for an additional reference on how to work with
an existing database.
Porting an EF6 Code-Based Model to EF Core
8/27/2018
If you've read all the caveats and you are ready to port, then here are some guidelines to help you get started.
Swap namespaces
Most APIs that you use in EF6 are in the System.Data.Entity namespace (and related sub-namespaces). The first
code change is to swap to the Microsoft.EntityFrameworkCore namespace. You would typically start with your
derived context code file and then work out from there, addressing compilation errors as they occur.
Existing migrations
There isn't really a feasible way to port existing EF6 migrations to EF Core.
If possible, it is best to assume that all previous migrations from EF6 have been applied to the database and then
start migrating the schema from that point using EF Core. To do this, you would use the Add-Migration command
to add a migration once the model is ported to EF Core. You would then remove all code from the Up and Down
methods of the scaffolded migration. Subsequent migrations will compare to the model when that initial migration
was scaffolded.
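With the .NET Core CLI tools, the baseline step described above might look like this (the migration name is illustrative):

```shell
# Scaffold a migration once the model has been ported to EF Core.
dotnet ef migrations add InitialAfterPort

# Then empty the Up() and Down() methods of the scaffolded migration by hand,
# so it records the current model without touching the existing schema.
```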
Entity Framework (EF) Core is a lightweight, extensible, open source and cross-platform version of the popular
Entity Framework data access technology.
EF Core can serve as an object-relational mapper (O/RM), enabling .NET developers to work with a database
using .NET objects, and eliminating the need for most of the data-access code they usually need to write.
EF Core supports many database engines; see Database Providers for details.
The Model
With EF Core, data access is performed using a model. A model is made up of entity classes and a context object
that represents a session with the database, allowing you to query and save data. See Creating a Model to learn
more.
You can generate a model from an existing database, hand code a model to match your database, or use EF
Migrations to create a database from your model, and then evolve it as your model changes over time.
using Microsoft.EntityFrameworkCore;
using System.Collections.Generic;

namespace Intro
{
    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
    }
}
Querying
Instances of your entity classes are retrieved from the database using Language Integrated Query (LINQ). See
Querying Data to learn more.
Saving Data
Data is created, deleted, and modified in the database using instances of your entity classes. See Saving Data to
learn more.
Next steps
For introductory tutorials, see Getting Started with Entity Framework Core.
What is new in EF Core
3/25/2019
Future releases
EF Core 3.0 is currently available in preview. EF Core 3.0 brings a set of new features and also breaking changes
you should be aware of when upgrading.
For more details on how we plan future releases like 3.0 and beyond, see the EF Core Roadmap.
Recent releases
EF Core 2.2 (latest stable release)
EF Core 2.1
Past releases
EF Core 2.0
EF Core 1.1
EF Core 1.0
Entity Framework Core Roadmap
4/20/2019
IMPORTANT
Please note that the feature sets and schedules of future releases are always subject to change, and although we will try to
keep this page up to date, it may not reflect our latest plans at all times.
EF Core 3.0
With EF Core 2.2 out the door, our main focus is now EF Core 3.0. See What's new in EF Core 3.0 for information
on planned new features and intentional breaking changes included in this release.
Schedule
The release schedule for EF Core is in sync with the .NET Core release schedule.
Backlog
The Backlog Milestone in our issue tracker contains issues that we either expect to work on someday, or we think
someone from the community could tackle. Customers are welcome to submit comments and votes on these
issues. Contributors looking to work on any of these issues are encouraged to first start a discussion on how to
approach them.
There's never a guarantee that we'll work on any given feature in a specific version of EF Core. As in all software
projects, priorities, release schedules, and available resources can change at any point. But if we intend to resolve
an issue in a specific timeframe, we'll assign it to a release milestone instead of the backlog milestone. We
routinely move issues between the backlog and release milestones as part of our release planning process.
We'll likely close an issue if we don't plan to ever address it. But we can reconsider an issue that we previously
closed if we get new information about it.
EF Core 3.0 is currently under development and available as preview packages published to the NuGet Gallery.
Current previews of EF Core 3.0 only include minor improvements and breaking changes we have made in
preparation for the rest of the 3.0 work.
Successive preview releases will contain more of the features planned for EF Core 3.0.
New features included in EF Core 3.0 (currently in preview)
4/2/2019
The following list includes the major new features planned for EF Core 3.0. Most of these features are not included
in the current preview, but will become available as we make progress towards RTM.
The reason is that at the beginning of the release we are focusing on implementing planned breaking changes.
Many of these breaking changes are improvements to EF Core on their own. Many others are required to unblock
further improvements.
For a complete list of bug fixes and enhancements underway, you can see this query in our issue tracker.
LINQ improvements
Tracking Issue #12795
Work on this feature has started but it isn't included in the current preview.
LINQ enables you to write database queries without leaving your language of choice, taking advantage of rich
type information to get IntelliSense and compile-time type checking. But LINQ also enables you to write an
unlimited number of complicated queries, and that has always been a huge challenge for LINQ providers. In the
first few versions of EF Core, we solved that in part by figuring out what portions of a query could be translated to
SQL, and then by allowing the rest of the query to execute in memory on the client. This client-side execution can
be desirable in some situations, but in many other cases it can result in inefficient queries that may not be
identified until an application is deployed to production. In EF Core 3.0, we're planning to make profound changes
to how our LINQ implementation works, and how we test it. The goals are to make it more robust (for example, to
avoid breaking queries in patch releases), to enable translating more expressions correctly into SQL, to generate
efficient queries in more cases, and to prevent inefficient queries from going undetected.
Cosmos DB support
Tracking Issue #8443
This feature is included in the current preview, but isn't complete yet.
We're working on a Cosmos DB provider for EF Core, to enable developers familiar with the EF programming
model to easily target Azure Cosmos DB as an application database. The goal is to make some of the advantages
of Cosmos DB, like global distribution, "always on" availability, elastic scalability, and low latency, even more
accessible to .NET developers. The provider will enable most EF Core features, like automatic change tracking,
LINQ, and value conversions, against the SQL API in Cosmos DB. We started this effort before EF Core 2.2, and
we have made some preview versions of the provider available. The new plan is to continue developing the
provider alongside EF Core 3.0.
Dependent entities sharing the table with the principal are now optional
Tracking Issue #9005
This feature will be introduced in EF Core 3.0-preview 4.
Consider the following model:
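The model referenced here is not reproduced in this capture; a sketch consistent with the surrounding text (property names are assumptions):

```csharp
public class Order
{
    public int Id { get; set; }
    public OrderDetails Details { get; set; }   // owned by Order, or mapped to the same table
}

public class OrderDetails
{
    public int Id { get; set; }
    public string ShippingAddress { get; set; }
}
```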
Starting with EF Core 3.0, if OrderDetails is owned by Order or explicitly mapped to the same table, it will be
possible to add an Order without an OrderDetails, and all of the OrderDetails properties except the primary key
will be mapped to nullable columns. When querying, EF Core will set OrderDetails to null if any of its required
properties doesn't have a value, or if it has no required properties besides the primary key and all properties are
null .
C# 8.0 support
Tracking Issue #12047 Tracking Issue #10347
Work on this feature has started but it isn't included in the current preview.
We want our customers to take advantage of some of the new features coming in C# 8.0 like async streams
(including await foreach ) and nullable reference types while using EF Core.
The following API and behavior changes have the potential to break applications developed for EF Core 2.2.x
when upgrading them to 3.0.0. Changes that we expect to only impact database providers are documented under
provider changes. Breaks in new features introduced from one 3.0 preview to another 3.0 preview aren't
documented here.
You can also obtain it as a local tool when you restore the dependencies of a project that declares it as a tooling
dependency using a tool manifest file.
context.Products.FromSqlRaw(
"SELECT * FROM Products WHERE Name = {0}",
product.Name);
context.Products.FromSqlInterpolated(
$"SELECT * FROM Products WHERE Name = {product.Name}");
Note that both of the queries above will produce the same parameterized SQL with the same SQL parameters.
Why
Method overloads like this make it very easy to accidentally call the raw string method when the intent was to call
the interpolated string method, and the other way around. This could result in queries not being parameterized
when they should have been.
Mitigations
Switch to use the new method names.
modelBuilder
.Entity<Blog>()
.Property(e => e.Id)
.ValueGeneratedNever();
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public string Id { get; set; }
context.ChangeTracker.CascadeDeleteTiming = CascadeTiming.OnSaveChanges;
context.ChangeTracker.DeleteOrphansTiming = CascadeTiming.OnSaveChanges;
New behavior
Starting with EF Core 3.0, there is now a fluent API to configure a navigation property to the owner using
WithOwner() . For example:
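A minimal sketch of the new call, assuming an Order/OrderDetails model where OrderDetails exposes a navigation back to its owner:

```csharp
modelBuilder.Entity<Order>()
    .OwnsOne(o => o.Details)
    .WithOwner(d => d.Order);   // configures the navigation pointing at the owner
```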
The configuration related to the relationship between owner and owned should now be chained after
WithOwner(), similarly to how other relationships are configured, while the configuration for the owned type itself
is still chained after OwnsOne()/OwnsMany() . For example:
modelBuilder.Entity<Order>().OwnsOne(
    o => o.Details,
    eb =>
    {
        eb.WithOwner()
            .HasForeignKey(e => e.AlternateId);
        eb.ToTable("OrderDetails");
        eb.HasKey(e => e.AlternateId);
        eb.HasIndex(e => e.Id);
        eb.HasData(
            new OrderDetails
            {
                AlternateId = 1,
                Id = -1
            });
    });
Additionally, calling Entity(), HasOne(), or Set() with an owned type target will now throw an exception.
Why
This change was made to create a cleaner separation between configuring the owned type itself and the
relationship to the owned type. This in turn removes ambiguity and confusion around methods like
HasForeignKey .
Mitigations
Change configuration of owned type relationships to use the new API surface as shown in the example above.
Dependent entities sharing the table with the principal are now optional
Tracking Issue #9005
This change is introduced in EF Core 3.0-preview 4.
Old behavior
Consider the following model:
Before EF Core 3.0, if OrderDetails is owned by Order or explicitly mapped to the same table, then an
OrderDetails instance was always required when adding a new Order .
New behavior
Starting with 3.0, EF Core allows an Order to be added without an OrderDetails and maps all of the OrderDetails
properties except the primary key to nullable columns. When querying, EF Core sets OrderDetails to null if any
of its required properties doesn't have a value, or if it has no required properties besides the primary key and all
properties are null .
Mitigations
If your model has a table-sharing dependent with all optional columns, but the navigation pointing to it is not
expected to be null, then the application should be modified to handle cases when the navigation is null . If this
is not possible, a required property should be added to the entity type, or at least one property should have a
non-null value assigned to it.
Before EF Core 3.0, if OrderDetails is owned by Order or explicitly mapped to the same table, then updating just
OrderDetails would not update the Version value on the client, and the next update would fail.
New behavior
Starting with 3.0, EF Core propagates the new Version value to Order if it owns OrderDetails . Otherwise an
exception is thrown during model validation.
Why
This change was made to avoid a stale concurrency token value when only one of the entities mapped to the same
table is updated.
Mitigations
All entities sharing the table have to include a property that is mapped to the concurrency token column. It's
possible to create one in shadow state:
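The code for this mitigation is missing from this capture; a sketch using an assumed rowversion column shared by Order and OrderDetails:

```csharp
// Map a shadow concurrency token to the shared column on both entity types.
modelBuilder.Entity<Order>()
    .Property<byte[]>("Version")
    .IsRowVersion()
    .HasColumnName("Version");

modelBuilder.Entity<OrderDetails>()
    .Property<byte[]>("Version")
    .IsRowVersion()
    .HasColumnName("Version");
```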
Before EF Core 3.0, the ShippingAddress property would be mapped to separate columns for BulkOrder and
Order by default.
New behavior
Starting with 3.0, EF Core only creates one column for ShippingAddress .
Why
The old behavior was unexpected.
Mitigations
The property can still be explicitly mapped to separate columns on the derived types:
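The example is not included in this capture; a sketch assuming ShippingAddress is an owned type with a City property and BulkOrder derives from Order:

```csharp
modelBuilder.Entity<BulkOrder>()
    .OwnsOne(o => o.ShippingAddress)
    .Property(a => a.City)
    .HasColumnName("BulkOrder_City");   // separate column just for BulkOrder rows
```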
Before EF Core 3.0, the CustomerId property would be used for the foreign key by convention. However, if Order
is an owned type, then this would also make CustomerId the primary key and this isn't usually the expectation.
New behavior
Starting with 3.0, EF Core doesn't try to use properties for foreign keys by convention if they have the same name
as the principal property. Patterns where the principal type name, or the navigation name, is concatenated with the
principal property name are still matched. For example:
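The example is missing here; a sketch of the patterns involved (type and property names are assumptions):

```csharp
public class Customer
{
    public int CustomerId { get; set; }
}

public class Order
{
    public int Id { get; set; }

    // No longer picked as the FK by convention in 3.0, because it exactly
    // matches the principal property name:
    public int CustomerId { get; set; }

    // Still matched: principal type name + principal property name.
    public int CustomerCustomerId { get; set; }

    public Customer Customer { get; set; }
}
```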
Why
This change was made to avoid erroneously defining a primary key property on the owned type.
Mitigations
If the property was intended to be the foreign key, and hence part of the primary key, then explicitly configure it as
such.
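For an ordinary reference navigation, that explicit configuration could look like this (names are assumptions):

```csharp
modelBuilder.Entity<Order>()
    .HasOne(o => o.Customer)
    .WithMany(c => c.Orders)
    .HasForeignKey(o => o.CustomerId);  // explicitly pick CustomerId as the FK
```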
New behavior
Starting with 3.0, EF Core closes the connection as soon as it's done using it.
Why
This change allows multiple contexts to be used in the same TransactionScope . The new behavior also matches EF6.
Mitigations
If the connection needs to remain open, an explicit call to OpenConnection() will ensure that EF Core doesn't close it
prematurely:
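A sketch of keeping the connection open across several operations (entity sets are illustrative):

```csharp
context.Database.OpenConnection();
try
{
    // All of these operations reuse the already-open connection.
    var blogs = context.Blogs.ToList();
    var posts = context.Posts.ToList();
}
finally
{
    context.Database.CloseConnection();
}
```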
modelBuilder.UsePropertyAccessMode(PropertyAccessMode.PreferFieldDuringConstruction);
modelBuilder
.Entity<Blog>()
.Property(e => e.Id)
.HasField("_id");
modelBuilder
.Entity<Blog>()
.Property("Id");
New behavior
Starting with EF Core 3.0, a field-only property must match the field name exactly.
modelBuilder
.Entity<Blog>()
.Property("_id");
Why
This change was made to avoid using the same field for two properties named similarly, it also makes the
matching rules for field-only properties the same as for properties mapped to CLR properties.
Mitigations
Field-only properties must be named the same as the field they are mapped to. In a later preview of EF Core 3.0,
we plan to re-enable explicitly configuring a field name that is different from the property name:
modelBuilder
.Entity<Blog>()
.Property("Id")
.HasField("_id");
modelBuilder
.Entity<Blog>()
.Property(e => e.Id)
.ValueGeneratedOnAdd();
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public string Id { get; set; }
Mitigations
Update application code to not attempt lazy-loading with a disposed context, or configure this to be a no-op as
described in the exception message.
modelBuilder.Entity<Samurai>().HasOne("Entrance").WithOne();
The code looks like it is relating Samurai to some other entity type using the Entrance navigation property, which
may be private.
In reality, this code attempts to create a relationship to some entity type called Entrance with no navigation
property.
New behavior
Starting with EF Core 3.0, the code above now does what it looked like it should have been doing before.
Why
The old behavior was very confusing, especially when reading the configuration code and looking for errors.
Mitigations
This will only break applications that are explicitly configuring relationships using strings for type names and
without specifying the navigation property explicitly. This is not common. The previous behavior can be obtained
through explicitly passing null for the navigation property name. For example:
modelBuilder.Entity<Samurai>().HasOne("Some.Entity.Type.Name", null).WithOne();
The return type for several async methods has been changed from Task to ValueTask
Tracking Issue #15184
This change is introduced in EF Core 3.0-preview 4.
Old behavior
The following async methods previously returned a Task<T> :
DbContext.FindAsync()
DbSet.FindAsync()
DbContext.AddAsync()
DbSet.AddAsync()
ValueGenerator.NextValueAsync() (and deriving classes)
New behavior
The aforementioned methods now return a ValueTask<T> over the same T as before.
Why
This change reduces the number of heap allocations incurred when invoking these methods, improving general
performance.
Mitigations
Applications simply awaiting the above APIs only need to be recompiled; no source changes are necessary. More
complex usage (for example, passing the returned Task to Task.WhenAny() ) typically requires that the returned
ValueTask<T> be converted to a Task<T> by calling AsTask() on it. Note that this negates the allocation
reduction that this change brings.
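A sketch of both patterns (the entity type and key value are illustrative):

```csharp
// Simply awaiting compiles unchanged against the new ValueTask<T> signature:
var blog = await context.FindAsync<Blog>(1);

// When an actual Task<T> is needed, e.g. for Task.WhenAny(), convert it:
Task<Blog> task = context.FindAsync<Blog>(1).AsTask();
var completed = await Task.WhenAny(task, Task.Delay(TimeSpan.FromSeconds(5)));
```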
Why
This change simplifies the implementation of the aforementioned interfaces.
Mitigations
Use the new extension methods.
Why
This change simplifies the implementation of the aforementioned extension methods.
Mitigations
Use the new extension methods.
UPDATE MyTable
SET GuidColumn = hex(substr(GuidColumn, 4, 1)) ||
hex(substr(GuidColumn, 3, 1)) ||
hex(substr(GuidColumn, 2, 1)) ||
hex(substr(GuidColumn, 1, 1)) || '-' ||
hex(substr(GuidColumn, 6, 1)) ||
hex(substr(GuidColumn, 5, 1)) || '-' ||
hex(substr(GuidColumn, 8, 1)) ||
hex(substr(GuidColumn, 7, 1)) || '-' ||
hex(substr(GuidColumn, 9, 2)) || '-' ||
hex(substr(GuidColumn, 11, 6))
WHERE typeof(GuidColumn) == 'blob';
In EF Core, you could also continue using the previous behavior by configuring a value converter on these
properties.
modelBuilder
.Entity<MyEntity>()
.Property(e => e.GuidProperty)
.HasConversion(
g => g.ToByteArray(),
b => new Guid(b));
Microsoft.Data.Sqlite remains capable of reading Guid values from both BLOB and TEXT columns; however, since
the default format for parameters and constants has changed you'll likely need to take action for most scenarios
involving Guids.
UPDATE MyTable
SET CharColumn = char(CharColumn)
WHERE typeof(CharColumn) = 'integer';
In EF Core, you could also continue using the previous behavior by configuring a value converter on these
properties.
modelBuilder
.Entity<MyEntity>()
.Property(e => e.CharProperty)
.HasConversion(
c => (long)c,
i => (char)i);
Microsoft.Data.Sqlite also remains capable of reading character values from both INTEGER and TEXT columns, so
certain scenarios may not require any action.
Migration IDs are now generated using the invariant culture's calendar
Tracking Issue #12978
This change is introduced in EF Core 3.0-preview 4.
Old behavior
Migration IDs were inadvertently generated using the current culture's calendar.
New behavior
Migration IDs are now always generated using the invariant culture's calendar (Gregorian).
Why
The order of migrations is important when updating the database or resolving merge conflicts. Using the
invariant calendar avoids ordering issues that can result from team members having different system calendars.
Mitigations
This change affects anyone using a non-Gregorian calendar in which the year is greater than in the Gregorian
calendar (like the Thai Buddhist calendar). Existing migration IDs will need to be updated so that new migrations
are ordered after existing migrations.
The migration ID can be found in the Migration attribute in the migrations' designer files.
[DbContext(typeof(MyDbContext))]
-[Migration("25620318122820_MyMigration")]
+[Migration("20190318122820_MyMigration")]
partial class MyMigration
{
Why
Aligns the naming of this warning event with all other warning events.
Mitigations
Use the new name. (Note that the event ID number has not changed.)
New behavior
Starting with EF Core 3.0, foreign key constraint names are now referred to as the "constraint name". For example:
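For instance, in the fluent API the constraint name on a foreign key is configured with HasConstraintName (entity names are illustrative):

```csharp
modelBuilder.Entity<Post>()
    .HasOne(p => p.Blog)
    .WithMany(b => b.Posts)
    .HasConstraintName("FK_Post_Blog");
```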
Why
This change brings consistency to naming in this area, and also clarifies that this is the name of the foreign key
constraint, and not the column or property name that the foreign key is defined on.
Mitigations
Use the new name.
Microsoft.EntityFrameworkCore.Design is now a DevelopmentDependency package
Tracking Issue #11506
This change is introduced in EF Core 3.0-preview 4.
Old behavior
Before EF Core 3.0, Microsoft.EntityFrameworkCore.Design was a regular NuGet package whose assembly could
be referenced by projects that depended on it.
New behavior
Starting with EF Core 3.0, it is a DevelopmentDependency package, which means that the dependency won't flow
transitively into other projects, and that you can no longer, by default, reference its assembly.
Why
This package is only intended to be used at design time. Deployed applications shouldn't reference it. Making the
package a DevelopmentDependency reinforces this recommendation.
Mitigations
If you need to reference this package to override EF Core's design-time behavior, you can update the
PackageReference item metadata in your project. If the package is being referenced transitively via
Microsoft.EntityFrameworkCore.Tools, you will need to add an explicit PackageReference to the package to
change its metadata.
using NetTopologySuite.Geometries;
namespace MyApp
{
public class Friend
{
[Key]
public string Name { get; set; }
[Required]
public Point Location { get; set; }
}
}
And you can execute database queries based on spatial data and operations:
var nearestFriends =
(from f in context.Friends
orderby f.Location.Distance(myLocation) descending
select f).Take(5).ToList();
For more information on this feature, see the spatial types documentation.
Query tags
This feature simplifies the correlation of LINQ queries in code with generated SQL queries captured in logs.
To take advantage of query tags, you annotate a LINQ query using the new TagWith() method. Using the spatial
query from a previous example:
var nearestFriends =
(from f in context.Friends.TagWith(@"This is my spatial query!")
orderby f.Location.Distance(myLocation) descending
select f).Take(5).ToList();
Besides numerous bug fixes and small functional and performance enhancements, EF Core 2.1 includes some
compelling new features:
Lazy loading
EF Core now contains the necessary building blocks for anyone to author entity classes that can load their
navigation properties on demand. We have also created a new package, Microsoft.EntityFrameworkCore.Proxies,
that leverages those building blocks to produce lazy loading proxy classes based on minimally modified entity
classes (for example, classes with virtual navigation properties).
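As a sketch of how the pieces fit together (the Blog and Post types here are illustrative, not from the original text), navigation properties are declared virtual and proxy creation is enabled on the context:

```csharp
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity types; navigations must be virtual for proxies to override them.
public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public virtual Blog Blog { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder
            .UseLazyLoadingProxies() // from Microsoft.EntityFrameworkCore.Proxies
            .UseSqlServer("connection string here");
}
```

Reading blog.Posts then triggers a database query the first time the navigation is accessed.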
Read the section on lazy loading for more information about this topic.
Value conversions
Until now, EF Core could only map properties of types natively supported by the underlying database provider.
Values were copied back and forth between columns and properties without any transformation. Starting with EF
Core 2.1, value conversions can be applied to transform the values obtained from columns before they are applied
to properties, and vice versa. We have a number of conversions that can be applied by convention as necessary, as
well as an explicit configuration API that allows registering custom conversions between columns and properties.
Some applications of this feature are:
Storing enums as strings
Mapping unsigned integers with SQL Server
Automatic encryption and decryption of property values
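For example, the enum-to-string scenario from the list above could be configured roughly as follows (Car and CarColor are hypothetical types used for illustration):

```csharp
// In OnModelCreating: store the CarColor enum as its string name.
modelBuilder.Entity<Car>()
    .Property(c => c.Color)
    .HasConversion(
        v => v.ToString(),                               // writing to the database
        v => (CarColor)Enum.Parse(typeof(CarColor), v)); // reading from the database
```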
Read the section on value conversions for more information about this topic.
Data Seeding
With the new release it is possible to provide initial data to populate a database. Unlike in EF6, seeding data is
associated with an entity type as part of the model configuration. Then EF Core migrations can automatically
compute what insert, update or delete operations need to be applied when upgrading the database to a new
version of the model.
As an example, you can use this to configure seed data for a Post in OnModelCreating :
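A minimal sketch, assuming a Post type with BlogId, PostId, Title, and Content properties:

```csharp
// In OnModelCreating: seed data is part of the model configuration.
modelBuilder.Entity<Post>().HasData(
    new Post { BlogId = 1, PostId = 1, Title = "First post", Content = "Test 1" });
```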
Read the section on data seeding for more information about this topic.
Query types
An EF Core model can now include query types. Unlike entity types, query types do not have keys defined on them
and cannot be inserted, deleted or updated (that is, they are read-only), but they can be returned directly by
queries. Some of the usage scenarios for query types are:
Mapping to views without primary keys
Mapping to tables without primary keys
Mapping to queries defined in the model
Serving as the return type for FromSql() queries
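As a sketch of the view-mapping scenario (the view name and the keyless BlogPostsCount class are assumptions):

```csharp
// In OnModelCreating: register the query type and map it to a database view.
modelBuilder.Query<BlogPostsCount>().ToView("View_BlogPostCounts");

// Querying it later; results are read-only and never tracked.
var counts = context.Query<BlogPostsCount>().ToList();
```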
Read the section on query types for more information about this topic.
Read the section on Include with derived types for more information about this topic.
System.Transactions support
We have added the ability to work with System.Transactions features such as TransactionScope. This will work on
both .NET Framework and .NET Core when using database providers that support it.
Read the section on System.Transactions for more information about this topic.
By including ToList() in the right place, you indicate that buffering is appropriate for the Orders, which enables the
optimization:
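The buffered query discussed here can be sketched as follows (Customer and Order with an Amount property are illustrative types):

```csharp
// ToList() inside the projection buffers each customer's filtered orders.
var query = context.Customers.Select(
    c => c.Orders.Where(o => o.Amount > 100).ToList());
```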
Note that this query will be translated to only two SQL queries: One for Customers and the next one for Orders.
[Owned] attribute
It is now possible to configure owned entity types by simply annotating the type with [Owned] and then making
sure the owner entity is added to the model:
[Owned]
public class StreetAddress
{
public string Street { get; set; }
public string City { get; set; }
}
Microsoft.EntityFrameworkCore.Abstractions package
The new package contains attributes and interfaces that you can use in your projects to light up EF Core features
without taking a dependency on EF Core as a whole. For example, the [Owned] attribute and the ILazyLoader
interface are located here.
TIP
If you find any unexpected incompatibility or any issue in the new features, or if you have feedback on them, please report it
using our issue tracker.
New features in EF Core 2.0
3/6/2019
Modeling
Table splitting
It is now possible to map two or more entity types to the same table where the primary key column(s) will be
shared and each row will correspond to two or more entities.
To use table splitting an identifying relationship (where foreign key properties form the primary key) must be
configured between all of the entity types sharing the table:
modelBuilder.Entity<Product>()
.HasOne(e => e.Details).WithOne(e => e.Product)
.HasForeignKey<ProductDetails>(e => e.Id);
modelBuilder.Entity<Product>().ToTable("Products");
modelBuilder.Entity<ProductDetails>().ToTable("Products");
Owned types
An owned entity type can share the same CLR type with another owned entity type, but since it cannot be
identified just by the CLR type there must be a navigation to it from another entity type. The entity containing the
defining navigation is the owner. When querying the owner the owned types will be included by default.
By convention a shadow primary key will be created for the owned type and it will be mapped to the same table as
the owner by using table splitting. This makes it possible to use owned types similarly to how complex types are used in EF6:
modelBuilder.Entity<Order>().OwnsOne(p => p.OrderDetails, cb =>
{
cb.OwnsOne(c => c.BillingAddress);
cb.OwnsOne(c => c.ShippingAddress);
});
Read the section on owned entity types for more information on this feature.
Model-level query filters
EF Core 2.0 includes a new feature we call Model-level query filters. This feature allows LINQ query predicates (a
boolean expression typically passed to the LINQ Where query operator) to be defined directly on Entity Types in
the metadata model (usually in OnModelCreating). Such filters are automatically applied to any LINQ queries
involving those Entity Types, including Entity Types referenced indirectly, such as through the use of Include or
direct navigation property references. Some common applications of this feature are:
Soft delete - An Entity Type defines an IsDeleted property.
Multi-tenancy - An Entity Type defines a TenantId property.
Here is a simple example demonstrating the feature for the two scenarios listed above:
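A sketch of such a configuration (the IsDeleted and TenantId properties on Post are assumptions):

```csharp
public class BloggingContext : DbContext
{
    public DbSet<Post> Posts { get; set; }

    // Set per request, for example from the authenticated user.
    public int TenantId { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Soft delete and multi-tenancy combined in a single predicate.
        modelBuilder.Entity<Post>()
            .HasQueryFilter(p => !p.IsDeleted && p.TenantId == TenantId);
    }
}
```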
We define a model-level filter that implements multi-tenancy and soft-delete for instances of the Post Entity Type.
Note the use of a DbContext instance level property: TenantId . Model-level filters will use the value from the
correct context instance (that is, the context instance that is executing the query).
Filters may be disabled for individual LINQ queries using the IgnoreQueryFilters() operator.
Limitations
Navigation references are not allowed. This feature may be added based on feedback.
Filters can only be defined on the root Entity Type of a hierarchy.
Database scalar function mapping
EF Core 2.0 includes an important contribution from Paul Middleton which enables mapping database scalar
functions to method stubs so that they can be used in LINQ queries and translated to SQL.
Here is a brief description of how the feature can be used:
Declare a static method on your DbContext and annotate it with DbFunctionAttribute :
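For example (PostReadCount is assumed to be an existing scalar function in the database):

```csharp
public class BloggingContext : DbContext
{
    [DbFunction] // maps to a database function with the same name by default
    public static int PostReadCount(int blogId)
        => throw new NotImplementedException(); // never executed; calls are translated to SQL
}
```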
Methods like this are automatically registered. Once registered, calls to the method in a LINQ query can be
translated to function calls in SQL:
var query =
from p in context.Posts
where BloggingContext.PostReadCount(p.Id) > 5
select p;
...
// OnModelCreating
builder.ApplyConfiguration(new CustomerConfiguration());
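The ApplyConfiguration call above assumes a configuration class along these lines (Customer and its properties are illustrative):

```csharp
// Requires Microsoft.EntityFrameworkCore.Metadata.Builders.
class CustomerConfiguration : IEntityTypeConfiguration<Customer>
{
    public void Configure(EntityTypeBuilder<Customer> builder)
    {
        builder.HasKey(c => c.AlternateKey);
        builder.Property(c => c.Name).HasMaxLength(200);
    }
}
```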
High Performance
DbContext pooling
The basic pattern for using EF Core in an ASP.NET Core application usually involves registering a custom
DbContext type into the dependency injection system and later obtaining instances of that type through
constructor parameters in controllers. This means a new instance of the DbContext is created for each request.
In version 2.0 we are introducing a new way to register custom DbContext types in dependency injection which
transparently introduces a pool of reusable DbContext instances. To use DbContext pooling, use the
AddDbContextPool instead of AddDbContext during service registration:
services.AddDbContextPool<BloggingContext>(
options => options.UseSqlServer(connectionString));
If this method is used, at the time a DbContext instance is requested by a controller we will first check if there is an
instance available in the pool. Once request processing finishes, any state on the instance is reset and the
instance is itself returned to the pool.
This is conceptually similar to how connection pooling operates in ADO.NET providers and has the advantage of
saving some of the cost of initializing a DbContext instance.
Limitations
The new method introduces a few limitations on what can be done in the OnConfiguring() method of the
DbContext.
WARNING
Avoid using DbContext Pooling if you maintain your own state (for example, private fields) in your derived DbContext class
that should not be shared across requests. EF Core will only reset the state that it is aware of before adding a DbContext
instance to the pool.
Change Tracking
Attach can track a graph of new and existing entities
EF Core supports automatic generation of key values through a variety of mechanisms. When using this feature, a
value is generated if the key property is set to its CLR default (usually zero or null). This means that a graph of entities
can be passed to DbContext.Attach or DbSet.Attach and EF Core will mark those entities that have a key already
set as Unchanged while those entities that do not have a key set will be marked as Added . This makes it easy to
attach a graph of mixed new and existing entities when using generated keys. DbContext.Update and DbSet.Update
work in the same way, except that entities with a key set are marked as Modified instead of Unchanged .
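As a sketch (Blog and Post with store-generated keys are assumptions):

```csharp
// The blog already has a key value, so it will be tracked as Unchanged;
// the new post has the CLR default key (0), so it will be marked Added.
var blog = new Blog { BlogId = 1, Url = "http://sample.com" };
blog.Posts.Add(new Post { Title = "Intro" });

context.Attach(blog);
context.SaveChanges(); // inserts only the new post
```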
Query
Improved LINQ Translation
Enables more queries to successfully execute, with more logic being evaluated in the database (rather than in-
memory) and less data unnecessarily being retrieved from the database.
GroupJoin improvements
This work improves the SQL that is generated for group joins. Group joins are most often a result of sub-queries
on optional navigation properties.
String interpolation in FromSql and ExecuteSqlCommand
C# 6 introduced String Interpolation, a feature that allows C# expressions to be directly embedded in string literals,
providing a nice way of building strings at runtime. In EF Core 2.0 we added special support for interpolated
strings to our two primary APIs that accept raw SQL strings: FromSql and ExecuteSqlCommand . This new support
allows C# string interpolation to be used in a 'safe' manner. That is, in a way that protects against common SQL
injection mistakes that can occur when dynamically constructing SQL at runtime.
Here is an example:
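A sketch of the interpolated form (CreateContext and the Customers set follow the pattern of the surrounding samples):

```csharp
var city = "London";
var contactTitle = "Sales Representative";

using (var context = CreateContext())
{
    // Interpolated values are turned into parameters, not concatenated text.
    context.Customers
        .FromSql($@"
SELECT *
FROM ""Customers""
WHERE ""City"" = {city}
    AND ""ContactTitle"" = {contactTitle}")
        .ToArray();
}
```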
In this example, there are two variables embedded in the SQL format string. EF Core will produce the following
SQL:
SELECT *
FROM ""Customers""
WHERE ""City"" = @p0
AND ""ContactTitle"" = @p1
EF.Functions.Like()
We have added the EF.Functions property which can be used by EF Core or providers to define methods that map
to database functions or operators so that those can be invoked in LINQ queries. The first example of such a
method is Like():
var aCustomers =
from c in context.Customers
where EF.Functions.Like(c.Name, "a%")
select c;
Note that Like() comes with an in-memory implementation, which can be handy when working against an in-
memory database or when evaluation of the predicate needs to occur on the client side.
Database management
Pluralization hook for DbContext Scaffolding
EF Core 2.0 introduces a new IPluralizer service that is used to singularize entity type names and pluralize DbSet
names. The default implementation is a no-op, so this is just a hook where folks can easily plug in their own
pluralizer.
Here is what it looks like for a developer to hook in their own pluralizer:
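A sketch, assuming a third-party Inflector helper performs the actual word transformations:

```csharp
public class MyDesignTimeServices : IDesignTimeServices
{
    public void ConfigureDesignTimeServices(IServiceCollection services)
        => services.AddSingleton<IPluralizer, MyPluralizer>();
}

public class MyPluralizer : IPluralizer
{
    public string Pluralize(string name)
        => Inflector.Pluralize(name) ?? name;

    public string Singularize(string name)
        => Inflector.Singularize(name) ?? name;
}
```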
Others
Move ADO.NET SQLite provider to SQLitePCL.raw
This gives us a more robust solution in Microsoft.Data.Sqlite for distributing native SQLite binaries on different
platforms.
Only one provider per model
Significantly augments how providers can interact with the model and simplifies how conventions, annotations
and fluent APIs work with different providers.
EF Core 2.0 will now build a different IModel for each different provider being used. This is usually transparent to
the application. This has facilitated a simplification of lower-level metadata APIs such that any access to common
relational metadata concepts is always made through a call to .Relational instead of .SqlServer , .Sqlite , etc.
Consolidated Logging and Diagnostics
Logging (based on ILogger) and Diagnostics (based on DiagnosticSource) mechanisms now share more code.
The event IDs for messages sent to an ILogger have changed in 2.0. The event IDs are now unique across EF Core
code. These messages now also follow the standard pattern for structured logging used by, for example, MVC.
Logger categories have also changed. There is now a well-known set of categories accessed through
DbLoggerCategory.
DiagnosticSource events now use the same event ID names as the corresponding ILogger messages.
New features in EF Core 1.1
8/27/2018
Modelling
Field mapping
Allows you to configure a backing field for a property. This can be useful for read-only properties, or data that has
Get/Set methods rather than a property.
Mapping to Memory-Optimized Tables in SQL Server
You can specify that the table an entity is mapped to is memory-optimized. When using EF Core to create and
maintain a database based on your model (either with migrations or Database.EnsureCreated() ), a memory-
optimized table will be created for these entities.
Change tracking
Additional change tracking APIs from EF6
Such as Reload , GetModifiedProperties , GetDatabaseValues etc.
Query
Explicit Loading
Allows you to trigger population of a navigation property on an entity that was previously loaded from the
database.
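For example, given a tracked blog instance:

```csharp
var blog = context.Blogs.Single(b => b.BlogId == 1);

// Populate the Posts collection navigation on demand.
context.Entry(blog).Collection(b => b.Posts).Load();
```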
DbSet.Find
Provides an easy way to fetch an entity based on its primary key value.
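For example:

```csharp
// Checks entities already tracked by the context first,
// and only queries the database if no match is found.
var blog = context.Blogs.Find(1);
```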
Other
Connection resiliency
Automatically retries failed database commands. This is especially useful when connection to SQL Azure, where
transient failures are common.
Simplified service replacement
Makes it easier to replace internal services that EF uses.
Features included in EF Core 1.0
8/27/2018
Platforms
.NET Framework 4.5.1
Includes Console, WPF, WinForms, ASP.NET 4, etc.
.NET Standard 1.3
Including ASP.NET Core targeting both .NET Framework and .NET Core on Windows, OSX, and Linux.
Modelling
Basic modelling
Based on POCO entities with get/set properties of common scalar types ( int , string , etc.).
Relationships and navigation properties
One-to-many and One-to-zero-or-one relationships can be specified in the model based on a foreign key.
Navigation properties of simple collection or reference types can be associated with these relationships.
Built-in conventions
These build an initial model based on the shape of the entity classes.
Fluent API
Allows you to override the OnModelCreating method on your context to further configure the model that was
discovered by convention.
Data annotations
Are attributes that can be added to your entity classes/properties and will influence the EF model. For example,
adding [Required] will let EF know that a property is required.
Relational Table mapping
Allows entities to be mapped to tables/columns.
Key value generation
Including client-side generation and database generation.
Database generated values
Allows for values to be generated by the database on insert (default values) or update (computed columns).
Sequences in SQL Server
Allows for sequence objects to be defined in the model.
Unique constraints
Allows for the definition of alternate keys and the ability to define relationships that target that key.
Indexes
Defining indexes in the model automatically introduces indexes in the database. Unique indexes are also
supported.
Shadow state properties
Allows for properties to be defined in the model that are not declared and are not stored in the .NET class but can
be tracked and updated by EF Core. Commonly used for foreign key properties when exposing these in the object
is not desired.
Table-Per-Hierarchy inheritance pattern
Allows entities in an inheritance hierarchy to be saved to a single table using a discriminator column to identify
the entity type for a given record in the database.
Model validation
Detects invalid patterns in the model and provides helpful error messages.
Change tracking
Snapshot change tracking
Allows changes in entities to be detected automatically by comparing current state against a copy (snapshot) of the
original state.
Notification change tracking
Allows your entities to notify the change tracker when property values are modified.
Accessing tracked state
Via DbContext.Entry and DbContext.ChangeTracker .
Attaching detached entities/graphs
The new DbContext.AttachGraph API helps re-attach entities to a context in order to save new/modified entities.
Saving data
Basic save functionality
Allows changes to entity instances to be persisted to the database.
Optimistic Concurrency
Protects against overwriting changes made by another user since data was fetched from the database.
Async SaveChanges
Can free up the current thread to process other requests while the database processes the commands issued from
SaveChanges .
Database Transactions
Means that SaveChanges is always atomic (meaning it either completely succeeds, or no changes are made to the
database). There are also transaction related APIs to allow sharing transactions between context instances etc.
Relational: Batching of statements
Provides better performance by batching up multiple INSERT/UPDATE/DELETE commands into a single roundtrip
to the database.
Query
Basic LINQ support
Provides the ability to use LINQ to retrieve data from the database.
Mixed client/server evaluation
Enables queries to contain logic that cannot be evaluated in the database, and must therefore be evaluated after
the data is retrieved into memory.
NoTracking queries
Enables quicker query execution when the context does not need to monitor for changes to the entity
instances (useful when the results are read-only).
Eager loading
Provides the Include and ThenInclude methods to identify related data that should also be fetched when
querying.
Async query
Can free up the current thread (and its associated resources) to process other requests while the database
processes the query.
Raw SQL queries
Provides the DbSet.FromSql method to use raw SQL queries to fetch data. These queries can also be further
composed using LINQ.
Database providers
SQL Server
Connects to Microsoft SQL Server 2008 onwards.
SQLite
Connects to a SQLite 3 database.
In-Memory
Is designed to easily enable testing without connecting to a real database.
3rd party providers
Several providers are available for other database engines. See Database Providers for a complete list.
Upgrading from EF Core 1.0 RC1 to 1.0 RC2
8/27/2018
This article provides guidance for moving an application built with the RC1 packages to RC2.
You will need to completely remove the RC1 packages and then install the RC2 ones. Here is the mapping
for some common packages.
Namespaces
Along with package names, namespaces changed from Microsoft.Data.Entity.* to
Microsoft.EntityFrameworkCore.* . You can handle this change with a find/replace of using Microsoft.Data.Entity
with using Microsoft.EntityFrameworkCore .
If you want to adopt the new naming strategy, we would recommend successfully completing the rest of the
upgrade steps and then removing the code and creating a migration to apply the table renames.
services.AddEntityFramework()
.AddSqlServer()
.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlServer(Configuration["ConnectionStrings:DefaultConnection"]));
services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlServer(Configuration["ConnectionStrings:DefaultConnection"]));
You also need to add a constructor to your derived context that takes context options and passes them to the base
constructor. This is needed because we removed some of the scary magic that snuck them in behind the scenes:
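The constructor described here looks like this:

```csharp
public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }
}
```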
Passing in an IServiceProvider
If you have RC1 code that passes an IServiceProvider to the context, this has now moved to DbContextOptions ,
rather than being a separate constructor parameter. Use DbContextOptionsBuilder.UseInternalServiceProvider(...)
to set the service provider.
Testing
The most common scenario for doing this was to control the scope of an InMemory database when testing. See the
updated Testing article for an example of doing this with RC2.
Resolving Internal Services from Application Service Provider (ASP.NET Core Projects Only)
If you have an ASP.NET Core application and you want EF to resolve internal services from the application service
provider, there is an overload of AddDbContext that allows you to configure this:
services.AddEntityFrameworkSqlServer()
.AddDbContext<ApplicationDbContext>((serviceProvider, options) =>
options.UseSqlServer(Configuration["ConnectionStrings:DefaultConnection"])
.UseInternalServiceProvider(serviceProvider));
WARNING
We recommend allowing EF to internally manage its own services, unless you have a reason to combine the internal EF
services into your application service provider. The main reason you may want to do this is to use your application service
provider to replace services that EF uses internally.
"tools": {
"Microsoft.EntityFrameworkCore.Tools": {
"version": "1.0.0-preview1-final",
"imports": [
"portable-net45+win8+dnxcore50",
"portable-net45+win8"
]
}
}
TIP
If you use Visual Studio, you can now use Package Manager Console to run EF commands for ASP.NET Core projects (this
was not supported in RC1). You still need to register the commands in the tools section of project.json to do this.
Package Ix-Async 1.2.5 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Ix-Async 1.2.5
supports:
- net40 (.NETFramework,Version=v4.0)
- net45 (.NETFramework,Version=v4.5)
- portable-net45+win8+wp8 (.NETPortable,Version=v0.0,Profile=Profile78)
Package Remotion.Linq 2.0.2 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package
Remotion.Linq 2.0.2 supports:
- net35 (.NETFramework,Version=v3.5)
- net40 (.NETFramework,Version=v4.0)
- net45 (.NETFramework,Version=v4.5)
- portable-net45+win8+wp8+wpa81 (.NETPortable,Version=v0.0,Profile=Profile259)
The workaround is to manually import the portable profile "portable-net451+win8". This forces NuGet to treat
binaries that match this profile as compatible with .NET Standard, even though they are not.
Although "portable-net451+win8" is not 100% compatible with .NET Standard, it is compatible enough for the
transition from PCL to .NET Standard. Imports can be removed when EF's dependencies eventually upgrade to
.NET Standard.
Multiple frameworks can be added to "imports" in array syntax. Other imports may be necessary if you add
additional libraries to your project.
{
"frameworks": {
"netcoreapp1.0": {
"imports": ["dnxcore50", "portable-net451+win8"]
}
}
}
This article provides guidance for moving an application built with the RC2 packages to 1.0.0 RTM.
Package Versions
The names of the top level packages that you would typically install into an application did not change between
RC2 and RTM.
You need to upgrade the installed packages to the RTM versions:
Runtime packages (for example, Microsoft.EntityFrameworkCore.SqlServer ) changed from 1.0.0-rc2-final
to 1.0.0 .
The Microsoft.EntityFrameworkCore.Tools package changed from 1.0.0-preview1-final to
1.0.0-preview2-final . Note that tooling is still pre-release.
{
"frameworks": {
"netcoreapp1.0": {
"imports": ["dnxcore50", "portable-net451+win8"]
}
}
}
NOTE
As of version 1.0 RTM, the .NET Core SDK no longer supports project.json or developing .NET Core applications using
Visual Studio 2015. We recommend you migrate from project.json to csproj. If you are using Visual Studio, we recommend
you upgrade to Visual Studio 2017.
You need to manually add binding redirects to the UWP project. Create a file named App.config in the project root
folder and add redirects to the correct assembly versions.
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="System.IO.FileSystem.Primitives"
publicKeyToken="b03f5f7f11d50a3a"
culture="neutral" />
<bindingRedirect oldVersion="4.0.0.0"
newVersion="4.0.1.0"/>
</dependentAssembly>
<dependentAssembly>
<assemblyIdentity name="System.Threading.Overlapped"
publicKeyToken="b03f5f7f11d50a3a"
culture="neutral" />
<bindingRedirect oldVersion="4.0.0.0"
newVersion="4.0.1.0"/>
</dependentAssembly>
</assemblyBinding>
</runtime>
</configuration>
Upgrading applications from previous versions to EF
Core 2.0
9/18/2018
We have taken the opportunity to significantly refine our existing APIs and behaviors in 2.0. There are a few
improvements that can require modifying existing application code, although we believe that for the majority of
applications the impact will be low, in most cases requiring just recompilation and minimal guided changes to
replace obsolete APIs.
Updating an existing application to EF Core 2.0 may require:
1. Upgrading the target .NET implementation of the application to one that supports .NET Standard 2.0. See
Supported .NET Implementations for more details.
2. Identifying a provider for the target database which is compatible with EF Core 2.0. See EF Core 2.0 requires a
2.0 database provider below.
3. Upgrading all the EF Core packages (runtime and tooling) to 2.0. Refer to Installing EF Core for more
details.
4. Making any necessary code changes to compensate for the breaking changes described in the rest of this
document.
A new design-time hook has been added in ASP.NET Core 2.0's default template. The static Program.BuildWebHost
method enables EF Core to access the application's service provider at design time. If you are upgrading an
ASP.NET Core 1.x application, you will need to update the Program class to resemble the following.
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace AspNetCoreDotNetCore2._0App
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>()
                .Build();
    }
}
The adoption of this new pattern when updating applications to 2.0 is highly recommended and is required in
order for product features like Entity Framework Core Migrations to work. The other common alternative is to
implement IDesignTimeDbContextFactory<TContext>.
IDbContextFactory renamed
In order to support diverse application patterns and give users more control over how their DbContext is used at
design time, we have, in the past, provided the IDbContextFactory<TContext> interface. At design-time, the EF Core
tools will discover implementations of this interface in your project and use it to create DbContext objects.
This interface had a very general name which misled some users into trying to re-use it for other DbContext -creating
scenarios. They were caught off guard when the EF Tools then tried to use their implementation at design-time and
caused commands like Update-Database or dotnet ef database update to fail.
In order to communicate the strong design-time semantics of this interface, we have renamed it to
IDesignTimeDbContextFactory<TContext> .
For the 2.0 release the IDbContextFactory<TContext> still exists but is marked as obsolete.
DbContextFactoryOptions removed
Because of the ASP.NET Core 2.0 changes described above, we found that DbContextFactoryOptions was no longer
needed on the new IDesignTimeDbContextFactory<TContext> interface. Here are the alternatives you should be using
instead.
DbContextFactoryOptions member   Recommended replacement
ApplicationBasePath              AppContext.BaseDirectory
ContentRootPath                  Directory.GetCurrentDirectory()
EnvironmentName                  Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")
Instead of using methods like ForSqlServerToTable , extension methods are now available to write conditional code
based on the current provider in use. For example:
modelBuilder.Entity<User>().ToTable(
Database.IsSqlServer() ? "SqlServerName" : "OtherName");
Note that this change only applies to APIs/metadata that is defined for all relational providers. The API and
metadata remains the same when it is specific to only a single provider. For example, clustered indexes are specific
to SQL Server, so ForSqlServerIsClustered and .SqlServer().IsClustered() must still be used.
optionsBuilder.UseInMemoryDatabase("MyDatabase");
This creates/uses a database with the name “MyDatabase”. If UseInMemoryDatabase is called again with the same
name, then the same in-memory database will be used, allowing it to be shared by multiple context instances.
Properties marked as ValueGenerated.OnAddOrUpdate (for example, for computed columns) will by default ignore
any value currently set on the property. This means that a store-generated value will always be obtained regardless
of whether any value has been set or modified on the tracked entity. This can be changed by setting a different
BeforeSaveBehavior or AfterSaveBehavior.
Installing EF Core
A summary of the steps necessary to add EF Core to your application in different platforms and popular IDEs.
Step-by-step Tutorials
These introductory tutorials require no previous knowledge of Entity Framework Core or any particular IDE. They
will take you step-by-step through the creation of a simple application that queries and saves data from a database.
We have provided tutorials to get you started on various operating systems and application types.
Entity Framework Core can create a model based on an existing database, or create a database for you based on
your model. There are tutorials that demonstrate both of these approaches.
.NET Core Console Apps
New Database
ASP.NET Core Apps
New Database
Existing Database
EF Core and Razor Pages
Universal Windows Platform (UWP) Apps
New Database
.NET Framework Apps
New Database
Existing Database
NOTE
These tutorials and the accompanying samples have been updated to use EF Core 2.1. However, in the majority of cases it
should be possible to create applications that use previous releases, with minimal modification to the instructions.
Installing Entity Framework Core
2/6/2019
Prerequisites
EF Core is a .NET Standard 2.0 library. So EF Core requires a .NET implementation that supports .NET
Standard 2.0 to run. EF Core can also be referenced by other .NET Standard 2.0 libraries.
For example, you can use EF Core to develop apps that target .NET Core. Building .NET Core apps requires
the .NET Core SDK. Optionally, you can also use a development environment like Visual Studio, Visual
Studio for Mac, or Visual Studio Code. For more information, check Getting Started with .NET Core.
You can use EF Core to develop applications that target .NET Framework 4.6.1 or later on Windows, using
Visual Studio. The latest version of Visual Studio is recommended. If you want to use an older version, like
Visual Studio 2015, make sure you upgrade the NuGet client to version 3.6.0 to work with .NET Standard
2.0 libraries.
EF Core can run on other .NET implementations like Xamarin and .NET Native, but in practice those
implementations have runtime limitations that may affect how well EF Core works in your app. For more
information, see .NET implementations supported by EF Core.
Finally, different database providers may require specific database engine versions, .NET implementations,
or operating systems. Make sure an EF Core database provider is available that supports the right
environment for your application.
You can indicate a specific version in the dotnet add package command, using the -v modifier. For
example, to install EF Core 2.2.0 packages, append -v 2.2.0 to the command.
For more information, see .NET command-line interface (CLI) tools.
Visual Studio NuGet Package Manager Dialog
From the Visual Studio menu, select Project > Manage NuGet Packages
Click on the Browse or the Updates tab
To install or update the SQL Server provider, select the Microsoft.EntityFrameworkCore.SqlServer package,
and confirm.
For more information, see NuGet Package Manager Dialog.
Visual Studio NuGet Package Manager Console
From the Visual Studio menu, select Tools > NuGet Package Manager > Package Manager Console
To install the SQL Server provider, run the following command in the Package Manager Console:
Install-Package Microsoft.EntityFrameworkCore.SqlServer
Although you can also use the dotnet ef commands from the Package Manager Console, it's recommended to
use the Package Manager Console tools when you're using Visual Studio:
They automatically work with the current project selected in the PMC in Visual Studio, without requiring
manually switching directories.
They automatically open files generated by the commands in Visual Studio after the command is
completed.
Get the .NET Core CLI tools
.NET Core CLI tools require the .NET Core SDK, mentioned earlier in Prerequisites.
The dotnet ef commands are included in current versions of the .NET Core SDK, but to enable the commands on
a specific project, you have to install the Microsoft.EntityFrameworkCore.Design package:
dotnet add package Microsoft.EntityFrameworkCore.Design
IMPORTANT
Always use the version of the tools package that matches the major version of the runtime packages.
Applications that target .NET Framework may need changes to work with .NET Standard 2.0 libraries:
Edit the project file and make sure the following entry appears in the initial property group:
<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
For test projects, also make sure the following entry is present:
<GenerateBindingRedirectsOutputType>true</GenerateBindingRedirectsOutputType>
Getting Started with EF Core on .NET Core
8/27/2018 • 2 minutes to read
These 101 tutorials require no previous knowledge of Entity Framework Core or Visual Studio. They will take you
step-by-step through creating a simple .NET Core Console Application that queries and saves data from a
database. The tutorials can be completed on any platform supported by .NET Core (Windows, macOS, Linux, etc.).
You can find the .NET Core documentation at docs.microsoft.com/dotnet/articles/core.
Getting Started with EF Core on .NET Core Console
App with a New database
6/25/2019 • 3 minutes to read
In this tutorial, you create a .NET Core console app that performs data access against a SQLite database using
Entity Framework Core. You use migrations to create the database from the model. See ASP.NET Core - New
database for a Visual Studio version using ASP.NET Core MVC.
View this article's sample on GitHub.
Prerequisites
The .NET Core 2.1 SDK
cd ConsoleApp.SQLite/
namespace ConsoleApp.SQLite
{
    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseSqlite("Data Source=blogging.db");
        }
    }
}
Tip: In a real application, you put each class in a separate file and put the connection string in a configuration file or
environment variable. To keep the tutorial simple, everything is contained in one file.
namespace ConsoleApp.SQLite
{
public class Program
{
public static void Main()
{
using (var db = new BloggingContext())
{
db.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/adonet" });
var count = db.SaveChanges();
Console.WriteLine("{0} records saved to database", count);
Console.WriteLine();
Console.WriteLine("All blogs in database:");
foreach (var blog in db.Blogs)
{
Console.WriteLine(" - {0}", blog.Url);
}
}
}
}
}
Test the app from the console. See the Visual Studio note to run the app from Visual Studio.
dotnet run
One blog is saved to the database and the details of all blogs are displayed in the console.
ConsoleApp.SQLite>dotnet run
1 records saved to database
Additional Resources
Tutorial: Get started with EF Core on ASP.NET Core with a new database using SQLite
Tutorial: Get started with Razor Pages in ASP.NET Core
Tutorial: Razor Pages with Entity Framework Core in ASP.NET Core
Getting Started with EF Core on ASP.NET Core
8/27/2018 • 2 minutes to read
These 101 tutorials require no previous knowledge of Entity Framework Core or Visual Studio. They will take you
step-by-step through creating a simple ASP.NET Core application that queries and saves data from a database.
You can choose a tutorial that creates a model based on an existing database, or creates a database for you based
on your model.
You can find the ASP.NET Core documentation at Introduction to ASP.NET Core.
NOTE
These tutorials and the accompanying samples have been updated to use EF Core 2.0 (with the exception of the UWP
tutorial, which still uses EF Core 1.1). However, in the majority of cases it should be possible to create applications that use
previous releases, with minimal modification to the instructions.
Getting Started with EF Core on ASP.NET Core with a
New database
5/31/2019 • 6 minutes to read
In this tutorial, you build an ASP.NET Core MVC application that performs basic data access using Entity
Framework Core. The tutorial uses migrations to create the database from the data model.
You can follow the tutorial by using Visual Studio 2017 on Windows, or by using the .NET Core CLI on Windows,
macOS, or Linux.
View this article's sample on GitHub:
Visual Studio 2017 with SQL Server
.NET Core CLI with SQLite.
Prerequisites
Install the following software:
Visual Studio
.NET Core CLI
Visual Studio 2017 version 15.7 or later with these workloads:
ASP.NET and web development (under Web & Cloud)
.NET Core cross-platform development (under Other Toolsets)
.NET Core 2.1 SDK.
using Microsoft.EntityFrameworkCore;
using System.Collections.Generic;
namespace EFGetStarted.AspNetCore.NewDb.Models
{
public class BloggingContext : DbContext
{
public BloggingContext(DbContextOptions<BloggingContext> options)
: base(options)
{ }
A production app would typically put each class in a separate file. For the sake of simplicity, this tutorial puts these
classes in one file.
using EFGetStarted.AspNetCore.NewDb.Models;
using Microsoft.EntityFrameworkCore;
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
A production app would typically put the connection string in a configuration file or environment variable. For the
sake of simplicity, this tutorial defines it in code. See Connection Strings for more information.
Add-Migration InitialCreate
Update-Database
If you get an error stating The term 'add-migration' is not recognized as the name of a cmdlet , close and
reopen Visual Studio.
The Add-Migration command scaffolds a migration to create the initial set of tables for the model. The
Update-Database command creates the database and applies the new migration to it.
Create a controller
Scaffold a controller and views for the Blog entity.
Visual Studio
.NET Core CLI
Right-click on the Controllers folder in Solution Explorer and select Add > Controller.
Select MVC Controller with views, using Entity Framework and click Add.
Set Model class to Blog and Data context class to BloggingContext.
Click Add.
The scaffolding engine creates the following files:
A controller (Controllers/BlogsController.cs)
Razor views for Create, Delete, Details, Edit, and Index pages (Views/Blogs/*.cshtml)
In this tutorial, you build an ASP.NET Core MVC application that performs basic data access using Entity
Framework Core. You reverse engineer an existing database to create an Entity Framework model.
View this article's sample on GitHub.
Prerequisites
Install the following software:
Visual Studio 2017 version 15.7 or later with these workloads:
ASP.NET and web development (under Web & Cloud)
.NET Core cross-platform development (under Other Toolsets)
.NET Core 2.1 SDK.
USE [Blogging];
GO
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models
If you receive an error stating The term 'Scaffold-DbContext' is not recognized as the name of a cmdlet , then
close and reopen Visual Studio.
TIP
You can specify which tables you want to generate entities for by adding the -Tables argument to the command above.
For example, -Tables Blog,Post .
The reverse engineering process created entity classes (Blog.cs and Post.cs) and a derived context
(BloggingContext.cs) based on the schema of the existing database.
The entity classes are simple C# objects that represent the data you will be querying and saving. Here are the
Blog and Post entity classes:
using System;
using System.Collections.Generic;
namespace EFGetStarted.AspNetCore.ExistingDb.Models
{
public partial class Blog
{
    public Blog()
    {
        Post = new HashSet<Post>();
    }
    public int BlogId { get; set; }
    public string Url { get; set; }
    public ICollection<Post> Post { get; set; }
}
using System;
using System.Collections.Generic;
namespace EFGetStarted.AspNetCore.ExistingDb.Models
{
public partial class Post
{
    public int PostId { get; set; }
    public int BlogId { get; set; }
    public string Content { get; set; }
    public string Title { get; set; }
    public Blog Blog { get; set; }
}
TIP
To enable lazy loading, you can make navigation properties virtual (Blog.Post and Post.Blog).
The context represents a session with the database and allows you to query and save instances of the entity
classes.
modelBuilder.Entity<Post>(entity =>
{
entity.HasOne(d => d.Blog)
.WithMany(p => p.Post)
.HasForeignKey(d => d.BlogId);
});
}
}
using EFGetStarted.AspNetCore.ExistingDb.Models;
using Microsoft.EntityFrameworkCore;
Now you can use the AddDbContext(...) method to register it as a service.
Locate the ConfigureServices(...) method
Add the following highlighted code to register the context as a service
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.Configure<CookiePolicyOptions>(options =>
{
// This lambda determines whether user consent for non-essential cookies is needed for a given request.
options.CheckConsentNeeded = context => true;
options.MinimumSameSitePolicy = SameSiteMode.None;
});
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
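A minimal sketch of the registration itself, assuming the same LocalDB connection string used earlier with Scaffold-DbContext:

```csharp
// Register the scaffolded BloggingContext with the DI container so it can
// be injected into controllers. The connection string mirrors the one
// passed to Scaffold-DbContext.
services.AddDbContext<BloggingContext>(options =>
    options.UseSqlServer(
        "Server=(localdb)\\mssqllocaldb;Database=Blogging;Trusted_Connection=True;"));
```

This goes in ConfigureServices alongside the AddMvc call shown above.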
TIP
In a real application you would typically put the connection string in a configuration file or environment variable. For the sake
of simplicity, this tutorial has you define it in code. For more information, see Connection Strings.
These 101-level tutorials require no previous knowledge of Entity Framework Core or Visual Studio. They will take
you step-by-step through creating a simple Universal Windows Platform (UWP) application that queries and saves
data from a database using EF Core.
More details about developing UWP applications can be found in the UWP documentation.
Getting Started with EF Core on Universal Windows
Platform (UWP) with a New Database
10/25/2018 • 7 minutes to read
In this tutorial, you build a Universal Windows Platform (UWP) application that performs basic data access against
a local SQLite database using Entity Framework Core.
View this article's sample on GitHub.
Prerequisites
Windows 10 Fall Creators Update (10.0; Build 16299) or later.
Visual Studio 2017 version 15.7 or later with the Universal Windows Platform Development workload.
.NET Core 2.1 SDK or later.
IMPORTANT
This tutorial uses Entity Framework Core migrations commands to create and update the schema of the database. These
commands don't work directly with UWP projects. For this reason, the application's data model is placed in a shared library
project, and a separate .NET Core console application is used to run the commands.
using Microsoft.EntityFrameworkCore;
using System.Collections.Generic;
namespace Blogging.Model
{
public class BloggingContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
This command scaffolds a migration that creates the initial set of database tables for your data model.
IMPORTANT
Set the target and minimum versions to at least Windows 10 Fall Creators Update (10.0; build 16299.0). Previous
versions of Windows 10 do not support .NET Standard 2.0, which is required by Entity Framework Core.
using Blogging.Model;
using Microsoft.EntityFrameworkCore;
using System;
using Windows.ApplicationModel;
using Windows.ApplicationModel.Activation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;
namespace Blogging.UWP
{
/// <summary>
/// Provides application-specific behavior to supplement the default Application class.
/// </summary>
sealed partial class App : Application
{
/// <summary>
/// Initializes the singleton application object. This is the first line of authored code
/// executed, and as such is the logical equivalent of main() or WinMain().
/// </summary>
public App()
{
this.InitializeComponent();
this.Suspending += OnSuspending;
}
/// <summary>
/// Invoked when the application is launched normally by the end user. Other entry points
/// will be used such as when the application is launched to open a specific file.
/// </summary>
/// <param name="e">Details about the launch request and process.</param>
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
Frame rootFrame = Window.Current.Content as Frame;
// Do not repeat app initialization when the Window already has content,
// just ensure that the window is active
if (rootFrame == null)
{
// Create a Frame to act as the navigation context and navigate to the first page
rootFrame = new Frame();
rootFrame.NavigationFailed += OnNavigationFailed;
if (e.PreviousExecutionState == ApplicationExecutionState.Terminated)
{
//TODO: Load state from previously suspended application
}
if (e.PrelaunchActivated == false)
{
if (rootFrame.Content == null)
{
// When the navigation stack isn't restored navigate to the first page,
// configuring the new page by passing required information as a navigation
// parameter
rootFrame.Navigate(typeof(MainPage), e.Arguments);
}
// Ensure the current window is active
Window.Current.Activate();
}
}
/// <summary>
/// Invoked when Navigation to a certain page fails
/// </summary>
/// <param name="sender">The Frame which failed navigation</param>
/// <param name="e">Details about the navigation failure</param>
void OnNavigationFailed(object sender, NavigationFailedEventArgs e)
{
throw new Exception("Failed to load Page " + e.SourcePageType.FullName);
}
/// <summary>
/// Invoked when application execution is being suspended. Application state is saved
/// without knowing whether the application will be terminated or resumed with the contents
/// of memory still intact.
/// </summary>
/// <param name="sender">The source of the suspend request.</param>
/// <param name="e">Details about the suspend request.</param>
private void OnSuspending(object sender, SuspendingEventArgs e)
{
var deferral = e.SuspendingOperation.GetDeferral();
//TODO: Save application state and stop any background activity
deferral.Complete();
}
}
}
}
TIP
If you change your model, use the Add-Migration command to scaffold a new migration to apply the corresponding
changes to the database. Any pending migrations will be applied to the local database on each device when the application
starts.
EF Core uses a __EFMigrationsHistory table in the database to keep track of which migrations have already been applied
to the database.
<Page
x:Class="Blogging.UWP.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:Blogging.UWP"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
Loaded="Page_Loaded">
namespace Blogging.UWP
{
/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// </summary>
public sealed partial class MainPage : Page
{
public MainPage()
{
this.InitializeComponent();
}
private void Page_Loaded(object sender, RoutedEventArgs e)
{
    using (var db = new BloggingContext())
    {
        Blogs.ItemsSource = db.Blogs.ToList();
}
}
}
}
Next steps
For compatibility and performance information that you should know when using EF Core with UWP, see .NET
implementations supported by EF Core.
Check out other articles in this documentation to learn more about Entity Framework Core features.
Getting Started with EF Core on .NET Framework
8/27/2018 • 2 minutes to read
These 101-level tutorials require no previous knowledge of Entity Framework Core or Visual Studio. They will take
you step-by-step through creating a simple .NET Framework Console Application that queries and saves data from
a database. You can choose a tutorial that creates a model based on an existing database, or creates a database for
you based on your model.
You can use the techniques learned in these tutorials in any application that targets the .NET Framework, including
WPF and WinForms.
NOTE
These tutorials and the accompanying samples have been updated to use EF Core 2.1. However, in the majority of cases it
should be possible to create applications that use previous releases, with minimal modification to the instructions.
Getting started with EF Core on .NET Framework
with a New Database
8/27/2018 • 3 minutes to read
In this tutorial, you build a console application that performs basic data access against a Microsoft SQL Server
database using Entity Framework. You use migrations to create the database from a model.
View this article's sample on GitHub.
Prerequisites
Visual Studio 2017 version 15.7 or later
Later in this tutorial you use some Entity Framework Tools to maintain the database. So install the tools package as
well.
Run Install-Package Microsoft.EntityFrameworkCore.Tools
namespace ConsoleApp.NewDb
{
public class BloggingContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
TIP
In a real application you would put each class in a separate file and put the connection string in a configuration file or
environment variable. For the sake of simplicity, everything is in a single code file for this tutorial.
TIP
If you make changes to the model, you can use the Add-Migration command to scaffold a new migration to make the
corresponding schema changes to the database. Once you have checked the scaffolded code (and made any required
changes), you can use the Update-Database command to apply the changes to the database.
EF uses a __EFMigrationsHistory table in the database to keep track of which migrations have already been applied to the
database.
Use the model
You can now use the model to perform data access.
Open Program.cs
Replace the contents of the file with the following code
using System;
namespace ConsoleApp.NewDb
{
class Program
{
static void Main(string[] args)
{
using (var db = new BloggingContext())
{
db.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/adonet" });
var count = db.SaveChanges();
Console.WriteLine("{0} records saved to database", count);
Console.WriteLine();
Console.WriteLine("All blogs in database:");
foreach (var blog in db.Blogs)
{
Console.WriteLine(" - {0}", blog.Url);
}
}
}
}
}
Additional Resources
EF Core on .NET Framework with an existing database
EF Core on .NET Core with a new database - SQLite - a cross-platform console EF tutorial.
Getting started with EF Core on .NET Framework
with an Existing Database
11/15/2018 • 4 minutes to read
In this tutorial, you build a console application that performs basic data access against a Microsoft SQL Server
database using Entity Framework. You create an Entity Framework model by reverse engineering an existing
database.
View this article's sample on GitHub.
Prerequisites
Visual Studio 2017 version 15.7 or later
USE [Blogging];
GO
In the next step, you use some Entity Framework Tools to reverse engineer the database. So install the tools
package as well.
Run Install-Package Microsoft.EntityFrameworkCore.Tools
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer
TIP
You can specify the tables to generate entities for by adding the -Tables argument to the command above. For example,
-Tables Blog,Post .
The reverse engineering process created entity classes (Blog and Post) and a derived context (BloggingContext)
based on the schema of the existing database.
The entity classes are simple C# objects that represent the data you will be querying and saving. Here are the
Blog and Post entity classes:
using System;
using System.Collections.Generic;
namespace ConsoleApp.ExistingDb
{
public partial class Blog
{
    public Blog()
    {
        Post = new HashSet<Post>();
    }
    public int BlogId { get; set; }
    public string Url { get; set; }
    public ICollection<Post> Post { get; set; }
}
using System;
using System.Collections.Generic;
namespace ConsoleApp.ExistingDb
{
public partial class Post
{
    public int PostId { get; set; }
    public int BlogId { get; set; }
    public string Content { get; set; }
    public string Title { get; set; }
    public Blog Blog { get; set; }
}
TIP
To enable lazy loading, you can make navigation properties virtual (Blog.Post and Post.Blog).
The context represents a session with the database. It has methods that you can use to query and save instances of
the entity classes.
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata;
namespace ConsoleApp.ExistingDb
{
public partial class BloggingContext : DbContext
{
public BloggingContext()
{
}
modelBuilder.Entity<Post>(entity =>
{
entity.HasOne(d => d.Blog)
.WithMany(p => p.Post)
.HasForeignKey(d => d.BlogId);
});
}
}
}
namespace ConsoleApp.ExistingDb
{
class Program
{
static void Main(string[] args)
{
using (var db = new BloggingContext())
{
db.Blog.Add(new Blog { Url = "http://blogs.msdn.com/adonet" });
var count = db.SaveChanges();
Console.WriteLine("{0} records saved to database", count);
Console.WriteLine();
Console.WriteLine("All blogs in database:");
foreach (var blog in db.Blog)
{
Console.WriteLine(" - {0}", blog.Url);
}
}
}
}
}
Next steps
For more information about how to scaffold a context and entity classes, see the following articles:
Reverse Engineering
Entity Framework Core tools reference - .NET CLI
Entity Framework Core tools reference - Package Manager Console
Connection Strings
6/24/2019 • 2 minutes to read
Most database providers require some form of connection string to connect to the database. Sometimes this
connection string contains sensitive information that needs to be protected. You may also need to change the
connection string as you move your application between environments, such as development, testing, and
production.
<configuration>
<connectionStrings>
<add name="BloggingDatabase"
connectionString="Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" />
</connectionStrings>
</configuration>
TIP
The providerName setting is not required on EF Core connection strings stored in App.config because the database
provider is configured via code.
You can then read the connection string using the ConfigurationManager API in your context's OnConfiguring
method. You may need to add a reference to the System.Configuration framework assembly to be able to use this
API.
optionsBuilder.UseSqlServer(ConfigurationManager.ConnectionStrings["BloggingDatabase"].ConnectionString);
}
}
ASP.NET Core
In ASP.NET Core the configuration system is very flexible, and the connection string could be stored in
appsettings.json , an environment variable, the user secret store, or another configuration source. See the
Configuration section of the ASP.NET Core documentation for more details. The following example shows the
connection string stored in appsettings.json .
{
"ConnectionStrings": {
"BloggingDatabase": "Server=(localdb)\\mssqllocaldb;Database=EFGetStarted.ConsoleApp.NewDb;Trusted_Connection=True;"
},
}
The context is typically configured in Startup.cs with the connection string being read from configuration. Note
the GetConnectionString() method looks for a configuration value whose key is
ConnectionStrings:<connection string name> . You need to import the Microsoft.Extensions.Configuration
namespace to use this extension method.
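A typical registration in ConfigureServices, assuming the appsettings.json entry above and a Configuration property on the Startup class:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // GetConnectionString("BloggingDatabase") reads the
    // "ConnectionStrings:BloggingDatabase" configuration value.
    services.AddDbContext<BloggingContext>(options =>
        options.UseSqlServer(
            Configuration.GetConnectionString("BloggingDatabase")));
}
```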
TIP
You can view this article's sample on GitHub.
Other applications
EF Core logging currently requires an ILoggerFactory, which is itself configured with one or more ILoggerProvider
instances. Common providers are shipped in the following packages:
Microsoft.Extensions.Logging.Console: A simple console logger.
Microsoft.Extensions.Logging.AzureAppServices: Supports Azure App Services 'Diagnostics logs' and 'Log
stream' features.
Microsoft.Extensions.Logging.Debug: Logs to a debugger monitor using System.Diagnostics.Debug.WriteLine().
Microsoft.Extensions.Logging.EventLog: Logs to Windows Event Log.
Microsoft.Extensions.Logging.EventSource: Supports EventSource/EventListener.
Microsoft.Extensions.Logging.TraceSource: Logs to a trace listener using
System.Diagnostics.TraceSource.TraceEvent().
NOTE
The following code sample uses a ConsoleLoggerProvider constructor that has been obsoleted in version 2.2. Proper
replacements for obsolete logging APIs will be available in version 3.0. In the meantime, it is safe to ignore and suppress the
warnings.
After installing the appropriate package(s), the application should create a singleton/global instance of a
LoggerFactory. For example, using the console logger:
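A sketch of such a singleton, using the 2.x-era ConsoleLoggerProvider constructor that the note above flags as obsolete (the filter here passes everything through):

```csharp
public static readonly LoggerFactory MyLoggerFactory
    = new LoggerFactory(new[]
    {
        // (category, logLevel) => true: log every category at every level
        new ConsoleLoggerProvider((_, __) => true, true)
    });
```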
This singleton/global instance should then be registered with EF Core on the DbContextOptionsBuilder . For
example:
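For instance, in the context's OnConfiguring method (the connection string is assumed to be defined elsewhere):

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    => optionsBuilder
        // Reuse one factory; creating a new factory per context instance leaks resources
        .UseLoggerFactory(MyLoggerFactory)
        .UseSqlServer(connectionString);
```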
The easiest way to filter what is logged is to configure it when registering the ILoggerProvider. For example:
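For example, to log only the SQL commands EF Core executes (DbLoggerCategory.Database.Command is EF Core's category name for command execution):

```csharp
public static readonly LoggerFactory MyLoggerFactory
    = new LoggerFactory(new[]
    {
        // Only pass through Information-level messages in the
        // database-command category (the executed SQL).
        new ConsoleLoggerProvider((category, level)
            => category == DbLoggerCategory.Database.Command.Name
               && level == LogLevel.Information, true)
    });
```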
Connection resiliency automatically retries failed database commands. The feature can be used with any database
by supplying an "execution strategy", which encapsulates the logic necessary to detect failures and retry
commands. EF Core providers can supply execution strategies tailored to their specific database failure conditions
and optimal retry policies.
As an example, the SQL Server provider includes an execution strategy that is specifically tailored to SQL Server
(including SQL Azure). It is aware of the exception types that can be retried and has sensible defaults for maximum
retries, delay between retries, etc.
An execution strategy is specified when configuring the options for your context. This is typically in the
OnConfiguring method of your derived context:
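For SQL Server that looks roughly like this (EnableRetryOnFailure enables the provider's built-in retrying execution strategy; the connection string is illustrative):

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer(
        "Server=(localdb)\\mssqllocaldb;Database=Blogging;Trusted_Connection=True;",
        options => options.EnableRetryOnFailure());
}
```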
The solution is to manually invoke the execution strategy with a delegate representing everything that needs to be
executed. If a transient failure occurs, the execution strategy will invoke the delegate again.
using (var db = new BloggingContext())
{
    var strategy = db.Database.CreateExecutionStrategy();
    strategy.Execute(() =>
{
using (var context = new BloggingContext())
{
using (var transaction = context.Database.BeginTransaction())
{
context.Blogs.Add(new Blog {Url = "http://blogs.msdn.com/dotnet"});
context.SaveChanges();
transaction.Commit();
}
}
});
}
using (var context1 = new BloggingContext())
{
    var strategy = context1.Database.CreateExecutionStrategy();
    strategy.Execute(() =>
{
using (var context2 = new BloggingContext())
{
using (var transaction = new TransactionScope())
{
context2.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/dotnet" });
context2.SaveChanges();
context1.SaveChanges();
transaction.Complete();
}
}
});
}
Transaction commit failure and the idempotency issue
In general, when there is a connection failure the current transaction is rolled back. However, if the connection is
dropped while the transaction is being committed the resulting state of the transaction is unknown. See this blog
post for more details.
By default, the execution strategy retries the operation as if the transaction had been rolled back. If the transaction
was in fact committed, the retry will result in an exception when the new database state is incompatible, or can lead
to data corruption when the operation does not rely on a particular state, for example when inserting a new row
with auto-generated key values.
There are several ways to deal with this.
Option 1 - Do (almost) nothing
The likelihood of a connection failure during transaction commit is low so it may be acceptable for your application
to just fail if this condition actually occurs.
However, you need to avoid using store-generated keys in order to ensure that an exception is thrown instead of
adding a duplicate row. Consider using a client-generated GUID value or a client-side value generator.
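A sketch of the client-generated key idea (the Guid property and its initializer are an illustration, not code from the tutorials):

```csharp
public class Blog
{
    // Key generated on the client: retrying an already-committed insert
    // throws a primary-key violation instead of silently adding a
    // duplicate row with a fresh store-generated key.
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Url { get; set; }
}
```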
Option 2 - Rebuild application state
1. Discard the current DbContext .
2. Create a new DbContext and restore the state of your application from the database.
3. Inform the user that the last operation might not have been completed successfully.
Option 3 - Add state verification
For most of the operations that change the database state it is possible to add code that checks whether it
succeeded. EF provides an extension method to make this easier - IExecutionStrategy.ExecuteInTransaction .
This method begins and commits a transaction and also accepts a function in the verifySucceeded parameter that
is invoked when a transient error occurs during the transaction commit.
strategy.ExecuteInTransaction(db,
operation: context =>
{
context.SaveChanges(acceptAllChangesOnSuccess: false);
},
verifySucceeded: context => context.Blogs.AsNoTracking().Any(b => b.BlogId == blogToAdd.BlogId));
db.ChangeTracker.AcceptAllChanges();
}
NOTE
Here SaveChanges is invoked with acceptAllChangesOnSuccess set to false to avoid changing the state of the Blog
entity to Unchanged if SaveChanges succeeds. This allows the same operation to be retried if the commit fails and
the transaction is rolled back.
strategy.ExecuteInTransaction(db,
operation: context =>
{
context.SaveChanges(acceptAllChangesOnSuccess: false);
},
verifySucceeded: context => context.Transactions.AsNoTracking().Any(t => t.Id == transaction.Id));
db.ChangeTracker.AcceptAllChanges();
db.Transactions.Remove(transaction);
db.SaveChanges();
}
NOTE
Make sure that the context used for the verification has an execution strategy defined as the connection is likely to fail again
during verification if it failed during transaction commit.
Testing
8/27/2018 • 2 minutes to read
You may want to test components using something that approximates connecting to the real database, without the
overhead of actual database I/O operations.
There are two main options for doing this:
SQLite in-memory mode allows you to write efficient tests against a provider that behaves like a relational
database.
The InMemory provider is a lightweight provider that has minimal dependencies, but does not always behave
like a relational database.
Testing with SQLite
4/14/2019 • 3 minutes to read
SQLite has an in-memory mode that allows you to use SQLite to write tests against a relational database, without
the overhead of actual database operations.
TIP
You can view this article's sample on GitHub.
using System.Collections.Generic;
using System.Linq;
namespace BusinessLogic
{
public class BlogService
{
private BloggingContext _context;
TIP
DbContextOptions<TContext> tells the context all of its settings, such as which database to connect to. This is the same
object that is built by running the OnConfiguring method in your context.
Writing tests
The key to testing with this provider is the ability to tell the context to use SQLite, and to control the scope of the
in-memory database. The scope of the database is controlled by opening and closing the connection: the database
exists for as long as the connection is open. Typically you want a clean database for each test method.
TIP
To use SqliteConnection() and the .UseSqlite() extension method, reference the NuGet package
Microsoft.EntityFrameworkCore.Sqlite.
using BusinessLogic;
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Linq;
namespace TestProject.SQLite
{
[TestClass]
public class BlogServiceTests
{
[TestMethod]
public void Add_writes_to_database()
{
// In-memory database only exists while the connection is open
var connection = new SqliteConnection("DataSource=:memory:");
connection.Open();
try
{
var options = new DbContextOptionsBuilder<BloggingContext>()
.UseSqlite(connection)
.Options;
// Create the schema in the database
using (var context = new BloggingContext(options))
{
context.Database.EnsureCreated();
}
// Run the test against one instance of the context
using (var context = new BloggingContext(options))
{
var service = new BlogService(context);
service.Add("http://sample.com");
}
// Use a separate instance of the context to verify correct data was saved to database
using (var context = new BloggingContext(options))
{
Assert.AreEqual(1, context.Blogs.Count());
Assert.AreEqual("http://sample.com", context.Blogs.Single().Url);
}
}
finally
{
connection.Close();
}
}
[TestMethod]
public void Find_searches_url()
{
// In-memory database only exists while the connection is open
var connection = new SqliteConnection("DataSource=:memory:");
connection.Open();
try
{
var options = new DbContextOptionsBuilder<BloggingContext>()
.UseSqlite(connection)
.Options;
// Insert seed data into the database using one instance of the context
using (var context = new BloggingContext(options))
{
context.Blogs.Add(new Blog { Url = "http://sample.com/cats" });
context.Blogs.Add(new Blog { Url = "http://sample.com/catfish" });
context.Blogs.Add(new Blog { Url = "http://sample.com/dogs" });
context.SaveChanges();
}
// Use a clean instance of the context to run the test
using (var context = new BloggingContext(options))
{
var service = new BlogService(context);
var result = service.Find("cat");
Assert.AreEqual(2, result.Count());
}
}
finally
{
connection.Close();
}
}
}
}
Testing with InMemory
The InMemory provider is useful when you want to test components using something that approximates
connecting to the real database, without the overhead of actual database operations.
TIP
You can view this article's sample on GitHub.
TIP
For many test purposes these differences will not matter. However, if you want to test against something that behaves more
like a true relational database, then consider using SQLite in-memory mode.
namespace BusinessLogic
{
public class BlogService
{
private BloggingContext _context;
TIP
If you are using ASP.NET Core, then you should not need this code since your database provider is already configured
outside of the context (in Startup.cs).
TIP
DbContextOptions<TContext> tells the context all of its settings, such as which database to connect to. This is the same
object that is built by running the OnConfiguring method in your context.
Writing tests
The key to testing with this provider is the ability to tell the context to use the InMemory provider, and control the
scope of the in-memory database. Typically you want a clean database for each test method.
Here is an example of a test class that uses the InMemory database. Each test method specifies a unique database
name, meaning each method has its own InMemory database.
TIP
To use the .UseInMemoryDatabase() extension method, reference the NuGet package
Microsoft.EntityFrameworkCore.InMemory.
using BusinessLogic;
using Microsoft.EntityFrameworkCore;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Linq;
namespace TestProject.InMemory
{
[TestClass]
public class BlogServiceTests
{
[TestMethod]
public void Add_writes_to_database()
{
var options = new DbContextOptionsBuilder<BloggingContext>()
.UseInMemoryDatabase(databaseName: "Add_writes_to_database")
.Options;
// Run the test against one instance of the context
using (var context = new BloggingContext(options))
{
var service = new BlogService(context);
service.Add("http://sample.com");
}
// Use a separate instance of the context to verify correct data was saved to database
using (var context = new BloggingContext(options))
{
Assert.AreEqual(1, context.Blogs.Count());
Assert.AreEqual("http://sample.com", context.Blogs.Single().Url);
}
}
[TestMethod]
public void Find_searches_url()
{
var options = new DbContextOptionsBuilder<BloggingContext>()
.UseInMemoryDatabase(databaseName: "Find_searches_url")
.Options;
// Insert seed data into the database using one instance of the context
using (var context = new BloggingContext(options))
{
context.Blogs.Add(new Blog { Url = "http://sample.com/cats" });
context.Blogs.Add(new Blog { Url = "http://sample.com/catfish" });
context.Blogs.Add(new Blog { Url = "http://sample.com/dogs" });
context.SaveChanges();
}
// Use a clean instance of the context to run the test
using (var context = new BloggingContext(options))
{
var service = new BlogService(context);
var result = service.Find("cat");
Assert.AreEqual(2, result.Count());
}
}
}
}
Configuring a DbContext
This article shows basic patterns for configuring a DbContext via a DbContextOptions to connect to a database
using a specific EF Core provider and optional behaviors.
Configuring DbContextOptions
DbContext must have an instance of DbContextOptions in order to perform any work. The DbContextOptions
instance carries configuration information such as:
The database provider to use, typically selected by invoking a method such as UseSqlServer or UseSqlite .
These extension methods require the corresponding provider package, such as
Microsoft.EntityFrameworkCore.SqlServer or Microsoft.EntityFrameworkCore.Sqlite . The methods are defined in
the Microsoft.EntityFrameworkCore namespace.
Any necessary connection string or identifier of the database instance, typically passed as an argument to the
provider selection method mentioned above
Any provider-level optional behavior selectors, typically also chained inside the call to the provider selection
method
Any general EF Core behavior selectors, typically chained after or before the provider selector method
The following example configures the DbContextOptions to use the SQL Server provider, a connection contained in
the connectionString variable, a provider-level command timeout, and an EF Core behavior selector that makes all
queries executed in the DbContext no-tracking by default:
optionsBuilder
.UseSqlServer(connectionString, providerOptions=>providerOptions.CommandTimeout(60))
.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking);
NOTE
Provider selector methods and other behavior selector methods mentioned above are extension methods on
DbContextOptions or provider-specific option classes. In order to have access to these extension methods you may need to
have a namespace (typically Microsoft.EntityFrameworkCore ) in scope and include additional package dependencies in the
project.
The DbContextOptions can be supplied to the DbContext by overriding the OnConfiguring method or externally via
a constructor argument.
If both are used, OnConfiguring is applied last and can overwrite options supplied to the constructor argument.
Constructor argument
Context code with constructor:
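A minimal sketch of a context that accepts its options through the constructor (the BloggingContext and Blog names are illustrative):

```csharp
public class BloggingContext : DbContext
{
    // The options are built externally (for example by the application
    // or a dependency injection container) and passed in here
    public BloggingContext(DbContextOptions<BloggingContext> options)
        : base(options)
    {
    }

    public DbSet<Blog> Blogs { get; set; }
}
```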
TIP
The base constructor of DbContext also accepts the non-generic version of DbContextOptions , but using the non-generic
version is not recommended for applications with multiple context types.
OnConfiguring
Context code with OnConfiguring :
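A sketch of a context that configures itself by overriding OnConfiguring (the SQLite provider and connection string are illustrative):

```csharp
public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Hard-codes the provider and connection; illustrative only
        optionsBuilder.UseSqlite("Data Source=blog.db");
    }
}
```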
TIP
This approach does not lend itself to testing, unless the tests target the full database.
This requires adding a constructor argument to your DbContext type that accepts DbContextOptions<TContext> .
Context code:
...
}
A second operation started on this context before a previous operation completed. This is usually caused by
different threads using the same instance of DbContext, however instance members are not guaranteed to be
thread safe.
When concurrent access goes undetected, it can result in undefined behavior, application crashes and data
corruption.
There are common mistakes that can inadvertently cause concurrent access on the same DbContext instance:
Forgetting to await the completion of an asynchronous operation before starting any other operation on the
same DbContext
Asynchronous methods enable EF Core to initiate operations that access the database in a non-blocking way. But if
a caller does not await the completion of one of these methods, and instead proceeds to perform other operations on
the DbContext, the state of the DbContext can be (and very likely will be) corrupted.
More reading
Read Getting Started on ASP.NET Core for more information on using EF with ASP.NET Core.
Read Dependency Injection to learn more about using DI.
Read Testing for more information.
Creating and configuring a Model
4/18/2019
Entity Framework uses a set of conventions to build a model based on the shape of your entity classes. You can
specify additional configuration to supplement and/or override what was discovered by convention.
This article covers configuration that can be applied to a model targeting any data store and that which can be
applied when targeting any relational database. Providers may also enable configuration that is specific to a
particular data store. For documentation on provider specific configuration see the Database Providers section.
TIP
You can view this article’s sample on GitHub.
using Microsoft.EntityFrameworkCore;
namespace EFModeling.Configuring.FluentAPI.Samples.Required
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
namespace EFModeling.Configuring.DataAnnotations.Samples.Required
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
}
Including & Excluding Types
Including a type in the model means that EF has metadata about that type and will attempt to read and write
instances from/to the database.
Conventions
By convention, types that are exposed in DbSet properties on your context are included in your model. In addition,
types that are mentioned in the OnModelCreating method are also included. Finally, any types that are found by
recursively exploring the navigation properties of discovered types are also included in the model.
For example, in the following code listing all three types are discovered:
Blog because it is exposed in a DbSet property on the context
Post because it is discovered via the Blog.Posts navigation property
AuditEntry because it is mentioned in OnModelCreating
Data Annotations
You can use Data Annotations to exclude a type from the model.
using Microsoft.EntityFrameworkCore;
using System;
using System.ComponentModel.DataAnnotations.Schema;
namespace EFModeling.Configuring.DataAnnotations.Samples.IgnoreType
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
}
[NotMapped]
public class BlogMetadata
{
public DateTime LoadedFromDatabase { get; set; }
}
}
Fluent API
You can use the Fluent API to exclude a type from the model.
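A sketch of the call inside OnModelCreating, assuming the BlogMetadata type from the Data Annotations example above:

```csharp
// Exclude BlogMetadata from the model even though it is
// reachable from discovered types
modelBuilder.Ignore<BlogMetadata>();
```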
using Microsoft.EntityFrameworkCore;
using System;
namespace EFModeling.Configuring.FluentAPI.Samples.IgnoreType
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
Including & Excluding Properties
Including a property in the model means that EF has metadata about that property and will attempt to read and
write values from/to the database.
Conventions
By convention, public properties with a getter and a setter will be included in the model.
Data Annotations
You can use Data Annotations to exclude a property from the model.
using Microsoft.EntityFrameworkCore;
using System;
using System.ComponentModel.DataAnnotations.Schema;
namespace EFModeling.Configuring.DataAnnotations.Samples.IgnoreProperty
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
}
[NotMapped]
public DateTime LoadedFromDatabase { get; set; }
}
}
Fluent API
You can use the Fluent API to exclude a property from the model.
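A sketch of the call inside OnModelCreating, assuming the LoadedFromDatabase property shown in the Data Annotations example above:

```csharp
// Exclude the property from the model; it will not be
// read from or written to the database
modelBuilder.Entity<Blog>()
    .Ignore(b => b.LoadedFromDatabase);
```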
using Microsoft.EntityFrameworkCore;
using System;
namespace EFModeling.Configuring.FluentAPI.Samples.IgnoreProperty
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
Keys (primary)
A key serves as the primary unique identifier for each entity instance. When using a relational database this maps
to the concept of a primary key. You can also configure a unique identifier that is not the primary key (see Alternate
Keys for more information).
One of the following methods can be used to set up a primary key.
Conventions
By convention, a property named Id or <type name>Id will be configured as the key of an entity.
class Car
{
public string Id { get; set; }
class Car
{
public string CarId { get; set; }
Data Annotations
You can use Data Annotations to configure a single property to be the key of an entity.
using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations;
namespace EFModeling.Configuring.DataAnnotations.Samples.KeySingle
{
class MyContext : DbContext
{
public DbSet<Car> Cars { get; set; }
}
class Car
{
[Key]
public string LicensePlate { get; set; }
using Microsoft.EntityFrameworkCore;
namespace EFModeling.Configuring.FluentAPI.Samples.KeySingle
{
class MyContext : DbContext
{
public DbSet<Car> Cars { get; set; }
class Car
{
public string LicensePlate { get; set; }
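With the Fluent API, the configuration inside OnModelCreating corresponding to the Data Annotations example might look like this:

```csharp
// Make LicensePlate the primary key of Car
modelBuilder.Entity<Car>()
    .HasKey(c => c.LicensePlate);
```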
You can also use the Fluent API to configure multiple properties to be the key of an entity (known as a composite
key). Composite keys can only be configured using the Fluent API; conventions will never set up a composite key,
and you cannot use Data Annotations to configure one.
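A sketch of a composite key configured in OnModelCreating, using the Car type's State and LicensePlate properties:

```csharp
// State and LicensePlate together form the primary key
modelBuilder.Entity<Car>()
    .HasKey(c => new { c.State, c.LicensePlate });
```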
using Microsoft.EntityFrameworkCore;
namespace EFModeling.Configuring.FluentAPI.Samples.KeyComposite
{
class MyContext : DbContext
{
public DbSet<Car> Cars { get; set; }
class Car
{
public string State { get; set; }
public string LicensePlate { get; set; }
WARNING
How the value is generated for added entities will depend on the database provider being used. Database providers may
automatically set up value generation for some property types, but others may require you to manually configure how the
value is generated.
For example, when using SQL Server, values will be automatically generated for GUID properties (using the SQL Server
sequential GUID algorithm). However, if you specify that a DateTime property is generated on add, then you must set up a
way for the values to be generated. One way to do this is to configure a default value of GETDATE(); see Default Values.
UPDATE dbo.Blogs
SET LastUpdated = GETDATE()
WHERE BlogId = @Id
END
Conventions
By convention, non-composite primary keys of type short, int, long, or Guid are set up to have values generated on
add. All other properties are set up with no value generation.
Data Annotations
No value generation (Data Annotations)
WARNING
This just lets EF know that values are generated for added or updated entities; it does not guarantee that EF will set up
the actual mechanism to generate values. See the Value generated on add or update section for more details.
Fluent API
You can use the Fluent API to change the value generation pattern for a given property.
No value generation (Fluent API)
modelBuilder.Entity<Blog>()
.Property(b => b.BlogId)
.ValueGeneratedNever();
modelBuilder.Entity<Blog>()
.Property(b => b.Inserted)
.ValueGeneratedOnAdd();
WARNING
ValueGeneratedOnAdd() just lets EF know that values are generated for added entities; it does not guarantee that EF will
set up the actual mechanism to generate values. See the Value generated on add section for more details.
modelBuilder.Entity<Blog>()
.Property(b => b.LastUpdated)
.ValueGeneratedOnAddOrUpdate();
WARNING
This just lets EF know that values are generated for added or updated entities; it does not guarantee that EF will set up
the actual mechanism to generate values. See the Value generated on add or update section for more details.
Required and Optional Properties
4/18/2019
A property is considered optional if it is valid for it to contain null . If null is not a valid value to be assigned to a
property then it is considered to be a required property.
Conventions
By convention, a property whose CLR type can contain null will be configured as optional ( string , int? , byte[] ,
etc.). Properties whose CLR type cannot contain null will be configured as required ( int , decimal , bool , etc.).
NOTE
A property whose CLR type cannot contain null cannot be configured as optional. The property will always be considered
required by Entity Framework.
Data Annotations
You can use Data Annotations to indicate that a property is required.
using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations;
namespace EFModeling.Configuring.DataAnnotations.Samples.Required
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
}
Fluent API
You can use the Fluent API to indicate that a property is required.
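A sketch of the call inside OnModelCreating, assuming the Blog type has a Url property as in the earlier listings:

```csharp
// Mark Url as required; the mapped column becomes non-nullable
modelBuilder.Entity<Blog>()
    .Property(b => b.Url)
    .IsRequired();
```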
using Microsoft.EntityFrameworkCore;
namespace EFModeling.Configuring.FluentAPI.Samples.Required
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
Maximum Length
Configuring a maximum length provides a hint to the data store about the appropriate data type to use for a given
property. Maximum length only applies to array data types, such as string and byte[] .
NOTE
Entity Framework does not do any validation of maximum length before passing data to the provider. It is up to the provider
or data store to validate as appropriate. For example, when targeting SQL Server, exceeding the maximum length will result in
an exception, as the data type of the underlying column will not allow the excess data to be stored.
Conventions
By convention, it is left up to the database provider to choose an appropriate data type for properties. For
properties that have a length, the database provider will generally choose a data type that allows for the longest
length of data. For example, Microsoft SQL Server will use nvarchar(max) for string properties (or
nvarchar(450) if the column is used as a key).
Data Annotations
You can use Data Annotations to configure a maximum length for a property. In this example, targeting SQL Server,
this would result in the nvarchar(500) data type being used.
using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations;
namespace EFModeling.Configuring.DataAnnotations.Samples.MaxLength
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
}
Fluent API
You can use the Fluent API to configure a maximum length for a property. In this example, targeting SQL Server,
this would result in the nvarchar(500) data type being used.
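A sketch of the call inside OnModelCreating:

```csharp
// Limit Url to 500 characters; on SQL Server this maps to nvarchar(500)
modelBuilder.Entity<Blog>()
    .Property(b => b.Url)
    .HasMaxLength(500);
```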
using Microsoft.EntityFrameworkCore;
namespace EFModeling.Configuring.FluentAPI.Samples.MaxLength
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
Concurrency Tokens
NOTE
This page documents how to configure concurrency tokens. See Handling Concurrency Conflicts for a detailed explanation of
how concurrency control works on EF Core and examples of how to handle concurrency conflicts in your application.
Properties configured as concurrency tokens are used to implement optimistic concurrency control.
Conventions
By convention, properties are never configured as concurrency tokens.
Data Annotations
You can use Data Annotations to configure a property as a concurrency token.
[ConcurrencyCheck]
public string LastName { get; set; }
Fluent API
You can use the Fluent API to configure a property as a concurrency token.
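A sketch of the call inside OnModelCreating, mirroring the [ConcurrencyCheck] example above (a Person entity with a LastName property is assumed):

```csharp
// LastName participates in optimistic concurrency checks
modelBuilder.Entity<Person>()
    .Property(p => p.LastName)
    .IsConcurrencyToken();
```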
Timestamp/row version
A timestamp is a property where a new value is generated by the database every time a row is inserted or updated.
The property is also treated as a concurrency token. This ensures you will get an exception if anyone else has
modified a row that you are trying to update since you queried for the data.
How this is achieved is up to the database provider being used. For SQL Server, timestamp is usually used on a
byte[] property, which will be set up as a ROWVERSION column in the database.
Conventions
By convention, properties are never configured as timestamps.
Data Annotations
You can use Data Annotations to configure a property as a timestamp.
[Timestamp]
public byte[] Timestamp { get; set; }
}
Fluent API
You can use the Fluent API to configure a property as a timestamp.
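A sketch of the call inside OnModelCreating, mirroring the [Timestamp] example above:

```csharp
// Timestamp is database-generated on insert/update and
// is also used as a concurrency token
modelBuilder.Entity<Blog>()
    .Property(b => b.Timestamp)
    .IsRowVersion();
```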
Shadow properties are properties that are not defined in your .NET entity class but are defined for that entity type
in the EF Core model. The value and state of these properties is maintained purely in the Change Tracker.
Shadow properties are useful when there is data in the database that should not be exposed on the mapped entity
types. They are most often used for foreign key properties, where the relationship between two entities is
represented by a foreign key value in the database, but the relationship is managed on the entity types using
navigation properties between the entity types.
Shadow property values can be obtained and changed through the ChangeTracker API.
context.Entry(myBlog).Property("LastUpdated").CurrentValue = DateTime.Now;
Shadow properties can be referenced in LINQ queries via the EF.Property static method.
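For example, a query can sort by a shadow property (reusing the LastUpdated shadow property from the ChangeTracker example above):

```csharp
// EF.Property<T> refers to the shadow property by name
var blogs = context.Blogs
    .OrderBy(b => EF.Property<DateTime>(b, "LastUpdated"));
```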
Conventions
Shadow properties can be created by convention when a relationship is discovered but no foreign key property is
found in the dependent entity class. In this case, a shadow foreign key property will be introduced. The shadow
foreign key property will be named <navigation property name><principal key property name> (the navigation on
the dependent entity, which points to the principal entity, is used for the naming). If the principal key property
name includes the name of the navigation property, then the name will just be <principal key property name> . If
there is no navigation property on the dependent entity, then the principal type name is used in its place.
For example, the following code listing will result in a BlogId shadow property being introduced to the Post
entity.
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
}
Data Annotations
Shadow properties cannot be created with Data Annotations.
Fluent API
You can use the Fluent API to configure shadow properties. Once you have called the string overload of Property
you can chain any of the configuration calls you would for other properties.
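A sketch of the call inside OnModelCreating, introducing the LastUpdated shadow property used in the earlier examples:

```csharp
// Define a DateTime shadow property named "LastUpdated" on Blog
modelBuilder.Entity<Blog>()
    .Property<DateTime>("LastUpdated");
```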
If the name supplied to the Property method matches the name of an existing property (a shadow property or
one defined on the entity class), then the code will configure that existing property rather than introducing a new
shadow property.
Relationships
A relationship defines how two entities relate to each other. In a relational database, this is represented by a foreign
key constraint.
NOTE
Most of the samples in this article use a one-to-many relationship to demonstrate concepts. For examples of one-to-one and
many-to-many relationships see the Other Relationship Patterns section at the end of the article.
Definition of Terms
There are a number of terms used to describe relationships:
Dependent entity: This is the entity that contains the foreign key property(s). Sometimes referred to as the
'child' of the relationship.
Principal entity: This is the entity that contains the primary/alternate key property(s). Sometimes referred
to as the 'parent' of the relationship.
Foreign key: The property(s) in the dependent entity that is used to store the values of the principal key
property that the entity is related to.
Principal key: The property(s) that uniquely identifies the principal entity. This may be the primary key or
an alternate key.
Navigation property: A property defined on the principal and/or dependent entity that contains a
reference(s) to the related entity(s).
Collection navigation property: A navigation property that contains references to many related
entities.
Reference navigation property: A navigation property that holds a reference to a single related
entity.
Inverse navigation property: When discussing a particular navigation property, this term refers to
the navigation property on the other end of the relationship.
The following code listing shows a one-to-many relationship between Blog and Post
Conventions
By convention, a relationship will be created when a navigation property is discovered on a type. A property is
considered a navigation property if the type it points to cannot be mapped as a scalar type by the current
database provider.
NOTE
Relationships that are discovered by convention will always target the primary key of the principal entity. To target an
alternate key, additional configuration must be performed using the Fluent API.
Cascade Delete
By convention, cascade delete will be set to Cascade for required relationships and ClientSetNull for optional
relationships. Cascade means dependent entities are also deleted. ClientSetNull means that dependent entities that
are not loaded into memory will remain unchanged and must be manually deleted, or updated to point to a valid
principal entity. For entities that are loaded into memory, EF Core will attempt to set the foreign key properties to
null.
See the Required and Optional Relationships section for the difference between required and optional
relationships.
See Cascade Delete for more details about the different delete behaviors and the defaults used by convention.
Data Annotations
There are two data annotations that can be used to configure relationships, [ForeignKey] and [InverseProperty] .
These are available in the System.ComponentModel.DataAnnotations.Schema namespace.
[ForeignKey]
You can use the Data Annotations to configure which property should be used as the foreign key property for a
given relationship. This is typically done when the foreign key property is not discovered by convention.
using Microsoft.EntityFrameworkCore;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;
namespace EFModeling.Configuring.DataAnnotations.Samples.Relationships.ForeignKey
{
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
}
#region Entities
public class Blog
{
public int BlogId { get; set; }
public string Url { get; set; }
[ForeignKey("BlogForeignKey")]
public Blog Blog { get; set; }
}
#endregion
}
TIP
The [ForeignKey] annotation can be placed on either navigation property in the relationship. It does not need to go on
the navigation property in the dependent entity class.
[InverseProperty]
You can use the Data Annotations to configure how navigation properties on the dependent and principal entities
pair up. This is typically done when there is more than one pair of navigation properties between two entity types.
using Microsoft.EntityFrameworkCore;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;
namespace EFModeling.Configuring.DataAnnotations.Samples.Relationships.InverseProperty
{
class MyContext : DbContext
{
public DbSet<Post> Posts { get; set; }
public DbSet<User> Users { get; set; }
}
#region Entities
public class Post
{
public int PostId { get; set; }
public string Title { get; set; }
public string Content { get; set; }
[InverseProperty("Author")]
public List<Post> AuthoredPosts { get; set; }
[InverseProperty("Contributor")]
public List<Post> ContributedToPosts { get; set; }
}
#endregion
}
Fluent API
To configure a relationship in the Fluent API, you start by identifying the navigation properties that make up the
relationship. HasOne or HasMany identifies the navigation property on the entity type you are beginning the
configuration on. You then chain a call to WithOne or WithMany to identify the inverse navigation. HasOne / WithOne
are used for reference navigation properties and HasMany / WithMany are used for collection navigation properties.
using Microsoft.EntityFrameworkCore;
using System.Collections.Generic;
namespace EFModeling.Configuring.FluentAPI.Samples.Relationships.NoForeignKey
{
#region Model
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
namespace EFModeling.Configuring.FluentAPI.Samples.Relationships.OneNavigation
{
#region Model
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
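As a sketch, the one-to-many relationship between Blog and Post can be configured in OnModelCreating by pairing the two navigations:

```csharp
// Post.Blog is the reference navigation; Blog.Posts is the
// inverse collection navigation
modelBuilder.Entity<Post>()
    .HasOne(p => p.Blog)
    .WithMany(b => b.Posts);
```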
Foreign Key
You can use the Fluent API to configure which property should be used as the foreign key property for a given
relationship.
using Microsoft.EntityFrameworkCore;
using System.Collections.Generic;
namespace EFModeling.Configuring.DataAnnotations.Samples.Relationships.ForeignKey
{
#region Model
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
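A sketch of the call inside OnModelCreating (the BlogForeignKey property name is illustrative):

```csharp
// Use BlogForeignKey instead of the conventional BlogId
modelBuilder.Entity<Post>()
    .HasOne(p => p.Blog)
    .WithMany(b => b.Posts)
    .HasForeignKey(p => p.BlogForeignKey);
```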
The following code listing shows how to configure a composite foreign key.
using Microsoft.EntityFrameworkCore;
using System;
using System.Collections.Generic;
namespace EFModeling.Configuring.DataAnnotations.Samples.Relationships.CompositeForeignKey
{
#region Model
class MyContext : DbContext
{
public DbSet<Car> Cars { get; set; }
modelBuilder.Entity<RecordOfSale>()
.HasOne(s => s.Car)
.WithMany(c => c.SaleHistory)
.HasForeignKey(s => new { s.CarState, s.CarLicensePlate });
}
}
You can use the string overload of HasForeignKey(...) to configure a shadow property as a foreign key (see
Shadow Properties for more information). We recommend explicitly adding the shadow property to the model
before using it as a foreign key (as shown below).
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
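A sketch of both steps inside OnModelCreating (the BlogForeignKey name is illustrative):

```csharp
// First, explicitly add the shadow property to the model
modelBuilder.Entity<Post>()
    .Property<int>("BlogForeignKey");

// Then use the string overload to make it the foreign key
modelBuilder.Entity<Post>()
    .HasOne(p => p.Blog)
    .WithMany(b => b.Posts)
    .HasForeignKey("BlogForeignKey");
```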
namespace EFModeling.Configuring.FluentAPI.Samples.Relationships.NoNavigation
{
#region Model
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
Principal Key
If you want the foreign key to reference a property other than the primary key, you can use the Fluent API to
configure the principal key property for the relationship. The property that you configure as the principal key will
automatically be set up as an alternate key (see Alternate Keys for more information).
class MyContext : DbContext
{
public DbSet<Car> Cars { get; set; }
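A sketch of the call inside OnModelCreating, assuming a Car.LicensePlate property and a RecordOfSale.CarLicensePlate property:

```csharp
// The foreign key references Car.LicensePlate rather than
// the primary key of Car
modelBuilder.Entity<RecordOfSale>()
    .HasOne(s => s.Car)
    .WithMany(c => c.SaleHistory)
    .HasForeignKey(s => s.CarLicensePlate)
    .HasPrincipalKey(c => c.LicensePlate);
```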
The following code listing shows how to configure a composite principal key.
class MyContext : DbContext
{
public DbSet<Car> Cars { get; set; }
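A sketch using the State/LicensePlate and CarState/CarLicensePlate property pairs seen in the composite foreign key listing above:

```csharp
// The composite foreign key references the composite
// alternate key { State, LicensePlate } on Car
modelBuilder.Entity<RecordOfSale>()
    .HasOne(s => s.Car)
    .WithMany(c => c.SaleHistory)
    .HasForeignKey(s => new { s.CarState, s.CarLicensePlate })
    .HasPrincipalKey(c => new { c.State, c.LicensePlate });
```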
WARNING
The order in which you specify principal key properties must match the order in which they are specified for the foreign key.
Cascade Delete
You can use the Fluent API to configure the cascade delete behavior for a given relationship explicitly.
See Cascade Delete on the Saving Data section for a detailed discussion of each option.
class MyContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
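A sketch of the call inside OnModelCreating, choosing the Cascade behavior for the Blog/Post relationship:

```csharp
// Deleting a Blog also deletes its Posts
modelBuilder.Entity<Post>()
    .HasOne(p => p.Blog)
    .WithMany(b => b.Posts)
    .OnDelete(DeleteBehavior.Cascade);
```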
When configuring the relationship with the Fluent API, you use the HasOne and WithOne methods.
When configuring the foreign key you need to specify the dependent entity type; notice the generic parameter
provided to HasForeignKey in the listing below. In a one-to-many relationship it is clear that the entity with the
reference navigation is the dependent and the one with the collection is the principal. But this is not so in a
one-to-one relationship, hence the need to define it explicitly.
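A sketch of a one-to-one configuration (the BlogImage type and BlogForeignKey property are illustrative):

```csharp
// The generic parameter on HasForeignKey names BlogImage
// as the dependent end of the one-to-one relationship
modelBuilder.Entity<Blog>()
    .HasOne(b => b.BlogImage)
    .WithOne(i => i.Blog)
    .HasForeignKey<BlogImage>(i => i.BlogForeignKey);
```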
Many-to-many
Many-to-many relationships without an entity class to represent the join table are not yet supported. However, you
can represent a many-to-many relationship by including an entity class for the join table and mapping two separate
one-to-many relationships.
class MyContext : DbContext
{
public DbSet<Post> Posts { get; set; }
public DbSet<Tag> Tags { get; set; }
modelBuilder.Entity<PostTag>()
.HasOne(pt => pt.Post)
.WithMany(p => p.PostTags)
.HasForeignKey(pt => pt.PostId);
modelBuilder.Entity<PostTag>()
.HasOne(pt => pt.Tag)
.WithMany(t => t.PostTags)
.HasForeignKey(pt => pt.TagId);
}
}
Indexes
Indexes are a common concept across many data stores. While their implementation in the data store may vary,
they are used to make lookups based on a column (or set of columns) more efficient.
Conventions
By convention, an index is created for each property (or set of properties) that is used as a foreign key.
Data Annotations
Indexes cannot be created using data annotations.
Fluent API
You can use the Fluent API to specify an index on a single property. By default, indexes are non-unique.
You can also specify that an index should be unique, meaning that no two entities can have the same value(s) for
the given property(s).
modelBuilder.Entity<Blog>()
.HasIndex(b => b.Url)
.IsUnique();
You can also specify an index over more than one column.
class MyContext : DbContext
{
public DbSet<Person> People { get; set; }
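The OnModelCreating call is missing from the listing above; an index over several columns is specified by passing an anonymous type to HasIndex (FirstName and LastName are assumed properties of Person):

```csharp
modelBuilder.Entity<Person>()
    .HasIndex(p => new { p.FirstName, p.LastName });
```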
TIP
There is only one index per distinct set of properties. If you use the Fluent API to configure an index on a set of properties
that already has an index defined, either by convention or previous configuration, then you will be changing the definition of
that index. This is useful if you want to further configure an index that was created by convention.
Alternate Keys
8/27/2018 • 2 minutes to read
An alternate key serves as an alternate unique identifier for each entity instance in addition to the primary key.
Alternate keys can be used as the target of a relationship. When using a relational database this maps to the
concept of a unique index/constraint on the alternate key column(s) and one or more foreign key constraints that
reference the column(s).
TIP
If you just want to enforce uniqueness of a column, then you want a unique index rather than an alternate key (see Indexes).
In EF, alternate keys provide greater functionality than unique indexes because they can be used as the target of a foreign
key.
Alternate keys are typically introduced for you when needed and you do not need to manually configure them.
See Conventions for more details.
Conventions
By convention, an alternate key is introduced for you when you identify a property, that is not the primary key, as
the target of a relationship.
Fluent API
You can use the Fluent API to configure a single property to be an alternate key.
class Car
{
public int CarId { get; set; }
public string LicensePlate { get; set; }
public string Make { get; set; }
public string Model { get; set; }
}
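The fluent configuration that accompanies this class is not shown in the excerpt; it would use HasAlternateKey:

```csharp
modelBuilder.Entity<Car>()
    .HasAlternateKey(c => c.LicensePlate);
```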
You can also use the Fluent API to configure multiple properties to be an alternate key (known as a composite
alternate key).
class Car
{
public int CarId { get; set; }
public string State { get; set; }
public string LicensePlate { get; set; }
public string Make { get; set; }
public string Model { get; set; }
}
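Again the accompanying configuration is missing from this excerpt; a composite alternate key is specified with an anonymous type:

```csharp
modelBuilder.Entity<Car>()
    .HasAlternateKey(c => new { c.State, c.LicensePlate });
```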
Inheritance
3/21/2019 • 2 minutes to read
Inheritance in the EF model is used to control how inheritance in the entity classes is represented in the database.
Conventions
By convention, it is up to the database provider to determine how inheritance will be represented in the database.
See Inheritance (Relational Database) for how this is handled with a relational database provider.
EF will only set up inheritance if two or more inherited types are explicitly included in the model. EF will not scan for
base or derived types that were not otherwise included in the model. You can include types in the model by
exposing a DbSet for each type in the inheritance hierarchy.
If you don't want to expose a DbSet for one or more entities in the hierarchy, you can use the Fluent API to ensure
they are included in the model. And if you don't rely on conventions, you can specify the base type explicitly using
HasBaseType .
NOTE
You can use .HasBaseType((Type)null) to remove an entity type from the hierarchy.
Data Annotations
You cannot use Data Annotations to configure inheritance.
Fluent API
The Fluent API for inheritance depends on the database provider you are using. See Inheritance (Relational
Database) for the configuration you can perform for a relational database provider.
Backing Fields
8/27/2018 • 3 minutes to read
NOTE
This feature is new in EF Core 1.1.
Backing fields allow EF to read and/or write to a field rather than a property. This can be useful when the class uses
encapsulation to restrict, or enhance the semantics around, access to the data by application code, but the value
should be read from and/or written to the database without going through those restrictions or enhancements.
Conventions
By convention, the following fields will be discovered as backing fields for a given property (listed in precedence
order). Fields are only discovered for properties that are included in the model. For more information on which
properties are included in the model, see Including & Excluding Properties.
_<camel-cased property name>
_<property name>
m_<camel-cased property name>
m_<property name>
When a backing field is configured, EF will write directly to that field when materializing entity instances from the
database (rather than using the property setter). If EF needs to read or write the value at other times, it will use the
property if possible. For example, if EF needs to update the value for a property, it will use the property setter if one
is defined. If the property is read-only, then it will write to the field.
Data Annotations
Backing fields cannot be configured with data annotations.
Fluent API
You can use the Fluent API to configure a backing field for a property.
class MyContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Blog>()
            .Property(b => b.Url)
            .HasField("_validatedUrl")
            .UsePropertyAccessMode(PropertyAccessMode.Field);
    }
}

public class Blog
{
    // Backing field: EF reads and writes this directly.
    private string _validatedUrl;

    public string Url => _validatedUrl;

    public void SetUrl(string url)
    {
        // Validation of the URL would run here (omitted in this excerpt).
        _validatedUrl = url;
    }
}
You can also give the property a name other than the field name. This name is then used when creating the model;
most notably, it will be used for the column name that the property is mapped to in the database.
When there is no property in the entity class, you can use the EF.Property(...) method in a LINQ query to refer
to the property that is conceptually part of the model.
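A sketch of such a query, assuming a conceptual "Url" property that exists only in the model (backed by a field on the class):

```csharp
var blogs = db.Blogs
    .OrderBy(b => EF.Property<string>(b, "Url"))
    .ToList();
```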
Value Conversions
NOTE
This feature is new in EF Core 2.1.
Value converters allow property values to be converted when reading from or writing to the database. This
conversion can be from one value to another of the same type (for example, encrypting strings) or from a value of
one type to a value of another type (for example, converting enum values to and from strings in the database.)
Fundamentals
Value converters are specified in terms of a ModelClrType and a ProviderClrType . The model type is the .NET type
of the property in the entity type. The provider type is the .NET type understood by the database provider. For
example, to save enums as strings in the database, the model type is the type of the enum, and the provider type is
String . These two types can be the same.
Conversions are defined using two Func expression trees: one from ModelClrType to ProviderClrType and the
other from ProviderClrType to ModelClrType . Expression trees are used so that they can be compiled into the
database access code for efficient conversions. For complex conversions, the expression tree may be a simple call
to a method that performs the conversion.
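The enum and entity type used in the listings below are not defined in this excerpt; presumably something like:

```csharp
public enum EquineBeast
{
    Donkey,
    Mule,
    Horse,
    Unicorn
}

public class Rider
{
    public int Id { get; set; }
    public EquineBeast Mount { get; set; }
}
```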
Conversions can then be defined in OnModelCreating to store the enum values as strings (for example, "Donkey",
"Mule", ...) in the database:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder
.Entity<Rider>()
.Property(e => e.Mount)
.HasConversion(
v => v.ToString(),
v => (EquineBeast)Enum.Parse(typeof(EquineBeast), v));
}
NOTE
A null value will never be passed to a value converter. This makes the implementation of conversions easier and allows
them to be shared amongst nullable and non-nullable properties.
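The listing below uses a converter variable whose definition is missing from this excerpt; it is a ValueConverter instance, presumably created along these lines:

```csharp
// A reusable conversion defined once as a ValueConverter instance.
var converter = new ValueConverter<EquineBeast, string>(
    v => v.ToString(),
    v => (EquineBeast)Enum.Parse(typeof(EquineBeast), v));
```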
modelBuilder
.Entity<Rider>()
.Property(e => e.Mount)
.HasConversion(converter);
This can be useful when multiple properties use the same conversion.
NOTE
There is currently no way to specify in one place that every property of a given type must use the same value converter. This
feature will be considered for a future release.
Built-in converters
EF Core ships with a set of pre-defined ValueConverter classes, found in the
Microsoft.EntityFrameworkCore.Storage.ValueConversion namespace. These are:
BoolToZeroOneConverter - Bool to zero and one
BoolToStringConverter - Bool to strings such as "Y" and "N"
BoolToTwoValuesConverter - Bool to any two values
BytesToStringConverter - Byte array to Base64-encoded string
CastingConverter - Conversions that require only a type cast
CharToStringConverter - Char to single character string
DateTimeOffsetToBinaryConverter - DateTimeOffset to binary-encoded 64-bit value
DateTimeOffsetToBytesConverter - DateTimeOffset to byte array
DateTimeOffsetToStringConverter - DateTimeOffset to string
DateTimeToBinaryConverter - DateTime to 64-bit value including DateTimeKind
DateTimeToStringConverter - DateTime to string
DateTimeToTicksConverter - DateTime to ticks
EnumToNumberConverter - Enum to underlying number
EnumToStringConverter - Enum to string
GuidToBytesConverter - Guid to byte array
GuidToStringConverter - Guid to string
NumberToBytesConverter - Any numerical value to byte array
NumberToStringConverter - Any numerical value to string
StringToBytesConverter - String to UTF8 bytes
TimeSpanToStringConverter - TimeSpan to string
TimeSpanToTicksConverter - TimeSpan to ticks
Notice that EnumToStringConverter is included in this list. This means that there is no need to specify the
conversion explicitly, as shown above. Instead, just use the built-in converter:
var converter = new EnumToStringConverter<EquineBeast>();

modelBuilder
.Entity<Rider>()
.Property(e => e.Mount)
.HasConversion(converter);
Note that all the built-in converters are stateless and so a single instance can be safely shared by multiple
properties.
Pre-defined conversions
For common conversions for which a built-in converter exists there is no need to specify the converter explicitly.
Instead, just configure which provider type should be used and EF will automatically use the appropriate built-in
converter. Enum to string conversions are used as an example above, but EF will actually do this automatically if
the provider type is configured:
modelBuilder
.Entity<Rider>()
.Property(e => e.Mount)
.HasConversion<string>();
The same thing can be achieved by explicitly specifying the column type. For example, if the entity type is defined
like so:
public class Rider
{
public int Id { get; set; }
[Column(TypeName = "nvarchar(24)")]
public EquineBeast Mount { get; set; }
}
Then the enum values will be saved as strings in the database without any further configuration in
OnModelCreating .
Limitations
There are a few known current limitations of the value conversion system:
As noted above, null cannot be converted.
There is currently no way to spread a conversion of one property to multiple columns or vice-versa.
Use of value conversions may impact the ability of EF Core to translate expressions to SQL. A warning will be
logged for such cases. Removal of these limitations is being considered for a future release.
Data Seeding
1/15/2019 • 3 minutes to read
Data seeding is the process of populating a database with an initial set of data.
There are several ways this can be accomplished in EF Core:
Model seed data
Manual migration customization
Custom initialization logic
Unlike in EF6, in EF Core, seeding data can be associated with an entity type as part of the model configuration.
Then EF Core migrations can automatically compute what insert, update or delete operations need to be applied
when upgrading the database to a new version of the model.
NOTE
Migrations only consider model changes when determining what operation should be performed to get the seed data into
the desired state. Thus any changes to the data performed outside of migrations might be lost or cause an error.
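The basic HasData listing is not included in this excerpt; seeding an entity type presumably looks like this (the Url value is illustrative):

```csharp
modelBuilder.Entity<Blog>().HasData(
    new Blog { BlogId = 1, Url = "http://sample.com" });
```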
To add entities that have a relationship, the foreign key values need to be specified:
modelBuilder.Entity<Post>().HasData(
new Post() { BlogId = 1, PostId = 1, Title = "First post", Content = "Test 1" });
If the entity type has any properties in shadow state, an anonymous class can be used to provide the values:
modelBuilder.Entity<Post>().HasData(
new { BlogId = 1, PostId = 2, Title = "Second post", Content = "Test 2" });
TIP
If you need to apply migrations as part of an automated deployment you can create a SQL script that can be previewed
before execution.
Alternatively, you can use context.Database.EnsureCreated() to create a new database containing the seed data, for
example for a test database or when using the in-memory provider or any non-relational database. Note that if the
database already exists, EnsureCreated() will neither update the schema nor seed the database. For
relational databases you shouldn't call EnsureCreated() if you plan to use Migrations.
Limitations of model seed data
This type of seed data is managed by migrations and the script to update the data that's already in the database
needs to be generated without connecting to the database. This imposes some restrictions:
The primary key value needs to be specified even if it's usually generated by the database. It will be used to
detect data changes between migrations.
Previously seeded data will be removed if the primary key is changed in any way.
Therefore this feature is most useful for static data that's not expected to change outside of migrations and does
not depend on anything else in the database, for example ZIP codes.
If your scenario includes any of the following it is recommended to use custom initialization logic described in the
last section:
Temporary data for testing
Data that depends on database state
Data that needs key values to be generated by the database, including entities that use alternate keys as the
identity
Data that requires custom transformation (that is not handled by value conversions), such as some password
hashing
Data that requires calls to external APIs, such as creating ASP.NET Core Identity roles and users
In a manually customized migration, data can be inserted directly with MigrationBuilder.InsertData:
migrationBuilder.InsertData(
table: "Blogs",
columns: new[] { "Url" },
values: new object[] { "http://generated.com" });
WARNING
The seeding code should not be part of the normal app execution as this can cause concurrency issues when multiple
instances are running and would also require the app to have permission to modify the database schema.
Depending on the constraints of your deployment the initialization code can be executed in different ways:
Running the initialization app locally
Deploying the initialization app with the main app, invoking the initialization routine and disabling or removing
the initialization app.
This can usually be automated by using publish profiles.
Entity types with constructors
5/8/2019 • 6 minutes to read
NOTE
This feature is new in EF Core 2.1.
Starting with EF Core 2.1, it is now possible to define a constructor with parameters and have EF Core call this
constructor when creating an instance of the entity. The constructor parameters can be bound to mapped
properties, or to various kinds of services to facilitate behaviors like lazy-loading.
NOTE
As of EF Core 2.1, all constructor binding is by convention. Configuration of specific constructors to use is planned for a
future release.
When EF Core creates instances of these types, such as for the results of a query, it will first call the default
parameterless constructor and then set each property to the value from the database. However, if EF Core finds a
parameterized constructor with parameter names and types that match those of mapped properties, then it will
instead call the parameterized constructor with values for those properties and will not set each property explicitly.
For example:
public class Blog
{
    public Blog(int id, string name, string author)
    {
        Id = id;
        Name = name;
        Author = author;
    }

    // Private setters: EF still treats these properties as read-write.
    public int Id { get; private set; }
    public string Name { get; private set; }
    public string Author { get; private set; }
}
EF Core sees a property with a private setter as read-write, which means that all properties are mapped as before
and the key can still be store-generated.
An alternative to using private setters is to make properties really read-only and add more explicit mapping in
OnModelCreating. Likewise, some properties can be removed completely and replaced with only fields. For
example, consider these entity types:
public class Blog
{
    private int _id;  // (rest of the class omitted in this excerpt)
}

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Post>(
b =>
{
b.HasKey("_id");
b.Property(e => e.Title);
b.Property(e => e.PostedOn);
});
}
Things to note:
The key "property" is now a field. It is not a readonly field so that store-generated keys can be used.
The other properties are read-only properties set only in the constructor.
If the primary key value is only ever set by EF or read from the database, then there is no need to include it in
the constructor. This leaves the key "property" as a simple field and makes it clear that it should not be set
explicitly when creating new blogs or posts.
NOTE
This code will result in compiler warning '169' indicating that the field is never used. This can be ignored since in reality EF
Core is using the field in an extralinguistic manner.
Injecting services
EF Core can also inject "services" into an entity type's constructor. For example, the following can be injected:
DbContext - the current context instance, which can also be typed as your derived DbContext type
ILazyLoader - the lazy-loading service--see the lazy-loading documentation for more details
Action<object, string> - a lazy-loading delegate--see the lazy-loading documentation for more details
IEntityType - the EF Core metadata associated with this entity type
NOTE
As of EF Core 2.1, only services known by EF Core can be injected. Support for injecting application services is being
considered for a future release.
For example, an injected DbContext can be used to selectively access the database to obtain information about
related entities without loading them all. In the example below this is used to obtain the number of posts in a blog
without loading the posts:
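The referenced example is missing from this excerpt; a sketch, where BloggingContext and the member names are assumptions:

```csharp
public class Blog
{
    public Blog()
    {
    }

    // EF Core calls this private constructor when materializing query
    // results, injecting the current context instance.
    private Blog(BloggingContext context)
    {
        Context = context;
    }

    private BloggingContext Context { get; set; }

    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Post> Posts { get; set; }

    // Counts posts without loading the whole collection, falling back to
    // the loaded collection when no context is available.
    public int PostsCount
        => Posts?.Count
           ?? Context?.Set<Post>().Count(p => p.BlogId == Id)
           ?? 0;
}
```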
WARNING
Injecting the DbContext like this is often considered an anti-pattern since it couples your entity types directly to EF Core.
Carefully consider all options before using service injection like this.
Table Splitting
4/14/2019 • 2 minutes to read
NOTE
This feature is new in EF Core 2.0.
EF Core allows mapping two or more entities to a single row. This is called table splitting or table sharing.
Configuration
To use table splitting the entity types need to be mapped to the same table, have the primary keys mapped to the
same columns and at least one relationship configured between the primary key of one entity type and another in
the same table.
A common scenario for table splitting is using only a subset of the columns in the table for greater performance or
encapsulation.
In this example Order represents a subset of DetailedOrder .
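The entity classes are not included in this excerpt; given the configuration and usage code that follows, they presumably look like:

```csharp
public class Order
{
    public int Id { get; set; }
    public OrderStatus? Status { get; set; }
    public DetailedOrder DetailedOrder { get; set; }
}

// Shares the Orders table with Order; adds the detail columns.
public class DetailedOrder
{
    public int Id { get; set; }
    public OrderStatus? Status { get; set; }
    public string BillingAddress { get; set; }
    public string ShippingAddress { get; set; }
}
```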
In addition to the required configuration we call HasBaseType((string)null) to avoid mapping DetailedOrder in the
same hierarchy as Order .
modelBuilder.Entity<DetailedOrder>()
.ToTable("Orders")
.HasBaseType((string)null)
.Ignore(o => o.DetailedOrder);
modelBuilder.Entity<Order>()
.ToTable("Orders")
.HasOne(o => o.DetailedOrder).WithOne()
.HasForeignKey<Order>(o => o.Id);
Usage
Saving and querying entities using table splitting is done in the same way as for other entities; the only difference is
that all entities sharing a row must be tracked for the insert.
context.Add(new Order
{
DetailedOrder = new DetailedOrder
{
Status = OrderStatus.Pending,
ShippingAddress = "221 B Baker St, London",
BillingAddress = "11 Wall Street, New York"
}
});
context.SaveChanges();
Concurrency tokens
If any of the entity types sharing a table has a concurrency token, then it must be included in all the other entity types
as well, to avoid a stale concurrency token value when only one of the entities mapped to the same table is updated.
To avoid exposing it to the consuming code, it's possible to create one in shadow state.
modelBuilder.Entity<Order>()
.Property<byte[]>("Version").IsRowVersion().HasColumnName("Version");
modelBuilder.Entity<DetailedOrder>()
.Property(o => o.Version).IsRowVersion().HasColumnName("Version");
Owned Entity Types
1/7/2019 • 6 minutes to read
NOTE
This feature is new in EF Core 2.0.
EF Core allows you to model entity types that can only ever appear on navigation properties of other entity types.
These are called owned entity types. The entity containing an owned entity type is its owner.
Explicit configuration
Owned entity types are never included by EF Core in the model by convention. You can use the OwnsOne method
in OnModelCreating or annotate the type with OwnedAttribute (new in EF Core 2.1) to configure the type as an
owned type.
In this example, StreetAddress is a type with no identity property. It is used as a property of the Order type to
specify the shipping address for a particular order.
We can use the OwnedAttribute to treat it as an owned entity when referenced from another entity type:
[Owned]
public class StreetAddress
{
public string Street { get; set; }
public string City { get; set; }
}
It is also possible to use the OwnsOne method in OnModelCreating to specify that the ShippingAddress property is
an Owned Entity of the Order entity type and to configure additional facets if needed.
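With a public ShippingAddress navigation, the typed overload is used:

```csharp
modelBuilder.Entity<Order>().OwnsOne(o => o.ShippingAddress);
```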
If the ShippingAddress property is private in the Order type, you can use the string version of the OwnsOne
method:
modelBuilder.Entity<Order>().OwnsOne(typeof(StreetAddress), "ShippingAddress");
Implicit keys
Owned types configured with OwnsOne or discovered through a reference navigation always have a one-to-one
relationship with the owner, therefore they don't need their own key values as the foreign key values are unique. In
the previous example, the StreetAddress type does not need to define a key property.
In order to understand how EF Core tracks these objects, it is useful to think that a primary key is created as a
shadow property for the owned type. The value of the key of an instance of the owned type will be the same as the
value of the key of the owner instance.
To configure a collection of owned types OwnsMany should be used in OnModelCreating . However the primary key
will not be configured by convention, so it needs to be specified explicitly. It is common to use a complex key for
these type of entities incorporating the foreign key to the owner and an additional unique property that can also
be in shadow state:
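A sketch of such a configuration, using a hypothetical Distributor entity that owns a collection of shipping-center addresses (the EF Core 2.2 API shape is assumed):

```csharp
modelBuilder.Entity<Distributor>().OwnsMany(d => d.ShippingCenters, a =>
{
    a.HasForeignKey("DistributorId");  // foreign key to the owner
    a.Property<int>("Id");             // additional shadow property
    a.HasKey("DistributorId", "Id");   // composite primary key
});
```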
TIP
Owned types stored with table splitting can be used similarly to how complex types are used in EF6.
By convention, EF Core will name the database columns for the properties of the owned entity type following the
pattern Navigation_OwnedEntityProperty. Therefore the StreetAddress properties will appear in the 'Orders'
table with the names 'ShippingAddress_Street' and 'ShippingAddress_City'.
You can append the HasColumnName method to rename those columns:
modelBuilder.Entity<Order>().OwnsOne(
o => o.ShippingAddress,
sa =>
{
sa.Property(p => p.Street).HasColumnName("ShipsToStreet");
sa.Property(p => p.City).HasColumnName("ShipsToCity");
});
In order to understand how EF Core will distinguish tracked instances of these objects, it may be useful to think
that the defining navigation has become part of the key of the instance alongside the value of the key of the owner
and the .NET type of the owned type.
In addition to nested owned types, an owned type can reference a regular entity; it can be either the owner or a
different entity, as long as the owned entity is on the dependent side. This capability sets owned entity types apart
from complex types in EF6.
It is possible to chain the OwnsOne method in a fluent call to configure this model:
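The listing itself is missing from this excerpt; chained ownership configuration, with a hypothetical OrderDetails owned type that itself owns two addresses, might look like:

```csharp
modelBuilder.Entity<Order>().OwnsOne(
    o => o.OrderDetails,
    od =>
    {
        od.OwnsOne(c => c.BillingAddress);
        od.OwnsOne(c => c.ShippingAddress);
    });
```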
Limitations
Some of these limitations are fundamental to how owned entity types work, but some others are restrictions that
we may be able to remove in future releases:
By-design restrictions
You cannot create a DbSet<T> for an owned type
You cannot call Entity<T>() with an owned type on ModelBuilder
Current shortcomings
Inheritance hierarchies that include owned entity types are not supported
Reference navigations to owned entity types cannot be null unless they are explicitly mapped to a separate
table from the owner
Instances of owned entity types cannot be shared by multiple owners (this is a well-known scenario for value
objects that cannot be implemented using owned entity types)
Shortcomings in previous versions
In EF Core 2.0, navigations to owned entity types cannot be declared in derived entity types unless the owned
entities are explicitly mapped to a separate table from the owner hierarchy. This limitation has been removed in
EF Core 2.1
In EF Core 2.0 and 2.1 only reference navigations to owned types were supported. This limitation has been
removed in EF Core 2.2
Query Types
12/7/2018 • 3 minutes to read
NOTE
This feature is new in EF Core 2.1
In addition to entity types, an EF Core model can contain query types, which can be used to carry out database
queries against data that isn't mapped to entity types.
Usage scenarios
Some of the main usage scenarios for query types are:
Serving as the return type for ad hoc FromSql() queries.
Mapping to database views.
Mapping to tables that do not have a primary key defined.
Mapping to queries defined in the model.
Example
The following example shows how to use a query type to query a database view.
TIP
You can view this article's sample on GitHub.
We define a simple database view that will allow us to query the number of posts associated with each blog:
db.Database.ExecuteSqlCommand(
@"CREATE VIEW View_BlogPostCounts AS
SELECT b.Name, Count(p.PostId) as PostCount
FROM Blogs b
JOIN Posts p on p.BlogId = b.BlogId
GROUP BY b.Name");
Next, we define a class to hold the result from the database view:
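That class is omitted from this excerpt; given the mapping configured below, it presumably looks like:

```csharp
public class BlogPostsCount
{
    public string BlogName { get; set; }
    public int PostCount { get; set; }
}
```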
Next, we configure the query type in OnModelCreating using the modelBuilder.Query<T> API. We use standard
fluent configuration APIs to configure the mapping for the Query Type:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder
.Query<BlogPostsCount>().ToView("View_BlogPostCounts")
.Property(v => v.BlogName).HasColumnName("Name");
}
TIP
Note we have also defined a context level query property (DbQuery) to act as a root for queries against this type.
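Such a root property is declared on the context alongside any DbSet properties; a sketch:

```csharp
public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    // Root for queries against the query type.
    public DbQuery<BlogPostsCount> BlogPostCounts { get; set; }
}
```

Queries can then be written as, for example, db.BlogPostCounts.ToList().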
Alternating between multiple models with the same
DbContext type
8/27/2018 • 2 minutes to read
The model built in OnModelCreating can use a property on the context to change how the model is built. For
example, it could be used to exclude a certain property:
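The example listing is missing here; a sketch consistent with the DynamicModelCacheKeyFactory shown further down (the Post entity and its IntProperty are illustrative):

```csharp
public class DynamicContext : DbContext
{
    public bool IgnoreIntProperty { get; set; }

    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The model produced depends on the value of IgnoreIntProperty.
        if (IgnoreIntProperty)
        {
            modelBuilder.Entity<Post>().Ignore(p => p.IntProperty);
        }
    }
}
```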
IModelCacheKeyFactory
However, if you tried doing the above without additional changes, you would get the same model every time a new
context is created, for any value of IgnoreIntProperty. This is caused by the model caching mechanism EF uses to
improve performance by invoking OnModelCreating only once and caching the model.
By default, EF assumes that for any given context type the model will be the same. To accomplish this, the default
implementation of IModelCacheKeyFactory returns a key that just contains the context type. To change this, you need
to replace the IModelCacheKeyFactory service. The new implementation needs to return an object that can be
compared to other model keys using the Equals method, taking into account all the variables that affect the
model:
public class DynamicModelCacheKeyFactory : IModelCacheKeyFactory
{
public object Create(DbContext context)
{
if (context is DynamicContext dynamicContext)
{
return (context.GetType(), dynamicContext.IgnoreIntProperty);
}
return context.GetType();
}
}
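The replacement service then needs to be registered when configuring the context; a sketch (the provider and connection string are placeholders):

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options
        .UseSqlServer(connectionString)
        .ReplaceService<IModelCacheKeyFactory, DynamicModelCacheKeyFactory>();
```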
Spatial Data
11/16/2018 • 7 minutes to read
NOTE
This feature is new in EF Core 2.2.
Spatial data represents the physical location and the shape of objects. Many databases provide support for this
type of data so it can be indexed and queried alongside other data. Common scenarios include querying for objects
within a given distance from a location, or selecting the object whose border contains a given location. EF Core
supports mapping to spatial data types using the NetTopologySuite spatial library.
Installing
In order to use spatial data with EF Core, you need to install the appropriate supporting NuGet package. Which
package you need to install depends on the provider you're using.
EF CORE PROVIDER                          SPATIAL NUGET PACKAGE
Microsoft.EntityFrameworkCore.SqlServer   Microsoft.EntityFrameworkCore.SqlServer.NetTopologySuite
Microsoft.EntityFrameworkCore.Sqlite      Microsoft.EntityFrameworkCore.Sqlite.NetTopologySuite
Microsoft.EntityFrameworkCore.InMemory    NetTopologySuite
Npgsql.EntityFrameworkCore.PostgreSQL     Npgsql.EntityFrameworkCore.PostgreSQL.NetTopologySuite
Reverse engineering
The spatial NuGet packages also enable reverse engineering models with spatial properties, but you need to install
the package before running Scaffold-DbContext or dotnet ef dbcontext scaffold . If you don't, you'll receive
warnings about not finding type mappings for the columns and the columns will be skipped.
NetTopologySuite (NTS)
NetTopologySuite is a spatial library for .NET. EF Core enables mapping to spatial data types in the database by
using NTS types in your model.
To enable mapping to spatial types via NTS, call the UseNetTopologySuite method on the provider's DbContext
options builder. For example, with SQL Server you'd call it like this.
optionsBuilder.UseSqlServer(
@"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=WideWorldImporters",
x => x.UseNetTopologySuite());
There are several spatial data types. Which type you use depends on the types of shapes you want to allow. Here is
the hierarchy of NTS types that you can use for properties in your model. They're located within the
NetTopologySuite.Geometries namespace. Corresponding interfaces in the GeoAPI package ( GeoAPI.Geometries
namespace) can also be used.
Geometry
Point
LineString
Polygon
GeometryCollection
MultiPoint
MultiLineString
MultiPolygon
WARNING
CircularString, CompoundCurve, and CurvePolygon aren't supported by NTS.
Using the base Geometry type allows any type of shape to be specified by the property.
The following entity classes could be used to map to tables in the Wide World Importers sample database.
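Those classes are not included in this excerpt; for the Wide World Importers schema they would look roughly like the following (property names are assumptions):

```csharp
[Table("Cities", Schema = "Application")]
public class City
{
    public int CityID { get; set; }
    public string CityName { get; set; }
    public Point Location { get; set; }
}

[Table("Countries", Schema = "Application")]
public class Country
{
    public int CountryID { get; set; }
    public string CountryName { get; set; }
    // The base Geometry type allows any shape for the border.
    public Geometry Border { get; set; }
}
```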
Creating values
You can use constructors to create geometry objects; however, NTS recommends using a geometry factory instead.
This lets you specify a default SRID (the spatial reference system used by the coordinates) and gives you control
over more advanced things like the precision model (used during calculations) and the coordinate sequence
(determines which ordinates--dimensions and measures--are available).
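The factory-based creation listing is missing from this excerpt; with NTS it presumably looks along these lines:

```csharp
// Create a factory with a default SRID, then create points through it.
var geometryFactory = NtsGeometryServices.Instance.CreateGeometryFactory(srid: 4326);
var currentLocation = geometryFactory.CreatePoint(new Coordinate(-122.121512, 47.6739882));
```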
NOTE
4326 refers to WGS 84, a standard used in GPS and other geographic systems.
// Tail of a coordinate-reprojection helper whose beginning was lost from
// this excerpt; it transforms a geometry between coordinate systems:
        return GeometryTransform.TransformGeometry(
            geometryFactory,
            geometry,
            transformation.MathTransform);
    }
}
var seattle = new Point(-122.333056, 47.609722) { SRID = 4326 };
var redmond = new Point(-122.123889, 47.669444) { SRID = 4326 };
Querying Data
In LINQ, the NTS methods and properties available as database functions will be translated to SQL. For example,
the Distance and Contains methods are translated in the following queries. The table at the end of this article
shows which members are supported by various EF Core providers.
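The queries themselves are not shown in this excerpt; sketches, assuming City and Country entities with Point Location and Geometry Border properties:

```csharp
// Translated to SQL: find the nearest city to the current location.
var nearestCity = db.Cities
    .OrderBy(c => c.Location.Distance(currentLocation))
    .FirstOrDefault();

// Translated to SQL: find the country whose border contains the location.
var currentCountry = db.Countries
    .FirstOrDefault(c => c.Border.Contains(currentLocation));
```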
SQL Server
If you're using SQL Server, there are some additional things you should be aware of.
Geography or geometry
By default, spatial properties are mapped to geography columns in SQL Server. To use geometry , configure the
column type in your model.
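One way to do that is with the Column attribute on the property; a sketch:

```csharp
[Column(TypeName = "geometry")]
public Point Location { get; set; }
```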
Geography polygon rings
When using the geography column type, SQL Server imposes additional requirements on the exterior ring (or
shell) and interior rings (or holes). The exterior ring must be oriented counterclockwise and the interior rings
clockwise. NTS validates this before sending values to the database.
FullGlobe
SQL Server has a non-standard geometry type to represent the full globe when using the geography column type.
It also has a way to represent polygons based on the full globe (without an exterior ring). Neither of these are
supported by NTS.
WARNING
FullGlobe and polygons based on it aren't supported by NTS.
SQLite
Here is some additional information for those using SQLite.
Installing SpatiaLite
On Windows, the native mod_spatialite library is distributed as a NuGet package dependency. Other platforms
need to install it separately. This is typically done using a software package manager. For example, you can use APT
on Ubuntu and Homebrew on macOS.
# Ubuntu
apt-get install libsqlite3-mod-spatialite
# macOS
brew install libspatialite
Configuring SRID
In SpatiaLite, the SRID is specified per column. The default SRID is 0. Specify a different SRID using the
ForSqliteHasSrid method.
Dimension
Similar to SRID, a column's dimension (or ordinates) is also specified as part of the column. The default ordinates
are X and Y. Enable additional ordinates (Z and M ) using the ForSqliteHasDimension method.
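A sketch of both calls (the exact extension-method signatures may vary by EF Core version; the `City` type and `Location` property are illustrative):

```csharp
// Store coordinates in this column using SRID 4326.
modelBuilder.Entity<City>()
    .Property(c => c.Location)
    .ForSqliteHasSrid(4326);

// Enable the Z ordinate in addition to X and Y.
modelBuilder.Entity<City>()
    .Property(c => c.Location)
    .ForSqliteHasDimension(Ordinates.XYZ);
```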
Translated Operations
This table shows which NTS members are translated into SQL by each EF Core provider.
| NetTopologySuite | SQL Server (geometry) | SQL Server (geography) | SQLite | Npgsql |
| --- | --- | --- | --- | --- |
| Geometry.Area | ✔ | ✔ | ✔ | ✔ |
| Geometry.AsBinary() | ✔ | ✔ | ✔ | ✔ |
| Geometry.AsText() | ✔ | ✔ | ✔ | ✔ |
| Geometry.Boundary | ✔ | | ✔ | ✔ |
| Geometry.Buffer(double) | ✔ | ✔ | ✔ | ✔ |
| Geometry.Buffer(double, int) | | | ✔ | |
| Geometry.Centroid | ✔ | | ✔ | ✔ |
| Geometry.Contains(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.ConvexHull() | ✔ | ✔ | ✔ | ✔ |
| Geometry.CoveredBy(Geometry) | | | ✔ | ✔ |
| Geometry.Covers(Geometry) | | | ✔ | ✔ |
| Geometry.Crosses(Geometry) | ✔ | | ✔ | ✔ |
| Geometry.Difference(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.Dimension | ✔ | ✔ | ✔ | ✔ |
| Geometry.Disjoint(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.Distance(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.Envelope | ✔ | | ✔ | ✔ |
| Geometry.EqualsExact(Geometry) | | | | ✔ |
| Geometry.EqualsTopologically(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.GeometryType | ✔ | ✔ | ✔ | ✔ |
| Geometry.GetGeometryN(int) | ✔ | | ✔ | ✔ |
| Geometry.InteriorPoint | ✔ | | ✔ | |
| Geometry.Intersection(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.Intersects(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.IsEmpty | ✔ | ✔ | ✔ | ✔ |
| Geometry.IsSimple | ✔ | | ✔ | ✔ |
| Geometry.IsValid | ✔ | ✔ | ✔ | ✔ |
| Geometry.IsWithinDistance(Geometry, double) | | | ✔ | ✔ |
| Geometry.Length | ✔ | ✔ | ✔ | ✔ |
| Geometry.NumGeometries | ✔ | ✔ | ✔ | ✔ |
| Geometry.NumPoints | ✔ | ✔ | ✔ | ✔ |
| Geometry.OgcGeometryType | ✔ | ✔ | ✔ | |
| Geometry.Overlaps(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.PointOnSurface | ✔ | | ✔ | ✔ |
| Geometry.Relate(Geometry, string) | ✔ | | ✔ | ✔ |
| Geometry.Reverse() | | | ✔ | ✔ |
| Geometry.SRID | ✔ | ✔ | ✔ | ✔ |
| Geometry.SymmetricDifference(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.ToBinary() | ✔ | ✔ | ✔ | ✔ |
| Geometry.ToText() | ✔ | ✔ | ✔ | ✔ |
| Geometry.Touches(Geometry) | ✔ | | ✔ | ✔ |
| Geometry.Union() | | | ✔ | |
| Geometry.Union(Geometry) | ✔ | ✔ | ✔ | ✔ |
| Geometry.Within(Geometry) | ✔ | ✔ | ✔ | ✔ |
| GeometryCollection.Count | ✔ | ✔ | ✔ | ✔ |
| GeometryCollection[int] | ✔ | ✔ | ✔ | ✔ |
| LineString.Count | ✔ | ✔ | ✔ | ✔ |
| LineString.EndPoint | ✔ | ✔ | ✔ | ✔ |
| LineString.GetPointN(int) | ✔ | ✔ | ✔ | ✔ |
| LineString.IsClosed | ✔ | ✔ | ✔ | ✔ |
| LineString.IsRing | ✔ | | ✔ | ✔ |
| LineString.StartPoint | ✔ | ✔ | ✔ | ✔ |
| MultiLineString.IsClosed | ✔ | ✔ | ✔ | ✔ |
| Point.M | ✔ | ✔ | ✔ | ✔ |
| Point.X | ✔ | ✔ | ✔ | ✔ |
| Point.Y | ✔ | ✔ | ✔ | ✔ |
| Point.Z | ✔ | ✔ | ✔ | ✔ |
| Polygon.ExteriorRing | ✔ | ✔ | ✔ | ✔ |
| Polygon.GetInteriorRingN(int) | ✔ | ✔ | ✔ | ✔ |
| Polygon.NumInteriorRings | ✔ | ✔ | ✔ | ✔ |
Additional resources
Spatial Data in SQL Server
SpatiaLite Homepage
Npgsql Spatial Documentation
PostGIS Documentation
Relational Database Modeling
8/27/2018 • 2 minutes to read • Edit Online
The configuration in this section is applicable to relational databases in general. The extension methods shown here
will become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
Table Mapping
8/27/2018 • 2 minutes to read • Edit Online
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
Table mapping identifies which table data should be queried from and saved to in the database.
Conventions
By convention, each entity will be set up to map to a table with the same name as the DbSet<TEntity> property that
exposes the entity on the derived context. If no DbSet<TEntity> is included for the given entity, the class name is
used.
Data Annotations
You can use Data Annotations to configure the table that a type maps to.
using System.ComponentModel.DataAnnotations.Schema;
[Table("blogs")]
public class Blog
{
public int BlogId { get; set; }
public string Url { get; set; }
}
You can also specify a schema that the table belongs to.
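For example, the annotation above extended with a schema:

```csharp
using System.ComponentModel.DataAnnotations.Schema;

[Table("blogs", Schema = "blogging")]
public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }
}
```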
Fluent API
You can use the Fluent API to configure the table that a type maps to.
using Microsoft.EntityFrameworkCore;

class MyContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        => modelBuilder.Entity<Blog>().ToTable("blogs");
}
You can also specify a schema that the table belongs to.
modelBuilder.Entity<Blog>()
.ToTable("blogs", schema: "blogging");
Column Mapping
4/18/2019 • 2 minutes to read • Edit Online
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
Column mapping identifies which column data should be queried from and saved to in the database.
Conventions
By convention, each property will be set up to map to a column with the same name as the property.
Data Annotations
You can use Data Annotations to configure the column to which a property is mapped.
using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations.Schema;

class MyContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

public class Blog
{
    [Column("blog_id")]
    public int BlogId { get; set; }
    public string Url { get; set; }
}
Fluent API
You can use the Fluent API to configure the column to which a property is mapped.
using Microsoft.EntityFrameworkCore;

class MyContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        => modelBuilder.Entity<Blog>()
            .Property(b => b.BlogId)
            .HasColumnName("blog_id");
}
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
Data type refers to the database specific type of the column to which a property is mapped.
Conventions
By convention, the database provider selects a data type based on the CLR type of the property. It also takes into
account other metadata, such as the configured Maximum Length, whether the property is part of a primary key,
etc.
For example, SQL Server uses datetime2(7) for DateTime properties, and nvarchar(max) for string properties
(or nvarchar(450) for string properties that are used as a key).
Data Annotations
You can use Data Annotations to specify an exact data type for a column.
For example the following code configures Url as a non-unicode string with maximum length of 200 and
Rating as decimal with precision of 5 and scale of 2 .
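The configuration described above can be sketched with annotations as follows:

```csharp
using System.ComponentModel.DataAnnotations.Schema;

public class Blog
{
    public int BlogId { get; set; }

    [Column(TypeName = "varchar(200)")]
    public string Url { get; set; }

    [Column(TypeName = "decimal(5, 2)")]
    public decimal Rating { get; set; }
}
```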
Fluent API
You can also use the Fluent API to specify the same data types for the columns.
class MyContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        => modelBuilder.Entity<Blog>(eb =>
        {
            eb.Property(b => b.Url).HasColumnType("varchar(200)");
            eb.Property(b => b.Rating).HasColumnType("decimal(5, 2)");
        });
}
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
A primary key constraint is introduced for the key of each entity type.
Conventions
By convention, the primary key in the database will be named PK_<type name> .
Data Annotations
No relational database specific aspects of a primary key can be configured using Data Annotations.
Fluent API
You can use the Fluent API to configure the name of the primary key constraint in the database.
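A sketch of renaming the constraint (the constraint name is illustrative):

```csharp
modelBuilder.Entity<Blog>()
    .HasKey(b => b.BlogId)
    .HasName("PrimaryKey_BlogId");
```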
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
The default schema is the database schema that objects will be created in if a schema is not explicitly configured for
that object.
Conventions
By convention, the database provider will choose the most appropriate default schema. For example, Microsoft
SQL Server will use the dbo schema and SQLite will not use a schema (since schemas are not supported in
SQLite).
Data Annotations
You cannot set the default schema using Data Annotations.
Fluent API
You can use the Fluent API to specify a default schema.
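A minimal sketch (the schema name is illustrative):

```csharp
modelBuilder.HasDefaultSchema("blogging");
```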
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
A computed column is a column whose value is calculated in the database. A computed column can use other
columns in the table to calculate its value.
Conventions
By convention, computed columns are not created in the model.
Data Annotations
Computed columns cannot be configured with Data Annotations.
Fluent API
You can use the Fluent API to specify that a property should map to a computed column.
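A sketch, assuming a Person entity with FirstName, LastName, and DisplayName properties:

```csharp
modelBuilder.Entity<Person>()
    .Property(p => p.DisplayName)
    .HasComputedColumnSql("[LastName] + ', ' + [FirstName]");
```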
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
A sequence generates sequential numeric values in the database. Sequences are not associated with a specific
table.
Conventions
By convention, sequences are not introduced into the model.
Data Annotations
You cannot configure a sequence using Data Annotations.
Fluent API
You can use the Fluent API to create a sequence in the model.
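A minimal sketch (the sequence name is illustrative):

```csharp
modelBuilder.HasSequence<int>("OrderNumbers");
```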
You can also configure additional aspects of the sequence, such as its schema, start value, and increment. Once a
sequence is introduced, you can use it to generate values for properties in your model. For example, you can use
Default Values to insert the next value from the sequence.
class MyContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.HasSequence<int>("OrderNumbers", schema: "shared")
            .StartsAt(1000)
            .IncrementsBy(5);

        modelBuilder.Entity<Order>()
            .Property(o => o.OrderNo)
            .HasDefaultValueSql("NEXT VALUE FOR shared.OrderNumbers");
    }
}
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
The default value of a column is the value that will be inserted if a new row is inserted but no value is specified for
the column.
Conventions
By convention, a default value is not configured.
Data Annotations
You cannot set a default value using Data Annotations.
Fluent API
You can use the Fluent API to specify the default value for a property.
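For example (the Rating property and value 3 are illustrative):

```csharp
modelBuilder.Entity<Blog>()
    .Property(b => b.Rating)
    .HasDefaultValue(3);
```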
You can also specify a SQL fragment that is used to calculate the default value.
class MyContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        // Created is an example DateTime property on Blog.
        => modelBuilder.Entity<Blog>()
            .Property(b => b.Created)
            .HasDefaultValueSql("getdate()");
}
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
An index in a relational database maps to the same concept as an index in the core of Entity Framework.
Conventions
By convention, indexes are named IX_<type name>_<property name> . For composite indexes <property name>
becomes an underscore separated list of property names.
Data Annotations
Indexes cannot be configured using Data Annotations.
Fluent API
You can use the Fluent API to configure the name of an index.
When using the SQL Server provider, EF adds an 'IS NOT NULL' filter for all nullable columns that are part of a
unique index. To override this convention, supply a null filter value.
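Both calls can be sketched as follows (the index name is illustrative):

```csharp
// Rename the index.
modelBuilder.Entity<Blog>()
    .HasIndex(b => b.Url)
    .HasName("Index_Url");

// Remove the automatic 'IS NOT NULL' filter on a unique index.
modelBuilder.Entity<Blog>()
    .HasIndex(b => b.Url)
    .IsUnique()
    .HasFilter(null);
```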
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
Conventions
By convention, foreign key constraints are named
FK_<dependent type name>_<principal type name>_<foreign key property name> . For composite foreign keys
<foreign key property name> becomes an underscore separated list of foreign key property names.
Data Annotations
Foreign key constraint names cannot be configured using data annotations.
Fluent API
You can use the Fluent API to configure the foreign key constraint name for a relationship.
class MyContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}
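A sketch, assuming the usual Blog/Post relationship (the constraint name is illustrative):

```csharp
modelBuilder.Entity<Post>()
    .HasOne(p => p.Blog)
    .WithMany(b => b.Posts)
    .HasForeignKey(p => p.BlogId)
    .HasConstraintName("ForeignKey_Post_Blog");
```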
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
Conventions
By convention, the index and constraint that are introduced for an alternate key will be named
AK_<type name>_<property name> . For composite alternate keys <property name> becomes an underscore separated
list of property names.
Data Annotations
Unique constraints cannot be configured using Data Annotations.
Fluent API
You can use the Fluent API to configure the index and constraint name for an alternate key.
class Car
{
public int CarId { get; set; }
public string LicensePlate { get; set; }
public string Make { get; set; }
public string Model { get; set; }
}
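Using the Car type above, a sketch (the constraint name is illustrative):

```csharp
modelBuilder.Entity<Car>()
    .HasAlternateKey(c => c.LicensePlate)
    .HasName("AlternateKey_LicensePlate");
```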
Inheritance (Relational Database)
5/31/2019 • 2 minutes to read • Edit Online
NOTE
The configuration in this section is applicable to relational databases in general. The extension methods shown here will
become available when you install a relational database provider (due to the shared
Microsoft.EntityFrameworkCore.Relational package).
Inheritance in the EF model is used to control how inheritance in the entity classes is represented in the database.
NOTE
Currently, only the table-per-hierarchy (TPH) pattern is implemented in EF Core. Other common patterns like table-per-type
(TPT) and table-per-concrete-type (TPC) are not yet available.
Conventions
By convention, inheritance will be mapped using the table-per-hierarchy (TPH) pattern. TPH uses a single table to
store the data for all types in the hierarchy. A discriminator column is used to identify which type each row
represents.
EF Core will only set up inheritance if two or more inherited types are explicitly included in the model (see
Inheritance for more details).
Below is an example showing a simple inheritance scenario and the data stored in a relational database table using
the TPH pattern. The Discriminator column identifies which type of Blog is stored in each row.
Data Annotations
You cannot use Data Annotations to configure inheritance.
Fluent API
You can use the Fluent API to configure the name and type of the discriminator column and the values that are
used to identify each type in the hierarchy.
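A sketch of that configuration (the RssBlog type and the discriminator values are illustrative):

```csharp
modelBuilder.Entity<Blog>()
    .HasDiscriminator<string>("blog_type")
    .HasValue<Blog>("blog_base")
    .HasValue<RssBlog>("blog_rss");
```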
modelBuilder.Entity<Blog>()
.Property("Discriminator")
.HasMaxLength(200);
The discriminator can also be mapped to an actual CLR property in your entity. For example:
class MyContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        // BlogType is a property defined on the Blog entity class.
        => modelBuilder.Entity<Blog>()
            .HasDiscriminator(b => b.BlogType);
}
Combining these two things together it is possible to both map the discriminator to a real property and configure
it:
modelBuilder.Entity<Blog>(b =>
{
    b.HasDiscriminator<string>("BlogType");
});
EF Core provides two primary ways of keeping your EF Core model and database schema in sync. To choose
between the two, decide whether your EF Core model or the database schema is the source of truth.
If you want your EF Core model to be the source of truth, use Migrations. As you make changes to your EF Core
model, this approach incrementally applies the corresponding schema changes to your database so that it remains
compatible with your EF Core model.
Use Reverse Engineering if you want your database schema to be the source of truth. This approach allows you to
scaffold a DbContext and the entity type classes by reverse engineering your database schema into an EF Core
model.
NOTE
The create and drop APIs can also create the database schema from your EF Core model. However, they are primarily for
testing, prototyping, and other scenarios where dropping the database is acceptable.
Migrations
4/14/2019 • 5 minutes to read • Edit Online
A data model changes during development and gets out of sync with the database. You can drop the database
and let EF create a new one that matches the model, but this procedure results in the loss of data. The
migrations feature in EF Core provides a way to incrementally update the database schema to keep it in sync
with the application's data model while preserving existing data in the database.
Migrations includes command-line tools and APIs that help with the following tasks:
Create a migration. Generate code that can update the database to sync it with a set of model changes.
Update the database. Apply pending migrations to update the database schema.
Customize migration code. Sometimes the generated code needs to be modified or supplemented.
Remove a migration. Delete the generated code.
Revert a migration. Undo the database changes.
Generate SQL scripts. You might need a script to update a production database or to troubleshoot migration
code.
Apply migrations at runtime. When design-time updates and running scripts aren't the best options, call the
Migrate() method.
Create a migration
After you've defined your initial model, it's time to create the database. To add an initial migration, run the
following command.
Add-Migration InitialCreate
Three files are added to your project under the Migrations directory:
XXXXXXXXXXXXXX_InitialCreate.cs--The main migrations file. Contains the operations necessary to
apply the migration (in Up() ) and to revert it (in Down() ).
XXXXXXXXXXXXXX_InitialCreate.Designer.cs--The migrations metadata file. Contains information used
by EF.
MyContextModelSnapshot.cs--A snapshot of your current model. Used to determine what changed when
adding the next migration.
The timestamp in the filename helps keep them ordered chronologically so you can see the progression of
changes.
TIP
You are free to move Migrations files and change their namespace. New migrations are created as siblings of the last
migration.
Update the database
Next, apply the migration to the database to create the schema.
Update-Database
Customize migration code
After making more changes to your EF Core model, add another migration.
Add-Migration AddProductReviews
Once the migration is scaffolded (code generated for it), review the code for accuracy and add, remove or
modify any operations required to apply it correctly.
For example, a migration might contain the following operations:
migrationBuilder.DropColumn(
name: "FirstName",
table: "Customer");
migrationBuilder.DropColumn(
name: "LastName",
table: "Customer");
migrationBuilder.AddColumn<string>(
name: "Name",
table: "Customer",
nullable: true);
While these operations make the database schema compatible, they don't preserve the existing customer
names. To make it better, rewrite it as follows.
migrationBuilder.AddColumn<string>(
name: "Name",
table: "Customer",
nullable: true);
migrationBuilder.Sql(
@"
UPDATE Customer
SET Name = FirstName + ' ' + LastName;
");
migrationBuilder.DropColumn(
name: "FirstName",
table: "Customer");
migrationBuilder.DropColumn(
name: "LastName",
table: "Customer");
TIP
The migration scaffolding process warns when an operation might result in data loss (like dropping a column). If you see
that warning, be especially sure to review the migrations code for accuracy.
After the migration code is correct, apply it to the database.
Update-Database
Empty migrations
Sometimes it's useful to add a migration without making any model changes. In this case, adding a new
migration creates code files with empty classes. You can customize this migration to perform operations that
don't directly relate to the EF Core model. Some things you might want to manage this way are:
Full-Text Search
Functions
Stored procedures
Triggers
Views
Remove a migration
Sometimes you add a migration and realize you need to make additional changes to your EF Core model before
applying it. To remove the last migration, use this command.
Remove-Migration
After removing the migration, you can make the additional model changes and add it again.
Revert a migration
If you already applied a migration (or several migrations) to the database but need to revert it, you can use the
same command to apply migrations, but specify the name of the migration you want to roll back to.
Update-Database LastGoodMigration
Generate SQL scripts
When deploying migrations to a production database, or when troubleshooting migration code, it can be useful
to generate a SQL script of the pending changes.
Script-Migration
Apply migrations at runtime
Some apps may want to apply pending migrations at runtime, for example during application startup.
myDbContext.Database.Migrate();
WARNING
This approach isn't for everyone. While it's great for apps with a local database, most applications will require a
more robust deployment strategy, like generating SQL scripts.
Don't call EnsureCreated() before Migrate() . EnsureCreated() bypasses Migrations to create the schema,
which causes Migrate() to fail.
Next steps
For more information, see Entity Framework Core tools reference - EF Core.
Migrations in Team Environments
8/27/2018 • 2 minutes to read • Edit Online
When working with Migrations in team environments, pay extra attention to the model snapshot file. This file can
tell you if your teammate's migration merges cleanly with yours or if you need to resolve a conflict by re-creating
your migration before sharing it.
Merging
When you merge migrations from your teammates, you may get conflicts in your model snapshot file. If both
changes are unrelated, the merge is trivial and the two migrations can coexist. For example, you may get a merge
conflict in the customer entity type configuration that looks like this:
<<<<<<< Mine
b.Property<bool>("Deactivated");
=======
b.Property<int>("LoyaltyPoints");
>>>>>>> Theirs
Since both of these properties need to exist in the final model, complete the merge by adding both properties. In
many cases, your version control system may automatically merge such changes for you.
b.Property<bool>("Deactivated");
b.Property<int>("LoyaltyPoints");
In these cases, your migration and your teammate's migration are independent of each other. Since either of them
could be applied first, you don't need to make any additional changes to your migration before sharing it with your
team.
Resolving conflicts
Sometimes you encounter a true conflict when merging the model snapshot. For example, you and your
teammate may each have renamed the same property.
<<<<<<< Mine
b.Property<string>("Username");
=======
b.Property<string>("Alias");
>>>>>>> Theirs
If you encounter this kind of conflict, resolve it by re-creating your migration. Follow these steps:
1. Abort the merge and roll back to your working directory before the merge
2. Remove your migration (but keep your model changes)
3. Merge your teammate's changes into your working directory
4. Re-add your migration
After doing this, the two migrations can be applied in the correct order. Their migration is applied first, renaming
the column to Alias, thereafter your migration renames it to Username.
Your migration can safely be shared with the rest of the team.
Custom Migrations Operations
11/6/2018 • 2 minutes to read • Edit Online
The MigrationBuilder API allows you to perform many different kinds of operations during a migration, but it's far
from exhaustive. However, the API is also extensible, allowing you to define your own operations. There are two
ways to extend the API: using the Sql() method, or by defining custom MigrationOperation objects.
To illustrate, let's look at implementing an operation that creates a database user using each approach. In our
migrations, we want to enable writing the following code:
migrationBuilder.CreateUser("SQLUser1", "Password");
Using MigrationBuilder.Sql()
The easiest way to implement a custom operation is to define an extension method that calls
MigrationBuilder.Sql() . Here is an example that generates the appropriate Transact-SQL.
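A sketch of such an extension method (SQL Server syntax; password handling is simplified for illustration):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

static class MigrationBuilderExtensions
{
    // Emits raw Transact-SQL through MigrationBuilder.Sql().
    public static MigrationBuilder CreateUser(
        this MigrationBuilder migrationBuilder,
        string name,
        string password)
        => migrationBuilder.Sql(
            $"CREATE USER {name} WITH PASSWORD = '{password}';");
}
```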
If your migrations need to support multiple database providers, you can use the MigrationBuilder.ActiveProvider
property. Here's an example supporting both Microsoft SQL Server and PostgreSQL.
static MigrationBuilder CreateUser(
    this MigrationBuilder migrationBuilder,
    string name,
    string password)
{
    switch (migrationBuilder.ActiveProvider)
    {
        case "Npgsql.EntityFrameworkCore.PostgreSQL":
            return migrationBuilder
                .Sql($"CREATE USER {name} WITH PASSWORD '{password}';");

        case "Microsoft.EntityFrameworkCore.SqlServer":
            return migrationBuilder
                .Sql($"CREATE USER {name} WITH PASSWORD = '{password}';");
    }

    return migrationBuilder;
}
This approach only works if you know every provider where your custom operation will be applied.
Using a MigrationOperation
To decouple the custom operation from the SQL, you can define your own MigrationOperation to represent it. The
operation is then passed to the provider so it can determine the appropriate SQL to generate.
class CreateUserOperation : MigrationOperation
{
public string Name { get; set; }
public string Password { get; set; }
}
With this approach, the extension method just needs to add one of these operations to
MigrationBuilder.Operations .
static MigrationBuilder CreateUser(
    this MigrationBuilder migrationBuilder,
    string name,
    string password)
{
    migrationBuilder.Operations.Add(
        new CreateUserOperation { Name = name, Password = password });

    return migrationBuilder;
}
This approach requires each provider to know how to generate SQL for this operation in their
IMigrationsSqlGenerator service. Here is an example overriding the SQL Server's generator to handle the new
operation.
class MyMigrationsSqlGenerator : SqlServerMigrationsSqlGenerator
{
public MyMigrationsSqlGenerator(
MigrationsSqlGeneratorDependencies dependencies,
IMigrationsAnnotationProvider migrationsAnnotations)
: base(dependencies, migrationsAnnotations)
{
}
    private void Generate(
        CreateUserOperation operation,
        MigrationCommandListBuilder builder)
    {
        var sqlHelper = Dependencies.SqlGenerationHelper;
        var stringMapping = Dependencies.TypeMappingSource.FindMapping(typeof(string));

        builder
            .Append("CREATE USER ")
            .Append(sqlHelper.DelimitIdentifier(operation.Name))
            .Append(" WITH PASSWORD = ")
            .Append(stringMapping.GenerateSqlLiteral(operation.Password))
            .AppendLine(sqlHelper.StatementTerminator)
            .EndCommand();
    }
}
Replace the default migrations SQL generator service with the updated one.
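Assuming a generator class like the one above, a sketch of swapping the service via ReplaceService:

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options
        .UseSqlServer(connectionString)
        .ReplaceService<IMigrationsSqlGenerator, MyMigrationsSqlGenerator>();
```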
You may want to store your migrations in a different assembly than the one containing your DbContext . You can
also use this strategy to maintain multiple sets of migrations, for example, one for development and another for
release-to-release upgrades.
To do this...
1. Create a new class library.
2. Add a reference to your DbContext assembly.
3. Move the migrations and model snapshot files to the class library.
TIP
If you have no existing migrations, generate one in the project containing the DbContext then move it. This is
important because if the migrations assembly does not contain an existing migration, the Add-Migration command
will be unable to find the DbContext.
Configure the context to use the new assembly for migrations.
options.UseSqlServer(
    connectionString,
    x => x.MigrationsAssembly("MyApp.Migrations"));
If the migrations project doesn't build into the same directory as the startup project, set its output path so the
tools can find both assemblies.
<PropertyGroup>
<OutputPath>..\MyStartupProject\bin\$(Configuration)\</OutputPath>
</PropertyGroup>
If you did everything correctly, you should be able to add new migrations to the project.
The EF Core Tools only scaffold migrations for the active provider. Sometimes, however, you may want to use more
than one provider (for example Microsoft SQL Server and SQLite) with your DbContext. There are two ways to
handle this with Migrations. You can maintain two sets of migrations--one for each provider--or merge them into a
single set that can work on both.
NOTE
Since each migration set uses its own DbContext types, this approach doesn't require using a separate migrations assembly.
TIP
You don't need to specify the output directory for subsequent migrations since they are created as siblings to the last one.
If operations can only be applied on one provider (or if they differ from provider to provider), use the
ActiveProvider property to tell which provider is active.
if (migrationBuilder.ActiveProvider == "Microsoft.EntityFrameworkCore.SqlServer")
{
migrationBuilder.CreateSequence(
name: "EntityFrameworkHiLoSequence");
}
Custom Migrations History Table
9/13/2018 • 2 minutes to read • Edit Online
By default, EF Core keeps track of which migrations have been applied to the database by recording them in a table
named __EFMigrationsHistory . For various reasons, you may want to customize this table to better suit your needs.
IMPORTANT
If you customize the Migrations history table after applying migrations, you are responsible for updating the existing table in
the database.
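For example, a sketch of moving the history table to a different name and schema on SQL Server (the names are illustrative):

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options.UseSqlServer(
        connectionString,
        x => x.MigrationsHistoryTable("__MyMigrationsHistory", "mySchema"));
```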
Other changes
To configure additional aspects of the table, override and replace the provider-specific IHistoryRepository service.
Here is an example of changing the MigrationId column name to Id on SQL Server.
WARNING
SqlServerHistoryRepository is inside an internal namespace and may change in future releases.
The EnsureCreated and EnsureDeleted methods provide a lightweight alternative to Migrations for managing the
database schema. These methods are useful in scenarios where the data is transient and can be dropped when the
schema changes, such as during prototyping, in tests, or for local caches.
Some providers (especially non-relational ones) don't support Migrations. For these providers, EnsureCreated is
often the easiest way to initialize the database schema.
WARNING
EnsureCreated and Migrations don't work well together. If you're using Migrations, don't use EnsureCreated to initialize the
schema.
Transitioning from EnsureCreated to Migrations is not a seamless experience. The simplest way to do it is to drop
the database and re-create it using Migrations. If you anticipate using migrations in the future, it's best to just start
with Migrations instead of using EnsureCreated.
EnsureDeleted
The EnsureDeleted method will drop the database if it exists. If you don't have the appropriate permissions, an
exception is thrown.
EnsureCreated
EnsureCreated will create the database if it doesn't exist and initialize the database schema. If any tables exist
(including tables for another DbContext class), the schema won't be initialized.
TIP
Async versions of these methods are also available.
SQL Script
To get the SQL used by EnsureCreated, you can use the GenerateCreateScript method.
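A minimal sketch:

```csharp
// Returns the DDL that EnsureCreated would execute for this context's model.
var sql = dbContext.Database.GenerateCreateScript();
```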
Reverse engineering is the process of scaffolding entity type classes and a DbContext class based on a database
schema. It can be performed using the Scaffold-DbContext command of the EF Core Package Manager Console
(PMC ) tools or the dotnet ef dbcontext scaffold command of the .NET Command-line Interface (CLI) tools.
Installing
Before reverse engineering, you'll need to install either the PMC tools (Visual Studio only) or the CLI tools. See
links for details.
You'll also need to install an appropriate database provider for the database schema you want to reverse engineer.
Connection string
The first argument to the command is a connection string to the database. The tools will use this connection string
to read the database schema.
How you quote and escape the connection string depends on which shell you are using to execute the command.
Refer to your shell's documentation for specifics. For example, PowerShell requires you to escape the $
character, but not \ .
Provider name
The second argument is the provider name. The provider name is typically the same as the provider's NuGet
package name.
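As an illustration, a scaffolding command might look like this (the connection string and provider are examples):

```shell
# Package Manager Console
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Chinook" Microsoft.EntityFrameworkCore.SqlServer

# .NET CLI
dotnet ef dbcontext scaffold "Server=(localdb)\mssqllocaldb;Database=Chinook" Microsoft.EntityFrameworkCore.SqlServer
```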
Specifying tables
All tables in the database schema are reverse engineered into entity types by default. You can limit which tables
are reverse engineered by specifying schemas and tables.
The -Schemas parameter in PMC and the --schema option in the CLI can be used to include every table within a
schema.
-Tables (PMC ) and --table (CLI) can be used to include specific tables.
To include multiple tables in PMC, use an array.
To include multiple tables in the CLI, specify the option multiple times.
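Sketches of both forms (the table names are examples; `...` stands for the connection string and provider arguments):

```shell
# PMC: pass an array of table names
Scaffold-DbContext ... -Tables "Blog", "Post"

# CLI: repeat the option for each table
dotnet ef dbcontext scaffold ... --table Blog --table Post
```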
Preserving names
Table and column names are fixed up to better match the .NET naming conventions for types and properties by
default. Specifying the -UseDatabaseNames switch in PMC or the --use-database-names option in the CLI
disables this behavior, preserving the original database names as much as possible. Invalid .NET identifiers will
still be fixed, and synthesized names like navigation properties will still conform to .NET naming conventions.
By default, configuration is scaffolded using the Fluent API. Specify the -DataAnnotations switch in PMC or the
--data-annotations option in the CLI to instead use data annotation attributes where possible. For example, a
required column with a maximum length is scaffolded as:
[Required]
[StringLength(160)]
public string Title { get; set; }
DbContext name
The scaffolded DbContext class name will be the name of the database suffixed with Context by default. To specify
a different one, use -Context in PMC and --context in the CLI.
How it works
Reverse engineering starts by reading the database schema. It reads information about tables, columns,
constraints, and indexes.
Next, it uses the schema information to create an EF Core model. Tables are used to create entity types; columns
are used to create properties; and foreign keys are used to create relationships.
Finally, the model is used to generate code. The corresponding entity type classes, Fluent API, and data
annotations are scaffolded in order to re-create the same model from your app.
WARNING
If you reverse engineer the model from the database again, any changes you've made to the files will be lost.
Querying Data
8/27/2018
Entity Framework Core uses Language Integrated Query (LINQ ) to query data from the database. LINQ allows
you to use C# (or your .NET language of choice) to write strongly typed queries based on your derived context and
entity classes. A representation of the LINQ query is passed to the database provider, to be translated into the
database-specific query language (for example, SQL for a relational database). For more detailed information on
how a query is processed, see How Query Works.
Basic Queries
8/27/2018
Learn how to load entities from the database using Language Integrated Query (LINQ ).
TIP
You can view this article's sample on GitHub.
Filtering
using (var context = new BloggingContext())
{
var blogs = context.Blogs
.Where(b => b.Url.Contains("dotnet"))
.ToList();
}
Loading Related Data
4/29/2019
Entity Framework Core allows you to use the navigation properties in your model to load related entities. There
are three common O/RM patterns used to load related data.
Eager loading means that the related data is loaded from the database as part of the initial query.
Explicit loading means that the related data is explicitly loaded from the database at a later time.
Lazy loading means that the related data is transparently loaded from the database when the navigation
property is accessed.
TIP
You can view this article's sample on GitHub.
Eager loading
You can use the Include method to specify related data to be included in query results. In the following example,
the blogs that are returned in the results will have their Posts property populated with the related posts.
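The referenced example did not survive extraction; a minimal sketch of such a query (assuming the Blog/Post model used throughout these docs) is:

```csharp
using (var context = new BloggingContext())
{
    // Include populates the Posts collection of each returned blog
    var blogs = context.Blogs
        .Include(blog => blog.Posts)
        .ToList();
}
```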
TIP
Entity Framework Core will automatically fix-up navigation properties to any other entities that were previously loaded into
the context instance. So even if you don't explicitly include the data for a navigation property, the property may still be
populated if some or all of the related entities were previously loaded.
You can include related data from multiple relationships in a single query.
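For example, a sketch that includes two separate relationships in one query (an Owner navigation on Blog is assumed here for illustration):

```csharp
using (var context = new BloggingContext())
{
    var blogs = context.Blogs
        .Include(blog => blog.Posts)   // first relationship
        .Include(blog => blog.Owner)   // second relationship
        .ToList();
}
```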
NOTE
Current versions of Visual Studio offer incorrect code completion options and can cause correct expressions to be flagged
with syntax errors when using the ThenInclude method after a collection navigation property. This is a symptom of an
IntelliSense bug tracked at https://github.com/dotnet/roslyn/issues/8237. It is safe to ignore these spurious syntax errors as
long as the code is correct and can be compiled successfully.
You can chain multiple calls to ThenInclude to continue including further levels of related data.
You can combine all of this to include related data from multiple levels and multiple roots in the same query.
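A sketch of chaining ThenInclude through multiple levels (the Author and Photo navigations are illustrative):

```csharp
using (var context = new BloggingContext())
{
    // Drill down: Blog -> Posts -> Author -> Photo
    var blogs = context.Blogs
        .Include(blog => blog.Posts)
            .ThenInclude(post => post.Author)
                .ThenInclude(author => author.Photo)
        .ToList();
}
```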
You may want to include multiple related entities for one of the entities that is being included. For example, when
querying Blogs , you include Posts and then want to include both the Author and Tags of the Posts . To do this,
you need to specify each include path starting at the root. For example, Blog -> Posts -> Author and
Blog -> Posts -> Tags . This does not mean you will get redundant joins; in most cases, EF will consolidate the
joins when generating SQL.
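Sketching both include paths from the root, as described above (Author and Tags are assumed navigations on Post):

```csharp
using (var context = new BloggingContext())
{
    // Each path starts at Blog; EF consolidates the joins when generating SQL
    var blogs = context.Blogs
        .Include(blog => blog.Posts).ThenInclude(post => post.Author)
        .Include(blog => blog.Posts).ThenInclude(post => post.Tags)
        .ToList();
}
```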
Include on derived types
The contents of the School navigation of all People who are Students can be eagerly loaded using a number of
patterns:
using a cast: context.People.Include(person => ((Student)person).School).ToList()
using the as operator: context.People.Include(person => (person as Student).School).ToList()
using the string overload of Include : context.People.Include("School").ToList()
Ignored includes
If you change the query so that it no longer returns instances of the entity type that the query began with, then the
include operators are ignored.
In the following example, the include operators are based on the Blog , but then the Select operator is used to
change the query to return an anonymous type. In this case, the include operators have no effect.
using (var context = new BloggingContext())
{
var blogs = context.Blogs
.Include(blog => blog.Posts)
.Select(blog => new
{
Id = blog.BlogId,
Url = blog.Url
})
.ToList();
}
By default, EF Core will log a warning when include operators are ignored. See Logging for more information on
viewing logging output. You can change the behavior when an include operator is ignored to either throw or do
nothing. This is done when setting up the options for your context - typically in DbContext.OnConfiguring , or in
Startup.cs if you are using ASP.NET Core.
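One possible configuration sketch, assuming a SQL Server provider and a connectionString variable (CoreEventId lives in the Microsoft.EntityFrameworkCore.Diagnostics namespace):

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlServer(connectionString)
        // Throw instead of logging a warning when an Include operator is ignored
        .ConfigureWarnings(warnings =>
            warnings.Throw(CoreEventId.IncludeIgnoredWarning));
}
```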
Explicit loading
NOTE
This feature was introduced in EF Core 1.1.
You can explicitly load a navigation property via the DbContext.Entry(...) API.
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Single(b => b.BlogId == 1);

    context.Entry(blog)
        .Collection(b => b.Posts)
        .Load();

    context.Entry(blog)
        .Reference(b => b.Owner)
        .Load();
}
You can also explicitly load a navigation property by executing a separate query that returns the related entities. If
change tracking is enabled, then when loading an entity, EF Core will automatically set the navigation properties of
the newly-loaded entity to refer to any entities already loaded, and set the navigation properties of the already-
loaded entities to refer to the newly-loaded entity.
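A minimal sketch of loading related entities with a separate query and letting fix-up populate blog.Posts:

```csharp
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Single(b => b.BlogId == 1);

    // Executing this query loads the posts into the context;
    // change tracking fix-up then populates blog.Posts
    context.Posts
        .Where(p => p.BlogId == blog.BlogId)
        .Load();
}
```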
Querying related entities
You can also get a LINQ query that represents the contents of a navigation property.
This allows you to do things such as running an aggregate operator over the related entities without loading them
into memory.
using (var context = new BloggingContext())
{
    var blog = context.Blogs
        .Single(b => b.BlogId == 1);

    var postCount = context.Entry(blog)
        .Collection(b => b.Posts)
        .Query()
        .Count();
}
You can also filter which related entities are loaded into memory.
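A sketch of filtering which related entities are loaded, using the same Query() API (the title filter is illustrative):

```csharp
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Single(b => b.BlogId == 1);

    // Load only the posts whose title contains "EF"
    var goodPosts = context.Entry(blog)
        .Collection(b => b.Posts)
        .Query()
        .Where(p => p.Title.Contains("EF"))
        .ToList();
}
```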
Lazy loading
NOTE
This feature was introduced in EF Core 2.1.
The simplest way to use lazy-loading is by installing the Microsoft.EntityFrameworkCore.Proxies package and
enabling it with a call to UseLazyLoadingProxies . For example:
.AddDbContext<BloggingContext>(
b => b.UseLazyLoadingProxies()
.UseSqlServer(myConnectionString));
EF Core will then enable lazy loading for any navigation property that can be overridden--that is, it must be
virtual and on a class that can be inherited from. For example, in the following entities, the Post.Blog and
Blog.Posts navigation properties will be lazy-loaded.
public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }

    public virtual ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }

    public virtual Blog Blog { get; set; }
}
Lazy loading without proxies
Lazy-loading without proxies works by injecting the ILazyLoader service into an entity. This approach doesn't
require entity types to be inherited from or navigation properties to be virtual, and allows entity instances created
with new to lazy-load once attached to a context. However, it requires a reference to the
ILazyLoader service, which is defined in the Microsoft.EntityFrameworkCore.Abstractions package. This package
contains a minimal set of types so that there is very little impact in depending on it. However, to completely avoid
depending on any EF Core packages in the entity types, it is possible to inject the ILazyLoader.Load method as a
delegate. For example:
public class Blog
{
    private readonly Action<object, string> _loader;
    private ICollection<Post> _posts;

    public Blog()
    {
    }

    private Blog(Action<object, string> lazyLoader)
    {
        _loader = lazyLoader;
    }

    public int Id { get; set; }
    public string Name { get; set; }

    public ICollection<Post> Posts
    {
        get => _loader.Load(this, ref _posts);
        set => _posts = value;
    }
}
The code above uses a Load extension method to make using the delegate a bit cleaner:
public static class PocoLoadingExtensions
{
public static TRelated Load<TRelated>(
this Action<object, string> loader,
object entity,
ref TRelated navigationField,
[CallerMemberName] string navigationName = null)
where TRelated : class
{
loader?.Invoke(entity, navigationName);
return navigationField;
}
}
NOTE
The constructor parameter for the lazy-loading delegate must be called "lazyLoader". Configuration to use a different name
than this is planned for a future release.
Lazy loading and serialization
Beware of lazy loading when serializing entities: the serializer may trigger lazy loading as it traverses navigation
properties, loading the entire graph and often creating reference cycles. For example, Json.NET throws:
Newtonsoft.Json.JsonSerializationException: Self referencing loop detected for property 'Blog' with type
'MyApplication.Models.Blog'.
If you are using ASP.NET Core, you can configure Json.NET to ignore cycles that it finds in the object graph. This
is done in the ConfigureServices(...) method in Startup.cs .
services.AddMvc()
.AddJsonOptions(
options => options.SerializerSettings.ReferenceLoopHandling =
Newtonsoft.Json.ReferenceLoopHandling.Ignore
);
...
}
Another alternative is to decorate one of the navigation properties with the [JsonIgnore] attribute, which instructs
Json.NET to not traverse that navigation property while serializing.
Client vs. Server Evaluation
9/9/2018
Entity Framework Core supports parts of the query being evaluated on the client and parts of it being pushed to
the database. It is up to the database provider to determine which parts of the query will be evaluated in the
database.
TIP
You can view this article's sample on GitHub.
Client evaluation
In the following example a helper method is used to standardize URLs for blogs that are returned from a SQL
Server database. Because the SQL Server provider has no insight into how this method is implemented, it is not
possible to translate it into SQL. All other aspects of the query are evaluated in the database, but passing the
returned URL through this method is performed on the client.
public static string StandardizeUrl(string url)
{
    url = url.ToLower();

    if (!url.StartsWith("http://"))
    {
        url = string.Concat("http://", url);
    }

    return url;
}
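A sketch of a query using such a helper (a StandardizeUrl method and a Rating property are assumed): the ordering and projection of columns happen on the server, while the helper call runs on the client.

```csharp
using (var context = new BloggingContext())
{
    var blogs = context.Blogs
        .OrderByDescending(blog => blog.Rating)      // evaluated in the database
        .Select(blog => new
        {
            Id = blog.BlogId,
            Url = StandardizeUrl(blog.Url)           // evaluated on the client
        })
        .ToList();
}
```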
Tracking vs. No-Tracking
Tracking behavior controls whether or not Entity Framework Core will keep information about an entity instance in
its change tracker. If an entity is tracked, any changes detected in the entity will be persisted to the database during
SaveChanges() . Entity Framework Core will also fix-up navigation properties between entities that are obtained
from a tracking query and entities that were previously loaded into the DbContext instance.
TIP
You can view this article's sample on GitHub.
Tracking queries
By default, queries that return entity types are tracking. This means you can make changes to those entity instances
and have those changes persisted by SaveChanges() .
In the following example, the change to the blog's rating will be detected and persisted to the database during
SaveChanges().
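A minimal sketch of such a tracked update (a Rating property on Blog is assumed):

```csharp
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Single(b => b.BlogId == 1);
    blog.Rating = 5;              // change is recorded by the change tracker
    context.SaveChanges();        // persisted as an UPDATE
}
```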
No-tracking queries
No-tracking queries are useful when the results are used in a read-only scenario. They are quicker to execute
because there is no need to set up change tracking information.
You can swap an individual query to be no-tracking:
You can also change the default tracking behavior at the context instance level:
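Sketches of both approaches, per-query and per-context:

```csharp
// Per-query opt-out
using (var context = new BloggingContext())
{
    var blogs = context.Blogs
        .AsNoTracking()
        .ToList();
}

// Context-wide default: all queries on this instance are no-tracking
using (var context = new BloggingContext())
{
    context.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;

    var blogs = context.Blogs.ToList();
}
```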
If the result set does not contain any entity types, then no tracking is performed. In the following query, which
returns an anonymous type with some of the values from the entity (but no instances of the actual entity type),
there is no tracking performed.
Raw SQL Queries
Entity Framework Core allows you to drop down to raw SQL queries when working with a relational database. This
can be useful if the query you want to perform can't be expressed using LINQ, or if using a LINQ query is resulting
in inefficient SQL queries. Raw SQL queries can return entity types or, starting with EF Core 2.1, query types that
are part of your model.
TIP
You can view this article's sample on GitHub.
Passing parameters
As with any API that accepts SQL, it is important to parameterize any user input to protect against a SQL injection
attack. You can include parameter placeholders in the SQL query string and then supply parameter values as
additional arguments. Any parameter values you supply will automatically be converted to a DbParameter .
The following example passes a single parameter to a stored procedure. While this may look like String.Format
syntax, the supplied value is wrapped in a parameter and the generated parameter name inserted where the {0}
placeholder was specified.
This is the same query but using string interpolation syntax, which is supported in EF Core 2.0 and above:
This allows you to use named parameters in the SQL query string, which is useful when a stored procedure has
optional parameters:
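Sketches of the three parameter styles described above (the stored procedure names and the user variable are illustrative; SqlParameter comes from System.Data.SqlClient):

```csharp
// Positional placeholder: the value is wrapped in a DbParameter, not spliced into the SQL
var blogs = context.Blogs
    .FromSql("EXECUTE dbo.GetMostPopularBlogs {0}", user)
    .ToList();

// String interpolation (EF Core 2.0+): still parameterized, not concatenated
var blogs2 = context.Blogs
    .FromSql($"EXECUTE dbo.GetMostPopularBlogsForUser {user}")
    .ToList();

// Named parameter via SqlParameter, useful with optional stored procedure parameters
var userParam = new SqlParameter("user", user);
var blogs3 = context.Blogs
    .FromSql("EXECUTE dbo.GetMostPopularBlogsForUser @filterByUser=@user", userParam)
    .ToList();
```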
Change Tracking
Queries that use FromSql follow exactly the same change-tracking rules as any other LINQ query in EF Core.
For example, if the query projects entity types, the results will be tracked by default.
The following example uses a raw SQL query that selects from a Table-Valued Function (TVF ), then disables
change tracking with the call to .AsNoTracking():
Limitations
There are a few limitations to be aware of when using raw SQL queries:
The SQL query must return data for all properties of the entity or query type.
The column names in the result set must match the column names that properties are mapped to. Note this
is different from EF6 where property/column mapping was ignored for raw SQL queries and result set
column names had to match the property names.
The SQL query cannot contain related data. However, in many cases you can compose on top of the query
using the Include operator to return related data (see Including related data).
SELECT statements passed to this method should generally be composable: If EF Core needs to evaluate
additional query operators on the server (for example, to translate LINQ operators applied after FromSql ),
the supplied SQL will be treated as a subquery. This means that the SQL passed should not contain any
characters or options that are not valid on a subquery, such as:
a trailing semicolon
On SQL Server, a trailing query-level hint (for example, OPTION (HASH JOIN) )
On SQL Server, an ORDER BY clause that is not accompanied by TOP 100 PERCENT in the SELECT clause
SQL statements other than SELECT are recognized automatically as non-composable. As a consequence, the
full results of stored procedures are always returned to the client and any LINQ operators applied after
FromSql are evaluated in-memory.
WARNING
Always use parameterization for raw SQL queries: In addition to validating user input, always use parameterization for
any values used in a raw SQL query/command. APIs that accept a raw SQL string such as FromSql and
ExecuteSqlCommand allow values to be easily passed as parameters. Overloads of FromSql and ExecuteSqlCommand that
accept FormattableString also allow using string interpolation syntax in a way that helps protect against SQL injection
attacks.
If you are using string concatenation or interpolation to dynamically build any part of the query string, or passing user input
to statements or stored procedures that can execute those inputs as dynamic SQL, then you are responsible for validating
any input to protect against SQL injection attacks.
Asynchronous Queries
8/27/2018
Asynchronous queries avoid blocking a thread while the query is executed in the database. This can be useful to
avoid freezing the UI of a thick-client application. Asynchronous operations can also increase throughput in a web
application, where the thread can be freed up to service other requests while the database operation completes. For
more information, see Asynchronous Programming in C#.
WARNING
EF Core does not support multiple parallel operations being run on the same context instance. You should always wait for an
operation to complete before beginning the next operation. This is typically done by using the await keyword on each
asynchronous operation.
Entity Framework Core provides a set of asynchronous extension methods that can be used as an alternative to the
LINQ methods that cause a query to be executed and results returned. Examples include ToListAsync() ,
ToArrayAsync() , SingleAsync() , and so on. There are no async versions of LINQ operators such as Where(...) or
OrderBy(...) , because these methods only build up the LINQ expression tree and do not cause the query to
be executed in the database.
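A minimal sketch of an async query using these extension methods:

```csharp
public async Task<List<Blog>> GetBlogsAsync()
{
    using (var context = new BloggingContext())
    {
        // ToListAsync executes the query without blocking the calling thread
        return await context.Blogs.ToListAsync();
    }
}
```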
IMPORTANT
The EF Core async extension methods are defined in the Microsoft.EntityFrameworkCore namespace. This namespace
must be imported for the methods to be available.
Global Query Filters
NOTE
This feature was introduced in EF Core 2.0.
Global query filters are LINQ query predicates (a boolean expression typically passed to the LINQ Where query
operator) applied to Entity Types in the metadata model (usually in OnModelCreating). Such filters are
automatically applied to any LINQ queries involving those Entity Types, including Entity Types referenced
indirectly, such as through the use of Include or direct navigation property references. Some common applications
of this feature are:
Soft delete - An Entity Type defines an IsDeleted property.
Multi-tenancy - An Entity Type defines a TenantId property.
Example
The following example shows how to use Global Query Filters to implement soft-delete and multi-tenancy query
behaviors in a simple blogging model.
TIP
You can view this article's sample on GitHub.
Note the declaration of a _tenantId field on the Blog entity. This will be used to associate each Blog instance with a
specific tenant. Also defined is an IsDeleted property on the Post entity type. This is used to keep track of whether a
Post instance has been "soft-deleted". That is, the instance is marked as deleted without physically removing the
underlying data.
Next, configure the query filters in OnModelCreating using the HasQueryFilter API.
The predicate expressions passed to the HasQueryFilter calls will now automatically be applied to any LINQ
queries for those types.
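A sketch of such a configuration, assuming the _tenantId context field and IsDeleted property described above (the shadow-field access via EF.Property is one possible shape):

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Multi-tenancy: compare the entity's backing tenant field
    // against the current context instance's _tenantId
    modelBuilder.Entity<Blog>()
        .HasQueryFilter(b => EF.Property<string>(b, "_tenantId") == _tenantId);

    // Soft delete: hide posts marked as deleted
    modelBuilder.Entity<Post>()
        .HasQueryFilter(p => !p.IsDeleted);
}
```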
TIP
Note the use of a DbContext instance-level field, _tenantId , used to set the current tenant. Model-level filters will
use the value from the correct context instance (that is, the instance that is executing the query).
Disabling Filters
Filters may be disabled for individual LINQ queries by using the IgnoreQueryFilters() operator.
using (var db = new BloggingContext())
{
    var blogs = db.Blogs
        .Include(b => b.Posts)
        .IgnoreQueryFilters()
        .ToList();
}
Limitations
Global query filters have the following limitations:
Filters cannot contain references to navigation properties.
Filters can only be defined for the root Entity Type of an inheritance hierarchy.
Query tags
11/15/2018
NOTE
This feature is new in EF Core 2.2.
This feature helps correlate LINQ queries in code with generated SQL queries captured in logs. You annotate a
LINQ query using the new TagWith() method:
var nearestFriends =
(from f in context.Friends.TagWith("This is my spatial query!")
orderby f.Location.Distance(myLocation) descending
select f).Take(5).ToList();
It's possible to call TagWith() many times on the same query. Query tags are cumulative. For example, given the
following methods:

private static IQueryable<Friend> GetNearestFriends(Point myLocation) =>
    from f in context.Friends.TagWith("GetNearestFriends")
    orderby f.Location.Distance(myLocation) descending
    select f;

private static IQueryable<T> Limit<T>(IQueryable<T> source, int limit) =>
    source.TagWith("Limit").Take(limit);

The following query:

var results = Limit(GetNearestFriends(myLocation), 25).ToList();

Translates to:
-- GetNearestFriends
-- Limit
It's also possible to use multi-line strings as query tags. For example:
var results = Limit(GetNearestFriends(myLocation), 25).TagWith(
@"This is a multi-line
string").ToList();
Which translates to:
-- GetNearestFriends
-- Limit
-- This is a multi-line
-- string
Known limitations
Query tags aren't parameterizable: EF Core always treats query tags in the LINQ query as string literals that
are included in the generated SQL. Compiled queries that take query tags as parameters aren't allowed.
Saving Data
8/27/2018
Each context instance has a ChangeTracker that is responsible for keeping track of changes that need to be written
to the database. As you make changes to instances of your entity classes, these changes are recorded in the
ChangeTracker and then written to the database when you call SaveChanges . The database provider is responsible
for translating the changes into database-specific operations (for example, INSERT , UPDATE , and DELETE
commands for a relational database).
Basic Save
8/27/2018
Learn how to add, modify, and remove data using your context and entity classes.
TIP
You can view this article's sample on GitHub.
Adding Data
Use the DbSet.Add method to add new instances of your entity classes. The data will be inserted in the database
when you call SaveChanges.
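A minimal sketch (the URL is illustrative):

```csharp
using (var context = new BloggingContext())
{
    var blog = new Blog { Url = "http://example.com" };
    context.Blogs.Add(blog);
    context.SaveChanges();   // performs the INSERT
}
```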
TIP
The Add, Attach, and Update methods all work on the full graph of entities passed to them, as described in the Related Data
section. Alternately, the EntityEntry.State property can be used to set the state of just a single entity. For example,
context.Entry(blog).State = EntityState.Modified .
Updating Data
EF will automatically detect changes made to an existing entity that is tracked by the context. This includes entities
that you load/query from the database, and entities that were previously added and saved to the database.
Simply modify the values assigned to properties and then call SaveChanges.
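A minimal sketch of such an update:

```csharp
using (var context = new BloggingContext())
{
    var blog = context.Blogs.First();
    blog.Url = "http://example.com/blog";   // change is detected automatically
    context.SaveChanges();                  // saved as an UPDATE
}
```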
Deleting Data
Use the DbSet.Remove method to delete instances of your entity classes.
If the entity already exists in the database, it will be deleted during SaveChanges. If the entity has not yet been
saved to the database (that is, it is tracked as added) then it will be removed from the context and will no longer be
inserted when SaveChanges is called.
using (var context = new BloggingContext())
{
var blog = context.Blogs.First();
context.Blogs.Remove(blog);
context.SaveChanges();
}
NOTE
For most database providers, SaveChanges is transactional. This means all the operations will either succeed or fail and the
operations will never be left partially applied.
Multiple operations can be combined in a single call to SaveChanges, which applies them together:
using (var context = new BloggingContext())
{
    // add
    context.Blogs.Add(new Blog { Url = "http://example.com/blog" });

    // update
    var firstBlog = context.Blogs.First();
    firstBlog.Url = "";

    // remove
    var lastBlog = context.Blogs.Last();
    context.Blogs.Remove(lastBlog);

    context.SaveChanges();
}
Saving Related Data
8/27/2018
In addition to isolated entities, you can also make use of the relationships defined in your model.
TIP
You can view this article's sample on GitHub.
Adding a graph of new entities
If you create several new related entities, adding one of them to the context will cause the others to be added too.
In the following example, the blog and three related posts are all inserted into the database:
using (var context = new BloggingContext())
{
    var blog = new Blog
    {
        Url = "http://blogs.msdn.com/dotnet",
        Posts = new List<Post>
        {
            new Post { Title = "Intro to C#" },
            new Post { Title = "Intro to VB.NET" },
            new Post { Title = "Intro to F#" }
        }
    };

    context.Blogs.Add(blog);
    context.SaveChanges();
}
TIP
Use the EntityEntry.State property to set the state of just a single entity. For example,
context.Entry(blog).State = EntityState.Modified .
Adding a related entity
If you reference a new entity from the navigation property of an entity that is already tracked by the context, the
entity will be discovered and inserted into the database. In the following example, the post entity is inserted
because it is added to the Posts property of the blog entity, which was fetched from the database:
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Include(b => b.Posts).First();
    var post = new Post { Title = "Intro to EF Core" };

    blog.Posts.Add(post);
    context.SaveChanges();
}
Changing relationships
If you change the navigation property of an entity, the corresponding changes will be made to the foreign key
column in the database.
In the following example, the post entity is updated to belong to the new blog entity because its Blog
navigation property is set to point to blog . Note that blog will also be inserted into the database because it is a
new entity that is referenced by the navigation property of an entity that is already tracked by the context ( post ).
using (var context = new BloggingContext())
{
    var blog = new Blog { Url = "http://blogs.msdn.com/visualstudio" };
    var post = context.Posts.First();

    post.Blog = blog;
    context.SaveChanges();
}
Removing relationships
You can remove a relationship by setting a reference navigation to null , or removing the related entity from a
collection navigation.
Removing a relationship can have side effects on the dependent entity, according to the cascade delete behavior
configured in the relationship.
By default, for required relationships, a cascade delete behavior is configured and the child/dependent entity will
be deleted from the database. For optional relationships, cascade delete is not configured by default, but the
foreign key property will be set to null.
See Required and Optional Relationships to learn about how the requiredness of relationships can be configured.
See Cascade Delete for more details on how cascade delete behaviors work, how they can be configured explicitly
and how they are selected by convention.
In the following example, a cascade delete is configured on the relationship between Blog and Post , so the post
entity is deleted from the database.
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Include(b => b.Posts).First();
    var post = blog.Posts.First();

    blog.Posts.Remove(post);
    context.SaveChanges();
}
Cascade Delete
9/11/2018
Cascade delete is commonly used in database terminology to describe a characteristic that allows the deletion of a
row to automatically trigger the deletion of related rows. A closely related concept also covered by EF Core delete
behaviors is the automatic deletion of a child entity when its relationship to a parent has been severed; this is
commonly known as "deleting orphans".
EF Core implements several different delete behaviors and allows for the configuration of the delete behaviors of
individual relationships. EF Core also implements conventions that automatically configure useful default delete
behaviors for each relationship based on the requiredness of the relationship.
Delete behaviors
Delete behaviors are defined in the DeleteBehavior enumeration and can be passed to the OnDelete fluent
API to control whether the deletion of a principal/parent entity or the severing of the relationship to
dependent/child entities should have a side effect on the dependent/child entities.
There are three actions EF can take when a principal/parent entity is deleted or the relationship to the child is
severed:
The child/dependent can be deleted
The child's foreign key values can be set to null
The child remains unchanged
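A sketch of configuring a delete behavior with the OnDelete fluent API (the Blog/Post relationship is illustrative):

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Post>()
        .HasOne(p => p.Blog)
        .WithMany(b => b.Posts)
        .OnDelete(DeleteBehavior.Cascade);   // deleting a blog deletes its posts
}
```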
NOTE
The delete behavior configured in the EF Core model is only applied when the principal entity is deleted using EF Core and
the dependent entities are loaded in memory (that is, for tracked dependents). A corresponding cascade behavior needs to
be set up in the database to ensure data that is not being tracked by the context has the necessary action applied. If
you use EF Core to create the database, this cascade behavior will be set up for you.
For the second action above, setting a foreign key value to null is not valid if the foreign key is not nullable. (A
non-nullable foreign key is equivalent to a required relationship.) In these cases, EF Core tracks that the foreign key
property has been marked as null until SaveChanges is called, at which time an exception is thrown because the
change cannot be persisted to the database. This is similar to getting a constraint violation from the database.
There are four delete behaviors, as listed in the tables below.
Optional relationships
For optional relationships (nullable foreign key) it is possible to save a null foreign key value, which results in the
following effects:
| Behavior Name | Effect on dependent/child in memory | Effect on dependent/child in database |
| --- | --- | --- |
| Cascade | Entities are deleted | Entities are deleted |
| ClientSetNull (default) | Foreign key properties are set to null | None |
| SetNull | Foreign key properties are set to null | Foreign key properties are set to null |
| Restrict | None | None |
Required relationships
For required relationships (non-nullable foreign key) it is not possible to save a null foreign key value, which
results in the following effects:
| Behavior Name | Effect on dependent/child in memory | Effect on dependent/child in database |
| --- | --- | --- |
| Cascade (default) | Entities are deleted | Entities are deleted |
| ClientSetNull | SaveChanges throws | None |
| SetNull | SaveChanges throws | SaveChanges throws |
| Restrict | None | None |
In the tables above, None can result in a constraint violation. For example, if a principal/parent entity is deleted but
no action is taken to change the foreign key of a dependent/child, then the database will likely throw on
SaveChanges due to a foreign key constraint violation.
At a high level:
If you have entities that cannot exist without a parent, and you want EF to take care for deleting the children
automatically, then use Cascade.
Entities that cannot exist without a parent usually make use of required relationships, for which Cascade
is the default.
If you have entities that may or may not have a parent, and you want EF to take care of nulling out the foreign
key for you, then use ClientSetNull.
Entities that can exist without a parent usually make use of optional relationships, for which
ClientSetNull is the default.
If you want the database to also try to propagate null values to child foreign keys even when the child
entity is not loaded, then use SetNull. However, note that the database must support this, and
configuring the database like this can result in other restrictions, which in practice often makes this
option impractical. This is why SetNull is not the default.
If you don't want EF Core to ever delete an entity automatically or null out the foreign key automatically, then
use Restrict. Note that this requires your code to keep child entities and their foreign key values in sync
manually; otherwise, constraint exceptions will be thrown.
NOTE
In EF Core, unlike EF6, cascading effects do not happen immediately, but instead only when SaveChanges is called.
NOTE
Changes in EF Core 2.0: In previous releases, Restrict would cause optional foreign key properties in tracked dependent
entities to be set to null, and was the default delete behavior for optional relationships. In EF Core 2.0, the ClientSetNull was
introduced to represent that behavior and became the default for optional relationships. The behavior of Restrict was
adjusted to never have any side effects on dependent entities.
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Include(b => b.Posts).First();

    context.Remove(blog);

    try
    {
        Console.WriteLine();
        Console.WriteLine("  Saving changes:");

        context.SaveChanges();

        DumpSql();
    }
    catch (Exception e)
    {
        Console.WriteLine();
        Console.WriteLine($"  SaveChanges threw {e.GetType().Name}: {(e is DbUpdateException ?
e.InnerException.Message : e.Message)}");
    }
}
Saving changes:
DELETE FROM [Posts] WHERE [PostId] = 1
DELETE FROM [Posts] WHERE [PostId] = 2
DELETE FROM [Blogs] WHERE [BlogId] = 1
After SaveChanges:
Blog '1' is in state Detached with 2 posts referenced.
Post '1' is in state Detached with FK '1' and no reference to a blog.
Post '2' is in state Detached with FK '1' and no reference to a blog.
Saving changes:
UPDATE [Posts] SET [BlogId] = NULL WHERE [PostId] = 1
SaveChanges threw DbUpdateException: Cannot insert the value NULL into column 'BlogId', table
'EFSaving.CascadeDelete.dbo.Posts'; column does not allow nulls. UPDATE fails. The statement has been
terminated.
Saving changes:
UPDATE [Posts] SET [BlogId] = NULL WHERE [PostId] = 1
UPDATE [Posts] SET [BlogId] = NULL WHERE [PostId] = 2
DELETE FROM [Blogs] WHERE [BlogId] = 1
After SaveChanges:
Blog '1' is in state Detached with 2 posts referenced.
Post '1' is in state Unchanged with FK 'null' and no reference to a blog.
Post '2' is in state Unchanged with FK 'null' and no reference to a blog.
Saving changes:
SaveChanges threw InvalidOperationException: The association between entity types 'Blog' and 'Post' has been
severed but the foreign key for this relationship cannot be set to null. If the dependent entity should be
deleted, then setup the relationship to use cascade deletes.
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Include(b => b.Posts).First();

    blog.Posts.Clear();

    try
    {
        Console.WriteLine();
        Console.WriteLine("  Saving changes:");

        context.SaveChanges();

        DumpSql();
    }
    catch (Exception e)
    {
        Console.WriteLine();
        Console.WriteLine($"  SaveChanges threw {e.GetType().Name}: {(e is DbUpdateException ?
e.InnerException.Message : e.Message)}");
    }
}
Saving changes:
DELETE FROM [Posts] WHERE [PostId] = 1
DELETE FROM [Posts] WHERE [PostId] = 2
After SaveChanges:
Blog '1' is in state Unchanged with 2 posts referenced.
Post '1' is in state Detached with FK '1' and no reference to a blog.
Post '2' is in state Detached with FK '1' and no reference to a blog.
Posts are marked as Modified because severing the relationship caused the FK to be marked as null
If the FK is not nullable, then the actual value will not change even though it is marked as null
SaveChanges sends deletes for dependents/children (posts)
After saving, the dependents/children (posts) are detached since they have now been deleted from the
database
DeleteBehavior.ClientSetNull or DeleteBehavior.SetNull with required relationship
After loading entities:
Blog '1' is in state Unchanged with 2 posts referenced.
Post '1' is in state Unchanged with FK '1' and reference to blog '1'.
Post '2' is in state Unchanged with FK '1' and reference to blog '1'.
Saving changes:
UPDATE [Posts] SET [BlogId] = NULL WHERE [PostId] = 1
SaveChanges threw DbUpdateException: Cannot insert the value NULL into column 'BlogId', table
'EFSaving.CascadeDelete.dbo.Posts'; column does not allow nulls. UPDATE fails. The statement has been
terminated.
Posts are marked as Modified because severing the relationship caused the FK to be marked as null
If the FK is not nullable, then the actual value will not change even though it is marked as null
SaveChanges attempts to set the post FK to null, but this fails because the FK is not nullable
DeleteBehavior.ClientSetNull or DeleteBehavior.SetNull with optional relationship
Saving changes:
UPDATE [Posts] SET [BlogId] = NULL WHERE [PostId] = 1
UPDATE [Posts] SET [BlogId] = NULL WHERE [PostId] = 2
After SaveChanges:
Blog '1' is in state Unchanged with 2 posts referenced.
Post '1' is in state Unchanged with FK 'null' and no reference to a blog.
Post '2' is in state Unchanged with FK 'null' and no reference to a blog.
Posts are marked as Modified because severing the relationship caused the FK to be marked as null
If the FK is not nullable, then the actual value will not change even though it is marked as null
SaveChanges sets the FK of both dependents/children (posts) to null
After saving, the dependents/children (posts) now have null FK values and their reference to the deleted
principal/parent (blog) has been removed
DeleteBehavior.Restrict with required or optional relationship
After loading entities:
Blog '1' is in state Unchanged with 2 posts referenced.
Post '1' is in state Unchanged with FK '1' and reference to blog '1'.
Post '2' is in state Unchanged with FK '1' and reference to blog '1'.
Saving changes:
SaveChanges threw InvalidOperationException: The association between entity types 'Blog' and 'Post' has been
severed but the foreign key for this relationship cannot be set to null. If the dependent entity should be
deleted, then setup the relationship to use cascade deletes.
Posts are marked as Modified because severing the relationship caused the FK to be marked as null
If the FK is not nullable, then the actual value will not change even though it is marked as null
Since Restrict tells EF to not automatically set the FK to null, it remains non-null and SaveChanges throws
without saving
If only the principal is loaded--for example, when a query is made for a blog without an Include(b => b.Posts) to
also include posts--then SaveChanges will only generate SQL to delete the principal/parent:
The dependents/children (posts) will only be deleted if the database has a corresponding cascade behavior
configured. If you use EF to create the database, this cascade behavior will be set up for you.
Handling Concurrency Conflicts
8/27/2018 • 3 minutes to read
NOTE
This page documents how concurrency works in EF Core and how to handle concurrency conflicts in your application. See
Concurrency Tokens for details on how to configure concurrency tokens in your model.
TIP
You can view this article's sample on GitHub.
Database concurrency refers to situations in which multiple processes or users access or change the same data in a
database at the same time. Concurrency control refers to the specific mechanisms used to ensure data consistency in
the presence of concurrent changes.
EF Core implements optimistic concurrency control, meaning that it will let multiple processes or users make
changes independently without the overhead of synchronization or locking. In the ideal situation, these changes
will not interfere with each other and therefore will be able to succeed. In the worst case scenario, two or more
processes will attempt to make conflicting changes, and only one of them should succeed.
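When such a conflicting change is detected via a concurrency token, SaveChanges throws DbUpdateConcurrencyException. A minimal sketch of catching and resolving the conflict; the "client wins" resolution shown is just one option, and real applications may instead reload or merge values:

```csharp
try
{
    context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
    foreach (var entry in ex.Entries)
    {
        // Refresh the original values to reflect what is now in the database,
        // so a retried SaveChanges can succeed ("client wins")
        var databaseValues = entry.GetDatabaseValues();
        entry.OriginalValues.SetValues(databaseValues);
    }
}
```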
Transactions allow several database operations to be processed in an atomic manner. If the transaction is
committed, all of the operations are successfully applied to the database. If the transaction is rolled back, none of
the operations are applied to the database.
TIP
You can view this article's sample on GitHub.
Controlling transactions
You can use the DbContext.Database API to begin, commit, and rollback transactions. The following example shows
two SaveChanges() operations and a LINQ query being executed in a single transaction.
Not all database providers support transactions. Some providers may throw or no-op when transaction APIs are
called.
using (var context = new BloggingContext())
{
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            context.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/dotnet" });
            context.SaveChanges();

            context.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/visualstudio" });
            context.SaveChanges();

            var blogs = context.Blogs
                .OrderBy(b => b.Url)
                .ToList();

            // Commit transaction if all commands succeed; the transaction will
            // auto-rollback when disposed if either command fails
            transaction.Commit();
        }
        catch (Exception)
        {
            // TODO: Handle failure
        }
    }
}
TIP
DbContextOptionsBuilder is the API you used in DbContext.OnConfiguring to configure the context; you are now going
to use it externally to create DbContextOptions.
An alternative is to keep using DbContext.OnConfiguring, but accept a DbConnection in the constructor, save it, and
then use it in OnConfiguring.
public class BloggingContext : DbContext
{
    private DbConnection _connection;

    public BloggingContext(DbConnection connection)
    {
        _connection = connection;
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(_connection);
    }
}
Using System.Transactions
NOTE
This feature is new in EF Core 2.1.
It is possible to use ambient transactions if you need to coordinate across a larger scope.
using (var scope = new TransactionScope(
TransactionScopeOption.Required,
new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
{
using (var connection = new SqlConnection(connectionString))
{
connection.Open();
try
{
// Run raw ADO.NET command in the transaction
var command = connection.CreateCommand();
command.CommandText = "DELETE FROM dbo.Blogs";
command.ExecuteNonQuery();
try
{
var options = new DbContextOptionsBuilder<BloggingContext>()
.UseSqlServer(connection)
.Options;
Limitations of System.Transactions
1. EF Core relies on database providers to implement support for System.Transactions. Although support is
quite common among ADO.NET providers for .NET Framework, the API has only recently been added to
.NET Core, and hence support is not as widespread. If a provider does not implement support for
System.Transactions, it is possible that calls to these APIs will be completely ignored. SqlClient for .NET
Core supports it from 2.1 onwards; SqlClient for .NET Core 2.0 will throw an exception if you attempt to
use the feature.
IMPORTANT
It is recommended that you test that the API behaves correctly with your provider before you rely on it for managing
transactions. You are encouraged to contact the maintainer of the database provider if it does not.
2. As of version 2.1, the System.Transactions implementation in .NET Core does not include support for
distributed transactions, therefore you cannot use TransactionScope or CommittableTransaction to
coordinate transactions across multiple resource managers.
Asynchronous Saving
8/27/2018 • 2 minutes to read
Asynchronous saving avoids blocking a thread while the changes are written to the database. This can be useful to
avoid freezing the UI of a thick-client application. Asynchronous operations can also increase throughput in a web
application, where the thread can be freed up to service other requests while the database operation completes. For
more information, see Asynchronous Programming in C#.
WARNING
EF Core does not support multiple parallel operations being run on the same context instance. You should always wait for an
operation to complete before beginning the next operation. This is typically done by using the await keyword on each
asynchronous operation.
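A save can then be awaited so the thread is freed while the database operation completes; a minimal sketch, assuming the BloggingContext and Blog types used elsewhere in this documentation:

```csharp
public static async Task AddBlogAsync(string url)
{
    using (var context = new BloggingContext())
    {
        context.Blogs.Add(new Blog { Url = url });

        // The thread is released while the INSERT executes
        await context.SaveChangesAsync();
    }
}
```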
A DbContext instance will automatically track entities returned from the database. Changes made to these entities
will then be detected when SaveChanges is called and the database will be updated as needed. See Basic Save and
Related Data for details.
However, sometimes entities are queried using one context instance and then saved using a different instance. This
often happens in "disconnected" scenarios such as a web application where the entities are queried, sent to the
client, modified, sent back to the server in a request, and then saved. In this case, the second context instance needs
to know whether the entities are new (should be inserted) or existing (should be updated).
TIP
You can view this article's sample on GitHub.
TIP
EF Core can only track one instance of any entity with a given primary key value. The best way to avoid this being an issue is
to use a short-lived context for each unit-of-work such that the context starts empty, has entities attached to it, saves those
entities, and then the context is disposed and discarded.
However, EF also has a built-in way to do this for any entity type and key type:
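That built-in mechanism is the IsKeySet property on the entity's entry, which reports whether the primary key has been given a real value; a minimal sketch:

```csharp
public static bool IsItNew(DbContext context, object entity)
    // IsKeySet is false while the key still has its CLR default
    // (for example, 0 for an int key), meaning the entity is new
    => !context.Entry(entity).IsKeySet;
```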
It is beyond the scope of this document to show the full code for passing a flag from a client. In a web app, it
usually means making different requests for different actions, or passing some state in the request then extracting
it in the controller.
However, if the entity uses auto-generated key values, then the Update method can be used for both cases:
The Update method normally marks the entity for update, not insert. However, if the entity has an auto-generated
key, and no key value has been set, then the entity is instead automatically marked for insert.
TIP
This behavior was introduced in EF Core 2.0. For earlier releases it is always necessary to explicitly choose either Add or
Update.
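With auto-generated keys, the insert-or-update decision can therefore collapse into a single call; a minimal sketch:

```csharp
public static void InsertOrUpdate(DbContext context, object entity)
{
    // Marks the entity Added when its key is unset, Modified otherwise
    context.Update(entity);
    context.SaveChanges();
}
```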
If the entity is not using auto-generated keys, then the application must decide whether the entity should be
inserted or updated. For example:

public static void InsertOrUpdate(BloggingContext context, Blog blog)
{
    var existingBlog = context.Blogs.Find(blog.BlogId);
    if (existingBlog == null)
    {
        context.Add(blog);
    }
    else
    {
        context.Entry(existingBlog).CurrentValues.SetValues(blog);
    }

    context.SaveChanges();
}
TIP
SetValues will only mark as modified the properties that have different values to those in the tracked entity. This means that
when the update is sent, only those columns that have actually changed will be updated. (And if nothing has changed, then
no update will be sent at all.)
The call to Add will mark the blog and all the posts to be inserted.
Likewise, if all the entities in a graph need to be updated, then Update can be used:
Update will mark any entity in the graph, blog or post, for insertion if it does not have a key value set, while all
other entities are marked for update.
As before, when not using auto-generated keys, a query and some processing can be used:
public static void InsertOrUpdateGraph(BloggingContext context, Blog blog)
{
    var existingBlog = context.Blogs
        .Include(b => b.Posts)
        .FirstOrDefault(b => b.BlogId == blog.BlogId);

    if (existingBlog == null)
    {
        context.Add(blog);
    }
    else
    {
        context.Entry(existingBlog).CurrentValues.SetValues(blog);
        foreach (var post in blog.Posts)
        {
            var existingPost = existingBlog.Posts
                .FirstOrDefault(p => p.PostId == post.PostId);
            if (existingPost == null)
            {
                existingBlog.Posts.Add(post);
            }
            else
            {
                context.Entry(existingPost).CurrentValues.SetValues(post);
            }
        }
    }

    context.SaveChanges();
}
Handling deletes
Deletes can be tricky to handle, since often the absence of an entity means that it should be deleted. One way to
deal with this is to use "soft deletes," such that the entity is marked as deleted rather than actually being deleted.
Deletes then become the same as updates. Soft deletes can be implemented using query filters.
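A minimal soft-delete sketch using a query filter; the IsDeleted flag is an illustrative assumption, not part of the model used elsewhere in this document:

```csharp
// Entities flagged as deleted are filtered out of every query automatically
modelBuilder.Entity<Post>()
    .HasQueryFilter(p => !p.IsDeleted);
```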
For true deletes, a common pattern is to use an extension of the query pattern to perform what is essentially a
graph diff. For example:
public static void InsertUpdateOrDeleteGraph(BloggingContext context, Blog blog)
{
    var existingBlog = context.Blogs
        .Include(b => b.Posts)
        .FirstOrDefault(b => b.BlogId == blog.BlogId);

    if (existingBlog == null)
    {
        context.Add(blog);
    }
    else
    {
        context.Entry(existingBlog).CurrentValues.SetValues(blog);
        foreach (var post in blog.Posts)
        {
            var existingPost = existingBlog.Posts
                .FirstOrDefault(p => p.PostId == post.PostId);
            if (existingPost == null)
            {
                existingBlog.Posts.Add(post);
            }
            else
            {
                context.Entry(existingPost).CurrentValues.SetValues(post);
            }
        }

        // Delete any post that exists in the database but is absent from the
        // incoming graph; this is the "diff" that turns absence into a delete
        foreach (var post in existingBlog.Posts.ToList())
        {
            if (!blog.Posts.Any(p => p.PostId == post.PostId))
            {
                context.Remove(post);
            }
        }
    }

    context.SaveChanges();
}
TrackGraph
Internally, Add, Attach, and Update use graph-traversal with a determination made for each entity as to whether it
should be marked as Added (to insert), Modified (to update), Unchanged (do nothing), or Deleted (to delete). This
mechanism is exposed via the TrackGraph API. For example, let's assume that when the client sends back a graph
of entities it sets some flag on each entity indicating how it should be handled. TrackGraph can then be used to
process this flag:
public static void SaveAnnotatedGraph(DbContext context, object rootEntity)
{
    context.ChangeTracker.TrackGraph(
        rootEntity,
        n =>
        {
            var entity = (EntityBase)n.Entry.Entity;
            n.Entry.State = entity.IsNew
                ? EntityState.Added
                : entity.IsChanged
                    ? EntityState.Modified
                    : entity.IsDeleted
                        ? EntityState.Deleted
                        : EntityState.Unchanged;
        });

    context.SaveChanges();
}
The flags are only shown as part of the entity for simplicity of the example. Typically the flags would be part of a
DTO or some other state included in the request.
Setting Explicit Values for Generated Properties
8/27/2018 • 3 minutes to read
A generated property is a property whose value is generated (either by EF or the database) when the entity is
added and/or updated. See Generated Properties for more information.
There may be situations where you want to set an explicit value for a generated property, rather than having one
generated.
TIP
You can view this article's sample on GitHub.
The model
The model used in this article contains a single Employee entity.
modelBuilder.Entity<Employee>()
.Property(b => b.EmploymentStarted)
.HasDefaultValueSql("CONVERT(date, GETDATE())");
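A minimal sketch of saving two employees against this model; the first relies on the database default for EmploymentStarted, while the second supplies an explicit value (the EmployeeContext and property names are assumptions based on the surrounding text):

```csharp
using (var context = new EmployeeContext())
{
    // No EmploymentStarted value: the database default (today's date) is used
    context.Employees.Add(new Employee { Name = "John Doe" });

    // Explicit EmploymentStarted value: the default is bypassed
    context.Employees.Add(new Employee
    {
        Name = "Jane Doe",
        EmploymentStarted = new DateTime(2000, 1, 1)
    });

    context.SaveChanges();
}
```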
Output shows that the database generated a value for the first employee and our explicit value was used for the
second.
NOTE
We have a feature request on our backlog to do this automatically within the SQL Server provider.
context.Database.OpenConnection();
try
{
context.Database.ExecuteSqlCommand("SET IDENTITY_INSERT dbo.Employees ON");
context.SaveChanges();
context.Database.ExecuteSqlCommand("SET IDENTITY_INSERT dbo.Employees OFF");
}
finally
{
context.Database.CloseConnection();
}
Output shows that the supplied ids were saved to the database.
modelBuilder.Entity<Employee>()
.Property(b => b.LastPayRaise)
.ValueGeneratedOnAddOrUpdate();
modelBuilder.Entity<Employee>()
.Property(b => b.LastPayRaise)
.Metadata.AfterSaveBehavior = PropertySaveBehavior.Ignore;
NOTE
By default, EF Core will throw an exception if you try to save an explicit value for a property that is configured to be
generated during update. To avoid this, you need to drop down to the lower level metadata API and set the
AfterSaveBehavior (as shown above).
NOTE
Changes in EF Core 2.0: In previous releases the after-save behavior was controlled through the IsReadOnlyAfterSave
flag. This flag has been obsoleted and replaced by AfterSaveBehavior .
There is also a trigger in the database to generate values for the LastPayRaise column during UPDATE operations.
The following code increases the salary of two employees in the database.
For the first, no value is assigned to the Employee.LastPayRaise property, so it remains set to null.
For the second, we have set an explicit value of one week ago (back dating the pay raise).
using (var context = new EmployeeContext())
{
    var john = context.Employees.Single(e => e.Name == "John Doe");
    john.Salary = 200;

    var jane = context.Employees.Single(e => e.Name == "Jane Doe");
    jane.Salary = 200;
    // Back-date the pay raise by one week with an explicit value
    jane.LastPayRaise = DateTime.Today.AddDays(-7);

    context.SaveChanges();
}
Output shows that the database generated a value for the first employee and our explicit value was used for the
second.
We want EF Core to be available anywhere you can write .NET code, and we're still working towards that goal.
While EF Core's support on .NET Core and .NET Framework is covered by automated testing, and many
applications are known to be using it successfully, Mono, Xamarin, and UWP have some issues.
Overview
The following table provides guidance for each .NET implementation:
| .NET implementation | Status | EF Core 1.x requirements | EF Core 2.x requirements (1) |
| .NET Core (ASP.NET Core, Console, etc.) | Fully supported and recommended | .NET Core SDK 1.x | .NET Core SDK 2.x |
| .NET Framework (WinForms, WPF, ASP.NET, Console, etc.) | Fully supported and recommended. EF6 also available (2) | .NET Framework 4.5.1 | .NET Framework 4.6.1 |
| Universal Windows Platform (4) | EF Core 2.0.1 recommended | .NET Core UWP 5.x package | .NET Core UWP 6.x package |
(1) EF Core 2.0 targets and therefore requires .NET implementations that support .NET Standard 2.0.
(2) See Compare EF Core & EF6 to choose the right technology.
(3) There are issues and known limitations with Xamarin which may prevent some applications developed using EF
Core 2.0 from working correctly. Check the list of active issues for workarounds.
(4) See the Universal Windows Platform section of this article.
Entity Framework Core can access many different databases through plug-in libraries called database providers.
Current providers
IMPORTANT
EF Core providers are built by a variety of sources. Not all providers are maintained as part of the Entity Framework Core
Project. When considering a provider, be sure to evaluate quality, licensing, support, etc. to ensure they meet your
requirements. Also make sure you review each provider's documentation for detailed version compatibility information.
| EntityFrameworkCore.SqlServerCompact40 | SQL Server Compact 4.0 | Erik Ejlskov Jensen | .NET Framework | wiki |
| EntityFrameworkCore.SqlServerCompact35 | SQL Server Compact 3.5 | Erik Ejlskov Jensen | .NET Framework | wiki |
Future Providers
Cosmos DB
We have been developing an EF Core provider for the SQL API in Cosmos DB. This will be the first complete
document-oriented database provider we have produced, and what we learn from this exercise will inform
improvements in the design of future releases of EF Core and possibly other non-relational providers. A
preview is available on the NuGet Gallery.
Oracle first-party provider
The Oracle .NET team has published the beta of the Oracle provider for EF Core. Please direct any questions
about this provider, including the release timeline, to the Oracle Community Site.
install-package provider_package_name
Once installed, you will configure the provider in your DbContext , either in the OnConfiguring method or in the
AddDbContext method if you are using a dependency injection container. For example, the following line
configures the SQL Server provider with the passed connection string:
optionsBuilder.UseSqlServer(
    @"Server=(localdb)\mssqllocaldb;Database=MyDatabase;Trusted_Connection=True;");
Database providers can extend EF Core to enable functionality unique to specific databases. Some concepts are
common to most databases, and are included in the primary EF Core components. Such concepts include
expressing queries in LINQ, transactions, and tracking changes to objects once they are loaded from the
database. Some concepts are specific to a particular provider. For example, the SQL Server provider allows you
to configure memory-optimized tables (a feature specific to SQL Server). Other concepts are specific to a class
of providers. For example, EF Core providers for relational databases build on the common
Microsoft.EntityFrameworkCore.Relational library, which provides APIs for configuring table and column
mappings, foreign key constraints, etc. Providers are usually distributed as NuGet packages.
IMPORTANT
When a new patch version of EF Core is released, it often includes updates to the
Microsoft.EntityFrameworkCore.Relational package. When you add a relational database provider, this package
becomes a transitive dependency of your application. But many providers are released independently from EF Core and
may not be updated to depend on the newer patch version of that package. In order to make sure you will get all bug
fixes, it is recommended that you add the patch version of Microsoft.EntityFrameworkCore.Relational as a direct
dependency of your application.
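For example, assuming SQL Server as the provider and 2.1.3 as the current patch version (the version numbers here are illustrative), the project file would pin the Relational package directly:

```xml
<ItemGroup>
  <!-- The provider pulls Relational in transitively... -->
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.1.3" />
  <!-- ...but a direct reference ensures the patched Relational package is used -->
  <PackageReference Include="Microsoft.EntityFrameworkCore.Relational" Version="2.1.3" />
</ItemGroup>
```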
Microsoft SQL Server EF Core Database Provider
8/27/2018 • 2 minutes to read
This database provider allows Entity Framework Core to be used with Microsoft SQL Server (including SQL
Azure). The provider is maintained as part of the Entity Framework Core Project.
Install
Install the Microsoft.EntityFrameworkCore.SqlServer NuGet package.
Install-Package Microsoft.EntityFrameworkCore.SqlServer
Get Started
The following resources will help you get started with this provider.
Getting Started on .NET Framework (Console, WinForms, WPF, etc.)
Getting Started on ASP.NET Core
UnicornStore Sample Application
Supported Platforms
.NET Framework (4.5.1 onwards)
.NET Core
Mono (4.2.0 onwards)
Caution: Using this provider on Mono will make use of the Mono SQL Client implementation, which has a
number of known issues. For example, it does not support secure connections (SSL).
Memory-Optimized Tables support in SQL Server EF
Core Database Provider
8/27/2018 • 2 minutes to read
NOTE
This feature was introduced in EF Core 1.1.
Memory-Optimized Tables are a feature of SQL Server where the entire table resides in memory. A second copy of
the table data is maintained on disk, but only for durability purposes. Data in memory-optimized tables is only
read from disk during database recovery, for example, after a server restart.
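Mapping an entity to a memory-optimized table is configured in OnModelCreating; a minimal sketch, assuming a Blog entity:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Requires a database with a memory-optimized filegroup
    modelBuilder.Entity<Blog>()
        .ForSqlServerIsMemoryOptimized();
}
```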
This database provider allows Entity Framework Core to be used with SQLite. The provider is maintained as part
of the Entity Framework Core project.
Install
Install the Microsoft.EntityFrameworkCore.Sqlite NuGet package.
Install-Package Microsoft.EntityFrameworkCore.Sqlite
Get Started
The following resources will help you get started with this provider.
Local SQLite on UWP
.NET Core Application to New SQLite Database
Unicorn Clicker Sample Application
Unicorn Packer Sample Application
Supported Platforms
.NET Framework (4.5.1 onwards)
.NET Core
Mono (4.2.0 onwards)
Universal Windows Platform
Limitations
See SQLite Limitations for some important limitations of the SQLite provider.
SQLite EF Core Database Provider Limitations
6/26/2019 • 2 minutes to read
The SQLite provider has a number of migrations limitations. Most of these limitations are a result of limitations in
the underlying SQLite database engine and are not specific to EF.
Modeling limitations
The common relational library (shared by Entity Framework relational database providers) defines APIs for
modelling concepts that are common to most relational database engines. A couple of these concepts are not
supported by the SQLite provider.
Schemas
Sequences
Computed columns
Query limitations
SQLite doesn't natively support the following data types. EF Core can read and write values of these types, and
querying for equality ( where e.Property == value ) is also supported. Other operations, however, like comparison
and ordering, will require evaluation on the client.
DateTimeOffset
Decimal
TimeSpan
UInt64
Instead of DateTimeOffset , we recommend using DateTime values. When handling multiple time zones, we
recommend converting the values to UTC before saving and then converting back to the appropriate time zone.
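If a DateTimeOffset property must remain in the model, a value converter can apply the UTC conversion at the mapping layer; a minimal sketch (the entity and property names are illustrative):

```csharp
modelBuilder.Entity<MyEntity>()
    .Property(e => e.CreatedAt)
    .HasConversion(
        v => v.UtcDateTime,                         // DateTimeOffset -> UTC DateTime when saving
        v => new DateTimeOffset(v, TimeSpan.Zero)); // DateTime -> DateTimeOffset when reading
```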
The Decimal type provides a high level of precision. If you don't need that level of precision, however, we
recommend using double instead. You can use a value converter to continue using decimal in your classes.
modelBuilder.Entity<MyEntity>()
.Property(e => e.DecimalProperty)
.HasConversion<double>();
Migrations limitations
The SQLite database engine does not support a number of schema operations that are supported by the majority
of other relational databases. If you attempt to apply one of the unsupported operations to a SQLite database then
a NotSupportedException will be thrown.
| Operation | Supported? | Requires version |
| AddColumn | ✔ | 1.0 |
| AddForeignKey | ✗ | |
| AddPrimaryKey | ✗ | |
| AddUniqueConstraint | ✗ | |
| AlterColumn | ✗ | |
| CreateIndex | ✔ | 1.0 |
| CreateTable | ✔ | 1.0 |
| DropColumn | ✗ | |
| DropForeignKey | ✗ | |
| DropIndex | ✔ | 1.0 |
| DropPrimaryKey | ✗ | |
| DropTable | ✔ | 1.0 |
| DropUniqueConstraint | ✗ | |
| RenameColumn | ✔ | 2.2.2 |
| RenameIndex | ✔ | 2.1 |
| RenameTable | ✔ | 1.0 |
| Insert | ✔ | 2.0 |
| Update | ✔ | 2.0 |
| Delete | ✔ | 2.0 |
This database provider allows Entity Framework Core to be used with an in-memory database. This can be useful
for testing, although the SQLite provider in in-memory mode may be a more appropriate test replacement for
relational databases. The provider is maintained as part of the Entity Framework Core Project.
Install
Install the Microsoft.EntityFrameworkCore.InMemory NuGet package.
Install-Package Microsoft.EntityFrameworkCore.InMemory
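Once installed, the provider is selected with UseInMemoryDatabase when building the context options; a minimal sketch (the database name is arbitrary):

```csharp
var options = new DbContextOptionsBuilder<BloggingContext>()
    .UseInMemoryDatabase(databaseName: "Test_database")
    .Options;

using (var context = new BloggingContext(options))
{
    // Queries and saves run entirely in memory
}
```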
Get Started
The following resources will help you get started with this provider.
Testing with InMemory
UnicornStore Sample Application Tests
Supported Platforms
.NET Framework (4.5.1 onwards)
.NET Core
Mono (4.2.0 onwards)
Universal Windows Platform
Writing a Database Provider
8/27/2018 • 2 minutes to read
For information about writing an Entity Framework Core database provider, see So you want to write an EF
Core provider by Arthur Vickers.
NOTE
These posts have not been updated since EF Core 1.1, and there have been significant changes since that time.
Issue 681 is tracking updates to this documentation.
The EF Core codebase is open source and contains several database providers that can be used as a reference. You
can find the source code at https://github.com/aspnet/EntityFrameworkCore. It may also be helpful to look at the
code for commonly used third-party providers, such as Npgsql, Pomelo MySQL, and SQL Server Compact. In
particular, these projects are set up to extend from and run the functional tests that we publish on NuGet. This
kind of setup is strongly recommended.
For example:
Microsoft.EntityFrameworkCore.SqlServer
Npgsql.EntityFrameworkCore.PostgreSQL
EntityFrameworkCore.SqlServerCompact40
Provider-impacting changes
4/18/2019 • 4 minutes to read
This page contains links to pull requests made on the EF Core repo that may require authors of other database
providers to react. The intention is to provide a starting point for authors of existing third-party database providers
when updating their provider to a new version.
We are starting this log with changes from 2.1 to 2.2. Prior to 2.1 we used the providers-beware and
providers-fyi labels on our issues and pull requests.
These tools and extensions provide additional functionality for Entity Framework Core 2.0 and later.
IMPORTANT
Extensions are built by a variety of sources and aren't maintained as part of the Entity Framework Core project. When
considering a third party extension, be sure to evaluate its quality, licensing, compatibility, support, etc. to ensure it meets
your requirements.
Tools
LLBLGen Pro
LLBLGen Pro is an entity modeling solution with support for Entity Framework and Entity Framework Core. It lets
you easily define your entity model and map it to your database, using database first or model first, so you can get
started writing queries right away.
Website
Devart Entity Developer
Entity Developer is a powerful ORM designer for ADO.NET Entity Framework, NHibernate, LinqConnect, Telerik
Data Access, and LINQ to SQL. It supports designing EF Core models visually, using model first or database first
approaches, and C# or Visual Basic code generation.
Website
EF Core Power Tools
EF Core Power Tools is a Visual Studio 2017 extension that exposes various EF Core design-time tasks in a simple
user interface. It includes reverse engineering of DbContext and entity classes from existing databases and SQL
Server DACPACs, management of database migrations, and model visualizations.
GitHub wiki
Entity Framework Visual Editor
Entity Framework Visual Editor is a Visual Studio extension that adds an ORM designer for visual design of EF 6
and EF Core classes. Code is generated using T4 templates, so it can be customized to suit any needs. It supports
inheritance, unidirectional and bidirectional associations, enumerations, and the ability to color-code your classes
and add text blocks to explain potentially arcane parts of your design.
Marketplace
CatFactory
CatFactory is a scaffolding engine for .NET Core that can automate the generation of DbContext classes, entities,
mapping configurations, and repository classes from a SQL Server database.
GitHub repository
LoreSoft's Entity Framework Core Generator
Entity Framework Core Generator (efg) is a .NET Core CLI tool that can generate EF Core models from an existing
database, much like dotnet ef dbcontext scaffold , but it also supports safe code regeneration via region
replacement or by parsing mapping files. This tool supports generating view models, validation, and object mapper
code.
Tutorial Documentation
Extensions
Microsoft.EntityFrameworkCore.AutoHistory
A plugin library that enables automatically recording the data changes performed by EF Core into a history table.
GitHub repository
Microsoft.EntityFrameworkCore.DynamicLinq
A .NET Core / .NET Standard port of System.Linq.Dynamic that includes async support with EF Core.
System.Linq.Dynamic originated as a Microsoft sample that shows how to construct LINQ queries dynamically
from string expressions rather than code.
GitHub repository
EFSecondLevelCache.Core
An extension that enables storing the results of EF Core queries into a second-level cache, so that subsequent
executions of the same queries can avoid accessing the database and retrieve the data directly from the cache.
GitHub repository
EntityFrameworkCore.PrimaryKey
This library allows retrieving the values of primary keys (including composite keys) from any entity as a dictionary.
GitHub repository
EntityFrameworkCore.TypedOriginalValues
This library enables strongly typed access to the original values of entity properties.
GitHub repository
Geco
Geco (Generator Console) is a simple code generator based on a console project that runs on .NET Core and uses
C# interpolated strings for code generation. Geco includes a reverse model generator for EF Core with support for
pluralization, singularization, and editable templates. It also provides a seed data script generator, a script runner,
and a database cleaner.
GitHub repository
LinqKit.Microsoft.EntityFrameworkCore
LinqKit.Microsoft.EntityFrameworkCore is an EF Core-compatible version of the LINQKit library. LINQKit is a free
set of extensions for LINQ to SQL and Entity Framework power users. It enables advanced functionality like
dynamic building of predicate expressions, and using expression variables in subqueries.
GitHub repository
NeinLinq.EntityFrameworkCore
NeinLinq extends LINQ providers such as Entity Framework to enable reusing functions, rewriting queries, and
building dynamic queries using translatable predicates and selectors.
GitHub repository
Microsoft.EntityFrameworkCore.UnitOfWork
A plugin for Microsoft.EntityFrameworkCore that supports the repository and unit-of-work patterns, and multiple
databases with distributed transaction support.
GitHub repository
EFCore.BulkExtensions
EF Core extensions for Bulk operations (Insert, Update, Delete).
GitHub repository
Bricelam.EntityFrameworkCore.Pluralizer
Adds design-time pluralization to EF Core.
GitHub repository
PomeloFoundation/Pomelo.EntityFrameworkCore.Extensions.ToSql
A simple extension method that obtains the SQL statement EF Core would generate for a given LINQ query in
simple scenarios. The ToSql method is limited to simple scenarios because EF Core can generate more than one
SQL statement for a single LINQ query, and different SQL statements depending on parameter values.
GitHub repository
Toolbelt.EntityFrameworkCore.IndexAttribute
Revival of [Index] attribute for EF Core (with extension for model building).
GitHub repository
EfCore.InMemoryHelpers
Provides a wrapper around the EF Core In-Memory Database Provider. Makes it act more like a relational provider.
GitHub repository
EFCore.TemporalSupport
An implementation of temporal support for EF Core.
GitHub repository
EntityFrameworkCore.Cacheable
A high-performance second-level query cache for EF Core.
GitHub repository
Entity Framework Core tools reference
5/31/2019 • 2 minutes to read
The Entity Framework Core tools help with design-time development tasks. They're primarily used to manage
Migrations and to scaffold a DbContext and entity types by reverse engineering the schema of a database.
The EF Core Package Manager Console tools run in the Package Manager Console in Visual Studio. These
tools work with both .NET Framework and .NET Core projects.
The EF Core .NET command-line interface (CLI) tools are an extension to the cross-platform .NET Core CLI
tools. These tools require a .NET Core SDK project (one with Sdk="Microsoft.NET.Sdk" or similar in the
project file).
Both tools expose the same functionality. If you're developing in Visual Studio, we recommend using the Package
Manager Console tools since they provide a more integrated experience.
Next steps
EF Core Package Manager Console tools reference
EF Core .NET CLI tools reference
Entity Framework Core tools reference - Package
Manager Console in Visual Studio
3/25/2019 • 8 minutes to read
The Package Manager Console (PMC) tools for Entity Framework Core perform design-time development tasks.
For example, they create migrations, apply migrations, and generate code for a model based on an existing
database. The commands run inside of Visual Studio using the Package Manager Console. These tools work with
both .NET Framework and .NET Core projects.
If you aren't using Visual Studio, we recommend the EF Core Command-line Tools instead. The CLI tools are
cross-platform and run inside a command prompt.
You don't have to do anything to install the tools, but you do have to:
Restore packages before using the tools in a new project.
Install a package to update the tools to a newer version.
To make sure that you're getting the latest version of the tools, we recommend that you also do the following
step:
Edit your .csproj file and add a line specifying the latest version of the
Microsoft.EntityFrameworkCore.Tools package. For example, the .csproj file might include an ItemGroup
that looks like this:
<ItemGroup>
<PackageReference Include="Microsoft.AspNetCore.App" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.1.3" />
<PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.1.1" />
</ItemGroup>
Update the tools when you get a message like the following example:
The EF Core tools version '2.1.1-rtm-30846' is older than that of the runtime '2.1.3-rtm-32065'. Update the
tools for the latest features and bug fixes.
To install the tools, run the following command in Package Manager Console:
Install-Package Microsoft.EntityFrameworkCore.Tools
Update the tools by running the following command in Package Manager Console.
Update-Package Microsoft.EntityFrameworkCore.Tools
To verify the installation, run:
Get-Help about_EntityFrameworkCore
The output looks like this (it doesn't tell you which version of the tools you're using):
_/\__
---==/ \\
___ ___ |. \|\
| __|| __| | ) \\\
| _| | _| \_/ | //|\\
|___||_| / \\\/\\
TOPIC
about_EntityFrameworkCore
SHORT DESCRIPTION
Provides information about the Entity Framework Core Package Manager Console Tools.
Common parameters
The following table shows parameters that are common to all of the EF Core commands:
PARAMETER DESCRIPTION
-Context <String>          The DbContext class to use. Class name only or fully qualified with namespaces. If this parameter is omitted, EF Core finds the context class. If there are multiple context classes, this parameter is required.
-Project <String>          The target project. If this parameter is omitted, the Default project for Package Manager Console is used as the target project.
-StartupProject <String>   The startup project. If this parameter is omitted, the Startup project in Solution properties is used as the target project.
TIP
The Context, Project, and StartupProject parameters support tab-expansion.
Add-Migration
Adds a new migration.
Parameters:
PARAMETER DESCRIPTION
-Name <String>        The name of the migration. This is a positional parameter and is required.
-OutputDir <String>   The directory (and sub-namespace) to use. Paths are relative to the target project directory. Defaults to "Migrations".
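As an illustrative sketch (the migration, context, and directory names below are hypothetical, not from this reference), a typical invocation might look like this:

```powershell
# Add a migration named AddBlogCreatedTimestamp, placing the generated files
# under Data/Migrations and targeting the BloggingContext context class.
# (All names here are illustrative.)
Add-Migration AddBlogCreatedTimestamp -OutputDir Data/Migrations -Context BloggingContext
```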
Drop-Database
Drops the database.
Parameters:
PARAMETER DESCRIPTION
-WhatIf Show which database would be dropped, but don't drop it.
Get-DbContext
Gets information about a DbContext type.
Remove-Migration
Removes the last migration (rolls back the code changes that were done for the migration).
Parameters:
PARAMETER DESCRIPTION
-Force   Revert the migration (roll back the changes that were applied to the database).
Scaffold-DbContext
Generates code for a DbContext and entity types for a database. In order for Scaffold-DbContext to generate an
entity type, the database table must have a primary key.
Parameters:
PARAMETER DESCRIPTION
-Connection <String>   The connection string to the database. For ASP.NET Core 2.x projects, the value can be name=<name of connection string>. In that case the name comes from the configuration sources that are set up for the project. This is a positional parameter and is required.
-Provider <String>     The provider to use. Typically this is the name of the NuGet package, for example: Microsoft.EntityFrameworkCore.SqlServer. This is a positional parameter and is required.
-OutputDir <String>    The directory to put files in. Paths are relative to the project directory.
-ContextDir <String>   The directory to put the DbContext file in. Paths are relative to the project directory.
-Schemas <String[]>    The schemas of tables to generate entity types for. If this parameter is omitted, all schemas are included.
-Tables <String[]>     The tables to generate entity types for. If this parameter is omitted, all tables are included.
-UseDatabaseNames      Use table and column names exactly as they appear in the database. If this parameter is omitted, database names are changed to more closely conform to C# name style conventions.
Example:
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;"
Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models
Example that scaffolds only selected tables and creates the context in a separate folder with a specified name:
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;"
Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models -Tables "Blog","Post" -ContextDir Context -Context
BlogContext
Script-Migration
Generates a SQL script that applies all of the changes from one selected migration to another selected migration.
Parameters:
PARAMETER DESCRIPTION
-Output <String>   The file to write the result to. If this parameter is omitted, the file is created with a generated name in the same folder as the app's runtime files are created, for example: /obj/Debug/netcoreapp2.1/ghbkztfz.sql/.
TIP
The To, From, and Output parameters support tab-expansion.
The following example creates a script for the InitialCreate migration, using the migration name.
The following example creates a script for all migrations after the InitialCreate migration, using the migration ID.
Update-Database
Updates the database to the last migration or to a specified migration.
TIP
The Migration parameter supports tab-expansion.
The following example reverts all migrations:
Update-Database -Migration 0
The following examples update the database to a specified migration. The first uses the migration name and the
second uses the migration ID:
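As a sketch of those two forms (the migration name and the timestamp in the migration ID are illustrative):

```powershell
# Update by migration name (illustrative):
Update-Database -Migration InitialCreate

# Update by migration ID, which prefixes the name with a generation timestamp (illustrative):
Update-Database -Migration 20180904195021_InitialCreate
```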
Additional resources
Migrations
Reverse Engineering
Entity Framework Core tools reference - .NET CLI
11/15/2018
The command-line interface (CLI) tools for Entity Framework Core perform design-time development tasks. For
example, they create migrations, apply migrations, and generate code for a model based on an existing database.
The commands are an extension to the cross-platform dotnet command, which is part of the .NET Core SDK.
These tools work with .NET Core projects.
If you're using Visual Studio, we recommend the Package Manager Console tools instead:
They automatically work with the current project selected in the Package Manager Console without
requiring that you manually switch directories.
They automatically open files generated by a command after the command is completed.
EF Core 1.x
Install the .NET Core SDK version 2.1.200. Later versions are not compatible with CLI tools for EF Core 1.0
and 1.1.
Configure the application to use the 2.1.200 SDK version by modifying its global.json file. This file is
normally included in the solution directory (one above the project).
Edit the project file and add Microsoft.EntityFrameworkCore.Tools.DotNet as a DotNetCliToolReference item.
Specify the latest 1.x version, for example: 1.1.6. See the project file example at the end of this section.
Install the latest 1.x version of the Microsoft.EntityFrameworkCore.Design package, for example:
dotnet add package Microsoft.EntityFrameworkCore.Design -v 1.1.6
With both package references added, the project file looks something like this:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp1.1</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.EntityFrameworkCore.Design"
Version="1.1.6"
PrivateAssets="All" />
</ItemGroup>
<ItemGroup>
<DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet"
Version="1.1.6" />
</ItemGroup>
</Project>
A package reference with PrivateAssets="All" isn't exposed to projects that reference this project. This
restriction is especially useful for packages that are typically only used during development.
Verify installation
Run the following commands to verify that EF Core CLI tools are correctly installed:
dotnet restore
dotnet ef
The output from the command identifies the version of the tools in use:
_/\__
---==/ \\
___ ___ |. \|\
| __|| __| | ) \\\
| _| | _| \_/ | //|\\
|___||_| / \\\/\\
The startup project and target project are often the same project. A typical scenario where they are separate
projects is when:
The EF Core context and entity classes are in a .NET Core class library.
A .NET Core console app or web app references the class library.
It's also possible to put migrations code in a class library separate from the EF Core context.
Other target frameworks
The CLI tools work with .NET Core projects and .NET Framework projects. Apps that have the EF Core model in a
.NET Standard class library might not have a .NET Core or .NET Framework project. For example, this is true of
Xamarin and Universal Windows Platform apps. In such cases, you can create a .NET Core console app project
whose only purpose is to act as startup project for the tools. The project can be a dummy project with no real
code; it is only needed to provide a target for the tooling.
Why is a dummy project required? As mentioned earlier, the tools have to execute application code at design
time. To do that, they need to use the .NET Core runtime. When the EF Core model is in a project that targets
.NET Core or .NET Framework, the EF Core tools borrow the runtime from the project. They can't do that if the
EF Core model is in a .NET Standard class library. The .NET Standard is not an actual .NET implementation; it's a
specification of a set of APIs that .NET implementations must support. Therefore .NET Standard is not sufficient
for the EF Core tools to execute application code. The dummy project you create to use as startup project
provides a concrete target platform into which the tools can load the .NET Standard class library.
ASP.NET Core environment
To specify the environment for ASP.NET Core projects, set the ASPNETCORE_ENVIRONMENT environment
variable before running commands.
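For example, in a bash shell the variable can be exported before invoking the CLI (the environment name is illustrative):

```shell
# Set the environment for subsequent EF Core CLI commands (bash syntax).
export ASPNETCORE_ENVIRONMENT=Development
echo "$ASPNETCORE_ENVIRONMENT"
# dotnet ef database update   # would now run with the Development environment
```

On Windows, use `set` (cmd) or `$env:ASPNETCORE_ENVIRONMENT` (PowerShell) instead.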
Common options
The following examples update the database to a specified migration. The first uses the migration name and the
second uses the migration ID:
ARGUMENT DESCRIPTION
<CONNECTION>   The connection string to the database. For ASP.NET Core 2.x projects, the value can be name=<name of connection string>. In that case the name comes from the configuration sources that are set up for the project.
<PROVIDER>     The provider to use. Typically this is the name of the NuGet package, for example: Microsoft.EntityFrameworkCore.SqlServer.
The following example scaffolds all schemas and tables and puts the new files in the Models folder.
The following example scaffolds only selected tables and creates the context in a separate folder with a specified
name:
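A sketch of what those invocations can look like, mirroring the Package Manager Console examples earlier in this reference (the connection string and names are illustrative):

```console
dotnet ef dbcontext scaffold "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -o Models

dotnet ef dbcontext scaffold "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -o Models -t Blog -t Post --context-dir Context -c BlogContext
```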
The following example creates a script for all migrations after the InitialCreate migration.
Additional resources
Migrations
Reverse Engineering
Design-time DbContext Creation
8/27/2018
Some of the EF Core Tools commands (for example, the Migrations commands) require a derived DbContext
instance to be created at design time in order to gather details about the application's entity types and how they
map to a database schema. In most cases, it is desirable for the DbContext created this way to be configured
similarly to how it would be configured at run time.
There are various ways the tools try to create the DbContext:
NOTE
When you create a new ASP.NET Core 2.0 application, this hook is included by default. In previous versions of EF Core and
ASP.NET Core, the tools try to invoke Startup.ConfigureServices directly in order to obtain the application's service
provider, but this pattern no longer works correctly in ASP.NET Core 2.0 applications. If you are upgrading an ASP.NET Core
1.x application to 2.0, you can modify your Program class to follow the new pattern.
The DbContext itself and any dependencies in its constructor need to be registered as services in the application's
service provider. This can be easily achieved by having a constructor on the DbContext that takes an instance of
DbContextOptions<TContext> as an argument and using the AddDbContext<TContext> method.
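As a sketch of that pattern (the class names and connection string are illustrative, not from the article):

```csharp
public class BloggingContext : DbContext
{
    // The options-taking constructor lets both the runtime and the
    // design-time tools supply a configured DbContextOptions instance.
    public BloggingContext(DbContextOptions<BloggingContext> options)
        : base(options)
    { }

    public DbSet<Blog> Blogs { get; set; }
}

// In Startup.ConfigureServices:
services.AddDbContext<BloggingContext>(
    options => options.UseSqlite("Data Source=blog.db"));
```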
You can also tell the tools how to create your DbContext by implementing the
IDesignTimeDbContextFactory<TContext> interface:
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;
namespace MyProject
{
    public class BloggingContextFactory : IDesignTimeDbContextFactory<BloggingContext>
    {
        public BloggingContext CreateDbContext(string[] args)
        {
            var optionsBuilder = new DbContextOptionsBuilder<BloggingContext>();
            optionsBuilder.UseSqlite("Data Source=blog.db");

            return new BloggingContext(optionsBuilder.Options);
        }
    }
}
NOTE
The args parameter is currently unused. There is an issue tracking the ability to specify design-time arguments from the
tools.
A design-time factory can be especially useful if you need to configure the DbContext differently for design time
than at run time, if the DbContext constructor takes additional parameters that are not registered in DI, if you are
not using DI at all, or if for some reason you prefer not to have a BuildWebHost method in your ASP.NET Core
application's Main class.
Design-time services
8/27/2018
Some services used by the tools are only used at design time. These services are managed separately from EF
Core's runtime services to prevent them from being deployed with your app. To override one of these services (for
example the service to generate migration files), add an implementation of IDesignTimeServices to your startup
project.
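A minimal sketch of such an implementation (MyMigrationsCodeGenerator is a hypothetical custom generator, not a built-in type):

```csharp
public class MyDesignTimeServices : IDesignTimeServices
{
    public void ConfigureDesignTimeServices(IServiceCollection services)
        // Swap in a custom migrations code generator at design time.
        => services.AddSingleton<IMigrationsCodeGenerator, MyMigrationsCodeGenerator>();
}
```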
Entity Framework 6 (EF6) is a tried and tested object-relational mapper (O/RM ) for .NET with many years of
feature development and stabilization.
As an O/RM, EF6 reduces the impedance mismatch between the relational and object-oriented worlds, enabling
developers to write applications that interact with data stored in relational databases using strongly-typed .NET
objects that represent the application's domain, and eliminating the need for a large portion of the data access
"plumbing" code that they usually need to write.
EF6 implements many popular O/RM features:
Mapping of POCO entity classes that do not depend on any EF types
Automatic change tracking
Identity resolution and Unit of Work
Eager, lazy and explicit loading
Translation of strongly-typed queries using LINQ (Language INtegrated Query)
Rich mapping capabilities, including support for:
One-to-one, one-to-many and many-to-many relationships
Inheritance (table per hierarchy, table per type and table per concrete class)
Complex types
Stored procedures
A visual designer to create entity models.
A "Code First" experience to create entity models by writing code.
Models can either be generated from existing databases and then hand-edited, or they can be created from
scratch and then used to generate new databases.
Integration with .NET Framework application models, including ASP.NET, and through databinding, with WPF
and WinForms.
Database connectivity based on ADO.NET and numerous providers available to connect to SQL Server, Oracle,
MySQL, SQLite, PostgreSQL, DB2, etc.
Get Started
Add the EntityFramework NuGet package to your project or install the Entity Framework Tools for Visual Studio.
Then watch videos, read tutorials, and advanced documentation to help you make the most of EF6.
We highly recommend that you use the latest released version of Entity Framework to ensure you get the latest
features and the highest stability. However, we realize that you may need to use a previous version, or that you
may want to experiment with new improvements in the latest pre-release. To install specific versions of EF, see Get
Entity Framework.
This page documents the features that are included on each new release.
Recent releases
EF Tools Update in Visual Studio 2017 15.7
In May 2018, we released an updated version of the EF Tools as part of Visual Studio 2017 15.7. It includes
improvements for some common pain points:
Fixes for several user interface accessibility bugs
Workaround for SQL Server performance regression when generating models from existing databases #4
Support for updating models for larger models on SQL Server #185
Another improvement in this new version of EF Tools is that it installs the EF 6.2 runtime when creating a
model in a new project. With older versions of Visual Studio, it is possible to use the EF 6.2 runtime (as well as any
past version of EF) by installing the corresponding version of the NuGet package.
EF 6.2 Runtime
The EF 6.2 runtime was released to NuGet in October of 2017. Thanks in great part to the efforts of our
community of open source contributors, EF 6.2 includes numerous bug fixes and product enhancements.
Here is a brief list of the most important changes affecting the EF 6.2 runtime:
Reduced startup time by loading finished Code First models from a persistent cache #275
Fluent API to define indexes #274
DbFunctions.Like() to enable writing LINQ queries that translate to LIKE in SQL #241
Migrate.exe now supports -script option #240
EF6 can now work with key values generated by a sequence in SQL Server #165
Update list of transient errors for SQL Azure Execution Strategy #83
Bug: Retrying queries or SQL commands fails with "The SqlParameter is already contained by another
SqlParameterCollection" #81
Bug: Evaluation of DbQuery.ToString() frequently times out in the debugger #73
Future Releases
For information on future version of EF6, please look at our Roadmap.
Past Releases
The Past Releases page contains an archive of all previous versions of EF and the major features that were
introduced on each release.
Future Versions of Entity Framework
10/25/2018
Here you can find information on upcoming versions of Entity Framework. While most of the focus of the EF team
is nowadays on adding new features and improvements to EF Core, we plan to still fix important bugs, implement
small improvements, and incorporate community contributions in the EF6 codebase.
Staying Up To Date
Besides this page, new releases are usually announced on the .NET team blog and our Twitter account,
@efmagicunicorns.
Past Releases of Entity Framework
9/18/2018
The first version of Entity Framework was released in 2008, as part of .NET Framework 3.5 SP1 and Visual Studio
2008 SP1.
Starting with the EF4.1 release it has shipped as the EntityFramework NuGet Package - currently one of the most
popular packages on NuGet.org.
Between versions 4.1 and 5.0, the EntityFramework NuGet package extended the EF libraries that shipped as part
of .NET Framework.
Starting with version 6, EF became an open source project and also moved completely out of band from the .NET
Framework. This means that when you add the EntityFramework version 6 NuGet package to an application, you
are getting a complete copy of the EF library that does not depend on the EF bits that ship as part of .NET
Framework. This helped somewhat accelerate the pace of development and delivery of new features.
In June 2016, we released EF Core 1.0. EF Core is based on a new codebase and is designed as a more lightweight
and extensible version of EF. Currently EF Core is the main focus of development for the Entity Framework Team
at Microsoft. This means there are no new major features planned for EF6. However EF6 is still maintained as an
open source project and a supported Microsoft product.
Here is the list of past releases, in reverse chronological order, with information on the new features that were
introduced in each release.
EF 6.1.3
The EF 6.1.3 runtime was released to NuGet in October of 2015. This release contains only fixes to high-priority
defects and regressions reported on the 6.1.2 release. The fixes include:
Query: Regression in EF 6.1.2: OUTER APPLY introduced and more complex queries for 1:1 relationships and
“let” clause
TPT problem with hiding base class property in inherited class
DbMigration.Sql fails when the word ‘go’ is contained in the text
Create compatibility flag for UnionAll and Intersect flattening support
Query with multiple Includes does not work in 6.1.2 (working in 6.1.1)
“You have an error in your SQL syntax” exception after upgrading from EF 6.1.1 to 6.1.2
EF 6.1.2
The EF 6.1.2 runtime was released to NuGet in December of 2014. This version is mostly about bug fixes. We also
accepted a couple of noteworthy changes from members of the community:
Query cache parameters can be configured from the app/web.configuration file
<entityFramework>
<queryCache size='1000' cleaningIntervalInSeconds='-1'/>
</entityFramework>
SqlFile and SqlResource methods on DbMigration allow you to run a SQL script stored as a file or
embedded resource.
EF 6.1.1
The EF 6.1.1 runtime was released to NuGet in June of 2014. This version contains fixes for issues that a number
of people have encountered. Among others:
Designer: Error opening EF5 edmx with decimal precision in EF6 designer
Default instance detection logic for LocalDB doesn't work with SQL Server 2014
EF 6.1.0
The EF 6.1.0 runtime was released to NuGet in March of 2014. This minor update includes a significant number of
new features:
Tooling consolidation provides a consistent way to create a new EF model. This feature extends the
ADO.NET Entity Data Model wizard to support creating Code First models, including reverse engineering from
an existing database. These features were previously available in Beta quality in the EF Power Tools.
Handling of transaction commit failures provides the CommitFailureHandler which makes use of the newly
introduced ability to intercept transaction operations. The CommitFailureHandler allows automatic recovery
from connection failures whilst committing a transaction.
IndexAttribute allows indexes to be specified by placing an [Index] attribute on a property (or properties) in
your Code First model. Code First will then create a corresponding index in the database.
The public mapping API provides access to the information EF has on how properties and types are mapped
to columns and tables in the database. In past releases this API was internal.
Ability to configure interceptors via the App/Web.config file allows interceptors to be added without
recompiling the application.
System.Data.Entity.Infrastructure.Interception.DatabaseLogger is a new interceptor that makes it easy to
log all database operations to a file. In combination with the previous feature, this allows you to easily switch on
logging of database operations for a deployed application, without the need to recompile.
Migrations model change detection has been improved so that scaffolded migrations are more accurate;
performance of the change detection process has also been enhanced.
Performance improvements including reduced database operations during initialization, optimizations for
null equality comparison in LINQ queries, faster view generation (model creation) in more scenarios, and more
efficient materialization of tracked entities with multiple associations.
EF 6.0.2
The EF 6.0.2 runtime was released to NuGet in December of 2013. This patch release is limited to fixing issues that
were introduced in the EF6 release (regressions in performance/behavior since EF5).
EF 6.0.1
The EF 6.0.1 runtime was released to NuGet in October of 2013 simultaneously with EF 6.0.0, because the latter
was embedded in a version of Visual Studio that had locked down a few months before. This patch release is
limited to fixing issues that were introduced in the EF6 release (regressions in performance/behavior since EF5).
The most notable changes were to fix some performance issues during warm-up for EF models. This was
important as warm-up performance was an area of focus in EF6 and these issues were negating some of the other
performance gains made in EF6.
EF 6.0
The EF 6.0.0 runtime was released to NuGet in October of 2013. This is the first version in which a complete EF
runtime is included in the EntityFramework NuGet Package which does not depend on the EF bits that are part of
the .NET Framework. Moving the remaining parts of the runtime to the NuGet package required a number of
breaking changes for existing code. See the section on Upgrading to Entity Framework 6 for more details on the
manual steps required to upgrade.
This release includes numerous new features. The following features work for models created with Code First or
the EF Designer:
Async Query and Save adds support for the task-based asynchronous patterns that were introduced in .NET
4.5.
Connection Resiliency enables automatic recovery from transient connection failures.
Code-Based Configuration gives you the option of performing configuration – that was traditionally
performed in a config file – in code.
Dependency Resolution introduces support for the Service Locator pattern and we've factored out some
pieces of functionality that can be replaced with custom implementations.
Interception/SQL logging provides low-level building blocks for interception of EF operations with simple
SQL logging built on top.
Testability improvements make it easier to create test doubles for DbContext and DbSet when using a
mocking framework or writing your own test doubles.
DbContext can now be created with a DbConnection that is already opened which enables scenarios
where it would be helpful if the connection could be open when creating the context (such as sharing a
connection between components where you cannot guarantee the state of the connection).
Improved Transaction Support provides support for a transaction external to the framework as well as
improved ways of creating a transaction within the Framework.
Enums, Spatial and Better Performance on .NET 4.0 - By moving the core components that used to be in
the .NET Framework into the EF NuGet package we are now able to offer enum support, spatial data types and
the performance improvements from EF5 on .NET 4.0.
Improved performance of Enumerable.Contains in LINQ queries.
Improved warm up time (view generation), especially for large models.
Pluggable Pluralization & Singularization Service.
Custom implementations of Equals or GetHashCode on entity classes are now supported.
DbSet.AddRange/RemoveRange provides an optimized way to add or remove multiple entities from a set.
DbChangeTracker.HasChanges provides an easy and efficient way to see if there are any pending changes to
be saved to the database.
SqlCeFunctions provides a SQL Compact equivalent to the SqlFunctions.
The following features apply to Code First only:
Custom Code First Conventions allow you to write your own conventions to help avoid repetitive configuration. We
provide a simple API for lightweight conventions as well as some more complex building blocks to allow you to
author more complicated conventions.
Code First Mapping to Insert/Update/Delete Stored Procedures is now supported.
Idempotent migrations scripts allow you to generate a SQL script that can upgrade a database at any
version up to the latest version.
Configurable Migrations History Table allows you to customize the definition of the migrations history
table. This is particularly useful for database providers that require the appropriate data types etc. to be
specified for the Migrations History table to work correctly.
Multiple Contexts per Database removes the previous limitation of one Code First model per database
when using Migrations or when Code First automatically created the database for you.
DbModelBuilder.HasDefaultSchema is a new Code First API that allows the default database schema for a
Code First model to be configured in one place. Previously the Code First default schema was hard-coded to
"dbo" and the only way to configure the schema to which a table belonged was via the ToTable API.
DbModelBuilder.Configurations.AddFromAssembly method allows you to easily add all configuration
classes defined in an assembly when you are using configuration classes with the Code First Fluent API.
Custom Migrations Operations enable you to add additional operations to be used in your code-based
migrations.
Default transaction isolation level is changed to READ_COMMITTED_SNAPSHOT for databases
created using Code First, allowing for more scalability and fewer deadlocks.
Entity and complex types can now be nested inside classes.
EF 5.0
The EF 5.0.0 runtime was released to NuGet in August of 2012. This release introduces some new features
including enum support, table-valued functions, spatial data types and various performance improvements.
The Entity Framework Designer in Visual Studio 2012 also introduces support for multiple-diagrams per model,
coloring of shapes on the design surface and batch import of stored procedures.
Here is a list of content we put together specifically for the EF 5 release.
EF 5 Release Post
New Features in EF5
Enum Support in Code First
Enum Support in EF Designer
Spatial Data Types in Code First
Spatial Data Types in EF Designer
Provider Support for Spatial Types
Table-Valued Functions
Multiple Diagrams per Model
Setting up your model
Creating a Model
Connections and Models
Performance Considerations
Working with Microsoft SQL Azure
Configuration File Settings
Glossary
Code First
Code First to a new database (walkthrough and video)
Code First to an existing database (walkthrough and video)
Conventions
Data Annotations
Fluent API - Configuring/Mapping Properties & Types
Fluent API - Configuring Relationships
Fluent API with VB.NET
Code First Migrations
Automatic Code First Migrations
Migrate.exe
Defining DbSets
EF Designer
Model First (walkthrough and video)
Database First (walkthrough and video)
Complex Types
Associations/Relationships
TPT Inheritance Pattern
TPH Inheritance Pattern
Query with Stored Procedures
Stored Procedures with Multiple Result Sets
Insert, Update & Delete with Stored Procedures
Map an Entity to Multiple Tables (Entity Splitting)
Map Multiple Entities to One Table (Table Splitting)
Defining Queries
Code Generation Templates
Reverting to ObjectContext
Using Your Model
Working with DbContext
Querying/Finding Entities
Working with Relationships
Loading Related Entities
Working with Local Data
N-Tier Applications
Raw SQL Queries
Optimistic Concurrency Patterns
Working with Proxies
Automatic Detect Changes
No-Tracking Queries
The Load Method
Add/Attach and Entity States
Working with Property Values
Data Binding with WPF (Windows Presentation Foundation)
Data Binding with WinForms (Windows Forms)
EF 4.3.1
The EF 4.3.1 runtime was released to NuGet in February 2012 shortly after EF 4.3.0. This patch release included
some bug fixes to the EF 4.3 release and introduced better LocalDB support for customers using EF 4.3 with
Visual Studio 2012.
Here is a list of content we put together specifically for the EF 4.3.1 release, most of the content provided for EF 4.1
still applies to EF 4.3 as well.
EF 4.3.1 Release Blog Post
EF 4.3
The EF 4.3.0 runtime was released to NuGet in February of 2012. This release included the new Code First
Migrations feature that allows a database created by Code First to be incrementally changed as your Code First
model evolves.
Here is a list of content we put together specifically for the EF 4.3 release, most of the content provided for EF 4.1
still applies to EF 4.3 as well:
EF 4.3 Release Post
EF 4.3 Code-Based Migrations Walkthrough
EF 4.3 Automatic Migrations Walkthrough
EF 4.2
The EF 4.2.0 runtime was released to NuGet in November of 2011. This release includes bug fixes to the EF 4.1.1
release. Because this release only included bug fixes, it could have been the EF 4.1.2 patch release, but we opted to
move to 4.2 so that we could move away from the date-based patch version numbers used in the 4.1.x releases
and adopt the Semantic Versioning standard.
Here is a list of content we put together specifically for the EF 4.2 release; the content provided for EF 4.1 still applies to EF 4.2 as well.
EF 4.2 Release Post
Code First Walkthrough
Model & Database First Walkthrough
EF 4.1.1
The EF 4.1.10715 runtime was released to NuGet in July of 2011. In addition to bug fixes this patch release
introduced some components to make it easier for design time tooling to work with a Code First model. These
components are used by Code First Migrations (included in EF 4.3) and the EF Power Tools.
You’ll notice the strange version number, 4.1.10715, of the package. We used to use date-based patch versions before we decided to adopt Semantic Versioning. Think of this version as EF 4.1 patch 1 (or EF 4.1.1).
Here is a list of content we put together for the 4.1.1 release:
EF 4.1.1 Release Post
EF 4.1
The EF 4.1.10331 runtime was the first to be published on NuGet, in April of 2011. This release included the
simplified DbContext API and the Code First workflow.
You will notice the strange version number, 4.1.10331, which should really have been 4.1. In addition there is a
4.1.10311 version which should have been 4.1.0-rc (the ‘rc’ stands for ‘release candidate’). We used to use date
based patch versions before we decided to adopt Semantic Versioning.
Here is a list of content we put together for the 4.1 release. Much of it still applies to later releases of Entity
Framework:
EF 4.1 Release Post
Code First Walkthrough
Model & Database First Walkthrough
SQL Azure Federations and the Entity Framework
EF 4.0
This release was included in .NET Framework 4 and Visual Studio 2010, in April of 2010. Important new features
in this release included POCO support, foreign key mapping, lazy loading, testability improvements, customizable
code generation and the Model First workflow.
Although it was the second release of Entity Framework, it was named EF 4 to align with the .NET Framework
version that it shipped with. After this release, we started making Entity Framework available on NuGet and
adopted semantic versioning since we were no longer tied to the .NET Framework Version.
Note that some subsequent versions of .NET Framework have shipped with significant updates to the included EF
bits. In fact, many of the new features of EF 5.0 were implemented as improvements on these bits. However, in
order to rationalize the versioning story for EF, we continue to refer to the EF bits that are part of the .NET
Framework as the EF 4.0 runtime, while all newer versions consist of the EntityFramework NuGet Package.
EF 3.5
The initial version of Entity Framework was included in .NET 3.5 Service Pack 1 and Visual Studio 2008 SP1,
released in August of 2008. This release provided basic O/RM support using the Database First workflow.
Upgrading to Entity Framework 6
12/10/2018 • 3 minutes to read • Edit Online
In previous versions of EF the code was split between core libraries (primarily System.Data.Entity.dll) shipped as
part of the .NET Framework and out-of-band (OOB) libraries (primarily EntityFramework.dll) shipped in a NuGet
package. EF6 takes the code from the core libraries and incorporates it into the OOB libraries. This was necessary
in order to allow EF to be made open source and for it to be able to evolve at a different pace from .NET
Framework. The consequence of this is that applications will need to be rebuilt against the moved types.
This should be straightforward for applications that make use of DbContext as shipped in EF 4.1 and later. A little
more work is required for applications that make use of ObjectContext but it still isn’t hard to do.
Here is a checklist of the things you need to do to upgrade an existing application to EF6.
NOTE
If a previous version of the EntityFramework NuGet package was installed this will upgrade it to EF6.
Alternatively, you can run the following command from Package Manager Console:
Install-Package EntityFramework
NOTE
There are currently only EF 6.x DbContext Generator templates available for Visual Studio 2012 and 2013.
1. Delete existing code-generation templates. These files will typically be named <edmx_file_name>.tt and
<edmx_file_name>.Context.tt and be nested under your edmx file in Solution Explorer. You can select
the templates in Solution Explorer and press the Del key to delete them.
NOTE
In Web Site projects the templates will not be nested under your edmx file, but listed alongside it in Solution Explorer.
NOTE
In VB.NET projects you will need to enable 'Show All Files' to be able to see the nested template files.
2. Add the appropriate EF 6.x code generation template. Open your model in the EF Designer, right-click on
the design surface and select Add Code Generation Item...
If you are using the DbContext API (recommended) then EF 6.x DbContext Generator will be
available under the Data tab.
NOTE
If you are using Visual Studio 2012, you will need to install the EF 6 Tools to have this template. See Get
Entity Framework for details.
If you are using the ObjectContext API then you will need to select the Online tab and search for EF
6.x EntityObject Generator.
3. If you applied any customizations to the code generation templates you will need to re-apply them to the
updated templates.
NOTE
This class has been renamed; a class with the old name still exists and works, but it is now marked as obsolete.
Spatial classes (for example, DbGeography, DbGeometry) have moved from System.Data.Spatial to System.Data.Entity.Spatial.
NOTE
Some types in the System.Data namespace are in System.Data.dll which is not an EF assembly. These types have not moved
and so their namespaces remain unchanged.
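As a concrete illustration of the namespace move, a file that used the old spatial namespace would be updated like this (the Store class is purely illustrative):

```csharp
// Before upgrading to EF6:
// using System.Data.Spatial;

// After upgrading to EF6, the spatial types live in the EF assembly:
using System.Data.Entity.Spatial;

public class Store
{
    public int StoreId { get; set; }

    // DbGeography now resolves from System.Data.Entity.Spatial
    public DbGeography Location { get; set; }
}
```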
Visual Studio Releases
We recommend always using the latest version of Visual Studio because it contains the latest tools for .NET,
NuGet, and Entity Framework. In fact, the various samples and walkthroughs across the Entity Framework
documentation assume that you are using a recent version of Visual Studio.
It is possible however to use older versions of Visual Studio with different versions of Entity Framework as long as
you take into account some differences:
This guide contains a collection of links to selected documentation articles, walkthroughs and videos that can help
you get started quickly.
Fundamentals
Get Entity Framework
Here you will learn how to add Entity Framework to your applications and, if you want to use the EF
Designer, make sure you get it installed in Visual Studio.
Creating a Model: Code First, the EF Designer, and the EF Workflows
Do you prefer to specify your EF model writing code or drawing boxes and lines? Are you going to use EF
to map your objects to an existing database or would you like EF to create a database tailored for your
objects? Here you learn about two different approaches to using EF6: the EF Designer and Code First. Make sure
you follow the discussion and watch the video about the difference.
Working with DbContext
DbContext is the first and most important EF type that you need to learn how to use. It serves as the
launchpad for database queries and keeps track of changes you make to objects so that they can be
persisted back to the database.
Ask a Question
Find out how to get help from the experts and contribute your own answers to the community.
Contribute
Entity Framework 6 uses an open development model. Find out how you can help make EF even better by
visiting our GitHub repository.
EF Designer resources
Database First Workflow
Model First Workflow
Mapping Enums
Mapping Spatial Types
Table-Per-Hierarchy Inheritance Mapping
Table-Per-Type Inheritance Mapping
Stored Procedure Mapping for Updates
Stored Procedure Mapping for Query
Entity Splitting
Table Splitting
Defining Query (Advanced)
Table-Valued Functions (Advanced)
Other resources
Async Query and Save
Databinding with WinForms
Databinding with WPF
Disconnected scenarios with Self-Tracking Entities (This is no longer recommended)
Entity Framework 6 fundamentals
Topics under this section describe various basic aspects of working with EF6.
Get Entity Framework
Entity Framework is made up of the EF Tools for Visual Studio and the EF Runtime.
EF Runtime
The latest version of Entity Framework is available as the EntityFramework NuGet package. If you are not familiar
with the NuGet Package Manager, we encourage you to read the NuGet Overview.
Installing the EF NuGet Package
You can install the EntityFramework package by right-clicking on the References folder of your project and
selecting Manage NuGet Packages…
Install-Package EntityFramework -Version <number>

Note that <number> represents the specific version of EF to install. For example, 6.2.0 is the version number for EF 6.2.
EF runtimes before 4.1 were part of .NET Framework and cannot be installed separately.
Installing the Latest Preview
The above methods will give you the latest fully supported release of Entity Framework. There are often
prerelease versions of Entity Framework available that we would love you to try out and give us feedback on.
To install the latest preview of EntityFramework you can select Include Prerelease in the Manage NuGet
Packages window. If no prerelease versions are available you will automatically get the latest fully supported
version of Entity Framework.
Alternatively, you can run the following command in the Package Manager Console.
In order to use Entity Framework to query, insert, update, and delete data using .NET objects, you first need to
Create a Model which maps the entities and relationships that are defined in your model to tables in a database.
Once you have a model, the primary class your application interacts with is System.Data.Entity.DbContext (often
referred to as the context class). You can use a DbContext associated with a model to:
Write and execute queries
Materialize query results as entity objects
Track changes that are made to those objects
Persist object changes back on the database
Bind objects in memory to UI controls
This page gives some guidance on how to manage the context class.
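A minimal context with DbSet properties can be sketched as follows (the Blog entity and BloggingContext name are illustrative, not part of any particular application):

```csharp
using System.Data.Entity;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    // Each DbSet property is the starting point for queries
    // against that entity type
    public DbSet<Blog> Blogs { get; set; }
}
```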
Once you have a context, you can query for, add (using the Add or Attach methods), or remove (using Remove) entities in the context through these properties. Accessing a DbSet property on a context object represents a starting query that returns all entities of the specified type. Note that just accessing a property will not execute the query. A query is executed when:
It is enumerated by a foreach (C#) or For Each (Visual Basic) statement.
It is enumerated by a collection operation such as ToArray , ToDictionary , or ToList .
LINQ operators such as First or Any are specified in the outermost part of the query.
One of the following methods is called: the Load extension method, DbEntityEntry.Reload ,
Database.ExecuteSqlCommand , or DbSet<T>.Find , if an entity with the specified key is not already loaded
in the context.
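The deferred-execution behavior described above can be sketched as follows (assuming a BloggingContext with a Blogs DbSet; the names are illustrative):

```csharp
using (var context = new BloggingContext())
{
    // No query is sent to the database yet: this only composes the query
    var query = context.Blogs.Where(b => b.Name.StartsWith("A"));

    // Enumeration (here via ToList) is what actually executes the query
    var blogs = query.ToList();

    // First is an outermost LINQ operator, so it also triggers execution
    var firstBlog = context.Blogs.First();
}
```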
Lifetime
The lifetime of the context begins when the instance is created and ends when the instance is either disposed or
garbage-collected. Use using if you want all the resources that the context controls to be disposed at the end of
the block. When you use using, the compiler automatically creates a try/finally block and calls Dispose in the
finally block.
public void UseProducts()
{
    using (var context = new ProductContext())
    {
        // Perform data access using the context
    }
}
Here are some general guidelines when deciding on the lifetime of the context:
When working with Web applications, use a context instance per request.
When working with Windows Presentation Foundation (WPF) or Windows Forms, use a context instance per
form. This lets you use change-tracking functionality that context provides.
If the context instance is created by a dependency injection container, it is usually the responsibility of the
container to dispose the context.
If the context is created in application code, remember to dispose of the context when it is no longer required.
When working with a long-running context, consider the following:
As you load more objects and their references into memory, the memory consumption of the context
may increase rapidly. This may cause performance issues.
The context is not thread-safe, therefore it should not be shared across multiple threads doing work on it
concurrently.
If an exception causes the context to be in an unrecoverable state, the whole application may terminate.
The chances of running into concurrency-related issues increase as the gap between the time when the
data is queried and updated grows.
Connections
By default, the context manages connections to the database. The context opens and closes connections as needed.
For example, the context opens a connection to execute a query, and then closes the connection when all the result
sets have been processed.
There are cases when you want to have more control over when the connection opens and closes. For example,
when working with SQL Server Compact, it is often recommended to maintain a separate open connection to the
database for the lifetime of the application to improve performance. You can manage this process manually by
using the Connection property.
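Keeping a connection open for the lifetime of a context can be sketched like this (a minimal sketch, assuming a BloggingContext; in EF6 the underlying connection is exposed through Database.Connection):

```csharp
using (var context = new BloggingContext())
{
    // Open the connection manually; EF will not close a connection
    // that it did not open itself
    context.Database.Connection.Open();

    var count = context.Blogs.Count();   // reuses the open connection
    var blogs = context.Blogs.ToList();  // still the same connection

    // The connection is closed when the context is disposed
}
```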
Relationships, navigation properties and foreign keys
This topic gives an overview of how Entity Framework manages relationships between entities. It also gives some
guidance on how to map and manipulate relationships.
Relationships in EF
In relational databases, relationships (also called associations) between tables are defined through foreign keys. A
foreign key (FK) is a column or combination of columns that is used to establish and enforce a link between the
data in two tables. There are generally three types of relationships: one-to-one, one-to-many, and many-to-many.
In a one-to-many relationship, the foreign key is defined on the table that represents the many end of the
relationship. The many-to-many relationship involves defining a third table (called a junction or join table), whose
primary key is composed of the foreign keys from both related tables. In a one-to-one relationship, the primary
key acts additionally as a foreign key and there is no separate foreign key column for either table.
The following image shows two tables that participate in a one-to-many relationship. The Course table is the
dependent table because it contains the DepartmentID column that links it to the Department table.
In Entity Framework, an entity can be related to other entities through an association or relationship. Each
relationship contains two ends that describe the entity type and the multiplicity of the type (one, zero-or-one, or
many) for the two entities in that relationship. The relationship may be governed by a referential constraint, which
describes which end in the relationship is a principal role and which is a dependent role.
Navigation properties provide a way to navigate an association between two entity types. Every object can have a
navigation property for every relationship in which it participates. Navigation properties allow you to navigate
and manage relationships in both directions, returning either a reference object (if the multiplicity is either one or
zero-or-one) or a collection (if the multiplicity is many). You may also choose to have one-way navigation, in which
case you define the navigation property on only one of the types that participates in the relationship and not on
both.
It is recommended to include properties in the model that map to foreign keys in the database. With foreign key
properties included, you can create or change a relationship by modifying the foreign key value on a dependent
object. This kind of association is called a foreign key association. Using foreign keys is even more essential when
working with disconnected entities. Note that when working with 1-to-1 or 1-to-0..1 relationships, there is no separate foreign key column; the primary key property acts as the foreign key and is always included in the model.
When foreign key columns are not included in the model, the association information is managed as an
independent object. Relationships are tracked through object references instead of foreign key properties. This
type of association is called an independent association. The most common way to modify an independent
association is to modify the navigation properties that are generated for each entity that participates in the
association.
You can choose to use one or both types of associations in your model. However, if you have a pure many-to-many relationship that is connected by a join table containing only foreign keys, EF will use an independent association to manage such a many-to-many relationship.
The following image shows a conceptual model that was created with the Entity Framework Designer. The model contains two entities that participate in a one-to-many relationship. Both entities have navigation properties. Course is the dependent entity and has the DepartmentID foreign key property defined.
The following code snippet shows the same model that was created with Code First.
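A sketch of that Code First model, with the class shapes inferred from the surrounding discussion (property lists are illustrative):

```csharp
using System.Collections.Generic;

public class Department
{
    public int DepartmentID { get; set; }
    public string Name { get; set; }

    // Collection navigation property for the many end
    public virtual ICollection<Course> Courses { get; set; }
}

public class Course
{
    public int CourseID { get; set; }
    public string Title { get; set; }

    // Foreign key property: makes this a foreign key association
    public int DepartmentID { get; set; }

    // Reference navigation property back to the principal
    public virtual Department Department { get; set; }
}
```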
course.DepartmentID = newCourse.DepartmentID;
The following code removes a relationship by setting the foreign key to null. Note that the foreign key property must be nullable.
course.DepartmentID = null;
NOTE
If the reference is in the added state (in this example, the course object), the reference navigation property will not be
synchronized with the key values of a new object until SaveChanges is called. Synchronization does not occur
because the object context does not contain permanent keys for added objects until they are saved. If you must
have new objects fully synchronized as soon as you set the relationship, use one of the following methods.
By assigning a new object to a navigation property. The following code creates a relationship between a
course and a department . If the objects are attached to the context, the course is also added to the
department.Courses collection, and the corresponding foreign key property on the course object is set to
the key property value of the department.
course.Department = department;
To delete the relationship, set the navigation property to null . If you are working with Entity Framework
that is based on .NET 4.0, then the related end needs to be loaded before you set it to null. For example:
Starting with Entity Framework 5.0, that is based on .NET 4.5, you can set the relationship to null without
loading the related end. You can also set the current value to null using the following method.
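That call can be sketched with the DbEntityEntry.Reference API (assuming the course and department model used throughout this topic):

```csharp
// Clear the reference without loading the related Department first
context.Entry(course).Reference(c => c.Department).CurrentValue = null;
```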
By deleting or adding an object in an entity collection. For example, you can add an object of type Course
to the department.Courses collection. This operation creates a relationship between a particular course and
a particular department . If the objects are attached to the context, the department reference and the foreign
key property on the course object will be set to the appropriate department .
department.Courses.Add(newCourse);
By using the ChangeRelationshipState method to change the state of the specified relationship between two
entity objects. This method is most commonly used when working with N -Tier applications and an
independent association (it cannot be used with a foreign key association). Also, to use this method you
must drop down to ObjectContext , as shown in the example below.
In the following example, there is a many-to-many relationship between Instructors and Courses. Calling
the ChangeRelationshipState method and passing the EntityState.Added parameter lets the SchoolContext know that a relationship has been added between the two objects:
((IObjectContextAdapter)context).ObjectContext.
ObjectStateManager.
ChangeRelationshipState(course, instructor, c => c.Instructor, EntityState.Added);
Note that if you are updating (not just adding) a relationship, you must delete the old relationship after
adding the new one:
((IObjectContextAdapter)context).ObjectContext.
ObjectStateManager.
ChangeRelationshipState(course, oldInstructor, c => c.Instructor, EntityState.Deleted);
NOTE
In a foreign key association, when you load a related end of a dependent object, the related object will be loaded based on
the foreign key value of the dependent that is currently in memory:
// Get the course where currently DepartmentID = 2.
Course course2 = context.Courses.First(c => c.DepartmentID == 2);
In an independent association, the related end of a dependent object is queried based on the foreign key value that
is currently in the database. However, if the relationship was modified, and the reference property on the
dependent object points to a different principal object that is loaded in the object context, Entity Framework will
try to create a relationship as it is defined on the client.
Managing concurrency
In both foreign key and independent associations, concurrency checks are based on the entity keys and other
entity properties that are defined in the model. When using the EF Designer to create a model, set the
ConcurrencyMode attribute to fixed to specify that the property should be checked for concurrency. When using
Code First to define a model, use the ConcurrencyCheck annotation on properties that you want to be checked for
concurrency. When working with Code First you can also use the Timestamp annotation to specify that the
property should be checked for concurrency. You can have only one timestamp property in a given class. Code
First maps this property to a non-nullable field in the database.
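With Code First, the two annotations look like this (the Department class and its properties are illustrative):

```csharp
using System.ComponentModel.DataAnnotations;

public class Department
{
    public int DepartmentID { get; set; }

    // This property participates in the concurrency check on updates
    [ConcurrencyCheck]
    public string Name { get; set; }

    // A rowversion column; only one Timestamp property
    // is allowed per class
    [Timestamp]
    public byte[] RowVersion { get; set; }
}
```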
We recommend that you always use the foreign key association when working with entities that participate in
concurrency checking and resolution.
For more information, see Handling Concurrency Conflicts.
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
EF6 introduced support for asynchronous query and save using the async and await keywords that were
introduced in .NET 4.5. While not all applications may benefit from asynchrony, it can be used to improve client
responsiveness and server scalability when handling long-running, network- or I/O-bound tasks.
namespace AsyncDemo
{
    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
    }
}
namespace AsyncDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            PerformDatabaseOperations();

            Console.WriteLine();
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();
        }
This code calls the PerformDatabaseOperations method which saves a new Blog to the database and then
retrieves all Blogs from the database and prints them to the Console. After this, the program writes a quote of
the day to the Console.
Since the code is synchronous, we can observe the following execution flow when we run the program:
1. SaveChanges begins to push the new Blog to the database
2. SaveChanges completes
3. Query for all Blogs is sent to the database
4. Query returns and results are written to Console
5. Quote of the day is written to Console
Making it asynchronous
Now that we have our program up and running, we can begin making use of the new async and await keywords.
We've made the following changes to Program.cs
1. Line 2: The using statement for the System.Data.Entity namespace gives us access to the EF async extension
methods.
2. Line 4: The using statement for the System.Threading.Tasks namespace allows us to use the Task type.
3. Line 12 & 18: We are capturing a task that monitors the progress of PerformDatabaseOperations (line 12) and then blocking program execution until that task completes once all the other work for the program is done (line 18).
4. Line 25: We've updated PerformDatabaseOperations to be marked as async and return a Task.
5. Line 35: We're now calling the async version of SaveChanges and awaiting its completion.
6. Line 42: We're now calling the async version of ToList and awaiting the result.
For a comprehensive list of available extension methods in the System.Data.Entity namespace, refer to the
QueryableExtensions class. You’ll also need to add “using System.Data.Entity” to your using statements.
using System;
using System.Data.Entity;
using System.Linq;
using System.Threading.Tasks;
namespace AsyncDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            var task = PerformDatabaseOperations();
            task.Wait();

            Console.WriteLine();
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();
        }
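The async version of PerformDatabaseOperations described in the numbered list above can be sketched like this (the Blog entity, its Name property, and the BloggingContext are assumptions carried over from the earlier snippet):

```csharp
public static async Task PerformDatabaseOperations()
{
    using (var db = new BloggingContext())
    {
        db.Blogs.Add(new Blog { Name = "Test Blog #" + (db.Blogs.Count() + 1) });

        // Non-blocking save: control returns to the caller while
        // the command executes in the database
        await db.SaveChangesAsync();

        // Non-blocking query enumeration via the EF async extension method
        var blogs = await db.Blogs.ToListAsync();
        foreach (var blog in blogs)
        {
            Console.WriteLine(blog.Name);
        }
    }
}
```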
Now that the code is asynchronous, we can observe a different execution flow when we run the program:
1. SaveChanges begins to push the new Blog to the database Once the command is sent to the database no
more compute time is needed on the current managed thread. The PerformDatabaseOperations method
returns (even though it hasn't finished executing) and program flow in the Main method continues.
2. Quote of the day is written to Console Since there is no more work to do in the Main method, the managed
thread is blocked on the Wait call until the database operation completes. Once it completes, the remainder of
our PerformDatabaseOperations will be executed.
3. SaveChanges completes
4. Query for all Blogs is sent to the database Again, the managed thread is free to do other work while the query
is processed in the database. Since all other execution has completed, the thread will just halt on the Wait call
though.
5. Query returns and results are written to Console
The takeaway
We have now seen how easy it is to make use of EF’s asynchronous methods. Although the advantages of async may
not be very apparent with a simple console app, these same strategies can be applied in situations where long-
running or network-bound activities might otherwise block the application, or cause a large number of threads to
increase the memory footprint.
Code-based configuration
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
Configuration for an Entity Framework application can be specified in a config file (app.config/web.config) or
through code. The latter is known as code-based configuration.
Configuration in a config file is described in a separate article. The config file takes precedence over code-based
configuration. In other words, if a configuration option is set in both code and in the config file, then the setting in
the config file is used.
Using DbConfiguration
Code-based configuration in EF6 and above is achieved by creating a subclass of System.Data.Entity.DbConfiguration. The following guidelines should be followed when subclassing
DbConfiguration:
Create only one DbConfiguration class for your application. This class specifies app-domain wide settings.
Place your DbConfiguration class in the same assembly as your DbContext class. (See the Moving
DbConfiguration section if you want to change this.)
Give your DbConfiguration class a public parameterless constructor.
Set configuration options by calling protected DbConfiguration methods from within this constructor.
Following these guidelines allows EF to discover and use your configuration automatically, both by tooling that needs to access your model and when your application is run.
Example
A class derived from DbConfiguration might look like this:
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.SqlServer;

namespace MyNamespace
{
    public class MyConfiguration : DbConfiguration
    {
        public MyConfiguration()
        {
            SetExecutionStrategy("System.Data.SqlClient", () => new SqlAzureExecutionStrategy());
            SetDefaultConnectionFactory(new LocalDbConnectionFactory("mssqllocaldb"));
        }
    }
}
This class sets up EF to use the SQL Azure execution strategy - to automatically retry failed database operations -
and to use Local DB for databases that are created by convention from Code First.
Moving DbConfiguration
There are cases where it is not possible to place your DbConfiguration class in the same assembly as your
DbContext class. For example, you may have two DbContext classes each in different assemblies. There are two
options for handling this.
The first option is to use the config file to specify the DbConfiguration instance to use. To do this, set the
codeConfigurationType attribute of the entityFramework section. For example:
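A sketch of that configuration entry (the namespace and assembly names are illustrative):

```xml
<entityFramework codeConfigurationType="MyNamespace.MyDbConfiguration, MyAssembly">
  <!-- remaining entityFramework settings go here -->
</entityFramework>
```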
The value of codeConfigurationType must be the assembly and namespace qualified name of your
DbConfiguration class.
The second option is to place DbConfigurationTypeAttribute on your context class. For example:
[DbConfigurationType(typeof(MyDbConfiguration))]
public class MyContextContext : DbContext
{
}
The value passed to the attribute can either be your DbConfiguration type - as shown above - or the assembly and
namespace qualified type name string. For example:
[DbConfigurationType("MyNamespace.MyDbConfiguration, MyAssembly")]
public class MyContextContext : DbContext
{
}
Overriding DbConfiguration
There are some situations where you need to override the configuration set in the DbConfiguration. This is not
typically done by application developers but rather by third party providers and plug-ins that cannot use a derived
DbConfiguration class.
For this, EntityFramework allows an event handler to be registered that can modify existing configuration just
before it is locked down. It also provides a sugar method specifically for replacing any service returned by the EF
service locator. This is how it is intended to be used:
At app startup (before EF is used) the plug-in or provider should register the event handler method for this
event. (Note that this must happen before the application uses EF.)
The event handler makes a call to ReplaceService for every service that needs to be replaced.
For example, to replace IDbConnectionFactory and DbProviderService you would register a handler something
like this:
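Registering the handler on the DbConfiguration.Loaded event and replacing the services can be sketched as follows (MyProviderServices and MyConnectionFactory stand in for your own implementations):

```csharp
DbConfiguration.Loaded += (_, a) =>
{
    // Wrap or replace the provider services resolved by the EF service locator
    a.ReplaceService<DbProviderServices>((s, k) => new MyProviderServices(s));

    // Wrap or replace the default connection factory
    a.ReplaceService<IDbConnectionFactory>((s, k) => new MyConnectionFactory(s));
};
```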
In the code above MyProviderServices and MyConnectionFactory represent your implementations of the service.
You can also add additional dependency handlers to get the same effect.
Note that you could also wrap DbProviderFactory in this way, but doing so will only affect EF and not uses of the
DbProviderFactory outside of EF. For this reason you’ll probably want to continue to wrap DbProviderFactory as
you have before.
You should also keep in mind the services that you run externally to your application - for example, when running
migrations from the Package Manager Console. When you run migrate from the console it will attempt to find
your DbConfiguration. However, whether or not it will get the wrapped service depends on where the event handler is registered. If it is registered as part of the construction of your DbConfiguration then the code should
execute and the service should get wrapped. Usually this won’t be the case and this means that tooling won’t get
the wrapped service.
Configuration File Settings
Entity Framework allows a number of settings to be specified from the configuration file. In general EF follows a
‘convention over configuration’ principle: all the settings discussed in this post have a default behavior, you only
need to worry about changing the setting when the default no longer satisfies your requirements.
A Code-Based Alternative
All of these settings can also be applied using code. Starting in EF6 we introduced code-based configuration,
which provides a central way of applying configuration from code. Prior to EF6, configuration could still be applied from code, but you needed to use various APIs to configure different areas. The configuration file option allows these
settings to be easily changed during deployment without updating your code.
Connection Strings
This page provides more details on how Entity Framework determines the database to be used, including
connection strings in the configuration file.
Connection strings go in the standard connectionStrings element and do not require the entityFramework
section.
Code First based models use normal ADO.NET connection strings. For example:
<connectionStrings>
<add name="BlogContext"
providerName="System.Data.SqlClient"
connectionString="Server=.\SQLEXPRESS;Database=Blogging;Integrated Security=True;"/>
</connectionStrings>
NOTE
An assembly qualified name is the namespace qualified name, followed by a comma, then the assembly that the type resides
in. You can optionally also specify the assembly version, culture and public key token.
As an example, here is the entry created to register the default SQL Server provider when you install Entity
Framework.
<providers>
<provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
</providers>
Interceptors can also be registered in the configuration file. For example, the following registers the
DatabaseLogger interceptor, which logs all database operations to the console:
<interceptors>
<interceptor type="System.Data.Entity.Infrastructure.Interception.DatabaseLogger, EntityFramework"/>
</interceptors>
To log to a file instead, pass the file name as a constructor parameter:
<interceptors>
<interceptor type="System.Data.Entity.Infrastructure.Interception.DatabaseLogger, EntityFramework">
<parameters>
<parameter value="C:\Temp\LogOutput.txt"/>
</parameters>
</interceptor>
</interceptors>
By default this will cause the log file to be overwritten with a new file each time the app starts. To instead append
to the log file if it already exists use something like:
<interceptors>
<interceptor type="System.Data.Entity.Infrastructure.Interception.DatabaseLogger, EntityFramework">
<parameters>
<parameter value="C:\Temp\LogOutput.txt"/>
<parameter value="true" type="System.Boolean"/>
</parameters>
</interceptor>
</interceptors>
For additional information on DatabaseLogger and registering interceptors, see the blog post EF 6.1: Turning on
logging without recompiling.
The connection factory to use when creating connections by convention can be set using the
defaultConnectionFactory element:
<entityFramework>
<defaultConnectionFactory type="MyNamespace.MyCustomFactory, MyAssembly"/>
</entityFramework>
The above example requires the custom factory to have a parameterless constructor. If needed, you can specify
constructor parameters using the parameters element.
For example, the SqlCeConnectionFactory, that is included in Entity Framework, requires you to supply a provider
invariant name to the constructor. The provider invariant name identifies the version of SQL Compact you want to
use. The following configuration will cause contexts to use SQL Compact version 4.0 by default.
<entityFramework>
<defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlCeConnectionFactory, EntityFramework">
<parameters>
<parameter value="System.Data.SqlServerCe.4.0" />
</parameters>
</defaultConnectionFactory>
</entityFramework>
If you don’t set a default connection factory, Code First uses the SqlConnectionFactory, pointing to .\SQLEXPRESS .
SqlConnectionFactory also has a constructor that allows you to override parts of the connection string. If you
want to use a SQL Server instance other than .\SQLEXPRESS you can use this constructor to set the server.
The following configuration will cause Code First to use MyDatabaseServer for contexts that don’t have an
explicit connection string set.
<entityFramework>
<defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlConnectionFactory, EntityFramework">
<parameters>
<parameter value="Data Source=MyDatabaseServer; Integrated Security=True; MultipleActiveResultSets=True"
/>
</parameters>
</defaultConnectionFactory>
</entityFramework>
By default, it’s assumed that constructor arguments are of type string. You can use the type attribute to change
this.
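For example, a hypothetical factory taking an integer timeout could be configured like this (illustrative value, not a real factory):

```xml
<parameter value="30" type="System.Int32" />
```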
Database Initializers
Database initializers are configured on a per-context basis. They can be set in the configuration file using the
context element. This element uses the assembly qualified name to identify the context being configured.
By default, Code First contexts are configured to use the CreateDatabaseIfNotExists initializer. There is a
disableDatabaseInitialization attribute on the context element that can be used to disable database
initialization.
For example, the following configuration disables database initialization for the Blogging.BlogContext context
defined in MyAssembly.dll.
<contexts>
<context type="Blogging.BlogContext, MyAssembly" disableDatabaseInitialization="true" />
</contexts>
You can also set a custom database initializer for a context:
<contexts>
<context type="Blogging.BlogContext, MyAssembly">
<databaseInitializer type="Blogging.MyCustomBlogInitializer, MyAssembly" />
</context>
</contexts>
Initializer constructor parameters use the same parameters syntax shown above for connection factories:
<contexts>
<context type="Blogging.BlogContext, MyAssembly">
<databaseInitializer type="Blogging.MyCustomBlogInitializer, MyAssembly">
<parameters>
<parameter value="MyConstructorParameter" />
</parameters>
</databaseInitializer>
</context>
</contexts>
You can configure one of the generic database initializers that are included in Entity Framework. The type
attribute uses the .NET Framework format for generic types.
For example, if you are using Code First Migrations, you can configure the database to be migrated automatically
using the MigrateDatabaseToLatestVersion<TContext, TMigrationsConfiguration> initializer.
<contexts>
<context type="Blogging.BlogContext, MyAssembly">
<databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[Blogging.BlogContext, MyAssembly], [Blogging.Migrations.Configuration, MyAssembly]], EntityFramework" />
</context>
</contexts>
Connection strings and models
9/13/2018
This topic covers how Entity Framework discovers which database connection to use, and how you can change it.
Models created with Code First and the EF Designer are both covered in this topic.
Typically an Entity Framework application uses a class derived from DbContext. This derived class will call one of
the constructors on the base DbContext class to control:
How the context will connect to a database — that is, how a connection string is found/used
Whether the context will calculate a model using Code First or load a model created with the EF Designer
Additional advanced options
The following fragments show some of the ways the DbContext constructors can be used.
namespace Demo.EF
{
public class BloggingContext : DbContext
{
public BloggingContext()
// C# will call base class parameterless constructor by default
{
}
}
}
In this example DbContext uses the namespace qualified name of your derived context class—
Demo.EF.BloggingContext—as the database name and creates a connection string for this database using either
SQL Express or LocalDB. If both are installed, SQL Express will be used.
Visual Studio 2010 includes SQL Express by default and Visual Studio 2012 and later includes LocalDB. During
installation, the EntityFramework NuGet package checks which database server is available. The NuGet package
will then update the configuration file by setting the default database server that Code First uses when creating a
connection by convention. If SQL Express is running, it will be used. If SQL Express is not available then LocalDB
will be registered as the default instead. No changes are made to the configuration file if it already contains a
setting for the default connection factory.
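Passing a name to the base DbContext constructor changes the database used. A minimal sketch:

```csharp
using System.Data.Entity;

public class BloggingContext : DbContext
{
    // "BloggingDatabase" is used as the database name instead of the
    // namespace-qualified context class name.
    public BloggingContext()
        : base("BloggingDatabase")
    {
    }
}
```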
In this example DbContext uses “BloggingDatabase” as the database name and creates a connection string for this
database using either SQL Express (installed with Visual Studio 2010) or LocalDB (installed with Visual Studio
2012). If both are installed, SQL Express will be used.
<configuration>
<connectionStrings>
<add name="BloggingCompactDatabase"
providerName="System.Data.SqlServerCe.4.0"
connectionString="Data Source=Blogging.sdf"/>
</connectionStrings>
</configuration>
This is an easy way to tell DbContext to use a database server other than SQL Express or LocalDB — the example
above specifies a SQL Server Compact Edition database.
If the name of the connection string matches the name of your context (either with or without namespace
qualification) then it will be found by DbContext when the parameterless constructor is used. If the connection
string name is different from the name of your context then you can tell DbContext to use this connection in Code
First mode by passing the connection string name to the DbContext constructor. For example:
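A sketch of passing the connection string name, using the BloggingCompactDatabase entry shown above:

```csharp
using System.Data.Entity;

public class BloggingContext : DbContext
{
    // Tells DbContext to use the "BloggingCompactDatabase"
    // connection string from the config file.
    public BloggingContext()
        : base("BloggingCompactDatabase")
    {
    }
}
```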
Alternatively, you can use the form “name=<connection string name>” for the string passed to the DbContext
constructor. For example:
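The same example using the explicit name= form:

```csharp
using System.Data.Entity;

public class BloggingContext : DbContext
{
    // Throws if no connection string named "BloggingCompactDatabase"
    // is found in the config file.
    public BloggingContext()
        : base("name=BloggingCompactDatabase")
    {
    }
}
```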
This form makes it explicit that you expect the connection string to be found in your config file. An exception will
be thrown if a connection string with the given name is not found.
Models created with the EF Designer use an EF connection string that embeds the model metadata. For example:
<configuration>
<connectionStrings>
<add name="Northwind_Entities"
connectionString="metadata=res://*/Northwind.csdl|
res://*/Northwind.ssdl|
res://*/Northwind.msl;
provider=System.Data.SqlClient;
provider connection string=
&quot;Data Source=.\sqlexpress;
Initial Catalog=Northwind;
Integrated Security=True;
MultipleActiveResultSets=True&quot;"
providerName="System.Data.EntityClient"/>
</connectionStrings>
</configuration>
The EF Designer will also generate code that tells DbContext to use this connection by passing the connection
string name to the DbContext constructor. For example:
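For the Northwind_Entities connection string above, the generated constructor looks something like the following (NorthwindContext is a hypothetical name for the Designer-generated context class):

```csharp
using System.Data.Entity;

public class NorthwindContext : DbContext
{
    // The Designer-generated code passes the EF connection string name,
    // which tells DbContext to load the existing model.
    public NorthwindContext()
        : base("name=Northwind_Entities")
    {
    }
}
```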
DbContext knows to load the existing model (rather than using Code First to calculate it from code) because the
connection string is an EF connection string containing details of the model to use.
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
Starting with EF6, Entity Framework contains a general-purpose mechanism for obtaining implementations of
services that it requires. That is, when EF uses an instance of some interfaces or base classes it will ask for a
concrete implementation of the interface or base class to use. This is achieved through use of the
IDbDependencyResolver interface:
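The interface has the following shape:

```csharp
using System;
using System.Collections.Generic;

public interface IDbDependencyResolver
{
    // Returns a service of the requested type, or null if this resolver
    // cannot provide it. The key gives extra context, such as a
    // provider invariant name.
    object GetService(Type type, object key);

    // Returns all services of the requested type that this resolver
    // can provide (used, for example, to collect interceptors).
    IEnumerable<object> GetServices(Type type, object key);
}
```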
System.Data.Entity.IDatabaseInitializer<TContext>
Version introduced: EF6.0.0
Object returned: A database initializer for the given context type
Key: Not used; will be null
Func<System.Data.Entity.Migrations.Sql.MigrationSqlGenerator>
Version introduced: EF6.0.0
Object returned: A factory to create a SQL generator that can be used for Migrations and other actions that
cause a database to be created, such as database creation with database initializers.
Key: A string containing the ADO.NET provider invariant name specifying the type of database for which SQL will
be generated. For example, the SQL Server SQL generator is returned for the key "System.Data.SqlClient".
NOTE
For more details on provider-related services in EF6 see the EF6 provider model section.
System.Data.Entity.Core.Common.DbProviderServices
Version introduced: EF6.0.0
Object returned: The EF provider to use for a given provider invariant name
Key: A string containing the ADO.NET provider invariant name specifying the type of database for which a
provider is needed. For example, the SQL Server provider is returned for the key "System.Data.SqlClient".
NOTE
For more details on provider-related services in EF6 see the EF6 provider model section.
System.Data.Entity.Infrastructure.IDbConnectionFactory
Version introduced: EF6.0.0
Object returned: The connection factory that will be used when EF creates a database connection by convention.
That is, when no connection or connection string is given to EF, and no connection string can be found in the
app.config or web.config, then this service is used to create a connection by convention. Changing the connection
factory can allow EF to use a different type of database (for example, SQL Server Compact Edition) by default.
Key: Not used; will be null
NOTE
For more details on provider-related services in EF6 see the EF6 provider model section.
System.Data.Entity.Infrastructure.IManifestTokenService
Version introduced: EF6.0.0
Object returned: A service that can generate a provider manifest token from a connection. This service is typically
used in two ways. First, it can be used to avoid Code First connecting to the database when building a model.
Second, it can be used to force Code First to build a model for a specific database version -- for example, to force a
model for SQL Server 2005 even if sometimes SQL Server 2008 is used.
Object lifetime: Singleton -- the same object may be used multiple times and concurrently by different threads
Key: Not used; will be null
System.Data.Entity.Infrastructure.IDbProviderFactoryService
Version introduced: EF6.0.0
Object returned: A service that can obtain a provider factory from a given connection. On .NET 4.5 the provider
is publicly accessible from the connection. On .NET 4 the default implementation of this service uses some
heuristics to find the matching provider. If these fail then a new implementation of this service can be registered to
provide an appropriate resolution.
Key: Not used; will be null
Func<DbContext, System.Data.Entity.Infrastructure.IDbModelCacheKey>
Version introduced: EF6.0.0
Object returned: A factory that will generate a model cache key for a given context. By default, EF caches one
model per DbContext type per provider. A different implementation of this service can be used to add other
information, such as schema name, to the cache key.
Key: Not used; will be null
System.Data.Entity.Spatial.DbSpatialServices
Version introduced: EF6.0.0
Object returned: An EF spatial provider that adds support to the basic EF provider for geography and geometry
spatial types.
Key: DbSpatialServices is asked for in two ways. First, provider-specific spatial services are requested using a
DbProviderInfo object (which contains invariant name and manifest token) as the key. Second, DbSpatialServices
can be asked for with no key. This is used to resolve the "global spatial provider" which is used when creating
stand-alone DbGeography or DbGeometry types.
NOTE
For more details on provider-related services in EF6 see the EF6 provider model section.
Func<System.Data.Entity.Infrastructure.IDbExecutionStrategy>
Version introduced: EF6.0.0
Object returned: A factory to create a service that allows a provider to implement retries or other behavior when
queries and commands are executed against the database. If no implementation is provided, then EF will simply
execute the commands and propagate any exceptions thrown. For SQL Server this service is used to provide a
retry policy which is especially useful when running against cloud-based database servers such as SQL Azure.
Key: An ExecutionStrategyKey object that contains the provider invariant name and optionally a server name for
which the execution strategy will be used.
NOTE
For more details on provider-related services in EF6 see the EF6 provider model section.
Func<DbConnection, string, System.Data.Entity.Migrations.History.HistoryContext>
Version introduced: EF6.0.0
Object returned: A factory that allows a provider to configure the mapping of the HistoryContext to the
__MigrationHistory table used by EF Migrations. The HistoryContext is a Code First DbContext and can be
configured using the normal fluent API to change things like the name of the table and the column mapping
specifications.
Key: Not used; will be null
NOTE
For more details on provider-related services in EF6 see the EF6 provider model section.
System.Data.Common.DbProviderFactory
Version introduced: EF6.0.0
Object returned: The ADO.NET provider to use for a given provider invariant name.
Key: A string containing the ADO.NET provider invariant name
NOTE
This service is not usually changed directly since the default implementation uses the normal ADO.NET provider registration.
For more details on provider-related services in EF6 see the EF6 provider model section.
System.Data.Entity.Infrastructure.IProviderInvariantName
Version introduced: EF6.0.0
Object returned: a service that is used to determine a provider invariant name for a given type of
DbProviderFactory. The default implementation of this service uses the ADO.NET provider registration. This
means that if the ADO.NET provider is not registered in the normal way because DbProviderFactory is being
resolved by EF, then it will also be necessary to resolve this service.
Key: The DbProviderFactory instance for which an invariant name is required.
NOTE
For more details on provider-related services in EF6 see the EF6 provider model section.
System.Data.Entity.Core.Mapping.ViewGeneration.IViewAssemblyCache
Version introduced: EF6.0.0
Object returned: a cache of the assemblies that contain pre-generated views. A replacement is typically used to
let EF know which assemblies contain pre-generated views without doing any discovery.
Key: Not used; will be null
System.Data.Entity.Infrastructure.Pluralization.IPluralizationService
Version introduced: EF6.0.0
Object returned: a service used by EF to pluralize and singularize names. By default an English pluralization
service is used.
Key: Not used; will be null
System.Data.Entity.Infrastructure.Interception.IDbInterceptor
Version introduced: EF6.0.0
Objects returned: Any interceptors that should be registered when the application starts. Note that these objects
are requested using the GetServices call, and all interceptors returned by any dependency resolver will be registered.
Key: Not used; will be null.
Func<System.Data.Entity.DbContext, Action<string>, System.Data.Entity.Infrastructure.Interception.DatabaseLogFormatter>
Version introduced: EF6.0.0
Object returned: A factory that will be used to create the database log formatter that will be used when the
context.Database.Log property is set on the given context.
Key: Not used; will be null.
Func<System.Data.Entity.DbContext>
Version introduced: EF6.1.0
Object returned: A factory that will be used to create context instances for Migrations when the context does not
have an accessible parameterless constructor.
Key: The Type object for the type of the derived DbContext for which a factory is needed.
Func<System.Data.Entity.Core.Metadata.Edm.IMetadataAnnotationSerializer>
Version introduced: EF6.1.0
Object returned: A factory that will be used to create serializers for strongly-typed custom
annotations so that they can be serialized and deserialized into XML for use in Code First Migrations.
Key: The name of the annotation that is being serialized or deserialized.
Func<System.Data.Entity.Infrastructure.TransactionHandler>
Version introduced: EF6.1.0
Object returned: A factory that will be used to create handlers for transactions so that special handling can be
applied for situations such as handling commit failures.
Key: An ExecutionStrategyKey object that contains the provider invariant name and optionally a server name for
which the transaction handler will be used.
Connection management
9/13/2018
This page describes the behavior of Entity Framework with regard to passing connections to the context and the
functionality of the Database.Connection.Open() API.
Behavior for EF5 and earlier versions
In EF5 and earlier versions, DbContext has two constructors that accept a connection. It is possible to use these
but you have to work around a couple of limitations:
1. If you pass an open connection to either of these then the first time the framework attempts to use it an
InvalidOperationException is thrown saying it cannot re-open an already open connection.
2. The contextOwnsConnection flag is interpreted to mean whether or not the underlying store connection should
be disposed when the context is disposed. But, regardless of that setting, the store connection is always closed
when the context is disposed. So if you have more than one DbContext with the same connection whichever
context is disposed first will close the connection (similarly if you have mixed an existing ADO.NET connection
with a DbContext, DbContext will always close the connection when it is disposed).
It is possible to work around the first limitation above by passing a closed connection and only executing code that
would open it once all contexts have been created:
using System.Collections.Generic;
using System.Data.Common;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.EntityClient;
using System.Linq;

namespace ConnectionManagementExamples
{
    class ConnectionManagementExampleEF5
    {
        public static void TwoDbContextsOneConnection()
        {
            using (var context1 = new BloggingContext())
            {
                var conn =
                    ((EntityConnection)
                    ((IObjectContextAdapter)context1).ObjectContext.Connection)
                    .StoreConnection;

                // The connection is still closed at this point; create all
                // contexts before executing anything that would open it.
                // (Assumes BloggingContext exposes the
                // DbContext(DbConnection, bool) constructor.)
                using (var context2 = new BloggingContext(conn, contextOwnsConnection: false))
                {
                    // Use context1 and context2 here
                }
            }
        }
    }
}
The second limitation just means you need to refrain from disposing any of your DbContext objects until you are
ready for the connection to be closed.
Behavior in EF6 and future versions
In EF6 and future versions the DbContext has the same two constructors but no longer requires that the
connection passed to the constructor be closed when it is received. So this is now possible:
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;
using System.Transactions;

namespace ConnectionManagementExamples
{
    class ConnectionManagementExample
    {
        public static void PassingAnOpenConnection()
        {
            using (var conn = new SqlConnection("{connectionString}"))
            {
                conn.Open();

                // EF6 accepts an already-open connection
                using (var context = new BloggingContext(conn, contextOwnsConnection: false))
                {
                    // Use the context with the open connection here
                }
            }
        }
    }
}
Also the contextOwnsConnection flag now controls whether or not the connection is both closed and disposed
when the DbContext is disposed. So in the above example the connection is not closed when the context is
disposed, as it would have been in previous versions of EF, but rather when the connection itself is disposed.
Of course it is still possible for the DbContext to take control of the connection (just set contextOwnsConnection to
true or use one of the other constructors) if you so wish.
NOTE
There are some additional considerations when using transactions with this new model. For details see Working with
Transactions.
Database.Connection.Open()
Behavior for EF5 and earlier versions
In EF5 and earlier versions there is a bug such that ObjectContext.Connection.State was not updated to
reflect the true state of the underlying store connection. For example, the following code could return
Closed even though the underlying store connection is in fact Open.
((IObjectContextAdapter)context).ObjectContext.Connection.State
Separately, if you open the database connection by calling Database.Connection.Open() it will be open until the
next time you execute a query or call anything which requires a database connection (for example, SaveChanges())
but after that the underlying store connection will be closed. The context will then re-open and re-close the
connection any time another database operation is required:
using System;
using System.Data;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.EntityClient;

namespace ConnectionManagementExamples
{
    public class DatabaseOpenConnectionBehaviorEF5
    {
        public static void DatabaseOpenConnectionBehavior()
        {
            using (var context = new BloggingContext())
            {
                // At this point the underlying store connection is closed
                context.Database.Connection.Open();

                // The underlying store connection is now open
                // (though ObjectContext.Connection.State wrongly reports Closed)

                context.SaveChanges();

                // The underlying store connection is closed again
            }
        }
    }
}
NOTE
This can potentially lead to connections which are open for a long time so use with care.
Behavior in EF6 and future versions
In EF6 and future versions, if you open the connection by calling Database.Connection.Open() it remains open
until you explicitly close or dispose it. The code was also updated so that ObjectContext.Connection.State now
keeps track of the state of the underlying connection correctly.
using System;
using System.Data;
using System.Data.Entity;
using System.Data.Entity.Core.EntityClient;
using System.Data.Entity.Infrastructure;

namespace ConnectionManagementExamples
{
    internal class DatabaseOpenConnectionBehaviorEF6
    {
        public static void DatabaseOpenConnectionBehavior()
        {
            using (var context = new BloggingContext())
            {
                // At this point the underlying store connection is closed
                context.Database.Connection.Open();

                // The underlying store connection is now open and
                // remains open for the next operation

                context.SaveChanges();

                // The underlying store connection is still open
            }
        }
    }
}
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
Applications connecting to a database server have always been vulnerable to connection breaks due to back-end
failures and network instability. However, in a LAN-based environment working against dedicated database
servers these errors are rare enough that extra logic to handle those failures is not often required. With the rise of
cloud-based database servers such as Windows Azure SQL Database and connections over less reliable networks,
it is now more common for connection breaks to occur. This could be due to defensive techniques that cloud
databases use to ensure fairness of service, such as connection throttling, or to instability in the network causing
intermittent timeouts and other transient errors.
Connection Resiliency refers to the ability for EF to automatically retry any commands that fail due to these
connection breaks.
Execution Strategies
Connection retry is taken care of by an implementation of the IDbExecutionStrategy interface. Implementations of
the IDbExecutionStrategy will be responsible for accepting an operation and, if an exception occurs, determining if
a retry is appropriate and retrying if it is. There are four execution strategies that ship with EF:
1. DefaultExecutionStrategy: this execution strategy does not retry any operations; it is the default for
databases other than SQL Server.
2. DefaultSqlExecutionStrategy: this is an internal execution strategy that is used by default. This strategy does
not retry at all; however, it will wrap any exceptions that could be transient to inform users that they might want
to enable connection resiliency.
3. DbExecutionStrategy: this class is suitable as a base class for other execution strategies, including your own
custom ones. It implements an exponential retry policy, where the initial retry happens with zero delay and the
delay increases exponentially until the maximum retry count is hit. This class has an abstract ShouldRetryOn
method that can be implemented in derived execution strategies to control which exceptions should be retried.
4. SqlAzureExecutionStrategy: this execution strategy inherits from DbExecutionStrategy and will retry on
exceptions that are known to be possibly transient when working with Azure SQL Database.
NOTE
Execution strategies 2 and 4 are included in the SQL Server provider that ships with EF (in the
EntityFramework.SqlServer assembly) and are designed to work with SQL Server.
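An execution strategy is registered in code-based configuration; a minimal sketch:

```csharp
using System.Data.Entity;
using System.Data.Entity.SqlServer;

public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        // Use the retrying SqlAzureExecutionStrategy for the
        // SQL Server ADO.NET provider.
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy());
    }
}
```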
This code tells EF to use the SqlAzureExecutionStrategy when connecting to SQL Server.
The SqlAzureExecutionStrategy will retry instantly the first time a transient failure occurs, but will delay longer
between each retry until either the max retry limit is exceeded or the total time hits the max delay.
The execution strategies will only retry a limited number of exceptions that are usually transient; you will still need
to handle other errors, as well as catching the RetryLimitExceeded exception for the case where an error is not
transient or takes too long to resolve itself.
There are some known limitations when using a retrying execution strategy:
Streaming is not supported when a retrying execution strategy is registered. This limitation exists because the
connection could drop part way through the results being returned. When this occurs, EF needs to re-run the
entire query but has no reliable way of knowing which results have already been returned (data may have changed
since the initial query was sent, results may come back in a different order, results may not have a unique identifier,
etc.).
User initiated transactions are not supported
When you have configured an execution strategy that results in retries, there are some limitations around the use
of transactions.
By default, EF will perform any database updates within a transaction. You don’t need to do anything to enable
this, EF always does this automatically.
For example, in the following code SaveChanges is automatically performed within a transaction. If SaveChanges
were to fail after inserting one of the new Site’s then the transaction would be rolled back and no changes applied
to the database. The context is also left in a state that allows SaveChanges to be called again to retry applying the
changes.
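A sketch of the implicit transaction described above (Site and SiteContext are hypothetical names modeled on the Blog examples elsewhere on this page):

```csharp
using (var db = new SiteContext())
{
    // Both inserts are applied in a single automatic transaction;
    // if SaveChanges fails, neither row is committed and the context
    // can retry SaveChanges later.
    db.Sites.Add(new Site { Name = "http://msdn.com/data/ef" });
    db.Sites.Add(new Site { Name = "http://blogs.msdn.com/adonet" });
    db.SaveChanges();
}
```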
When not using a retrying execution strategy you can wrap multiple operations in a single transaction. For
example, the following code wraps two SaveChanges calls in a single transaction. If any part of either operation
fails then none of the changes are applied.

using (var db = new BloggingContext())
{
    using (var trn = db.Database.BeginTransaction())
    {
        db.Blogs.Add(new Blog { Url = "http://msdn.com/data/ef" });
        db.SaveChanges();

        db.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/adonet" });
        db.SaveChanges();

        trn.Commit();
    }
}
This is not supported when using a retrying execution strategy because EF isn’t aware of any previous operations
and how to retry them. For example, if the second SaveChanges failed then EF no longer has the required
information to retry the first SaveChanges call.
Workaround: Suspend Execution Strategy
One possible workaround is to suspend the retrying execution strategy for the piece of code that needs to use a
user initiated transaction. The easiest way to do this is to add a SuspendExecutionStrategy flag to your code based
configuration class and change the execution strategy lambda to return the default (non-retrying) execution
strategy when the flag is set.
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.SqlServer;
using System.Runtime.Remoting.Messaging;

namespace Demo
{
    public class MyConfiguration : DbConfiguration
    {
        public MyConfiguration()
        {
            this.SetExecutionStrategy("System.Data.SqlClient", () => SuspendExecutionStrategy
                ? (IDbExecutionStrategy)new DefaultExecutionStrategy()
                : new SqlAzureExecutionStrategy());
        }

        public static bool SuspendExecutionStrategy
        {
            get { return (bool?)CallContext.LogicalGetData("SuspendExecutionStrategy") ?? false; }
            set { CallContext.LogicalSetData("SuspendExecutionStrategy", value); }
        }
    }
}
Note that we are using CallContext to store the flag value. This provides similar functionality to thread local
storage but is safe to use with asynchronous code - including async query and save with Entity Framework.
We can now suspend the execution strategy for the section of code that uses a user initiated transaction.
var executionStrategy = new SqlAzureExecutionStrategy();

MyConfiguration.SuspendExecutionStrategy = true;

executionStrategy.Execute(
    () =>
    {
        using (var db = new BloggingContext())
        {
            using (var trn = db.Database.BeginTransaction())
            {
                db.Blogs.Add(new Blog { Url = "http://msdn.com/data/ef" });
                db.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/adonet" });
                db.SaveChanges();
                trn.Commit();
            }
        }
    });

MyConfiguration.SuspendExecutionStrategy = false;
Handling transaction commit failures
9/18/2018
NOTE
EF6.1 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6.1. If you are
using an earlier version, some or all of the information does not apply.
As part of 6.1 we are introducing a new connection resiliency feature for EF: the ability to detect and recover
automatically when transient connection failures affect the acknowledgement of transaction commits. The full
details of the scenario are best described in the blog post SQL Database Connectivity and the Idempotency Issue.
In summary, the scenario is that when an exception is raised during a transaction commit there are two possible
causes:
1. The transaction commit failed on the server
2. The transaction commit succeeded on the server but a connectivity issue prevented the success notification
from reaching the client
In the first situation, the application or the user can retry the operation; in the second, retries must be avoided and the application could recover automatically. The challenge is that, without being able to detect the actual reason an exception was reported during commit, the application cannot choose the right course of action. The new feature in EF 6.1 allows EF to double-check with the database whether the transaction succeeded, and to take the right course of action transparently.
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.SqlServer;
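These namespaces support registering the new transaction handler in a DbConfiguration. Based on the EF 6.1 API, the registration looks along these lines (a sketch, not the original listing):

```csharp
public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        // Register the handler that re-checks the database when a commit
        // throws, distinguishing "commit failed" from "acknowledgement lost".
        SetTransactionHandler(SqlProviderServices.ProviderInvariantName,
            () => new CommitFailureHandler());
    }
}
```

With the handler registered, commits against SQL Server are tracked so that an ambiguous exception during commit can be resolved automatically.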
Databinding with WinForms
This step-by-step walkthrough shows how to bind POCO types to Windows Forms (WinForms) controls in a
“master-detail” form. The application uses Entity Framework to populate objects with data from the database, track
changes, and persist data to the database.
The model defines two types that participate in a one-to-many relationship: Category (principal\master) and
Product (dependent\detail). Then, the Visual Studio tools are used to bind the types defined in the model to the
WinForms controls. The WinForms data-binding framework enables navigation between related objects: selecting
rows in the master view causes the detail view to update with the corresponding child data.
The screen shots and code listings in this walkthrough are taken from Visual Studio 2013 but you can complete
this walkthrough with Visual Studio 2012 or Visual Studio 2010.
Pre-Requisites
You need to have Visual Studio 2013, Visual Studio 2012 or Visual Studio 2010 installed to complete this
walkthrough.
If you are using Visual Studio 2010, you also have to install NuGet. For more information, see Installing NuGet.
NOTE
In addition to the EntityFramework assembly a reference to System.ComponentModel.DataAnnotations is also
added. If the project has a reference to System.Data.Entity, then it will be removed when the EntityFramework
package is installed. The System.Data.Entity assembly is no longer used for Entity Framework 6 applications.
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Diagnostics.CodeAnalysis;
using System.Data.Entity;
namespace WinFormswithEFSample
{
public class ObservableListSource<T> : ObservableCollection<T>, IListSource
where T : class
{
private IBindingList _bindingList;
IList IListSource.GetList()
{
return _bindingList ?? (_bindingList = this.ToBindingList());
}
}
}
Define a Model
In this walkthrough you can choose to implement a model using Code First or the EF Designer. Complete one of
the two following sections.
Option 1: Define a Model using Code First
This section shows how to create a model and its associated database using Code First. Skip to the next section
(Option 2: Define a model using Database First) if you would rather use Database First to reverse engineer
your model from a database using the EF designer.
When using Code First development you usually begin by writing .NET Framework classes that define your
conceptual (domain) model.
Add a new Product class to project
Replace the code generated by default with the following code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace WinFormswithEFSample
{
    public class Product
    {
        public int ProductId { get; set; }
        public string Name { get; set; }

        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace WinFormswithEFSample
{
    public class Category
    {
        private readonly ObservableListSource<Product> _products =
            new ObservableListSource<Product>();

        public int CategoryId { get; set; }
        public string Name { get; set; }

        public virtual ObservableListSource<Product> Products
        {
            get { return _products; }
        }
    }
}
In addition to defining entities, you need to define a class that derives from DbContext and exposes
DbSet<TEntity> properties. The DbSet properties let the context know which types you want to include in the
model. The DbContext and DbSet types are defined in the EntityFramework assembly.
An instance of the DbContext derived type manages the entity objects during run time, which includes populating
objects with data from a database, change tracking, and persisting data to the database.
Add a new ProductContext class to the project.
Replace the code generated by default with the following code:
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Text;
namespace WinFormswithEFSample
{
public class ProductContext : DbContext
{
public DbSet<Category> Categories { get; set; }
public DbSet<Product> Products { get; set; }
}
}
Connect to either LocalDB or SQL Express, depending on which one you have installed, and enter
Products as the database name
Select OK and you will be asked if you want to create a new database, select Yes
The new database will now appear in Server Explorer, right-click on it and select New Query
Copy the following SQL into the new query, then right-click on the query and select Execute
CREATE TABLE [dbo].[Categories] (
[CategoryId] [int] NOT NULL IDENTITY,
[Name] [nvarchar](max),
CONSTRAINT [PK_dbo.Categories] PRIMARY KEY ([CategoryId])
)
Select the connection to the database you created in the first section, enter ProductContext as the name of
the connection string and click Next
Click the checkbox next to ‘Tables’ to import all tables and click ‘Finish’
Once the reverse engineer process completes the new model is added to your project and opened up for you to
view in the Entity Framework Designer. An App.config file has also been added to your project with the connection
details for the database.
Additional Steps in Visual Studio 2010
If you are working in Visual Studio 2010 then you will need to update the EF designer to use EF6 code generation.
Right-click on an empty spot of your model in the EF Designer and select Add Code Generation Item…
Select Online Templates from the left menu and search for DbContext
Select the EF 6.x DbContext Generator for C#, enter ProductsModel as the name and click Add
Updating code generation for data binding
EF generates code from your model using T4 templates. The templates shipped with Visual Studio or downloaded
from the Visual Studio gallery are intended for general purpose use. This means that the entities generated from
these templates have simple ICollection<T> properties. However, when doing data binding it is desirable to have
collection properties that implement IListSource. This is why we created the ObservableListSource class above
and we are now going to modify the templates to make use of this class.
Open the Solution Explorer and find ProductModel.edmx file
Find the ProductModel.tt file which will be nested under the ProductModel.edmx file
Lazy Loading
The Products property on the Category class and Category property on the Product class are navigation
properties. In Entity Framework, navigation properties provide a way to navigate a relationship between two entity
types.
EF gives you an option of loading related entities from the database automatically the first time you access the
navigation property. With this type of loading (called lazy loading), be aware that the first time you access each
navigation property a separate query will be executed against the database if the contents are not already in the
context.
When using POCO entity types, EF achieves lazy loading by creating instances of derived proxy types at
runtime and then overriding virtual properties in your classes to add the loading hook. To get lazy loading of
related objects, you must declare navigation property getters as public and virtual (Overridable in Visual Basic),
and your class must not be sealed (NotInheritable in Visual Basic). When using Database First, navigation
properties are automatically made virtual to enable lazy loading. In the Code First section we chose to make the
navigation properties virtual for the same reason.
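As a concrete illustration, the requirement looks like this (a hypothetical Category shown only to illustrate; the walkthrough's own classes follow the same pattern):

```csharp
public class Category   // not sealed, so EF can derive a proxy type from it
{
    public int CategoryId { get; set; }

    // public + virtual: the runtime proxy overrides this getter and runs a
    // query for the related Products the first time it is accessed.
    public virtual ObservableListSource<Product> Products { get; private set; }
}
```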
Click Finish. If the Data Sources window is not showing up, select View -> Other Windows -> Data Sources
Press the pin icon, so the Data Sources window does not auto hide. You may need to hit the refresh button
if the window was already visible.
In Solution Explorer, double-click the Form1.cs file to open the main form in designer.
Select the Category data source and drag it on the form. By default, a new DataGridView
(categoryDataGridView) and Navigation toolbar controls are added to the designer. These controls are
bound to the BindingSource (categoryBindingSource) and Binding Navigator
(categoryBindingNavigator) components that are created as well.
Edit the columns on the categoryDataGridView. We want to set the CategoryId column to read-only.
The value for the CategoryId property is generated by the database after we save the data.
Right-click the DataGridView control and select Edit Columns…
Select the CategoryId column and set ReadOnly to True
Press OK
Select Products from under the Category data source and drag it on the form. The productDataGridView
and productBindingSource are added to the form.
Edit the columns on the productDataGridView. We want to hide the CategoryId and Category columns and
set ProductId to read-only. The value for the ProductId property is generated by the database after we save
the data.
Right-click the DataGridView control and select Edit Columns....
Select the ProductId column and set ReadOnly to True.
Select the CategoryId column and press the Remove button. Do the same with the Category column.
Press OK.
So far, we have associated our DataGridView controls with BindingSource components in the designer. In the
next section we will add code to the code behind to set categoryBindingSource.DataSource to the collection
of entities that are currently tracked by the DbContext. When we dragged-and-dropped Products from under
Category, WinForms took care of setting the productsBindingSource.DataSource property to
categoryBindingSource and the productsBindingSource.DataMember property to Products. Because of this
binding, only the products that belong to the currently selected Category will be displayed in the
productDataGridView.
Enable the Save button on the Navigation toolbar by clicking the right mouse button and selecting
Enabled.
Add the event handler for the save button by double-clicking on the button. This will add the event handler
and bring you to the code behind for the form. The code for the
categoryBindingNavigatorSaveItem_Click event handler will be added in the next section.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Data.Entity;
namespace WinFormswithEFSample
{
public partial class Form1 : Form
{
ProductContext _context;
public Form1()
{
InitializeComponent();
}
        private void Form1_Load(object sender, EventArgs e)
        {
            _context = new ProductContext();

            // Call the Load method to get the data for the given DbSet
            // from the database. The data is materialized as entities that
            // are managed by the DbContext instance.
            _context.Categories.Load();

            // Bind to the local collection of tracked entities; ToBindingList()
            // returns a BindingList<TEntity> that supports two-way binding.
            this.categoryBindingSource.DataSource =
                _context.Categories.Local.ToBindingList();
        }
    }
}
After saving, the store-generated keys are shown on the screen.
If you used Code First, you will also see that a WinFormswithEFSample.ProductContext database
is created for you.
Databinding with WPF
10/4/2018
This step-by-step walkthrough shows how to bind POCO types to WPF controls in a “master-detail” form. The
application uses the Entity Framework APIs to populate objects with data from the database, track changes, and
persist data to the database.
The model defines two types that participate in a one-to-many relationship: Category (principal\master) and
Product (dependent\detail). Then, the Visual Studio tools are used to bind the types defined in the model to the
WPF controls. The WPF data-binding framework enables navigation between related objects: selecting rows in the
master view causes the detail view to update with the corresponding child data.
The screen shots and code listings in this walkthrough are taken from Visual Studio 2013 but you can complete
this walkthrough with Visual Studio 2012 or Visual Studio 2010.
Pre-Requisites
You need to have Visual Studio 2013, Visual Studio 2012 or Visual Studio 2010 installed to complete this
walkthrough.
If you are using Visual Studio 2010, you also have to install NuGet. For more information, see Installing NuGet.
NOTE
In addition to the EntityFramework assembly a reference to System.ComponentModel.DataAnnotations is also
added. If the project has a reference to System.Data.Entity, then it will be removed when the EntityFramework
package is installed. The System.Data.Entity assembly is no longer used for Entity Framework 6 applications.
Define a Model
In this walkthrough you can choose to implement a model using Code First or the EF Designer. Complete one of
the two following sections.
Option 1: Define a Model using Code First
This section shows how to create a model and its associated database using Code First. Skip to the next section
(Option 2: Define a model using Database First) if you would rather use Database First to reverse engineer
your model from a database using the EF designer.
When using Code First development you usually begin by writing .NET Framework classes that define your
conceptual (domain) model.
Add a new class to the WPFwithEFSample:
Right-click on the project name
Select Add, then New Item
Select Class and enter Product for the class name
Replace the Product class definition with the following code:
namespace WPFwithEFSample
{
    public class Product
    {
        public int ProductId { get; set; }
        public string Name { get; set; }

        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }
}
using System.Collections.ObjectModel;
namespace WPFwithEFSample
{
    public class Category
    {
        public Category()
        {
            this.Products = new ObservableCollection<Product>();
        }

        public int CategoryId { get; set; }
        public string Name { get; set; }

        public virtual ObservableCollection<Product> Products { get; private set; }
    }
}
using System.Data.Entity;
namespace WPFwithEFSample
{
public class ProductContext : DbContext
{
public DbSet<Category> Categories { get; set; }
public DbSet<Product> Products { get; set; }
}
}
The new database will now appear in Server Explorer, right-click on it and select New Query
Copy the following SQL into the new query, then right-click on the query and select Execute
CREATE TABLE [dbo].[Categories] (
[CategoryId] [int] NOT NULL IDENTITY,
[Name] [nvarchar](max),
CONSTRAINT [PK_dbo.Categories] PRIMARY KEY ([CategoryId])
)
Select the connection to the database you created in the first section, enter ProductContext as the name of
the connection string and click Next
Click the checkbox next to ‘Tables’ to import all tables and click ‘Finish’
Once the reverse engineer process completes the new model is added to your project and opened up for you to
view in the Entity Framework Designer. An App.config file has also been added to your project with the connection
details for the database.
Additional Steps in Visual Studio 2010
If you are working in Visual Studio 2010 then you will need to update the EF designer to use EF6 code generation.
Right-click on an empty spot of your model in the EF Designer and select Add Code Generation Item…
Select Online Templates from the left menu and search for DbContext
Select the EF 6.x DbContext Generator for C#, enter ProductsModel as the name and click Add
Updating code generation for data binding
EF generates code from your model using T4 templates. The templates shipped with Visual Studio or downloaded
from the Visual Studio gallery are intended for general purpose use. This means that the entities generated from
these templates have simple ICollection<T> properties. However, when doing data binding using WPF it is
desirable to use ObservableCollection for collection properties so that WPF can keep track of changes made to
the collections. To this end we will modify the templates to use ObservableCollection.
Open the Solution Explorer and find ProductModel.edmx file
Find the ProductModel.tt file which will be nested under the ProductModel.edmx file
Lazy Loading
The Products property on the Category class and Category property on the Product class are navigation
properties. In Entity Framework, navigation properties provide a way to navigate a relationship between two entity
types.
EF gives you an option of loading related entities from the database automatically the first time you access the
navigation property. With this type of loading (called lazy loading), be aware that the first time you access each
navigation property a separate query will be executed against the database if the contents are not already in the
context.
When using POCO entity types, EF achieves lazy loading by creating instances of derived proxy types at
runtime and then overriding virtual properties in your classes to add the loading hook. To get lazy loading of
related objects, you must declare navigation property getters as public and virtual (Overridable in Visual Basic),
and your class must not be sealed (NotInheritable in Visual Basic). When using Database First, navigation
properties are automatically made virtual to enable lazy loading. In the Code First section we chose to make the
navigation properties virtual for the same reason.
Click Finish.
The Data Sources window is opened next to the MainWindow.xaml window. If the Data Sources window is
not showing up, select View -> Other Windows -> Data Sources
Press the pin icon, so the Data Sources window does not auto hide. You may need to hit the refresh button
if the window was already visible.
<Window.Resources>
<CollectionViewSource x:Key="categoryViewSource"
d:DesignSource="{d:DesignInstance {x:Type local:Category}, CreateList=True}"/>
</Window.Resources>
<Grid DataContext="{StaticResource categoryViewSource}">
<DataGrid x:Name="categoryDataGrid" AutoGenerateColumns="False" EnableRowVirtualization="True"
ItemsSource="{Binding}" Margin="13,13,43,191"
RowDetailsVisibilityMode="VisibleWhenSelected">
<DataGrid.Columns>
<DataGridTextColumn x:Name="categoryIdColumn" Binding="{Binding CategoryId}"
Header="Category Id" Width="SizeToHeader"/>
<DataGridTextColumn x:Name="nameColumn" Binding="{Binding Name}"
Header="Name" Width="SizeToHeader"/>
</DataGrid.Columns>
</DataGrid>
</Grid>
Also add the Click event for the Save button by double-clicking the Save button in the designer.
This brings you to the code behind for the form; we'll now edit the code to use the ProductContext to perform data
access. Update the code for the MainWindow as shown below.
The code declares a long-running instance of ProductContext. The ProductContext object is used to query and
save data to the database. Dispose() is then called on the ProductContext instance from the overridden
OnClosing method. The code comments provide details about what the code does.
using System.Data.Entity;
using System.Linq;
using System.Windows;
namespace WPFwithEFSample
{
public partial class MainWindow : Window
{
private ProductContext _context = new ProductContext();
public MainWindow()
{
InitializeComponent();
}
        private void buttonSave_Click(object sender, RoutedEventArgs e)
        {
            // Save the changes tracked by the context to the database.
            _context.SaveChanges();

            // Refresh the grids so the database-generated values show up.
            this.categoryDataGrid.Items.Refresh();
            this.productsDataGrid.Items.Refresh();
        }
    }
}
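The OnClosing override mentioned above disposes the long-running context when the window closes; a sketch of how it typically looks:

```csharp
protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
{
    base.OnClosing(e);
    // Release the DbContext (and its database connection) with the window.
    this._context.Dispose();
}
```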
Additional Resources
To learn more about data binding to collections using WPF, see this topic in the WPF documentation.
Working with disconnected entities
10/25/2018
In an Entity Framework-based application, a context class is responsible for detecting changes applied to tracked
entities. Calling the SaveChanges method persists the changes tracked by the context to the database. When
working with n-tier applications, entity objects are usually modified while disconnected from the context, and you
must decide how to track changes and report those changes back to the context. This topic discusses different
options that are available when using Entity Framework with disconnected entities.
Low-level EF APIs
If you don't want to use an existing n-tier solution, or if you want to customize what happens inside a controller
action in a Web API service, Entity Framework provides APIs that allow you to apply changes made on a
disconnected tier. For more information, see Add, Attach, and entity state.
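For example, one common low-level pattern is to attach a detached entity and mark it Modified so that SaveChanges writes all of its current values back (a sketch assuming a BloggingContext with a Blogs set, as in the walkthroughs later on this page):

```csharp
public void UpdateBlog(Blog blog)
{
    using (var context = new BloggingContext())
    {
        // Attach the detached entity and mark every property as changed;
        // SaveChanges then issues a single UPDATE for the row.
        context.Entry(blog).State = EntityState.Modified;
        context.SaveChanges();
    }
}
```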
Self-Tracking Entities
Tracking changes on arbitrary graphs of entities while disconnected from the EF context is a hard problem. One of
the attempts to solve it was the Self-Tracking Entities code generation template. This template generates entity
classes that contain logic to track changes made on a disconnected tier as state in the entities themselves. A set of
extension methods is also generated to apply those changes to a context.
This template can be used with models created using the EF Designer, but cannot be used with Code First models.
For more information, see Self-Tracking Entities.
IMPORTANT
We no longer recommend using the self-tracking-entities template. It will only continue to be available to support existing
applications. If your application requires working with disconnected graphs of entities, consider other alternatives such as
Trackable Entities, which is a technology similar to Self-Tracking-Entities that is more actively developed by the community, or
writing custom code using the low-level change tracking APIs.
Self-tracking entities
9/18/2018
IMPORTANT
We no longer recommend using the self-tracking-entities template. It will only continue to be available to support existing
applications. If your application requires working with disconnected graphs of entities, consider other alternatives such as
Trackable Entities, which is a technology similar to Self-Tracking-Entities that is more actively developed by the community, or
writing custom code using the low-level change tracking APIs.
In an Entity Framework-based application, a context is responsible for tracking changes in your objects. You then
use the SaveChanges method to persist the changes to the database. When working with N-Tier applications, the
entity objects are usually disconnected from the context and you must decide how to track changes and report
those changes back to the context. Self-Tracking Entities (STEs) can help you track changes in any tier and then
replay these changes into a context to be saved.
Use STEs only if the context is not available on a tier where the changes to the object graph are made. If the
context is available, there is no need to use STEs because the context will take care of tracking changes.
This template item generates two .tt (text template) files:
The <model name>.tt file generates the entity types and a helper class that contains the change-tracking logic
that is used by self-tracking entities and the extension methods that allow setting state on self-tracking entities.
The <model name>.Context.tt file generates a derived context and an extension class that contains
ApplyChanges methods for the ObjectContext and ObjectSet classes. These methods examine the change-
tracking information that is contained in the graph of self-tracking entities to infer the set of operations that
must be performed to save the changes in the database.
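Putting the generated pieces together, saving a detached self-tracking graph reduces to a few lines (a sketch; BloggingContext and Blogs stand in for your generated context and entity set):

```csharp
using (var context = new BloggingContext())
{
    // ApplyChanges walks the graph, infers inserts/updates/deletes from the
    // state each self-tracking entity recorded, and stages them in the context.
    context.Blogs.ApplyChanges(blog);
    context.SaveChanges();
}
```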
Get Started
To get started, visit the Self-Tracking Entities Walkthrough page.
If objects in your graph contain properties with database-generated values (for example, identity or
concurrency values), Entity Framework will replace values of these properties with the database-
generated values after the SaveChanges method is called. You can implement your service operation
to return saved objects or a list of generated property values for the objects back to the client. The client
would then need to replace the object instances or object property values with the objects or property
values returned from the service operation.
Merging graphs from multiple service requests may introduce objects with duplicate key values in the
resulting graph. Entity Framework does not remove the objects with duplicate keys when you call the
ApplyChanges method but instead throws an exception. To avoid having graphs with duplicate key values
follow one of the patterns described in the following blog: Self-Tracking Entities: ApplyChanges and
duplicate entities.
When you change the relationship between objects by setting the foreign key property, the reference
navigation property is set to null and not synchronized to the appropriate principal entity on the client. After
the graph is attached to the object context (for example, after you call the ApplyChanges method), the
foreign key properties and navigation properties are synchronized.
Not having a reference navigation property synchronized with the appropriate principal object could be
an issue if you have specified cascade delete on the foreign key relationship. If you delete the principal,
the delete will not be propagated to the dependent objects. If you have cascade deletes specified, use
navigation properties to change relationships instead of setting the foreign key property.
Security Considerations
The following security considerations should be taken into account when working with self-tracking entities:
A service should not trust requests to retrieve or update data from a non-trusted client or through a non-
trusted channel. A client must be authenticated, and a secure channel or message envelope should be used. Clients'
requests to update or retrieve data must be validated to ensure they conform to expected and legitimate
changes for the given scenario.
Avoid using sensitive information as entity keys (for example, social security numbers). This mitigates the
possibility of inadvertently serializing sensitive information in the self-tracking entity graphs to a client that is
not fully trusted. With independent associations, the original key of an entity that is related to the one that is
being serialized might be sent to the client as well.
To avoid propagating exception messages that contain sensitive data to the client tier, calls to ApplyChanges
and SaveChanges on the server tier should be wrapped in exception-handling code.
Self-Tracking Entities Walkthrough
9/13/2018
IMPORTANT
We no longer recommend using the self-tracking-entities template. It will only continue to be available to support existing
applications. If your application requires working with disconnected graphs of entities, consider other alternatives such as
Trackable Entities, which is a technology similar to Self-Tracking-Entities that is more actively developed by the community, or
writing custom code using the low-level change tracking APIs.
This walkthrough demonstrates the scenario in which a Windows Communication Foundation (WCF) service
exposes an operation that returns an entity graph. Next, a client application manipulates that graph and submits
the modifications to a service operation that validates and saves the updates to a database using Entity
Framework.
Before completing this walkthrough make sure you read the Self-Tracking Entities page.
This walkthrough completes the following actions:
Creates a database to access.
Creates a class library that contains the model.
Swaps to the Self-Tracking Entity Generator template.
Moves the entity classes to a separate project.
Creates a WCF service that exposes operations to query and save entities.
Creates client applications (Console and WPF) that consume the service.
We'll use Database First in this walkthrough but the same techniques apply equally to Model First.
Pre-Requisites
To complete this walkthrough you will need a recent version of Visual Studio.
Create a Database
The database server that is installed with Visual Studio is different depending on the version of Visual Studio you
have installed:
If you are using Visual Studio 2012 then you'll be creating a LocalDB database.
If you are using Visual Studio 2010 you'll be creating a SQL Express database.
Let's go ahead and generate the database.
Open Visual Studio
View -> Server Explorer
Right click on Data Connections -> Add Connection…
If you haven’t connected to a database from Server Explorer before you’ll need to select Microsoft SQL
Server as the data source
Connect to either LocalDB or SQL Express, depending on which one you have installed
Enter STESample as the database name
Select OK and you will be asked if you want to create a new database, select Yes
The new database will now appear in Server Explorer
If you are using Visual Studio 2012
Right-click on the database in Server Explorer and select New Query
Copy the following SQL into the new query, then right-click on the query and select Execute
If you are using Visual Studio 2010
Select Data -> Transact SQL Editor -> New Query Connection...
Enter .\SQLEXPRESS as the server name and click OK
Select the STESample database from the drop down at the top of the query editor
Copy the following SQL into the new query, then right-click on the query and select Execute SQL
NOTE
Another option for moving the entity types to a separate project is to move the template file, rather than linking it from its
default location. If you do this, you will need to update the inputFile variable in the template to provide the relative path to
the edmx file (in this example that would be ..\BloggingModel.edmx).
using System.Collections.Generic;
using System.ServiceModel;
namespace STESample.Service
{
[ServiceContract]
public interface IService1
{
[OperationContract]
List<Blog> GetBlogs();
[OperationContract]
void UpdateBlog(Blog blog);
}
}
Open Service1.svc and replace the contents with the following code
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
namespace STESample.Service
{
public class Service1 : IService1
{
/// <summary>
/// Gets all the Blogs and related Posts.
/// </summary>
public List<Blog> GetBlogs()
{
using (BloggingContext context = new BloggingContext())
{
return context.Blogs.Include("Posts").ToList();
}
}
/// <summary>
/// Updates Blog and its related Posts.
/// </summary>
public void UpdateBlog(Blog blog)
{
using (BloggingContext context = new BloggingContext())
{
                try
                {
                    // TODO: Perform validation on the updated order before applying the changes.
                    context.Blogs.ApplyChanges(blog);
                    context.SaveChanges();
                }
                catch (UpdateException)
                {
                    // To avoid propagating exception messages that contain sensitive data
                    // to the client tier, calls to ApplyChanges and SaveChanges should be
                    // wrapped in exception handling code.
                    throw new InvalidOperationException("Failed to update. Try your request again.");
                }
}
}
}
}
using STESample.ConsoleTest.BloggingService;
using System;
using System.Linq;
namespace STESample.ConsoleTest
{
class Program
{
static void Main(string[] args)
{
// Print out the data before we change anything
Console.WriteLine("Initial Data:");
DisplayBlogsAndPosts();
Console.WriteLine();
Console.WriteLine();
}
Initial Data:
ADO.NET Blog
- Intro to EF
- What is New
After Adding:
ADO.NET Blog
- Intro to EF
- What is New
The New Blog
- Welcome to the new blog
- What's new on the new blog
After Update:
ADO.NET Blog
- Intro to EF
- What is New
The Not-So-New Blog
- Welcome to the new blog
- What's new on the new blog
After Delete:
ADO.NET Blog
- Intro to EF
- What is New
<Window.Resources>
<CollectionViewSource
x:Key="blogViewSource"
d:DesignSource="{d:DesignInstance {x:Type STESample:Blog}, CreateList=True}"/>
<CollectionViewSource
x:Key="blogPostsViewSource"
Source="{Binding Posts, Source={StaticResource blogViewSource}}"/>
</Window.Resources>
Open the code behind for MainWindow (MainWindow.xaml.cs) and replace the contents with the following
code
using STESample.WPFTest.BloggingService;
using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Data;
namespace STESample.WPFTest
{
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
}
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
Starting with Entity Framework 6, any time Entity Framework sends a command to the database, that command can
be intercepted by application code. This is most commonly used for logging SQL, but can also be used to modify
or abort the command.
Specifically, EF includes:
A Log property for the context similar to DataContext.Log in LINQ to SQL
A mechanism to customize the content and formatting of the output sent to the log
Low-level building blocks for interception, giving greater control/flexibility
Notice that context.Database.Log is set to Console.Write. This is all that is needed to log SQL to the console.
Let’s add some simple query/insert/update code so that we can see some output:
context.SaveChangesAsync().Wait();
}
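The full flow looks roughly like this. This is a sketch only; BlogContext, Blog, and Post are assumed from the earlier samples in this walkthrough:

```csharp
using System;
using System.Data.Entity;
using System.Linq;

public class Program
{
    public static void Main()
    {
        using (var context = new BlogContext())
        {
            // Route all EF-generated SQL to the console.
            context.Database.Log = Console.Write;

            // Query: triggers a SELECT that is written to the log.
            var blog = context.Blogs.First(b => b.Name == "One Unicorn");

            // Insert: logged as an INSERT when SaveChanges runs.
            blog.Posts.Add(new Post { Title = "Green Eggs and Ham" });
            context.SaveChanges();

            // Update: logged as an asynchronous UPDATE via SaveChangesAsync.
            blog.Posts.First().Title = "Green Eggs and Ham";
            context.SaveChangesAsync().Wait();
        }
    }
}
```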
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Title] AS [Title],
[Extent1].[BlogId] AS [BlogId]
FROM [dbo].[Posts] AS [Extent1]
WHERE [Extent1].[BlogId] = @EntityKeyValue1
-- EntityKeyValue1: '1' (Type = Int32)
-- Executing at 10/8/2013 10:55:41 AM -07:00
-- Completed in 2 ms with result: SqlDataReader
UPDATE [dbo].[Posts]
SET [Title] = @0
WHERE ([Id] = @1)
-- @0: 'Green Eggs and Ham' (Type = String, Size = -1)
-- @1: '1' (Type = Int32)
-- Executing asynchronously at 10/8/2013 10:55:41 AM -07:00
-- Completed in 12 ms with result: 1
(Note that this is the output assuming any database initialization has already happened. If database initialization
had not already happened then there would be a lot more output showing all the work Migrations does under the
covers to check for or create a new database.)
Result logging
The default logger logs command text (SQL), parameters, and the “Executing” line with a timestamp before the
command is sent to the database. A “completed” line containing elapsed time is logged following execution of the
command.
Note that for async commands the “completed” line is not logged until the async task actually completes, fails, or is
canceled.
The “completed” line contains different information depending on the type of command and whether or not
execution was successful.
Successful execution
For commands that complete successfully the output is “Completed in x ms with result: “ followed by some
indication of what the result was. For commands that return a data reader the result indication is the type of
DbDataReader returned. For commands that return an integer value such as the update command shown above
the result shown is that integer.
Failed execution
For commands that fail by throwing an exception, the output contains the message from the exception. For
example, using SqlQuery to query against a table that does not exist will result in log output something like this:
Canceled execution
For async commands where the task is canceled the result could be failure with an exception, since this is what the
underlying ADO.NET provider often does when an attempt is made to cancel. If this doesn’t happen and the task is
canceled cleanly then the output will look something like this:
To log output, simply call the Write method, which will send output to the configured write delegate.
(Note that this code does simplistic removal of line breaks just as an example. It will likely not work well for
viewing complex SQL.)
Setting the DatabaseLogFormatter
Once a new DatabaseLogFormatter class has been created it needs to be registered with EF. This is done using
code-based configuration. In a nutshell this means creating a new class that derives from DbConfiguration in the
same assembly as your DbContext class and then calling SetDatabaseLogFormatter in the constructor of this new
class. For example:
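A minimal sketch of such a registration follows. OneLineFormatter is a hypothetical DatabaseLogFormatter subclass standing in for whatever custom formatter you created:

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure.Interception;

// Must live in the same assembly as your DbContext class.
public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        // OneLineFormatter is assumed to be your DatabaseLogFormatter subclass.
        SetDatabaseLogFormatter(
            (context, writeAction) => new OneLineFormatter(context, writeAction));
    }
}
```

EF discovers the DbConfiguration-derived class automatically, so no further wiring is needed.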
Context 'BlogContext' is executing command 'SELECT TOP (1) [Extent1].[Id] AS [Id], [Extent1].[Title] AS
[Title]FROM [dbo].[Blogs] AS [Extent1]WHERE (N'One Unicorn' = [Extent1].[Title]) AND ([Extent1].[Title] IS NOT
NULL)'
Context 'BlogContext' is executing command 'SELECT [Extent1].[Id] AS [Id], [Extent1].[Title] AS [Title],
[Extent1].[BlogId] AS [BlogId]FROM [dbo].[Posts] AS [Extent1]WHERE [Extent1].[BlogId] = @EntityKeyValue1'
Context 'BlogContext' is executing command 'update [dbo].[Posts]set [Title] = @0where ([Id] = @1)'
Context 'BlogContext' is executing command 'insert [dbo].[Posts]([Title], [BlogId])values (@0, @1)select
[Id]from [dbo].[Posts]where @@rowcount > 0 and [Id] = scope_identity()'
DbInterception.Add(new NLogCommandInterceptor());
Interceptors can also be registered at the app-domain level using the DbConfiguration code-based configuration
mechanism.
Example: Logging to NLog
Let’s put all this together into an example that uses IDbCommandInterceptor and NLog to:
Log a warning for any command that is executed non-asynchronously
Log an error for any command that throws when executed
Here’s the class that does the logging, which should be registered as shown above:
public class NLogCommandInterceptor : IDbCommandInterceptor
{
private static readonly Logger Logger = LogManager.GetCurrentClassLogger();
Notice how this code uses the interception context to discover when a command is being executed non-asynchronously and to discover when there was an error executing a command.
Performance considerations for EF 4, 5, and 6
5/18/2019 • 69 minutes to read
1. Introduction
Object-Relational Mapping frameworks are a convenient way to provide an abstraction for data access in an
object-oriented application. For .NET applications, Microsoft's recommended O/RM is Entity Framework. With any
abstraction though, performance can become a concern.
This whitepaper was written to show the performance considerations when developing applications using Entity
Framework, to give developers an idea of the Entity Framework internal algorithms that can affect performance,
and to provide tips for investigating and improving performance in their applications that use Entity Framework.
There are a number of good topics on performance already available on the web, and we've also tried pointing to
these resources where possible.
Performance is a tricky topic. This whitepaper is intended as a resource to help you make performance related
decisions for your applications that use Entity Framework. We have included some test metrics to demonstrate
performance, but these metrics aren't intended as absolute indicators of the performance you will see in your
application.
For practical purposes, this document assumes Entity Framework 4 is run under .NET 4.0 and Entity Framework 5
and 6 are run under .NET 4.5. Many of the performance improvements made for Entity Framework 5 reside within
the core components that ship with .NET 4.5.
Entity Framework 6 is an out of band release and does not depend on the Entity Framework components that ship
with .NET. Entity Framework 6 works on both .NET 4.0 and .NET 4.5, and can offer a big performance benefit to
those who haven’t upgraded from .NET 4.0 but want the latest Entity Framework bits in their application. When
this document mentions Entity Framework 6, it refers to the latest version available at the time of this writing:
version 6.1.0.
Cold query execution — var c1 = q1.First(); (LINQ query execution):

EF4:
- Metadata loading: High but cached
- View generation: Potentially very high but cached
- Parameter evaluation: Medium
- Query translation: Medium
- Materializer generation: Medium but cached
- Database query execution: Potentially high
  (+ Connection.Open + Command.ExecuteReader + DataReader.Read)
- Object materialization: Medium
- Identity lookup: Medium

EF5:
- Metadata loading: High but cached
- View generation: Potentially very high but cached
- Parameter evaluation: Low
- Query translation: Medium but cached
- Materializer generation: Medium but cached
- Database query execution: Potentially high (Better queries in some situations)
  (+ Connection.Open + Command.ExecuteReader + DataReader.Read)
- Object materialization: Medium
- Identity lookup: Medium

EF6:
- Metadata loading: High but cached
- View generation: Medium but cached
- Parameter evaluation: Low
- Query translation: Medium but cached
- Materializer generation: Medium but cached
- Database query execution: Potentially high (Better queries in some situations)
  (+ Connection.Open + Command.ExecuteReader + DataReader.Read)
- Object materialization: Medium (Faster than EF5)
- Identity lookup: Medium

Warm query execution — var c1 = q1.First(); (LINQ query execution):

EF4:
- Metadata loading lookup: Low (cached)
- View generation lookup: Low (cached)
- Parameter evaluation: Medium
- Query translation lookup: Medium
- Materializer generation lookup: Low (cached)
- Database query execution: Potentially high
  (+ Connection.Open + Command.ExecuteReader + DataReader.Read)
- Object materialization: Medium
- Identity lookup: Medium

EF5:
- Metadata loading lookup: Low (cached)
- View generation lookup: Low (cached)
- Parameter evaluation: Low
- Query translation lookup: Low (cached)
- Materializer generation lookup: Low (cached)
- Database query execution: Potentially high (Better queries in some situations)
  (+ Connection.Open + Command.ExecuteReader + DataReader.Read)
- Object materialization: Medium
- Identity lookup: Medium

EF6:
- Metadata loading lookup: Low (cached)
- View generation lookup: Low (cached)
- Parameter evaluation: Low
- Query translation lookup: Low (cached)
- Materializer generation lookup: Low (cached)
- Database query execution: Potentially high (Better queries in some situations)
  (+ Connection.Open + Command.ExecuteReader + DataReader.Read)
- Object materialization: Medium (Faster than EF5)
- Identity lookup: Medium
There are several ways to reduce the performance cost of both cold and warm queries, and we'll take a look at
these in the following section. Specifically, we'll look at reducing the cost of model loading in cold queries by using
pre-generated views, which should help alleviate performance pains experienced during view generation. For
warm queries, we'll cover query plan caching, no tracking queries, and different query execution options.
2.1 What is View Generation?
In order to understand what view generation is, we must first understand what “Mapping Views” are. Mapping
Views are executable representations of the transformations specified in the mapping for each entity set and
association. Internally, these mapping views take the shape of CQTs (canonical query trees). There are two types of
mapping views:
Query views: these represent the transformation necessary to go from the database schema to the conceptual
model.
Update views: these represent the transformation necessary to go from the conceptual model to the database
schema.
Keep in mind that the conceptual model might differ from the database schema in various ways. For example, one
single table might be used to store the data for two different entity types. Inheritance and non-trivial mappings
play a role in the complexity of the mapping views.
The process of computing these views based on the specification of the mapping is what we call view generation.
View generation can either take place dynamically when a model is loaded, or at build time, by using "pre-generated views"; the latter are serialized in the form of Entity SQL statements to a C# or VB file.
When views are generated, they are also validated. From a performance standpoint, the vast majority of the cost of
view generation is actually the validation of the views which ensures that the connections between the entities
make sense and have the correct cardinality for all the supported operations.
When a query over an entity set is executed, the query is combined with the corresponding query view, and the
result of this composition is run through the plan compiler to create the representation of the query that the
backing store can understand. For SQL Server, the final result of this compilation will be a T-SQL SELECT
statement. The first time an update over an entity set is performed, the update view is run through a similar
process to transform it into DML statements for the target database.
2.2 Factors that affect View Generation performance
The performance of the view generation step depends not only on the size of your model but also on how
interconnected the model is. If two Entities are connected via an inheritance chain or an Association, they are said
to be connected. Similarly, if two tables are connected via a foreign key, they are connected. As the number of
connected Entities and tables in your schema increases, the view generation cost increases.
The algorithm that we use to generate and validate views is exponential in the worst case, though we do use some
optimizations to improve this. The biggest factors that seem to negatively affect performance are:
Model size, referring to the number of entities and the amount of associations between these entities.
Model complexity, specifically inheritance involving a large number of types.
Using Independent Associations, instead of Foreign Key Associations.
For small, simple models the cost may be small enough to not bother using pre-generated views. As model size
and complexity increase, there are several options available to reduce the cost of view generation and validation.
2.3 Using Pre-Generated Views to decrease model load time
For detailed information on how to use pre-generated views on Entity Framework 6 visit Pre-Generated Mapping
Views
2.3.1 Pre-Generated views using the Entity Framework Power Tools Community Edition
You can use the Entity Framework 6 Power Tools Community Edition to generate views of EDMX and Code First
models by right-clicking the model class file and using the Entity Framework menu to select “Generate Views”. The
Entity Framework Power Tools Community Edition works only on DbContext-derived contexts.
2.3.2 How to use Pre-generated views with a model created by EDMGen
EDMGen is a utility that ships with .NET and works with Entity Framework 4 and 5, but not with Entity Framework
6. EDMGen allows you to generate a model file, the object layer and the views from the command line. One of the
outputs will be a Views file in your language of choice, VB or C#. This is a code file containing Entity SQL snippets
for each entity set. To enable pre-generated views, you simply include the file in your project.
If you manually make edits to the schema files for the model, you will need to re-generate the views file. You can
do this by running EDMGen with the /mode:ViewGeneration flag.
2.3.3 How to use Pre-Generated Views with an EDMX file
You can also use EDMGen to generate views for an EDMX file - the previously referenced MSDN topic describes
how to add a pre-build event to do this - but this is complicated and there are some cases where it isn't possible.
It's generally easier to use a T4 template to generate the views when your model is in an edmx file.
The ADO.NET team blog has a post that describes how to use a T4 template for view generation
(<http://blogs.msdn.com/b/adonet/archive/2008/06/20/how-to-use-a-t4-template-for-view-generation.aspx>).
This post includes a template that can be downloaded and added to your project. The template was written for the
first version of Entity Framework, so it isn’t guaranteed to work with the latest versions of Entity Framework.
However, you can download a more up-to-date set of view generation templates for Entity Framework 4 and
5 from the Visual Studio Gallery:
VB.NET: <http://visualstudiogallery.msdn.microsoft.com/118b44f2-1b91-4de2-a584-7a680418941d>
C#: <http://visualstudiogallery.msdn.microsoft.com/ae7730ce-ddab-470f-8456-1b313cd2c44d>
If you’re using Entity Framework 6 you can get the view generation T4 templates from the Visual Studio Gallery at
<http://visualstudiogallery.msdn.microsoft.com/18a7db90-6705-4d19-9dd1-0a6c23d0751f>.
2.4 Reducing the cost of view generation
Using pre-generated views moves the cost of view generation from model loading (run time) to design time. While
this improves startup performance at runtime, you will still experience the pain of view generation while you are
developing. There are several additional tricks that can help reduce the cost of view generation, both at compile
time and run time.
2.4.1 Using Foreign Key Associations to reduce view generation cost
We have seen a number of cases where switching the associations in the model from Independent Associations to
Foreign Key Associations dramatically improved the time spent in view generation.
To demonstrate this improvement, we generated two versions of the Navision model by using EDMGen. Note: see
appendix C for a description of the Navision model. The Navision model is interesting for this exercise due to its
very large amount of entities and relationships between them.
One version of this very large model was generated with Foreign Keys Associations and the other was generated
with Independent Associations. We then timed how long it took to generate the views for each model. The Entity
Framework 5 test used the GenerateViews() method from the EntityViewGenerator class to generate the views, while
the Entity Framework 6 test used the GenerateViews() method from the StorageMappingItemCollection class. This is
due to code restructuring that occurred in the Entity Framework 6 codebase.
Using Entity Framework 5, view generation for the model with Foreign Keys took 65 minutes in a lab machine. It's
unknown how long it would have taken to generate the views for the model that used independent associations.
We left the test running for over a month before the machine was rebooted in our lab to install monthly updates.
Using Entity Framework 6, view generation for the model with Foreign Keys took 28 seconds in the same lab
machine. View generation for the model that uses Independent Associations took 58 seconds. The improvements
done to Entity Framework 6 on its view generation code mean that many projects won’t need pre-generated views
to obtain faster startup times.
It’s important to remark that pre-generating views in Entity Framework 4 and 5 can be done with EDMGen or the
Entity Framework Power Tools. For Entity Framework 6 view generation can be done via the Entity Framework
Power Tools or programmatically as described in Pre-Generated Mapping Views.
2.4.1.1 How to use Foreign Keys instead of Independent Associations
When using EDMGen or the Entity Designer in Visual Studio, you get FKs by default, and it only takes a single
checkbox or command line flag to switch between FKs and IAs.
If you have a large Code First model, using Independent Associations will have the same effect on view generation.
You can avoid this impact by including Foreign Key properties on the classes for your dependent objects, though
some developers will consider this to be polluting their object model. You can find more information on this
subject in <http://blog.oneunicorn.com/2011/12/11/whats-the-deal-with-mapping-foreign-keys-using-the-entity-framework/>.
WHEN USING | DO THIS

Entity Designer | After adding an association between two entities, make sure you have a referential constraint. Referential constraints tell Entity Framework to use Foreign Keys instead of Independent Associations. For additional details visit <http://blogs.msdn.com/b/efdesign/archive/2009/03/16/foreign-keys-in-the-entity-framework.aspx>.

Code First | See the "Relationship Convention" section of the Code First Conventions topic for information on how to include foreign key properties on dependent objects when using Code First.
context.Configuration.AutoDetectChangesEnabled = false;
var product = context.Products.Find(productId);
context.Configuration.AutoDetectChangesEnabled = true;
...
What you have to consider when using the Find method is:
1. If the object is not in the cache the benefits of Find are negated, but the syntax is still simpler than a query by
key.
2. If auto detect changes is enabled the cost of the Find method may increase by one order of magnitude, or even
more depending on the complexity of your model and the amount of entities in your object cache.
Also, keep in mind that Find only returns the entity you are looking for; it does not automatically load its
associated entities if they are not already in the object cache. If you need to retrieve associated entities, you can use
a query by key with eager loading. For more information see 8.1 Lazy Loading vs. Eager Loading.
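A sketch of such a query by key with eager loading. The Category navigation property and Id key are assumptions for illustration:

```csharp
using System.Data.Entity;   // brings the Include(lambda) extension into scope
using System.Linq;

// Unlike Find, this brings the entity and its related data back
// in a single round trip.
var product = context.Products
    .Include(p => p.Category)                 // eager-load the assumed navigation
    .SingleOrDefault(p => p.Id == productId); // query by key
```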
3.1.2 Performance issues when the object cache has many entities
The object cache helps to increase the overall responsiveness of Entity Framework. However, when the object
cache has a very large amount of entities loaded it may affect certain operations such as Add, Remove, Find, Entry,
SaveChanges and more. In particular, operations that trigger a call to DetectChanges will be negatively affected by
very large object caches. DetectChanges synchronizes the object graph with the object state manager, and its
performance is determined directly by the size of the object graph. For more information about DetectChanges,
see Tracking Changes in POCO Entities.
When using Entity Framework 6, developers are able to call AddRange and RemoveRange directly on a DbSet,
instead of iterating on a collection and calling Add once per instance. The advantage of using the range methods is
that the cost of DetectChanges is only paid once for the entire set of entities as opposed to once per each added
entity.
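The difference can be sketched as follows, assuming a hypothetical newPosts collection:

```csharp
// Iterating and calling Add: DetectChanges may run once per call.
foreach (var post in newPosts)
{
    context.Posts.Add(post);
}

// AddRange (EF6): the cost of DetectChanges is paid once for the whole set.
context.Posts.AddRange(newPosts);
context.SaveChanges();
```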
3.2 Query Plan Caching
The first time a query is executed, it goes through the internal plan compiler to translate the conceptual query into
the store command (for example, the T-SQL which is executed when run against SQL Server). If query plan
caching is enabled, the next time the query is executed the store command is retrieved directly from the query plan
cache for execution, bypassing the plan compiler.
The query plan cache is shared across ObjectContext instances within the same AppDomain. You don't need to
hold onto an ObjectContext instance to benefit from query plan caching.
3.2.1 Some notes about Query Plan Caching
The query plan cache is shared for all query types: Entity SQL, LINQ to Entities, and CompiledQuery objects.
By default, query plan caching is enabled for Entity SQL queries, whether executed through an EntityCommand
or through an ObjectQuery. It is also enabled by default for LINQ to Entities queries in Entity Framework on
.NET 4.5, and in Entity Framework 6
Query plan caching can be disabled by setting the EnablePlanCaching property (on EntityCommand or
ObjectQuery) to false. For example:
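A sketch of disabling plan caching on an ObjectQuery (EF6 namespaces; MyEntity and objectContext are assumptions):

```csharp
using System.Data.Entity.Core.Objects;  // System.Data.Objects in EF4/EF5

// "MyEntities" is an Entity SQL command referencing the assumed entity set.
var query = new ObjectQuery<MyEntity>("MyEntities", objectContext)
{
    EnablePlanCaching = false  // opt this query out of the query plan cache
};
```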
For parameterized queries, changing the parameter's value will still hit the cached query. But changing a
parameter's facets (for example, size, precision, or scale) will hit a different entry in the cache.
When using Entity SQL, the query string is part of the key. Changing the query at all will result in different
cache entries, even if the queries are functionally equivalent. This includes changes to casing or whitespace.
When using LINQ, the query is processed to generate a part of the key. Changing the LINQ expression will
therefore generate a different key.
Other technical limitations may apply; see Autocompiled Queries for more details.
3.2.2 Cache eviction algorithm
Understanding how the internal algorithm works will help you figure out when to enable or disable query plan
caching. The cleanup algorithm is as follows:
1. Once the cache contains a set number of entries (800), we start a timer that periodically (once-per-minute)
sweeps the cache.
2. During cache sweeps, entries are removed from the cache on a LFRU (Least frequently – recently used) basis.
This algorithm takes both hit count and age into account when deciding which entries are ejected.
3. At the end of each cache sweep, the cache again contains 800 entries.
All cache entries are treated equally when determining which entries to evict. This means the store command for a
CompiledQuery has the same chance of eviction as the store command for an Entity SQL query.
Note that the cache eviction timer kicks in when there are 800 entries in the cache, but the cache is only swept
60 seconds after this timer is started. That means that for up to 60 seconds your cache may grow to be quite large.
3.2.3 Test Metrics demonstrating query plan caching performance
To demonstrate the effect of query plan caching on your application's performance, we performed a test where we
executed a number of Entity SQL queries against the Navision model. See the appendix for a description of the
Navision model and the types of queries which were executed. In this test, we first iterate through the list of
queries and execute each one once to add them to the cache (if caching is enabled). This step is untimed. Next, we
sleep the main thread for over 60 seconds to allow cache sweeping to take place; finally, we iterate through the list
a 2nd time to execute the cached queries. Additionally, the SQL Server plan cache is flushed before each set of
queries is executed so that the times we obtain accurately reflect the benefit given by the query plan cache.
3 .2 .3 .1 Te st R e su l t s
this.productsGrid.Visible = true;
In this case, you will create a new CompiledQuery instance on the fly every time the method is called. Instead of
seeing performance benefits by retrieving the store command from the query plan cache, the CompiledQuery will
go through the plan compiler every time a new instance is created. In fact, you will be polluting your query plan
cache with a new CompiledQuery entry every time the method is called.
Instead, you want to create a static instance of the compiled query, so you are invoking the same compiled query
every time the method is called. One way to do this is by adding the CompiledQuery instance as a member of your
object context. You can then make things a little cleaner by accessing the CompiledQuery through a helper
method:
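A sketch of such a helper. MyObjectContext, Product, and the Category.Name filter are assumptions used for illustration:

```csharp
using System;
using System.Data.Entity.Core.Objects;  // System.Data.Objects in EF4/EF5
using System.Linq;

public partial class MyObjectContext : ObjectContext
{
    // Compiled once per AppDomain, reused on every call.
    private static readonly Func<MyObjectContext, string, IQueryable<Product>>
        productsForCategory = CompiledQuery.Compile(
            (MyObjectContext ctx, string categoryName) =>
                ctx.Products.Where(p => p.Category.Name == categoryName));

    // Helper method keeps call sites clean.
    public IQueryable<Product> GetProductsForCategory(string categoryName)
    {
        return productsForCategory.Invoke(this, categoryName);
    }
}
```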
this.productsGrid.DataSource = context.GetProductsForCategory(selectedCategory);
if (this.orderCountFilterList.SelectedItem.Value != defaultFilterText)
{
int orderCount = int.Parse(orderCountFilterList.SelectedValue);
myCustomers = myCustomers.Where(c => c.Orders.Count > orderCount);
}
if (this.countryFilterList.SelectedItem.Value != defaultFilterText)
{
myCustomers = myCustomers.Where(c => c.Address.Country == countryFilterList.SelectedValue);
}
this.customersGrid.DataSource = myCustomers;
this.customersGrid.DataBind();
}
To avoid this re-compilation, you can rewrite the CompiledQuery to take the possible filters into account:
this.customersGrid.DataSource = myCustomers;
this.customersGrid.DataBind();
}
A tradeoff here is the generated store command will always have the filters with the null checks, but these should
be fairly simple for the database server to optimize:
...
WHERE ((0 = (CASE WHEN (@p__linq__1 IS NOT NULL) THEN cast(1 as bit) WHEN (@p__linq__1 IS NULL) THEN cast(0 as
bit) END)) OR ([Project3].[C2] > @p__linq__2)) AND (@p__linq__3 IS NULL OR [Project3].[Country] = @p__linq__4)
4 Autocompiled Queries
When a query is issued against a database using Entity Framework, it must go through a series of steps before
actually materializing the results; one such step is Query Compilation. Entity SQL queries were known to have
good performance as they are automatically cached, so the second or third time you execute the same query it can
skip the plan compiler and use the cached plan instead.
Entity Framework 5 introduced automatic caching for LINQ to Entities queries as well. In past editions of Entity
Framework creating a CompiledQuery to speed your performance was a common practice, as this would make
your LINQ to Entities query cacheable. Since caching is now done automatically without the use of a
CompiledQuery, we call this feature “autocompiled queries”. For more information about the query plan cache and
its mechanics, see Query Plan Caching.
Entity Framework detects when a query needs to be recompiled, and recompiles it when the query is invoked, even if
it had been compiled before. Common conditions that cause a query to be recompiled are:
Changing the MergeOption associated with your query. The cached query will not be used; instead the plan
compiler will run again and the newly created plan gets cached.
Changing the value of ContextOptions.UseCSharpNullComparisonBehavior. You get the same effect as
changing the MergeOption.
Other conditions can prevent your query from using the cache. Common examples are:
Using IEnumerable<T>.Contains<T>(T value).
Using functions that produce queries with constants.
Using the properties of a non-mapped object.
Linking your query to another query that needs to be recompiled.
4.1 Using IEnumerable<T>.Contains<T>(T value)
Entity Framework does not cache queries that invoke IEnumerable<T>.Contains<T>(T value) against an in-
memory collection, since the values of the collection are considered volatile. The following example query will not
be cached, so it will always be processed by the plan compiler:
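A sketch of such a query, using an assumed MyContext/MyEntities model and a deliberately large in-memory list:

```csharp
using System.Linq;

// The contents of this in-memory list are treated as volatile, so the
// query below is sent through the plan compiler on every execution.
var ids = Enumerable.Range(1, 10000).ToList();

using (var context = new MyContext())
{
    var matches = context.MyEntities
        .Where(e => ids.Contains(e.Id))  // prevents query plan caching
        .ToList();
}
```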
Note that the size of the IEnumerable against which Contains is executed determines how fast or how slow your
query is compiled. Performance can suffer significantly when using large collections such as the one shown in the
example above.
Entity Framework 6 contains optimizations to the way IEnumerable<T>.Contains<T>(T value) works when
queries are executed. The SQL code that is generated is much faster to produce and more readable, and in most
cases it also executes faster in the server.
4.2 Using functions that produce queries with constants
The Skip(), Take(), Contains() and DefaultIfEmpty() LINQ operators do not produce SQL queries with parameters,
but instead embed the values passed to them as constants. Because of this, queries that might otherwise be identical
end up polluting the query plan cache, both on the EF stack and on the database server, and do not get reused
unless the same constants are used in a subsequent query execution. For example:
var id = 10;
...
using (var context = new MyContext())
{
var query = context.MyEntities.Select(entity => entity.Id).Contains(id);
In this example, each time this query is executed with a different value for id the query will be compiled into a new
plan.
In particular, pay attention to the use of Skip and Take when doing paging. In EF6 these methods have a lambda
overload that effectively makes the cached query plan reusable, because EF can capture variables passed to these
methods and translate them to SQL parameters. This also helps keep the cache cleaner, since otherwise each query
with a different constant for Skip and Take would get its own query plan cache entry.
Consider the following code, which is suboptimal but is only meant to exemplify this class of queries:
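A sketch of that suboptimal pattern, with an assumed Customers set and a hypothetical ProcessCustomer helper:

```csharp
var count = context.Customers.Count();

// Each iteration passes a different constant to Skip, so each
// iteration compiles (and caches) a brand-new query plan.
for (var i = 0; i < count; ++i)
{
    var currentCustomer = context.Customers
        .OrderBy(c => c.LastName)
        .Skip(i)
        .FirstOrDefault();
    ProcessCustomer(currentCustomer);
}
```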
A faster version of this same code would involve calling Skip with a lambda:
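A sketch of the lambda version, under the same assumptions as the previous snippet:

```csharp
var count = context.Customers.Count();

for (var i = 0; i < count; ++i)
{
    var currentCustomer = context.Customers
        .OrderBy(c => c.LastName)
        .Skip(() => i)        // EF6 lambda overload: i becomes a SQL parameter
        .FirstOrDefault();
    ProcessCustomer(currentCustomer);
}
```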
The second snippet may run up to 11% faster because the same query plan is used every time the query is run,
which saves CPU time and avoids polluting the query cache. Furthermore, because the parameter to Skip is in a
closure the code might as well look like this now:
var i = 0;
var skippyCustomers = context.Customers.OrderBy(c => c.LastName).Skip(() => i);
for (; i < count; ++i)
{
var currentCustomer = skippyCustomers.FirstOrDefault();
ProcessCustomer(currentCustomer);
}
In this example, assume that class NonMappedType is not part of the Entity model. This query can easily be
changed to not use a non-mapped type and instead use a local variable as the parameter to the query:
In this case, the query will be able to get cached and will benefit from the query plan cache.
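The change described above can be sketched like this. NonMappedType and its properties are hypothetical names for illustration:

```csharp
// Before (not cached): referencing properties of a non-mapped object
// inside the query expression prevents caching.
var range = new NonMappedType { MinimumId = 1, MaximumId = 7 };
var query = context.MyEntities
    .Where(e => e.Id >= range.MinimumId && e.Id <= range.MaximumId);

// After (cached): copy the values into local variables first, so only
// mapped members and plain variables appear in the expression.
var minId = range.MinimumId;
var maxId = range.MaximumId;
var cachedQuery = context.MyEntities
    .Where(e => e.Id >= minId && e.Id <= maxId);
```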
4.4 Linking to queries that require recompiling
Following the same example as above, if you have a second query that relies on a query that needs to be
recompiled, your entire second query will also be recompiled. Here’s an example to illustrate this scenario:
The example is generic, but it illustrates how linking to firstQuery is causing secondQuery to be unable to get
cached. If firstQuery had not been a query that requires recompiling, then secondQuery would have been cached.
5 NoTracking Queries
5.1 Disabling change tracking to reduce state management overhead
If you are in a read-only scenario and want to avoid the overhead of loading the objects into the
ObjectStateManager, you can issue "No Tracking" queries. Change tracking can be disabled at the query level.
Note though that by disabling change tracking you are effectively turning off the object cache. When you query for
an entity, we can't skip materialization by pulling the previously-materialized query results from the
ObjectStateManager. If you are repeatedly querying for the same entities on the same context, you might actually
see a performance benefit from enabling change tracking.
When querying using ObjectContext, ObjectQuery and ObjectSet instances remember a MergeOption once it
is set, and queries composed on them inherit the effective MergeOption of the parent query. When
using DbContext, tracking can be disabled by calling the AsNoTracking() modifier on the DbSet.
5.1.1 Disabling change tracking for a query when using DbContext
You can switch the mode of a query to NoTracking by chaining a call to the AsNoTracking() method in the query.
Unlike ObjectQuery, the DbSet and DbQuery classes in the DbContext API don’t have a mutable property for the
MergeOption.
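For example, a read-only query sketched against a hypothetical Products set:

```csharp
using (var context = new MyContext())
{
    // Entities returned by this query are materialized but never added
    // to the ObjectStateManager, avoiding change tracking overhead.
    var beverages = context.Products
                           .AsNoTracking()
                           .Where(p => p.Category.CategoryName == "Beverages")
                           .ToList();
}
```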
5.1.2 Disabling change tracking at the query level using ObjectContext
((ObjectQuery)productsForCategory).MergeOption = MergeOption.NoTracking;
5.1.3 Disabling change tracking for an entire entity set using ObjectContext
context.Products.MergeOption = MergeOption.NoTracking;
6.1 LINQ to Entities queries
Pros
Suitable for CUD operations.
Fully materialized objects.
Simplest to write with syntax built into the programming language.
Good performance.
Cons
Certain technical restrictions, such as:
Patterns using DefaultIfEmpty for OUTER JOIN queries result in more complex queries than simple
OUTER JOIN statements in Entity SQL.
You still can’t use LIKE with general pattern matching.
6.2 No Tracking LINQ to Entities queries
When the context derives from ObjectContext:
context.Products.MergeOption = MergeOption.NoTracking;
var q = context.Products.Where(p => p.Category.CategoryName == "Beverages");
Pros
Improved performance over regular LINQ queries.
Fully materialized objects.
Simplest to write with syntax built into the programming language.
Cons
Not suitable for CUD operations.
Certain technical restrictions, such as:
Patterns using DefaultIfEmpty for OUTER JOIN queries result in more complex queries than simple
OUTER JOIN statements in Entity SQL.
You still can’t use LIKE with general pattern matching.
Note that queries that project scalar properties are not tracked even if NoTracking is not specified. For
example:
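A sketch of such a scalar projection; the entity and property names are assumed:

```csharp
// Projects only the ProductName strings. No entity type is materialized,
// so nothing is added to the object state manager.
var q = context.Products
               .Where(p => p.Category.CategoryName == "Beverages")
               .Select(p => p.ProductName);
```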
This particular query doesn’t explicitly specify NoTracking, but since it isn’t materializing a type that’s known
to the object state manager, the materialized result is not tracked.
6.3 Entity SQL over an ObjectQuery
Pros
Suitable for CUD operations.
Fully materialized objects.
Supports query plan caching.
Cons
Involves textual query strings which are more prone to user error than query constructs built into the language.
6.4 Entity SQL over an Entity Command
Pros
Supports query plan caching in .NET 4.0 (plan caching is supported by all other query types in .NET 4.5).
Cons
Involves textual query strings which are more prone to user error than query constructs built into the language.
Not suitable for CUD operations.
Results are not automatically materialized, and must be read from the data reader.
6.5 SqlQuery and ExecuteStoreQuery
SqlQuery on Database:
SqlQuery on DbSet:
ExecuteStoreQuery:
Pros
Generally the fastest performance, since the plan compiler is bypassed.
Fully materialized objects.
Suitable for CUD operations when used from the DbSet.
Cons
Query is textual and error prone.
Query is tied to a specific backend by using store semantics instead of conceptual semantics.
When inheritance is present, handcrafted query needs to account for mapping conditions for the type
requested.
6.6 CompiledQuery
Pros
Provides up to a 7% performance improvement over regular LINQ queries.
Fully materialized objects.
Suitable for CUD operations.
Cons
Increased complexity and programming overhead.
The performance improvement is lost when composing on top of a compiled query.
Some LINQ queries can't be written as a CompiledQuery - for example, projections of anonymous types.
6.7 Performance Comparison of different query options
We ran simple microbenchmarks in which context creation was not timed, measuring 5000 executions of a query
for a set of non-cached entities in a controlled environment. These numbers come with a caveat: they do not
reflect actual numbers produced by an application; rather, they are an apples-to-apples measurement of the
relative performance difference between the querying options, excluding the cost of creating a new context.
In this end-to-end case, Entity Framework 6 outperforms Entity Framework 5 due to performance improvements
made on several parts of the stack, including a much lighter DbContext initialization and faster
MetadataCollection<T> lookups.
It's worth noting that when generating the SSDL, the load is almost entirely spent on the SQL Server, while the
client development machine is waiting idle for results to come back from the server. DBAs should particularly
appreciate this improvement. It's also worth noting that essentially the entire cost of model generation takes place
in View Generation now.
7.3 Splitting Large Models with Database First and Model First
As model size increases, the designer surface becomes cluttered and difficult to use. We typically consider a model
with more than 300 entities to be too large to effectively use the designer. The following blog post describes
several options for splitting large models: <http://blogs.msdn.com/b/adonet/archive/2008/11/25/working-with-
large-models-in-entity-framework-part-2.aspx>.
The post was written for the first version of Entity Framework, but the steps still apply.
7.4 Performance considerations with the Entity Data Source Control
We've seen cases in multi-threaded performance and stress tests where the performance of a web application
using the EntityDataSource Control deteriorates significantly. The underlying cause is that the EntityDataSource
repeatedly calls MetadataWorkspace.LoadFromAssembly on the assemblies referenced by the Web application to
discover the types to be used as entities.
The solution is to set the ContextTypeName of the EntityDataSource to the type name of your derived
ObjectContext class. This turns off the mechanism that scans all referenced assemblies for entity types.
Setting the ContextTypeName field also prevents a functional problem where the EntityDataSource in .NET 4.0
throws a ReflectionTypeLoadException when it can't load a type from an assembly via reflection. This issue has
been fixed in .NET 4.5.
7.5 POCO entities and change tracking proxies
Entity Framework enables you to use custom data classes together with your data model without making any
modifications to the data classes themselves. This means that you can use "plain-old" CLR objects (POCO), such as
existing domain objects, with your data model. These POCO data classes (also known as persistence-ignorant
objects), which are mapped to entities that are defined in a data model, support most of the same query, insert,
update, and delete behaviors as entity types that are generated by the Entity Data Model tools.
Entity Framework can also create proxy classes derived from your POCO types, which are used when you want to
enable features such as lazy loading and automatic change tracking on POCO entities. Your POCO classes must
meet certain requirements to allow Entity Framework to use proxies, as described here:
http://msdn.microsoft.com/library/dd468057.aspx.
Change tracking proxies notify the object state manager each time any property of your entities has its
value changed, so Entity Framework knows the actual state of your entities at all times. This is done by adding
notification events to the body of the setter methods of your properties, and having the object state manager
process those events. Note that creating a proxy entity will typically be more expensive than creating a non-
proxy POCO entity, due to the added set of events created by Entity Framework.
When a POCO entity does not have a change tracking proxy, changes are found by comparing the contents of your
entities against a copy of a previously saved state. This deep comparison becomes a lengthy process when you
have many entities in your context, or when your entities have a very large number of properties, even if none of
them changed since the last comparison took place.
In summary: you’ll pay a performance hit when creating a change tracking proxy, but the proxy will speed up
change detection when your entities have many properties or when you have many entities in your model. For
entities with a small number of properties, where the number of entities doesn’t grow too much, change tracking
proxies may not be of much benefit.
When using eager loading, you'll issue a single query that returns all customers and all orders. The store command
looks like:
SELECT
[Project1].[C1] AS [C1],
[Project1].[CustomerID] AS [CustomerID],
[Project1].[CompanyName] AS [CompanyName],
[Project1].[ContactName] AS [ContactName],
[Project1].[ContactTitle] AS [ContactTitle],
[Project1].[Address] AS [Address],
[Project1].[City] AS [City],
[Project1].[Region] AS [Region],
[Project1].[PostalCode] AS [PostalCode],
[Project1].[Country] AS [Country],
[Project1].[Phone] AS [Phone],
[Project1].[Fax] AS [Fax],
[Project1].[C2] AS [C2],
[Project1].[OrderID] AS [OrderID],
[Project1].[CustomerID1] AS [CustomerID1],
[Project1].[EmployeeID] AS [EmployeeID],
[Project1].[OrderDate] AS [OrderDate],
[Project1].[RequiredDate] AS [RequiredDate],
[Project1].[ShippedDate] AS [ShippedDate],
[Project1].[ShipVia] AS [ShipVia],
[Project1].[Freight] AS [Freight],
[Project1].[ShipName] AS [ShipName],
[Project1].[ShipAddress] AS [ShipAddress],
[Project1].[ShipCity] AS [ShipCity],
[Project1].[ShipRegion] AS [ShipRegion],
[Project1].[ShipPostalCode] AS [ShipPostalCode],
[Project1].[ShipCountry] AS [ShipCountry]
FROM ( SELECT
[Extent1].[CustomerID] AS [CustomerID],
[Extent1].[CompanyName] AS [CompanyName],
[Extent1].[ContactName] AS [ContactName],
[Extent1].[ContactTitle] AS [ContactTitle],
[Extent1].[Address] AS [Address],
[Extent1].[City] AS [City],
[Extent1].[Region] AS [Region],
[Extent1].[PostalCode] AS [PostalCode],
[Extent1].[Country] AS [Country],
[Extent1].[Phone] AS [Phone],
[Extent1].[Fax] AS [Fax],
1 AS [C1],
[Extent2].[OrderID] AS [OrderID],
[Extent2].[CustomerID] AS [CustomerID1],
[Extent2].[EmployeeID] AS [EmployeeID],
[Extent2].[OrderDate] AS [OrderDate],
[Extent2].[RequiredDate] AS [RequiredDate],
[Extent2].[ShippedDate] AS [ShippedDate],
[Extent2].[ShipVia] AS [ShipVia],
[Extent2].[Freight] AS [Freight],
[Extent2].[ShipName] AS [ShipName],
[Extent2].[ShipAddress] AS [ShipAddress],
[Extent2].[ShipCity] AS [ShipCity],
[Extent2].[ShipRegion] AS [ShipRegion],
[Extent2].[ShipPostalCode] AS [ShipPostalCode],
[Extent2].[ShipCountry] AS [ShipCountry],
CASE WHEN ([Extent2].[OrderID] IS NULL) THEN CAST(NULL AS int) ELSE 1 END AS [C2]
FROM [dbo].[Customers] AS [Extent1]
LEFT OUTER JOIN [dbo].[Orders] AS [Extent2] ON [Extent1].[CustomerID] = [Extent2].[CustomerID]
WHERE N'UK' = [Extent1].[Country]
) AS [Project1]
ORDER BY [Project1].[CustomerID] ASC, [Project1].[C2] ASC
When using lazy loading, you'll issue the following query initially:
SELECT
[Extent1].[CustomerID] AS [CustomerID],
[Extent1].[CompanyName] AS [CompanyName],
[Extent1].[ContactName] AS [ContactName],
[Extent1].[ContactTitle] AS [ContactTitle],
[Extent1].[Address] AS [Address],
[Extent1].[City] AS [City],
[Extent1].[Region] AS [Region],
[Extent1].[PostalCode] AS [PostalCode],
[Extent1].[Country] AS [Country],
[Extent1].[Phone] AS [Phone],
[Extent1].[Fax] AS [Fax]
FROM [dbo].[Customers] AS [Extent1]
WHERE N'UK' = [Extent1].[Country]
And each time you access the Orders navigation property of a customer another query like the following is issued
against the store:
Do you need to access many navigation properties from the fetched entities?
No - Both options will probably do. However, if the payload your query is bringing is not too big, you may
experience performance benefits by using Eager loading, as it’ll require fewer network round trips to materialize
your objects.
Do you know exactly what data will be needed at run time?
No - Lazy loading will be better for you. Otherwise, you may end up querying for data that you will not need.
Is your code executing far from your database? (increased network latency)
No - When network latency is not an issue, using Lazy loading may simplify your code. Remember that the
topology of your application may change, so don’t take database proximity for granted.
orders.Load();
This will work only on tracked queries, as we are making use of the ability the context has to perform identity
resolution and association fixup automatically.
As with lazy loading, the tradeoff is more queries for smaller payloads. You can also use projections of
individual properties to explicitly select only the data you need from each entity, but in this case you will not be
loading entities, and updates will not be supported.
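A sketch of such a projection, with entity and property names assumed; only the selected columns travel over the wire, and the results are not entities:

```csharp
// Anonymous-type projection: fetches just the company name and the
// order dates for each UK customer, instead of full entity graphs.
var orderSummaries = context.Customers
                            .Where(c => c.Country == "UK")
                            .Select(c => new
                            {
                                c.CompanyName,
                                OrderDates = c.Orders.Select(o => o.OrderDate)
                            })
                            .ToList();
```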
8.2.3 Workaround to get lazy loading of properties
Entity Framework currently doesn’t support lazy loading of scalar or complex properties. However, in cases where
you have a table that includes a large object such as a BLOB, you can use table splitting to separate the large
properties into a separate entity. For example, suppose you have a Product table that includes a varbinary photo
column. If you don't frequently need to access this property in your queries, you can use table splitting to bring in
only the parts of the entity that you normally need. The entity representing the product photo will only be loaded
when you explicitly need it.
A good resource that shows how to enable table splitting is Gil Fink's "Table Splitting in Entity Framework" blog
post: <http://blogs.microsoft.co.il/blogs/gilf/archive/2009/10/13/table-splitting-in-entity-framework.aspx>.
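With Code First, the split described above can be sketched roughly like this; Product, ProductPhoto, and the column names are assumptions for illustration, not the blog post's exact listing:

```csharp
public class Product
{
    public int ProductID { get; set; }
    public string Name { get; set; }
    public virtual ProductPhoto Photo { get; set; }  // loaded only on demand
}

public class ProductPhoto
{
    public int ProductID { get; set; }
    public byte[] Photo { get; set; }                // the large varbinary column
}

public class MyContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map both entity types to the same table with a shared key, so
        // the BLOB is only read when the Photo navigation is accessed.
        modelBuilder.Entity<Product>()
            .HasRequired(p => p.Photo)
            .WithRequiredPrincipal();
        modelBuilder.Entity<Product>().ToTable("Products");
        modelBuilder.Entity<ProductPhoto>().ToTable("Products");
    }
}
```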
9 Other considerations
9.1 Server Garbage Collection
If the garbage collector is not properly configured, some users may experience resource contention that limits
the parallelism they expect. Whenever EF is used in a multithreaded scenario, or in any application that
resembles a server-side system, make sure to enable Server Garbage Collection. This is done via a simple setting
in your application config file:
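The setting in question is the gcServer element in the runtime section of the configuration file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Enables Server Garbage Collection for this application -->
    <gcServer enabled="true" />
  </runtime>
</configuration>
```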
This should decrease your thread contention and increase your throughput by up to 30% in CPU-saturated
scenarios. In general terms, you should always test how your application behaves using the classic Garbage
Collection (which is better tuned for UI and client-side scenarios) as well as the Server Garbage Collection.
9.2 AutoDetectChanges
As mentioned earlier, Entity Framework might show performance issues when the object cache has many entities.
Certain operations, such as Add, Remove, Find, Entry and SaveChanges, trigger calls to DetectChanges which
might consume a large amount of CPU based on how large the object cache has become. The reason for this is
that the object cache and the object state manager try to stay as synchronized as possible on each operation
performed to a context so that the produced data is guaranteed to be correct under a wide array of scenarios.
It is generally a good practice to leave Entity Framework’s automatic change detection enabled for the entire life of
your application. If your scenario is being negatively affected by high CPU usage and your profiles indicate that the
culprit is the call to DetectChanges, consider temporarily turning off AutoDetectChanges in the sensitive portion of
your code:
try
{
context.Configuration.AutoDetectChangesEnabled = false;
var product = context.Products.Find(productId);
...
}
finally
{
context.Configuration.AutoDetectChangesEnabled = true;
}
Before turning off AutoDetectChanges, it’s good to understand that this might cause Entity Framework to lose its
ability to track certain information about the changes that are taking place on the entities. If handled incorrectly,
this might cause data inconsistency on your application. For more information on turning off AutoDetectChanges,
read <http://blog.oneunicorn.com/2012/03/12/secrets-of-detectchanges-part-3-switching-off-automatic-
detectchanges/>.
9.3 Context per request
Entity Framework’s contexts are meant to be used as short-lived instances in order to provide the best
performance. Contexts are expected to be short-lived and discarded, and as such have been
implemented to be very lightweight and to reuse metadata whenever possible. In web scenarios it’s important to
keep this in mind and not keep a context around for more than the duration of a single request. Similarly, in
non-web scenarios, a context should be discarded based on your understanding of the different levels of caching
in Entity Framework. Generally speaking, avoid having a context instance live for the lifetime of the application,
as well as per-thread and static contexts.
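In practice this usually means wrapping each unit of work in a short-lived context; MyContext and the order-shipping logic here are placeholders:

```csharp
// One context per request/unit of work, disposed as soon as the work
// completes so tracked state doesn't accumulate across operations.
using (var context = new MyContext())
{
    var order = context.Orders.Find(orderId);
    order.ShippedDate = DateTime.UtcNow;
    context.SaveChanges();
}
```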
9.4 Database null semantics
Entity Framework by default will generate SQL code that has C# null comparison semantics. Consider the
following example query:
int? categoryId = 7;
int? supplierId = 8;
decimal? unitPrice = 0;
short? unitsInStock = 100;
short? unitsOnOrder = 20;
short? reorderLevel = null;
var r = q.ToList();
In this example, we’re comparing a number of nullable variables against nullable properties on the entity, such as
SupplierID and UnitPrice. The generated SQL for this query will ask if the parameter value is the same as the
column value, or if both the parameter and the column values are null. This will hide the way the database server
handles nulls and will provide a consistent C# null experience across different database vendors. On the other
hand, the generated code is a bit convoluted and may not perform well when the amount of comparisons in the
where statement of the query grows to a large number.
One way to deal with this situation is by using database null semantics. Note that this might behave
differently from C# null semantics, since Entity Framework will now generate simpler SQL that exposes the way
the database engine handles null values. Database null semantics can be activated per context with a single
configuration line against the context configuration:
context.Configuration.UseDatabaseNullSemantics = true;
Small to medium sized queries will not display a perceptible performance improvement when using database null
semantics, but the difference will become more noticeable on queries with a large number of potential null
comparisons.
In the example query above, the performance difference was less than 2% in a microbenchmark running in a
controlled environment.
9.5 Async
Entity Framework 6 introduced support for asynchronous operations when running on .NET 4.5 or later. For the
most part, applications with IO-related contention will benefit the most from asynchronous query and save
operations. If your application does not suffer from IO contention, async will, in the best case, run
synchronously and return the result in the same amount of time as a synchronous call, or, in the worst case,
simply defer execution to an asynchronous task and add extra time to the completion of your scenario.
For information on how asynchronous programming works, which will help you decide whether async will improve
the performance of your application, visit http://msdn.microsoft.com/library/hh191443.aspx. For more
information on the use of async operations in Entity Framework, see Async Query and Save.
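A sketch of asynchronous query and save with EF6; the context and entity names are placeholders:

```csharp
using System;
using System.Data.Entity;  // brings ToListAsync and friends into scope
using System.Linq;
using System.Threading.Tasks;

public async Task ShipPendingOrdersAsync()
{
    using (var context = new MyContext())
    {
        // The calling thread is free to do other work while the
        // database round trips are in flight.
        var pending = await context.Orders
                                   .Where(o => o.ShippedDate == null)
                                   .ToListAsync();
        foreach (var order in pending)
        {
            order.ShippedDate = DateTime.UtcNow;
        }
        await context.SaveChangesAsync();
    }
}
```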
9.6 NGEN
Entity Framework 6 does not ship as part of the .NET Framework. As such, the Entity Framework
assemblies are not NGen’d by default, which means that all Entity Framework code is subject to the same
JIT costs as any other MSIL assembly. This can degrade the F5 experience while developing, as well as the
cold startup of your application in production. To reduce the CPU and memory costs of
JIT compilation, it is advisable to NGen the Entity Framework images as appropriate. For more information on how
to improve the startup performance of Entity Framework 6 with NGen, see Improving Startup Performance with
NGen.
9.7 Code First versus EDMX
Entity Framework reasons about the impedance mismatch problem between object oriented programming and
relational databases by having an in-memory representation of the conceptual model (the objects), the storage
schema (the database) and a mapping between the two. This metadata is called an Entity Data Model, or EDM for
short. From this EDM, Entity Framework will derive the views to roundtrip data from the objects in memory to the
database and back.
When Entity Framework is used with an EDMX file that formally specifies the conceptual model, the storage
schema, and the mapping, then the model loading stage only has to validate that the EDM is correct (for example,
make sure that no mappings are missing), then generate the views, then validate the views and have this metadata
ready for use. Only then can a query be executed or new data be saved to the data store.
The Code First approach is, at its heart, a sophisticated Entity Data Model generator. The Entity Framework has to
produce an EDM from the provided code; it does so by analyzing the classes involved in the model, applying
conventions and configuring the model via the Fluent API. After the EDM is built, the Entity Framework essentially
behaves the same way as it would had an EDMX file been present in the project. Thus, building the model from
Code First adds extra complexity that translates into a slower startup time for the Entity Framework when
compared to having an EDMX. The cost is completely dependent on the size and complexity of the model that’s
being built.
When choosing to use EDMX versus Code First, it’s important to know that the flexibility introduced by Code First
increases the cost of building the model for the first time. If your application can withstand the cost of this first-
time load then typically Code First will be the preferred way to go.
10 Investigating Performance
10.1 Using the Visual Studio Profiler
If you are having performance issues with the Entity Framework, you can use a profiler like the one built into
Visual Studio to see where your application is spending its time. This is the tool we used to generate the pie charts
in the “Exploring the Performance of the ADO.NET Entity Framework - Part 1” blog post (
<http://blogs.msdn.com/b/adonet/archive/2008/02/04/exploring-the-performance-of-the-ado-net-entity-
framework-part-1.aspx>) that show where Entity Framework spends its time during cold and warm queries.
The "Profiling Entity Framework using the Visual Studio 2010 Profiler" blog post written by the Data and
Modeling Customer Advisory Team shows a real-world example of how they used the profiler to investigate a
performance problem. <http://blogs.msdn.com/b/dmcat/archive/2010/04/30/profiling-entity-framework-using-
the-visual-studio-2010-profiler.aspx>. This post was written for a Windows application. If you need to profile a
web application, the Windows Performance Recorder (WPR) and Windows Performance Analyzer (WPA) tools may
work better than working from Visual Studio. WPR and WPA are part of the Windows Performance Toolkit, which
is included with the Windows Assessment and Deployment Kit
(http://www.microsoft.com/download/details.aspx?id=39982).
10.2 Application/Database profiling
Tools like the profiler built into Visual Studio tell you where your application is spending time. Another type of
profiler is available that performs dynamic analysis of your running application, either in production or pre-
production depending on needs, and looks for common pitfalls and anti-patterns of database access.
Two commercially available profilers are the Entity Framework Profiler (<http://efprof.com>) and ORMProfiler
(<http://ormprofiler.com>).
If your application is an MVC application using Code First, you can use StackExchange's MiniProfiler. Scott
Hanselman describes this tool in his blog at:
<http://www.hanselman.com/blog/NuGetPackageOfTheWeek9ASPNETMiniProfilerFromStackExchangeRocksYo
urWorld.aspx>.
For more information on profiling your application's database activity, see Julie Lerman's MSDN Magazine article
titled Profiling Database Activity in the Entity Framework.
10.3 Database logger
If you are using Entity Framework 6, also consider using the built-in logging functionality. The Database property
of the context can be instructed to log its activity via a simple one-line configuration:
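In code, the one-line configuration can be sketched as follows; MyContext is a placeholder:

```csharp
using (var context = new MyContext())
{
    // Sends the generated SQL, parameter values and timings to the
    // console; any Action<string> can be assigned here instead.
    context.Database.Log = Console.WriteLine;
    // ... run queries and saves here ...
}
```

The interceptor configuration shown next achieves the same from the application config file, without recompiling the application.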
<interceptors>
<interceptor type="System.Data.Entity.Infrastructure.Interception.DatabaseLogger, EntityFramework">
<parameters>
<parameter value="C:\Path\To\My\LogOutput.txt"/>
</parameters>
</interceptor>
</interceptors>
11 Appendix
11.1 A. Test Environment
This environment uses a 2-machine setup with the database on a separate machine from the client application.
Machines are in the same rack, so network latency is relatively low, but more realistic than a single-machine
environment.
11.1.1 App Server
11.1.1.1 Software Environment
Dual Processor: Intel(R) Xeon(R) CPU W3530 @ 2.8 GHz, 4 Core(s), 8 Logical Processor(s).
12 GB RAM.
250GB SATA 7200 rpm 3GB/s drive split into 4 partitions.
11.1.2 DB server
11.1.2.1 Software Environment
Single Processor: Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz, 4 Core(s), 8 Logical Processor(s).
24 GB RAM.
500GB SATA 7200 rpm 6GB/s drive split into 4 partitions.
11.2 B. Query performance comparison tests
The Northwind model was used to execute these tests. It was generated from the database using the Entity
Framework designer. Then, the following code was used to compare the performance of the query execution
options:
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Common;
using System.Data.Entity.Infrastructure;
using System.Data.EntityClient;
using System.Data.Objects;
using System.Linq;
namespace QueryComparison
{
public partial class NorthwindEntities : ObjectContext
{
private static readonly Func<NorthwindEntities, string, IQueryable<Product>> productsForCategoryCQ =
CompiledQuery.Compile(
(NorthwindEntities context, string categoryName) =>
context.Products.Where(p => p.Category.CategoryName == categoryName)
);
// 'materialize' the product by accessing each field and value. Because we are
// materializing products, we won't have any nested data readers or records.
int fieldCount = record.FieldCount;
// Treat all products as Product, even if they are the subtype DiscontinuedProduct.
Product product = new Product();
product.ProductID = record.GetInt32(0);
product.ProductName = record.GetString(1);
product.SupplierID = record.GetInt32(2);
product.CategoryID = record.GetInt32(3);
product.QuantityPerUnit = record.GetString(4);
product.UnitPrice = record.GetDecimal(5);
product.UnitsInStock = record.GetInt16(6);
product.UnitsOnOrder = record.GetInt16(7);
product.ReorderLevel = record.GetInt16(8);
product.Discontinued = record.GetBoolean(9);
productsList.Add(product);
}
}
}
}
<Query complexity="Lookup">
<CommandText>Select value distinct top(4) e.Idle_Time From NavisionFKContext.Session as e</CommandText>
</Query>
11.3.1.2 Single Aggregating
11.3.1.3 Aggregating Subtotals
<Query complexity="AggregatingSubtotals">
<CommandText>
using NavisionFK;
function AmountConsumed(entities Collection([CRONUS_International_Ltd__Zone])) as
(
Edm.Sum(select value N.Block_Movement FROM entities as E, E.CRONUS_International_Ltd__Bin as N)
)
function AmountConsumed(P1 Edm.Int32) as
(
AmountConsumed(select value e from NavisionFKContext.CRONUS_International_Ltd__Zone as e where
e.Zone_Ranking = P1)
)
----------------------------------------------------------------------------------------------------------------------
(
select top(10) Zone_Ranking, Cross_Dock_Bin_Zone, AmountConsumed(GroupPartition(E))
from NavisionFKContext.CRONUS_International_Ltd__Zone as E
where AmountConsumed(E.Zone_Ranking) > @MinAmountConsumed
group by E.Zone_Ranking, E.Cross_Dock_Bin_Zone
)
union all
(
select top(10) Zone_Ranking, Cast(null as Edm.Byte) as P2, AmountConsumed(GroupPartition(E))
from NavisionFKContext.CRONUS_International_Ltd__Zone as E
where AmountConsumed(E.Zone_Ranking) > @MinAmountConsumed
group by E.Zone_Ranking
)
union all
{
Row(Cast(null as Edm.Int32) as P1, Cast(null as Edm.Byte) as P2, AmountConsumed(select value E
from
NavisionFKContext.CRONUS_International_Ltd__Zone as E
where AmountConsumed(E.Zone_Ranking)
> @MinAmountConsumed))
}</CommandText>
<Parameters>
<Parameter Name="MinAmountConsumed" DbType="Int32" Value="10000" />
</Parameters>
</Query>
Improving startup performance with NGen
9/13/2018 • 4 minutes to read
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
The .NET Framework supports the generation of native images for managed applications and libraries as a way to
help applications start faster and also in some cases use less memory. Native images are created by translating
managed code assemblies into files containing native machine instructions before the application is executed,
relieving the .NET JIT (Just-In-Time) compiler from having to generate the native instructions at application
runtime.
Prior to version 6, the EF runtime’s core libraries were part of the .NET Framework and native images were
generated automatically for them. Starting with version 6 the whole EF runtime has been combined into the
EntityFramework NuGet package. Native images have to now be generated using the NGen.exe command line
tool to obtain similar results.
Empirical observations show that native images of the EF runtime assemblies can cut between 1 and 3 seconds of
application startup time.
cd <*Assemblies location*>
3. Depending on your operating system and the application’s configuration you might need to generate native
images for 32 bit architecture, 64 bit architecture or for both.
For 32 bit run:
NGen.exe also supports other functions such as uninstalling and displaying the installed native images, queuing
the generation of multiple images, etc. For more details of usage read the NGen.exe documentation.
TIP
Make sure you carefully measure the impact of using native images on both the startup performance and the
overall performance of your application, and compare them against actual requirements. While native images will
generally help improve startup performance and in some cases reduce memory usage, not all scenarios will benefit
equally. For instance, in steady-state execution (that is, once all the methods being used by the application have
been invoked at least once), code generated by the JIT compiler can in fact yield slightly better performance than
native images.
```
cd <Solution directory>\packages\EntityFramework.6.0.2\lib\net45

REM For 32 bit:
%WINDIR%\Microsoft.NET\Framework\v4.0.30319\ngen install EntityFramework.SqlServer.dll

REM For 64 bit:
%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\ngen install EntityFramework.SqlServer.dll
```
NOTE
This takes advantage of the fact that installing the native images for the EF provider for SQL Server will by default also install the native images for the main EF runtime assembly. This works because NGen.exe can detect that EntityFramework.dll is a direct dependency of the EntityFramework.SqlServer.dll assembly located in the same directory.
Creating native images during setup
The WiX Toolset supports queuing the generation of native images for managed assemblies during setup, as explained in this how-to guide. Another alternative is to create a custom setup task that executes the NGen.exe command.
Before the Entity Framework can execute a query or save changes to the data source, it must generate a set of mapping views to access the database. These mapping views are a set of Entity SQL statements that represent the database in an abstract way, and they are part of the metadata that is cached per application domain. If you create multiple instances of the same context in the same application domain, they will reuse mapping views from the cached metadata rather than regenerating them. Because mapping view generation is a significant part of the overall cost of executing the first query, the Entity Framework enables you to pre-generate mapping views and include them in the compiled project. For more information, see Performance Considerations (Entity Framework).
Once the process is finished, you will have a generated class similar to the following.
Now when you run your application, EF will use this class to load views as required. If your model changes and you do not re-generate this class, then EF will throw an exception.
Generating Mapping Views from Code - EF6 Onwards
The other way to generate views is to use the APIs that EF provides. When using this method you have the
freedom to serialize the views however you like, but you also need to load the views yourself.
NOTE
EF6 Onwards Only - The APIs shown in this section were introduced in Entity Framework 6. If you are using an earlier
version this information does not apply.
Generating Views
The APIs to generate views are on the System.Data.Entity.Core.Mapping.StorageMappingItemCollection class. You can retrieve a StorageMappingItemCollection for a context by using the MetadataWorkspace of an ObjectContext. If you are using the newer DbContext API, you can access this through IObjectContextAdapter, as shown below, where dbContext is an instance of your derived DbContext:
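A minimal sketch of this retrieval, assuming dbContext is an instance of your derived DbContext as described above:

```csharp
using System.Data.Entity.Core.Mapping;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;

// Drop down to the underlying ObjectContext, then ask the metadata
// workspace for the conceptual-to-store (C-S) mapping collection.
var objectContext = ((IObjectContextAdapter)dbContext).ObjectContext;
var mappingCollection = (StorageMappingItemCollection)objectContext
    .MetadataWorkspace
    .GetItemCollection(DataSpace.CSSpace);
```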
Once you have the StorageMappingItemCollection then you can get access to the GenerateViews and
ComputeMappingHashValue methods.
The first method creates a dictionary with an entry for each view in the container mapping. The second method
computes a hash value for the single container mapping and is used at runtime to validate that the model has not
changed since the views were pre-generated. Overrides of the two methods are provided for complex scenarios
involving multiple container mappings.
When generating views, you call the GenerateViews method and then write out the resulting EntitySetBase and DbMappingView pairs. You will also need to store the hash generated by the ComputeMappingHashValue method.
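A sketch of this step, assuming mappingCollection is the StorageMappingItemCollection retrieved earlier:

```csharp
using System.Collections.Generic;
using System.Data.Entity.Core.Metadata.Edm;

// GenerateViews returns a dictionary of EntitySetBase -> DbMappingView;
// any schema problems are reported through the errors list.
var errors = new List<EdmSchemaError>();
var views = mappingCollection.GenerateViews(errors);

// The hash is compared at runtime to detect a changed model.
var hash = mappingCollection.ComputeMappingHashValue();

// Persist the hash plus each view's EntitySql (for example, into a
// generated source file) so they can be loaded at runtime.
```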
Loading Views
In order to load the views generated by the GenerateViews method, you can provide EF with a class that inherits
from the DbMappingViewCache abstract class. DbMappingViewCache specifies two methods that you must
implement:
The MappingHashValue property must return the hash generated by the ComputeMappingHashValue method.
When EF is going to ask for views it will first generate and compare the hash value of the model with the hash
returned by this property. If they do not match then EF will throw an EntityCommandCompilationException
exception.
The GetView method accepts an EntitySetBase and must return a DbMappingView containing the EntitySql that was associated with the given EntitySetBase in the dictionary generated by the GenerateViews method. If EF asks for a view that you do not have, GetView should return null.
The following is an extract from the DbMappingViewCache generated by the Power Tools as described above; it shows one way to store and retrieve the required EntitySql.
```csharp
public override string MappingHashValue
{
    get { return "a0b843f03dd29abee99789e190a6fb70ce8e93dc97945d437d9a58fb8e2afd2e"; }
}

public override DbMappingView GetView(EntitySetBase extent)
{
    if (extent == null)
    {
        throw new ArgumentNullException("extent");
    }

    var extentName = extent.EntityContainer.Name + "." + extent.Name;

    if (extentName == "BlogContext.Blogs")
    {
        return GetView2();
    }

    if (extentName == "BlogContext.Posts")
    {
        return GetView3();
    }

    return null;
}
```
To have EF use your DbMappingViewCache, you apply the DbMappingViewCacheTypeAttribute, specifying the context it was created for. In the code below we associate the BlogContext with the MyMappingViewCache class.
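For example, applied as an assembly-level attribute (MyMappingViewCache being the cache class from the extract above):

```csharp
using System.Data.Entity.Infrastructure.MappingViews;

// Associates the BlogContext with its pre-generated view cache.
[assembly: DbMappingViewCacheType(typeof(BlogContext), typeof(MyMappingViewCache))]
```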
For more complex scenarios, mapping view cache instances can be provided by specifying a mapping view cache factory. This can be done by implementing the abstract class System.Data.Entity.Infrastructure.MappingViews.DbMappingViewCacheFactory. The mapping view cache factory instance that is used can be retrieved or set using the StorageMappingItemCollection.MappingViewCacheFactory property.
Entity Framework 6 Providers
2/3/2019 • 3 minutes to read
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
The Entity Framework is now being developed under an open-source license and EF6 and above will not be
shipped as part of the .NET Framework. This has many advantages but also requires that EF providers be rebuilt
against the EF6 assemblies. This means that EF providers for EF5 and below will not work with EF6 until they have
been rebuilt.
Registering EF providers
Starting with Entity Framework 6, EF providers can be registered using either code-based configuration or the application’s config file.
Config file registration
Registration of the EF provider in app.config or web.config has the following format:
```xml
<entityFramework>
  <providers>
    <provider invariantName="My.Invariant.Name" type="MyProvider.MyProviderServices, MyAssembly" />
  </providers>
</entityFramework>
```
Note that if the EF provider is installed from NuGet, the NuGet package will usually add this registration to the config file automatically. If you install the NuGet package into a project that is not the startup project for your app, you may need to copy the registration into the startup project’s config file.
The “invariantName” in this registration is the same invariant name used to identify an ADO.NET provider. This
can be found as the “invariant” attribute in a DbProviderFactories registration and as the “providerName” attribute
in a connection string registration. The invariant name to use should also be included in documentation for the
provider. Examples of invariant names are “System.Data.SqlClient” for SQL Server and
“System.Data.SqlServerCe.4.0” for SQL Server Compact.
The “type” in this registration is the assembly-qualified name of the provider type that derives from
“System.Data.Entity.Core.Common.DbProviderServices”. For example, the string to use for SQL Compact is
“System.Data.Entity.SqlServerCompact.SqlCeProviderServices, EntityFramework.SqlServerCompact”. The type to
use here should be included in documentation for the provider.
Code-based registration
Starting with Entity Framework 6 application-wide configuration for EF can be specified in code. For full details see
Entity Framework Code-Based Configuration. The normal way to register an EF provider using code-based
configuration is to create a new class that derives from System.Data.Entity.DbConfiguration and place it in the
same assembly as your DbContext class. Your DbConfiguration class should then register the provider in its
constructor. For example, to register the SQL Compact provider the DbConfiguration class looks like this:
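A sketch of such a configuration class for the SQL Compact provider (assuming the EntityFramework.SqlServerCompact package is installed):

```csharp
using System.Data.Entity;
using System.Data.Entity.SqlServerCompact;

public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        // Registers the SQL Compact DbProviderServices implementation
        // under its invariant name ("System.Data.SqlServerCe.4.0").
        SetProviderServices(
            SqlCeProviderServices.ProviderInvariantName,
            SqlCeProviderServices.Instance);
    }
}
```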
The Entity Framework provider model allows Entity Framework to be used with different types of database server. For example, one provider can be plugged in to allow EF to be used against Microsoft SQL Server, while another provider can be plugged in to allow EF to be used against Microsoft SQL Server Compact Edition. The EF6 providers that we are aware of can be found on the Entity Framework providers page.
Certain changes were required to the way EF interacts with providers to allow EF to be released under an open
source license. These changes require rebuilding of EF providers against the EF6 assemblies together with new
mechanisms for registration of the provider.
Rebuilding
With EF6 the core code that was previously part of the .NET Framework is now shipped as out-of-band (OOB) assemblies. Details on how to build applications against EF6 can be found on the Updating applications for EF6 page. Providers will also need to be rebuilt using these instructions.
Additional services
In addition to the fundamental services described above there are also many other services used by EF which
are either always or sometimes provider-specific. Default provider-specific implementations of these services
can be supplied by a DbProviderServices implementation. Applications can also override the implementations of
these services, or provide implementations when a DbProviderServices type does not provide a default. This is
described in more detail in the Resolving additional services section below.
The additional service types that may be of interest to a provider are listed below. More details about each of these service types can be found in the API documentation.
IDbExecutionStrategy
This is an optional service that allows a provider to implement retries or other behavior when queries and
commands are executed against the database. If no implementation is provided, then EF will simply execute the
commands and propagate any exceptions thrown. For SQL Server this service is used to provide a retry policy
which is especially useful when running against cloud-based database servers such as SQL Azure.
IDbConnectionFactory
This is an optional service that allows a provider to create DbConnection objects by convention when given only a database name. Note that while this service can be resolved by a DbProviderServices implementation, it has been present since EF 4.1 and can also be set explicitly in either the config file or in code. The provider will only get a chance to resolve this service if it is registered as the default provider (see The default provider below) and if a default connection factory has not been set elsewhere.
DbSpatialServices
This is an optional service that allows a provider to add support for geography and geometry spatial types. An implementation of this service must be supplied in order for an application to use EF with spatial types. DbSpatialServices is asked for in two ways. First, provider-specific spatial services are requested using a DbProviderInfo object (which contains the invariant name and manifest token) as the key. Second, DbSpatialServices can be asked for with no key. This is used to resolve the “global spatial provider” that is used when creating stand-alone DbGeography or DbGeometry types.
MigrationSqlGenerator
This is an optional service that allows EF Migrations to be used for the generation of SQL used in creating and
modifying database schemas by Code First. An implementation is required in order to support Migrations. If an
implementation is provided then it will also be used when databases are created using database initializers or the
Database.Create method.
Func<DbConnection, string, HistoryContextFactory>
This is an optional service that allows a provider to configure the mapping of the HistoryContext to the
__MigrationHistory table used by EF Migrations. The HistoryContext is a Code First DbContext and can be
configured using the normal fluent API to change things like the name of the table and the column mapping
specifications. The default implementation of this service returned by EF for all providers may work for a given
database server if all the default table and column mappings are supported by that provider. In such a case the
provider does not need to supply an implementation of this service.
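The kind of customization described above can also be sketched in code via a custom HistoryContext registered through DbConfiguration.SetHistoryContext; the class names and the "MyMigrationHistory" table name below are illustrative, not part of the EF API:

```csharp
using System.Data.Common;
using System.Data.Entity;
using System.Data.Entity.Migrations.History;

// A Code First context mapped to the migrations history table; the
// fluent API can change the table name, schema, and column mappings.
public class MyHistoryContext : HistoryContext
{
    public MyHistoryContext(DbConnection existingConnection, string defaultSchema)
        : base(existingConnection, defaultSchema)
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Entity<HistoryRow>().ToTable("MyMigrationHistory");
    }
}

// Registered from a DbConfiguration-derived class:
public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        SetHistoryContext("System.Data.SqlClient",
            (connection, defaultSchema) => new MyHistoryContext(connection, defaultSchema));
    }
}
```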
IDbProviderFactoryResolver
This is an optional service for obtaining the correct DbProviderFactory from a given DbConnection object. The default implementation of this service returned by EF for all providers is intended to work for all providers. However, when running on .NET 4, the DbProviderFactory is not publicly accessible from one of its DbConnections. Therefore, EF uses some heuristics to search the registered providers to find a match. It is possible that for some providers these heuristics will fail, and in such situations the provider should supply a new implementation.
Registering DbProviderServices
The DbProviderServices implementation to use can be registered either in the application’s configuration file
(app.config or web.config) or using code-based configuration. In either case the registration uses the provider’s
“invariant name” as a key. This allows multiple providers to be registered and used in a single application. The
invariant name used for EF registrations is the same as the invariant name used for ADO.NET provider
registration and connection strings. For example, for SQL Server the invariant name “System.Data.SqlClient” is
used.
Config file registration
The DbProviderServices type to use is registered as a provider element in the providers list of the
entityFramework section of the application’s config file. For example:
```xml
<entityFramework>
  <providers>
    <provider invariantName="My.Invariant.Name" type="MyProvider.MyProviderServices, MyAssembly" />
  </providers>
</entityFramework>
```
The type string must be the assembly-qualified type name of the DbProviderServices implementation to use.
Code-based registration
Starting with EF6 providers can also be registered using code. This allows an EF provider to be used without any
change to the application’s configuration file. To use code-based configuration an application should create a
DbConfiguration class as described in the code-based configuration documentation. The constructor of the
DbConfiguration class should then call SetProviderServices to register the EF provider. For example:
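A minimal sketch of such a registration ("My.Invariant.Name" and MyProviderServices are placeholders for the provider's actual invariant name and DbProviderServices implementation, which typically exposes a singleton Instance):

```csharp
using System.Data.Entity;

public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        // Placeholder invariant name and provider services type.
        SetProviderServices("My.Invariant.Name", MyProviderServices.Instance);
    }
}
```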
```csharp
AddDependencyResolver(new ExecutionStrategyResolver<DefaultSqlExecutionStrategy>(
    "System.Data.SqlClient", null, () => new DefaultSqlExecutionStrategy()));

AddDependencyResolver(new SingletonDependencyResolver<Func<MigrationSqlGenerator>>(
    () => new SqlServerMigrationSqlGenerator(), "System.Data.SqlClient"));

AddDependencyResolver(new SingletonDependencyResolver<DbSpatialServices>(
    SqlSpatialServices.Instance,
    k =>
    {
        var asSpatialKey = k as DbProviderInfo;
        return asSpatialKey == null
            || asSpatialKey.ProviderInvariantName == ProviderInvariantName;
    }));
}
```
Registration order
When multiple DbProviderServices implementations are registered in an application’s config file they will be
added as secondary resolvers in the order that they are listed. Since resolvers are always added to the top of the
secondary resolver chain this means that the provider at the end of the list will get a chance to resolve
dependencies before the others. (This can seem a little counter-intuitive at first, but it makes sense if you imagine
taking each provider out of the list and stacking it on top of the existing providers.)
This ordering usually doesn’t matter because most provider services are provider-specific and keyed by provider
invariant name. However, for services that are not keyed by provider invariant name or some other provider-
specific key the service will be resolved based on this ordering. For example, if it is not explicitly set differently
somewhere else, then the default connection factory will come from the topmost provider in the chain.
```xml
<entityFramework>
  <defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlConnectionFactory, EntityFramework" />
</entityFramework>
```
The type is the assembly-qualified type name for the default connection factory, which must implement
IDbConnectionFactory.
It is recommended that a provider NuGet package set the default connection factory in this way when installed.
See NuGet Packages for providers below.
There are also new asynchronous versions of existing methods that should be overridden, since the default implementations delegate to the synchronous methods and therefore do not execute asynchronously:

```csharp
public virtual Task<DbGeography> GetGeographyAsync(int ordinal, CancellationToken cancellationToken)
public virtual Task<DbGeometry> GetGeometryAsync(int ordinal, CancellationToken cancellationToken)
```
More information about these commands can be obtained by using get-help in the Package Manager Console
window.
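For example, assuming the EF6 tools are installed in the current project, Package Manager Console help for the migrations commands can be displayed like this:

```
PM> Get-Help Add-Migration
PM> Get-Help Update-Database -Detailed
```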
Wrapping providers
A wrapping provider is an EF and/or ADO.NET provider that wraps an existing provider to extend it with other functionality such as profiling or tracing capabilities. Wrapping providers can be registered in the normal way, but it is often more convenient to set up the wrapping provider at runtime by intercepting the resolution of provider-related services. The static OnLockingConfiguration event on the DbConfiguration class can be used to do this.
OnLockingConfiguration is called after EF has determined where all EF configuration for the app domain will be
obtained from but before it is locked for use. At app startup (before EF is used) the app should register an event
handler for this event. (We are considering adding support for registering this handler in the config file but this is
not yet supported.) The event handler should then make a call to ReplaceService for every service that needs to
be wrapped.
For example, to wrap IDbConnectionFactory and DbProviderService, a handler something like this should be
registered:
```csharp
DbConfiguration.OnLockingConfiguration +=
    (_, a) =>
    {
        a.ReplaceService<DbProviderServices>(
            (s, k) => new MyWrappedProviderServices(s));

        a.ReplaceService<IDbConnectionFactory>(
            (s, k) => new MyWrappedConnectionFactory(s));
    };
```
The handler is passed the service that has been resolved, together with the key that was used to resolve it. The handler can then wrap this service and replace the returned service with the wrapped version.
Entity Framework supports working with spatial data through the DbGeography or DbGeometry classes. These
classes rely on database-specific functionality offered by the Entity Framework provider. Not all providers support
spatial data and those that do may have additional prerequisites such as the installation of spatial type assemblies.
More information about provider support for spatial types is provided below.
Additional information on how to use spatial types in an application can be found in two walkthroughs, one for
Code First, the other for Database First or Model First:
Spatial Data Types in Code First
Spatial Data Types in EF Designer
When creating instances of POCO entity types, Entity Framework often creates instances of a dynamically
generated derived type that acts as a proxy for the entity. This proxy overrides some virtual properties of the entity
to insert hooks for performing actions automatically when the property is accessed. For example, this mechanism
is used to support lazy loading of relationships. The techniques shown in this topic apply equally to models created
with Code First and the EF Designer.
Note that EF will not create proxies for types where there is nothing for the proxy to do. This means that you can also avoid proxies by having types that are sealed and/or have no virtual properties.
The generic version of Create can be used if you want to create an instance of a derived entity type. For example:
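A sketch of such a call (AdminBlog is assumed here to be an entity type derived from Blog; it is not defined elsewhere in this topic):

```csharp
using (var context = new BloggingContext())
{
    // Creates a (possibly proxied) instance of the derived entity type.
    var adminBlog = context.Blogs.Create<AdminBlog>();
}
```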
Note that the Create method does not add or attach the created entity to the context.
Note that the Create method will just create an instance of the entity type itself if creating a proxy type for the
entity would have no value because it would not do anything. For example, if the entity type is sealed and/or has
no virtual properties then Create will just create an instance of the entity type.
Getting the actual entity type from a proxy type
Proxy types have names that look something like this:
```
System.Data.Entity.DynamicProxies.Blog_5E43C6C196972BF0754973E48C9C941092D86818CD94005E9A759B70BF6E48E6
```
You can find the entity type for this proxy type using the GetObjectType method from ObjectContext. For example:
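A sketch using the BloggingContext model from earlier sections:

```csharp
using System.Data.Entity.Core.Objects;

using (var context = new BloggingContext())
{
    var blog = context.Blogs.Find(1);

    // Returns typeof(Blog) even when blog is a dynamic proxy instance.
    var entityType = ObjectContext.GetObjectType(blog.GetType());
}
```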
Note that if the type passed to GetObjectType is an instance of an entity type that is not a proxy type then the type
of entity is still returned. This means you can always use this method to get the actual entity type without any other
checking to see if the type is a proxy type or not.
Testing with a mocking framework
3/21/2019 • 8 minutes to read
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
When writing tests for your application it is often desirable to avoid hitting the database. Entity Framework allows
you to achieve this by creating a context – with behavior defined by your tests – that makes use of in-memory data.
```csharp
using System.Collections.Generic;
using System.Data.Entity;

namespace TestingDemo
{
    public class BloggingContext : DbContext
    {
        public virtual DbSet<Blog> Blogs { get; set; }
        public virtual DbSet<Post> Posts { get; set; }
    }
}
```
Service to be tested
To demonstrate testing with in-memory test doubles we are going to be writing a couple of tests for a BlogService.
The service is capable of creating new blogs (AddBlog) and returning all Blogs ordered by name (GetAllBlogs). In
addition to GetAllBlogs, we’ve also provided a method that will asynchronously get all blogs ordered by name
(GetAllBlogsAsync).
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Threading.Tasks;
namespace TestingDemo
{
public class BlogService
{
private BloggingContext _context;
return blog;
}
return query.ToList();
}
namespace TestingDemo
{
[TestClass]
public class NonQueryTests
{
[TestMethod]
public void CreateBlog_saves_a_blog_via_context()
{
var mockSet = new Mock<DbSet<Blog>>();
namespace TestingDemo
{
[TestClass]
public class QueryTests
{
[TestMethod]
public void GetAllBlogs_orders_by_name()
{
var data = new List<Blog>
{
new Blog { Name = "BBB" },
new Blog { Name = "ZZZ" },
new Blog { Name = "AAA" },
}.AsQueryable();
Assert.AreEqual(3, blogs.Count);
Assert.AreEqual("AAA", blogs[0].Name);
Assert.AreEqual("BBB", blogs[1].Name);
Assert.AreEqual("ZZZ", blogs[2].Name);
}
}
}
```
The source IQueryable doesn't implement IDbAsyncEnumerable{0}. Only sources that implement
IDbAsyncEnumerable can be used for Entity Framework asynchronous operations. For more details see
http://go.microsoft.com/fwlink/?LinkId=287068.
```
Whilst the async methods are only supported when running against an EF query, you may want to use them in
your unit test when running against an in-memory test double of a DbSet.
In order to use the async methods, we need to create an in-memory DbAsyncQueryProvider to process the async query. While it would be possible to set up a query provider using Moq, it is much easier to create a test double implementation in code. The code for this implementation is as follows:
using System.Collections.Generic;
using System.Data.Entity.Infrastructure;
using System.Linq;
using System.Linq.Expressions;
using System.Threading;
using System.Threading.Tasks;
namespace TestingDemo
{
internal class TestDbAsyncQueryProvider<TEntity> : IDbAsyncQueryProvider
{
private readonly IQueryProvider _inner;
IDbAsyncEnumerator IDbAsyncEnumerable.GetAsyncEnumerator()
{
return GetAsyncEnumerator();
}
IQueryProvider IQueryable.Provider
{
get { return new TestDbAsyncQueryProvider<T>(this); }
}
}
public T Current
{
get { return _inner.Current; }
}
object IDbAsyncEnumerator.Current
{
get { return Current; }
}
}
}
Now that we have an async query provider we can write a unit test for our new GetAllBlogsAsync method.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Linq;
using System.Threading.Tasks;
namespace TestingDemo
{
[TestClass]
public class AsyncQueryTests
{
[TestMethod]
public async Task GetAllBlogsAsync_orders_by_name()
{
mockSet.As<IQueryable<Blog>>()
.Setup(m => m.Provider)
.Returns(new TestDbAsyncQueryProvider<Blog>(data.Provider));
Assert.AreEqual(3, blogs.Count);
Assert.AreEqual("AAA", blogs[0].Name);
Assert.AreEqual("BBB", blogs[1].Name);
Assert.AreEqual("ZZZ", blogs[2].Name);
}
}
}
Testing with your own test doubles
3/25/2019 • 9 minutes to read
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
When writing tests for your application it is often desirable to avoid hitting the database. Entity Framework allows
you to achieve this by creating a context – with behavior defined by your tests – that makes use of in-memory
data.
```csharp
using System.Data.Entity;

namespace TestingDemo
{
    public interface IBloggingContext
    {
        DbSet<Blog> Blogs { get; }
        DbSet<Post> Posts { get; }
        int SaveChanges();
    }
}
```
The EF model
The service we're going to test makes use of an EF model made up of the BloggingContext and the Blog and Post
classes. This code may have been generated by the EF Designer or be a Code First model.
```csharp
using System.Collections.Generic;
using System.Data.Entity;

namespace TestingDemo
{
    public class BloggingContext : DbContext, IBloggingContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
    }
}
```
Service to be tested
To demonstrate testing with in-memory test doubles we are going to be writing a couple of tests for a BlogService.
The service is capable of creating new blogs (AddBlog) and returning all Blogs ordered by name (GetAllBlogs). In
addition to GetAllBlogs, we’ve also provided a method that will asynchronously get all blogs ordered by name
(GetAllBlogsAsync).
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Threading.Tasks;
namespace TestingDemo
{
public class BlogService
{
private IBloggingContext _context;
return blog;
}
return query.ToList();
}
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Linq;
using System.Linq.Expressions;
using System.Threading;
using System.Threading.Tasks;
namespace TestingDemo
{
public class TestContext : IBloggingContext
{
public TestContext()
{
this.Blogs = new TestDbSet<Blog>();
this.Posts = new TestDbSet<Post>();
}
public TestDbSet()
{
_data = new ObservableCollection<TEntity>();
_query = _data.AsQueryable();
}
Type IQueryable.ElementType
{
get { return _query.ElementType; }
}
Expression IQueryable.Expression
{
get { return _query.Expression; }
}
IQueryProvider IQueryable.Provider
{
get { return new TestDbAsyncQueryProvider<TEntity>(_query.Provider); }
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return _data.GetEnumerator();
}
IEnumerator<TEntity> IEnumerable<TEntity>.GetEnumerator()
{
return _data.GetEnumerator();
}
IDbAsyncEnumerator<TEntity> IDbAsyncEnumerable<TEntity>.GetAsyncEnumerator()
{
return new TestDbAsyncEnumerator<TEntity>(_data.GetEnumerator());
}
}
IDbAsyncEnumerator IDbAsyncEnumerable.GetAsyncEnumerator()
{
return GetAsyncEnumerator();
}
IQueryProvider IQueryable.Provider
{
get { return new TestDbAsyncQueryProvider<T>(this); }
}
}
public T Current
{
get { return _inner.Current; }
}
object IDbAsyncEnumerator.Current
{
get { return Current; }
}
}
}
Implementing Find
The Find method is difficult to implement in a generic fashion. If you need to test code that makes use of the Find
method it is easiest to create a test DbSet for each of the entity types that need to support find. You can then write
logic to find that particular type of entity, as shown below.
```csharp
using System.Linq;

namespace TestingDemo
{
    class TestBlogDbSet : TestDbSet<Blog>
    {
        public override Blog Find(params object[] keyValues)
        {
            var id = (int)keyValues.Single();
            return this.SingleOrDefault(b => b.BlogId == id);
        }
    }
}
```
namespace TestingDemo
{
[TestClass]
public class NonQueryTests
{
[TestMethod]
public void CreateBlog_saves_a_blog_via_context()
{
var context = new TestContext();
Assert.AreEqual(1, context.Blogs.Count());
Assert.AreEqual("ADO.NET Blog", context.Blogs.Single().Name);
Assert.AreEqual("http://blogs.msdn.com/adonet", context.Blogs.Single().Url);
Assert.AreEqual(1, context.SaveChangesCount);
}
}
}
Here is another example of a test, this time one that performs a query. The test starts by creating a test context with some data in its Blogs property; note that the data is not in alphabetical order. We can then create a BlogService based on our test context and ensure that the data we get back from GetAllBlogs is ordered by name.
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace TestingDemo
{
[TestClass]
public class QueryTests
{
[TestMethod]
public void GetAllBlogs_orders_by_name()
{
var context = new TestContext();
context.Blogs.Add(new Blog { Name = "BBB" });
context.Blogs.Add(new Blog { Name = "ZZZ" });
context.Blogs.Add(new Blog { Name = "AAA" });
Assert.AreEqual(3, blogs.Count);
Assert.AreEqual("AAA", blogs[0].Name);
Assert.AreEqual("BBB", blogs[1].Name);
Assert.AreEqual("ZZZ", blogs[2].Name);
}
}
}
Finally, we'll write one more test that uses our async method to ensure that the async infrastructure we included in
TestDbSet is working.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
namespace TestingDemo
{
[TestClass]
public class AsyncQueryTests
{
[TestMethod]
public async Task GetAllBlogsAsync_orders_by_name()
{
var context = new TestContext();
context.Blogs.Add(new Blog { Name = "BBB" });
context.Blogs.Add(new Blog { Name = "ZZZ" });
context.Blogs.Add(new Blog { Name = "AAA" });
Assert.AreEqual(3, blogs.Count);
Assert.AreEqual("AAA", blogs[0].Name);
Assert.AreEqual("BBB", blogs[1].Name);
Assert.AreEqual("ZZZ", blogs[2].Name);
}
}
}
Testability and Entity Framework 4.0
9/18/2018 • 43 minutes to read
Scott Allen
Published: May 2010
Introduction
This white paper describes and demonstrates how to write testable code with the ADO.NET Entity Framework 4.0 and Visual Studio 2010. This paper does not try to focus on a specific testing methodology, like test-driven development (TDD) or behavior-driven development (BDD). Instead, it focuses on how to write code that uses the ADO.NET Entity Framework yet remains easy to isolate and test in an automated fashion. We’ll look at common design patterns that facilitate testing in data access scenarios and see how to apply those patterns when using the framework. We’ll also look at specific features of the framework to see how those features can work in testable code.
Testing a method is difficult if the method writes the computed value into a network socket, a database table, or a
file like the following code. The test has to perform additional work to retrieve the value.
```csharp
public void AddAndSaveToFile(int x, int y) {
    var results = string.Format("The answer is {0}", x + y);
    File.WriteAllText("results.txt", results);
}
```
Secondly, testable code is easy to isolate. Let’s use the following pseudo-code as a bad example of testable code.
The method is easy to observe – we can pass in an insurance policy and verify the return value matches an
expected result. However, to test the method we’ll need to have a database installed with the correct schema, and
configure the SMTP server in case the method tries to send an email.
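The pseudo-code itself did not survive extraction; the shape of the bad example, as described, is roughly the following sketch (the method and type names are assumptions):

```csharp
// Pseudo-code: one method with too many responsibilities.
public decimal ComputePolicyValue(InsurancePolicy policy) {
    // open a connection and read rating factors from the database
    // perform the business calculation using the policy and the factors
    // send a notification email through the SMTP server
    // return the calculated value
}
```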
The unit test only wants to verify the calculation logic inside the method, but the test might fail because the email
server is offline, or because the database server moved. Both of these failures are unrelated to the behavior the test
wants to verify. The behavior is difficult to isolate.
Software developers who strive to write testable code often strive to maintain a separation of concerns in the code
they write. The above method should focus on the business calculations and delegate the database and email
implementation details to other components. Robert C. Martin calls this the Single Responsibility Principle. An
object should encapsulate a single, narrow responsibility, like calculating the value of a policy. All other database
and notification work should be the responsibility of some other object. Code written in this fashion is easier to
isolate because it is focused on a single task.
In .NET we have the abstractions we need to follow the Single Responsibility Principle and achieve isolation. We
can use interface definitions and force the code to use the interface abstraction instead of a concrete type. Later in
this paper we’ll see how a method like the bad example presented above can work with interfaces that look like
they will talk to the database. At test time, however, we can substitute a dummy implementation that doesn’t talk to
the database but instead holds data in memory. This dummy implementation will isolate the code from unrelated
problems in the data access code or database configuration.
There are additional benefits to isolation. The business calculation in the last method should only take a few
milliseconds to execute, but the test itself might run for several seconds as the code hops around the network and
talks to various servers. Unit tests should run fast to facilitate small changes. Unit tests should also be repeatable
and not fail because a component unrelated to the test has a problem. Writing code that is easy to observe and to
isolate means developers will have an easier time writing tests for the code, spend less time waiting for tests to
execute, and more importantly, spend less time tracking down bugs that do not exist.
Hopefully you can appreciate the benefits of testing and understand the qualities that testable code exhibits. We are
about to address how to write code that works with EF4 to save data into a database while remaining observable
and easy to isolate, but first we’ll narrow our focus to discuss testable designs for data access.
Design Patterns for Data Persistence
Both of the bad examples presented earlier had too many responsibilities. The first bad example had to perform a
calculation and write to a file. The second bad example had to read data from a database and perform a business
calculation and send email. By designing smaller methods that separate concerns and delegate responsibility to
other components you’ll make great strides towards writing testable code. The goal is to build functionality by
composing actions from small and focused abstractions.
When it comes to data persistence the small and focused abstractions we are looking for are so common they’ve
been documented as design patterns. Martin Fowler’s book Patterns of Enterprise Application Architecture was the
first work to describe these patterns in print. We’ll provide a brief description of these patterns in the following
sections before we show how the ADO.NET Entity Framework implements and works with these patterns.
The Repository Pattern
Fowler says a repository “mediates between the domain and data mapping layers using a collection-like interface
for accessing domain objects”. The goal of the repository pattern is to isolate code from the minutiae of data access,
and as we saw earlier isolation is a required trait for testability.
The key to the isolation is how the repository exposes objects using a collection-like interface. The logic you write
to use the repository has no idea how the repository will materialize the objects you request. The repository might
talk to a database, or it might just return objects from an in-memory collection. All your code needs to know is that
the repository appears to maintain the collection, and you can retrieve, add, and delete objects from the collection.
In existing .NET applications a concrete repository often inherits from a generic interface like the following:
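The interface definition did not survive extraction; a sketch consistent with the operations described below (the exact member names are assumptions):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public interface IRepository<T> {
    T FindById(int id);
    IQueryable<T> FindAll();
    IQueryable<T> FindBy(Expression<Func<T, bool>> predicate);
    void Add(T newEntity);
    void Remove(T entity);
}
```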
We’ll make a few changes to the interface definition when we provide an implementation for EF4, but the basic
concept remains the same. Code can use a concrete repository implementing this interface to retrieve an entity by
its primary key value, to retrieve a collection of entities based on the evaluation of a predicate, or simply retrieve all
available entities. The code can also add and remove entities through the repository interface.
Given an IRepository of Employee objects, code can perform the following operations.
var employeesNamedScott =
repository
.FindBy(e => e.Name == "Scott")
.OrderBy(e => e.HireDate);
var firstEmployee = repository.FindById(1);
var newEmployee = new Employee() {/*... */};
repository.Add(newEmployee);
Since the code is using an interface (IRepository of Employee), we can provide the code with different
implementations of the interface. One implementation might be an implementation backed by EF4 and persisting
objects into a Microsoft SQL Server database. A different implementation (one we use during testing) might be
backed by an in-memory List of Employee objects. The interface will help to achieve isolation in the code.
Notice the IRepository<T> interface does not expose a Save operation. How do we update existing objects? You
might come across IRepository definitions that do include the Save operation, and implementations of these
repositories will need to immediately persist an object into the database. However, in many applications we don’t
want to persist objects individually. Instead, we want to bring objects to life, perhaps from different repositories,
modify those objects as part of a business activity, and then persist all the objects as part of a single, atomic
operation. Fortunately, there is a pattern to allow this type of behavior.
The Unit of Work Pattern
Fowler says a unit of work will “maintain a list of objects affected by a business transaction and coordinates the
writing out of changes and the resolution of concurrency problems”. It is the responsibility of the unit of work to
track changes to the objects we bring to life from a repository and persist any changes we’ve made to the objects
when we tell the unit of work to commit the changes. It’s also the responsibility of the unit of work to take the new
objects we’ve added to all repositories and insert the objects into a database, and also manage deletions.
If you’ve ever done any work with ADO.NET DataSets then you’ll already be familiar with the unit of work pattern.
ADO.NET DataSets had the ability to track our updates, deletions, and insertion of DataRow objects and could
(with the help of a TableAdapter) reconcile all our changes to a database. However, DataSet objects model a
disconnected subset of the underlying database. The unit of work pattern exhibits the same behavior, but works
with business objects and domain objects that are isolated from data access code and unaware of the database.
An abstraction to model the unit of work in .NET code might look like the following:
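The abstraction did not survive extraction; a sketch consistent with the surrounding text (the Employees and TimeCards property names are assumptions drawn from the scenario):

```csharp
public interface IUnitOfWork {
    IRepository<Employee> Employees { get; }
    IRepository<TimeCard> TimeCards { get; }
    void Commit();
}
```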
By exposing repository references from the unit of work we can ensure a single unit of work object has the ability
to track all entities materialized during a business transaction. The implementation of the Commit method for a
real unit of work is where all the magic happens to reconcile in-memory changes with the database.
Given an IUnitOfWork reference, code can make changes to business objects retrieved from one or more
repositories and save all the changes using the atomic Commit operation.
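Such code (elided above) might look like the following sketch, assuming the unit of work shape just described:

```csharp
var employee = unitOfWork.Employees.FindById(1);
employee.Name = "Scott";
employee.TimeCards.Add(new TimeCard { Hours = 40 });
unitOfWork.Commit(); // one atomic save for all changes
```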
How is the TimeCards collection populated? There are two possible answers. One answer is that the employee
repository, when asked to fetch an employee, issues a query to retrieve both the employee along with the
employee’s associated time card information. In relational databases this generally requires a query with a JOIN
clause and may result in retrieving more information than an application needs. What if the application never
needs to touch the TimeCards property?
A second answer is to load the TimeCards property “on demand”. This lazy loading is implicit and transparent to
the business logic because the code does not invoke special APIs to retrieve time card information. The code
assumes the time card information is present when needed. There is some magic involved with lazy loading that
generally involves runtime interception of method invocations. The intercepting code is responsible for talking to
the database and retrieving time card information while leaving the business logic free to be business logic. This
lazy load magic allows the business code to isolate itself from data retrieval operations and results in more testable
code.
The drawback to a lazy load is that when an application does need the time card information the code will execute
an additional query. This isn’t a concern for many applications, but for performance sensitive applications or
applications looping through a number of employee objects and executing a query to retrieve time cards during
each iteration of the loop (a problem often referred to as the N+1 query problem), lazy loading is a drag. In these
scenarios an application might want to eagerly load time card information in the most efficient manner possible.
Fortunately, we’ll see how EF4 supports both implicit lazy loads and efficient eager loads as we move into the next
section and implement these patterns.
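The POCO entity classes for the scenario (the “class definitions” referenced below) did not survive extraction; a sketch consistent with the columns appearing in the SQL queries shown later:

```csharp
using System;
using System.Collections.Generic;

public class Employee {
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime HireDate { get; set; }
    public ICollection<TimeCard> TimeCards { get; set; }
}

public class TimeCard {
    public int Id { get; set; }
    public int Hours { get; set; }
    public DateTime EffectiveDate { get; set; }
}
```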
These class definitions will change slightly as we explore different approaches and features of EF4, but the intent is
to keep these classes as persistence ignorant (PI) as possible. A PI object doesn’t know how, or even if, the state it
holds lives inside a database. PI and POCOs go hand in hand with testable software. Objects using a POCO
approach are less constrained, more flexible, and easier to test because they can operate without a database
present.
With the POCOs in place we can create an Entity Data Model (EDM) in Visual Studio (see figure 1). We will not use
the EDM to generate code for our entities. Instead, we want to use the entities we lovingly craft by hand. We will
only use the EDM to generate our database schema and provide the metadata EF4 needs to map objects into the
database.
Figure 1
Note: if you want to develop the EDM model first, it is possible to generate clean, POCO code from the EDM. You
can do this with a Visual Studio 2010 extension provided by the Data Programmability team. To download the
extension, launch the Extension Manager from the Tools menu in Visual Studio and search the online gallery of
templates for “POCO” (See figure 2). There are several POCO templates available for EF. For more information on
using the template, see “Walkthrough: POCO Template for the Entity Framework”.
Figure 2
From this POCO starting point we will explore two different approaches to testable code. The first approach I call
the EF approach because it leverages abstractions from the Entity Framework API to implement units of work and
repositories. In the second approach we will create our own custom repository abstractions and then see the
advantages and disadvantages of each approach. We’ll start by exploring the EF approach.
An EF Centric Implementation
Consider the following controller action from an ASP.NET MVC project. The action retrieves an Employee object
and returns a result to display a detailed view of the employee.
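The action in question was elided; a sketch of what it likely looks like (the Details name and the _unitOfWork field are assumptions):

```csharp
public ViewResult Details(int id) {
    var employee = _unitOfWork.Employees
                              .Single(e => e.Id == id);
    return View(employee);
}
```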
Is the code testable? There are at least two tests we’d need to verify the action’s behavior. First, we’d like to verify
the action returns the correct view – an easy test. We’d also want to write a test to verify the action retrieves the
correct employee, and we’d like to do this without executing code to query the database. Remember we want to
isolate the code under test. Isolation will ensure the test doesn’t fail because of a bug in the data access code or
database configuration. If the test fails, we will know we have a bug in the controller logic, and not in some lower
level system component.
To achieve isolation we’ll need some abstractions like the interfaces we presented earlier for repositories and units
of work. Remember the repository pattern is designed to mediate between domain objects and the data mapping
layer. In this scenario EF4 is the data mapping layer, and already provides a repository-like abstraction named
IObjectSet<T> (from the System.Data.Objects namespace). The interface definition looks like the following.
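The definition, as it appears in the System.Data.Objects namespace in .NET 4:

```csharp
public interface IObjectSet<TEntity> : IQueryable<TEntity>,
        IEnumerable<TEntity>, IQueryable, IEnumerable
        where TEntity : class {
    void AddObject(TEntity entity);
    void Attach(TEntity entity);
    void DeleteObject(TEntity entity);
    void Detach(TEntity entity);
}
```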
IObjectSet<T> meets the requirements for a repository because it resembles a collection of objects (via
IEnumerable<T>) and provides methods to add and remove objects from the simulated collection. The Attach and
Detach methods expose additional capabilities of the EF4 API. To use IObjectSet<T> as the interface for
repositories we need a unit of work abstraction to bind repositories together.
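That EF-centric unit of work abstraction was elided; a sketch consistent with the fakes and tests that follow:

```csharp
public interface IUnitOfWork {
    IObjectSet<Employee> Employees { get; }
    IObjectSet<TimeCard> TimeCards { get; }
    void Commit();
}
```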
One concrete implementation of this interface will talk to SQL Server and is easy to create using the ObjectContext
class from EF4. The ObjectContext class is the real unit of work in the EF4 API.
public class SqlUnitOfWork : IUnitOfWork {
    public SqlUnitOfWork() {
        var connectionString =
            ConfigurationManager
                .ConnectionStrings[ConnectionStringName]
                .ConnectionString;
        _context = new ObjectContext(connectionString);
    }

    public void Commit() {
        // push all in-memory changes to the database in one call
        _context.SaveChanges();
    }

    readonly ObjectContext _context;
    const string ConnectionStringName = "EmployeeDataModelContainer"; // name is an assumption
}
Bringing an IObjectSet<T> to life is as easy as invoking the CreateObjectSet method of the ObjectContext object.
Behind the scenes the framework will use the metadata we provided in the EDM to produce a concrete
ObjectSet<T>. We’ll stick with returning the IObjectSet<T> interface because it will help preserve testability in
client code.
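The elided property implementation might look like the following sketch:

```csharp
public IObjectSet<Employee> Employees {
    get { return _context.CreateObjectSet<Employee>(); }
}
```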
This concrete implementation is useful in production, but we need to focus on how we’ll use our IUnitOfWork
abstraction to facilitate testing.
The Test Doubles
To isolate the controller action we’ll need the ability to switch between the real unit of work (backed by an
ObjectContext) and a test double or “fake” unit of work (performing in-memory operations). The common
approach to perform this type of switching is to not let the MVC controller instantiate a unit of work, but instead
pass the unit of work into the controller as a constructor parameter.
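The constructor code referenced below was elided; a sketch:

```csharp
public class EmployeeController : Controller {
    // the unit of work is injected rather than instantiated here
    public EmployeeController(IUnitOfWork unitOfWork) {
        _unitOfWork = unitOfWork;
    }

    readonly IUnitOfWork _unitOfWork;
    // ... action methods
}
```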
The above code is an example of dependency injection. We don’t allow the controller to create its dependency (the
unit of work) but inject the dependency into the controller. In an MVC project it is common to use a custom
controller factory in combination with an inversion of control (IoC) container to automate dependency injection.
These topics are beyond the scope of this article, but you can read more by following the references at the end of
this article.
A fake unit of work implementation that we can use for testing might look like the following.
public class InMemoryUnitOfWork : IUnitOfWork {
    public InMemoryUnitOfWork() {
        Committed = false;
    }

    public IObjectSet<Employee> Employees { get; set; }

    public bool Committed { get; set; }

    public void Commit() {
        Committed = true; // just record that Commit was called
    }
}
Notice the fake unit of work exposes a Committed property. It’s sometimes useful to add features to a fake class
that facilitate testing. In this case it is easy to observe whether code commits a unit of work by checking the
Committed property.
We’ll also need a fake IObjectSet<T> to hold Employee and TimeCard objects in memory. We can provide a single
implementation using generics.
public class InMemoryObjectSet<T> : IObjectSet<T> where T : class {
    public InMemoryObjectSet()
        : this(Enumerable.Empty<T>()) {
    }

    public InMemoryObjectSet(IEnumerable<T> entities) {
        _set = new HashSet<T>();
        foreach (var entity in entities) {
            _set.Add(entity);
        }
        _queryableSet = _set.AsQueryable();
    }

    public void AddObject(T entity) {
        _set.Add(entity);
    }

    public void Attach(T entity) {
        _set.Add(entity);
    }

    public void DeleteObject(T entity) {
        _set.Remove(entity);
    }

    public void Detach(T entity) {
        _set.Remove(entity);
    }

    public Type ElementType {
        get { return _queryableSet.ElementType; }
    }

    public Expression Expression {
        get { return _queryableSet.Expression; }
    }

    public IQueryProvider Provider {
        get { return _queryableSet.Provider; }
    }

    public IEnumerator<T> GetEnumerator() {
        return _set.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator() {
        return GetEnumerator();
    }

    readonly HashSet<T> _set;
    readonly IQueryable<T> _queryableSet;
}
This test double delegates most of its work to an underlying HashSet<T> object. Note that IObjectSet<T> requires
a generic constraint enforcing T as a class (a reference type), and also forces us to implement IQueryable<T>. It is
easy to make an in-memory collection appear as an IQueryable<T> using the standard LINQ operator
AsQueryable.
The Tests
Traditional unit tests will use a single test class to hold all of the tests for all of the actions in a single MVC
controller. We can write these tests, or any type of unit test, using the in-memory fakes we’ve built. However, for
this article we will avoid the monolithic test class approach and instead group our tests to focus on a specific piece
of functionality. For example, “create new employee” might be the functionality we want to test, so we will use a
single test class to verify the single controller action responsible for creating a new employee.
There is some common setup code we need for all these fine grained test classes. For example, we always need to
create our in-memory repositories and fake unit of work. We also need an instance of the employee controller with
the fake unit of work injected. We’ll share this common setup code across test classes by using a base class.
public class EmployeeControllerTestBase {
    public EmployeeControllerTestBase() {
        _employeeData = EmployeeObjectMother.CreateEmployees()
                                            .ToList();
        _repository = new InMemoryObjectSet<Employee>(_employeeData);
        _unitOfWork = new InMemoryUnitOfWork();
        _unitOfWork.Employees = _repository;
        _controller = new EmployeeController(_unitOfWork);
    }

    protected IList<Employee> _employeeData;
    protected InMemoryObjectSet<Employee> _repository;
    protected InMemoryUnitOfWork _unitOfWork;
    protected EmployeeController _controller;
}
The “object mother” we use in the base class is one common pattern for creating test data. An object mother
contains factory methods to instantiate test entities for use across multiple test fixtures.
We can use the EmployeeControllerTestBase as the base class for a number of test fixtures (see figure 3). Each test
fixture will test a specific controller action. For example, one test fixture will focus on testing the Create action used
during an HTTP GET request (to display the view for creating an employee), and a different fixture will focus on the
Create action used in an HTTP POST request (to take information submitted by the user to create an employee).
Each derived class is only responsible for the setup needed in its specific context, and to provide the assertions
needed to verify the outcomes for its specific test context.
Figure 3
The naming convention and test style presented here isn’t required for testable code – it’s just one approach. Figure
4 shows the tests running in the JetBrains ReSharper test runner plugin for Visual Studio 2010.
Figure 4
With a base class to handle the shared setup code, the unit tests for each controller action are small and easy to
write. The tests will execute quickly (since we are performing in-memory operations), and shouldn’t fail because of
unrelated infrastructure or environmental concerns (because we’ve isolated the unit under test).
[TestClass]
public class EmployeeControllerCreateActionPostTests
: EmployeeControllerTestBase {
[TestMethod]
public void ShouldAddNewEmployeeToRepository() {
_controller.Create(_newEmployee);
Assert.IsTrue(_repository.Contains(_newEmployee));
}
[TestMethod]
public void ShouldCommitUnitOfWork() {
_controller.Create(_newEmployee);
Assert.IsTrue(_unitOfWork.Committed);
}
// ... more tests
}
In these tests, the base class does most of the setup work. Remember the base class constructor creates the in-
memory repository, a fake unit of work, and an instance of the EmployeeController class. The test class derives
from this base class and focuses on the specifics of testing the Create method. In this case the specifics boil down
to the “arrange, act, and assert” steps you’ll see in any unit testing procedure:
Create a newEmployee object to simulate incoming data.
Invoke the Create action of the EmployeeController and pass in the newEmployee.
Verify the Create action produces the expected results (the employee appears in the repository).
What we’ve built allows us to test any of the EmployeeController actions. For example, when we write tests for the
Index action of the Employee controller we can inherit from the test base class to establish the same base setup for
our tests. Again the base class will create the in-memory repository, the fake unit of work, and an instance of the
EmployeeController. The tests for the Index action only need to focus on invoking the Index action and testing the
qualities of the model the action returns.
[TestClass]
public class EmployeeControllerIndexActionTests
: EmployeeControllerTestBase {
[TestMethod]
public void ShouldBuildModelWithAllEmployees() {
var result = _controller.Index();
var model = result.ViewData.Model
as IEnumerable<Employee>;
Assert.IsTrue(model.Count() == _employeeData.Count);
}
[TestMethod]
public void ShouldOrderModelByHiredateAscending() {
var result = _controller.Index();
var model = result.ViewData.Model
as IEnumerable<Employee>;
Assert.IsTrue(model.SequenceEqual(
_employeeData.OrderBy(e => e.HireDate)));
}
// ...
}
The tests we are creating with in-memory fakes are oriented towards testing the state of the software. For example,
when testing the Create action we want to inspect the state of the repository after the create action executes – does
the repository hold the new employee?
[TestMethod]
public void ShouldAddNewEmployeeToRepository() {
_controller.Create(_newEmployee);
Assert.IsTrue(_repository.Contains(_newEmployee));
}
Later we’ll look at interaction based testing. Interaction based testing will ask if the code under test invoked the
proper methods on our objects and passed the correct parameters. For now we’ll move on to cover another
design pattern – the lazy load.
Note that the EmployeeSummaryViewModel is not an entity – in other words it is not something we want to
persist in the database. We are only going to use this class to shuffle data into the view in a strongly typed manner.
The view model is like a data transfer object (DTO) because it contains no behavior (no methods) – only properties.
The properties will hold the data we need to move. It is easy to instantiate this view model using LINQ’s standard
projection operator – the Select operator.
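The elided view model and projection might look like the following sketch, consistent with the Summary tests and the generated SQL shown below (the TotalTimeCards member comes from the test; the remaining names are assumptions):

```csharp
public class EmployeeSummaryViewModel {
    public string Name { get; set; }
    public int TotalTimeCards { get; set; }
}

public ViewResult Summary(int id) {
    var model = _unitOfWork.Employees
                           .Where(e => e.Id == id)
                           .Select(e => new EmployeeSummaryViewModel {
                               Name = e.Name,
                               TotalTimeCards = e.TimeCards.Count()
                           })
                           .Single();
    return View(model);
}
```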
There are two notable features to the above code. First – the code is easy to test because it is still easy to observe
and isolate. The Select operator works just as well against our in-memory fakes as it does against the real unit of
work.
[TestClass]
public class EmployeeControllerSummaryActionTests
: EmployeeControllerTestBase {
[TestMethod]
public void ShouldBuildModelWithCorrectEmployeeSummary() {
var id = 1;
var result = _controller.Summary(id);
var model = result.ViewData.Model as EmployeeSummaryViewModel;
Assert.IsTrue(model.TotalTimeCards == 3);
}
// ...
}
The second notable feature is how the code allows EF4 to generate a single, efficient query to assemble employee
and time card information together. We’ve loaded employee information and time card information into the same
object without using any special APIs. The code merely expressed the information it requires using standard LINQ
operators that work against in-memory data sources as well as remote data sources. EF4 was able to translate the
expression trees generated by the LINQ query and C# compiler into a single and efficient T-SQL query.
SELECT
[Limit1].[Id] AS [Id],
[Limit1].[Name] AS [Name],
[Limit1].[C1] AS [C1]
FROM (SELECT TOP (2)
[Project1].[Id] AS [Id],
[Project1].[Name] AS [Name],
[Project1].[C1] AS [C1]
FROM (SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Name] AS [Name],
(SELECT COUNT(1) AS [A1]
FROM [dbo].[TimeCards] AS [Extent2]
WHERE [Extent1].[Id] =
[Extent2].[EmployeeTimeCard_TimeCard_Id]) AS [C1]
FROM [dbo].[Employees] AS [Extent1]
WHERE [Extent1].[Id] = @p__linq__0
) AS [Project1]
) AS [Limit1]
There are other times when we don’t want to work with a view model or DTO object, but with real entities. When
we know we need an employee and the employee’s time cards, we can eagerly load the related data in an
unobtrusive and efficient manner.
Explicit Eager Loading
When we want to eagerly load related entity information we need some mechanism for business logic (or in this
scenario, controller action logic) to express its desire to the repository. The EF4 ObjectQuery<T> class defines an
Include method to specify the related objects to retrieve during a query. Remember the EF4 ObjectContext exposes
entities via the concrete ObjectSet<T> class which inherits from ObjectQuery<T>. If we were using ObjectSet<T>
references in our controller action we could write the following code to specify an eager load of time card
information for each employee.
_employees.Include("TimeCards")
.Where(e => e.HireDate.Year > 2009);
However, since we are trying to keep our code testable we are not exposing ObjectSet<T> from outside the real
unit of work class. Instead, we rely on the IObjectSet<T> interface which is easier to fake, but IObjectSet<T> does
not define an Include method. The beauty of LINQ is that we can create our own Include operator.
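The extension method did not survive extraction; a sketch of such an Include operator (the static class name is an assumption):

```csharp
using System.Data.Objects;
using System.Linq;

public static class QueryableExtensions {
    public static IQueryable<T> Include<T>(
            this IQueryable<T> sequence, string path) {
        // only a real EF4 ObjectQuery<T> understands Include;
        // for any other sequence this is a harmless no-op
        var objectQuery = sequence as ObjectQuery<T>;
        if (objectQuery != null) {
            return objectQuery.Include(path);
        }
        return sequence;
    }
}
```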
Notice this Include operator is defined as an extension method for IQueryable<T> instead of IObjectSet<T>. This
gives us the ability to use the method with a wider range of possible types, including IQueryable<T>,
IObjectSet<T>, ObjectQuery<T>, and ObjectSet<T>. In the event the underlying sequence is not a genuine EF4
ObjectQuery<T>, then there is no harm done and the Include operator is a no-op. If the underlying sequence is an
ObjectQuery<T> (or derived from ObjectQuery<T>), then EF4 will see our requirement for additional data and
formulate the proper SQL query.
With this new operator in place we can explicitly request an eager load of time card information from the
repository.
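The elided request might look like this sketch (assuming the _unitOfWork field from earlier):

```csharp
var employees = _unitOfWork.Employees
                           .Include("TimeCards")
                           .OrderBy(e => e.HireDate);
```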
When run against a real ObjectContext, the code produces the following single query. The query gathers enough
information from the database in one trip to materialize the employee objects and fully populate their TimeCards
property.
SELECT
[Project1].[Id] AS [Id],
[Project1].[Name] AS [Name],
[Project1].[HireDate] AS [HireDate],
[Project1].[C1] AS [C1],
[Project1].[Id1] AS [Id1],
[Project1].[Hours] AS [Hours],
[Project1].[EffectiveDate] AS [EffectiveDate],
[Project1].[EmployeeTimeCard_TimeCard_Id] AS [EmployeeTimeCard_TimeCard_Id]
FROM ( SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Name] AS [Name],
[Extent1].[HireDate] AS [HireDate],
[Extent2].[Id] AS [Id1],
[Extent2].[Hours] AS [Hours],
[Extent2].[EffectiveDate] AS [EffectiveDate],
[Extent2].[EmployeeTimeCard_TimeCard_Id] AS
[EmployeeTimeCard_TimeCard_Id],
CASE WHEN ([Extent2].[Id] IS NULL) THEN CAST(NULL AS int)
ELSE 1 END AS [C1]
FROM [dbo].[Employees] AS [Extent1]
LEFT OUTER JOIN [dbo].[TimeCards] AS [Extent2] ON [Extent1].[Id] = [Extent2].
[EmployeeTimeCard_TimeCard_Id]
) AS [Project1]
ORDER BY [Project1].[HireDate] ASC,
[Project1].[Id] ASC, [Project1].[C1] ASC
The great news is the code inside the action method remains fully testable. We don’t need to provide any additional
features for our fakes to support the Include operator. The bad news is we had to use the Include operator inside of
the code we wanted to keep persistence ignorant. This is a prime example of the type of tradeoffs you’ll need to
evaluate when building testable code. There are times when you need to let persistence concerns leak outside the
repository abstraction to meet performance goals.
The alternative to eager loading is lazy loading. Lazy loading means we do not need our business code to explicitly
announce the requirement for associated data. Instead, we use our entities in the application and if additional data
is needed Entity Framework will load the data on demand.
Lazy Loading
It’s easy to imagine a scenario where we don’t know what data a piece of business logic will need. We might know
the logic needs an employee object, but we may branch into different execution paths where some of those paths
require time card information from the employee, and some do not. Scenarios like this are perfect for implicit lazy
loading because data magically appears on an as-needed basis.
Lazy loading, also known as deferred loading, does place some requirements on our entity objects. POCOs with
true persistence ignorance would not face any requirements from the persistence layer, but true persistence
ignorance is practically impossible to achieve. Instead we measure persistence ignorance in relative degrees. It
would be unfortunate if we needed to inherit from a persistence oriented base class or use a specialized collection
to achieve lazy loading in POCOs. Fortunately, EF4 has a less intrusive solution.
Virtually Undetectable
When using POCO objects, EF4 can dynamically generate runtime proxies for entities. These proxies invisibly wrap
the materialized POCOs and provide additional services by intercepting each property get and set operation to
perform additional work. One such service is the lazy loading feature we are looking for. Another service is an
efficient change tracking mechanism which can record when the program changes the property values of an entity.
The list of changes is used by the ObjectContext during the SaveChanges method to persist any modified entities
using UPDATE commands.
For these proxies to work, however, they need a way to hook into property get and set operations on an entity, and
the proxies achieve this goal by overriding virtual members. Thus, if we want to have implicit lazy loading and
efficient change tracking we need to go back to our POCO class definitions and mark properties as virtual.
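The reworked POCO (elided above) simply marks every member virtual so the runtime proxy can override it; a sketch:

```csharp
using System;
using System.Collections.Generic;

public class Employee {
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual DateTime HireDate { get; set; }
    public virtual ICollection<TimeCard> TimeCards { get; set; }
}
```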
We can still say the Employee entity is mostly persistence ignorant. The only requirement is to use virtual members
and this does not impact the testability of the code. We don’t need to derive from any special base class, or even
use a special collection dedicated to lazy loading. As the code demonstrates, any class implementing
ICollection<T> is available to hold related entities.
There is also one minor change we need to make inside our unit of work. Lazy loading is off by default when
working directly with an ObjectContext object. There is a property we can set on the ContextOptions property to
enable deferred loading, and we can set this property inside our real unit of work if we want to enable lazy loading
everywhere.
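The elided code is a one-line change in the SqlUnitOfWork constructor; the property in question is ContextOptions.LazyLoadingEnabled:

```csharp
_context = new ObjectContext(connectionString);
_context.ContextOptions.LazyLoadingEnabled = true;
```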
With implicit lazy loading enabled, application code can use an employee and the employee’s associated time cards
while remaining blissfully unaware of the work required for EF to load the extra data.
Lazy loading makes the application code easier to write, and with the proxy magic the code remains completely
testable. In-memory fakes of the unit of work can simply preload fake entities with associated data when needed
during a test.
At this point we’ll turn our attention from building repositories using IObjectSet<T> and look at abstractions to
hide all signs of the persistence framework.
Custom Repositories
When we first presented the unit of work design pattern in this article we provided some sample code for what the
unit of work might look like. Let’s re-present this original idea using the employee and employee time card
scenario we’ve been working with.
Notice we’ll drop back to using an IQueryable<T> interface to expose entity collections. IQueryable<T> allows
LINQ expression trees to flow into the EF4 provider and give the provider a holistic view of the query. A second
option would be to return IEnumerable<T>, which means the EF4 LINQ provider will only see the expressions
built inside of the repository. Any grouping, ordering, and projection done outside of the repository will not be
composed into the SQL command sent to the database, which can hurt performance. On the other hand, a
repository returning only IEnumerable<T> results will never surprise you with a new SQL command. Both
approaches will work, and both approaches remain testable.
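The repository interface itself isn't reproduced in this excerpt. A sketch consistent with the surrounding discussion (IQueryable results, plus the FindAll, FindById, and Add members the later tests invoke) might look like the following; the exact member list is an assumption:

```csharp
public interface IRepository<T> where T : class, IEntity
{
    // IQueryable lets callers compose additional LINQ operators
    // that the EF4 provider can fold into the SQL command.
    IQueryable<T> FindAll();
    T FindById(int id);
    void Add(T newEntity);
    void Remove(T entity);
}
```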
It’s straightforward to provide a single implementation of the IRepository<T> interface using generics and the EF4
ObjectContext API.
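Such an implementation might look like the sketch below, built on CreateObjectSet&lt;T&gt; from the EF4 ObjectContext API; the SqlRepository name is illustrative:

```csharp
public class SqlRepository<T> : IRepository<T>
    where T : class, IEntity
{
    public SqlRepository(ObjectContext context)
    {
        _objectSet = context.CreateObjectSet<T>();
    }

    public IQueryable<T> FindAll()
    {
        return _objectSet;
    }

    public T FindById(int id)
    {
        // The IEntity constraint guarantees a readable Id property.
        return _objectSet.Single(o => o.Id == id);
    }

    public void Add(T newEntity)
    {
        _objectSet.AddObject(newEntity);
    }

    public void Remove(T entity)
    {
        _objectSet.DeleteObject(entity);
    }

    private readonly ObjectSet<T> _objectSet;
}
```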
The IRepository<T> approach gives us some additional control over our queries because a client has to invoke a
method to get to an entity. Inside the method we could provide additional checks and LINQ operators to enforce
application constraints. Notice the interface has two constraints on the generic type parameter. The first constraint
is the class constraint required by ObjectSet&lt;T&gt;, and the second constraint forces our entities to implement IEntity
– an abstraction created for the application. The IEntity interface forces entities to have a readable Id property, and
we can then use this property in the FindById method. IEntity is defined with the following code.
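A minimal definition matching that description could be as simple as the following; the int key type is an assumption consistent with the FindById(int) usage in this article:

```csharp
public interface IEntity
{
    // A readable key the repository can use in FindById.
    int Id { get; }
}
```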
IEntity could be considered a small violation of persistence ignorance since our entities are required to implement
this interface. Remember persistence ignorance is about tradeoffs, and for many the FindById functionality will
outweigh the constraint imposed by the interface. The interface has no impact on testability.
Instantiating a live IRepository<T> requires an EF4 ObjectContext, so a concrete unit of work implementation
should manage the instantiation.
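One way the unit of work might hand out live repositories is sketched below; the lazy construction and member names are illustrative, while SaveChanges is the real ObjectContext API:

```csharp
public class SqlUnitOfWork : IUnitOfWork
{
    public SqlUnitOfWork(ObjectContext context)
    {
        _context = context;
    }

    public IRepository<Employee> Employees
    {
        get
        {
            // Construct the repository on first use and reuse it
            // for the lifetime of the unit of work.
            return _employees ??
                   (_employees = new SqlRepository<Employee>(_context));
        }
    }

    public void Commit()
    {
        // Push all recorded changes to the database.
        _context.SaveChanges();
    }

    private SqlRepository<Employee> _employees;
    private readonly ObjectContext _context;
}
```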
Notice the custom Include operator we implemented previously will work without change. The repository’s
FindById method removes duplicated logic from actions trying to retrieve a single entity.
There is no significant difference in the testability of the two approaches we’ve examined. We could provide fake
implementations of IRepository<T> by building concrete classes backed by HashSet<Employee> - just like what
we did in the last section. However, some developers prefer to use mock objects and mock object frameworks
instead of building fakes. We’ll look at using mocks to test our implementation and discuss the differences between
mocks and fakes in the next section.
Testing with Mocks
There are different approaches to building what Martin Fowler calls a “test double”. A test double (like a movie
stunt double) is an object you build to “stand in” for real, production objects during tests. The in-memory
repositories we created are test doubles for the repositories that talk to SQL Server. We’ve seen how to use these
test-doubles during the unit tests to isolate code and keep tests running fast.
The test doubles we’ve built have real, working implementations. Behind the scenes each one stores a concrete
collection of objects, and they will add and remove objects from this collection as we manipulate the repository
during a test. Some developers like to build their test doubles this way – with real code and working
implementations. These test doubles are what we call fakes. They have working implementations, but they aren’t
real enough for production use. The fake repository doesn’t actually write to the database. The fake SMTP server
doesn’t actually send an email message over the network.
Mocks versus Fakes
There is another type of test double known as a mock. While fakes have working implementations, mocks come
with no implementation. With the help of a mock object framework we construct these mock objects at run time
and use them as test doubles. In this section we’ll be using the open source mocking framework Moq. Here is a
simple example of using Moq to dynamically create a test double for an employee repository.
Mock&lt;IRepository&lt;Employee&gt;&gt; mock =
    new Mock&lt;IRepository&lt;Employee&gt;&gt;();
IRepository&lt;Employee&gt; repository = mock.Object;

repository.Add(new Employee());
var employee = repository.FindById(1);
We ask Moq for an IRepository<Employee> implementation and it builds one dynamically. We can get to the
object implementing IRepository<Employee> by accessing the Object property of the Mock<T> object. It is this
inner object we can pass into our controllers, and they won’t know if this is a test double or the real repository. We
can invoke methods on the object just like we would invoke methods on an object with a real implementation.
You must be wondering what the mock repository will do when we invoke the Add method. Since there is no
implementation behind the mock object, Add does nothing. There is no concrete collection behind the scenes like
we had with the fakes we wrote, so the employee is discarded. What about the return value of FindById? In this
case the mock object does the only thing it can do, which is return a default value. Since we are returning a
reference type (an Employee), the return value is a null value.
Mocks might sound worthless; however, there are two more features of mocks we haven’t talked about. First, the
Moq framework records all the calls made on the mock object. Later in the code we can ask Moq if anyone invoked
the Add method, or if anyone invoked the FindById method. We’ll see later how we can use this “black box”
recording feature in tests.
The second great feature is how we can use Moq to program a mock object with expectations. An expectation tells
the mock object how to respond to any given interaction. For example, we can program an expectation into our
mock and tell it to return an employee object when someone invokes FindById. The Moq framework uses a Setup
API and lambda expressions to program these expectations.
[TestMethod]
public void MockSample() {
    Mock&lt;IRepository&lt;Employee&gt;&gt; mock =
        new Mock&lt;IRepository&lt;Employee&gt;&gt;();

    mock.Setup(m =&gt; m.FindById(5))
        .Returns(new Employee { Id = 5 });

    IRepository&lt;Employee&gt; repository = mock.Object;
    var employee = repository.FindById(5);

    Assert.IsTrue(employee.Id == 5);
}
In this sample we ask Moq to dynamically build a repository, and then we program the repository with an
expectation. The expectation tells the mock object to return a new employee object with an Id value of 5 when
someone invokes the FindById method passing a value of 5. This test passes, and we didn’t need to build a full
implementation to fake IRepository<T>.
Let’s revisit the tests we wrote earlier and rework them to use mocks instead of fakes. Just like before, we’ll use a
base class to setup the common pieces of infrastructure we need for all of the controller’s tests.
The setup code remains mostly the same. Instead of using fakes, we’ll use Moq to construct mock objects. The base
class arranges for the mock unit of work to return a mock repository when code invokes the Employees property.
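A sketch of what that base class setup might look like with Moq — the field and class names follow the tests shown in this section, while the sample employee data is illustrative:

```csharp
public class EmployeeControllerTestBase
{
    public EmployeeControllerTestBase()
    {
        _employeeData = new List<Employee>
        {
            new Employee { Id = 1 },
            new Employee { Id = 2 }
        }.AsQueryable();

        _repository = new Mock<IRepository<Employee>>();
        _unitOfWork = new Mock<IUnitOfWork>();

        // Whenever the controller asks the unit of work for the
        // Employees repository, hand back the mock repository.
        _unitOfWork.Setup(u => u.Employees)
                   .Returns(_repository.Object);

        _controller = new EmployeeController(_unitOfWork.Object);
    }

    protected IQueryable<Employee> _employeeData;
    protected Mock<IUnitOfWork> _unitOfWork;
    protected Mock<IRepository<Employee>> _repository;
    protected EmployeeController _controller;
}
```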
The rest of the mock setup will take place inside the test fixtures dedicated to each specific scenario. For example,
the test fixture for the Index action will setup the mock repository to return a list of employees when the action
invokes the FindAll method of the mock repository.
[TestClass]
public class EmployeeControllerIndexActionTests
    : EmployeeControllerTestBase {

    public EmployeeControllerIndexActionTests() {
        _repository.Setup(r =&gt; r.FindAll())
                   .Returns(_employeeData);
    }

    // .. tests

    [TestMethod]
    public void ShouldBuildModelWithAllEmployees() {
        var result = _controller.Index();
        var model = result.ViewData.Model
                        as IEnumerable&lt;Employee&gt;;
        Assert.IsTrue(model.Count() == _employeeData.Count());
    }

    // .. and more tests
}
Except for the expectations, our tests look similar to the tests we had before. However, with the recording ability of
a mock framework we can approach testing from a different angle. We’ll look at this new perspective in the next
section.
State versus Interaction Testing
There are different techniques you can use to test software with mock objects. One approach is to use state based
testing, which is what we have done in this paper so far. State based testing makes assertions about the state of the
software. In the last test we invoked an action method on the controller and made an assertion about the model it
should build. Here are some other examples of testing state:
Verify the repository contains the new employee object after Create executes.
Verify the model holds a list of all employees after Index executes.
Verify the repository does not contain a given employee after Delete executes.
Another approach you’ll see with mock objects is to verify interactions. While state based testing makes assertions
about the state of objects, interaction based testing makes assertions about how objects interact. For example:
Verify the controller invokes the repository’s Add method when Create executes.
Verify the controller invokes the repository’s FindAll method when Index executes.
Verify the controller invokes the unit of work’s Commit method to save changes when Edit executes.
Interaction testing often requires less test data, because we aren’t poking inside of collections and verifying counts.
For example, if we know the Details action invokes a repository’s FindById method with the correct value - then the
action is probably behaving correctly. We can verify this behavior without setting up any test data to return from
FindById.
[TestClass]
public class EmployeeControllerDetailsActionTests
    : EmployeeControllerTestBase {
    // ...

    [TestMethod]
    public void ShouldInvokeRepositoryToFindEmployee() {
        var result = _controller.Details(_detailsId);
        _repository.Verify(r =&gt; r.FindById(_detailsId));
    }

    int _detailsId = 1;
}
The only setup required in the above test fixture is the setup provided by the base class. When we invoke the
controller action, Moq will record the interactions with the mock repository. Using the Verify API of Moq, we can
ask Moq if the controller invoked FindById with the proper ID value. If the controller did not invoke the method, or
invoked the method with an unexpected parameter value, the Verify method will throw an exception and the test
will fail.
Here is another example to verify the Create action invokes Commit on the current unit of work.
[TestMethod]
public void ShouldCommitUnitOfWork() {
    _controller.Create(_newEmployee);
    _unitOfWork.Verify(u =&gt; u.Commit());
}
One danger with interaction testing is the tendency to over specify interactions. The ability of the mock object to
record and verify every interaction with the mock object doesn’t mean the test should try to verify every
interaction. Some interactions are implementation details and you should only verify the interactions required to
satisfy the current test.
The choice between mocks or fakes largely depends on the system you are testing and your personal (or team)
preferences. Mock objects can drastically reduce the amount of code you need to implement test doubles, but not
everyone is comfortable programming expectations and verifying interactions.
Conclusions
In this paper we’ve demonstrated several approaches to creating testable code while using the ADO.NET Entity
Framework for data persistence. We can leverage built-in abstractions like IObjectSet&lt;T&gt;, or create our own
abstractions like IRepository&lt;T&gt;. In both cases, the POCO support in the ADO.NET Entity Framework 4.0 allows
the consumers of these abstractions to remain persistence ignorant and highly testable. Additional EF4 features like
implicit lazy loading allow business and application service code to work without worrying about the details of a
relational data store. Finally, the abstractions we create are easy to mock or fake inside of unit tests, and we can use
these test doubles to achieve fast running, highly isolated, and reliable tests.
Additional Resources
Robert C. Martin, “The Single Responsibility Principle”
Martin Fowler, Catalog of Patterns from Patterns of Enterprise Application Architecture
Griffin Caprio, “Dependency Injection”
Data Programmability Blog, “Walkthrough: Test Driven Development with the Entity Framework 4.0”
Data Programmability Blog, “Using Repository and Unit of Work patterns with Entity Framework 4.0”
Dave Astels, “BDD Intro”
Aaron Jensen, “Introducing Machine Specifications”
Eric Lee, “BDD with MSTest”
Eric Evans, “Domain Driven Design”
Martin Fowler, “Mocks Aren’t Stubs”
Martin Fowler, “Test Double”
Jeremy Miller, “State versus Interaction Testing”
Moq
Biography
Scott Allen is a member of the technical staff at Pluralsight and the founder of OdeToCode.com. In 15 years of
commercial software development, Scott has worked on solutions for everything from 8-bit embedded devices to
highly scalable ASP.NET web applications. You can reach Scott on his blog at OdeToCode, or on Twitter at
http://twitter.com/OdeToCode.
Creating a Model
9/18/2018 • 2 minutes to read
An EF model stores the details about how application classes and properties map to database tables and columns.
There are two main ways to create an EF model:
Using Code First: The developer writes code to specify the model. EF generates the models and mappings
at runtime based on entity classes and additional model configuration provided by the developer.
Using the EF Designer: The developer draws boxes and lines to specify the model using the EF Designer.
The resulting model is stored as XML in a file with the EDMX extension. The application's domain objects
are typically generated automatically from the conceptual model.
EF workflows
Both of these approaches can be used to target an existing database or create a new database, resulting in 4
different workflows. Find out about which one is best for you:
I am creating a new database: use Code First to define your model in code and then generate the database, or
use Model First to define your model using boxes and lines and then generate the database.
I need to access an existing database: use Code First to create a code-based model that maps to the existing
database, or use Database First to create a boxes-and-lines model that maps to the existing database.
This video and step-by-step walkthrough provide an introduction to Code First development targeting a new
database. This scenario covers both a database that doesn’t exist yet, which Code First will create, and an empty
database to which Code First will add new tables. Code First allows you to define your model using C# or VB.NET
classes. Additional configuration can optionally be performed using attributes on your classes and properties or
by using a fluent API.
Pre-Requisites
You will need to have at least Visual Studio 2010 or Visual Studio 2012 installed to complete this walkthrough.
If you are using Visual Studio 2010, you will also need to have NuGet installed.
You’ll notice that we’re making the two navigation properties (Blog.Posts and Post.Blog) virtual. This enables the
Lazy Loading feature of Entity Framework. Lazy Loading means that the contents of these properties will be
automatically loaded from the database when you try to access them.
3. Create a Context
Now it’s time to define a derived context, which represents a session with the database, allowing us to query and
save data. We define a context that derives from System.Data.Entity.DbContext and exposes a typed
DbSet<TEntity> for each class in our model.
We’re now starting to use types from the Entity Framework so we need to add the EntityFramework NuGet
package.
Project –> Manage NuGet Packages… Note: If you don’t have the Manage NuGet Packages… option
you should install the latest version of NuGet
Select the Online tab
Select the EntityFramework package
Click Install
Add a using statement for System.Data.Entity at the top of Program.cs.
using System.Data.Entity;
Below the Post class in Program.cs add the following derived context.
namespace CodeFirstNewDatabaseSample
{
    class Program
    {
        static void Main(string[] args)
        {
        }
    }

    public class BloggingContext : DbContext
    {
        public DbSet&lt;Blog&gt; Blogs { get; set; }
        public DbSet&lt;Post&gt; Posts { get; set; }
    }
}
That is all the code we need to start storing and retrieving data. Obviously there is quite a bit going on behind the
scenes and we’ll take a look at that in a moment but first let’s see it in action.
Where’s My Data?
By convention DbContext has created a database for you.
If a local SQL Express instance is available (installed by default with Visual Studio 2010) then Code First has
created the database on that instance
If SQL Express isn’t available then Code First will try and use LocalDB (installed by default with Visual Studio
2012)
The database is named after the fully qualified name of the derived context, in our case that is
CodeFirstNewDatabaseSample.BloggingContext
These are just the default conventions and there are various ways to change the database that Code First uses,
more information is available in the How DbContext Discovers the Model and Database Connection topic.
You can connect to this database using Server Explorer in Visual Studio
View -> Server Explorer
Right click on Data Connections and select Add Connection…
If you haven’t connected to a database from Server Explorer before you’ll need to select Microsoft SQL
Server as the data source
Connect to either LocalDB or SQL Express, depending on which one you have installed
We can now inspect the schema that Code First created.
DbContext worked out what classes to include in the model by looking at the DbSet properties that we defined. It
then uses the default set of Code First conventions to determine table and column names, determine data types,
find primary keys, etc. Later in this walkthrough we’ll look at how you can override these conventions.
Run the Add-Migration AddUrl command in Package Manager Console. The Add-Migration command
checks for changes since your last migration and scaffolds a new migration with any changes that are found.
We can give migrations a name; in this case we are calling the migration ‘AddUrl’. The scaffolded code is
saying that we need to add a Url column, that can hold string data, to the dbo.Blogs table. If needed, we could
edit the scaffolded code but that’s not required in this case.
namespace CodeFirstNewDatabaseSample.Migrations
{
    using System;
    using System.Data.Entity.Migrations;

    public partial class AddUrl : DbMigration
    {
        public override void Up()
        {
            AddColumn("dbo.Blogs", "Url", c =&gt; c.String());
        }

        public override void Down()
        {
            DropColumn("dbo.Blogs", "Url");
        }
    }
}
Run the Update-Database command in Package Manager Console. This command will apply any pending
migrations to the database. Our InitialCreate migration has already been applied, so migrations will just apply
our new AddUrl migration. Tip: You can use the -Verbose switch when calling Update-Database to see the
SQL that is being executed against the database.
The new Url column is now added to the Blogs table in the database:
6. Data Annotations
So far we’ve just let EF discover the model using its default conventions, but there are going to be times when our
classes don’t follow the conventions and we need to be able to perform further configuration. There are two
options for this; we’ll look at Data Annotations in this section and then the fluent API in the next section.
Let’s add a User class to our model
If we tried to add a migration we’d get an error saying “EntityType ‘User’ has no key defined. Define the key for
this EntityType.” because EF has no way of knowing that Username should be the primary key for User.
We’re going to use Data Annotations now so we need to add a using statement at the top of Program.cs
using System.ComponentModel.DataAnnotations;
Now annotate the Username property to identify that it is the primary key
Use the Add-Migration AddUser command to scaffold a migration to apply these changes to the database
Run the Update-Database command to apply the new migration to the database
The new table is now added to the database:
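For reference, the annotated class might look like this (a sketch: the Key annotation on Username follows the discussion above, and DisplayName is the property used in the fluent API section later):

```csharp
public class User
{
    // Username does not follow the "Id" naming convention, so the
    // Key annotation tells EF it is the primary key.
    [Key]
    public string Username { get; set; }

    public string DisplayName { get; set; }
}
```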
The full list of annotations supported by EF is:
KeyAttribute
StringLengthAttribute
MaxLengthAttribute
ConcurrencyCheckAttribute
RequiredAttribute
TimestampAttribute
ComplexTypeAttribute
ColumnAttribute
TableAttribute
InversePropertyAttribute
ForeignKeyAttribute
DatabaseGeneratedAttribute
NotMappedAttribute
7. Fluent API
In the previous section we looked at using Data Annotations to supplement or override what was detected by
convention. The other way to configure the model is via the Code First fluent API.
Most model configuration can be done using simple data annotations. The fluent API is a more advanced way of
specifying model configuration that covers everything that data annotations can do in addition to some more
advanced configuration not possible with data annotations. Data annotations and the fluent API can be used
together.
To access the fluent API you override the OnModelCreating method in DbContext. Let’s say we wanted to
rename the column that User.DisplayName is stored in to display_name.
Override the OnModelCreating method on BloggingContext with the following code
public class BloggingContext : DbContext
{
    public DbSet&lt;Blog&gt; Blogs { get; set; }
    public DbSet&lt;Post&gt; Posts { get; set; }
    public DbSet&lt;User&gt; Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity&lt;User&gt;()
            .Property(u =&gt; u.DisplayName)
            .HasColumnName("display_name");
    }
}
Use the Add-Migration ChangeDisplayName command to scaffold a migration to apply these changes to
the database.
Run the Update-Database command to apply the new migration to the database.
The DisplayName column is now renamed to display_name:
Summary
In this walkthrough we looked at Code First development using a new database. We defined a model using
classes then used that model to create a database and store and retrieve data. Once the database was created we
used Code First Migrations to change the schema as our model evolved. We also saw how to configure a model
using Data Annotations and the Fluent API.
Code First to an Existing Database
9/13/2018 • 5 minutes to read
This video and step-by-step walkthrough provide an introduction to Code First development targeting an existing
database. Code First allows you to define your model using C# or VB.NET classes. Optionally, additional
configuration can be performed using attributes on your classes and properties or by using a fluent API.
Pre-Requisites
You will need to have Visual Studio 2012 or Visual Studio 2013 installed to complete this walkthrough.
You will also need version 6.1 (or later) of the Entity Framework Tools for Visual Studio installed. See Get
Entity Framework for information on installing the latest version of the Entity Framework Tools.
Connect to your LocalDB instance, and enter Blogging as the database name
Select OK and you will be asked if you want to create a new database, select Yes
The new database will now appear in Server Explorer, right-click on it and select New Query
Copy the following SQL into the new query, then right-click on the query and select Execute
CREATE TABLE [dbo].[Blogs] (
    [BlogId] INT IDENTITY (1, 1) NOT NULL,
    [Name] NVARCHAR (200) NULL,
    [Url] NVARCHAR (200) NULL,
    CONSTRAINT [PK_dbo.Blogs] PRIMARY KEY CLUSTERED ([BlogId] ASC)
);
Click the checkbox next to Tables to import all tables and click Finish
Once the reverse engineer process completes a number of items will have been added to the project, let's take a
look at what's been added.
Configuration file
An App.config file has been added to the project, this file contains the connection string to the existing database.
&lt;connectionStrings&gt;
  &lt;add
    name="BloggingContext"
    connectionString="data source=(localdb)\mssqllocaldb;initial catalog=Blogging;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework"
    providerName="System.Data.SqlClient" /&gt;
&lt;/connectionStrings&gt;
You’ll notice some other settings in the configuration file too; these are default EF settings that tell Code First
where to create databases. Since we are mapping to an existing database, these settings will be ignored in our
application.
Derived Context
A BloggingContext class has been added to the project. The context represents a session with the database,
allowing us to query and save data. The context exposes a DbSet<TEntity> for each type in our model. You’ll
also notice that the default constructor calls a base constructor using the name= syntax. This tells Code First that
the connection string to use for this context should be loaded from the configuration file.
public partial class BloggingContext : DbContext
{
    public BloggingContext()
        : base("name=BloggingContext")
    {
    }

    public virtual DbSet&lt;Blog&gt; Blogs { get; set; }
    public virtual DbSet&lt;Post&gt; Posts { get; set; }
}
You should always use the name= syntax when you are using a connection string in the config file. This ensures
that if the connection string is not present then Entity Framework will throw rather than creating a new database
by convention.
Model classes
Finally, a Blog and Post class have also been added to the project. These are the domain classes that make up
our model. You'll see Data Annotations applied to the classes to specify configuration where the Code First
conventions would not align with the structure of the existing database. For example, you'll see the StringLength
annotation on Blog.Name and Blog.Url since they have a maximum length of 200 in the database (the Code
First default is to use the maximum length supported by the database provider - nvarchar(max) in SQL Server).
[StringLength(200)]
public string Name { get; set; }
[StringLength(200)]
public string Url { get; set; }
Summary
In this walkthrough we looked at Code First development using an existing database. We used the Entity
Framework Tools for Visual Studio to reverse engineer a set of classes that mapped to the database and could be
used to store and retrieve data.
Code First Data Annotations
3/6/2019 • 16 minutes to read
NOTE
EF4.1 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 4.1. If you are
using an earlier version, some or all of this information does not apply.
The content on this page is adapted from an article originally written by Julie Lerman
(<http://thedatafarm.com>).
Entity Framework Code First allows you to use your own domain classes to represent the model that EF relies on
to perform querying, change tracking, and updating functions. Code First leverages a programming pattern
referred to as 'convention over configuration.' Code First will assume that your classes follow the conventions of
Entity Framework, and in that case, will automatically work out how to perform its job. However, if your classes
do not follow those conventions, you have the ability to add configurations to your classes to provide EF with the
requisite information.
Code First gives you two ways to add these configurations to your classes. One is using simple attributes called
DataAnnotations, and the second is using Code First’s Fluent API, which provides you with a way to describe
configurations imperatively, in code.
This article will focus on using DataAnnotations (in the System.ComponentModel.DataAnnotations namespace)
to configure your classes – highlighting the most commonly needed configurations. DataAnnotations are also
understood by a number of .NET applications, such as ASP.NET MVC which allows these applications to leverage
the same annotations for client-side validations.
The model
I’ll demonstrate Code First DataAnnotations with a simple pair of classes: Blog and Post.
As they are, the Blog and Post classes conveniently follow Code First conventions and require no tweaks to enable
EF compatibility. However, you can also use the annotations to provide more information to EF about the classes
and the database to which they map.
Key
Entity Framework relies on every entity having a key value that is used for entity tracking. One convention of
Code First is implicit key properties; Code First will look for a property named “Id”, or a combination of class
name and “Id”, such as “BlogId”. This property will map to a primary key column in the database.
The Blog and Post classes both follow this convention. What if they didn’t? What if Blog used the name
PrimaryTrackingKey instead, or even foo? If code first does not find a property that matches this convention it
will throw an exception because of Entity Framework’s requirement that you must have a key property. You can
use the key annotation to specify which property is to be used as the EntityKey.
If you are using code first’s database generation feature, the Blog table will have a primary key column named
PrimaryTrackingKey, which is also defined as Identity by default.
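For example, a Blog class using PrimaryTrackingKey as its key might be annotated like this; the other properties shown are those referenced elsewhere in this article:

```csharp
public class Blog
{
    // Not named "Id" or "BlogId", so the Key annotation is required
    // to identify the primary key property.
    [Key]
    public int PrimaryTrackingKey { get; set; }

    public string Title { get; set; }
    public string BloggerName { get; set; }

    public virtual ICollection<Post> Posts { get; set; }
}
```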
Composite keys
Entity Framework supports composite keys - primary keys that consist of more than one property. For example,
you could have a Passport class whose primary key is a combination of PassportNumber and IssuingCountry.
Attempting to use the above class in your EF model would result in an InvalidOperationException:
Unable to determine composite primary key ordering for type 'Passport'. Use the ColumnAttribute or the HasKey
method to specify an order for composite primary keys.
In order to use composite keys, Entity Framework requires you to define an order for the key properties. You can
do this by using the Column annotation to specify an order.
NOTE
The order value is relative (rather than index based) so any values can be used. For example, 100 and 200 would be
acceptable in place of 1 and 2.
public class Passport
{
    [Key]
    [Column(Order = 1)]
    public int PassportNumber { get; set; }

    [Key]
    [Column(Order = 2)]
    public string IssuingCountry { get; set; }

    public DateTime Issued { get; set; }
    public DateTime Expires { get; set; }
}
If you have entities with composite foreign keys, then you must specify the same column ordering that you used
for the corresponding primary key properties.
Only the relative ordering within the foreign key properties needs to be the same; the exact values assigned to
Order do not need to match. For example, in the following class, 3 and 4 could be used in place of 1 and 2.
[ForeignKey("Passport")]
[Column(Order = 1)]
public int PassportNumber { get; set; }
[ForeignKey("Passport")]
[Column(Order = 2)]
public string IssuingCountry { get; set; }
Required
The Required annotation tells EF that a particular property is required.
Adding Required to the Title property will force EF (and MVC ) to ensure that the property has data in it.
[Required]
public string Title { get; set; }
With no additional code or markup changes in the application, an MVC application will perform client-side
validation, even dynamically building a message using the property and annotation names.
The Required attribute will also affect the generated database by making the mapped property non-nullable.
Notice that the Title field has changed to “not null”.
NOTE
In some cases it may not be possible for the column in the database to be non-nullable even though the property is
required. For example, when using a TPH inheritance strategy data for multiple types is stored in a single table. If a derived
type includes a required property the column cannot be made non-nullable since not all types in the hierarchy will have this
property.
[MaxLength(10),MinLength(5)]
public string BloggerName { get; set; }
The MaxLength annotation will impact the database by setting the property’s length to 10.
MVC client-side annotation and EF 4.1 server-side annotation will both honor this validation, again dynamically
building an error message: “The field BloggerName must be a string or array type with a maximum length of
'10'.” That message is a little long. Many annotations let you specify an error message with the ErrorMessage
attribute.
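For example, the BloggerName property could supply its own message; the wording here is illustrative:

```csharp
// ErrorMessage replaces the default validation message that MVC
// and EF would otherwise generate for this annotation.
[MaxLength(10, ErrorMessage = "BloggerName must be 10 characters or less"),
 MinLength(5)]
public string BloggerName { get; set; }
```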
NotMapped
Code first convention dictates that every property that is of a supported data type is represented in the database.
But this isn’t always the case in your applications. For example you might have a property in the Blog class that
creates a code based on the Title and BloggerName fields. That property can be created dynamically and does not
need to be stored. You can mark any properties that do not map to the database with the NotMapped annotation
such as this BlogCode property.
[NotMapped]
public string BlogCode
{
get
{
return Title.Substring(0, 1) + ":" + BloggerName.Substring(0, 1);
}
}
ComplexType
It’s not uncommon to describe your domain entities across a set of classes and then layer those classes to
describe a complete entity. For example, you may add a class called BlogDetails to your model.
[MaxLength(250)]
public string Description { get; set; }
}
Notice that BlogDetails does not have any type of key property. In domain driven design, BlogDetails is referred
to as a value object. Entity Framework refers to value objects as complex types. Complex types cannot be tracked
on their own.
However, as a property in the Blog class, BlogDetails will be tracked as part of a Blog object. In order for Code
First to recognize this, you must mark the BlogDetails class as a ComplexType.
[ComplexType]
public class BlogDetails
{
public DateTime? DateCreated { get; set; }
[MaxLength(250)]
public string Description { get; set; }
}
Now you can add a property in the Blog class to represent the BlogDetails for that blog.
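The property listing is not shown in this excerpt; based on the description in the next paragraph (which refers to a BlogDetail property), it would look something like this:

```csharp
public BlogDetails BlogDetail { get; set; }
```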
In the database, the Blog table will contain all of the properties of the blog including the properties contained in
its BlogDetail property. By default, each one is preceded with the name of the complex type, BlogDetail.
ConcurrencyCheck
The ConcurrencyCheck annotation allows you to flag one or more properties to be used for concurrency checking
in the database when a user edits or deletes an entity. If you've been working with the EF Designer, this aligns
with setting a property's ConcurrencyMode to Fixed.
Let’s see how ConcurrencyCheck works by adding it to the BloggerName property.
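The annotated property is not shown in this excerpt; applying the attribute is a one-line change:

```csharp
[ConcurrencyCheck]
public string BloggerName { get; set; }
```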
When SaveChanges is called, because of the ConcurrencyCheck annotation on the BloggerName field, the
original value of that property will be used in the update. The command will attempt to locate the correct row by
filtering not only on the key value but also on the original value of BloggerName. In the critical parts of the
UPDATE command sent to the database, you can see that the command will update the row that has a
PrimaryTrackingKey of 1 and a BloggerName of “Julie”, which was the original value when that blog was retrieved
from the database.
If someone has changed the blogger name for that blog in the meantime, this update will fail and you’ll get a
DbUpdateConcurrencyException that you'll need to handle.
TimeStamp
It's more common to use rowversion or timestamp fields for concurrency checking. Rather than using the
ConcurrencyCheck annotation, you can use the more specific TimeStamp annotation as long as the type of the
property is a byte array. Code First will treat TimeStamp properties the same as ConcurrencyCheck properties, but
it will also ensure that the database field that Code First generates is non-nullable. You can only have one
TimeStamp property in a given class.
Adding the following property to the Blog class:
[Timestamp]
public Byte[] TimeStamp { get; set; }
results in code first creating a non-nullable timestamp column in the database table.
[Table("InternalBlogs")]
public class Blog
The Column annotation is more adept at specifying the attributes of a mapped column. You can stipulate a
name, data type, or even the order in which a column appears in the table. Here is an example of the Column
attribute.
[Column("BlogDescription", TypeName="ntext")]
public String Description {get;set;}
Don’t confuse Column’s TypeName attribute with the DataType DataAnnotation. DataType is an annotation used
for the UI and is ignored by Code First.
Here is the table after it’s been regenerated. The table name has changed to InternalBlogs, and the Description
column from the complex type is now BlogDescription. Because the name was specified in the annotation, Code
First will not use the convention of starting the column name with the name of the complex type.
DatabaseGenerated
An important database feature is the ability to have computed columns. If you're mapping your Code First
classes to tables that contain computed columns, you don't want Entity Framework to try to update those
columns. But you do want EF to return those values from the database after you've inserted or updated data. You
can use the DatabaseGenerated annotation to flag those properties in your class, along with the Computed enum
value. The other enum values are None and Identity.
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public DateTime DateCreated { get; set; }
You can use DatabaseGenerated on byte or timestamp columns when Code First is generating the database;
otherwise, you should only use it when pointing to existing databases, because Code First won't be able to
determine the formula for the computed column.
You read above that by default, a key property that is an integer will become an identity key in the database. That
would be the same as setting DatabaseGenerated to DatabaseGeneratedOption.Identity. If you do not want it to
be an identity key, you can set the value to DatabaseGeneratedOption.None.
Index
NOTE
EF6.1 Onwards Only - The Index attribute was introduced in Entity Framework 6.1. If you are using an earlier version the
information in this section does not apply.
You can create an index on one or more columns using the IndexAttribute. Adding the attribute to one or more
properties will cause EF to create the corresponding index in the database when it creates the database, or
scaffold the corresponding CreateIndex calls if you are using Code First Migrations.
For example, the following code will result in an index being created on the Rating column of the Posts table in
the database.
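The snippet for this first example is not shown above; it would apply the attribute with no arguments:

```csharp
[Index]
public int Rating { get; set; }
```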
By default, the index will be named IX_<property name> (IX_Rating in the above example). You can also specify
a name for the index though. The following example specifies that the index should be named PostRatingIndex.
[Index("PostRatingIndex")]
public int Rating { get; set; }
By default, indexes are non-unique, but you can use the IsUnique named parameter to specify that an index
should be unique. The following example introduces a unique index on a User's login name.
public class User
{
public int UserId { get; set; }
[Index(IsUnique = true)]
[StringLength(200)]
public string Username { get; set; }
Code first convention will take care of the most common relationships in your model, but there are some cases
where it needs help.
Changing the name of the key property in the Blog class created a problem with its relationship to Post.
When generating the database, Code First sees the BlogId property in the Post class and recognizes it, by the
convention that it matches a class name plus “Id”, as a foreign key to the Blog class. But there is no BlogId
property in the Blog class. The solution is to create a navigation property in Post and use the ForeignKey
DataAnnotation to help Code First understand how to build the relationship between the two classes (using the
Post.BlogId property), as well as how to specify constraints in the database.
public class Post
{
public int Id { get; set; }
public string Title { get; set; }
public DateTime DateCreated { get; set; }
public string Content { get; set; }
public int BlogId { get; set; }
[ForeignKey("BlogId")]
public Blog Blog { get; set; }
public ICollection<Comment> Comments { get; set; }
}
The constraint in the database shows a relationship between InternalBlogs.PrimaryTrackingKey and Posts.BlogId.
The InverseProperty is used when you have multiple relationships between classes.
In the Post class, you may want to keep track of who wrote a blog post as well as who edited it. Here are two new
navigation properties for the Post class.
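The code listing appears to be missing from this excerpt; based on the discussion that follows (which refers to CreatedBy and UpdatedBy), the two navigation properties would look something like this:

```csharp
public Person CreatedBy { get; set; }
public Person UpdatedBy { get; set; }
```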
You’ll also need to add in the Person class referenced by these properties. The Person class has navigation
properties back to the Post, one for all of the posts written by the person and one for all of the posts updated by
that person.
Code first is not able to match up the properties in the two classes on its own. The database table for Posts should
have one foreign key for the CreatedBy person and one for the UpdatedBy person but code first will create four
foreign key properties: Person_Id, Person_Id1, CreatedBy_Id and UpdatedBy_Id.
To fix these problems, you can use the InverseProperty annotation to specify the alignment of the properties.
[InverseProperty("CreatedBy")]
public List<Post> PostsWritten { get; set; }
[InverseProperty("UpdatedBy")]
public List<Post> PostsUpdated { get; set; }
Because the PostsWritten property in Person knows that this refers to the Post type, it will build the relationship
to Post.CreatedBy. Similarly, PostsUpdated will be connected to Post.UpdatedBy. And code first will not create the
extra foreign keys.
Summary
DataAnnotations not only let you describe client and server side validation in your code first classes, but they also
allow you to enhance and even correct the assumptions that code first will make about your classes based on its
conventions. With DataAnnotations you can not only drive database schema generation, but you can also map
your code first classes to a pre-existing database.
While they are very flexible, keep in mind that DataAnnotations provide only the most commonly needed
configuration changes you can make on your code first classes. To configure your classes for some of the edge
cases, you should look to the alternate configuration mechanism, Code First’s Fluent API.
Defining DbSets
9/13/2018 • 2 minutes to read
When developing with the Code First workflow you define a derived DbContext that represents your session with
the database and exposes a DbSet for each type in your model. This topic covers the various ways you can define
the DbSet properties.
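The context listing referenced in the next paragraph is not shown in this excerpt; a minimal sketch, assuming Blog and Post entity classes, might be:

```csharp
public class BloggingContext : DbContext
{
    // Automatic properties: DbContext calls the setters
    // to assign a DbSet instance for each property.
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}
```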
When used in Code First mode, this will configure Blogs and Posts as entity types, as well as configuring other
types reachable from these. In addition DbContext will automatically call the setter for each of these properties to
set an instance of the appropriate DbSet.
This context works in exactly the same way as the context that uses the DbSet class for its set properties.
NOTE
EF5 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 5. If you are using
an earlier version, some or all of the information does not apply.
This video and step-by-step walkthrough shows how to use enum types with Entity Framework Code First. It also
demonstrates how to use enums in a LINQ query.
This walkthrough will use Code First to create a new database, but you can also use Code First to map to an
existing database.
Enum support was introduced in Entity Framework 5. To use the new features like enums, spatial data types, and
table-valued functions, you must target .NET Framework 4.5. Visual Studio 2012 targets .NET 4.5 by default.
In Entity Framework, an enumeration can have the following underlying types: Byte, Int16, Int32, Int64, or
SByte.
Pre-Requisites
You will need to have Visual Studio 2012 Ultimate, Premium, Professional, or Web Express edition installed to
complete this walkthrough.
using System.Data.Entity;
context.SaveChanges();
Console.WriteLine(
"DepartmentID: {0} Name: {1}",
department.DepartmentID,
department.Name);
}
Compile and run the application. The program produces the following output:
Summary
In this walkthrough we looked at how to use enum types with Entity Framework Code First.
Spatial - Code First
9/18/2018 • 4 minutes to read
NOTE
EF5 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 5. If you are using
an earlier version, some or all of the information does not apply.
This video and step-by-step walkthrough shows how to map spatial types with Entity Framework Code First. It
also demonstrates how to use a LINQ query to find the distance between two locations.
This walkthrough will use Code First to create a new database, but you can also use Code First to map to an
existing database.
Spatial type support was introduced in Entity Framework 5. Note that to use the new features like spatial types,
enums, and table-valued functions, you must target .NET Framework 4.5. Visual Studio 2012 targets .NET 4.5 by
default.
To use spatial data types you must also use an Entity Framework provider that has spatial support. See provider
support for spatial types for more information.
There are two main spatial data types: geography and geometry. The geography data type stores ellipsoidal data
(for example, GPS latitude and longitude coordinates). The geometry data type represents a Euclidean (flat)
coordinate system.
Pre-Requisites
You will need to have Visual Studio 2012 Ultimate, Premium, Professional, or Web Express edition installed to
complete this walkthrough.
using System.Data.Spatial;
using System.Data.Entity;
context.SaveChanges();
Console.WriteLine(
"The closest University to you is: {0}.",
university.Name);
}
Compile and run the application. The program produces the following output:
Summary
In this walkthrough we looked at how to use spatial types with Entity Framework Code First.
Code First Conventions
9/13/2018 • 6 minutes to read
Code First enables you to describe a model by using C# or Visual Basic .NET classes. The basic shape of the model
is detected by using conventions. Conventions are sets of rules that are used to automatically configure a
conceptual model based on class definitions when working with Code First. The conventions are defined in the
System.Data.Entity.ModelConfiguration.Conventions namespace.
You can further configure your model by using data annotations or the fluent API. Precedence is given to
configuration through the fluent API followed by data annotations and then conventions. For more information
see Data Annotations, Fluent API - Relationships, Fluent API - Types & Properties and Fluent API with VB.NET.
A detailed list of Code First conventions is available in the API Documentation. This topic provides an overview of
the conventions used by Code First.
Type Discovery
When using Code First development you usually begin by writing .NET Framework classes that define your
conceptual (domain) model. In addition to defining the classes, you also need to let DbContext know which types
you want to include in the model. To do this, you define a context class that derives from DbContext and exposes
DbSet properties for the types that you want to be part of the model. Code First will include these types and also
will pull in any referenced types, even if the referenced types are defined in a different assembly.
If your types participate in an inheritance hierarchy, it is enough to define a DbSet property for the base class, and
the derived types will be automatically included, if they are in the same assembly as the base class.
In the following example, there is only one DbSet property defined on the SchoolEntities class (Departments).
Code First uses this property to discover and pull in any referenced types.
public class SchoolEntities : DbContext
{
public DbSet<Department> Departments { get; set; }
}
// Navigation property
public virtual ICollection<Course> Courses { get; set; }
}
// Foreign key
public int DepartmentID { get; set; }
// Navigation properties
public virtual Department Department { get; set; }
}
If you want to exclude a type from the model, use the NotMapped attribute or the DbModelBuilder.Ignore
fluent API.
modelBuilder.Ignore<Department>();
. . .
}
Relationship Convention
In Entity Framework, navigation properties provide a way to navigate a relationship between two entity types.
Every object can have a navigation property for every relationship in which it participates. Navigation properties
allow you to navigate and manage relationships in both directions, returning either a reference object (if the
multiplicity is either one or zero-or-one) or a collection (if the multiplicity is many). Code First infers relationships
based on the navigation properties defined on your types.
In addition to navigation properties, we recommend that you include foreign key properties on the types that
represent dependent objects. Any property with the same data type as the principal primary key property and with
a name that follows one of the following formats represents a foreign key for the relationship: '<navigation
property name><principal primary key property name>', '<principal class name><primary key property name>',
or '<principal primary key property name>'. If multiple matches are found then precedence is given in the order
listed above. Foreign key detection is not case sensitive. When a foreign key property is detected, Code First infers
the multiplicity of the relationship based on the nullability of the foreign key. If the property is nullable then the
relationship is registered as optional; otherwise the relationship is registered as required.
If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship. If a
foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and
when the principal is deleted the foreign key will be set to null. The multiplicity and cascade delete behavior
detected by convention can be overridden by using the fluent API.
In the following example the navigation properties and a foreign key are used to define the relationship between
the Department and Course classes.
// Navigation property
public virtual ICollection<Course> Courses { get; set; }
}
// Foreign key
public int DepartmentID { get; set; }
// Navigation properties
public virtual Department Department { get; set; }
}
NOTE
If you have multiple relationships between the same types (for example, suppose you define the Person and Book classes,
where the Person class contains the ReviewedBooks and AuthoredBooks navigation properties and the Book class
contains the Author and Reviewer navigation properties) you need to manually configure the relationships by using Data
Annotations or the fluent API. For more information, see Data Annotations - Relationships and Fluent API - Relationships.
Complex Types Convention
When Code First discovers a class definition where a primary key cannot be inferred, and no primary key is
registered through data annotations or the fluent API, then the type is automatically registered as a complex type.
Complex type detection also requires that the type does not have properties that reference entity types and is not
referenced from a collection property on another type. Given the following class definitions Code First would infer
that Details is a complex type because it has no primary key.
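The class definitions referenced here are not shown in this excerpt; a minimal sketch illustrating the rule (the class and property names are hypothetical) might be:

```csharp
public class Product
{
    public int Id { get; set; }            // key inferred by convention; Product is an entity type
    public Details Details { get; set; }
}

public class Details
{
    // No property from which a key can be inferred, no references to entity
    // types, and not used in a collection property on another type, so
    // Details is registered as a complex type.
    public DateTime CreatedDate { get; set; }
    public string Description { get; set; }
}
```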
Removing Conventions
You can remove any of the conventions defined in the System.Data.Entity.ModelConfiguration.Conventions
namespace. The following example removes PluralizingTableNameConvention.
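The code for this example is not shown above; removing a convention is done inside OnModelCreating:

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Table names will no longer be pluralized (Blog instead of Blogs).
    modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
}
```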
Custom Conventions
Custom conventions are supported in EF6 onwards. For more information see Custom Code First Conventions.
Custom Code First Conventions
9/13/2018 • 11 minutes to read
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
When using Code First your model is calculated from your classes using a set of conventions. The default Code
First Conventions determine things like which property becomes the primary key of an entity, the name of the
table an entity maps to, and what precision and scale a decimal column has by default.
Sometimes these default conventions are not ideal for your model, and you have to work around them by
configuring many individual entities using Data Annotations or the Fluent API. Custom Code First Conventions
let you define your own conventions that provide configuration defaults for your model. In this walkthrough, we
will explore the different types of custom conventions and how to create each of them.
Model-Based Conventions
This page covers the DbModelBuilder API for custom conventions. This API should be sufficient for authoring
most custom conventions. However, there is also the ability to author model-based conventions - conventions that
manipulate the final model once it is created - to handle advanced scenarios. For more information, see Model-
Based Conventions.
Our Model
Let's start by defining a simple model that we can use with our conventions. Add the following classes to your
project.
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
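The model classes and the first convention snippet appear to have been dropped from this excerpt. A minimal sketch consistent with the later references in this walkthrough (a ProductCategory class, and properties named Key and Name) might be:

```csharp
public class Product
{
    public int Key { get; set; }
    public string Name { get; set; }
    public ProductCategory Category { get; set; }
}

public class ProductCategory
{
    public int Key { get; set; }
    public string Name { get; set; }
    public List<Product> Products { get; set; }
}
```

With the model in place, a convention that treats any property named Key as the primary key of its entity can be written like this:

```csharp
modelBuilder.Properties()
    .Where(p => p.Name == "Key")
    .Configure(p => p.IsKey());
```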
Now, any property in our model named Key will be configured as the primary key of whatever entity it's part of.
We could also make our conventions more specific by filtering on the type of property that we are going to
configure:
modelBuilder.Properties<int>()
.Where(p => p.Name == "Key")
.Configure(p => p.IsKey());
This will configure all properties called Key to be the primary key of their entity, but only if they are an integer.
An interesting feature of the IsKey method is that it is additive, which means that if you call IsKey on multiple
properties they will all become part of a composite key. The one caveat is that when you specify
multiple properties for a key you must also specify an order for those properties. You can do this by calling the
HasColumnOrder method as shown below:
modelBuilder.Properties<int>()
.Where(x => x.Name == "Key")
.Configure(x => x.IsKey().HasColumnOrder(1));
modelBuilder.Properties()
.Where(x => x.Name == "Name")
.Configure(x => x.IsKey().HasColumnOrder(2));
This code will configure the types in our model to have a composite key consisting of the int Key column and the
string Name column. If we view the model in the designer it would look like this:
Another example of property conventions is to configure all DateTime properties in my model to map to the
datetime2 type in SQL Server instead of datetime. You can achieve this with the following:
modelBuilder.Properties<DateTime>()
.Configure(c => c.HasColumnType("datetime2"));
Convention Classes
Another way of defining conventions is to use a Convention Class to encapsulate your convention. When using a
Convention Class then you create a type that inherits from the Convention class in the
System.Data.Entity.ModelConfiguration.Conventions namespace.
We can create a Convention Class with the datetime2 convention that we showed earlier by doing the following:
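The convention class itself is not shown in this excerpt; it moves the datetime2 configuration into the class constructor, along these lines:

```csharp
public class DateTime2Convention : Convention
{
    public DateTime2Convention()
    {
        // Map all DateTime properties to the SQL Server datetime2 type.
        this.Properties<DateTime>()
            .Configure(c => c.HasColumnType("datetime2"));
    }
}
```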
modelBuilder.Conventions.Add(new DateTime2Convention());
}
As you can see, we add an instance of our convention to the conventions collection. Inheriting from Convention
provides a convenient way of grouping and sharing conventions across teams or projects. You could, for example,
have a class library with a common set of conventions that all of your organization's projects use.
Custom Attributes
Another great use of conventions is to enable new attributes to be used when configuring a model. To illustrate
this, let’s create an attribute that we can use to mark String properties as non-Unicode.
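The attribute class is not shown in this excerpt; a marker attribute with no members is sufficient:

```csharp
// Marker attribute used by the convention below to identify
// string properties that should be stored as non-Unicode.
[AttributeUsage(AttributeTargets.Property)]
public class NonUnicode : Attribute
{
}
```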
modelBuilder.Properties()
.Where(x => x.GetCustomAttributes(false).OfType<NonUnicode>().Any())
.Configure(c => c.IsUnicode(false));
With this convention we can add the NonUnicode attribute to any of our string properties, which means the
column in the database will be stored as varchar instead of nvarchar.
One thing to note about this convention is that if you put the NonUnicode attribute on anything other than a
string property then it will throw an exception. It does this because you cannot configure IsUnicode on any type
other than a string. If this happens, then you can make your convention more specific, so that it filters out anything
that isn’t a string.
While the above convention works for defining custom attributes there is another API that can be much easier to
use, especially when you want to use properties from the attribute class.
For this example we are going to update our attribute and change it to an IsUnicode attribute, so it looks like this:
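The updated attribute class is not shown; based on how the convention below reads its Unicode property, it would look something like this:

```csharp
public class IsUnicode : Attribute
{
    public bool Unicode { get; set; }

    public IsUnicode(bool isUnicode)
    {
        Unicode = isUnicode;
    }
}
```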
modelBuilder.Properties()
.Where(x => x.GetCustomAttributes(false).OfType<IsUnicode>().Any())
.Configure(c => c.IsUnicode(c.ClrPropertyInfo.GetCustomAttribute<IsUnicode>().Unicode));
This is easy enough, but there is a more succinct way of achieving it by using the Having method of the
conventions API. The Having method has a parameter of type Func<PropertyInfo, T> which accepts the
PropertyInfo, the same as the Where method, but is expected to return an object. If the returned object is null,
the property will not be configured, which means you can filter out properties with it just like Where; the
difference is that it will also capture the returned object and pass it to the Configure method. This works like the
following:
modelBuilder.Properties()
.Having(x => x.GetCustomAttributes(false).OfType<IsUnicode>().FirstOrDefault())
.Configure((config, att) => config.IsUnicode(att.Unicode));
Custom attributes are not the only reason to use the Having method; it is useful anywhere that you need to
reason about something that you are filtering on when configuring your types or properties.
Configuring Types
So far all of our conventions have been for properties, but there is another area of the conventions API for
configuring the types in your model. The experience is similar to the conventions we have seen so far, but the
options inside configure will be at the entity instead of property level.
One of the things that Type level conventions can be really useful for is changing the table naming convention,
either to map to an existing schema that differs from the EF default or to create a new database with a different
naming convention. To do this we first need a method that can accept the TypeInfo for a type in our model and
return what the table name for that type should be:
private string GetTableName(Type type)
{
    // Insert an underscore before each capital letter, then lower-case the whole name.
    var result = Regex.Replace(type.Name, ".[A-Z]", m => m.Value[0] + "_" + m.Value[1]);
    return result.ToLower();
}
This method takes a type and returns a string that uses lower case with underscores instead of CamelCase. In our
model this means that the ProductCategory class will be mapped to a table called product_category instead of
ProductCategories.
Once we have that method we can call it in a convention like this:
modelBuilder.Types()
.Configure(c => c.ToTable(GetTableName(c.ClrType)));
This convention configures every type in our model to map to the table name that is returned from our
GetTableName method. This convention is the equivalent to calling the ToTable method for each entity in the
model using the Fluent API.
One thing to note about this is that when you call ToTable EF will take the string that you provide as the exact table
name, without any of the pluralization that it would normally do when determining table names. This is why the
table name from our convention is product_category instead of product_categories. We can resolve that in our
convention by making a call to the pluralization service ourselves.
In the following code we will use the Dependency Resolution feature added in EF6 to retrieve the pluralization
service that EF would have used and pluralize our table name.
return result.ToLower();
}
NOTE
The generic version of GetService is an extension method in the System.Data.Entity.Infrastructure.DependencyResolution
namespace; you will need to add a using statement to your context in order to use it.
By default both employee and manager are mapped to the same table (Employees) in the database. The table will
contain both employees and managers with a discriminator column that will tell you what type of instance is
stored in each row. This is TPH mapping, as there is a single table for the hierarchy. However, if you call ToTable on
both classes, then each type will instead be mapped to its own table, also known as TPT, since each type has its own
table.
modelBuilder.Types()
.Configure(c=>c.ToTable(c.ClrType.Name));
The code above will map to a table structure that looks like the following:
You can avoid this, and maintain the default TPH mapping, in a couple ways:
1. Call ToTable with the same table name for each type in the hierarchy.
2. Call ToTable only on the base class of the hierarchy, in our example that would be employee.
Execution Order
Conventions operate in a last wins manner, the same as the Fluent API. What this means is that if you write two
conventions that configure the same option of the same property, then the last one to execute wins. As an
example, in the code below the max length of all strings is set to 500 but we then configure all properties called
Name in the model to have a max length of 250.
modelBuilder.Properties<string>()
.Configure(c => c.HasMaxLength(500));
modelBuilder.Properties<string>()
.Where(x => x.Name == "Name")
.Configure(c => c.HasMaxLength(250));
Because the convention to set max length to 250 is after the one that sets all strings to 500, all the properties
called Name in our model will have a MaxLength of 250 while any other strings, such as descriptions, would be
500. Using conventions in this way means that you can provide a general convention for types or properties in
your model and then override it for subsets that are different.
The Fluent API and Data Annotations can also be used to override a convention in specific cases. In our example
above if we had used the Fluent API to set the max length of a property then we could have put it before or after
the convention, because the more specific Fluent API will win over the more general Configuration Convention.
Built-in Conventions
Because custom conventions could be affected by the default Code First conventions, it can be useful to add
conventions to run before or after another convention. To do this you can use the AddBefore and AddAfter
methods of the Conventions collection on your derived DbContext. The following code would add the convention
class we created earlier so that it will run before the built in key discovery convention.
modelBuilder.Conventions.AddBefore<IdKeyDiscoveryConvention>(new DateTime2Convention());
This is going to be of the most use when adding conventions that need to run before or after the built-in
conventions. A list of the built-in conventions can be found here:
System.Data.Entity.ModelConfiguration.Conventions Namespace.
You can also remove conventions that you do not want applied to your model. To remove a convention, use the
Remove method. Here is an example of removing the PluralizingTableNameConvention.
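The code for this example does not appear in this excerpt; it would be a Remove call inside OnModelCreating:

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Table names will no longer be pluralized.
    modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
}
```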
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
Model based conventions are an advanced method of convention based model configuration. For most scenarios
the Custom Code First Convention API on DbModelBuilder should be used. An understanding of the
DbModelBuilder API for conventions is recommended before using model based conventions.
Model based conventions allow the creation of conventions that affect properties and tables which are not
configurable through standard conventions. Examples of these are discriminator columns in table per hierarchy
models and Independent Association columns.
Creating a Convention
The first step in creating a model based convention is choosing when in the pipeline the convention needs to be
applied to the model. There are two types of model conventions, Conceptual (C-Space) and Store (S-Space). A C-Space
convention is applied to the model that the application builds, whereas an S-Space convention is applied to
the version of the model that represents the database and controls things such as how automatically-generated
columns are named.
A model convention is a class that extends from either IConceptualModelConvention or IStoreModelConvention.
These interfaces both accept a generic type that can be of type MetadataItem which is used to filter the data type
that the convention applies to.
Adding a Convention
Model conventions are added in the same way as regular conventions classes. In the OnModelCreating method,
add the convention to the list of conventions for a model.
using System.Data.Entity;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.ModelConfiguration.Conventions;
A convention can also be added in relation to another convention using the Conventions.AddBefore<> or
Conventions.AddAfter<> methods. For more information about the conventions that Entity Framework applies
see the notes section.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Conventions.AddAfter<IdKeyDiscoveryConvention>(new MyModelBasedConvention());
}
using System.Data.Entity;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.ModelConfiguration.Conventions;
// Provides a convention for fixing the independent association (IA) foreign key column names.
public class ForeignKeyNamingConvention : IStoreModelConvention<AssociationType>
{
using System.Linq;
if (!matches.Any())
{
matches = primitiveProperties
.Where(p => (entityType.Name + "Id").Equals(p.Name, StringComparison.OrdinalIgnoreCase));
}
// If the number of matches is more than one, then multiple properties matched differing only by
// case--for example, "Key" and "key".
if (matches.Count() > 1)
{
throw new InvalidOperationException("Multiple properties match the key convention");
}
return matches;
}
}
We then need to add our new convention before the existing key convention. After we add the
CustomKeyDiscoveryConvention, we can remove the IdKeyDiscoveryConvention. If we didn’t remove the existing
IdKeyDiscoveryConvention, our convention would still take precedence over the Id discovery convention since it
runs first, but in the case where no “key” property is found, the “id” convention would still run. We see this
behavior because each convention sees the model as updated by the previous convention, rather than operating on
it independently and having the results combined. So if, for example, a previous convention updated a column
name to match something of interest to your custom convention (when before that the name was not of interest),
then your convention will apply to that column.
public class BlogContext : DbContext
{
public DbSet<Post> Posts { get; set; }
public DbSet<Comment> Comments { get; set; }
Notes
A list of conventions that are currently applied by Entity Framework is available in the MSDN documentation here:
http://msdn.microsoft.com/library/system.data.entity.modelconfiguration.conventions.aspx. This list is pulled
directly from our source code. The source code for Entity Framework 6 is available on GitHub and many of the
conventions used by Entity Framework are good starting points for custom model based conventions.
Fluent API - Relationships
9/13/2018 • 6 minutes to read
NOTE
This page provides information about setting up relationships in your Code First model using the fluent API. For general
information about relationships in EF and how to access and manipulate data using relationships, see Relationships &
Navigation Properties.
When working with Code First, you define your model by defining your domain CLR classes. By default, Entity
Framework uses the Code First conventions to map your classes to the database schema. If you use the Code First
naming conventions, in most cases you can rely on Code First to set up relationships between your tables based
on the foreign keys and navigation properties that you define on the classes. If you do not follow the conventions
when defining your classes, or if you want to change the way the conventions work, you can use the fluent API or
data annotations to configure your classes so Code First can map the relationships between your tables.
Introduction
When configuring a relationship with the fluent API, you start with the EntityTypeConfiguration instance and then
use the HasRequired, HasOptional, or HasMany method to specify the type of relationship this entity participates
in. The HasRequired and HasOptional methods take a lambda expression that represents a reference navigation
property. The HasMany method takes a lambda expression that represents a collection navigation property. You
can then configure an inverse navigation property by using the WithRequired, WithOptional, and WithMany
methods. These methods have overloads that do not take arguments and can be used to specify cardinality with
unidirectional navigations.
You can then configure foreign key properties by using the HasForeignKey method. This method takes a lambda
expression that represents the property to be used as the foreign key.
modelBuilder.Entity<Instructor>()
.HasRequired(t => t.OfficeAssignment)
.WithRequiredPrincipal(t => t.Instructor);
modelBuilder.Entity<Course>()
.HasMany(t => t.Instructors)
.WithMany(t => t.Courses)
If you want to specify the join table name and the names of its columns, you need to do additional
configuration by using the Map method. The following code generates the CourseInstructor table with CourseID
and InstructorID columns.
modelBuilder.Entity<Course>()
.HasMany(t => t.Instructors)
.WithMany(t => t.Courses)
.Map(m =>
{
m.ToTable("CourseInstructor");
m.MapLeftKey("CourseID");
m.MapRightKey("InstructorID");
});
modelBuilder.Entity<Instructor>()
.HasRequired(t => t.OfficeAssignment)
.WithRequiredPrincipal();
modelBuilder.Entity<Course>()
.HasRequired(t => t.Department)
.WithMany(t => t.Courses)
.HasForeignKey(d => d.DepartmentID)
.WillCascadeOnDelete(false);
modelBuilder.Entity<Course>()
.HasRequired(c => c.Department)
.WithMany(t => t.Courses)
.Map(m => m.MapKey("ChangedDepartmentID"));
Configuring a Foreign Key Name That Does Not Follow the Code First
Convention
If the foreign key property on the Course class was called SomeDepartmentID instead of DepartmentID, you
would need to do the following to specify that you want SomeDepartmentID to be the foreign key:
modelBuilder.Entity<Course>()
.HasRequired(c => c.Department)
.WithMany(d => d.Courses)
.HasForeignKey(c => c.SomeDepartmentID);
Model Used in Samples
The following Code First model is used for the samples on this page.
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;
// add a reference to System.ComponentModel.DataAnnotations DLL
using System.ComponentModel.DataAnnotations;
using System.Collections.Generic;
using System;
// Navigation property
public virtual ICollection<Course> Courses { get; private set; }
}
// Foreign key
public int DepartmentID { get; set; }
// Navigation properties
public virtual Department Department { get; set; }
public virtual ICollection<Instructor> Instructors { get; private set; }
}
// Primary key
public int InstructorID { get; set; }
public string LastName { get; set; }
public string FirstName { get; set; }
public System.DateTime HireDate { get; set; }
// Navigation properties
public virtual ICollection<Course> Courses { get; private set; }
}
// Navigation property
public virtual Instructor Instructor { get; set; }
}
Fluent API - Configuring and Mapping Properties
and Types
11/27/2018 • 11 minutes to read
When working with Entity Framework Code First, the default behavior is to map your POCO classes to tables
using a set of conventions baked into EF. Sometimes, however, you cannot or do not want to follow those
conventions and need to map entities to something other than what the conventions dictate.
There are two main ways you can configure EF to use something other than conventions, namely annotations or
EF's fluent API. The annotations only cover a subset of the fluent API functionality, so there are mapping scenarios
that cannot be achieved using annotations. This article is designed to demonstrate how to use the fluent API to
configure properties.
The Code First fluent API is most commonly accessed by overriding the OnModelCreating method on your derived
DbContext. The following samples are designed to show how to do various tasks with the fluent API and allow you
to copy the code out and customize it to suit your model. If you wish to see the model that they can be used with
as-is, it is provided at the end of this article.
Model-Wide Settings
Default Schema (EF6 onwards)
Starting with EF6 you can use the HasDefaultSchema method on DbModelBuilder to specify the database schema
to use for all tables, stored procedures, etc. This default setting will be overridden for any objects that you explicitly
configure a different schema for.
modelBuilder.HasDefaultSchema("sales");
Property Mapping
The Property method is used to configure attributes for each property belonging to an entity or complex type. The
Property method is used to obtain a configuration object for a given property. The options on the configuration
object are specific to the type being configured; IsUnicode is available only on string properties for example.
Configuring a Primary Key
The Entity Framework convention for primary keys is:
1. Your class defines a property whose name is “ID” or “Id”
2. Or the class name followed by “ID” or “Id”
To explicitly set a property to be a primary key, you can use the HasKey method. In the following example, the
HasKey method is used to configure the InstructorID primary key on the OfficeAssignment type.
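A sketch of that configuration (OfficeAssignment and InstructorID come from the model at the end of this article):

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Configure InstructorID as the primary key of OfficeAssignment.
    modelBuilder.Entity<OfficeAssignment>()
        .HasKey(t => t.InstructorID);
}
```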
NOTE
In some cases it may not be possible for the column in the database to be non-nullable even though the property is
required. For example, when using a TPH inheritance strategy data for multiple types is stored in a single table. If a derived
type includes a required property the column cannot be made non-nullable since not all types in the hierarchy will have this
property.
NOTE
EF6.1 Onwards Only - The Index attribute was introduced in Entity Framework 6.1. If you are using an earlier version the
information in this section does not apply.
Creating indexes isn't natively supported by the Fluent API, but you can make use of the support for
IndexAttribute via the Fluent API. Index attributes are processed by including a model annotation on the model
that is then turned into an Index in the database later in the pipeline. You can manually add these same
annotations using the Fluent API.
The easiest way to do this is to create an instance of IndexAttribute that contains all the settings for the new
index. You can then create an instance of IndexAnnotation which is an EF specific type that will convert the
IndexAttribute settings into a model annotation that can be stored on the EF model. These can then be passed to
the HasColumnAnnotation method on the Fluent API, specifying the name Index for the annotation.
modelBuilder
.Entity<Department>()
.Property(t => t.Name)
.HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute()));
For a complete list of the settings available in IndexAttribute, see the Index section of Code First Data
Annotations. This includes customizing the index name, creating unique indexes, and creating multi-column
indexes.
You can specify multiple index annotations on a single property by passing an array of IndexAttribute to the
constructor of IndexAnnotation.
modelBuilder
.Entity<Department>()
.Property(t => t.Name)
.HasColumnAnnotation(
"Index",
new IndexAnnotation(new[]
{
new IndexAttribute("Index1"),
new IndexAttribute("Index2") { IsUnique = true }
}));
modelBuilder.Entity<Department>()
.Property(t => t.Name)
.HasColumnName("DepartmentName");
modelBuilder.Entity<Course>()
.HasRequired(c => c.Department)
.WithMany(t => t.Courses)
.Map(m => m.MapKey("ChangedDepartmentID"));
modelBuilder.Entity<Department>()
.Property(p => p.Name)
.HasColumnType("varchar");
modelBuilder.ComplexType<Details>()
.Property(t => t.Location)
.HasMaxLength(20);
You can also use the dot notation to access a property of a complex type.
modelBuilder.Entity<OnsiteCourse>()
.Property(t => t.Details.Location)
.HasMaxLength(20);
modelBuilder.Entity<OfficeAssignment>()
.Property(t => t.Timestamp)
.IsConcurrencyToken();
You can also use the IsRowVersion method to configure the property to be a row version in the database. Setting
the property to be a row version automatically configures it to be an optimistic concurrency token.
modelBuilder.Entity<OfficeAssignment>()
.Property(t => t.Timestamp)
.IsRowVersion();
Type Mapping
Specifying That a Class Is a Complex Type
By convention, a type that has no primary key specified is treated as a complex type. There are some scenarios
where Code First will not detect a complex type (for example, if you do have a property called ID, but you do not
mean for it to be a primary key). In such cases, you would use the fluent API to explicitly specify that a type is a
complex type.
modelBuilder.ComplexType<Details>();
The following example excludes the OnlineCourse type from the model.
modelBuilder.Ignore<OnlineCourse>();
The following examples map the Department type to a table named t_Department, and then to a t_Department table in the school schema.
modelBuilder.Entity<Department>()
.ToTable("t_Department");
modelBuilder.Entity<Department>()
.ToTable("t_Department", "school");
In a Table-Per-Hierarchy (TPH) mapping, all types in an inheritance hierarchy are stored in a single table and a discriminator column identifies the type of each row. The following example configures a Type discriminator column.
modelBuilder.Entity<Course>()
.Map<Course>(m => m.Requires("Type").HasValue("Course"))
.Map<OnsiteCourse>(m => m.Requires("Type").HasValue("OnsiteCourse"));
In a Table-Per-Type (TPT) mapping, each type is mapped to its own table.
modelBuilder.Entity<Course>().ToTable("Course");
modelBuilder.Entity<OnsiteCourse>().ToTable("OnsiteCourse");
When using Table-Per-Concrete-Class (TPC) mapping, database-generated identity keys are problematic because each concrete type's rows live in a separate table, so the key is configured to not be database generated.
modelBuilder.Entity<Course>()
.Property(c => c.CourseID)
.HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
In TPC mapping, the MapInheritedProperties method maps all properties of the type, including inherited ones, to columns of its own table.
modelBuilder.Entity<OnsiteCourse>().Map(m =>
{
m.MapInheritedProperties();
m.ToTable("OnsiteCourse");
});
modelBuilder.Entity<OnlineCourse>().Map(m =>
{
m.MapInheritedProperties();
m.ToTable("OnlineCourse");
});
Mapping Properties of an Entity Type to Multiple Tables in the Database (Entity Splitting)
Entity splitting allows the properties of an entity type to be spread across multiple tables. In the following example,
the Department entity is split into two tables: Department and DepartmentDetails. Entity splitting uses multiple
calls to the Map method to map a subset of properties to a specific table.
modelBuilder.Entity<Department>()
.Map(m =>
{
m.Properties(t => new { t.DepartmentID, t.Name });
m.ToTable("Department");
})
.Map(m =>
{
m.Properties(t => new { t.DepartmentID, t.Administrator, t.StartDate, t.Budget });
m.ToTable("DepartmentDetails");
});
Mapping Multiple Entity Types to One Table in the Database (Table Splitting)
The following example maps two entity types that share a primary key to one table.
modelBuilder.Entity<OfficeAssignment>()
.HasKey(t => t.InstructorID);
modelBuilder.Entity<Instructor>()
.HasRequired(t => t.OfficeAssignment)
.WithRequiredPrincipal(t => t.Instructor);
modelBuilder.Entity<Instructor>().ToTable("Instructor");
modelBuilder.Entity<OfficeAssignment>().ToTable("Instructor");
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;
// add a reference to System.ComponentModel.DataAnnotations DLL
using System.ComponentModel.DataAnnotations;
using System.Collections.Generic;
using System;
// Navigation property
public virtual ICollection<Course> Courses { get; private set; }
}
// Foreign key
public int DepartmentID { get; set; }
// Navigation properties
public virtual Department Department { get; set; }
public virtual ICollection<Instructor> Instructors { get; private set; }
}
// Primary key
public int InstructorID { get; set; }
public string LastName { get; set; }
public string FirstName { get; set; }
public System.DateTime HireDate { get; set; }
// Navigation properties
public virtual ICollection<Course> Courses { get; private set; }
}
// Navigation property
public virtual Instructor Instructor { get; set; }
}
Fluent API with VB.NET
9/18/2018 • 9 minutes to read
Code First allows you to define your model using C# or VB.NET classes. Additional configuration can optionally
be performed using attributes on your classes and properties or by using a fluent API. This walkthrough shows
how to perform fluent API configuration using VB.NET.
This page assumes you have a basic understanding of Code First. Check out the following walkthroughs for more
information on Code First:
Code First to a New Database
Code First to an Existing Database
Pre-Requisites
You will need to have Visual Studio 2010 or Visual Studio 2012 installed to complete this walkthrough.
If you are using Visual Studio 2010, you will also need to have NuGet installed.
' Foreign key that does not follow the Code First convention.
' The fluent API will be used to configure DepartmentID_FK to be the foreign key for this entity.
Public Property DepartmentID_FK() As Integer
NOTE
If you don’t have the Manage NuGet Packages… option, you should install the latest version of NuGet.
Imports System.Data.Entity
Imports System.Data.Entity.Infrastructure
Imports System.Data.Entity.ModelConfiguration.Conventions
Imports System.ComponentModel.DataAnnotations
Imports System.ComponentModel.DataAnnotations.Schema
' In the TPT mapping scenario, all types are mapped to individual tables.
' Properties that belong solely to a base type or derived type are stored
' in a table that maps to that type. Tables that map to derived types
' also store a foreign key that joins the derived table with the base table.
modelBuilder.Entity(Of Course)().ToTable("Course")
modelBuilder.Entity(Of OnsiteCourse)().ToTable("OnsiteCourse")
modelBuilder.Entity(Of OnlineCourse)().ToTable("OnlineCourse")
' Configuring a foreign key name that does not follow the Code First convention
' The foreign key property on the Course class is called DepartmentID_FK
' since that does not follow Code First conventions you need to explicitly specify
' that you want DepartmentID_FK to be the foreign key.
modelBuilder.Entity(Of Course)().
HasRequired(Function(t) t.Department).
WithMany(Function(t) t.Courses).
HasForeignKey(Function(t) t.DepartmentID_FK)
' You can also remove the cascade delete conventions by using:
' modelBuilder.Conventions.Remove(Of OneToManyCascadeDeleteConvention)()
' and modelBuilder.Conventions.Remove(Of ManyToManyCascadeDeleteConvention)().
modelBuilder.Entity(Of Course)().
HasRequired(Function(t) t.Department).
WithMany(Function(t) t.Courses).
HasForeignKey(Function(d) d.DepartmentID_FK).
WillCascadeOnDelete(False)
Module Module1
Sub Main()
End Using
End Sub
End Module
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
By default, Code First will configure all entities to perform insert, update and delete commands using direct table
access. Starting in EF6 you can configure your Code First model to use stored procedures for some or all entities
in your model.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures();
Doing this will cause Code First to use some conventions to build the expected shape of the stored procedures in
the database.
Three stored procedures named <type_name>_Insert, <type_name>_Update and <type_name>_Delete
(for example, Blog_Insert, Blog_Update and Blog_Delete).
Parameter names correspond to the property names.
NOTE
If you use HasColumnName() or the Column attribute to rename the column for a given property then this name is
used for parameters instead of the property name.
The insert stored procedure will have a parameter for every property, except for those marked as store
generated (identity or computed). The stored procedure should return a result set with a column for each store
generated property.
The update stored procedure will have a parameter for every property, except for those marked with a store
generated pattern of 'Computed'. Some concurrency tokens require a parameter for the original value, see the
Concurrency Tokens section below for details. The stored procedure should return a result set with a column for
each computed property.
The delete stored procedure should have a parameter for the key value of the entity (or multiple parameters
if the entity has a composite key). Additionally, the delete procedure should also have parameters for any
independent association foreign keys on the target table (relationships that do not have corresponding foreign
key properties declared in the entity). Some concurrency tokens require a parameter for the original value, see
the Concurrency Tokens section below for details.
Using the following class as an example:
public class Blog
{
public int BlogId { get; set; }
public string Name { get; set; }
public string Url { get; set; }
}
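Given the conventions above, the procedures EF expects for this Blog class would be shaped roughly as follows (a hedged T-SQL sketch; the exact table name and column types depend on your model):

```sql
-- Insert: one parameter per non-store-generated property; the
-- store-generated key is returned as a result set.
CREATE PROCEDURE [dbo].[Blog_Insert]
  @Name nvarchar(max),
  @Url nvarchar(max)
AS
INSERT INTO [dbo].[Blogs] ([Name], [Url])
VALUES (@Name, @Url)
SELECT SCOPE_IDENTITY() AS BlogId
```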
You can override the default procedure names. This example renames the update stored procedure to modify_blog.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s =>
s.Update(u => u.HasName("modify_blog")));
This example renames all three stored procedures.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s =>
s.Update(u => u.HasName("modify_blog"))
.Delete(d => d.HasName("delete_blog"))
.Insert(i => i.HasName("insert_blog")));
In these examples the calls are chained together, but you can also use lambda block syntax.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s =>
{
s.Update(u => u.HasName("modify_blog"));
s.Delete(d => d.HasName("delete_blog"));
s.Insert(i => i.HasName("insert_blog"));
});
This example renames the parameter for the BlogId property on the update stored procedure.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s =>
s.Update(u => u.Parameter(b => b.BlogId, "blog_id")));
These calls are all chainable and composable. Here is an example that renames all three stored procedures and
their parameters.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s =>
s.Update(u => u.HasName("modify_blog")
.Parameter(b => b.BlogId, "blog_id")
.Parameter(b => b.Name, "blog_name")
.Parameter(b => b.Url, "blog_url"))
.Delete(d => d.HasName("delete_blog")
.Parameter(b => b.BlogId, "blog_id"))
.Insert(i => i.HasName("insert_blog")
.Parameter(b => b.Name, "blog_name")
.Parameter(b => b.Url, "blog_url")));
You can also change the name of the columns in the result set that contains database generated values.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s =>
s.Insert(i => i.Result(b => b.BlogId, "generated_blog_identity")));
When a foreign key is exposed through a navigation property, you can dot into the navigation property to configure the corresponding parameter.
modelBuilder
.Entity<Post>()
.MapToStoredProcedures(s =>
s.Insert(i => i.Parameter(p => p.Blog.BlogId, "blog_id")));
If you don’t have a navigation property on the dependent entity (that is, no Post.Blog property), you can use the
Navigation method to identify the other end of the relationship and then configure the parameters that
correspond to each of the key properties.
modelBuilder
.Entity<Post>()
.MapToStoredProcedures(s =>
s.Insert(i => i.Navigation<Blog>(
b => b.Posts,
c => c.Parameter(b => b.BlogId, "blog_id"))));
Concurrency Tokens
Update and delete stored procedures may also need to deal with concurrency:
If the entity contains concurrency tokens, the stored procedure can optionally have an output parameter that
returns the number of rows updated/deleted (rows affected). Such a parameter must be configured using the
RowsAffectedParameter method.
By default EF uses the return value from ExecuteNonQuery to determine how many rows were affected.
Specifying a rows affected output parameter is useful if you perform any logic in your sproc that would result
in the return value of ExecuteNonQuery being incorrect (from EF's perspective) at the end of execution.
For each concurrency token there will be a parameter named <property_name>_Original (for example,
Timestamp_Original ). This will be passed the original value of this property – the value when queried from the
database.
Concurrency tokens that are computed by the database – such as timestamps – will only have an original
value parameter.
Non-computed properties that are set as concurrency tokens will also have a parameter for the new
value in the update procedure. This uses the naming conventions already discussed for new values. An
example of such a token would be using a Blog's URL as a concurrency token; the new value is required
because the URL can be updated to a new value by your code (unlike a Timestamp token, which is only
updated by the database).
This is an example class and update stored procedure with a timestamp concurrency token.
Here is an example class and update stored procedure with a non-computed concurrency token.
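As a hedged sketch of the timestamp case described above (the [Timestamp] attribute marks the property as a database-computed concurrency token; the parameter names follow the conventions already described):

```csharp
using System.ComponentModel.DataAnnotations;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
    public string Url { get; set; }

    // Database-computed concurrency token: the update stored procedure
    // receives only an original-value parameter, @Timestamp_Original
    // (in addition to @BlogId, @Name, and @Url).
    [Timestamp]
    public byte[] Timestamp { get; set; }
}
```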
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s =>
s.Update(u => u.RowsAffectedParameter("rows_affected")));
For database computed concurrency tokens – where only the original value is passed – you can just use the
standard parameter renaming mechanism to rename the parameter for the original value.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s =>
s.Update(u => u.Parameter(b => b.Timestamp, "blog_timestamp")));
For non-computed concurrency tokens – where both the original and new value are passed – you can use an
overload of Parameter that allows you to supply a name for each parameter.
modelBuilder
.Entity<Blog>()
.MapToStoredProcedures(s => s.Update(u => u.Parameter(b => b.Url, "blog_url", "blog_original_url")));
Many-to-many relationships can be mapped to stored procedures with the following syntax.
modelBuilder
.Entity<Post>()
.HasMany(p => p.Tags)
.WithMany(t => t.Posts)
.MapToStoredProcedures();
If no other configuration is supplied then the following stored procedure shape is used by default.
Two stored procedures named <type_one><type_two>_Insert and <type_one><type_two>_Delete (for
example, PostTag_Insert and PostTag_Delete).
The parameters will be the key value(s) for each type. The name of each parameter being
<type_name>_<property_name> (for example, Post_PostId and Tag_TagId).
Here are example insert and delete stored procedures.
CREATE PROCEDURE [dbo].[PostTag_Insert]
@Post_PostId int,
@Tag_TagId int
AS
INSERT INTO [dbo].[Post_Tags] (Post_PostId, Tag_TagId)
VALUES (@Post_PostId, @Tag_TagId)
CREATE PROCEDURE [dbo].[PostTag_Delete]
@Post_PostId int,
@Tag_TagId int
AS
DELETE FROM [dbo].[Post_Tags]
WHERE Post_PostId = @Post_PostId AND Tag_TagId = @Tag_TagId
The names of these procedures and their parameters can also be customized.
modelBuilder
.Entity<Post>()
.HasMany(p => p.Tags)
.WithMany(t => t.Posts)
.MapToStoredProcedures(s =>
s.Insert(i => i.HasName("add_post_tag")
.LeftKeyParameter(p => p.PostId, "post_id")
.RightKeyParameter(t => t.TagId, "tag_id"))
.Delete(d => d.HasName("remove_post_tag")
.LeftKeyParameter(p => p.PostId, "post_id")
.RightKeyParameter(t => t.TagId, "tag_id")));
Code First Migrations
2/3/2019 • 11 minutes to read
Code First Migrations is the recommended way to evolve your application's database schema if you are using the
Code First workflow. Migrations provide a set of tools that allow you to:
1. Create an initial database that works with your EF model
2. Generate migrations to keep track of changes you make to your EF model
3. Keep your database up to date with those changes
The following walkthrough will provide an overview of Code First Migrations in Entity Framework. You can either
complete the entire walkthrough or skip to the topic you are interested in. The following topics are covered:
using System.Data.Entity;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;
namespace MigrationsDemo
{
public class BlogContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
}
Now that we have a model, it’s time to use it to perform data access. Update the Program.cs file with the code
shown below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace MigrationsDemo
{
class Program
{
static void Main(string[] args)
{
using (var db = new BlogContext())
{
db.Blogs.Add(new Blog { Name = "Another Blog " });
db.SaveChanges();
Run your application and you will see that a MigrationsDemo.BlogContext database is created for
you.
Enabling Migrations
It’s time to make some more changes to our model.
Let’s introduce a Url property to the Blog class.
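The updated Blog class would look like this (a sketch matching the walkthrough's model):

```csharp
public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
    public string Url { get; set; }   // newly added property
}
```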
If you were to run the application again you would get an InvalidOperationException stating The model backing
the 'BlogContext' context has changed since the database was created. Consider using Code First Migrations to
update the database ( http://go.microsoft.com/fwlink/?LinkId=238269 ).
As the exception suggests, it’s time to start using Code First Migrations. The first step is to enable migrations for
our context.
Run the Enable-Migrations command in Package Manager Console
This command has added a Migrations folder to our project. This new folder contains two files:
The Configuration class. This class allows you to configure how Migrations behaves for your context.
For this walkthrough we will just use the default configuration. Because there is just a single Code First
context in your project, Enable-Migrations has automatically filled in the context type this configuration
applies to.
An InitialCreate migration. This migration was generated because we already had Code First create a
database for us, before we enabled migrations. The code in this scaffolded migration represents the objects
that have already been created in the database. In our case that is the Blog table with BlogId and Name
columns. The filename includes a timestamp to help with ordering. If the database had not already been
created, this InitialCreate migration would not have been added to the project. Instead, the first time we
call Add-Migration the code to create these tables would be scaffolded to a new migration.
Multiple Models Targeting the Same Database
When using versions prior to EF6, only one Code First model could be used to generate/manage the schema of a
database. This is the result of a single __MigrationsHistory table per database with no way to identify which
entries belong to which model.
Starting with EF6, the Configuration class includes a ContextKey property. This acts as a unique identifier for
each Code First model. A corresponding column in the __MigrationsHistory table allows entries from multiple
models to share the table. By default, this property is set to the fully qualified name of your context.
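For example, a model can identify its own entries in the history table by setting the ContextKey in the migrations Configuration constructor (a sketch; the key string is arbitrary):

```csharp
using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<BlogContext>
{
    public Configuration()
    {
        // Distinguishes this model's rows in the shared __MigrationsHistory table.
        ContextKey = "MigrationsDemo.BlogContext";
    }
}
```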
namespace MigrationsDemo.Migrations
{
using System;
using System.Data.Entity.Migrations;
We could now edit or add to this migration but everything looks pretty good. Let’s use Update-Database to
apply this migration to the database.
Run the Update-Database command in Package Manager Console
Code First Migrations will compare the migrations in our Migrations folder with the ones that have been
applied to the database. It will see that the AddBlogUrl migration needs to be applied, and run it.
The MigrationsDemo.BlogContext database is now updated to include the Url column in the Blogs table.
Customizing Migrations
So far we’ve generated and run a migration without making any changes. Now let’s look at editing the code that
gets generated by default.
It’s time to make some more changes to our model. Let’s add a new Rating property to the Blog class.
We'll also add a Posts collection to the Blog class to form the other end of the relationship between Blog and
Post.
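A sketch of the updated classes (the Post members beyond Title, Content, and the Blog relationship are assumptions about the walkthrough's model):

```csharp
using System.Collections.Generic;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
    public string Url { get; set; }
    public int Rating { get; set; }                // new property
    public virtual List<Post> Posts { get; set; }  // new collection
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public int BlogId { get; set; }
    public virtual Blog Blog { get; set; }
}
```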
We'll use the Add-Migration command to let Code First Migrations scaffold its best guess at the migration for
us. We’re going to call this migration AddPostClass.
Run the Add-Migration AddPostClass command in Package Manager Console.
Code First Migrations did a pretty good job of scaffolding these changes, but there are some things we might
want to change:
1. First up, let’s add a unique index to the Posts.Title column (added on lines 22 and 29 in the code below).
2. We’re also adding a non-nullable Blogs.Rating column. If there is any existing data in the table, it will get
assigned the CLR default of the data type for the new column (Rating is an integer, so that would be 0). But we want
to specify a default value of 3 so that existing rows in the Blogs table will start with a decent rating. (You can
see the default value specified on line 24 of the code below.)
namespace MigrationsDemo.Migrations
{
using System;
using System.Data.Entity.Migrations;
Our edited migration is ready to go, so let’s use Update-Database to bring the database up-to-date. This time
let’s specify the –Verbose flag so that you can see the SQL that Code First Migrations is running.
Run the Update-Database –Verbose command in Package Manager Console.
We'll use the Add-Migration command to let Code First Migrations scaffold its best guess at the migration for
us.
Run the Add-Migration AddPostAbstract command in Package Manager Console.
The generated migration takes care of the schema changes, but we also want to pre-populate the Abstract column with the first 100 characters of content for each post. We can do this by dropping down to SQL and running an UPDATE statement after the column is added.
namespace MigrationsDemo.Migrations
{
using System;
using System.Data.Entity.Migrations;
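A sketch of the edited AddPostAbstract migration, assuming the column is added to a dbo.Posts table with a Content column (names taken from the walkthrough):

```csharp
public partial class AddPostAbstract : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Posts", "Abstract", c => c.String());

        // Pre-populate Abstract with the first 100 characters of Content
        Sql("UPDATE dbo.Posts SET Abstract = LEFT(Content, 100) WHERE Abstract IS NULL");
    }

    public override void Down()
    {
        DropColumn("dbo.Posts", "Abstract");
    }
}
```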
Our edited migration is looking good, so let’s use Update-Database to bring the database up-to-date. We’ll
specify the –Verbose flag so that we can see the SQL being run against the database.
Run the Update-Database –Verbose command in Package Manager Console.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.Entity;
using MigrationsDemo.Migrations;

namespace MigrationsDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            Database.SetInitializer(
                new MigrateDatabaseToLatestVersion<BlogContext, Configuration>());

            // Use the context as normal; any pending migrations
            // are applied the first time the context is used.
        }
    }
}
Now whenever our application runs it will first check if the database it is targeting is up-to-date, and apply any
pending migrations if it is not.
Automatic Code First Migrations
9/18/2018 • 6 minutes to read
Automatic Migrations allows you to use Code First Migrations without having a code file in your project for each
change you make. Not all changes can be applied automatically - for example column renames require the use of a
code-based migration.
NOTE
This article assumes you know how to use Code First Migrations in basic scenarios. If you don’t, then you’ll need to read
Code First Migrations before continuing.
using System.Data.Entity;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;

namespace MigrationsAutomaticDemo
{
    public class BlogContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
    }

    public class Blog
    {
        public int BlogId { get; set; }
        public string Name { get; set; }
    }
}
Now that we have a model it’s time to use it to perform data access. Update the Program.cs file with the code
shown below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MigrationsAutomaticDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var db = new BlogContext())
            {
                db.Blogs.Add(new Blog { Name = "Another Blog" });
                db.SaveChanges();

                foreach (var blog in db.Blogs)
                {
                    Console.WriteLine(blog.Name);
                }
            }
        }
    }
}
Run your application and you will see that a MigrationsAutomaticDemo.BlogContext database is
created for you.
Enabling Migrations
It’s time to make some more changes to our model.
Let’s introduce a Url property to the Blog class.
If you were to run the application again you would get an InvalidOperationException stating The model backing
the 'BlogContext' context has changed since the database was created. Consider using Code First Migrations to
update the database ( http://go.microsoft.com/fwlink/?LinkId=238269 ).
As the exception suggests, it’s time to start using Code First Migrations. Because we want to use automatic
migrations we’re going to specify the –EnableAutomaticMigrations switch.
Run the Enable-Migrations –EnableAutomaticMigrations command in Package Manager Console. This
command adds a Migrations folder to our project. The new folder contains one file:
The Configuration class. This class allows you to configure how Migrations behaves for your context. For
this walkthrough we will just use the default configuration. Because there is just a single Code First context
in your project, Enable-Migrations has automatically filled in the context type this configuration applies to.
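Because we passed the –EnableAutomaticMigrations switch, the scaffolded Configuration class enables automatic migrations; a sketch of roughly what it contains:

```csharp
internal sealed class Configuration : DbMigrationsConfiguration<BlogContext>
{
    public Configuration()
    {
        // Set because Enable-Migrations was run with -EnableAutomaticMigrations
        AutomaticMigrationsEnabled = true;
    }
}
```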
We'll also add a Posts collection to the Blog class to form the other end of the relationship between Blog and Post.
Now use Update-Database to bring the database up-to-date. This time let’s specify the –Verbose flag so that
you can see the SQL that Code First Migrations is running.
Run the Update-Database –Verbose command in Package Manager Console.
We could just run Update-Database to push these changes to the database. However, we're adding a non-nullable Blogs.Rating column; if there is any existing data in the table, it will be assigned the CLR default of the new column's data type (Rating is an integer, so that would be 0). But we want to specify a default value of 3 so that existing rows in the Blogs table start with a decent rating. Let’s use the Add-Migration command to write this change out to a code-based migration so that we can edit it. The Add-Migration command allows us to give these migrations a name; let’s just call ours AddBlogRating.
Run the Add-Migration AddBlogRating command in Package Manager Console.
In the Migrations folder we now have a new AddBlogRating migration. The migration filename is prefixed
with a timestamp to help with ordering. Let’s edit the generated code to specify a default value of 3 for
Blog.Rating.
The migration also has a code-behind file that captures some metadata. This metadata will allow Code First
Migrations to replicate the automatic migrations we performed before this code-based migration. This is
important if another developer wants to run our migrations or when it’s time to deploy our application.
namespace MigrationsAutomaticDemo.Migrations
{
using System;
using System.Data.Entity.Migrations;
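A sketch of the edited AddBlogRating migration with the default value applied (the table name dbo.Blogs is assumed from the walkthrough’s model):

```csharp
public partial class AddBlogRating : DbMigration
{
    public override void Up()
    {
        // defaultValue: 3 ensures existing rows start with a rating of 3
        AddColumn("dbo.Blogs", "Rating", c => c.Int(nullable: false, defaultValue: 3));
    }

    public override void Down()
    {
        DropColumn("dbo.Blogs", "Rating");
    }
}
```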
Our edited migration is looking good, so let’s use Update-Database to bring the database up-to-date.
Run the Update-Database command in Package Manager Console.
Now we can use Update-Database to get Code First Migrations to push this change to the database using an
automatic migration.
Run the Update-Database command in Package Manager Console.
Summary
In this walkthrough you saw how to use automatic migrations to push model changes to the database. You also
saw how to scaffold and run code-based migrations in between automatic migrations when you need more
control.
Code First Migrations with an existing database
9/13/2018 • 7 minutes to read
NOTE
EF 4.3 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 4.3. If you are
using an earlier version, some or all of the information does not apply.
This article covers using Code First Migrations with an existing database, one that wasn’t created by Entity
Framework.
NOTE
This article assumes you know how to use Code First Migrations in basic scenarios. If you don’t, then you’ll need to read
Code First Migrations before continuing.
Screencasts
If you'd rather watch a screencast than read this article, the following two videos cover the same content as this
article.
Video One: "Migrations - Under the Hood"
This screencast covers how migrations tracks and uses information about the model to detect model changes.
Video Two: "Migrations - Existing Databases"
Building on the concepts from the previous video, this screencast covers how to enable and use migrations with an
existing database.
NOTE
It is important to follow the rest of the steps in this topic before making any changes to your model that would require
changes to the database schema. The following steps require the model to be in-sync with the database schema.
Things to be aware of
There are a few things you need to be aware of when using Migrations against an existing database.
Default/calculated names may not match existing schema
Migrations explicitly specifies names for columns and tables when it scaffolds a migration. However, there are
other database objects for which Migrations calculates a default name when applying migrations. This includes
indexes and foreign key constraints. When targeting an existing schema, these calculated names may not match
what actually exists in your database.
Here are some examples of when you need to be aware of this:
If you used ‘Option One: Use existing schema as a starting point’ from Step 3:
If future changes in your model require changing or dropping one of the database objects that is named
differently, you will need to modify the scaffolded migration to specify the correct name. The Migrations APIs
have an optional Name parameter that allows you to do this. For example, your existing schema may have a
Post table with a BlogId foreign key column that has an index named IndexFk_BlogId. However, by default
Migrations would expect this index to be named IX_BlogId. If you make a change to your model that results in
dropping this index, you will need to modify the scaffolded DropIndex call to specify the IndexFk_BlogId name.
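As a sketch, the edit described above amounts to replacing the scaffolded column-based DropIndex call with the name-based overload (the table and index names follow the example):

```csharp
// Scaffolded by Migrations, assuming the default index name (IX_BlogId):
// DropIndex("dbo.Posts", new[] { "BlogId" });

// Edited to target the index that actually exists in the database:
DropIndex("dbo.Posts", name: "IndexFk_BlogId");
```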
If you used ‘Option Two: Use empty database as a starting point’ from Step 3:
Trying to run the Down method of the initial migration (that is, reverting to an empty database) against your
local database may fail because Migrations will try to drop indexes and foreign key constraints using the
incorrect names. This will only affect your local database since other databases will be created from scratch
using the Up method of the initial migration. If you want to downgrade your existing local database to an empty
state it is easiest to do this manually, either by dropping the database or dropping all the tables. After this initial
downgrade all database objects will be recreated with the default names, so this issue will not present itself
again.
If future changes in your model require changing or dropping one of the database objects that is named
differently, this will not work against your existing local database – since the names won’t match the defaults.
However, it will work against databases that were created ‘from scratch’ since they will have used the default
names chosen by Migrations. You could either make these changes manually on your local existing database, or
consider having Migrations recreate your database from scratch – as it will on other machines.
Databases created using the Up method of your initial migration may differ slightly from the local database
since the calculated default names for indexes and foreign key constraints will be used. You may also end up
with extra indexes as Migrations will create indexes on foreign key columns by default – this may not have been
the case in your original local database.
Not all database objects are represented in the model
Database objects that are not part of your model will not be handled by Migrations. This can include views, stored
procedures, permissions, tables that are not part of your model, additional indexes, etc.
Here are some examples of when you need to be aware of this:
Regardless of the option you chose in ‘Step 3’, if future changes in your model require changing or dropping
these additional objects Migrations will not know to make these changes. For example, if you drop a column
that has an additional index on it, Migrations will not know to drop the index. You will need to manually add this
to the scaffolded Migration.
If you used ‘Option Two: Use empty database as a starting point’, these additional objects will not be created by
the Up method of your initial migration. You can modify the Up and Down methods to take care of these
additional objects if you wish. For objects that are not natively supported in the Migrations API – such as views
– you can use the Sql method to run raw SQL to create/drop them.
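For example, the Up and Down methods of the initial migration could create and drop a view with raw SQL (the view here is purely hypothetical):

```csharp
public partial class InitialCreate : DbMigration
{
    public override void Up()
    {
        // ... scaffolded table creation ...

        // Hypothetical view that is not represented in the EF model
        Sql("CREATE VIEW dbo.BlogSummaries AS SELECT BlogId, Name FROM dbo.Blogs");
    }

    public override void Down()
    {
        Sql("DROP VIEW dbo.BlogSummaries");

        // ... scaffolded table drops ...
    }
}
```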
Customizing the migrations history table
1/6/2019 • 3 minutes to read
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
NOTE
This article assumes you know how to use Code First Migrations in basic scenarios. If you don’t, then you’ll need to read
Code First Migrations before continuing.
Words of precaution
Changing the migrations history table is powerful, but you need to be careful not to overdo it. The EF runtime currently
does not check whether the customized migrations history table is compatible with the runtime; if it is not, your
application may break at runtime or behave in unpredictable ways. This is even more important if you use multiple
contexts per database, in which case multiple contexts can use the same migrations history table to store
information about migrations.
NOTE
Typically when you configure EF models you don’t need to call base.OnModelCreating() from the overridden
OnModelCreating method, since DbContext.OnModelCreating() has an empty body. This is not the case when configuring
the migrations history table. Here, the first thing to do in your OnModelCreating() override is to call
base.OnModelCreating(). This configures the migrations history table in the default way, which you then tweak in the
override.
Let’s say you want to rename the migrations history table and move it to a custom schema called “admin”. In
addition, your DBA would like you to rename the MigrationId column to Migration_ID. You could achieve this by
creating the following class derived from HistoryContext:
using System.Data.Common;
using System.Data.Entity;
using System.Data.Entity.Migrations.History;

namespace CustomizableMigrationsHistoryTableSample
{
    public class MyHistoryContext : HistoryContext
    {
        public MyHistoryContext(DbConnection dbConnection, string defaultSchema)
            : base(dbConnection, defaultSchema)
        {
        }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
            modelBuilder.Entity<HistoryRow>().ToTable(tableName: "MigrationHistory", schemaName: "admin");
            modelBuilder.Entity<HistoryRow>().Property(h => h.MigrationId).HasColumnName("Migration_ID");
        }
    }
}
Once your custom HistoryContext is ready you need to make EF aware of it by registering it via code-based
configuration:
using System.Data.Entity;

namespace CustomizableMigrationsHistoryTableSample
{
    public class ModelConfiguration : DbConfiguration
    {
        public ModelConfiguration()
        {
            this.SetHistoryContext("System.Data.SqlClient",
                (connection, defaultSchema) => new MyHistoryContext(connection, defaultSchema));
        }
    }
}
That’s pretty much it. Now you can go to the Package Manager Console and run Enable-Migrations, Add-Migration, and
finally Update-Database. This should add a migrations history table to the database, configured according to the
details you specified in your HistoryContext-derived class.
Using migrate.exe
9/30/2018 • 4 minutes to read
Code First Migrations can be used to update a database from inside Visual Studio, but can also be executed via the
command line tool migrate.exe. This page gives a quick overview of how to use migrate.exe to execute
migrations against a database.
NOTE
This article assumes you know how to use Code First Migrations in basic scenarios. If you don’t, then you’ll need to read
Code First Migrations before continuing.
Copy migrate.exe
When you install Entity Framework using NuGet, migrate.exe will be inside the tools folder of the downloaded
package: <project folder>\packages\EntityFramework.<version>\tools
Once you have migrate.exe, you need to copy it to the location of the assembly that contains your migrations.
If your application targets .NET 4, and not 4.5, then you will also need to copy Redirect.config into that location
and rename it to migrate.exe.config. This is so that migrate.exe gets the correct binding redirects to be able to
locate the Entity Framework assembly.
NOTE
migrate.exe doesn't support x64 assemblies.
Once you have moved migrate.exe to the correct folder then you should be able to use it to execute migrations
against the database. All the utility is designed to do is execute migrations. It cannot generate migrations or create
a SQL script.
See options
Migrate.exe /?
The above displays the help page associated with the utility. Note that you will need EntityFramework.dll in the
same location as migrate.exe for this to work.
When running migrate.exe, the only mandatory parameter is the assembly that contains the migrations you are
trying to run; convention-based settings are used for everything else if you do not specify a configuration file.
If you want to run migrations up to a specific migration, then you can specify the name of the migration. This will
run all previous migrations as required until getting to the migration specified.
If your assembly has dependencies or reads files relative to the working directory, then you will need to set
startupDirectory.
If you have multiple migration configuration classes (classes inheriting from DbMigrationsConfiguration), then you
need to specify which one is to be used for this execution. This is done by providing the optional second parameter
without a switch.
If you wish to specify a connection string at the command line then you must also provide the provider name. Not
specifying the provider name will cause an exception.
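Putting those options together, some typical invocations might look like the following (the assembly, configuration, and migration names are placeholders):

```
rem Run all pending migrations in MyApp.dll
migrate.exe MyApp.dll /startUpConfigurationFile=MyApp.dll.config

rem Migrate up (or down) to a specific named migration
migrate.exe MyApp.dll /targetMigration=AddBlogRating

rem Pick one of several DbMigrationsConfiguration classes (optional second parameter)
migrate.exe MyApp.dll MySecondConfiguration /startUpDirectory=C:\MyApp

rem Supply a connection string -- the provider name is then mandatory
migrate.exe MyApp.dll /connectionString="Server=.;Database=Blogging;Trusted_Connection=True" /connectionProviderName="System.Data.SqlClient"
```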
Common Problems
ERROR MESSAGE: Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
SOLUTION: This typically means that you are running a .NET 4 application without the Redirect.config file. You need to copy the Redirect.config to the same location as migrate.exe and rename it to migrate.exe.config.

ERROR MESSAGE: Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'EntityFramework, Version=4.4.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
SOLUTION: This exception means that you are running a .NET 4.5 application with the Redirect.config copied to the migrate.exe location. If your app is .NET 4.5 then you do not need the config file with the redirects inside. Delete the migrate.exe.config file.

ERROR MESSAGE: ERROR: Unable to update database to match the current model because there are pending changes and automatic migration is disabled. Either write the pending model changes to a code-based migration or enable automatic migration. Set DbMigrationsConfiguration.AutomaticMigrationsEnabled to true to enable automatic migration.
SOLUTION: This error occurs if you run migrate.exe when you haven't created a migration to cope with changes made to the model, and the database does not match the model. Adding a property to a model class and then running migrate.exe without creating a migration to upgrade the database is an example of this.

ERROR MESSAGE: ERROR: Type is not resolved for member 'System.Data.Entity.Migrations.Design.ToolingFacade+UpdateRunner,EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
SOLUTION: This error can be caused by specifying an incorrect startup directory. This must be the location of migrate.exe.

ERROR MESSAGE: Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object. at System.Data.Entity.Migrations.Console.Program.Main(String[] args)
SOLUTION: This can be caused by not specifying a required parameter for a scenario that you are using. For example, specifying a connection string without specifying the provider name.

ERROR MESSAGE: ERROR: More than one migrations configuration type was found in the assembly 'ClassLibrary1'. Specify the name of the one to use.
SOLUTION: As the error states, there is more than one configuration class in the given assembly. You must use the /configurationType switch to specify which to use.

ERROR MESSAGE: ERROR: Could not load file or assembly ‘<assemblyName>’ or one of its dependencies. The given assembly name or codebase was invalid. (Exception from HRESULT: 0x80131047)
SOLUTION: This can be caused by specifying an assembly name incorrectly, or by not having migrate.exe in the same location as the assembly.

ERROR MESSAGE: ERROR: Could not load file or assembly ‘<assemblyName>' or one of its dependencies. An attempt was made to load a program with an incorrect format.
SOLUTION: This happens if you are trying to run migrate.exe against an x64 application. EF 5.0 and below will only work on x86.
Code First Migrations in Team Environments
12/4/2018 • 14 minutes to read
NOTE
This article assumes you know how to use Code First Migrations in basic scenarios. If you don’t, then you’ll need to read
Code First Migrations before continuing.
Screencasts
If you'd rather watch a screencast than read this article, the following two videos cover the same content as this
article.
Video One: "Migrations - Under the Hood"
This screencast covers how migrations tracks and uses information about the model to detect model changes.
Video Two: "Migrations - Team Environments"
Building on the concepts from the previous video, this screencast covers the issues that arise in a team
environment and how to solve them.
The current model is calculated from your code (1). The required database objects are then calculated by the
model differ (2) – since this is the first migration the model differ just uses an empty model for the comparison.
The required changes are passed to the code generator to build the required migration code (3) which is then
added to your Visual Studio solution (4).
In addition to the actual migration code that is stored in the main code file, migrations also generates some
additional code-behind files. These files are metadata that is used by migrations and are not something you should
edit. One of these files is a resource file (.resx) that contains a snapshot of the model at the time the migration was
generated. You’ll see how this is used in the next step.
At this point you would probably run Update-Database to apply your changes to the database, and then go
about implementing other areas of your application.
Subsequent migrations
Later you come back and make some changes to your model – in our example we’ll add a Url property to Blog.
You would then issue a command such as Add-Migration AddUrl to scaffold a migration to apply the
corresponding database changes. The high level steps that this command performs are pictured below.
Just like last time, the current model is calculated from code (1). However, this time there are existing migrations
so the previous model is retrieved from the latest migration (2). These two models are diffed to find the required
database changes (3) and then the process completes as before.
This same process is used for any further migrations that you add to the project.
Why bother with the model snapshot?
You may be wondering why EF bothers with the model snapshot – why not just look at the database. If so, read on.
If you’re not interested then you can skip this section.
There are a number of reasons EF keeps the model snapshot around:
It allows your database to drift from the EF model. These changes can be made directly in the database, or you
can change the scaffolded code in your migrations to make the changes. Here are a couple of examples of this
in practice:
You want to add Inserted and Updated columns to one or more of your tables, but you don’t want to
include these columns in the EF model. If migrations looked at the database it would continually try to
drop these columns every time you scaffolded a migration. Using the model snapshot, EF will only ever
detect legitimate changes to the model.
You want to change the body of a stored procedure used for updates to include some logging. If
migrations looked at this stored procedure from the database it would continually try and reset it back to
the definition that EF expects. By using the model snapshot, EF will only ever scaffold code to alter the
stored procedure when you change the shape of the procedure in the EF model.
These same principles apply to adding extra indexes, including extra tables in your database, mapping EF
to a database view that sits over a table, etc.
The EF model contains more than just the shape of the database. Having the entire model allows migrations to
look at information about the properties and classes in your model and how they map to the columns and
tables. This information allows migrations to be more intelligent in the code that it scaffolds. For example, if you
change the name of the column that a property maps to migrations can detect the rename by seeing that it’s
the same property – something that can’t be done if you only have the database schema.
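For instance, when a property's column mapping changes, migrations can scaffold a rename instead of a drop-and-add (the table and column names here are illustrative):

```csharp
public override void Up()
{
    // Scaffolded because migrations saw the same property mapped to a
    // new column name, rather than one column dropped and another added.
    RenameColumn("dbo.Blogs", "Url", "BlogUrl");
}
```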
Developer #1 and Developer #2 now make some changes to the EF model in their local code base. Developer #1
adds a Rating property to Blog – and generates an AddRating migration to apply the changes to the database.
Developer #2 adds a Readers property to Blog – and generates the corresponding AddReaders migration. Both
developers run Update-Database, to apply the changes to their local databases, and then continue developing
the application.
NOTE
Migrations are prefixed with a timestamp, so our graphic represents that the AddReaders migration from Developer #2
comes after the AddRating migration from Developer #1. Whether developer #1 or #2 generated the migration first makes
no difference to the issues of working in a team, or the process for merging them that we’ll look at in the next section.
It’s a lucky day for Developer #1 as they happen to submit their changes first. Because no one else has checked in
since they synced their repository, they can just submit their changes without performing any merging.
Now it’s time for Developer #2 to submit. They aren’t so lucky. Because someone else has submitted changes since
they synced, they will need to pull down the changes and merge. The source control system will likely be able to
automatically merge the changes at the code level since they are very simple. The state of Developer #2’s local
repository after syncing is depicted in the following graphic.
At this stage Developer #2 can run Update-Database which will detect the new AddRating migration (which
hasn’t been applied to Developer #2’s database) and apply it. Now the Rating column is added to the Blogs table
and the database is in sync with the model.
There are a couple of problems though:
1. Although Update-Database will apply the AddRating migration, it will also raise a warning: Unable to update
database to match the current model because there are pending changes and automatic migration is
disabled… The problem is that the model snapshot stored in the last migration (AddReaders) is missing the
Rating property on Blog (since it wasn’t part of the model when the migration was generated). Code First
detects that the model in the last migration doesn’t match the current model and raises the warning.
2. Running the application would result in an InvalidOperationException stating that “The model backing the
'BloggingContext' context has changed since the database was created. Consider using Code First Migrations
to update the database…” Again, the problem is the model snapshot stored in the last migration doesn’t match
the current model.
3. Finally, we would expect running Add-Migration now would generate an empty migration (since there are no
changes to apply to the database). But because migrations compares the current model to the one from the last
migration (which is missing the Rating property) it will actually scaffold another AddColumn call to add in the
Rating column. Of course, this migration would fail during Update-Database because the Rating column
already exists.
This video and step-by-step walkthrough provide an introduction to Model First development using Entity
Framework. Model First allows you to create a new model using the Entity Framework Designer and then
generate a database schema from the model. The model is stored in an EDMX file (.edmx extension) and can be
viewed and edited in the Entity Framework Designer. The classes that you interact with in your application are
automatically generated from the EDMX file.
Pre-Requisites
You will need to have Visual Studio 2010 or Visual Studio 2012 installed to complete this walkthrough.
If you are using Visual Studio 2010, you will also need to have NuGet installed.
2. Create Model
We’re going to make use of Entity Framework Designer, which is included as part of Visual Studio, to create our
model.
Project -> Add New Item…
Select Data from the left menu and then ADO.NET Entity Data Model
Enter BloggingModel as the name and click OK, this launches the Entity Data Model Wizard
Select Empty Model and click Finish
The Entity Framework Designer is opened with a blank model. Now we can start adding entities, properties and
associations to the model.
Right-click on the design surface and select Properties
In the Properties window change the Entity Container Name to BloggingContext. This is the name of
the derived context that will be generated for you; the context represents a session with the database,
allowing us to query and save data.
Right-click on the design surface and select Add New -> Entity…
Enter Blog as the entity name and BlogId as the key name and click OK
Right-click on the new entity on the design surface and select Add New -> Scalar Property, enter Name
as the name of the property.
Repeat this process to add a Url property.
Right-click on the Url property on the design surface and select Properties; in the Properties window
change the Nullable setting to True. This allows us to save a Blog to the database without assigning it a Url.
Using the techniques you just learnt, add a Post entity with a PostId key property
Add Title and Content scalar properties to the Post entity
Now that we have a couple of entities, it’s time to add an association (or relationship) between them.
Right-click on the design surface and select Add New -> Association…
Make one end of the relationship point to Blog with a multiplicity of One and the other end point to Post
with a multiplicity of Many. This means that a Blog has many Posts and a Post belongs to one Blog.
Ensure the Add foreign key properties to 'Post' Entity box is checked and click OK
We now have a simple model that we can generate a database from and use to read and write data.
class Program
{
    static void Main(string[] args)
    {
        using (var db = new BloggingContext())
        {
            // Create and save a new Blog
            Console.Write("Enter a name for a new Blog: ");
            var name = Console.ReadLine();
            db.Blogs.Add(new Blog { Name = name });
            db.SaveChanges();
        }
    }
}
Right-click on the Username property on the design surface and select Properties; in the Properties
window change the MaxLength setting to 50. This restricts the data that can be stored in Username to 50
characters.
Add a DisplayName scalar property to the User entity
We now have an updated model and we are ready to update the database to accommodate our new User entity
type.
Right-click on the design surface and select Generate Database from Model…, Entity Framework will
calculate a script to recreate a schema based on the updated model.
Click Finish
You may receive warnings about overwriting the existing DDL script and the mapping and storage parts of the
model, click Yes for both these warnings
The updated SQL script to create the database is opened for you
The script that is generated will drop all existing tables and then recreate the schema from scratch. This may
work for local development but is not viable for pushing changes to a database that has already been
deployed. If you need to publish changes to a database that has already been deployed, you will need to edit
the script or use a schema compare tool to calculate a migration script.
Right-click on the script and select Execute, you will be prompted to specify the database to connect to, specify
LocalDB or SQL Server Express, depending on which version of Visual Studio you are using
Summary
In this walkthrough we looked at Model First development, which allowed us to create a model in the EF Designer
and then generate a database from that model. We then used the model to read and write some data from the
database. Finally, we updated the model and then recreated the database schema to match the model.
Database First
9/18/2018 • 6 minutes to read
This video and step-by-step walkthrough provide an introduction to Database First development using Entity
Framework. Database First allows you to reverse engineer a model from an existing database. The model is stored
in an EDMX file (.edmx extension) and can be viewed and edited in the Entity Framework Designer. The classes
that you interact with in your application are automatically generated from the EDMX file.
Pre-Requisites
You will need to have at least Visual Studio 2010 or Visual Studio 2012 installed to complete this walkthrough.
If you are using Visual Studio 2010, you will also need to have NuGet installed.
The new database will now appear in Server Explorer. Right-click on it and select New Query
Copy the following SQL into the new query, then right-click on the query and select Execute
CREATE TABLE [dbo].[Blogs] (
[BlogId] INT IDENTITY (1, 1) NOT NULL,
[Name] NVARCHAR (200) NULL,
[Url] NVARCHAR (200) NULL,
CONSTRAINT [PK_dbo.Blogs] PRIMARY KEY CLUSTERED ([BlogId] ASC)
);
Click the checkbox next to ‘Tables’ to import all tables and click ‘Finish’
Once the reverse engineer process completes the new model is added to your project and opened up for you to
view in the Entity Framework Designer. An App.config file has also been added to your project with the
connection details for the database.
class Program
{
    static void Main(string[] args)
    {
        using (var db = new BloggingContext())
        {
            // Create and save a new Blog
            Console.Write("Enter a name for a new Blog: ");
            var name = Console.ReadLine();

            var blog = new Blog { Name = name };
            db.Blogs.Add(blog);
            db.SaveChanges();
        }
    }
}
The model is now updated to include a new User entity that maps to the Users table we added to the database.
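For reference, the Users table added in the step above can be created with a script along these lines (a sketch; match the column names and types to the table you actually created):

```sql
CREATE TABLE [dbo].[Users] (
    [Username]    NVARCHAR (50)  NOT NULL,
    [DisplayName] NVARCHAR (MAX) NULL,
    CONSTRAINT [PK_dbo.Users] PRIMARY KEY CLUSTERED ([Username] ASC)
);
```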
Summary
In this walkthrough we looked at Database First development, which allowed us to create a model in the EF
Designer based on an existing database. We then used that model to read and write some data from the database.
Finally, we updated the model to reflect changes we made to the database schema.
Complex Types - EF Designer
9/13/2018 • 6 minutes to read
This topic shows how to map complex types with the Entity Framework Designer (EF Designer) and how to query
for entities that contain properties of complex type.
The following image shows the main windows that are used when working with the EF Designer.
NOTE
When you build the conceptual model, warnings about unmapped entities and associations may appear in the Error List. You
can ignore these warnings because after you choose to generate the database from the model, the errors will go away.
A new complex type with the selected properties is added to the Model Browser. The complex type is given a
default name.
A complex property of the newly created type replaces the selected properties. All property mappings are
preserved.
NOTE
To delete a column mapping, select the column whose mapping you want to delete, and then click the Value/Property field. Then, select
Delete from the drop-down list.
Click OK. The function import entry is created in the conceptual model.
Customize Column Mapping for Function Import
Right-click the function import in the Model Browser and select Function Import Mapping. The Mapping
Details window appears and shows the default mapping for the function import. Arrows indicate the mappings
between column values and property values. By default, the column names are assumed to be the same as the
complex type's property names. The default column names appear in gray text.
If necessary, change the column names to match the column names that are returned by the stored procedure
that corresponds to the function import.
NOTE
EF5 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 5. If you are using
an earlier version, some or all of the information does not apply.
This video and step-by-step walkthrough show how to use enum types with the Entity Framework Designer and
how to use enums in a LINQ query.
This walkthrough will use Model First to create a new database, but the EF Designer can also be used with the
Database First workflow to map to an existing database.
Enum support was introduced in Entity Framework 5. To use the new features like enums, spatial data types, and
table-valued functions, you must target .NET Framework 4.5. Visual Studio 2012 targets .NET 4.5 by default.
In Entity Framework, an enumeration can have the following underlying types: Byte, Int16, Int32, Int64, or
SByte.
Pre-Requisites
You will need to have Visual Studio 2012, Ultimate, Premium, Professional, or Web Express edition installed to
complete this walkthrough.
2. In the Add Enum dialog box type DepartmentNames for the Enum Type Name, change the Underlying
Type to Int32, and then add the following members to the type: English, Math, and Economics
3. Press OK
4. Save the model and build the project
NOTE
When you build, warnings about unmapped entities and associations may appear in the Error List. You can ignore
these warnings because after we choose to generate the database from the model, the errors will go away.
If you look at the Properties window, you will notice that the type of the Name property was changed to
DepartmentNames and the newly added enum type was added to the list of types.
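Under the covers, the designer now treats Name as an enum property. A C# declaration equivalent to the DepartmentNames type created above would look roughly like this (a sketch; the generated code differs in detail):

```csharp
// Sketch of the DepartmentNames enum created in the Add Enum dialog.
// The underlying type is Int32; member values default to 0, 1, 2.
public enum DepartmentNames : int
{
    English = 0,
    Math = 1,
    Economics = 2
}
```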
If you switch to the Model Browser window, you will see that the type was also added to the Enum Types node.
NOTE
You can also add new enum types from this window by clicking the right mouse button and selecting Add Enum Type.
Once the type is created, it will appear in the list of types and you will be able to associate it with a property
context.SaveChanges();
Console.WriteLine(
"DepartmentID: {0} and Name: {1}",
department.DepartmentID,
department.Name);
}
Compile and run the application. The program produces the following output:
To view data in the database, right-click on the database name in SQL Server Object Explorer and select Refresh.
Then, click the right mouse button on the table and select View Data.
Summary
In this walkthrough we looked at how to map enum types using the Entity Framework Designer and how to use
enums in code.
Spatial - EF Designer
9/18/2018 • 5 minutes to read
NOTE
EF5 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 5. If you are using
an earlier version, some or all of the information does not apply.
This video and step-by-step walkthrough show how to map spatial types with the Entity Framework Designer and
how to use a LINQ query to find the distance between two locations.
This walkthrough will use Model First to create a new database, but the EF Designer can also be used with the
Database First workflow to map to an existing database.
Spatial type support was introduced in Entity Framework 5. Note that to use the new features like spatial types,
enums, and table-valued functions, you must target .NET Framework 4.5. Visual Studio 2012 targets .NET 4.5 by
default.
To use spatial data types you must also use an Entity Framework provider that has spatial support. See provider
support for spatial types for more information.
There are two main spatial data types: geography and geometry. The geography data type stores ellipsoidal data
(for example, GPS latitude and longitude coordinates). The geometry data type represents a Euclidean (flat)
coordinate system.
Pre-Requisites
You will need to have Visual Studio 2012, Ultimate, Premium, Professional, or Web Express edition installed to
complete this walkthrough.
NOTE
When you build, warnings about unmapped entities and associations may appear in the Error List. You can ignore
these warnings because after we choose to generate the database from the model, the errors will go away.
context.Universities.Add(new University()
{
    Name = "School of Fine Art",
    Location = DbGeography.FromText("POINT(-122.335197 47.646711)"),
});

context.SaveChanges();

// Find the University closest to the given location
var myLocation = DbGeography.FromText("POINT(-122.296623 47.640405)");

var university = (from u in context.Universities
                  orderby u.Location.Distance(myLocation)
                  select u).FirstOrDefault();

Console.WriteLine(
    "The closest University to you is: {0}.",
    university.Name);
}
Compile and run the application. The program produces the following output:
To view data in the database, right-click on the database name in SQL Server Object Explorer and select Refresh.
Then, click the right mouse button on the table and select View Data.
Summary
In this walkthrough we looked at how to map spatial types using the Entity Framework Designer and how to use
spatial types in code.
Designer Entity Splitting
9/13/2018 • 4 minutes to read
This walkthrough shows how to map an entity type to two tables by modifying a model with the Entity Framework
Designer (EF Designer). You can map an entity to multiple tables when the tables share a common key. The
concepts that apply to mapping an entity type to two tables are easily extended to mapping an entity type to more
than two tables.
The following image shows the main windows that are used when working with the EF Designer.
Prerequisites
Visual Studio 2012 or Visual Studio 2010, Ultimate, Premium, Professional, or Web Express edition.
The next steps require the Mapping Details window. If you cannot see this window, right-click the design surface
and select Mapping Details.
Select the Person entity type and click <Add a Table or View> in the Mapping Details window.
Select PersonInfo from the drop-down list. The Mapping Details window is updated with default column
mappings; these are fine for our scenario.
The Person entity type is now mapped to the Person and PersonInfo tables.
context.People.Add(person);
context.SaveChanges();
The following SELECT was executed as a result of enumerating the people in the database. It combines the
data from the Person and PersonInfo table.
Designer Table Splitting
9/13/2018 • 3 minutes to read
This walkthrough shows how to map multiple entity types to a single table by modifying a model with the Entity
Framework Designer (EF Designer).
One reason you may want to use table splitting is to delay the loading of some properties when using lazy
loading to load your objects. You can separate the properties that might contain a very large amount of data into a
separate entity and only load it when required.
The following image shows the main windows that are used when working with the EF Designer.
Prerequisites
To complete this walkthrough, you will need:
A recent version of Visual Studio.
The School sample database.
NOTE
The Person entity does not contain any properties that may contain a large amount of data; it is just used as an example.
Right-click an empty area of the design surface, point to Add New, and click Entity. The New Entity dialog
box appears.
Type HireInfo for the Entity name and PersonID for the Key Property name.
Click OK.
A new entity type is created and displayed on the design surface.
Select the HireDate property of the Person entity type and press Ctrl+X keys.
Select the HireInfo entity and press Ctrl+V keys.
Create an association between Person and HireInfo. To do this, right-click an empty area of the design surface,
point to Add New, and click Association.
The Add Association dialog box appears. The PersonHireInfo name is given by default.
Specify multiplicity 1(One) on both ends of the relationship.
Press OK.
The next step requires the Mapping Details window. If you cannot see this window, right-click the design surface
and select Mapping Details.
Select the HireInfo entity type and click <Add a Table or View> in the Mapping Details window.
Select Person from the <Add a Table or View> field drop-down list. The list contains tables or views to
which the selected entity can be mapped. The appropriate properties should be mapped by default.
The following SELECT was executed as a result of executing context.People.FirstOrDefault() and selects just
the columns mapped to Person.
The following SELECT was executed as a result of accessing the navigation property
existingPerson.Instructor and selects just the columns mapped to HireInfo.
Designer TPH Inheritance
9/13/2018 • 5 minutes to read
This step-by-step walkthrough shows how to implement table-per-hierarchy (TPH) inheritance in your conceptual
model with the Entity Framework Designer (EF Designer). TPH inheritance uses one database table to maintain
data for all of the entity types in an inheritance hierarchy.
In this walkthrough we will map the Person table to three entity types: Person (the base type), Student (derives
from Person), and Instructor (derives from Person). We'll create a conceptual model from the database (Database
First) and then alter the model to implement the TPH inheritance using the EF Designer.
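Once TPH is configured, the generated classes form a plain inheritance hierarchy, with all three types persisted to the single Person table. A sketch of the shape (property names follow the School sample; the generated code will differ in detail):

```csharp
using System;

// Under TPH, all three types map to the one Person table; a discriminator
// column tells EF which CLR type each row represents.
public class Person
{
    public int PersonID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Instructor : Person
{
    public DateTime? HireDate { get; set; }
}

public class Student : Person
{
    public DateTime? EnrollmentDate { get; set; }
}
```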
It is possible to map TPH inheritance using Model First, but you would have to write your own database
generation workflow, which is complex. You would then assign this workflow to the Database Generation
Workflow property in the EF Designer. An easier alternative is to use Code First.
Prerequisites
To complete this walkthrough, you will need:
A recent version of Visual Studio.
The School sample database.
Create a Model
Right-click the project name in Solution Explorer, and select Add -> New Item.
Select Data from the left menu and then select ADO.NET Entity Data Model in the Templates pane.
Enter TPHModel.edmx for the file name, and then click Add.
In the Choose Model Contents dialog box, select Generate from database, and then click Next.
Click New Connection. In the Connection Properties dialog box, enter the server name (for example,
(localdb)\mssqllocaldb), select the authentication method, type School for the database name, and then
click OK. The Choose Your Data Connection dialog box is updated with your database connection setting.
In the Choose Your Database Objects dialog box, under the Tables node, select the Person table.
Click Finish.
The Entity Designer, which provides a design surface for editing your model, is displayed. All the objects that you
selected in the Choose Your Database Objects dialog box are added to the model.
That is how the Person table looks in the database.
Repeat these steps for the Student entity type, but make the condition equal to the Student value.
The reason we wanted to remove the Discriminator property is that a table column cannot be mapped
more than once. This column will be used for conditional mapping, so it cannot be used for property
mapping as well. The only way it can be used for both is if a condition uses an Is Null or Is Not
Null comparison.
Table-per-hierarchy inheritance is now implemented.
This step-by-step walkthrough shows how to implement table-per-type (TPT) inheritance in your model using the
Entity Framework Designer (EF Designer). Table-per-type inheritance uses a separate table in the database to
maintain data for non-inherited properties and key properties for each type in the inheritance hierarchy.
In this walkthrough we will map the Course (base type), OnlineCourse (derives from Course),
and OnsiteCourse (derives from Course) entities to tables with the same names. We'll create a model from the
database and then alter the model to implement the TPT inheritance.
You can also start with Model First and then generate the database from the model. The EF Designer uses the
TPT strategy by default, so any inheritance in the model will be mapped to separate tables.
Prerequisites
To complete this walkthrough, you will need:
A recent version of Visual Studio.
The School sample database.
Create a Model
Right-click the project in Solution Explorer, and select Add -> New Item.
Select Data from the left menu and then select ADO.NET Entity Data Model in the Templates pane.
Enter TPTModel.edmx for the file name, and then click Add.
In the Choose Model Contents dialog box, select Generate from database, and then click Next.
Click New Connection. In the Connection Properties dialog box, enter the server name (for example,
(localdb)\mssqllocaldb), select the authentication method, type School for the database name, and then
click OK. The Choose Your Data Connection dialog box is updated with your database connection setting.
In the Choose Your Database Objects dialog box, under the Tables node, select the Department, Course,
OnlineCourse, and OnsiteCourse tables.
Click Finish.
The Entity Designer, which provides a design surface for editing your model, is displayed. All the objects that you
selected in the Choose Your Database Objects dialog box are added to the model.
This step-by-step walkthrough shows how to use the Entity Framework Designer (EF Designer) to import stored
procedures into a model and then call the imported stored procedures to retrieve results.
Note that Code First does not support mapping to stored procedures or functions. However, you can call stored
procedures or functions by using the System.Data.Entity.DbSet.SqlQuery method. For example:
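A minimal sketch of that pattern, using the GetStudentGrades procedure from this walkthrough (the SchoolEntities context name is an assumption, and running this requires the School database, so treat it as a pattern rather than verified code):

```csharp
using System;
using System.Linq;

// ...inside your application code:
using (var context = new SchoolEntities()) // assumed generated context name
{
    // DbSet<T>.SqlQuery materializes the rows returned by the procedure as
    // StudentGrade entities; the returned columns must match the entity's
    // scalar properties.
    var grades = context.StudentGrades
        .SqlQuery("EXEC dbo.GetStudentGrades @StudentID",
                  new System.Data.SqlClient.SqlParameter("@StudentID", 2))
        .ToList();

    foreach (var grade in grades)
    {
        Console.WriteLine("StudentID: {0} Grade: {1}", grade.StudentID, grade.Grade);
    }
}
```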
Prerequisites
To complete this walkthrough, you will need:
A recent version of Visual Studio.
The School sample database.
Create a Model
Right-click the project in Solution Explorer and select Add -> New Item.
Select Data from the left menu and then select ADO.NET Entity Data Model in the Templates pane.
Enter EFwithSProcsModel.edmx for the file name, and then click Add.
In the Choose Model Contents dialog box, select Generate from database, and then click Next.
Click New Connection.
In the Connection Properties dialog box, enter the server name (for example, (localdb)\mssqllocaldb),
select the authentication method, type School for the database name, and then click OK.
The Choose Your Data Connection dialog box is updated with your database connection setting.
In the Choose Your Database Objects dialog box, check the Tables checkbox to select all the tables.
Also, select the following stored procedures under the Stored Procedures and Functions node:
GetStudentGrades and GetDepartmentName.
Starting with Visual Studio 2012, the EF Designer supports bulk import of stored procedures. The Import
selected stored procedures and functions into the entity model option is checked by default.
Click Finish.
By default, the result shape of each imported stored procedure or function that returns more than one column will
automatically become a new complex type. In this example we want to map the results of the GetStudentGrades
function to the StudentGrade entity and the results of the GetDepartmentName to none (none is the default
value).
For a function import to return an entity type, the columns returned by the corresponding stored procedure must
exactly match the scalar properties of the returned entity type. A function import can also return collections of
simple types, complex types, or no value.
Right-click the design surface and select Model Browser.
In Model Browser, select Function Imports, and then double-click the GetStudentGrades function.
In the Edit Function Import dialog box, select Entities and choose StudentGrade.
The Function Import is composable checkbox at the top of the Function Imports dialog will let you map to
composable functions. If you do check this box, only composable functions (Table-valued Functions) will appear
in the Stored Procedure / Function Name drop-down list. If you do not check this box, only non-composable
functions will be shown in the list.
// Call GetDepartmentName.
// Declare the name variable that will contain the value returned by the output parameter.
ObjectParameter name = new ObjectParameter("Name", typeof(String));
context.GetDepartmentName(1, name);
Console.WriteLine("The department name is {0}", name.Value);
Compile and run the application. The program produces the following output:
StudentID: 2
Student grade: 4.00
StudentID: 2
Student grade: 3.50
The department name is Engineering
Output Parameters
If output parameters are used, their values will not be available until the results have been read completely. This is
due to the underlying behavior of DbDataReader, see Retrieving Data Using a DataReader for more details.
Designer CUD Stored Procedures
9/13/2018 • 5 minutes to read
This step-by-step walkthrough shows how to map the create/insert, update, and delete (CUD) operations of an
entity type to stored procedures using the Entity Framework Designer (EF Designer). By default, Entity
Framework automatically generates the SQL statements for the CUD operations, but you can also map stored
procedures to these operations.
Note that Code First does not support mapping to stored procedures or functions. However, you can call stored
procedures or functions by using the System.Data.Entity.DbSet.SqlQuery method. For example:
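For illustration, a minimal sketch (the SchoolEntities context name is an assumption, and running this requires the School database):

```csharp
using System.Linq;

// ...inside your application code:
using (var context = new SchoolEntities()) // assumed generated context name
{
    // Database.SqlQuery<T> materializes results into any type, not just
    // entities; results are not tracked by the context.
    var lastNames = context.Database
        .SqlQuery<string>("SELECT LastName FROM dbo.Person")
        .ToList();
}
```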
Prerequisites
To complete this walkthrough, you will need:
A recent version of Visual Studio.
The School sample database.
Create a Model
Right-click the project name in Solution Explorer, and select Add -> New Item.
Select Data from the left menu and then select ADO.NET Entity Data Model in the Templates pane.
Enter CUDSProcs.edmx for the file name, and then click Add.
In the Choose Model Contents dialog box, select Generate from database, and then click Next.
Click New Connection. In the Connection Properties dialog box, enter the server name (for example,
(localdb)\mssqllocaldb), select the authentication method, type School for the database name, and then
click OK. The Choose Your Data Connection dialog box is updated with your database connection setting.
In the Choose Your Database Objects dialog box, under the Tables node, select the Person table.
Also, select the following stored procedures under the Stored Procedures and Functions node:
DeletePerson, InsertPerson, and UpdatePerson.
Starting with Visual Studio 2012, the EF Designer supports bulk import of stored procedures. The Import
selected stored procedures and functions into the entity model option is checked by default. Since in this
example we have stored procedures that insert, update, and delete entity types, we do not want to import
them and will uncheck this checkbox.
Click Finish. The EF Designer, which provides a design surface for editing your model, is displayed.
Click <Select Update Function> and select UpdatePerson from the resulting drop-down list.
Default mappings between stored procedure parameters and entity properties appear.
Click <Select Delete Function> and select DeletePerson from the resulting drop-down list.
Default mappings between stored procedure parameters and entity properties appear.
The insert, update, and delete operations of the Person entity type are now mapped to stored procedures.
If you want to enable concurrency checking when updating or deleting an entity with stored procedures, use one
of the following options:
Use an OUTPUT parameter to return the number of affected rows from the stored procedure and check the
Rows Affected Parameter checkbox next to the parameter name. If the value returned is zero when the
operation is called, an OptimisticConcurrencyException will be thrown.
Check the Use Original Value checkbox next to a property that you want to use for concurrency checking.
When an update is attempted, the value of the property that was originally read from the database will be used
when writing data back to the database. If the value does not match the value in the database, an
OptimisticConcurrencyException will be thrown.
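An update procedure written for the Rows Affected pattern might look roughly like this (a sketch; the table, column, and parameter names are illustrative):

```sql
CREATE PROCEDURE [dbo].[UpdatePerson]
    @PersonID INT,
    @LastName NVARCHAR(50),
    @RowsAffected INT OUTPUT
AS
BEGIN
    UPDATE [dbo].[Person]
    SET [LastName] = @LastName
    WHERE [PersonID] = @PersonID;

    -- EF checks this value; if it is zero when the operation is called,
    -- an OptimisticConcurrencyException is thrown.
    SET @RowsAffected = @@ROWCOUNT;
END
```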
if (deletedInstructor == null)
Console.WriteLine("A person with PersonID {0} was deleted.",
newInstructor.PersonID);
}
Compile and run the application. The program produces the following output:
NOTE
PersonID is auto-generated by the server, so you will most likely see a different number.
If you are working with the Ultimate version of Visual Studio, you can use Intellitrace with the debugger to see the
SQL statements that get executed.
Relationships - EF Designer
9/13/2018 • 5 minutes to read
NOTE
This page provides information about setting up relationships in your model using the EF Designer. For general information
about relationships in EF and how to access and manipulate data using relationships, see Relationships & Navigation
Properties.
Associations define relationships between entity types in a model. This topic shows how to map associations with
the Entity Framework Designer (EF Designer). The following image shows the main windows that are used when
working with the EF Designer.
NOTE
When you build the conceptual model, warnings about unmapped entities and associations may appear in the Error List. You
can ignore these warnings because after you choose to generate the database from the model, the errors will go away.
Associations Overview
When you design your model using the EF Designer, an .edmx file represents your model. In the .edmx file, an
Association element defines a relationship between two entity types. An association must specify the entity types
that are involved in the relationship and the possible number of entity types at each end of the relationship, which
is known as the multiplicity. The multiplicity of an association end can have a value of one (1), zero or one (0..1), or
many (*). This information is specified in two child End elements.
At run time, entity type instances at one end of an association can be accessed through navigation properties or
foreign keys (if you choose to expose foreign keys in your entities). With foreign keys exposed, the relationship
between the entities is managed with a ReferentialConstraint element (a child element of the Association
element). It is recommended that you always expose foreign keys for relationships in your entities.
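In the .edmx, an association with a referential constraint looks roughly like this (a sketch; the entity and property names are illustrative):

```xml
<Association Name="FK_Course_Department">
  <End Role="Department" Type="SchoolModel.Department" Multiplicity="1" />
  <End Role="Course" Type="SchoolModel.Course" Multiplicity="*" />
  <ReferentialConstraint>
    <Principal Role="Department">
      <PropertyRef Name="DepartmentID" />
    </Principal>
    <Dependent Role="Course">
      <PropertyRef Name="DepartmentID" />
    </Dependent>
  </ReferentialConstraint>
</Association>
```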
NOTE
In a many-to-many (*:*) relationship, you cannot add foreign keys to the entities. In a *:* relationship, the association information is
managed with an independent object.
For information about CSDL elements (ReferentialConstraint, Association, etc.) see the CSDL specification.
NOTE
This section assumes that you already added the entities you wish to create an association between to your model.
To create an association
1. Right-click an empty area of the design surface, point to Add New, and select Association….
2. Fill in the settings for the association in the Add Association dialog.
NOTE
You can choose to not add navigation properties or foreign key properties to the entities at the ends of the
association by clearing the Navigation Property and Add foreign key properties to the <entity type name>
Entity checkboxes. If you add only one navigation property, the association will be traversable in only one direction.
If you add no navigation properties, you must choose to add foreign key properties in order to access entities at the
ends of the association.
3. Click OK.
To delete an association
To delete an association do one of the following:
Right-click the association on the EF Designer surface and select Delete.
OR -
Select one or more associations and press the DELETE key.
Click OK.
NOTE
You can only map details for the associations that do not have a referential constraint specified. If a referential constraint is
specified then a foreign key property is included in the entity and you can use the Mapping Details for the entity to control
which column the foreign key maps to.
Create an association mapping
Right-click an association in the design surface and select Table Mapping. This displays the association
mapping in the Mapping Details window.
Click Add a Table or View. A drop-down list appears that includes all the tables in the storage model.
Select the table to which the association will map. The Mapping Details window displays both ends of the
association and the key properties for the entity type at each End.
For each key property, click the Column field, and select the column to which the property will map.
NOTE
EF5 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 5. If you are using
an earlier version, some or all of the information does not apply.
This video and page shows how to split a model into multiple diagrams using the Entity Framework Designer (EF
Designer). You might want to use this feature when your model becomes too large to view or edit.
In earlier versions of the EF Designer you could only have one diagram per EDMX file. Starting with Visual
Studio 2012, you can use the EF Designer to split your EDMX file into multiple diagrams.
EF Designer Overview
When you create a model using the EF Designer’s Entity Data Model Wizard, an .edmx file is created and added to
your solution. This file defines the shape of your entities and how they map to the database.
The EF Designer consists of the following components:
A visual design surface for editing the model. You can create, modify, or delete entities and associations.
A Model Browser window that provides tree views of the model. The entities and their associations are
located under the [ModelName] folder. The database tables and constraints are located under the
[ModelName].Store folder.
A Mapping Details window for viewing and editing mappings. You can map entity types or associations to
database tables, columns, and stored procedures.
The visual design surface window is automatically opened when the Entity Data Model Wizard finishes. If the
Model Browser is not visible, right-click the main design surface and select Model Browser.
The following screenshot shows an .edmx file opened in the EF Designer. The screenshot shows the visual design
surface (to the left) and the Model Browser window (to the right).
To undo an operation done in the EF Designer, press Ctrl+Z.
The diagram content (shape and color of entities and associations) is stored in the .edmx.diagram file. To view this
file, select Solution Explorer and expand the .edmx file.
You should not edit the .edmx.diagram file manually; the content of this file may be overwritten by the EF Designer.
Summary
In this topic we looked at how to split a model into multiple diagrams and also how to specify a different color for
an entity using the Entity Framework Designer.
Selecting Entity Framework Runtime Version for EF
Designer Models
9/13/2018 • 2 minutes to read
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
Starting with EF6 the following screen was added to the EF Designer to allow you to select the version of the
runtime you wish to target when creating a model. The screen will appear when the latest version of Entity
Framework is not already installed in the project. If the latest version is already installed it will just be used by
default.
Targeting EF6.x
You can choose EF6 from the 'Choose Your Version' screen to add the EF6 runtime to your project. Once you've
added EF6, you’ll stop seeing this screen in the current project.
EF6 will be disabled if you already have an older version of EF installed (since you can't target multiple versions of
the runtime from the same project). If the EF6 option is not enabled here, follow these steps to upgrade your project to
EF6:
1. Right-click on your project in Solution Explorer and select Manage NuGet Packages...
2. Select Updates
3. Select EntityFramework (make sure it is going to update it to the version you want)
4. Click Update
Targeting EF5.x
You can choose EF5 from the 'Choose Your Version' screen to add the EF5 runtime to your project. Once you've
added EF5, you’ll still see the screen with the EF6 option disabled.
If you have an EF4.x version of the runtime already installed then you will see that version of EF listed in the screen
rather than EF5. In this situation you can upgrade to EF5 using the following steps:
1. Select Tools -> Library Package Manager -> Package Manager Console
2. Run Install-Package EntityFramework -version 5.0.0
Targeting EF4.x
You can install the EF4.x runtime to your project using the following steps:
1. Select Tools -> Library Package Manager -> Package Manager Console
2. Run Install-Package EntityFramework -version 4.3.0
Designer Code Generation Templates
9/18/2018 • 7 minutes to read
When you create a model using the Entity Framework Designer, your classes and derived context are automatically
generated for you. In addition to the default code generation, we also provide a number of templates that can be
used to customize the code that gets generated. These templates are provided as T4 Text Templates, allowing you
to customize the templates if needed.
The code that gets generated by default depends on which version of Visual Studio you create your model in:
Models created in Visual Studio 2012 & 2013 will generate simple POCO entity classes and a context that
derives from the simplified DbContext.
Models created in Visual Studio 2010 will generate entity classes that derive from EntityObject and a context
that derives from ObjectContext.
NOTE
We recommend switching to the DbContext Generator template once you've added your model.
This page covers the available templates and then provides instructions for adding a template to your model.
Available Templates
The following templates are provided by the Entity Framework team:
DbContext Generator
This template will generate simple POCO entity classes and a context that derives from DbContext using EF6. This
is the recommended template unless you have a reason to use one of the other templates listed below. It is also the
code generation template you get by default if you are using recent versions of Visual Studio (Visual Studio 2013
onwards). When you create a new model, this template is used by default and the T4 files (.tt) are nested under
your .edmx file.
Older versions of Visual Studio
Visual Studio 2012: To get the EF 6.x DbContextGenerator templates you will need to install the latest
Entity Framework Tools for Visual Studio - see the Get Entity Framework page for more information.
Visual Studio 2010: The EF 6.x DbContextGenerator templates are not available for Visual Studio 2010.
DbContext Generator for EF 5.x
If you are using an older version of the EntityFramework NuGet package (one with a major version of 5) you will
need to use the EF 5.x DbContext Generator template.
If you are using Visual Studio 2013 or 2012 this template is already installed.
If you are using Visual Studio 2010 you will need to select the Online tab when adding the template to download
it from Visual Studio Gallery. Alternatively you can install the template directly from Visual Studio Gallery ahead of
time. Because the templates are included in later versions of Visual Studio, the versions on the gallery can only be
installed on Visual Studio 2010.
EF 5.x DbContext Generator for C#
EF 5.x DbContext Generator for C# Web Sites
EF 5.x DbContext Generator for VB.NET
EF 5.x DbContext Generator for VB.NET Web Sites
DbContext Generator for EF 4.x
If you are using an older version of the EntityFramework NuGet package (one with a major version of 4) you will
need to use the EF 4.x DbContext Generator template. This can be found in the Online tab when adding the
template, or you can install the template directly from Visual Studio Gallery ahead of time.
EF 4.x DbContext Generator for C#
EF 4.x DbContext Generator for C# Web Sites
EF 4.x DbContext Generator for VB.NET
EF 4.x DbContext Generator for VB.NET Web Sites
EntityObject Generator
This template will generate entity classes that derive from EntityObject and a context that derives from
ObjectContext.
NOTE
Consider using the DbContext Generator
The DbContext Generator is now the recommended template for new applications. The DbContext Generator
takes advantage of the simpler DbContext API. The EntityObject Generator continues to be available to support
existing applications.
Visual Studio 2010, 2012 & 2013
You will need to select the Online tab when adding the template to download it from Visual Studio Gallery.
Alternatively you can install the template directly from Visual Studio Gallery ahead of time.
EF 6.x EntityObject Generator for C#
EF 6.x EntityObject Generator for C# Web Sites
EF 6.x EntityObject Generator for VB.NET
EF 6.x EntityObject Generator for VB.NET Web Sites
EntityObject Generator for EF 5.x
If you are using Visual Studio 2012 or 2013 you will need to select the Online tab when adding the template to
download it from Visual Studio Gallery. Alternatively you can install the template directly from Visual Studio
Gallery ahead of time. Because the templates are included in Visual Studio 2010, the versions on the gallery can
only be installed on Visual Studio 2012 & 2013.
EF 5.x EntityObject Generator for C#
EF 5.x EntityObject Generator for C# Web Sites
EF 5.x EntityObject Generator for VB.NET
EF 5.x EntityObject Generator for VB.NET Web Sites
If you just want ObjectContext code generation without needing to edit the template you can revert to EntityObject
code generation.
If you are using Visual Studio 2010 this template is already installed. If you create a new model in Visual Studio
2010 this template is used by default but the .tt files are not included in your project. If you want to customize the
template you will need to add it to your project.
Self-Tracking Entities (STE) Generator
This template will generate Self-Tracking Entity classes and a context that derives from ObjectContext. In an EF
application, a context is responsible for tracking changes in the entities. However, in N-Tier scenarios, the context
might not be available on the tier that modifies the entities. Self-tracking entities help you track changes in any tier.
For more information, see Self-Tracking Entities.
NOTE
STE Template Not Recommended
We no longer recommend using the STE template in new applications; it continues to be available to support
existing applications. Visit the disconnected entities article for other options we recommend for N-Tier scenarios.
NOTE
There is no EF 6.x version of the STE template.
NOTE
There is no Visual Studio 2013 version of the STE template.
POCO Entity Generator
NOTE
Consider using the DbContext Generator
The DbContext Generator is now the recommended template for generating POCO classes in new applications.
The DbContext Generator takes advantage of the new DbContext API and can generate simpler POCO classes.
The POCO Entity Generator continues to be available to support existing applications.
NOTE
There is no EF 5.x or EF 6.x version of the POCO template.
NOTE
There is no Visual Studio 2013 version of the POCO template.
Using a Template
To start using a code generation template, right-click an empty spot on the design surface in the EF Designer and
select Add Code Generation Item....
If you've already installed the template you want to use (or it was included in Visual Studio), then it will be available
under either the Code or Data section from the left menu.
If you don't already have the template installed, select Online from the left menu and search for the template you
want.
If you are using Visual Studio 2012, the new .tt files will be nested under the .edmx file.
NOTE
For models created in Visual Studio 2012 you will need to delete the templates used for default code generation, otherwise
you will have duplicate classes and context generated. The default files are <model name>.tt and <model
name>.context.tt.
If you are using Visual Studio 2010, the tt files are added directly to your project.
Reverting to ObjectContext in Entity Framework
Designer
9/13/2018
With previous versions of Entity Framework, a model created with the EF Designer would generate a context that
derived from ObjectContext and entity classes that derived from EntityObject.
Starting with EF4.1 we recommended swapping to a code generation template that generates a context deriving
from DbContext and POCO entity classes.
In Visual Studio 2012 you get DbContext code generated by default for all new models created with the EF
Designer. Existing models will continue to generate ObjectContext based code unless you decide to swap to the
DbContext based code generator.
If you are using VB.NET you will need to select the Show All Files button to see the nested files.
CSDL Specification
Conceptual schema definition language (CSDL) is an XML-based language that describes the entities,
relationships, and functions that make up a conceptual model of a data-driven application. This conceptual model
can be used by the Entity Framework or WCF Data Services. The metadata that is described with CSDL is used by
the Entity Framework to map entities and relationships that are defined in a conceptual model to a data source. For
more information, see SSDL Specification and MSL Specification.
CSDL is the Entity Framework's implementation of the Entity Data Model.
In an Entity Framework application, conceptual model metadata is loaded from a .csdl file (written in CSDL) into an
instance of the System.Data.Metadata.Edm.EdmItemCollection and is accessible by using methods in the
System.Data.Metadata.Edm.MetadataWorkspace class. Entity Framework uses conceptual model metadata to
translate queries against the conceptual model to data source-specific commands.
The EF Designer stores conceptual model information in an .edmx file at design time. At build time, the EF
Designer uses information in an .edmx file to create the .csdl file that is needed by Entity Framework at runtime.
Versions of CSDL are differentiated by XML namespaces.
CSDL v1 http://schemas.microsoft.com/ado/2006/04/edm
CSDL v2 http://schemas.microsoft.com/ado/2008/09/edm
CSDL v3 http://schemas.microsoft.com/ado/2009/11/edm
Association Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Association element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an Association element that defines the CustomerOrders association when foreign
keys have not been exposed on the Customer and Order entity types. The Multiplicity values for each End of the
association indicate that many Orders can be associated with a Customer, but only one Customer can be
associated with an Order. Additionally, the OnDelete element indicates that all Orders that are related to a
particular Customer and have been loaded into the ObjectContext will be deleted if the Customer is deleted.
<Association Name="CustomerOrders">
<End Type="ExampleModel.Customer" Role="Customer" Multiplicity="1" >
<OnDelete Action="Cascade" />
</End>
<End Type="ExampleModel.Order" Role="Order" Multiplicity="*" />
</Association>
The following example shows an Association element that defines the CustomerOrders association when foreign
keys have been exposed on the Customer and Order entity types. With foreign keys exposed, the relationship
between the entities is managed with a ReferentialConstraint element. A corresponding AssociationSetMapping
element is not necessary to map this association to the data source.
<Association Name="CustomerOrders">
<End Type="ExampleModel.Customer" Role="Customer" Multiplicity="1" >
<OnDelete Action="Cascade" />
</End>
<End Type="ExampleModel.Order" Role="Order" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customer">
<PropertyRef Name="Id" />
</Principal>
<Dependent Role="Order">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
AssociationSet Element (CSDL)
The AssociationSet element in conceptual schema definition language (CSDL) is a logical container for
association instances of the same type. An association set provides a definition for grouping association instances
so that they can be mapped to a data source.
The AssociationSet element can have the following child elements (in the order listed):
Documentation (zero or one elements allowed)
End (exactly two elements required)
Annotation elements (zero or more elements allowed)
The Association attribute specifies the type of association that an association set contains. The entity sets that
make up the ends of an association set are specified with exactly two child End elements.
Applicable Attributes
The table below describes the attributes that can be applied to the AssociationSet element.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the AssociationSet element. However,
custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two
custom attributes cannot be the same.
Example
The following example shows an EntityContainer element with two AssociationSet elements:
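A sketch of such a container, using the entity set and association set names from the BooksContainer model that appears later in this topic:

```xml
<EntityContainer Name="BooksContainer" >
  <EntitySet Name="Books" EntityType="BooksModel.Book" />
  <EntitySet Name="Publishers" EntityType="BooksModel.Publisher" />
  <EntitySet Name="Authors" EntityType="BooksModel.Author" />
  <AssociationSet Name="PublishedBy" Association="BooksModel.PublishedBy">
    <End Role="Book" EntitySet="Books" />
    <End Role="Publisher" EntitySet="Publishers" />
  </AssociationSet>
  <AssociationSet Name="WrittenBy" Association="BooksModel.WrittenBy">
    <End Role="Book" EntitySet="Books" />
    <End Role="Author" EntitySet="Authors" />
  </AssociationSet>
</EntityContainer>
```

Each AssociationSet names the association type it contains in its Association attribute and binds each Role to an entity set through its two End children.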
CollectionType Element (CSDL)
NOTE
A model will not validate if the type of a collection is specified with both the Type attribute and a child element.
Applicable Attributes
The following table describes the attributes that can be applied to the CollectionType element. Note that the
DefaultValue, MaxLength, FixedLength, Precision, Scale, Unicode, and Collation attributes are only
applicable to collections of EDMSimpleTypes.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the CollectionType element. However,
custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two
custom attributes cannot be the same.
Example
The following example shows a model-defined function that uses a CollectionType element to specify that
the function returns a collection of Person entity types (as specified with the ElementType attribute).
<Function Name="LastNamesAfter">
<Parameter Name="someString" Type="Edm.String"/>
<ReturnType>
<CollectionType ElementType="SchoolModel.Person"/>
</ReturnType>
<DefiningExpression>
SELECT VALUE p
FROM SchoolEntities.People AS p
WHERE p.LastName >= someString
</DefiningExpression>
</Function>
The following example shows a model-defined function that uses a CollectionType element to specify that the
function returns a collection of rows (as specified in the RowType element).
<Function Name="LastNamesAfter">
<Parameter Name="someString" Type="Edm.String" />
<ReturnType>
<CollectionType>
<RowType>
<Property Name="FirstName" Type="Edm.String" Nullable="false" />
<Property Name="LastName" Type="Edm.String" Nullable="false" />
</RowType>
</CollectionType>
</ReturnType>
<DefiningExpression>
SELECT VALUE ROW(p.FirstName, p.LastName)
FROM SchoolEntities.People AS p
WHERE p.LastName >= someString
</DefiningExpression>
</Function>
The following example shows a model-defined function that uses the CollectionType element to specify that the
function accepts as a parameter a collection of Department entity types.
<Function Name="GetAvgBudget">
<Parameter Name="Departments">
<CollectionType>
<TypeRef Type="SchoolModel.Department"/>
</CollectionType>
</Parameter>
<ReturnType Type="Collection(Edm.Decimal)"/>
<DefiningExpression>
SELECT VALUE AVG(d.Budget) FROM Departments AS d
</DefiningExpression>
</Function>
ComplexType Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the ComplexType element. However,
custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two
custom attributes cannot be the same.
Example
The following example shows a complex type, Address, with the EdmSimpleType properties StreetAddress,
City, StateOrProvince, Country, and PostalCode.
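A sketch of that complex type (all five properties assumed to be non-nullable String properties):

```xml
<ComplexType Name="Address" >
  <Property Type="String" Name="StreetAddress" Nullable="false" />
  <Property Type="String" Name="City" Nullable="false" />
  <Property Type="String" Name="StateOrProvince" Nullable="false" />
  <Property Type="String" Name="Country" Nullable="false" />
  <Property Type="String" Name="PostalCode" Nullable="false" />
</ComplexType>
```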
To define the complex type Address (above) as a property of an entity type, you must declare the property type in
the entity type definition. The following example shows the Address property as a complex type on an entity type
(Publisher):
<EntityType Name="Publisher">
<Key>
<PropertyRef Name="Id" />
</Key>
<Property Type="Int32" Name="Id" Nullable="false" />
<Property Type="String" Name="Name" Nullable="false" />
<Property Type="BooksModel.Address" Name="Address" Nullable="false" />
<NavigationProperty Name="Books" Relationship="BooksModel.PublishedBy"
FromRole="Publisher" ToRole="Book" />
</EntityType>
DefiningExpression Element (CSDL)
NOTE
For validation purposes, a DefiningExpression element can contain arbitrary content. However, Entity Framework will throw
an exception at runtime if a DefiningExpression element does not contain valid Entity SQL.
Applicable Attributes
Any number of annotation attributes (custom XML attributes) may be applied to the DefiningExpression
element. However, custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-
qualified names for any two custom attributes cannot be the same.
Example
The following example uses a DefiningExpression element to define a function that returns the number of years
since a book was published. The content of the DefiningExpression element is written in Entity SQL.
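Modeled on the YearsSince function defined later in this topic, such a function might be sketched as follows (the GetYearsInPrint name and the BooksModel.Book type with its PublishedDate property are assumptions):

```xml
<Function Name="GetYearsInPrint" ReturnType="Edm.Int32" >
  <!-- The book parameter name and entity type are illustrative -->
  <Parameter Name="book" Type="BooksModel.Book" />
  <DefiningExpression>
    Year(CurrentDateTime()) - Year(book.PublishedDate)
  </DefiningExpression>
</Function>
```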
Dependent Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Dependent element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows a ReferentialConstraint element being used as part of the definition of the
PublishedBy association. The PublisherId property of the Book entity type makes up the dependent end of the
referential constraint.
<Association Name="PublishedBy">
<End Type="BooksModel.Book" Role="Book" Multiplicity="*" >
</End>
<End Type="BooksModel.Publisher" Role="Publisher" Multiplicity="1" />
<ReferentialConstraint>
<Principal Role="Publisher">
<PropertyRef Name="Id" />
</Principal>
<Dependent Role="Book">
<PropertyRef Name="PublisherId" />
</Dependent>
</ReferentialConstraint>
</Association>
<EntityType Name="Customer">
<Documentation>
<Summary>Summary here.</Summary>
<LongDescription>Long description here.</LongDescription>
</Documentation>
<Key>
<PropertyRef Name="CustomerId" />
</Key>
<Property Type="Int32" Name="CustomerId" Nullable="false" />
<Property Type="String" Name="Name" Nullable="false" />
</EntityType>
End Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the End element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an Association element that defines the CustomerOrders association. The
Multiplicity values for each End of the association indicate that many Orders can be associated with a Customer,
but only one Customer can be associated with an Order. Additionally, the OnDelete element indicates that all
Orders that are related to a particular Customer and that have been loaded into the ObjectContext will be deleted
if the Customer is deleted.
<Association Name="CustomerOrders">
<End Type="ExampleModel.Customer" Role="Customer" Multiplicity="1" />
<End Type="ExampleModel.Order" Role="Order" Multiplicity="*">
<OnDelete Action="Cascade" />
</End>
</Association>
Applicable Attributes
The following table describes the attributes that can be applied to the End element when it is the child of an
AssociationSet element.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the End element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an EntityContainer element with two AssociationSet elements, each with two
End elements:
EntityContainer Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the EntityContainer element. However,
custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two
custom attributes cannot be the same.
Example
The following example shows an EntityContainer element that defines three entity sets and two association sets.
<EntityContainer Name="BooksContainer" >
<EntitySet Name="Books" EntityType="BooksModel.Book" />
<EntitySet Name="Publishers" EntityType="BooksModel.Publisher" />
<EntitySet Name="Authors" EntityType="BooksModel.Author" />
<AssociationSet Name="PublishedBy" Association="BooksModel.PublishedBy">
<End Role="Book" EntitySet="Books" />
<End Role="Publisher" EntitySet="Publishers" />
</AssociationSet>
<AssociationSet Name="WrittenBy" Association="BooksModel.WrittenBy">
<End Role="Book" EntitySet="Books" />
<End Role="Author" EntitySet="Authors" />
</AssociationSet>
</EntityContainer>
EntitySet Element (CSDL)
NOTE
The EF Designer does not support conceptual models that contain multiple entity sets per type.
The EntitySet element can have the following child elements (in the order listed):
Documentation Element (zero or one elements allowed)
Annotation elements (zero or more elements allowed)
Applicable Attributes
The table below describes the attributes that can be applied to the EntitySet element.
Example
The following example shows an EntityContainer element with three EntitySet elements:
It is possible to define multiple entity sets per type (MEST). The following example defines an entity container with
two entity sets for the Book entity type:
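A sketch of such a MEST container (the FictionBooks set name is an assumption):

```xml
<EntityContainer Name="BooksContainer" >
  <!-- Two entity sets, both typed as BooksModel.Book -->
  <EntitySet Name="Books" EntityType="BooksModel.Book" />
  <EntitySet Name="FictionBooks" EntityType="BooksModel.Book" />
</EntityContainer>
```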
EntityType Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the EntityType element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an EntityType element with three Property elements and two
NavigationProperty elements:
<EntityType Name="Book">
<Key>
<PropertyRef Name="ISBN" />
</Key>
<Property Type="String" Name="ISBN" Nullable="false" />
<Property Type="String" Name="Title" Nullable="false" />
<Property Type="Decimal" Name="Revision" Nullable="false" Precision="29" Scale="29" />
<NavigationProperty Name="Publisher" Relationship="BooksModel.PublishedBy"
FromRole="Book" ToRole="Publisher" />
<NavigationProperty Name="Authors" Relationship="BooksModel.WrittenBy"
FromRole="Book" ToRole="Author" />
</EntityType>
EnumType Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the EnumType element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an EnumType element with three Member elements:
<EnumType Name="Color" IsFlags="false" UnderlyingType="Edm.Byte">
<Member Name="Red" />
<Member Name="Green" />
<Member Name="Blue" />
</EnumType>
Function Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Function element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example uses a Function element to define a function that returns the number of years since an
instructor was hired.
<Function Name="YearsSince" ReturnType="Edm.Int32">
<Parameter Name="date" Type="Edm.DateTime" />
<DefiningExpression>
Year(CurrentDateTime()) - Year(date)
</DefiningExpression>
</Function>
FunctionImport Element (CSDL)
Example
The following example shows a FunctionImport element that accepts one parameter and returns a collection of
entity types:
<FunctionImport Name="GetStudentGrades"
EntitySet="StudentGrade"
ReturnType="Collection(SchoolModel.StudentGrade)">
<Parameter Name="StudentID" Mode="In" Type="Int32" />
</FunctionImport>
Key Element (CSDL)
The ISBN property is a good choice for the entity key because an International Standard Book Number (ISBN)
uniquely identifies a book.
The following example shows an entity type (Author) that has an entity key that consists of two properties, Name
and Address.
<EntityType Name="Author">
<Key>
<PropertyRef Name="Name" />
<PropertyRef Name="Address" />
</Key>
<Property Type="String" Name="Name" Nullable="false" />
<Property Type="String" Name="Address" Nullable="false" />
<NavigationProperty Name="Books" Relationship="BooksModel.WrittenBy"
FromRole="Author" ToRole="Book" />
</EntityType>
Using Name and Address for the entity key is a reasonable choice, because two authors of the same name are
unlikely to live at the same address. However, this choice for an entity key does not absolutely guarantee unique
entity keys in an entity set. Adding a property, such as AuthorId, that could be used to uniquely identify an author
would be recommended in this case.
Member Element (CSDL)
Example
The following example shows an EnumType element with three Member elements:
<EnumType Name="Color">
<Member Name="Red" Value="1"/>
<Member Name="Green" Value="3" />
<Member Name="Blue" Value="5"/>
</EnumType>
NavigationProperty Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the NavigationProperty element.
However, custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for
any two custom attributes cannot be the same.
Example
The following example defines an entity type (Book) with two navigation properties (PublishedBy and
WrittenBy):
<EntityType Name="Book">
<Key>
<PropertyRef Name="ISBN" />
</Key>
<Property Type="String" Name="ISBN" Nullable="false" />
<Property Type="String" Name="Title" Nullable="false" />
<Property Type="Decimal" Name="Revision" Nullable="false" Precision="29" Scale="29" />
<NavigationProperty Name="Publisher" Relationship="BooksModel.PublishedBy"
FromRole="Book" ToRole="Publisher" />
<NavigationProperty Name="Authors" Relationship="BooksModel.WrittenBy"
FromRole="Book" ToRole="Author" />
</EntityType>
OnDelete Element (CSDL)
An OnDelete element can have the following child elements (in the order listed):
Documentation (zero or one element)
Annotation elements (zero or more elements)
Applicable Attributes
The table below describes the attributes that can be applied to the OnDelete element.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Association element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an Association element that defines the CustomerOrders association. The
OnDelete element indicates that all Orders that are related to a particular Customer and have been loaded into
the ObjectContext will be deleted when the Customer is deleted.
<Association Name="CustomerOrders">
<End Type="ExampleModel.Customer" Role="Customer" Multiplicity="1">
<OnDelete Action="Cascade" />
</End>
<End Type="ExampleModel.Order" Role="Order" Multiplicity="*" />
</Association>
Parameter Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Parameter element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows a FunctionImport element with one Parameter child element. The function accepts
one input parameter and returns a collection of entity types.
<FunctionImport Name="GetStudentGrades"
EntitySet="StudentGrade"
ReturnType="Collection(SchoolModel.StudentGrade)">
<Parameter Name="StudentID" Mode="In" Type="Int32" />
</FunctionImport>
Function Element Application
A Parameter element (as a child of the Function element) defines parameters for functions that are defined or
declared in a conceptual model.
The Parameter element can have the following child elements (in the order listed):
Documentation (zero or one elements)
CollectionType (zero or one elements)
ReferenceType (zero or one elements)
RowType (zero or one elements)
NOTE
Only one of the CollectionType, ReferenceType, or RowType elements can be a child element of a Parameter element.
NOTE
Annotation elements must appear after all other child elements. Annotation elements are only allowed in CSDL v2 and later.
Applicable Attributes
The following table describes the attributes that can be applied to the Parameter element.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Parameter element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows a Function element that uses one Parameter child element to define a function
parameter.
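The YearsSince function shown earlier in this topic is one such function; its single Parameter child declares the date argument used in the defining expression:

```xml
<Function Name="YearsSince" ReturnType="Edm.Int32">
  <Parameter Name="date" Type="Edm.DateTime" />
  <DefiningExpression>
    Year(CurrentDateTime()) - Year(date)
  </DefiningExpression>
</Function>
```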
Principal Element (CSDL)
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Principal element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows a ReferentialConstraint element that is part of the definition of the PublishedBy
association. The Id property of the Publisher entity type makes up the principal end of the referential constraint.
<Association Name="PublishedBy">
<End Type="BooksModel.Book" Role="Book" Multiplicity="*" >
</End>
<End Type="BooksModel.Publisher" Role="Publisher" Multiplicity="1" />
<ReferentialConstraint>
<Principal Role="Publisher">
<PropertyRef Name="Id" />
</Principal>
<Dependent Role="Book">
<PropertyRef Name="PublisherId" />
</Dependent>
</ReferentialConstraint>
</Association>
Property Element (CSDL)
NOTE
Facets can only be applied to properties of type EDMSimpleType.
Applicable Attributes
The following table describes the attributes that can be applied to the Property element.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Property element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an EntityType element with three Property elements:
<EntityType Name="Book">
<Key>
<PropertyRef Name="ISBN" />
</Key>
<Property Type="String" Name="ISBN" Nullable="false" />
<Property Type="String" Name="Title" Nullable="false" />
<Property Type="Decimal" Name="Revision" Nullable="false" Precision="29" Scale="29" />
<NavigationProperty Name="Publisher" Relationship="BooksModel.PublishedBy"
FromRole="Book" ToRole="Publisher" />
<NavigationProperty Name="Authors" Relationship="BooksModel.WrittenBy"
FromRole="Book" ToRole="Author" />
</EntityType>
The following example shows a ComplexType element with five Property elements:
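The Address complex type described earlier in this topic is one such example; a sketch (all property types assumed to be String):

```xml
<ComplexType Name="Address" >
  <Property Type="String" Name="StreetAddress" Nullable="false" />
  <Property Type="String" Name="City" Nullable="false" />
  <Property Type="String" Name="StateOrProvince" Nullable="false" />
  <Property Type="String" Name="Country" Nullable="false" />
  <Property Type="String" Name="PostalCode" Nullable="false" />
</ComplexType>
```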
Applicable Attributes
The following table describes the attributes that can be applied to the Property element.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Property element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows Property elements used to define the shape of the return type of a model-defined
function.
<Function Name="LastNamesAfter">
<Parameter Name="someString" Type="Edm.String" />
<ReturnType>
<CollectionType>
<RowType>
<Property Name="FirstName" Type="Edm.String" Nullable="false" />
<Property Name="LastName" Type="Edm.String" Nullable="false" />
</RowType>
</CollectionType>
</ReturnType>
<DefiningExpression>
SELECT VALUE ROW(p.FirstName, p.LastName)
FROM SchoolEntities.People AS p
WHERE p.LastName >= someString
</DefiningExpression>
</Function>
PropertyRef Element (CSDL)
NOTE
Annotation elements are only allowed in CSDL v2 and later.
Applicable Attributes
The table below describes the attributes that can be applied to the PropertyRef element.
Example
The example below defines an entity type (Book). The entity key is defined by referencing the ISBN property of
the entity type.
<EntityType Name="Book">
<Key>
<PropertyRef Name="ISBN" />
</Key>
<Property Type="String" Name="ISBN" Nullable="false" />
<Property Type="String" Name="Title" Nullable="false" />
<Property Type="Decimal" Name="Revision" Nullable="false" Precision="29" Scale="29" />
<NavigationProperty Name="Publisher" Relationship="BooksModel.PublishedBy"
FromRole="Book" ToRole="Publisher" />
<NavigationProperty Name="Authors" Relationship="BooksModel.WrittenBy"
FromRole="Book" ToRole="Author" />
</EntityType>
In the next example, two PropertyRef elements are used to indicate that two properties (Id and PublisherId) are
the principal and dependent ends of a referential constraint.
<Association Name="PublishedBy">
<End Type="BooksModel.Book" Role="Book" Multiplicity="*" >
</End>
<End Type="BooksModel.Publisher" Role="Publisher" Multiplicity="1" />
<ReferentialConstraint>
<Principal Role="Publisher">
<PropertyRef Name="Id" />
</Principal>
<Dependent Role="Book">
<PropertyRef Name="PublisherId" />
</Dependent>
</ReferentialConstraint>
</Association>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the ReferenceType element. However,
custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two
custom attributes cannot be the same.
Example
The following example shows the ReferenceType element used as a child of a ReturnType (Function) element in
a model-defined function that returns a reference to a Person entity type. A ReferenceType element can be used
in the same way as a child of a Parameter element to define a model-defined function that accepts a reference to
an entity type:
<Function Name="GetPersonReference">
<Parameter Name="p" Type="SchoolModel.Person" />
<ReturnType>
<ReferenceType Type="SchoolModel.Person" />
</ReturnType>
<DefiningExpression>
REF(p)
</DefiningExpression>
</Function>
<Association Name="PublishedBy">
<End Type="BooksModel.Book" Role="Book" Multiplicity="*" />
<End Type="BooksModel.Publisher" Role="Publisher" Multiplicity="1" />
<ReferentialConstraint>
<Principal Role="Publisher">
<PropertyRef Name="Id" />
</Principal>
<Dependent Role="Book">
<PropertyRef Name="PublisherId" />
</Dependent>
</ReferentialConstraint>
</Association>
NOTE
A model will not validate if you specify a function return type with both the Type attribute of the ReturnType (Function)
element and one of the child elements.
Applicable Attributes
The following table describes the attributes that can be applied to the ReturnType (Function) element.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the ReturnType (Function) element.
However, custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for
any two custom attributes cannot be the same.
Example
The following example uses a Function element to define a function that returns the number of years a book has
been in print. Note that the return type is specified by the Type attribute of a ReturnType (Function) element.
<Function Name="GetYearsInPrint">
<ReturnType Type="Edm.Int32" />
<Parameter Name="book" Type="BooksModel.Book" />
<DefiningExpression>
Year(CurrentDateTime()) - Year(cast(book.PublishedDate as DateTime))
</DefiningExpression>
</Function>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the ReturnType (FunctionImport) element.
However, custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for
any two custom attributes cannot be the same.
Example
The following example uses a FunctionImport that returns books and publishers. Note that the function returns
two result sets and therefore two ReturnType (FunctionImport) elements are specified.
<FunctionImport Name="GetBooksAndPublishers">
<ReturnType Type="Collection(BooksModel.Book)" EntitySet="Books" />
<ReturnType Type="Collection(BooksModel.Publisher)" EntitySet="Publishers" />
</FunctionImport>
<Function Name="LastNamesAfter">
<Parameter Name="someString" Type="Edm.String" />
<ReturnType>
<CollectionType>
<RowType>
<Property Name="FirstName" Type="Edm.String" Nullable="false" />
<Property Name="LastName" Type="Edm.String" Nullable="false" />
</RowType>
</CollectionType>
</ReturnType>
<DefiningExpression>
SELECT VALUE ROW(p.FirstName, p.LastName)
FROM SchoolEntities.People AS p
WHERE p.LastName >= someString
</DefiningExpression>
</Function>
NOTE
The Function element and annotation elements are only allowed in CSDL v2 and later.
The Schema element uses the Namespace attribute to define the namespace for the entity type, complex type,
and association objects in a conceptual model. Within a namespace, no two objects can have the same name.
Namespaces can span multiple Schema elements and multiple .csdl files.
A conceptual model namespace is different from the XML namespace of the Schema element. A conceptual model
namespace (as defined by the Namespace attribute) is a logical container for entity types, complex types, and
association types. The XML namespace (indicated by the xmlns attribute) of a Schema element is the default
namespace for child elements and attributes of the Schema element. XML namespaces of the form
http://schemas.microsoft.com/ado/YYYY/MM/edm (where YYYY and MM represent a year and month
respectively) are reserved for CSDL. Custom elements and attributes cannot be in namespaces that have this form.
Applicable Attributes
The table below describes the attributes that can be applied to the Schema element.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Schema element. However, custom
attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows a Schema element that contains an EntityContainer element, two EntityType
elements, and one Association element.
<Schema xmlns="http://schemas.microsoft.com/ado/2009/11/edm"
xmlns:cg="http://schemas.microsoft.com/ado/2009/11/codegeneration"
xmlns:store="http://schemas.microsoft.com/ado/2009/11/edm/EntityStoreSchemaGenerator"
Namespace="ExampleModel" Alias="Self">
<EntityContainer Name="ExampleModelContainer">
<EntitySet Name="Customers"
EntityType="ExampleModel.Customer" />
<EntitySet Name="Orders" EntityType="ExampleModel.Order" />
<AssociationSet
Name="CustomerOrder"
Association="ExampleModel.CustomerOrders">
<End Role="Customer" EntitySet="Customers" />
<End Role="Order" EntitySet="Orders" />
</AssociationSet>
</EntityContainer>
<EntityType Name="Customer">
<Key>
<PropertyRef Name="CustomerId" />
</Key>
<Property Type="Int32" Name="CustomerId" Nullable="false" />
<Property Type="String" Name="Name" Nullable="false" />
<NavigationProperty
Name="Orders"
Relationship="ExampleModel.CustomerOrders"
FromRole="Customer" ToRole="Order" />
</EntityType>
<EntityType Name="Order">
<Key>
<PropertyRef Name="OrderId" />
</Key>
<Property Type="Int32" Name="OrderId" Nullable="false" />
<Property Type="Int32" Name="ProductId" Nullable="false" />
<Property Type="Int32" Name="Quantity" Nullable="false" />
<NavigationProperty
Name="Customer"
Relationship="ExampleModel.CustomerOrders"
FromRole="Order" ToRole="Customer" />
<Property Type="Int32" Name="CustomerId" Nullable="false" />
</EntityType>
<Association Name="CustomerOrders">
<End Type="ExampleModel.Customer"
Role="Customer" Multiplicity="1" />
<End Type="ExampleModel.Order"
Role="Order" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customer">
<PropertyRef Name="CustomerId" />
</Principal>
<Dependent Role="Order">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
</Schema>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the CollectionType element. However,
custom attributes may not belong to any XML namespace that is reserved for CSDL. The fully-qualified names for any two
custom attributes cannot be the same.
Example
The following example shows a model-defined function that uses the TypeRef element (as a child of a
CollectionType element) to specify that the function accepts a collection of Department entity types.
<Function Name="GetAvgBudget">
<Parameter Name="Departments">
<CollectionType>
<TypeRef Type="SchoolModel.Department"/>
</CollectionType>
</Parameter>
<ReturnType Type="Collection(Edm.Decimal)"/>
<DefiningExpression>
SELECT VALUE AVG(d.Budget) FROM Departments AS d
</DefiningExpression>
</Function>
NOTE
The Using element in CSDL does not function exactly like a using statement in a programming language. By importing a
namespace with a using statement in a programming language, you do not affect objects in the original namespace. In CSDL,
an imported namespace can contain an entity type that is derived from an entity type in the original namespace. This can
affect entity sets declared in the original namespace.
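The note above can be sketched as follows. Suppose a hypothetical ExtendedBooksModel namespace imports BooksModel and defines an entity type derived from BooksModel.Book; instances of the derived type can then appear in entity sets declared for Book in the original namespace (the DiscountBook type and its property are invented for illustration):
<Schema Namespace="ExtendedBooksModel" Alias="Self"
        xmlns="http://schemas.microsoft.com/ado/2009/11/edm">
  <Using Namespace="BooksModel" Alias="BM" />
  <EntityType Name="DiscountBook" BaseType="BM.Book">
    <Property Type="Decimal" Name="Discount" Nullable="false" Precision="5" Scale="2" />
  </EntityType>
</Schema>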
Example
The following example demonstrates the Using element being used to import a namespace that is defined
elsewhere. Note that the namespace for the Schema element shown is BooksModel. The Address property on the
Publisher EntityType is a complex type that is defined in the ExtendedBooksModel namespace (imported with the
Using element).
<Schema xmlns="http://schemas.microsoft.com/ado/2009/11/edm"
xmlns:cg="http://schemas.microsoft.com/ado/2009/11/codegeneration"
xmlns:store="http://schemas.microsoft.com/ado/2009/11/edm/EntityStoreSchemaGenerator"
Namespace="BooksModel" Alias="Self">
<Using Namespace="ExtendedBooksModel" Alias="BMExt" />
<EntityType Name="Publisher">
<Key>
<PropertyRef Name="Id" />
</Key>
<Property Type="Int32" Name="Id" Nullable="false" />
<Property Type="String" Name="Name" Nullable="false" />
<Property Type="BMExt.Address" Name="Address" Nullable="false" />
</EntityType>
</Schema>
The following code retrieves the metadata in the annotation attribute and writes it to the console:
The code above assumes that the School.csdl file is in the project's output directory and that you have added the
following Imports and Using statements to your project:
using System.Data.Metadata.Edm;
The following code retrieves the metadata in the annotation element and writes it to the console:
The code above assumes that the School.csdl file is in the project's output directory and that you have added the
following Imports and Using statements to your project:
using System.Data.Metadata.Edm;
Conceptual Model Types (CSDL)
Conceptual schema definition language (CSDL) supports a set of abstract primitive data types, called
EDMSimpleTypes, that define properties in a conceptual model. EDMSimpleTypes are proxies for primitive
data types that are supported in the storage or hosting environment.
The table below lists the primitive data types that are supported by CSDL. The table also lists the facets that can be
applied to each EDMSimpleType.
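As a quick illustration of EDMSimpleTypes and the facets that commonly apply to them, consider the following fragment; the property names and facet values here are invented for the example:
<ComplexType Name="Sample">
  <Property Name="Flag" Type="Boolean" Nullable="false" />
  <Property Name="Count" Type="Int32" Nullable="false" />
  <Property Name="Price" Type="Decimal" Precision="18" Scale="2" />
  <Property Name="Label" Type="String" MaxLength="50" FixedLength="false" Unicode="true" />
  <Property Name="Created" Type="DateTime" Precision="3" />
</ComplexType>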
Facets (CSDL)
Facets in conceptual schema definition language (CSDL) represent constraints on properties of entity types and
complex types. Facets appear as XML attributes on the following CSDL elements:
Property
TypeRef
Parameter
The following table describes the facets that are supported in CSDL. All facets are optional. Some facets listed
below are used by the Entity Framework when generating a database from a conceptual model.
NOTE
For information about data types in a conceptual model, see Conceptual Model Types (CSDL).
NOTE
When generating a database from a conceptual model, the Generate Database Wizard will recognize the value of the
StoreGeneratedPattern attribute on a Property element if it is in the following namespace:
http://schemas.microsoft.com/ado/2009/02/edm/annotation. The supported values for the attribute are Identity and
Computed. A value of Identity will produce a database column with an identity value that is generated in the database. A
value of Computed will produce a column with a value that is computed in the database.
Example
The following example shows facets applied to the properties of an entity type:
<EntityType Name="Product">
<Key>
<PropertyRef Name="ProductId" />
</Key>
<Property Type="Int32"
Name="ProductId" Nullable="false"
a:StoreGeneratedPattern="Identity"
xmlns:a="http://schemas.microsoft.com/ado/2009/02/edm/annotation" />
<Property Type="String"
Name="ProductName"
Nullable="false"
MaxLength="50" />
<Property Type="String"
Name="Location"
Nullable="true"
MaxLength="25" />
</EntityType>
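The StoreGeneratedPattern note above also mentions the Computed value, which is declared the same way as Identity. In this illustrative sketch, the TotalPrice property (invented for this example) maps to a column whose value is computed in the database:
<Property Type="Decimal"
          Name="TotalPrice"
          Nullable="false"
          a:StoreGeneratedPattern="Computed"
          xmlns:a="http://schemas.microsoft.com/ado/2009/02/edm/annotation" />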
MSL Specification
10/1/2018 • 40 minutes to read
Mapping specification language (MSL) is an XML-based language that describes the mapping between the
conceptual model and storage model of an Entity Framework application.
In an Entity Framework application, mapping metadata is loaded from an .msl file (written in MSL) at build time.
Entity Framework uses mapping metadata at runtime to translate queries against the conceptual model to store-
specific commands.
The Entity Framework Designer (EF Designer) stores mapping information in an .edmx file at design time. At build
time, the EF Designer uses the information in the .edmx file to create the .msl file that is needed by Entity Framework
at runtime.
Names of all conceptual or storage model types that are referenced in MSL must be qualified by their respective
namespace names. For information about the conceptual model namespace name, see CSDL Specification. For
information about the storage model namespace name, see SSDL Specification.
Versions of MSL are differentiated by XML namespaces.
MSL v1 urn:schemas-microsoft-com:windows:storage:mapping:CS
MSL v2 http://schemas.microsoft.com/ado/2008/09/mapping/cs
MSL v3 http://schemas.microsoft.com/ado/2009/11/mapping/cs
Example
The following example shows an Alias element that defines an alias, c, for types that are defined in the
conceptual model.
<Mapping Space="C-S"
xmlns="http://schemas.microsoft.com/ado/2009/11/mapping/cs">
<Alias Key="c" Value="SchoolModel"/>
<EntityContainerMapping StorageEntityContainer="SchoolModelStoreContainer"
CdmEntityContainer="SchoolModelEntities">
<EntitySetMapping Name="Courses">
<EntityTypeMapping TypeName="c.Course">
<MappingFragment StoreEntitySet="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
<ScalarProperty Name="Title" ColumnName="Title" />
<ScalarProperty Name="Credits" ColumnName="Credits" />
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
<EntitySetMapping Name="Departments">
<EntityTypeMapping TypeName="c.Department">
<MappingFragment StoreEntitySet="Department">
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
<ScalarProperty Name="Name" ColumnName="Name" />
<ScalarProperty Name="Budget" ColumnName="Budget" />
<ScalarProperty Name="StartDate" ColumnName="StartDate" />
<ScalarProperty Name="Administrator" ColumnName="Administrator" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
</EntityContainerMapping>
</Mapping>
Example
Consider the following conceptual model entity type:
<EntityType Name="Course">
<Key>
<PropertyRef Name="CourseID" />
</Key>
<Property Type="Int32" Name="CourseID" Nullable="false" />
<Property Type="String" Name="Title" Nullable="false" MaxLength="100"
FixedLength="false" Unicode="true" />
<Property Type="Int32" Name="Credits" Nullable="false" />
<NavigationProperty Name="Department"
Relationship="SchoolModel.FK_Course_Department"
FromRole="Course" ToRole="Department" />
</EntityType>
To map the update function of the Course entity type to the UpdateCourse stored procedure (declared in the
storage model), you must supply a value for the DepartmentID parameter. That value does not correspond to a
property on the entity type; it is contained in an independent association whose mapping is shown here:
<AssociationSetMapping Name="FK_Course_Department"
TypeName="SchoolModel.FK_Course_Department"
StoreEntitySet="Course">
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</EndProperty>
<EndProperty Name="Department">
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
</EndProperty>
</AssociationSetMapping>
The following code shows the AssociationEnd element used to map the DepartmentID property of the
FK_Course_Department association to the UpdateCourse stored procedure (to which the update function of the
Course entity type is mapped):
<EntitySetMapping Name="Courses">
<EntityTypeMapping TypeName="SchoolModel.Course">
<MappingFragment StoreEntitySet="Course">
<ScalarProperty Name="Credits" ColumnName="Credits" />
<ScalarProperty Name="Title" ColumnName="Title" />
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="SchoolModel.Course">
<ModificationFunctionMapping>
<UpdateFunction FunctionName="SchoolModel.Store.UpdateCourse">
<AssociationEnd AssociationSet="FK_Course_Department"
From="Course" To="Department">
<ScalarProperty Name="DepartmentID"
ParameterName="DepartmentID"
Version="Current" />
</AssociationEnd>
<ScalarProperty Name="Credits" ParameterName="Credits"
Version="Current" />
<ScalarProperty Name="Title" ParameterName="Title"
Version="Current" />
<ScalarProperty Name="CourseID" ParameterName="CourseID"
Version="Current" />
</UpdateFunction>
</ModificationFunctionMapping>
</EntityTypeMapping>
</EntitySetMapping>
NOTE
If a referential constraint is defined for an association in the conceptual model, the association does not need to be mapped
with an AssociationSetMapping element. If an AssociationSetMapping element is present for an association that has a
referential constraint, the mappings defined in the AssociationSetMapping element will be ignored. For more information,
see ReferentialConstraint Element (CSDL).
Example
The following example shows an AssociationSetMapping element in which the FK_Course_Department
association set in the conceptual model is mapped to the Course table in the database. Mappings between
association type properties and table columns are specified in child EndProperty elements.
<AssociationSetMapping Name="FK_Course_Department"
TypeName="SchoolModel.FK_Course_Department"
StoreEntitySet="Course">
<EndProperty Name="Department">
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
</EndProperty>
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</EndProperty>
</AssociationSetMapping>
Example
The following example is based on the School model. The following complex type has been added to the
conceptual model:
<ComplexType Name="FullName">
<Property Type="String" Name="LastName"
Nullable="false" MaxLength="50"
FixedLength="false" Unicode="true" />
<Property Type="String" Name="FirstName"
Nullable="false" MaxLength="50"
FixedLength="false" Unicode="true" />
</ComplexType>
The LastName and FirstName properties of the Person entity type have been replaced with one complex
property, Name:
<EntityType Name="Person">
<Key>
<PropertyRef Name="PersonID" />
</Key>
<Property Name="PersonID" Type="Int32" Nullable="false"
annotation:StoreGeneratedPattern="Identity" />
<Property Name="HireDate" Type="DateTime" />
<Property Name="EnrollmentDate" Type="DateTime" />
<Property Name="Name" Type="SchoolModel.FullName" Nullable="false" />
</EntityType>
The following MSL shows the ComplexProperty element used to map the Name property to columns in the
underlying database:
<EntitySetMapping Name="People">
<EntityTypeMapping TypeName="SchoolModel.Person">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="HireDate" ColumnName="HireDate" />
<ScalarProperty Name="EnrollmentDate" ColumnName="EnrollmentDate" />
<ComplexProperty Name="Name" TypeName="SchoolModel.FullName">
<ScalarProperty Name="FirstName" ColumnName="FirstName" />
<ScalarProperty Name="LastName" ColumnName="LastName" />
</ComplexProperty>
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
Example
Consider the following complex type, which has been added to the conceptual model:
<ComplexType Name="GradeInfo">
<Property Type="Int32" Name="EnrollmentID" Nullable="false" />
<Property Type="Decimal" Name="Grade" Nullable="true"
Precision="3" Scale="2" />
<Property Type="Int32" Name="CourseID" Nullable="false" />
<Property Type="Int32" Name="StudentID" Nullable="false" />
</ComplexType>
To create a function import that returns instances of the previous complex type, the mapping between the
columns returned by the stored procedure and the complex type must be defined in a ComplexTypeMapping
element:
<FunctionImportMapping FunctionImportName="GetGrades"
FunctionName="SchoolModel.Store.GetGrades" >
<ResultMapping>
<ComplexTypeMapping TypeName="SchoolModel.GradeInfo">
<ScalarProperty Name="EnrollmentID" ColumnName="enroll_id"/>
<ScalarProperty Name="CourseID" ColumnName="course_id"/>
<ScalarProperty Name="StudentID" ColumnName="student_id"/>
<ScalarProperty Name="Grade" ColumnName="grade"/>
</ComplexTypeMapping>
</ResultMapping>
</FunctionImportMapping>
NOTE
When the Condition element is used within a FunctionImportMapping element, all of its attributes are applicable except Name.
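As an illustrative sketch of the note above, Condition elements inside a FunctionImportMapping can discriminate between result types based on column values; the GetPeople function import below is hypothetical, and the column names are assumed:
<FunctionImportMapping FunctionImportName="GetPeople"
    FunctionName="SchoolModel.Store.GetPeople">
  <ResultMapping>
    <EntityTypeMapping TypeName="SchoolModel.Instructor">
      <ScalarProperty Name="PersonID" ColumnName="PersonID" />
      <Condition ColumnName="HireDate" IsNull="false" />
    </EntityTypeMapping>
    <EntityTypeMapping TypeName="SchoolModel.Student">
      <ScalarProperty Name="PersonID" ColumnName="PersonID" />
      <Condition ColumnName="EnrollmentDate" IsNull="false" />
    </EntityTypeMapping>
  </ResultMapping>
</FunctionImportMapping>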
Example
The following example shows Condition elements as children of MappingFragment elements. When HireDate
is not null and EnrollmentDate is null, data is mapped between the SchoolModel.Instructor type and the
PersonID and HireDate columns of the Person table. When EnrollmentDate is not null and HireDate is null,
data is mapped between the SchoolModel.Student type and the PersonID and EnrollmentDate columns of the
Person table.
<EntitySetMapping Name="People">
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel.Person)">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="FirstName" ColumnName="FirstName" />
<ScalarProperty Name="LastName" ColumnName="LastName" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel.Instructor)">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="HireDate" ColumnName="HireDate" />
<Condition ColumnName="HireDate" IsNull="false" />
<Condition ColumnName="EnrollmentDate" IsNull="true" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel.Student)">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="EnrollmentDate"
ColumnName="EnrollmentDate" />
<Condition ColumnName="EnrollmentDate" IsNull="false" />
<Condition ColumnName="HireDate" IsNull="true" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
NOTE
If you do not map all three of the insert, update, and delete operations of an entity type to stored procedures, the unmapped
operations will fail at runtime and an UpdateException will be thrown.
Example
The following example is based on the School model and shows the DeleteFunction element mapping the delete
function of the Person entity type to the DeletePerson stored procedure. The DeletePerson stored procedure is
declared in the storage model.
<EntitySetMapping Name="People">
<EntityTypeMapping TypeName="SchoolModel.Person">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="LastName" ColumnName="LastName" />
<ScalarProperty Name="FirstName" ColumnName="FirstName" />
<ScalarProperty Name="HireDate" ColumnName="HireDate" />
<ScalarProperty Name="EnrollmentDate"
ColumnName="EnrollmentDate" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="SchoolModel.Person">
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertPerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate" />
<ScalarProperty Name="HireDate" ParameterName="HireDate" />
<ScalarProperty Name="FirstName" ParameterName="FirstName" />
<ScalarProperty Name="LastName" ParameterName="LastName" />
<ResultBinding Name="PersonID" ColumnName="NewPersonID" />
</InsertFunction>
<UpdateFunction FunctionName="SchoolModel.Store.UpdatePerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate"
Version="Current" />
<ScalarProperty Name="HireDate" ParameterName="HireDate"
Version="Current" />
<ScalarProperty Name="FirstName" ParameterName="FirstName"
Version="Current" />
<ScalarProperty Name="LastName" ParameterName="LastName"
Version="Current" />
<ScalarProperty Name="PersonID" ParameterName="PersonID"
Version="Current" />
</UpdateFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeletePerson">
<ScalarProperty Name="PersonID" ParameterName="PersonID" />
</DeleteFunction>
</ModificationFunctionMapping>
</EntityTypeMapping>
</EntitySetMapping>
Example
The following example is based on the School model and shows the DeleteFunction element used to map the
delete function of the CourseInstructor association to the DeleteCourseInstructor stored procedure. The
DeleteCourseInstructor stored procedure is declared in the storage model.
<AssociationSetMapping Name="CourseInstructor"
TypeName="SchoolModel.CourseInstructor"
StoreEntitySet="CourseInstructor">
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
</EndProperty>
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</EndProperty>
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertCourseInstructor" >
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</InsertFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeleteCourseInstructor">
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</DeleteFunction>
</ModificationFunctionMapping>
</AssociationSetMapping>
Example
The following example shows an AssociationSetMapping element in which the FK_Course_Department
association in the conceptual model is mapped to the Course table in the database. Mappings between association
type properties and table columns are specified in child EndProperty elements.
<AssociationSetMapping Name="FK_Course_Department"
TypeName="SchoolModel.FK_Course_Department"
StoreEntitySet="Course">
<EndProperty Name="Department">
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
</EndProperty>
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</EndProperty>
</AssociationSetMapping>
Example
The following example shows the EndProperty element mapping the insert and delete functions of an association
(CourseInstructor) to stored procedures in the underlying database. The functions that are mapped to are
declared in the storage model.
<AssociationSetMapping Name="CourseInstructor"
TypeName="SchoolModel.CourseInstructor"
StoreEntitySet="CourseInstructor">
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
</EndProperty>
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</EndProperty>
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertCourseInstructor" >
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</InsertFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeleteCourseInstructor">
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</DeleteFunction>
</ModificationFunctionMapping>
</AssociationSetMapping>
Example
The following example shows an EntityContainerMapping element that maps the SchoolModelEntities
container (the conceptual model entity container) to the SchoolModelStoreContainer container (the storage
model entity container):
<EntityContainerMapping StorageEntityContainer="SchoolModelStoreContainer"
CdmEntityContainer="SchoolModelEntities">
<EntitySetMapping Name="Courses">
<EntityTypeMapping TypeName="c.Course">
<MappingFragment StoreEntitySet="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
<ScalarProperty Name="Title" ColumnName="Title" />
<ScalarProperty Name="Credits" ColumnName="Credits" />
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
<EntitySetMapping Name="Departments">
<EntityTypeMapping TypeName="c.Department">
<MappingFragment StoreEntitySet="Department">
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
<ScalarProperty Name="Name" ColumnName="Name" />
<ScalarProperty Name="Budget" ColumnName="Budget" />
<ScalarProperty Name="StartDate" ColumnName="StartDate" />
<ScalarProperty Name="Administrator" ColumnName="Administrator" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
</EntityContainerMapping>
The TypeName and StoreEntitySet attributes can be used in place of the EntityTypeMapping and
MappingFragment child elements to map a single entity type to a single table.
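As noted above, a simple one-type-to-one-table mapping can be expressed entirely with attributes. This sketch assumes the School model names used elsewhere in this document:
<EntitySetMapping Name="Courses"
    TypeName="SchoolModel.Course"
    StoreEntitySet="Course" />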
Example
The following example shows an EntitySetMapping element that maps three types (a base type and two derived
types) in the Courses entity set of the conceptual model to three different tables in the underlying database. The
tables are specified by the StoreEntitySet attribute in each MappingFragment element.
<EntitySetMapping Name="Courses">
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel1.Course)">
<MappingFragment StoreEntitySet="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
<ScalarProperty Name="Credits" ColumnName="Credits" />
<ScalarProperty Name="Title" ColumnName="Title" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel1.OnlineCourse)">
<MappingFragment StoreEntitySet="OnlineCourse">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
<ScalarProperty Name="URL" ColumnName="URL" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel1.OnsiteCourse)">
<MappingFragment StoreEntitySet="OnsiteCourse">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
<ScalarProperty Name="Time" ColumnName="Time" />
<ScalarProperty Name="Days" ColumnName="Days" />
<ScalarProperty Name="Location" ColumnName="Location" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
NOTE
MappingFragment and ModificationFunctionMapping elements cannot be child elements of the EntityTypeMapping
element at the same time.
NOTE
The ScalarProperty and Condition elements can only be child elements of the EntityTypeMapping element when it is
used within a FunctionImportMapping element.
Applicable Attributes
The following table describes the attributes that can be applied to the EntityTypeMapping element.
Example
The following example shows an EntitySetMapping element with two child EntityTypeMapping elements. In the
first EntityTypeMapping element, the SchoolModel.Person entity type is mapped to the Person table. In the
second EntityTypeMapping element, the update functionality of the SchoolModel.Person type is mapped to a
stored procedure, UpdatePerson, in the database.
<EntitySetMapping Name="People">
<EntityTypeMapping TypeName="SchoolModel.Person">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="LastName" ColumnName="LastName" />
<ScalarProperty Name="FirstName" ColumnName="FirstName" />
<ScalarProperty Name="HireDate" ColumnName="HireDate" />
<ScalarProperty Name="EnrollmentDate" ColumnName="EnrollmentDate" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="SchoolModel.Person">
<ModificationFunctionMapping>
<UpdateFunction FunctionName="SchoolModel.Store.UpdatePerson">
<ScalarProperty Name="EnrollmentDate" ParameterName="EnrollmentDate"
Version="Current" />
<ScalarProperty Name="HireDate" ParameterName="HireDate"
Version="Current" />
<ScalarProperty Name="FirstName" ParameterName="FirstName"
Version="Current" />
<ScalarProperty Name="LastName" ParameterName="LastName"
Version="Current" />
<ScalarProperty Name="PersonID" ParameterName="PersonID"
Version="Current" />
</UpdateFunction>
</ModificationFunctionMapping>
</EntityTypeMapping>
</EntitySetMapping>
Example
The next example shows the mapping of a type hierarchy in which the root type is abstract. Note the use of the
IsTypeOf syntax for the TypeName attributes.
<EntitySetMapping Name="People">
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel.Person)">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="FirstName" ColumnName="FirstName" />
<ScalarProperty Name="LastName" ColumnName="LastName" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel.Instructor)">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="HireDate" ColumnName="HireDate" />
<Condition ColumnName="HireDate" IsNull="false" />
<Condition ColumnName="EnrollmentDate" IsNull="true" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel.Student)">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="EnrollmentDate"
ColumnName="EnrollmentDate" />
<Condition ColumnName="EnrollmentDate" IsNull="false" />
<Condition ColumnName="HireDate" IsNull="true" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
NOTE
By default, if a function import returns a conceptual model entity type or complex type, then the names of the columns
returned by the underlying stored procedure must exactly match the names of the properties on the conceptual model type.
If the column names do not exactly match the property names, the mapping must be defined in a ResultMapping element.
Example
The following example is based on the School model. Consider the following function in the storage model:
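A storage-model declaration for such a function might look like the following sketch (the attribute values and the parameter are illustrative assumptions, not taken verbatim from the School model):

```xml
<Function Name="GetStudentGrades" IsComposable="false" Schema="dbo">
  <Parameter Name="StudentID" Type="int" Mode="In" />
</Function>
```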
The following example shows a FunctionImportMapping element used to map the function and function import
above to each other:
<FunctionImportMapping FunctionImportName="GetStudentGrades"
FunctionName="SchoolModel.Store.GetStudentGrades" />
NOTE
If you do not map all three of the insert, update, and delete operations of an entity type to stored procedures, the unmapped
operations will fail if executed at runtime and an UpdateException is thrown.
The InsertFunction element can be a child of the ModificationFunctionMapping element and applied to the
EntityTypeMapping element or the AssociationSetMapping element.
InsertFunction Applied to EntityTypeMapping
When applied to the EntityTypeMapping element, the InsertFunction element maps the insert function of an
entity type in the conceptual model to a stored procedure.
The InsertFunction element can have the following child elements when applied to an EntityTypeMapping
element:
AssociationEnd (zero or more)
ComplexProperty (zero or more)
ResultBinding (zero or one)
ScalarProperty (zero or more)
Applicable Attributes
The following table describes the attributes that can be applied to the InsertFunction element when applied to an
EntityTypeMapping element.
Example
The following example is based on the School model and shows the InsertFunction element used to map the
insert function of the Person entity type to the InsertPerson stored procedure. The InsertPerson stored procedure is
declared in the storage model.
<EntityTypeMapping TypeName="SchoolModel.Person">
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertPerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate" />
<ScalarProperty Name="HireDate" ParameterName="HireDate" />
<ScalarProperty Name="FirstName" ParameterName="FirstName" />
<ScalarProperty Name="LastName" ParameterName="LastName" />
<ResultBinding Name="PersonID" ColumnName="NewPersonID" />
</InsertFunction>
<UpdateFunction FunctionName="SchoolModel.Store.UpdatePerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate"
Version="Current" />
<ScalarProperty Name="HireDate" ParameterName="HireDate"
Version="Current" />
<ScalarProperty Name="FirstName" ParameterName="FirstName"
Version="Current" />
<ScalarProperty Name="LastName" ParameterName="LastName"
Version="Current" />
<ScalarProperty Name="PersonID" ParameterName="PersonID"
Version="Current" />
</UpdateFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeletePerson">
<ScalarProperty Name="PersonID" ParameterName="PersonID" />
</DeleteFunction>
</ModificationFunctionMapping>
</EntityTypeMapping>
Example
The following example is based on the School model and shows the InsertFunction element used to map the
insert function of the CourseInstructor association to the InsertCourseInstructor stored procedure. The
InsertCourseInstructor stored procedure is declared in the storage model.
<AssociationSetMapping Name="CourseInstructor"
TypeName="SchoolModel.CourseInstructor"
StoreEntitySet="CourseInstructor">
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
</EndProperty>
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</EndProperty>
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertCourseInstructor" >
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</InsertFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeleteCourseInstructor">
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</DeleteFunction>
</ModificationFunctionMapping>
</AssociationSetMapping>
Example
The following example shows a Mapping element that is based on part of the School model. For more
information about the School model, see Quickstart (Entity Framework):
<Mapping Space="C-S"
xmlns="http://schemas.microsoft.com/ado/2009/11/mapping/cs">
<Alias Key="c" Value="SchoolModel"/>
<EntityContainerMapping StorageEntityContainer="SchoolModelStoreContainer"
CdmEntityContainer="SchoolModelEntities">
<EntitySetMapping Name="Courses">
<EntityTypeMapping TypeName="c.Course">
<MappingFragment StoreEntitySet="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
<ScalarProperty Name="Title" ColumnName="Title" />
<ScalarProperty Name="Credits" ColumnName="Credits" />
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
<EntitySetMapping Name="Departments">
<EntityTypeMapping TypeName="c.Department">
<MappingFragment StoreEntitySet="Department">
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
<ScalarProperty Name="Name" ColumnName="Name" />
<ScalarProperty Name="Budget" ColumnName="Budget" />
<ScalarProperty Name="StartDate" ColumnName="StartDate" />
<ScalarProperty Name="Administrator" ColumnName="Administrator" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
</EntityContainerMapping>
</Mapping>
Example
The following example shows a MappingFragment element as the child of an EntityTypeMapping element. In
this example, properties of the Course type in the conceptual model are mapped to columns of the Course table in
the database.
<EntitySetMapping Name="Courses">
<EntityTypeMapping TypeName="SchoolModel.Course">
<MappingFragment StoreEntitySet="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
<ScalarProperty Name="Title" ColumnName="Title" />
<ScalarProperty Name="Credits" ColumnName="Credits" />
<ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
Example
The following example shows a MappingFragment element as the child of an EntitySetMapping element. As in
the example above, properties of the Course type in the conceptual model are mapped to columns of the Course
table in the database.
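A sketch of such a mapping, assuming the same Course type and table as above (when MappingFragment is a direct child of EntitySetMapping, the entity type is named by the TypeName attribute of the EntitySetMapping element):

```xml
<EntitySetMapping Name="Courses" TypeName="SchoolModel.Course">
  <MappingFragment StoreEntitySet="Course">
    <ScalarProperty Name="CourseID" ColumnName="CourseID" />
    <ScalarProperty Name="Title" ColumnName="Title" />
    <ScalarProperty Name="Credits" ColumnName="Credits" />
    <ScalarProperty Name="DepartmentID" ColumnName="DepartmentID" />
  </MappingFragment>
</EntitySetMapping>
```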
NOTE
If you do not map all three of the insert, update, and delete operations of an entity type to stored procedures, the unmapped
operations will fail if executed at runtime and an UpdateException is thrown.
NOTE
If the modification functions for one entity in an inheritance hierarchy are mapped to stored procedures, then modification
functions for all types in the hierarchy must be mapped to stored procedures.
<EntitySetMapping Name="People">
<EntityTypeMapping TypeName="SchoolModel.Person">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="LastName" ColumnName="LastName" />
<ScalarProperty Name="FirstName" ColumnName="FirstName" />
<ScalarProperty Name="HireDate" ColumnName="HireDate" />
<ScalarProperty Name="EnrollmentDate"
ColumnName="EnrollmentDate" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="SchoolModel.Person">
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertPerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate" />
<ScalarProperty Name="HireDate" ParameterName="HireDate" />
<ScalarProperty Name="FirstName" ParameterName="FirstName" />
<ScalarProperty Name="LastName" ParameterName="LastName" />
<ResultBinding Name="PersonID" ColumnName="NewPersonID" />
</InsertFunction>
<UpdateFunction FunctionName="SchoolModel.Store.UpdatePerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate"
Version="Current" />
<ScalarProperty Name="HireDate" ParameterName="HireDate"
Version="Current" />
<ScalarProperty Name="FirstName" ParameterName="FirstName"
Version="Current" />
<ScalarProperty Name="LastName" ParameterName="LastName"
Version="Current" />
<ScalarProperty Name="PersonID" ParameterName="PersonID"
Version="Current" />
</UpdateFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeletePerson">
<ScalarProperty Name="PersonID" ParameterName="PersonID" />
</DeleteFunction>
</ModificationFunctionMapping>
</EntityTypeMapping>
</EntitySetMapping>
Example
The following example shows the association set mapping for the CourseInstructor association set in the School
model. In addition to the column mapping for the CourseInstructor association, the mapping of the insert and
delete functions of the CourseInstructor association are shown. The functions that are mapped to are declared in
the storage model.
<AssociationSetMapping Name="CourseInstructor"
TypeName="SchoolModel.CourseInstructor"
StoreEntitySet="CourseInstructor">
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
</EndProperty>
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</EndProperty>
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertCourseInstructor" >
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</InsertFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeleteCourseInstructor">
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</DeleteFunction>
</ModificationFunctionMapping>
</AssociationSetMapping>
NOTE
In the QueryView element, Entity SQL expressions that contain GroupBy, group aggregates, or navigation properties are
not supported.
The QueryView element can be a child of the EntitySetMapping element or the AssociationSetMapping element.
In the former case, the query view defines a read-only mapping for an entity in the conceptual model. In the latter
case, the query view defines a read-only mapping for an association in the conceptual model.
NOTE
If the AssociationSetMapping element is for an association with a referential constraint, the AssociationSetMapping
element is ignored. For more information, see ReferentialConstraint Element (CSDL).
The QueryView element cannot have any child elements.
Applicable Attributes
The following table describes the attributes that can be applied to the QueryView element.
Example
The following example shows the QueryView element as a child of the EntitySetMapping element and defines a
query view mapping for the Department entity type in the School model.
Because the query only returns a subset of the members of the Department type in the storage model, the
Department type in the School model has been modified based on this mapping as follows:
<EntityType Name="Department">
<Key>
<PropertyRef Name="DepartmentID" />
</Key>
<Property Type="Int32" Name="DepartmentID" Nullable="false" />
<Property Type="String" Name="Name" Nullable="false"
MaxLength="50" FixedLength="false" Unicode="true" />
<Property Type="Decimal" Name="Budget" Nullable="false"
Precision="19" Scale="4" />
<Property Type="DateTime" Name="StartDate" Nullable="false" />
<NavigationProperty Name="Courses"
Relationship="SchoolModel.FK_Course_Department"
FromRole="Department" ToRole="Course" />
</EntityType>
Example
The next example shows the QueryView element as the child of an AssociationSetMapping element and
defines a read-only mapping for the FK_Course_Department association in the School model.
<EntityContainerMapping StorageEntityContainer="SchoolModelStoreContainer"
CdmEntityContainer="SchoolEntities">
<EntitySetMapping Name="Courses" >
<QueryView>
SELECT VALUE SchoolModel.Course(c.CourseID,
c.Title,
c.Credits)
FROM SchoolModelStoreContainer.Course AS c
</QueryView>
</EntitySetMapping>
<EntitySetMapping Name="Departments" >
<QueryView>
SELECT VALUE SchoolModel.Department(d.DepartmentID,
d.Name,
d.Budget,
d.StartDate)
FROM SchoolModelStoreContainer.Department AS d
WHERE d.Budget > 150000
</QueryView>
</EntitySetMapping>
<AssociationSetMapping Name="FK_Course_Department" >
<QueryView>
SELECT VALUE SchoolModel.FK_Course_Department(
CREATEREF(SchoolEntities.Departments, row(c.DepartmentID), SchoolModel.Department),
CREATEREF(SchoolEntities.Courses, row(c.CourseID)) )
FROM SchoolModelStoreContainer.Course AS c
</QueryView>
</AssociationSetMapping>
</EntityContainerMapping>
Comments
You can define query views to enable the following scenarios:
Define an entity in the conceptual model that doesn't include all the properties of the entity in the storage
model. This includes properties that do not have default values and do not support null values.
Map computed columns in the storage model to properties of entity types in the conceptual model.
Define a mapping where conditions used to partition entities in the conceptual model are not based on equality.
When you specify a conditional mapping using the Condition element, the supplied condition must equal the
specified value. For more information, see Condition Element (MSL).
Map the same column in the storage model to multiple types in the conceptual model.
Map multiple types to the same table.
Define associations in the conceptual model that are not based on foreign keys in the relational schema.
Use custom business logic to set the value of properties in the conceptual model. For example, you could map
the string value "T" in the data source to a value of true, a Boolean, in the conceptual model.
Define conditional filters for query results.
Enforce fewer restrictions on data in the conceptual model than in the storage model. For example, you could
make a property in the conceptual model nullable even if the column to which it is mapped does not support
null values.
The following considerations apply when you define query views for entities:
Query views are read-only. You can only make updates to entities by using modification functions.
When you define an entity type by a query view, you must also define all related entities by query views.
When you map a many-to-many association to an entity in the storage model that represents a link table in the
relational schema, you must define a QueryView element in the AssociationSetMapping element for this
link table.
Query views must be defined for all types in a type hierarchy. You can do this in the following ways:
With a single QueryView element that specifies a single Entity SQL query that returns a union of all of
the entity types in the hierarchy.
With a single QueryView element that specifies a single Entity SQL query that uses the CASE operator
to return a specific entity type in the hierarchy based on a specific condition.
With an additional QueryView element for a specific type in the hierarchy. In this case, use the
TypeName attribute of the QueryView element to specify the entity type for each view.
When a query view is defined, you cannot specify the StorageSetName attribute on the EntitySetMapping
element.
When a query view is defined, the EntitySetMapping element cannot also contain Property mappings.
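For the second approach listed above (a single query view that uses the CASE operator), a sketch of a QueryView for a Person hierarchy, assuming Instructor and Student subtypes distinguished by HireDate and EnrollmentDate as in the earlier conditional-mapping example, might look like:

```xml
<EntitySetMapping Name="People">
  <QueryView>
    SELECT VALUE
      CASE
        WHEN p.HireDate IS NOT NULL THEN
          SchoolModel.Instructor(p.PersonID, p.FirstName, p.LastName, p.HireDate)
        ELSE
          SchoolModel.Student(p.PersonID, p.FirstName, p.LastName, p.EnrollmentDate)
      END
    FROM SchoolModelStoreContainer.Person AS p
  </QueryView>
</EntitySetMapping>
```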
Example
The following example is based on the School model and shows an InsertFunction element used to map the
insert function of the Person entity type to the InsertPerson stored procedure. (The InsertPerson stored
procedure is shown below and is declared in the storage model.) A ResultBinding element is used to map a
column value that is returned by the stored procedure (NewPersonID) to an entity type property (PersonID).
<EntityTypeMapping TypeName="SchoolModel.Person">
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertPerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate" />
<ScalarProperty Name="HireDate" ParameterName="HireDate" />
<ScalarProperty Name="FirstName" ParameterName="FirstName" />
<ScalarProperty Name="LastName" ParameterName="LastName" />
<ResultBinding Name="PersonID" ColumnName="NewPersonID" />
</InsertFunction>
<UpdateFunction FunctionName="SchoolModel.Store.UpdatePerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate"
Version="Current" />
<ScalarProperty Name="HireDate" ParameterName="HireDate"
Version="Current" />
<ScalarProperty Name="FirstName" ParameterName="FirstName"
Version="Current" />
<ScalarProperty Name="LastName" ParameterName="LastName"
Version="Current" />
<ScalarProperty Name="PersonID" ParameterName="PersonID"
Version="Current" />
</UpdateFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeletePerson">
<ScalarProperty Name="PersonID" ParameterName="PersonID" />
</DeleteFunction>
</ModificationFunctionMapping>
</EntityTypeMapping>
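The storage-model declaration of the InsertPerson procedure referenced above can be sketched as follows (the parameter names follow the mapping; the nvarchar and datetime store types are assumptions):

```xml
<Function Name="InsertPerson" IsComposable="false" Schema="dbo">
  <Parameter Name="LastName" Type="nvarchar" Mode="In" />
  <Parameter Name="FirstName" Type="nvarchar" Mode="In" />
  <Parameter Name="HireDate" Type="datetime" Mode="In" />
  <Parameter Name="EnrollmentDate" Type="datetime" Mode="In" />
</Function>
```

The underlying stored procedure is expected to return the generated key in a column named NewPersonID, which the ResultBinding element binds to the PersonID property.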
Example
Consider the following entity type in the conceptual model:
<EntityType Name="StudentGrade">
<Key>
<PropertyRef Name="EnrollmentID" />
</Key>
<Property Name="EnrollmentID" Type="Int32" Nullable="false"
annotation:StoreGeneratedPattern="Identity" />
<Property Name="CourseID" Type="Int32" Nullable="false" />
<Property Name="StudentID" Type="Int32" Nullable="false" />
<Property Name="Grade" Type="Decimal" Precision="3" Scale="2" />
</EntityType>
In order to create a function import that returns instances of the previous entity type, the mapping between the
columns returned by the stored procedure and the entity type must be defined in a ResultMapping element:
<FunctionImportMapping FunctionImportName="GetGrades"
FunctionName="SchoolModel.Store.GetGrades" >
<ResultMapping>
<EntityTypeMapping TypeName="SchoolModel.StudentGrade">
<ScalarProperty Name="EnrollmentID" ColumnName="enroll_id"/>
<ScalarProperty Name="CourseID" ColumnName="course_id"/>
<ScalarProperty Name="StudentID" ColumnName="student_id"/>
<ScalarProperty Name="Grade" ColumnName="grade"/>
</EntityTypeMapping>
</ResultMapping>
</FunctionImportMapping>
The following table describes the attributes that are applicable to the ScalarProperty element when it is used to
map a conceptual model property to a stored procedure parameter:
Example
The following example shows the ScalarProperty element used in two ways:
To map the properties of the Person entity type to the columns of the Person table.
To map the properties of the Person entity type to the parameters of the UpdatePerson stored procedure. The
stored procedures are declared in the storage model.
<EntitySetMapping Name="People">
<EntityTypeMapping TypeName="SchoolModel.Person">
<MappingFragment StoreEntitySet="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
<ScalarProperty Name="LastName" ColumnName="LastName" />
<ScalarProperty Name="FirstName" ColumnName="FirstName" />
<ScalarProperty Name="HireDate" ColumnName="HireDate" />
<ScalarProperty Name="EnrollmentDate"
ColumnName="EnrollmentDate" />
</MappingFragment>
</EntityTypeMapping>
<EntityTypeMapping TypeName="SchoolModel.Person">
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertPerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate" />
<ScalarProperty Name="HireDate" ParameterName="HireDate" />
<ScalarProperty Name="FirstName" ParameterName="FirstName" />
<ScalarProperty Name="LastName" ParameterName="LastName" />
<ResultBinding Name="PersonID" ColumnName="NewPersonID" />
</InsertFunction>
<UpdateFunction FunctionName="SchoolModel.Store.UpdatePerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate"
Version="Current" />
<ScalarProperty Name="HireDate" ParameterName="HireDate"
Version="Current" />
<ScalarProperty Name="FirstName" ParameterName="FirstName"
Version="Current" />
<ScalarProperty Name="LastName" ParameterName="LastName"
Version="Current" />
<ScalarProperty Name="PersonID" ParameterName="PersonID"
Version="Current" />
</UpdateFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeletePerson">
<ScalarProperty Name="PersonID" ParameterName="PersonID" />
</DeleteFunction>
</ModificationFunctionMapping>
</EntityTypeMapping>
</EntitySetMapping>
Example
The next example shows the ScalarProperty element used to map the insert and delete functions of a conceptual
model association to stored procedures in the database. The stored procedures are declared in the storage model.
<AssociationSetMapping Name="CourseInstructor"
TypeName="SchoolModel.CourseInstructor"
StoreEntitySet="CourseInstructor">
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ColumnName="PersonID" />
</EndProperty>
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</EndProperty>
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertCourseInstructor" >
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</InsertFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeleteCourseInstructor">
<EndProperty Name="Course">
<ScalarProperty Name="CourseID" ParameterName="courseId"/>
</EndProperty>
<EndProperty Name="Person">
<ScalarProperty Name="PersonID" ParameterName="instructorId"/>
</EndProperty>
</DeleteFunction>
</ModificationFunctionMapping>
</AssociationSetMapping>
NOTE
If you do not map all three of the insert, update, and delete operations of an entity type to stored procedures, the unmapped
operations will fail if executed at runtime and an UpdateException is thrown.
The UpdateFunction element can be a child of the ModificationFunctionMapping element and applied to the
EntityTypeMapping element.
The UpdateFunction element can have the following child elements:
AssociationEnd (zero or more)
ComplexProperty (zero or more)
ResultBinding (zero or one)
ScalarProperty (zero or more)
Applicable Attributes
The following table describes the attributes that can be applied to the UpdateFunction element.
Example
The following example is based on the School model and shows the UpdateFunction element used to map the
update function of the Person entity type to the UpdatePerson stored procedure. The UpdatePerson stored
procedure is declared in the storage model.
<EntityTypeMapping TypeName="SchoolModel.Person">
<ModificationFunctionMapping>
<InsertFunction FunctionName="SchoolModel.Store.InsertPerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate" />
<ScalarProperty Name="HireDate" ParameterName="HireDate" />
<ScalarProperty Name="FirstName" ParameterName="FirstName" />
<ScalarProperty Name="LastName" ParameterName="LastName" />
<ResultBinding Name="PersonID" ColumnName="NewPersonID" />
</InsertFunction>
<UpdateFunction FunctionName="SchoolModel.Store.UpdatePerson">
<ScalarProperty Name="EnrollmentDate"
ParameterName="EnrollmentDate"
Version="Current" />
<ScalarProperty Name="HireDate" ParameterName="HireDate"
Version="Current" />
<ScalarProperty Name="FirstName" ParameterName="FirstName"
Version="Current" />
<ScalarProperty Name="LastName" ParameterName="LastName"
Version="Current" />
<ScalarProperty Name="PersonID" ParameterName="PersonID"
Version="Current" />
</UpdateFunction>
<DeleteFunction FunctionName="SchoolModel.Store.DeletePerson">
<ScalarProperty Name="PersonID" ParameterName="PersonID" />
</DeleteFunction>
</ModificationFunctionMapping>
</EntityTypeMapping>
SSDL Specification
9/13/2018
Store schema definition language (SSDL) is an XML-based language that describes the storage model of an Entity
Framework application.
In an Entity Framework application, storage model metadata is loaded from a .ssdl file (written in SSDL) into an
instance of the System.Data.Metadata.Edm.StoreItemCollection and is accessible by using methods in the
System.Data.Metadata.Edm.MetadataWorkspace class. Entity Framework uses storage model metadata to
translate queries against the conceptual model to store-specific commands.
The Entity Framework Designer (EF Designer) stores storage model information in an .edmx file at design time. At
build time the Entity Designer uses information in an .edmx file to create the .ssdl file that is needed by Entity
Framework at runtime.
Versions of SSDL are differentiated by XML namespaces.
SSDL v1 http://schemas.microsoft.com/ado/2006/04/edm/ssdl
SSDL v2 http://schemas.microsoft.com/ado/2009/02/edm/ssdl
SSDL v3 http://schemas.microsoft.com/ado/2009/11/edm/ssdl
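Each version is selected by the xmlns of the root Schema element. A minimal SSDL v3 skeleton (the namespace, provider, and container names here are illustrative assumptions) looks like:

```xml
<Schema Namespace="ExampleModel.Store" Alias="Self"
        Provider="System.Data.SqlClient" ProviderManifestToken="2008"
        xmlns="http://schemas.microsoft.com/ado/2009/11/edm/ssdl">
  <EntityContainer Name="ExampleModelStoreContainer">
    <!-- EntitySet and AssociationSet elements go here -->
  </EntityContainer>
  <!-- EntityType, Association, and Function elements go here -->
</Schema>
```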
Example
The following example shows an Association element that uses a ReferentialConstraint element to specify the
columns that participate in the FK_CustomerOrders foreign key constraint:
<Association Name="FK_CustomerOrders">
<End Role="Customers"
Type="ExampleModel.Store.Customers" Multiplicity="1">
<OnDelete Action="Cascade" />
</End>
<End Role="Orders"
Type="ExampleModel.Store.Orders" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customers">
<PropertyRef Name="CustomerId" />
</Principal>
<Dependent Role="Orders">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
Example
The following example shows an AssociationSet element that represents the FK_CustomerOrders foreign key
constraint in the underlying database:
<AssociationSet Name="FK_CustomerOrders"
Association="ExampleModel.Store.FK_CustomerOrders">
<End Role="Customers" EntitySet="Customers" />
<End Role="Orders" EntitySet="Orders" />
</AssociationSet>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the CollectionType element. However,
custom attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two
custom attributes cannot be the same.
Example
The following example shows an EntitySet element whose members are retrieved by the query specified in a
DefiningQuery element, rather than by mapping the entity set to a single table:
<Schema>
<EntitySet Name="Tables" EntityType="Self.STable">
<DefiningQuery>
SELECT TABLE_CATALOG,
'test' as TABLE_SCHEMA,
TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
</DefiningQuery>
</EntitySet>
</Schema>
You can use stored procedures in the Entity Framework to enable read-write scenarios over views. You can use
either a data source view or an Entity SQL view as the base table for retrieving data and for change processing by
stored procedures.
You can use the DefiningQuery element to target Microsoft SQL Server Compact 3.5. Though SQL Server
Compact 3.5 does not support stored procedures, you can implement similar functionality with the
DefiningQuery element. Another place where it can be useful is in creating stored procedures to overcome a
mismatch between the data types used in the programming language and those of the data source. You could
write a DefiningQuery that takes a certain set of parameters and then calls a stored procedure with a different set
of parameters, for example, a stored procedure that deletes data.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Dependent element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an Association element that uses a ReferentialConstraint element to specify the
columns that participate in the FK_CustomerOrders foreign key constraint. The Dependent element specifies the
CustomerId column of the Order table as the dependent end of the constraint.
<Association Name="FK_CustomerOrders">
<End Role="Customers"
Type="ExampleModel.Store.Customers" Multiplicity="1">
<OnDelete Action="Cascade" />
</End>
<End Role="Orders"
Type="ExampleModel.Store.Orders" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customers">
<PropertyRef Name="CustomerId" />
</Principal>
<Dependent Role="Orders">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
Example
The following example shows an EntityType element with a Documentation element:
<EntityType Name="Customers">
<Documentation>
<Summary>Summary here.</Summary>
<LongDescription>Long description here.</LongDescription>
</Documentation>
<Key>
<PropertyRef Name="CustomerId" />
</Key>
<Property Name="CustomerId" Type="int" Nullable="false" />
<Property Name="Name" Type="nvarchar(max)" Nullable="false" />
</EntityType>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the End element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an Association element that defines the FK_CustomerOrders foreign key
constraint. The Multiplicity values specified on each End element indicate that many rows in the Orders table
can be associated with a row in the Customers table, but only one row in the Customers table can be associated
with a row in the Orders table. Additionally, the OnDelete element indicates that all rows in the Orders table that
reference a particular row in the Customers table will be deleted if the row in the Customers table is deleted.
<Association Name="FK_CustomerOrders">
<End Role="Customers"
Type="ExampleModel.Store.Customers" Multiplicity="1">
<OnDelete Action="Cascade" />
</End>
<End Role="Orders"
Type="ExampleModel.Store.Orders" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customers">
<PropertyRef Name="CustomerId" />
</Principal>
<Dependent Role="Orders">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the End element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an EntityContainer element with an AssociationSet element with two End
elements:
<EntityContainer Name="ExampleModelStoreContainer">
<EntitySet Name="Customers"
EntityType="ExampleModel.Store.Customers"
Schema="dbo" />
<EntitySet Name="Orders"
EntityType="ExampleModel.Store.Orders"
Schema="dbo" />
<AssociationSet Name="FK_CustomerOrders"
Association="ExampleModel.Store.FK_CustomerOrders">
<End Role="Customers" EntitySet="Customers" />
<End Role="Orders" EntitySet="Orders" />
</AssociationSet>
</EntityContainer>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the EntityContainer element. However,
custom attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two
custom attributes cannot be the same.
Example
The following example shows an EntityContainer element that defines two entity sets and one association set.
Note that entity type and association type names are qualified by the conceptual model namespace name.
<EntityContainer Name="ExampleModelStoreContainer">
<EntitySet Name="Customers"
EntityType="ExampleModel.Store.Customers"
Schema="dbo" />
<EntitySet Name="Orders"
EntityType="ExampleModel.Store.Orders"
Schema="dbo" />
<AssociationSet Name="FK_CustomerOrders"
Association="ExampleModel.Store.FK_CustomerOrders">
<End Role="Customers" EntitySet="Customers" />
<End Role="Orders" EntitySet="Orders" />
</AssociationSet>
</EntityContainer>
NOTE
Some attributes (not listed here) may be qualified with the store alias. These attributes are used by the Update Model
Wizard when updating a model.
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the EntitySet element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an EntityContainer element that has two EntitySet elements and one
AssociationSet element:
<EntityContainer Name="ExampleModelStoreContainer">
<EntitySet Name="Customers"
EntityType="ExampleModel.Store.Customers"
Schema="dbo" />
<EntitySet Name="Orders"
EntityType="ExampleModel.Store.Orders"
Schema="dbo" />
<AssociationSet Name="FK_CustomerOrders"
Association="ExampleModel.Store.FK_CustomerOrders">
<End Role="Customers" EntitySet="Customers" />
<End Role="Orders" EntitySet="Orders" />
</AssociationSet>
</EntityContainer>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the EntityType element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an EntityType element with two properties:
<EntityType Name="Customers">
<Documentation>
<Summary>Summary here.</Summary>
<LongDescription>Long description here.</LongDescription>
</Documentation>
<Key>
<PropertyRef Name="CustomerId" />
</Key>
<Property Name="CustomerId" Type="int" Nullable="false" />
<Property Name="Name" Type="nvarchar(max)" Nullable="false" />
</EntityType>
1. A built-in function is a function that is defined in the database. For information about functions that are defined in
the storage model, see CommandText Element (SSDL).
2. A niladic function is a function that accepts no parameters and, when called, does not require parentheses.
3. Two functions are composable if the output of one function can be the input for the other function.
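As an illustration (not from this article), CURRENT_TIMESTAMP is a built-in niladic function in Transact-SQL, while GETDATE is a built-in function that requires parentheses:

```sql
-- CURRENT_TIMESTAMP is niladic: no parameters and no parentheses
SELECT CURRENT_TIMESTAMP;
-- A regular built-in function is still invoked with parentheses
SELECT GETDATE();
```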
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Function element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows a Function element that corresponds to the UpdateOrderQuantity stored
procedure. The stored procedure accepts two parameters and does not return a value.
<Function Name="UpdateOrderQuantity"
Aggregate="false"
BuiltIn="false"
NiladicFunction="false"
IsComposable="false"
ParameterTypeSemantics="AllowImplicitConversion"
Schema="dbo">
<Parameter Name="orderId" Type="int" Mode="In" />
<Parameter Name="newQuantity" Type="int" Mode="In" />
</Function>
Example
The following example shows an EntityType element whose Key element defines the primary key:
<EntityType Name="Customers">
<Documentation>
<Summary>Summary here.</Summary>
<LongDescription>Long description here.</LongDescription>
</Documentation>
<Key>
<PropertyRef Name="CustomerId" />
</Key>
<Property Name="CustomerId" Type="int" Nullable="false" />
<Property Name="Name" Type="nvarchar(max)" Nullable="false" />
</EntityType>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the OnDelete element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an Association element that defines the FK_CustomerOrders foreign key
constraint. The OnDelete element indicates that all rows in the Orders table that reference a particular row in the
Customers table will be deleted if the row in the Customers table is deleted.
<Association Name="FK_CustomerOrders">
<End Role="Customers"
Type="ExampleModel.Store.Customers" Multiplicity="1">
<OnDelete Action="Cascade" />
</End>
<End Role="Orders"
Type="ExampleModel.Store.Orders" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customers">
<PropertyRef Name="CustomerId" />
</Principal>
<Dependent Role="Orders">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Parameter element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows a Function element that has two Parameter elements that specify input
parameters:
<Function Name="UpdateOrderQuantity"
Aggregate="false"
BuiltIn="false"
NiladicFunction="false"
IsComposable="false"
ParameterTypeSemantics="AllowImplicitConversion"
Schema="dbo">
<Parameter Name="orderId" Type="int" Mode="In" />
<Parameter Name="newQuantity" Type="int" Mode="In" />
</Function>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Principal element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names of any two custom
attributes cannot be the same.
Example
The following example shows an Association element that uses a ReferentialConstraint element to specify the
columns that participate in the FK_CustomerOrders foreign key constraint. The Principal element specifies the
CustomerId column of the Customer table as the principal end of the constraint.
<Association Name="FK_CustomerOrders">
<End Role="Customers"
Type="ExampleModel.Store.Customers" Multiplicity="1">
<OnDelete Action="Cascade" />
</End>
<End Role="Orders"
Type="ExampleModel.Store.Orders" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customers">
<PropertyRef Name="CustomerId" />
</Principal>
<Dependent Role="Orders">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the Property element. However, custom
attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names for any two custom
attributes cannot be the same.
Example
The following example shows an EntityType element with two child Property elements:
<EntityType Name="Customers">
<Documentation>
<Summary>Summary here.</Summary>
<LongDescription>Long description here.</LongDescription>
</Documentation>
<Key>
<PropertyRef Name="CustomerId" />
</Key>
<Property Name="CustomerId" Type="int" Nullable="false" />
<Property Name="Name" Type="nvarchar(max)" Nullable="false" />
</EntityType>
NOTE
Any number of annotation attributes (custom XML attributes) may be applied to the PropertyRef element. However,
custom attributes may not belong to any XML namespace that is reserved for SSDL. The fully-qualified names of any two
custom attributes cannot be the same.
Example
The following example shows a PropertyRef element used to define a primary key by referencing a property that
is defined on an EntityType element.
<EntityType Name="Customers">
<Documentation>
<Summary>Summary here.</Summary>
<LongDescription>Long description here.</LongDescription>
</Documentation>
<Key>
<PropertyRef Name="CustomerId" />
</Key>
<Property Name="CustomerId" Type="int" Nullable="false" />
<Property Name="Name" Type="nvarchar(max)" Nullable="false" />
</EntityType>
ReferentialConstraint Element (SSDL)
The ReferentialConstraint element in store schema definition language (SSDL ) represents a foreign key
constraint (also called a referential integrity constraint) in the underlying database. The principal and dependent
ends of the constraint are specified by the Principal and Dependent child elements, respectively. Columns that
participate in the principal and dependent ends are referenced with PropertyRef elements.
The ReferentialConstraint element is an optional child element of the Association element. If a
ReferentialConstraint element is not used to map the foreign key constraint that is specified in the Association
element, an AssociationSetMapping element must be used to do this.
The ReferentialConstraint element can have the following child elements:
Documentation (zero or one)
Principal (exactly one)
Dependent (exactly one)
Annotation elements (zero or more)
Applicable Attributes
Any number of annotation attributes (custom XML attributes) may be applied to the ReferentialConstraint
element. However, custom attributes may not belong to any XML namespace that is reserved for SSDL. The
fully-qualified names for any two custom attributes cannot be the same.
Example
The following example shows an Association element that uses a ReferentialConstraint element to specify the
columns that participate in the FK_CustomerOrders foreign key constraint:
<Association Name="FK_CustomerOrders">
<End Role="Customers"
Type="ExampleModel.Store.Customers" Multiplicity="1">
<OnDelete Action="Cascade" />
</End>
<End Role="Orders"
Type="ExampleModel.Store.Orders" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customers">
<PropertyRef Name="CustomerId" />
</Principal>
<Dependent Role="Orders">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
Example
The following example uses a Function that returns a collection of rows.
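A minimal sketch of such a function (the function and column names here are hypothetical) declares its return type as a collection of rows by nesting RowType inside CollectionType:

```xml
<Function Name="GetOrderIds" IsComposable="true" Schema="dbo">
  <ReturnType>
    <CollectionType>
      <RowType>
        <Property Name="OrderId" Type="int" Nullable="false" />
      </RowType>
    </CollectionType>
  </ReturnType>
</Function>
```

Because the function is composable, its rows can be used inside other store queries rather than only executed standalone.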
Example
The following example shows a Schema element that contains an EntityContainer element, two EntityType
elements, and one Association element.
<Schema Namespace="ExampleModel.Store"
Alias="Self" Provider="System.Data.SqlClient"
ProviderManifestToken="2008"
xmlns="http://schemas.microsoft.com/ado/2009/11/edm/ssdl">
<EntityContainer Name="ExampleModelStoreContainer">
<EntitySet Name="Customers"
EntityType="ExampleModel.Store.Customers"
Schema="dbo" />
<EntitySet Name="Orders"
EntityType="ExampleModel.Store.Orders"
Schema="dbo" />
<AssociationSet Name="FK_CustomerOrders"
Association="ExampleModel.Store.FK_CustomerOrders">
<End Role="Customers" EntitySet="Customers" />
<End Role="Orders" EntitySet="Orders" />
</AssociationSet>
</EntityContainer>
<EntityType Name="Customers">
<Documentation>
<Summary>Summary here.</Summary>
<LongDescription>Long description here.</LongDescription>
</Documentation>
<Key>
<PropertyRef Name="CustomerId" />
</Key>
<Property Name="CustomerId" Type="int" Nullable="false" />
<Property Name="Name" Type="nvarchar(max)" Nullable="false" />
</EntityType>
<EntityType Name="Orders" xmlns:c="http://CustomNamespace">
<Key>
<PropertyRef Name="OrderId" />
</Key>
<Property Name="OrderId" Type="int" Nullable="false"
c:CustomAttribute="someValue"/>
<Property Name="ProductId" Type="int" Nullable="false" />
<Property Name="Quantity" Type="int" Nullable="false" />
<Property Name="CustomerId" Type="int" Nullable="false" />
<c:CustomElement>
Custom data here.
</c:CustomElement>
</EntityType>
<Association Name="FK_CustomerOrders">
<End Role="Customers"
Type="ExampleModel.Store.Customers" Multiplicity="1">
<OnDelete Action="Cascade" />
</End>
<End Role="Orders"
Type="ExampleModel.Store.Orders" Multiplicity="*" />
<ReferentialConstraint>
<Principal Role="Customers">
<PropertyRef Name="CustomerId" />
</Principal>
<Dependent Role="Orders">
<PropertyRef Name="CustomerId" />
</Dependent>
</ReferentialConstraint>
</Association>
<Function Name="UpdateOrderQuantity"
Aggregate="false"
BuiltIn="false"
NiladicFunction="false"
IsComposable="false"
ParameterTypeSemantics="AllowImplicitConversion"
Schema="dbo">
<Parameter Name="orderId" Type="int" Mode="In" />
<Parameter Name="newQuantity" Type="int" Mode="In" />
</Function>
<Function Name="UpdateProductInOrder" IsComposable="false">
<CommandText>
UPDATE Orders
SET ProductId = @productId
WHERE OrderId = @orderId;
</CommandText>
<Parameter Name="productId"
Mode="In"
Type="int"/>
<Parameter Name="orderId"
Mode="In"
Type="int"/>
</Function>
</Schema>
Annotation Attributes
Annotation attributes in store schema definition language (SSDL ) are custom XML attributes in the storage model
that provide extra metadata about the elements in the storage model. In addition to having valid XML structure,
the following constraints apply to annotation attributes:
Annotation attributes must not be in any XML namespace that is reserved for SSDL.
The fully-qualified names of any two annotation attributes must not be the same.
More than one annotation attribute may be applied to a given SSDL element. Metadata contained in annotation
attributes can be accessed at runtime by using classes in the System.Data.Metadata.Edm namespace.
Example
The following example shows an EntityType element that has an annotation attribute applied to the OrderId
property. The example also shows an annotation element added to the EntityType element.
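The Orders entity type from the Schema element example illustrates this pattern: the c:CustomAttribute annotation attribute is applied to the OrderId property, and a c:CustomElement annotation element is added to the EntityType element:

```xml
<EntityType Name="Orders" xmlns:c="http://CustomNamespace">
  <Key>
    <PropertyRef Name="OrderId" />
  </Key>
  <Property Name="OrderId" Type="int" Nullable="false"
            c:CustomAttribute="someValue"/>
  <Property Name="ProductId" Type="int" Nullable="false" />
  <Property Name="Quantity" Type="int" Nullable="false" />
  <Property Name="CustomerId" Type="int" Nullable="false" />
  <c:CustomElement>
    Custom data here.
  </c:CustomElement>
</EntityType>
```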
Facets (SSDL)
Facets in store schema definition language (SSDL ) represent constraints on column types that are specified in
Property elements. Facets are implemented as XML attributes on Property elements.
The following table describes the facets that are supported in SSDL:
FACET | DESCRIPTION
FixedLength | Specifies whether the length of the column value can vary.
Scale | Specifies the number of digits to the right of the decimal point for the column value.
This walkthrough demonstrates how to add a defining query and a corresponding entity type to a model using the
EF Designer. A defining query is commonly used to provide functionality similar to that provided by a database
view, but the view is defined in the model, not the database. A defining query allows you to execute a SQL
statement that is specified in the DefiningQuery element of an .edmx file. For more information,
see DefiningQuery in the SSDL Specification.
When using defining queries, you also have to define an entity type in your model. The entity type is used to
surface data exposed by the defining query. Note that data surfaced through this entity type is read-only.
Parameterized queries cannot be executed as defining queries. However, the data can be updated by mapping the
insert, update, and delete functions of the entity type that surfaces the data to stored procedures. For more
information, see Insert, Update, and Delete with Stored Procedures.
This topic shows how to perform the following tasks.
Add a Defining Query
Add an Entity Type to the Model
Map the Defining Query to the Entity Type
Prerequisites
To complete this walkthrough, you will need:
A recent version of Visual Studio.
The School sample database.
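The first step, adding the defining query itself, places an EntitySet with a DefiningQuery element in the SSDL section of the .edmx file. A minimal sketch, assuming the School database's StudentGrade and Person tables and the SchoolModel.Store namespace, might look like:

```xml
<EntitySet Name="GradeReport" EntityType="SchoolModel.Store.GradeReport">
  <DefiningQuery>
    SELECT CourseID, Grade, FirstName, LastName
    FROM StudentGrade
    JOIN Person ON StudentGrade.StudentID = Person.PersonID
  </DefiningQuery>
</EntitySet>
```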
Add the EntityType element to the SSDL section of the .edmx file as shown below. Note the following:
The value of the Name attribute corresponds to the value of the EntityType attribute in the EntitySet
element above, although the fully-qualified name of the entity type is used in the EntityType attribute.
The property names correspond to the column names returned by the SQL statement in the
DefiningQuery element (above).
In this example, the entity key is composed of three properties to ensure a unique key value.
<EntityType Name="GradeReport">
<Key>
<PropertyRef Name="CourseID" />
<PropertyRef Name="FirstName" />
<PropertyRef Name="LastName" />
</Key>
<Property Name="CourseID"
Type="int"
Nullable="false" />
<Property Name="Grade"
Type="decimal"
Precision="3"
Scale="2" />
<Property Name="FirstName"
Type="nvarchar"
Nullable="false"
MaxLength="50" />
<Property Name="LastName"
Type="nvarchar"
Nullable="false"
MaxLength="50" />
</EntityType>
NOTE
If later you run the Update Model Wizard dialog, any changes made to the storage model, including defining queries, will
be overwritten.
The Entity Designer, which provides a design surface for editing your model, is displayed.
Right-click the designer surface and select Add New->Entity….
Specify GradeReport for the entity name and CourseID for the Key Property.
Right-click the GradeReport entity and select Add New-> Scalar Property.
Change the default name of the property to FirstName.
Add another scalar property and specify LastName for the name.
Add another scalar property and specify Grade for the name.
In the Properties window, change the Grade’s Type property to Decimal.
Select the FirstName and LastName properties.
In the Properties window, change the EntityKey property value to True.
As a result, the following elements were added to the CSDL section of the .edmx file.
<EntityType Name="GradeReport">
. . .
</EntityType>
<EntitySetMapping Name="GradeReports">
<EntityTypeMapping TypeName="IsTypeOf(SchoolModel.GradeReport)">
<MappingFragment StoreEntitySet="GradeReport">
<ScalarProperty Name="LastName" ColumnName="LastName" />
<ScalarProperty Name="FirstName" ColumnName="FirstName" />
<ScalarProperty Name="Grade" ColumnName="Grade" />
<ScalarProperty Name="CourseID" ColumnName="CourseID" />
</MappingFragment>
</EntityTypeMapping>
</EntitySetMapping>
Sometimes when using stored procedures you will need to return more than one result set. This scenario is
commonly used to reduce the number of database round trips required to compose a single screen. Prior to EF5,
Entity Framework would allow the stored procedure to be called but would only return the first result set to the
calling code.
This article shows two ways to access more than one result set from a stored procedure in Entity Framework:
one that uses only code and works with both Code First and the EF Designer, and one that works only with the
EF Designer. The tooling and API support for this should improve in future versions of Entity Framework.
Model
The examples in this article use a basic Blog and Posts model where a blog has many posts and a post belongs to a
single blog. We will use a stored procedure in the database that returns all blogs and posts, something like this:
// Create a command for the sproc (the sproc name comes from this article's model)
var cmd = db.Database.Connection.CreateCommand();
cmd.CommandText = "[dbo].[GetAllBlogsAndPosts]";

try
{
    db.Database.Connection.Open();
    // Run the sproc
    var reader = cmd.ExecuteReader();
The Translate method accepts the reader that we received when we executed the procedure, an EntitySet name,
and a MergeOption. The EntitySet name will be the same as the DbSet property on your derived context. The
MergeOption enum controls how results are handled if the same entity already exists in memory.
Here we iterate through the collection of blogs before we call NextResult, this is important given the above code
because the first result set must be consumed before moving to the next result set.
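As a sketch (assuming Blog and Post entity classes, DbSet properties named Blogs and Posts on the context, and the reader obtained above), the two Translate calls look something like this:

```csharp
// Translate is defined on ObjectContext, so drop down from the DbContext
var objectContext =
    ((System.Data.Entity.Infrastructure.IObjectContextAdapter)db).ObjectContext;

// Read the first result set into Blog entities; ToList forces it to be
// fully consumed before the reader is advanced
var blogs = objectContext
    .Translate<Blog>(reader, "Blogs", MergeOption.AppendOnly)
    .ToList();

// Advance the reader to the second result set
reader.NextResult();

var posts = objectContext
    .Translate<Post>(reader, "Posts", MergeOption.AppendOnly)
    .ToList();
```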
Once the two translate methods are called then the Blog and Post entities are tracked by EF the same way as any
other entity and so can be modified or deleted and saved as normal.
NOTE
EF does not take any mapping into account when it creates entities using the Translate method. It will simply match column
names in the result set with property names on your classes.
NOTE
If you have lazy loading enabled, accessing the Posts property on one of the blog entities will cause EF to connect to the
database to lazily load all posts, even though we have already loaded them all. This is because EF cannot know whether
you have loaded all the posts or whether there are more in the database. If you want to avoid this, you will need to disable
lazy loading.
If you are using the EF Designer, you can also modify your model so that it knows about the different result sets
that will be returned. One thing to know beforehand is that the tooling is not multiple-result-set aware, so you will
need to manually edit the edmx file. Editing the edmx file like this will work, but it will also break the validation of
the model in VS, so validating your model will always produce errors.
In order to do this you need to add the stored procedure to your model as you would for a single result set
query.
Once you have this, you need to right-click on your model and select Open With.. and then XML.
Once you have the model opened as XML then you need to do the following steps:
Find the complex type and function import in your model:
<!-- CSDL content -->
<edmx:ConceptualModels>
...
...
<ComplexType Name="GetAllBlogsAndPosts_Result">
<Property Type="Int32" Name="BlogId" Nullable="false" />
<Property Type="String" Name="Name" Nullable="false" MaxLength="255" />
<Property Type="String" Name="Description" Nullable="true" />
</ComplexType>
...
</edmx:ConceptualModels>
<FunctionImport Name="GetAllBlogsAndPosts">
<ReturnType EntitySet="Blogs" Type="Collection(BlogModel.Blog)" />
<ReturnType EntitySet="Posts" Type="Collection(BlogModel.Post)" />
</FunctionImport>
This tells the model that the stored procedure will return two collections, one of blog entries and one of post
entries.
Find the function mapping element:
...
<FunctionImportMapping FunctionImportName="GetAllBlogsAndPosts"
FunctionName="BlogModel.Store.GetAllBlogsAndPosts">
<ResultMapping>
<ComplexTypeMapping TypeName="BlogModel.GetAllBlogsAndPosts_Result">
<ScalarProperty Name="BlogId" ColumnName="BlogId" />
<ScalarProperty Name="Name" ColumnName="Name" />
<ScalarProperty Name="Description" ColumnName="Description" />
</ComplexTypeMapping>
</ResultMapping>
</FunctionImportMapping>
...
</edmx:Mappings>
Replace the result mapping with one for each entity being returned, such as the following:
<ResultMapping>
<EntityTypeMapping TypeName ="BlogModel.Blog">
<ScalarProperty Name="BlogId" ColumnName="BlogId" />
<ScalarProperty Name="Name" ColumnName="Name" />
<ScalarProperty Name="Description" ColumnName="Description" />
</EntityTypeMapping>
</ResultMapping>
<ResultMapping>
<EntityTypeMapping TypeName="BlogModel.Post">
<ScalarProperty Name="BlogId" ColumnName="BlogId" />
<ScalarProperty Name="PostId" ColumnName="PostId"/>
<ScalarProperty Name="Title" ColumnName="Title" />
<ScalarProperty Name="Text" ColumnName="Text" />
</EntityTypeMapping>
</ResultMapping>
It is also possible to map the result sets to complex types, such as the one created by default. To do this, create
a new complex type for each result set instead of removing the default ones, and use the complex types everywhere
that you used the entity names in the examples above.
Once these mappings have been changed then you can save the model and execute the following code to use the
stored procedure:
Console.ReadLine();
}
NOTE
If you manually edit the edmx file for your model it will be overwritten if you ever regenerate the model from the database.
Summary
Here we have shown two different methods of accessing multiple result sets using Entity Framework. Both of them
are equally valid depending on your situation and preferences and you should choose the one that seems best for
your circumstances. It is planned that the support for multiple result sets will be improved in future versions of
Entity Framework and that performing the steps in this document will no longer be necessary.
Table-Valued Functions (TVFs)
9/18/2018
NOTE
EF5 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 5. If you are using
an earlier version, some or all of the information does not apply.
The video and step-by-step walkthrough shows how to map table-valued functions (TVFs) using the Entity
Framework Designer. It also demonstrates how to call a TVF from a LINQ query.
TVFs are currently only supported in the Database First workflow.
TVF support was introduced in Entity Framework version 5. Note that to use the new features like table-valued
functions, enums, and spatial types you must target .NET Framework 4.5. Visual Studio 2012 targets .NET 4.5 by
default.
TVFs are very similar to stored procedures with one key difference: the result of a TVF is composable. That means
the results from a TVF can be used in a LINQ query while the results of a stored procedure cannot.
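For example, once a function import has been mapped as in the walkthrough below, it can be composed into a LINQ query (the SchoolEntities context name and the course ID here are illustrative):

```csharp
using (var context = new SchoolEntities())
{
    // The TVF call returns IQueryable, so the Where clause composes
    // into the SQL that is sent to the database
    var highGrades = context.GetStudentGradesForCourse(4022)
                            .Where(g => g.Grade >= 3.5m)
                            .ToList();
}
```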
Pre-Requisites
To complete this walkthrough, you need to:
Install the School database.
Have a recent version of Visual Studio
CREATE FUNCTION [dbo].[GetStudentGradesForCourse]
(@CourseID INT)
RETURNS TABLE
RETURN
SELECT [EnrollmentID],
[CourseID],
[StudentID],
[Grade]
FROM [dbo].[StudentGrade]
WHERE CourseID = @CourseID
Click the right mouse button on the T-SQL editor and select Execute
The GetStudentGradesForCourse function is added to the School database
Create a Model
1. Right-click the project name in Solution Explorer, point to Add, and then click New Item
2. Select Data from the left menu and then select ADO.NET Entity Data Model in the Templates pane
3. Enter TVFModel.edmx for the file name, and then click Add
4. In the Choose Model Contents dialog box, select Generate from database, and then click Next
5. Click New Connection. Enter (localdb)\mssqllocaldb in the Server name text box, enter School for the
database name, and then click OK
6. In the Choose Your Database Objects dialog box, under the Tables node, select the Person, StudentGrade,
and Course tables
7. Select the GetStudentGradesForCourse function located under the Stored Procedures and Functions node.
Note that, starting with Visual Studio 2012, the Entity Designer allows you to batch import your stored
procedures and functions
8. Click Finish
9. The Entity Designer, which provides a design surface for editing your model, is displayed. All the objects that
you selected in the Choose Your Database Objects dialog box are added to the model.
10. By default, the result shape of each imported stored procedure or function will automatically become a new
complex type in your entity model. But we want to map the results of the GetStudentGradesForCourse function
to the StudentGrade entity: right-click the design surface and select Model Browser; in Model Browser,
select Function Imports, and then double-click the GetStudentGradesForCourse function; in the Edit
Function Import dialog box, select Entities and choose StudentGrade
Compile and run the application. The program produces the following output:
Summary
In this walkthrough we looked at how to map table-valued functions (TVFs) using the Entity Framework
Designer and how to call a TVF from a LINQ query.
Entity Framework Designer Keyboard Shortcuts
9/13/2018
This page provides a list of keyboard shortcuts that are available in the various screens of the Entity Framework
Tools for Visual Studio.
SHORTCUT | ACTION | NOTES
Alt+n | Move to next screen | Not available for all selections of model contents.
Alt+c | Open the “Connection Properties” window | Allows for the definition of a new database connection.
Alt+w | Switch focus to Entity Framework version selection | Allows for specifying a different version of Entity Framework for use in the project.
Alt+w | Switch focus to database object selection pane | Allows for specifying database objects to be reverse engineered.
Alt+k | Toggle the “Include foreign key columns in the model” option | Not available for all selections of model contents.
Alt+i | Toggle the “Import selected stored procedures and functions into the entity model” option | Not available for all selections of model contents.
Alt+m | Switches focus to the “Model Namespace” text field | Not available for all selections of model contents.
EF Designer Surface
SHORTCUT | ACTION | NOTES
Down arrow | Move down | Moves selected entity down one grid increment. If in a list, moves to the next sibling subfield.
Left arrow | Move left | Moves selected entity left one grid increment. If in a list, moves to the previous sibling subfield.
Right arrow | Move right | Moves selected entity right one grid increment. If in a list, moves to the next sibling subfield.
Shift + left arrow | Size shape left | Reduces the width of the selected entity by one grid increment.
Shift + right arrow | Size shape right | Increases the width of the selected entity by one grid increment.
Ctrl + Home | First Peer (focus) | Same as first peer, but moves focus instead of moving focus and selection.
Ctrl + End | Last Peer (focus) | Same as last peer, but moves focus instead of moving focus and selection.
Alt+Ctrl+Tab | Next Peer (focus) | Same as next peer, but moves focus instead of moving focus and selection.
Alt+Ctrl+Shift+Tab | Previous Peer (focus) | Same as previous peer, but moves focus instead of moving focus and selection.
Shift + Pg Down | Scroll diagram right | Scrolls the design surface to the right.
Shift + Pg Up | Scroll diagram left | Scrolls the design surface to the left.
Control + Shift + Mouse Left Click, or Control + Shift + MouseWheel forward | Semantic Zoom In | Zooms in on the area of the Diagram View beneath the mouse pointer.
Control + Shift + Mouse Right Click, or Control + Shift + MouseWheel backward | Semantic Zoom Out | Zooms out from the area of the Diagram View beneath the mouse pointer. It re-centers the diagram when you zoom out too far for the current diagram center.
Control + Shift + '-', or Control + MouseWheel backward | Zoom Out | Zooms out from the clicked area of the Diagram View. It re-centers the diagram when you zoom out too far for the current diagram center.
Control + Shift + Draw a rectangle with the left mouse button down | Zoom Area | Zooms in centered on the area that you've selected. When you hold down the Control + Shift keys, you'll see that the cursor changes to a magnifying glass, which allows you to define the area to zoom into.
Context Menu Key + ‘M’ | Open Mapping Details Window | Opens the Mapping Details window to edit mappings for the selected entity.
Alt + Down Arrow | Open List | Drops down a list if a cell is selected that has a drop-down list.
OtherContextMenus.MicrosoftDataEntityDesignContext.Add.ComplexProperty.ComplexTypes
OtherContextMenus.MicrosoftDataEntityDesignContext.AddCodeGenerationItem
OtherContextMenus.MicrosoftDataEntityDesignContext.AddFunctionImport
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.AddEnumType
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.Association
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.ComplexProperty
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.ComplexType
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.Entity
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.FunctionImport
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.Inheritance
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.NavigationProperty
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNew.ScalarProperty
OtherContextMenus.MicrosoftDataEntityDesignContext.AddNewDiagram
OtherContextMenus.MicrosoftDataEntityDesignContext.AddtoDiagram
OtherContextMenus.MicrosoftDataEntityDesignContext.Close
OtherContextMenus.MicrosoftDataEntityDesignContext.Collapse
OtherContextMenus.MicrosoftDataEntityDesignContext.ConverttoEnum
OtherContextMenus.MicrosoftDataEntityDesignContext.Diagram.CollapseAll
OtherContextMenus.MicrosoftDataEntityDesignContext.Diagram.ExpandAll
OtherContextMenus.MicrosoftDataEntityDesignContext.Diagram.ExportasImage
OtherContextMenus.MicrosoftDataEntityDesignContext.Diagram.LayoutDiagram
OtherContextMenus.MicrosoftDataEntityDesignContext.Edit
OtherContextMenus.MicrosoftDataEntityDesignContext.EntityKey
OtherContextMenus.MicrosoftDataEntityDesignContext.Expand
OtherContextMenus.MicrosoftDataEntityDesignContext.FunctionImportMapping
OtherContextMenus.MicrosoftDataEntityDesignContext.GenerateDatabasefromModel
OtherContextMenus.MicrosoftDataEntityDesignContext.GoToDefinition
OtherContextMenus.MicrosoftDataEntityDesignContext.Grid.ShowGrid
OtherContextMenus.MicrosoftDataEntityDesignContext.Grid.SnaptoGrid
OtherContextMenus.MicrosoftDataEntityDesignContext.IncludeRelated
OtherContextMenus.MicrosoftDataEntityDesignContext.MappingDetails
OtherContextMenus.MicrosoftDataEntityDesignContext.ModelBrowser
OtherContextMenus.MicrosoftDataEntityDesignContext.MoveDiagramstoSeparateFile
OtherContextMenus.MicrosoftDataEntityDesignContext.MoveProperties.Down
OtherContextMenus.MicrosoftDataEntityDesignContext.MoveProperties.Down5
OtherContextMenus.MicrosoftDataEntityDesignContext.MoveProperties.ToBottom
OtherContextMenus.MicrosoftDataEntityDesignContext.MoveProperties.ToTop
OtherContextMenus.MicrosoftDataEntityDesignContext.MoveProperties.Up
OtherContextMenus.MicrosoftDataEntityDesignContext.MoveProperties.Up5
OtherContextMenus.MicrosoftDataEntityDesignContext.MovetonewDiagram
OtherContextMenus.MicrosoftDataEntityDesignContext.Open
OtherContextMenus.MicrosoftDataEntityDesignContext.Refactor.MovetoNewComplexType
OtherContextMenus.MicrosoftDataEntityDesignContext.Refactor.Rename
OtherContextMenus.MicrosoftDataEntityDesignContext.RemovefromDiagram
OtherContextMenus.MicrosoftDataEntityDesignContext.Rename
OtherContextMenus.MicrosoftDataEntityDesignContext.ScalarPropertyFormat.DisplayName
OtherContextMenus.MicrosoftDataEntityDesignContext.ScalarPropertyFormat.DisplayNameandType
OtherContextMenus.MicrosoftDataEntityDesignContext.Select.BaseType
OtherContextMenus.MicrosoftDataEntityDesignContext.Select.Entity
OtherContextMenus.MicrosoftDataEntityDesignContext.Select.Property
OtherContextMenus.MicrosoftDataEntityDesignContext.Select.Subtype
OtherContextMenus.MicrosoftDataEntityDesignContext.SelectAll
OtherContextMenus.MicrosoftDataEntityDesignContext.SelectAssociation
OtherContextMenus.MicrosoftDataEntityDesignContext.ShowinDiagram
OtherContextMenus.MicrosoftDataEntityDesignContext.ShowinModelBrowser
OtherContextMenus.MicrosoftDataEntityDesignContext.StoredProcedureMapping
OtherContextMenus.MicrosoftDataEntityDesignContext.TableMapping
OtherContextMenus.MicrosoftDataEntityDesignContext.UpdateModelfromDatabase
OtherContextMenus.MicrosoftDataEntityDesignContext.Validate
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.10
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.100
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.125
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.150
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.200
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.25
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.300
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.33
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.400
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.50
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.66
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.75
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.Custom
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.ZoomIn
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.ZoomOut
OtherContextMenus.MicrosoftDataEntityDesignContext.Zoom.ZoomtoFit
View.EntityDataModelBrowser
View.EntityDataModelMappingDetails
Querying and Finding Entities
9/13/2018 • 3 minutes to read
This topic covers the various ways you can query for data using Entity Framework, including LINQ and the Find
method. The techniques shown in this topic apply equally to models created with Code First and the EF Designer.
Note that DbSet and IDbSet always create queries against the database and will always involve a round trip to the
database even if the entities returned already exist in the context. A query is executed against the database when:
It is enumerated by a foreach (C#) or For Each (Visual Basic) statement.
It is enumerated by a collection operation such as ToArray, ToDictionary, or ToList.
LINQ operators such as First or Any are specified in the outermost part of the query.
The following methods are called: the Load extension method on a DbSet, DbEntityEntry.Reload, and
Database.ExecuteSqlCommand.
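The deferred-execution behavior described above can be sketched as follows, assuming the usual BloggingContext with a Blogs DbSet (names are illustrative):

```csharp
using (var context = new BloggingContext())
{
    // No database round trip yet: the query is only composed here
    var query = context.Blogs.Where(b => b.Rating > 3);

    // Enumeration sends the query to the database
    var blogs = query.ToList();

    // Operators such as First also execute the query immediately
    var best = context.Blogs.OrderByDescending(b => b.Rating).First();
}
```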
When results are returned from the database, objects that do not exist in the context are attached to the context. If
an object is already in the context, the existing object is returned (the current and original values of the object's
properties in the entry are not overwritten with database values).
When you perform a query, entities that have been added to the context but have not yet been saved to the
database are not returned as part of the result set. To get the data that is in the context, see Local Data.
If a query returns no rows from the database, the result will be an empty collection, rather than null.
// Will find the new blog even though it does not exist in the database
var newBlog = context.Blogs.Find(-1);
Note that when you have composite keys you need to use ColumnAttribute or the fluent API to specify an
ordering for the properties of the composite key. The call to Find must use this order when specifying the values
that form the key.
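As a sketch, assuming a hypothetical OrderLine entity with a composite key ordered via ColumnAttribute (KeyAttribute and ColumnAttribute come from System.ComponentModel.DataAnnotations and System.ComponentModel.DataAnnotations.Schema):

```csharp
public class OrderLine
{
    [Key, Column(Order = 0)]
    public int OrderId { get; set; }

    [Key, Column(Order = 1)]
    public int LineNumber { get; set; }
}

// Values must be supplied in the configured order: OrderId, then LineNumber
var line = context.OrderLines.Find(1, 2);
```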
The Load Method
10/25/2018 • 2 minutes to read
There are several scenarios where you may want to load entities from the database into the context without
immediately doing anything with those entities. A good example of this is loading entities for data binding as
described in Local Data. One common way to do this is to write a LINQ query and then call ToList on it, only to
immediately discard the created list. The Load extension method works just like ToList except that it avoids the
creation of the list altogether.
The techniques shown in this topic apply equally to models created with Code First and the EF Designer.
Here are two examples of using Load. The first is taken from a Windows Forms data binding application where
Load is used to query for entities before binding to the local collection, as described in Local Data:
_context.Categories.Load();
categoryBindingSource.DataSource = _context.Categories.Local.ToBindingList();
}
The second example shows using Load to load a filtered collection of related entities, as described in Loading
Related Entities:
// Load the posts with the 'entity-framework' tag related to a given blog
context.Entry(blog)
.Collection(b => b.Posts)
.Query()
.Where(p => p.Tags.Contains("entity-framework"))
.Load();
}
Local Data
9/13/2018 • 10 minutes to read
Running a LINQ query directly against a DbSet will always send a query to the database, but you can access the
data that is currently in-memory using the DbSet.Local property. You can also access the extra information EF is
tracking about your entities using the DbContext.Entry and DbContext.ChangeTracker.Entries methods. The
techniques shown in this topic apply equally to models created with Code First and the EF Designer.
If we had two blogs in the database - 'ADO.NET Blog' with a BlogId of 1 and 'The Visual Studio Blog' with a
BlogId of 2 - we could expect the following output:
In Local:
Found 0: My New Blog with state Added
Found 2: The Visual Studio Blog with state Unchanged
In DbSet query:
Found 1: ADO.NET Blog with state Deleted
Found 2: The Visual Studio Blog with state Unchanged
Assuming we had a few posts tagged with 'entity-framework' and 'asp.net' the output may look something like
this:
return base.SaveChanges();
}
The code above uses the Local collection to find all posts and marks any that do not have a blog reference as
deleted. The ToList call is required because otherwise the collection will be modified by the Remove call while it is
being enumerated. In most other situations you can query directly against the Local property without using ToList
first.
You'll notice we are introducing an Author and a Reader class into the example; both of these classes implement the IPerson interface.
public class Author : IPerson
{
public int AuthorId { get; set; }
public string Name { get; set; }
public string Biography { get; set; }
}
Tracked blogs:
Found Blog 1: The New ADO.NET Blog with original Name ADO.NET Blog
Found Blog 2: The Visual Studio Blog with original Name The Visual Studio Blog
Found Blog 3: .NET Framework Blog with original Name .NET Framework Blog
People:
Found Person John Doe
Found Person Joe Bloggs
Found Person Jane Doe
Sometimes you may want to get entities back from a query but not have those entities be tracked by the context.
This may result in better performance when querying for large numbers of entities in read-only scenarios. The
techniques shown in this topic apply equally to models created with Code First and the EF Designer.
A new extension method AsNoTracking allows any query to be run in this way. For example:
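A minimal sketch, assuming the usual BloggingContext:

```csharp
using (var context = new BloggingContext())
{
    // Entities returned by this query are not tracked by the context
    var blogs = context.Blogs
        .Where(b => b.Rating > 3)
        .AsNoTracking()
        .ToList();
}
```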
Entity Framework allows you to query using LINQ with your entity classes. However, there may be times that you
want to run queries using raw SQL directly against the database. This includes calling stored procedures, which
can be helpful for Code First models that currently do not support mapping to stored procedures. The techniques
shown in this topic apply equally to models created with Code First and the EF Designer.
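A raw SQL query for entities might look like this (a sketch assuming a Blogs DbSet mapped to a dbo.Blogs table):

```csharp
using (var context = new BloggingContext())
{
    // The query executes when ToList enumerates the results
    var blogs = context.Blogs.SqlQuery("SELECT * FROM dbo.Blogs").ToList();
}
```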
Note that, just as for LINQ queries, the query is not executed until the results are enumerated—in the example
above this is done with the call to ToList.
Care should be taken whenever raw SQL queries are written for two reasons. First, the query should be written to
ensure that it only returns entities that are really of the requested type. For example, when using features such as
inheritance it is easy to write a query that will create entities that are of the wrong CLR type.
Second, some types of raw SQL query expose potential security risks, especially around SQL injection attacks.
Make sure that you use parameters in your query in the correct way to guard against such attacks.
Loading entities from stored procedures
You can use DbSet.SqlQuery to load entities from the results of a stored procedure. For example, the following
code calls the dbo.GetBlogs procedure in the database:
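A sketch of such a call, assuming the procedure's result set matches the shape of the Blog entity:

```csharp
using (var context = new BloggingContext())
{
    // Materialize Blog entities from the stored procedure's results
    var blogs = context.Blogs.SqlQuery("dbo.GetBlogs").ToList();
}
```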
You can also pass parameters to a stored procedure using the following syntax:
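For instance, with a hypothetical dbo.GetBlogById procedure that takes a single parameter:

```csharp
// The value 1 is bound to the @p0 placeholder
var blogId = 1;
var blogs = context.Blogs.SqlQuery("dbo.GetBlogById @p0", blogId).ToList();
```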
The results returned from SqlQuery on Database will never be tracked by the context even if the objects are
instances of an entity type.
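Database.SqlQuery is also useful for non-entity types; a sketch:

```csharp
using (var context = new BloggingContext())
{
    // Returned objects are never tracked, even if the type is an entity type
    var blogNames = context.Database.SqlQuery<string>(
        "SELECT Name FROM dbo.Blogs").ToList();
}
```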
Note that any changes made to data in the database using ExecuteSqlCommand are opaque to the context until
entities are loaded or reloaded from the database.
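For example, a non-query command can be sent like this (a sketch against a dbo.Blogs table):

```csharp
using (var context = new BloggingContext())
{
    // Executes the UPDATE directly; the context is unaware of the change
    context.Database.ExecuteSqlCommand(
        "UPDATE dbo.Blogs SET Name = 'Another Name' WHERE BlogId = @p0", 1);
}
```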
Output Parameters
If output parameters are used, their values will not be available until the results have been read completely. This is
due to the underlying behavior of DbDataReader, see Retrieving Data Using a DataReader for more details.
Loading Related Entities
9/27/2018 • 5 minutes to read
Entity Framework supports three ways to load related data - eager loading, lazy loading and explicit loading. The
techniques shown in this topic apply equally to models created with Code First and the EF Designer.
Eagerly Loading
Eager loading is the process whereby a query for one type of entity also loads related entities as part of the query.
Eager loading is achieved by use of the Include method. For example, the queries below will load blogs and all the
posts related to each blog.
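A sketch of both forms, assuming Blog has a Posts navigation property:

```csharp
using (var context = new BloggingContext())
{
    // Load all blogs and related posts
    var blogs1 = context.Blogs
        .Include(b => b.Posts)
        .ToList();

    // Load all blogs and related posts, using a string to specify the relationship
    var blogs2 = context.Blogs
        .Include("Posts")
        .ToList();
}
```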
Note that Include is an extension method in the System.Data.Entity namespace so make sure you are using that
namespace.
Eagerly loading multiple levels
It is also possible to eagerly load multiple levels of related entities. The queries below show examples of how to do
this for both collection and reference navigation properties.
using (var context = new BloggingContext())
{
// Load all blogs, all related posts, and all related comments
var blogs1 = context.Blogs
.Include(b => b.Posts.Select(p => p.Comments))
.ToList();
// Load all blogs, all related posts, and all related comments
// using a string to specify the relationships
var blogs2 = context.Blogs
.Include("Posts.Comments")
.ToList();
}
Note that it is not currently possible to filter which related entities are loaded. Include will always bring in all
related entities.
Lazy Loading
Lazy loading is the process whereby an entity or collection of entities is automatically loaded from the database
the first time that a property referring to the entity/entities is accessed. When using POCO entity types, lazy
loading is achieved by creating instances of derived proxy types and then overriding virtual properties to add the
loading hook. For example, when using the Blog entity class defined below, the related Posts will be loaded the
first time the Posts navigation property is accessed:
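A sketch of such an entity class; the virtual modifier is what allows the generated proxy to override the property and add the loading hook:

```csharp
public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }

    // virtual so the lazy-loading proxy can override it
    public virtual ICollection<Post> Posts { get; set; }
}
```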
Loading of the Posts collection can still be achieved using eager loading (see Eagerly Loading above) or the Load
method (see Explicitly Loading below).
Turn off lazy loading for all entities
Lazy loading can be turned off for all entities in the context by setting a flag on the Configuration property. For
example:
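A sketch of disabling lazy loading in the context's constructor:

```csharp
public class BloggingContext : DbContext
{
    public BloggingContext()
    {
        // Navigation properties will no longer load automatically on access
        this.Configuration.LazyLoadingEnabled = false;
    }
}
```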
Loading of related entities can still be achieved using eager loading (see Eagerly Loading above) or the Load
method (see Explicitly Loading below).
Explicitly Loading
Even with lazy loading disabled it is still possible to lazily load related entities, but it must be done with an explicit
call. To do so you use the Load method on the related entity’s entry. For example:
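A sketch of explicit loading for both a reference and a collection navigation property:

```csharp
using (var context = new BloggingContext())
{
    var post = context.Posts.Find(2);

    // Load the blog related to a given post
    context.Entry(post).Reference(p => p.Blog).Load();

    var blog = context.Blogs.Find(1);

    // Load the posts related to a given blog
    context.Entry(blog).Collection(b => b.Posts).Load();
}
```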
Note that the Reference method should be used when an entity has a navigation property to another single entity.
On the other hand, the Collection method should be used when an entity has a navigation property to a collection
of other entities.
Applying filters when explicitly loading related entities
The Query method provides access to the underlying query that Entity Framework will use when loading related
entities. You can then use LINQ to apply filters to the query before executing it with a call to a LINQ extension
method such as ToList, Load, etc. The Query method can be used with both reference and collection navigation
properties but is most useful for collections where it can be used to load only part of the collection. For example:
// Load the posts with the 'entity-framework' tag related to a given blog
context.Entry(blog)
.Collection(b => b.Posts)
.Query()
.Where(p => p.Tags.Contains("entity-framework"))
.Load();
// Load the posts with the 'entity-framework' tag related to a given blog
// using a string to specify the relationship
context.Entry(blog)
.Collection("Posts")
.Query()
.Where(p => p.Tags.Contains("entity-framework"))
.Load();
}
When using the Query method it is usually best to turn off lazy loading for the navigation property. This is
because otherwise the entire collection may get loaded automatically by the lazy loading mechanism either before
or after the filtered query has been executed.
Note that while the relationship can be specified as a string instead of a lambda expression, the returned
IQueryable is not generic when a string is used and so the Cast method is usually needed before anything useful
can be done with it.
In this section you can find information about EF's change tracking capabilities and what happens when you call
SaveChanges to persist any changes to objects into the database.
Automatic detect changes
9/13/2018 • 2 minutes to read
When using most POCO entities the determination of how an entity has changed (and therefore which updates
need to be sent to the database) is handled by the Detect Changes algorithm. Detect Changes works by detecting
the differences between the current property values of the entity and the original property values that are stored in
a snapshot when the entity was queried or attached. The techniques shown in this topic apply equally to models
created with Code First and the EF Designer.
By default, Entity Framework performs Detect Changes automatically when the following methods are called:
DbSet.Find
DbSet.Local
DbSet.Add
DbSet.AddRange
DbSet.Remove
DbSet.RemoveRange
DbSet.Attach
DbContext.SaveChanges
DbContext.GetValidationErrors
DbContext.Entry
DbChangeTracker.Entries
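The disable/re-enable pattern discussed below can be sketched like this (the loop body is illustrative):

```csharp
using (var context = new BloggingContext())
{
    try
    {
        context.Configuration.AutoDetectChangesEnabled = false;

        // Many adds without the overhead of automatic DetectChanges
        for (var i = 0; i < 1000; i++)
        {
            context.Blogs.Add(new Blog { Name = "Blog " + i });
        }
    }
    finally
    {
        // Always restore automatic detection, even if the loop throws
        context.Configuration.AutoDetectChangesEnabled = true;
    }

    context.SaveChanges();
}
```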
Don’t forget to re-enable detection of changes after the loop — we've used a try/finally to ensure it is always re-enabled even if code in the loop throws an exception.
An alternative to disabling and re-enabling is to leave automatic detection of changes turned off at all times and
either call context.ChangeTracker.DetectChanges explicitly or use change tracking proxies diligently. Both of these
options are advanced and can easily introduce subtle bugs into your application so use them with care.
If you need to add or remove many objects from a context, consider using DbSet.AddRange and DbSet.RemoveRange. These methods automatically detect changes only once, after the add or remove operations are completed.
Working with entity states
12/4/2018 • 6 minutes to read
This topic will cover how to add and attach entities to a context and how Entity Framework processes these during
SaveChanges. Entity Framework takes care of tracking the state of entities while they are connected to a context,
but in disconnected or N-Tier scenarios you can let EF know what state your entities should be in. The techniques
shown in this topic apply equally to models created with Code First and the EF Designer.
Another way to add a new entity to the context is to change its state to Added. For example:
context.SaveChanges();
}
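A sketch of adding by setting the state:

```csharp
using (var context = new BloggingContext())
{
    var blog = new Blog { Name = "ADO.NET Blog" };

    // Equivalent to context.Blogs.Add(blog)
    context.Entry(blog).State = EntityState.Added;

    context.SaveChanges();
}
```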
Note that for all of these examples if the entity being added has references to other entities that are not yet tracked
then these new entities will also be added to the context and will be inserted into the database the next time that
SaveChanges is called.
context.SaveChanges();
}
Note that no changes will be made to the database if SaveChanges is called without doing any other manipulation
of the attached entity. This is because the entity is in the Unchanged state.
Another way to attach an existing entity to the context is to change its state to Unchanged. For example:
context.SaveChanges();
}
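A sketch of attaching by setting the state:

```csharp
using (var context = new BloggingContext())
{
    var existingBlog = new Blog { BlogId = 1, Name = "ADO.NET Blog" };

    // Equivalent to context.Blogs.Attach(existingBlog)
    context.Entry(existingBlog).State = EntityState.Unchanged;

    context.SaveChanges();
}
```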
Note that for both of these examples, if the entity being attached has references to other entities that are not yet
tracked, then these new entities will also be attached to the context in the Unchanged state.
context.SaveChanges();
}
When you change the state to Modified all the properties of the entity will be marked as modified and all the
property values will be sent to the database when SaveChanges is called.
Note that if the entity being attached has references to other entities that are not yet tracked, then these new
entities will be attached to the context in the Unchanged state—they will not automatically be made Modified. If you
have multiple entities that need to be marked Modified you should set the state for each of these entities
individually.
context.SaveChanges();
}
Note that calling Add or Attach for an entity that is already tracked can also be used to change the entity state. For
example, calling Attach for an entity that is currently in the Added state will change its state to Unchanged.
context.SaveChanges();
}
}
Note that when you change the state to Modified all the properties of the entity will be marked as modified and all
the property values will be sent to the database when SaveChanges is called.
Working with property values
6/27/2019 • 10 minutes to read
For the most part Entity Framework will take care of tracking the state, original values, and current values of the
properties of your entity instances. However, there may be some cases - such as disconnected scenarios - where
you want to view or manipulate the information EF has about the properties. The techniques shown in this topic
apply equally to models created with Code First and the EF Designer.
Entity Framework keeps track of two values for each property of a tracked entity. The current value is, as the name
indicates, the current value of the property in the entity. The original value is the value that the property had when
the entity was queried from the database or attached to the context.
There are two general mechanisms for working with property values:
The value of a single property can be obtained in a strongly typed way using the Property method.
Values for all properties of an entity can be read into a DbPropertyValues object. DbPropertyValues then acts as
a dictionary-like object to allow property values to be read and set. The values in a DbPropertyValues object
can be set from values in another DbPropertyValues object or from values in some other object, such as
another copy of the entity or a simple data transfer object (DTO).
The sections below show examples of using both of the above mechanisms.
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Find(3);

    // Read the current value of the Name property using a string for the property name
    object currentName2 = context.Entry(blog).Property("Name").CurrentValue;

    // Set the Name property to a new value using a string for the property name
    context.Entry(blog).Property("Name").CurrentValue = "My Boring Blog";
}
Use the OriginalValue property instead of the CurrentValue property to read or set the original value.
Note that the returned value is typed as “object” when a string is used to specify the property name. On the other
hand, the returned value is strongly typed if a lambda expression is used.
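For comparison, a strongly typed sketch using a lambda expression:

```csharp
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Find(3);

    // Strongly typed: currentName is a string, no cast required
    string currentName = context.Entry(blog).Property(b => b.Name).CurrentValue;
}
```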
Setting the property value like this will only mark the property as modified if the new value is different from the
old value.
When a property value is set in this way the change is automatically detected even if AutoDetectChanges is turned
off.
Getting and setting the current value of an unmapped property
The current value of a property that is not mapped to the database can also be read. An example of an unmapped
property could be an RssLink property on Blog. This value may be calculated based on the BlogId, and therefore
doesn't need to be stored in the database. For example:
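A sketch of reading such an unmapped property:

```csharp
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Find(1);

    // RssLink is calculated from BlogId and is not mapped to the database
    var rssLink = context.Entry(blog).Property(b => b.RssLink).CurrentValue;
}
```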
The current value can also be set if the property exposes a setter.
Reading the values of unmapped properties is useful when performing Entity Framework validation of unmapped
properties. For the same reason current values can be read and set for properties of entities that are not currently
being tracked by the context. For example:
Note that original values are not available for unmapped properties or for properties of entities that are not being
tracked by the context.
The values of modified properties are sent as updates to the database when SaveChanges is called.
Marking a property as modified forces an update to be sent to the database for the property when SaveChanges
is called even if the current value of the property is the same as its original value.
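A sketch of forcing an update for a single property:

```csharp
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Find(1);

    // Force Name to be included in the UPDATE even if its value is unchanged
    context.Entry(blog).Property(b => b.Name).IsModified = true;

    context.SaveChanges();
}
```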
It is not currently possible to reset an individual property to be not modified after it has been marked as modified.
This is something we plan to support in a future release.
Console.WriteLine("\nOriginal values:");
PrintValues(context.Entry(blog).OriginalValues);
Console.WriteLine("\nDatabase values:");
PrintValues(context.Entry(blog).GetDatabaseValues());
}
The current values are the values that the properties of the entity currently contain. The original values are the
values that were read from the database when the entity was queried. The database values are the values as they
are currently stored in the database. Getting the database values is useful when the values in the database may
have changed since the entity was queried such as when a concurrent edit to the database has been made by
another user.
// Change the current and original values by copying the values from other objects
var entry = context.Entry(blog);
entry.CurrentValues.SetValues(coolBlog);
entry.OriginalValues.SetValues(boringBlog);
Console.WriteLine("\nOriginal values:");
PrintValues(entry.OriginalValues);
}
Current values:
Property Id has value 1
Property Name has value My Cool Blog
Original values:
Property Id has value 1
Property Name has value My Boring Blog
This technique is sometimes used when updating an entity with values obtained from a service call or a client in an
n-tier application. Note that the object used does not have to be of the same type as the entity so long as it has
properties whose names match those of the entity. In the example above, an instance of BlogDTO is used to
update the original values.
Note that only properties that are set to different values when copied from the other object will be marked as
modified.
PrintValues(currentValues);
}
Use the OriginalValues property instead of the CurrentValues property to set original values.
In the example above complex properties are accessed using dotted names. For other ways to access complex
properties see the two sections later in this topic specifically about complex properties.
Note that the object returned is not the entity and is not being tracked by the context. The returned object also
does not have any relationships set to other objects.
The cloned object can be useful for resolving issues related to concurrent updates to the database, especially
where a UI that involves data binding to objects of a certain type is being used.
// Get the nested State complex object using a single lambda expression
var state2 = context.Entry(user)
.Property(u => u.Location.State)
.CurrentValue;
// Get the value of the Name property on the nested State complex object using chained calls
var name1 = context.Entry(user)
.ComplexProperty(u => u.Location)
.ComplexProperty(l => l.State)
.Property(s => s.Name)
.CurrentValue;
// Get the value of the Name property on the nested State complex object using a single lambda expression
var name2 = context.Entry(user)
.Property(u => u.Location.State.Name)
.CurrentValue;
// Get the value of the Name property on the nested State complex object using a dotted string
var name3 = context.Entry(user)
.Property("Location.State.Name")
.CurrentValue;
}
Use the OriginalValue property instead of the CurrentValue property to get or set an original value.
Note that either the Property or the ComplexProperty method can be used to access a complex property.
However, the ComplexProperty method must be used if you wish to drill down into the complex object with
additional Property or ComplexProperty calls.
To print out all current property values the method would be called like this:
WritePropertyValues("", context.Entry(user).CurrentValues);
}
Handling Concurrency Conflicts
9/13/2018 • 5 minutes to read
Optimistic concurrency involves optimistically attempting to save your entity to the database in the hope that the
data there has not changed since the entity was loaded. If it turns out that the data has changed then an exception
is thrown and you must resolve the conflict before attempting to save again. This topic covers how to handle such
exceptions in Entity Framework. The techniques shown in this topic apply equally to models created with Code
First and the EF Designer.
This topic is not the appropriate place for a full discussion of optimistic concurrency. The sections below assume
some knowledge of concurrency resolution and show patterns for common tasks.
Many of these patterns make use of the topics discussed in Working with Property Values.
Resolving concurrency issues when you are using independent associations (where the foreign key is not mapped
to a property in your entity) is much more difficult than when you are using foreign key associations. Therefore if
you are going to do concurrency resolution in your application it is advised that you always map foreign keys into
your entities. All the examples below assume that you are using foreign key associations.
A DbUpdateConcurrencyException is thrown by SaveChanges when an optimistic concurrency exception is
detected while attempting to save an entity that uses foreign key associations.
bool saveFailed;
do
{
saveFailed = false;
try
{
context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
saveFailed = true;
// Update the values of the entity that failed to save from the store
ex.Entries.Single().Reload();
}
} while (saveFailed);
}
A good way to simulate a concurrency exception is to set a breakpoint on the SaveChanges call and then modify
an entity that is being saved in the database using another tool such as SQL Management Studio. You could also
insert a line before SaveChanges to update the database directly using SqlCommand. For example:
context.Database.ExecuteSqlCommand(
"UPDATE dbo.Blogs SET Name = 'Another Name' WHERE BlogId = 1");
The Entries method on DbUpdateConcurrencyException returns the DbEntityEntry instances for the entities that
failed to update. (This property currently always returns a single value for concurrency issues. It may return
multiple values for general update exceptions.) An alternative for some situations might be to get entries for all
entities that may need to be reloaded from the database and call reload for each of these.
bool saveFailed;
do
{
saveFailed = false;
try
{
context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
saveFailed = true;
}
} while (saveFailed);
}
bool saveFailed;
do
{
saveFailed = false;
try
{
context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
saveFailed = true;
// Get the current entity values and the values in the database
var entry = ex.Entries.Single();
var currentValues = entry.CurrentValues;
var databaseValues = entry.GetDatabaseValues();
bool saveFailed;
do
{
saveFailed = false;
try
{
context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
saveFailed = true;
// Get the current entity values and the values in the database
// as instances of the entity type
var entry = ex.Entries.Single();
var databaseValues = entry.GetDatabaseValues();
var databaseValuesAsBlog = (Blog)databaseValues.ToObject();
}
} while (saveFailed);
}
NOTE
EF6 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 6. If you are using
an earlier version, some or all of the information does not apply.
This document will describe using transactions in EF6 including the enhancements we have added since EF5 to
make working with transactions easy.
NOTE
The limitation of only accepting closed connections was removed in Entity Framework 6. For details, see Connection
Management.
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;
using System.Transactions;
namespace TransactionsExamples
{
class TransactionsExample
{
static void StartOwnTransactionWithinContext()
{
using (var context = new BloggingContext())
{
using (var dbContextTransaction = context.Database.BeginTransaction())
{
try
{
context.Database.ExecuteSqlCommand(
@"UPDATE Blogs SET Rating = 5" +
" WHERE Name LIKE '%Entity Framework%'"
);
context.SaveChanges();
dbContextTransaction.Commit();
}
catch (Exception)
{
dbContextTransaction.Rollback();
}
}
}
}
}
}
NOTE
Beginning a transaction requires that the underlying store connection is open. So calling Database.BeginTransaction() will
open the connection if it is not already opened. If DbContextTransaction opened the connection then it will close it when
Dispose() is called.
NOTE
The contextOwnsConnection flag must be set to false when called in this scenario. This is important as it informs Entity
Framework that it should not close the connection when it is done with it (for example, see line 4 below):
Furthermore, you must start the transaction yourself (including the IsolationLevel if you want to avoid the default
setting) and let Entity Framework know that there is an existing transaction already started on the connection (see
line 33 below).
Then you are free to execute database operations either directly on the SqlConnection itself, or on the DbContext.
All such operations are executed within one transaction. You take responsibility for committing or rolling back the
transaction and for calling Dispose() on it, as well as for closing and disposing the database connection. For
example:
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;
using System.Transactions;

namespace TransactionsExamples
{
    class TransactionsExample
    {
        static void UsingExternalTransaction()
        {
            using (var conn = new SqlConnection("..."))
            {
                conn.Open();

                using (var sqlTxn = conn.BeginTransaction(System.Data.IsolationLevel.Snapshot))
                {
                    try
                    {
                        // contextOwnsConnection must be false so EF does not close the connection
                        using (var context = new BloggingContext(conn, contextOwnsConnection: false))
                        {
                            // Tell EF about the transaction already started on the connection
                            context.Database.UseTransaction(sqlTxn);
                            context.SaveChanges();
                        }

                        sqlTxn.Commit();
                    }
                    catch (Exception)
                    {
                        sqlTxn.Rollback();
                    }
                }
            }
        }
    }
}
namespace TransactionsExamples
{
    class TransactionsExample
    {
        static void UsingTransactionScope()
        {
            using (var scope = new TransactionScope(TransactionScopeOption.Required))
            {
                using (var conn = new SqlConnection("..."))
                {
                    conn.Open();

                    using (var context = new BloggingContext(conn, contextOwnsConnection: false))
                    {
                        context.Database.ExecuteSqlCommand(
                            @"UPDATE Blogs SET Rating = 5" +
                            " WHERE Name LIKE '%Entity Framework%'");

                        context.SaveChanges();
                    }
                }

                scope.Complete();
            }
        }
    }
}
The SqlConnection and Entity Framework would both use the ambient TransactionScope transaction and hence
be committed together.
Starting with .NET 4.5.1 TransactionScope has been updated to also work with asynchronous methods via the use
of the TransactionScopeAsyncFlowOption enumeration:
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;
using System.Threading.Tasks;
using System.Transactions;

namespace TransactionsExamples
{
    class TransactionsExample
    {
        public static async Task AsyncTransactionScope()
        {
            using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
            {
                using (var conn = new SqlConnection("..."))
                {
                    await conn.OpenAsync();

                    using (var context = new BloggingContext(conn, contextOwnsConnection: false))
                    {
                        await context.SaveChangesAsync();
                    }
                }

                scope.Complete();
            }
        }
    }
}
NOTE
EF4.1 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 4.1. If you are using an earlier version, some or all of the information does not apply.
The content on this page is adapted from an article originally written by Julie Lerman (http://thedatafarm.com).
Entity Framework provides a great variety of validation features that can feed through to a user interface for client-side validation or be used for server-side validation. When using Code First, you can specify validations using annotations or fluent API configurations. Additional, more complex validations can be specified in code and will work whether your model comes from Code First, Model First, or Database First.
The model
I’ll demonstrate the validations with a simple pair of classes: Blog and Post.
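The class definitions are not shown in this excerpt. A minimal sketch of what such a pair might look like, assuming conventional Code First classes (the Title and BloggerName properties are taken from the validations discussed below; the remaining members are illustrative):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the two model classes used in this article's examples.
public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string BloggerName { get; set; }
    public ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime? DateCreated { get; set; }
    public string Content { get; set; }
    public int BlogId { get; set; }
}
```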
Data Annotations
Code First uses annotations from the System.ComponentModel.DataAnnotations assembly as one means of
configuring code first classes. Among these annotations are those which provide rules such as the Required,
MaxLength and MinLength. A number of .NET client applications also recognize these annotations, for example,
ASP.NET MVC. You can achieve both client side and server side validation with these annotations. For example,
you can force the Blog Title property to be a required property.
[Required]
public string Title { get; set; }
With no additional code or markup changes in the application, an existing MVC application will perform client side
validation, even dynamically building a message using the property and annotation names.
In the post back method of this Create view, Entity Framework is used to save the new blog to the database, but
MVC’s client-side validation is triggered before the application reaches that code.
Client-side validation is not bullet-proof, however. Users can disable features of their browser or, worse yet, a hacker might use some trickery to avoid the UI validations. But Entity Framework will also recognize the Required annotation and validate it.
A simple way to test this is to disable MVC’s client-side validation feature. You can do this in the MVC application’s
web.config file. The appSettings section has a key for ClientValidationEnabled. Setting this key to false will prevent
the UI from performing validations.
<appSettings>
    <add key="ClientValidationEnabled" value="false"/>
    ...
</appSettings>
Even with client-side validation disabled, you will get the same response in your application. The error message “The Title field is required” will be displayed as before, except now as a result of server-side validation. Entity Framework will perform the validation on the Required annotation (before it even bothers to build an INSERT command to send to the database) and return the error to MVC, which will display the message.
Fluent API
You can use code first’s fluent API instead of annotations to get the same client side & server side validation.
Rather than use Required, I’ll show you this using a MaxLength validation.
Fluent API configurations are applied as code first is building the model from the classes. You can inject the
configurations by overriding the DbContext class’ OnModelCreating method. Here is a configuration specifying
that the BloggerName property can be no longer than 10 characters.
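The configuration itself is not shown in this excerpt. A minimal sketch of such an OnModelCreating override, assuming a context class named BloggingContext (the class and DbSet names are illustrative):

```csharp
using System.Data.Entity;

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Limit BloggerName to a maximum of 10 characters
        modelBuilder.Entity<Blog>()
            .Property(p => p.BloggerName)
            .HasMaxLength(10);
    }
}
```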
Validation errors thrown based on the fluent API configurations will not automatically reach the UI, but you can capture them in code and then respond to them accordingly.
Here’s some exception handling error code in the application’s BlogController class that captures that validation
error when Entity Framework attempts to save a blog with a BloggerName that exceeds the 10 character
maximum.
[HttpPost]
public ActionResult Edit(int id, Blog blog)
{
    try
    {
        db.Entry(blog).State = EntityState.Modified;
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    catch (DbEntityValidationException ex)
    {
        var error = ex.EntityValidationErrors.First().ValidationErrors.First();
        this.ModelState.AddModelError(error.PropertyName, error.ErrorMessage);
        return View();
    }
}
The validation error doesn’t automatically get passed back into the view, which is why the additional code that uses ModelState.AddModelError is needed. This ensures that the error details make it to the view, which will then use the ValidationMessageFor HtmlHelper to display the error.
IValidatableObject
IValidatableObject is an interface that lives in System.ComponentModel.DataAnnotations. While it is not part of the
Entity Framework API, you can still leverage it for server-side validation in your Entity Framework classes.
IValidatableObject provides a Validate method that Entity Framework will call during SaveChanges or you can call
yourself any time you want to validate the classes.
Configurations such as Required and MaxLength perform validation on a single field. In the Validate method you can have even more complex logic, for example, comparing two fields.
In the following example, the Blog class has been extended to implement IValidatableObject and then provide a
rule that the Title and BloggerName cannot match.
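The extended class is not shown in this excerpt. A minimal sketch of such an implementation, assuming the Blog class described earlier (the error message text is illustrative):

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Blog : IValidatableObject
{
    public int Id { get; set; }

    [Required]
    public string Title { get; set; }

    public string BloggerName { get; set; }

    // Called by Entity Framework during SaveChanges, or directly by your own code
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (Title == BloggerName)
        {
            yield return new ValidationResult(
                "Blog Title cannot match Blogger Name",
                new[] { "Title", "BloggerName" });
        }
    }
}
```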
DbContext.ValidateEntity
DbContext has an Overridable method called ValidateEntity. When you call SaveChanges, Entity Framework will
call this method for each entity in its cache whose state is not Unchanged. You can put validation logic directly in
here or even use this method to call, for example, the Blog.Validate method added in the previous section.
Here’s an example of a ValidateEntity override that validates new Posts to ensure that the post title hasn’t been used already. It first checks to see if the entity is a post and that its state is Added. If that’s the case, then it looks in the database to see if there is already a post with the same title. If there is, a validation error is added to the result.
DbEntityValidationResult houses a DbEntityEntry and an ICollection of DbValidationErrors for a single entity. At the
start of this method, a DbEntityValidationResult is instantiated and then any errors that are discovered are added
into its ValidationErrors collection.
protected override DbEntityValidationResult ValidateEntity(
    System.Data.Entity.Infrastructure.DbEntityEntry entityEntry,
    IDictionary<object, object> items)
{
    var result = new DbEntityValidationResult(entityEntry, new List<DbValidationError>());

    if (entityEntry.Entity is Post && entityEntry.State == EntityState.Added)
    {
        Post post = entityEntry.Entity as Post;

        // Check for uniqueness of post title
        if (Posts.Where(p => p.Title == post.Title).Count() > 0)
        {
            result.ValidationErrors.Add(
                new System.Data.Entity.Validation.DbValidationError(
                    "Title",
                    "Post title must be unique."));
        }
    }

    if (result.ValidationErrors.Count > 0)
    {
        return result;
    }
    else
    {
        return base.ValidateEntity(entityEntry, items);
    }
}
Summary
The validation API in Entity Framework plays very nicely with client side validation in MVC but you don't have to
rely on client-side validation. Entity Framework will take care of the validation on the server side for
DataAnnotations or configurations you've applied with the code first Fluent API.
You also saw a number of extensibility points for customizing the behavior, whether you use the IValidatableObject interface or tap into the DbContext.ValidateEntity method. And these last two means of validation are available through the DbContext, whether you use the Code First, Model First, or Database First workflow to describe your conceptual model.
Entity Framework Resources
9/13/2018 • 2 minutes to read
Here you will find links and references to additional information related to EF, like blogs, third-party providers,
tools and extensions, case studies, etc.
Entity Framework Blogs
9/13/2018 • 2 minutes to read
Besides the product documentation, these blogs can be a source of useful information on Entity Framework:
EF Team blogs
.NET Blog - Tag: Entity Framework
ADO.NET Blog (no longer in use)
EF Design Blog (no longer in use)
EF Community Bloggers
Julie Lerman
Shawn Wildermuth
Microsoft Case Studies for Entity Framework
9/13/2018 • 3 minutes to read
The case studies on this page highlight a few real-world production projects that have employed Entity Framework.
NOTE
The detailed versions of these case studies are no longer available on the Microsoft website. Therefore the links have been
removed.
Epicor
Epicor is a large global software company (with over 400 developers) that develops Enterprise Resource Planning (ERP) solutions for companies in more than 150 countries. Their flagship product, Epicor 9, is based on a Service-Oriented Architecture (SOA) using the .NET Framework. Faced with numerous customer requests to provide support for Language Integrated Query (LINQ), and also wanting to reduce the load on their back-end SQL Servers, the team decided to upgrade to Visual Studio 2010 and the .NET Framework 4.0. Using the Entity Framework 4.0, they were able to achieve these goals and also greatly simplify development and maintenance. In particular, the Entity Framework’s rich T4 support allowed them to take full control of their generated code and automatically build in performance-saving features such as pre-compiled queries and caching.
“We conducted some performance tests recently with existing code, and we were able to reduce the requests to
SQL Server by 90 percent. That is because of the ADO.NET Entity Framework 4.” – Erik Johnson, Vice
President, Product Research
Veracity Solutions
Having acquired an event-planning software system that was going to be difficult to maintain and extend over the
long-term, Veracity Solutions used Visual Studio 2010 to re-write it as a powerful and easy-to-use Rich Internet
Application built on Silverlight 4. Using .NET RIA Services, they were able to quickly build a service layer on top of
the Entity Framework that avoided code duplication and allowed for common validation and authentication logic
across tiers.
“We were sold on the Entity Framework when it was first introduced, and the Entity Framework 4 has proven
to be even better. Tooling is improved, and it’s easier to manipulate the .edmx files that define the conceptual
model, storage model, and mapping between those models... With the Entity Framework, I can get that data
access layer working in a day—and build it out as I go along. The Entity Framework is our de facto data access
layer; I don’t know why anyone wouldn’t use it.” – Joe McBride, Senior Developer
“With SQL Server, we felt we could get the throughput we needed to serve advertisers and networks with
information in real time and the reliability to help ensure that the information in our mission-critical
applications would always be available”- Mike Corcoran, Director of IT
Darwin Dimensions
Using a wide range of Microsoft technologies, the team at Darwin set out to create Evolver - an online avatar portal that consumers could use to create stunning, lifelike avatars for use in games, animations, and social networking pages. With the productivity benefits of the Entity Framework, and pulling in components like Windows Workflow Foundation (WF) and Windows Server AppFabric (a highly-scalable in-memory application cache), the team was able to deliver an amazing product in 35% less development time. Despite having team members split across multiple countries, the team followed an agile development process with weekly releases.
“We try not to create technology for technology’s sake. As a startup, it is crucial that we leverage technology
that saves time and money. .NET was the choice for fast, cost-effective development.” – Zachary Olsen,
Architect
Silverware
With more than 15 years of experience in developing point-of-sale (POS) solutions for small and midsize restaurant groups, the development team at Silverware set out to enhance their product with more enterprise-level features in order to attract larger restaurant chains. Using the latest version of Microsoft’s development tools, they were able to build the new solution four times faster than before. Key new features like LINQ and the Entity Framework made it easier to move from Crystal Reports to SQL Server 2008 and SQL Server Reporting Services (SSRS) for their data storage and reporting needs.
“Effective data management is key to the success of SilverWare – and this is why we decided to adopt SQL
Reporting.” - Nicholas Romanidis, Director of IT/Software Engineering
Contribute to Entity Framework 6
9/13/2018 • 2 minutes to read
Entity Framework 6 is developed using an open source model on GitHub. Although the main focus of the Entity
Framework Team at Microsoft is on adding new features to Entity Framework Core, and we don't expect any major
features to be added to Entity Framework 6, we still accept contributions.
For product contributions, please start at the Contributing wiki page in our GitHub repository.
For documentation contributions, please start by reading the contribution guidance in our documentation repository.
Get Help Using Entity Framework
9/13/2018 • 2 minutes to read
Code First
Creating an Entity Framework model using code. The model can target an existing database or a new database.
Context
A class that represents a session with the database, allowing you to query and save data. A context derives from
the DbContext or ObjectContext class.
Database First
Creating an Entity Framework model, using the EF Designer, that targets an existing database.
Eager loading
A pattern of loading related data where a query for one type of entity also loads related entities as part of the
query.
EF Designer
A visual designer in Visual Studio that allows you to create an Entity Framework model using boxes and lines.
Entity
A class or object that represents application data such as customers, products, and orders.
Explicit loading
A pattern of loading related data where related objects are loaded by calling an API.
Fluent API
An API that can be used to configure a Code First model.
Identifying relationship
A relationship where the primary key of the principal entity is part of the primary key of the dependent entity. In
this kind of relationship, the dependent entity cannot exist without the principal entity.
Independent association
An association between entities where there is no property representing the foreign key in the class of the
dependent entity. For example, a Product class contains a relationship to Category but no CategoryId property.
Entity Framework tracks the state of the association independently of the state of the entities at the two association
ends.
Lazy loading
A pattern of loading related data where related objects are automatically loaded when a navigation property is
accessed.
Model First
Creating an Entity Framework model, using the EF Designer, that is then used to create a new database.
Navigation property
A property of an entity that references another entity. For example, Product contains a Category navigation
property and Category contains a Products navigation property.
POCO
Acronym for Plain-Old CLR Object. A simple user class that has no dependencies on any framework. In the context of EF, an entity class that does not derive from EntityObject, implement any interfaces, or carry any attributes defined in EF. Such entity classes, which are decoupled from the persistence framework, are also said to be "persistence ignorant".
Relationship inverse
The opposite end of a relationship, for example, product.Category and category.Product.
Self-tracking entity
An entity built from a code generation template that helps with N-Tier development.
Table-per-hierarchy (TPH)
A method of mapping inheritance where all types in the hierarchy are mapped to the same table in the database. One or more discriminator columns are used to identify which type each row is associated with.
Table-per-type (TPT)
A method of mapping inheritance where the common properties of all types in the hierarchy are mapped to the same table in the database, but properties unique to each type are mapped to a separate table.
Type discovery
The process of identifying the types that should be part of an Entity Framework model.
School Sample Database
9/13/2018 • 14 minutes to read
This topic contains the schema and data for the School database. The sample School database is used in various
places throughout the Entity Framework documentation.
NOTE
The database server that is installed with Visual Studio is different depending on the version of Visual Studio you use. See
Visual Studio Releases for details on what to use.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
IMPORTANT
Extensions are built by a variety of sources and not maintained as part of Entity Framework. When considering a third party
extension, be sure to evaluate quality, licensing, compatibility, support, etc. to ensure they meet your requirements.
Entity Framework has been a popular O/RM for many years. Here are some examples of free and paid tools and
extensions developed for it:
EF Power Tools Community Edition
EF Profiler
ORM Profiler
LINQPad
LLBLGen Pro
Huagati DBML/EDMX Tools
Entity Developer
Entity Framework 5 License (CHS)
9/13/2018 • 2 minutes to read
下列许可条款说明了本补充程序的使用条款。这些条款和软件的许可条款在您使用本补充程序时适用。如果发生冲
突,则以这些补充程序许可条款为准。
如果您遵守 这 些 许 可条款,您将具有下列 权 利。
1. **可分发代码。**本补充程序中包含"可分发代码"。"可分发代码"是指,如果您遵守下述条款,则可以在您开发的
程序中分发这些代码。
a. 使用和分 发权 。
您可以复制和分发对象代码形式的补充程序。
第三方分发。您可以允许您的程序分销商作为这些程序的一部分复制和分发"可分发代码"。
c. 分 发 限制。您不可以
更改"可分发代码"中的任何版权、商标或专利声明;
在您的程序名称中使用 Microsoft 的商标,或者以其他方式暗示您的程序来自 Microsoft 或经 Microsoft 认可;
分发可分发代码,以便在 Windows 平台以外的任何平台上运行;
在恶意的、欺骗性的或非法的程序中包括可分发代码;或者
修改或分发任何可分发代码的源代码,致使其任何部分受到"排除许可"的制约。"排除许可"指符合以下使用、修
改或分发条件的许可:
以源代码形式披露或分发代码;或
其他人有权对其进行修改。
MICROSOFT 軟體增補程式授權條款
ENTITY FRAMEWORK 5.0 之 MICROSOFT WINDOWS OPERATING SYSTEM
Microsoft 公司(或其關係企業,視 貴用戶所居住的地點而定)授權 貴用戶使用本增補程式。 若 貴用戶取得
Microsoft Windows Operating System 軟體(「軟體」)之使用授權,即可使用本增補程式。 貴用戶若未取得軟體授
權,即不得使用本增補程式。 貴用戶擁有之每份有效授權軟體拷貝,均得使用本增補程式。
以下授權款條說明本增補程式之其他使用條款。 貴用戶使用本增補程式時,請遵守上述條款與軟體授權條款。若發
生使用爭議,亦適用這些補充的授權條款。
**1. 可散布程式碼。**增補程式由「可散佈程式碼」構成。「可散布程式碼」係若 貴用戶遵守以下條款,則可於 貴用
戶開發的程式中散布之程式碼。
a. 使用及散布的權利。
貴用戶得以目的碼形式複製與散布增補程式。
第三者廠商散布。 貴用戶得同意程式經銷商將「可散布程式碼」視為 貴用戶那些程式之一部分,進行複製與散
布。
b. 散布要件。針對 貴用 戶 散布的任何「可散布程式碼」, 貴用 戶 必須
在程式中,為「可散布程式碼」加入重要的新功能;
針對任何副檔名為 .lib 的「可散布程式碼」, 貴用戶僅得散布透過 貴用戶程式之連結器執行這類「可散布程式
碼」所產生的結果;
散布包含於某一安裝程式中的「可散布程式碼」時,僅能做為該安裝程式之一部分進行散布,且不得經過任何修
改;
要求散布者及外部終端使用者,需同意「保護『可散布程式碼』的程度不得低於本合約」之相關條款;
在程式中顯示有效的著作權聲明;以及
若因 貴用戶散布或使用程式而使 Microsoft 遭他人提出損害賠償請求權時, 貴用戶應賠償 Microsoft 之損失
(包括律師費),為之辯護,並使其不受損害。
c. 散布限制。 貴用 戶 不得
變更「可散布程式碼」中之任何著作權、商標或專利聲明;
於 貴用戶的程式名稱使用 Microsoft 的商標,或暗示該程式來自 Microsoft 或由 Microsoft 背書;
散布並於非 Windows 的平台上執行;
將「可散布程式碼」置於惡意、欺騙或違法的程式碼中;或
修改或散布任何「可散布程式碼」的原始碼,如此將會使其任何部分受到「排除性授權」之限制。「排除性授權」係指
在使用、修改或散布時,應遵守下列條件:
程式碼必須以原始碼形式揭露或散布,或
他人有修改的權利。
以下のライセンス条項は、について説明しています。これらの条項と本ソフトウェアのライセンス条項が本追加ソフトウェアの使用に
適用されます。両者の間に矛盾がある場合は、本追加ライセンス条項が適用されます。
本追加ソフトウェアを使用することにより、お客様はこれらの条項に同意されたものとします。これらの条項に同意されない
場合、本追加ソフトウェアを使用することはできません。
お客様がこれらのライセンス条項を遵守することを条件として、お客様は以下が許諾されます。
1. 再頒布可能コード 本追加ソフトウェアは再頒布可能コードで構成されています。「再頒布可能コード」とは、お客様が開発
されたプログラムに含めて再頒布することができるコードです。ただし、お客様は以下の条件に従うものとします。
**a. 使用および再頒布の権利 **
お客様は、本追加ソフトウェアをオブジェクト コード形式で複製し、再頒布することができます。
第三者による再頒布 お客様は、お客様のプログラムの頒布者に対して、お客様のプログラムの一部として再頒布可能コード
の複製および頒布を許可することができます。
b. 再頒布の条件 お客様は、お客様が頒布するすべての再頒布可能コードにつき、以下に従わなければなりません。
お客様のプログラムにおいて再頒布可能コードに重要な新しい機能を追加すること
.lib というファイル名拡張子が付いた再頒布可能コードの場合は、リンカーによってその再頒布可能コードを実行した結果だけ
をお客様のプログラムと共に再頒布すること。
セットアップ プログラムに含まれる再頒布可能コードを、改変されていないセットアップ プログラムの一部としてのみ頒布すること。
お客様のアプリケーションの頒布者およびエンド ユーザーに、本ライセンス条項と同等以上に再頒布可能コードを保護する条
項に同意させること
お客様のアプリケーションにお客様名義の有効な著作権表示を行うこと
お客様のプログラムの頒布または使用に関するクレームについて、マイクロソフトを免責、保護、補償すること (弁護士費用につ
いての免責、保護、補償も含む)
c. 再頒布の制限 以下の行為は一切禁止されています。
再頒布可能コードの著作権、商標または特許の表示を改変すること
お客様のプログラムの名称の一部にマイクロソフトの商標を使用したり、お客様の製品がマイクロソフトから由来したり、マイクロ
ソフトが推奨するように見せかけること
Windows プラットフォーム以外のプラット フォームで実行ために再頒布可能コードを再頒布すること
再頒布可能コードを悪質、詐欺的または違法なプログラムに組み込むこと
除外ライセンスのいずれかの条項が適用されることとなるような方法で再頒布可能コードのソース コードを改変または再頒布す
ること。「除外ライセンス」とは、使用、改変または再頒布の条件として以下の条件を満たすことを要求するライセンスです。
コードをソース コード形式で公表または頒布すること
他者が改変を行う権利を有すること
2. 本追加ソフトウェアのサポート サービス マイクロソフトは、本ソフトウェアに対し
www.support.microsoft.com/common/international.aspx で説明されるサポート サービスを提供します。
Entity Framework 5 License (KOR)
9/13/2018 • 2 minutes to read
이 추가 구성 요소를 사용하는 것으로 귀하는 아래의 조건들에 동의하게 됩니다. 동의하지 않을 경우에는 추
가 구성 요소를 사용하지 마십시오.
a. 사용 및 배포 권한.
귀하는 본 추가 구성 요소를 개체 코드 형태로 복사 및 배포할 수 있습니다.
제3 자에 의한 배포. 배포 가능 코드를 프로그램의 일부로 복사 및 배포할 수 있도록 프로그램 배포자에게 허용
할 수 있습니다.
MICROSOFT 软 件 许 可条款
MICROSOFT ENTITY FRAMEWORK
这些许可条款是Microsoft Corporation(或您所在地的 Microsoft 关联公司)与您之间达成的协议。请阅读条款内容。
这些条款适用于上述软件,包括您用来接收该软件的介质(如果有)。这些条款也适用于 Microsoft 为本软件提供的
任何
更新
补充程序
基于 Internet 的服务,以及
支持服务
(除非这些项目附带有其他条款)。如果确实附带有其他条款,应遵守那些条款。
**1. 安装和使用权利。**您可以在您的设备上安装和使用本软件任意数量的副本。
2. 其他 许 可要求和 /或使用 权 利。
**a. 可分发代码。**该软件包含可分发代码。如果您遵守下述条款,则可以在您开发的程序中分发这些代码。
i. 使用 权 利和分 发权 利。下列代 码 和文件 为 “可分 发 代 码 ”。
您可以复制和分发对象代码形式的软件文件。
第三方分发。您可以允许您的程序分销商将可分发代码作为这些程序的一部分进行复制和分发。
ii. 分 发 要求。 对 于您分 发 的任何可分 发 代 码 ,您必 须
在您的程序中为其添加重要的主要功能;
要求分销商及外部最终用户同意遵守保护条款且保护范围不得小于本协议;
在您的程序上显示有效的版权声明;以及
对于与分发或使用您的程序有关的任何索赔,为 Microsoft 提供辩护、补偿,包括支付律师费,并使
Microsoft 免受损害。
iii. 分 发 限制。您不得
改变可分发代码中的任何版权、商标或专利声明;
在您的程序名称中使用 Microsoft 的商标,或者以其他方式暗示您的程序来自 Microsoft 或经
Microsoft 认可;
分发“可分发代码”以在 Windows 平台以外的任何平台上运行;
在恶意的、欺骗性的或非法的程序中添加可分发代码;或者
修改或分发任何可分发代码的源代码,致使其任何部分受到“排除许可”的制约。排除许可指要求以
如下规定为使用、修改或分发条件的许可:
以源代码形式公布或分发代码;或者
其他人有权对其进行修改。
**4. 文档。**能够合法访问您的计算机或内部网络的所有用户都可以复制该文档,但仅供内部参考之用。
**5. 出口限制。**该软件受美国出口法律和法规的约束。您必须遵守适用于该软件的所有国内和国际出口法律和
法规。这些法律包括对目的地、最终用户和最终用途的各种限制。有关详细信息,请参阅
www.microsoft.com/exporting。
**6. 支持服务。**该软件是按“现状”提供的,所以我们可能不为其提供支持服务。
8. 适用的法律。
**a. 美国。**如果您在美国购买该软件,则对本协议的解释以及由于违反本协议而引起的索赔均以华盛顿州法
律为准并受其管辖,而不考虑冲突法原则。您所居住的州的法律管辖其他所有索赔项目,包括根据州消费者保护
法、不正当竞争法以及侵权行为提出的相关索赔。
**b. 美国以外。**如果您在其他任何国家/地区购买该软件,则应遵守该国家/地区的法律。
**9. 法律效力。**本协议规定了某些合法权利。根据您所在国家/地区的法律规定,您可能享有其他权利。您还可能
享有与您的软件卖方相关的权利。如果您所在国家/地区的法律不允许本协议改变您所在国家/地区法律赋予您的权
利,则本协议将不改变您按照所在国家/地区的法律应享有的权利。
此限制适用于:
即使 Microsoft 知道或应该知道可能会出现损害,此项限制也同样适用。由于您所在国家/地区可能不允许排除或限
制附带损害、后果性损害或其他损害的赔偿责任,因此上述限制或排除条款可能对您不适用。
Entity Framework 6 Runtime License (CHT)
9/13/2018 • 2 minutes to read
MICROSOFT 軟體授權條款
MICROSOFT ENTITY FRAMEWORK
本授權條款係一份由 貴用戶與Microsoft Corporation (或其關係企業,視 貴用戶所居住的地點而定) 之間所成立
之協議。請仔細閱讀這些授權條款。這些授權條款適用於上述軟體,包括 貴用戶所收受的媒體(如果有的話)。這些
條款亦適用於任何Microsoft 之
更新程式、
增補程式、
網際網路服務與
支援服務
但若上述項目另附有其他條款,如遇此情形,則其他條款優先適用。
若 貴用 戶 遵守本授權條款,即可永久享有以下權利。
1. 安裝與使用權利。 貴用戶得於裝置上安裝和使用任何數量之軟體拷貝。
2. 其他授權要件及 /或使用權利。
**a. 可散布程式碼。**若 貴用戶遵守以下條款,則 貴用戶得於自己開發的程式中散布軟體包含的部分程式
碼。
i. 使用及散布權利。下列程式碼與檔案為「可散布程式碼」。
貴用戶得以軟體檔案的目的碼形式複製與散布。
第三人散布。 貴用戶得同意程式經銷商將「可散布程式碼」視為 貴用戶之程式的一部分,進行複
製與散布。
ii. 散布要件。針對 貴用 戶 散布的任何「可散布程式碼」, 貴用 戶 必須
在 貴用戶的程式中,為「可散布程式碼」加入重要的主要功能;
要求散布者及外部終端使用者同意「保護『可散布程式碼』的程度不得低於本合約」之相關條款;
在程式中顯示有效的著作權標示;和
若因散布或使用 貴用戶之程式而使 Microsoft 遭他人提出索賠時, 貴用戶應賠償 Microsoft 之損
失 (包括律師費),使之免遭損害,並出面代為辯護。
iii. 散布限制。 貴用 戶 不得
變更「可散布程式碼」中之任何著作權、商標或專利聲明;
於 貴用戶的程式名稱使用 Microsoft 的商標,或暗示程式來自 Microsoft 或經由 Microsoft 背書;
散佈「可散佈程式碼」並於非 Windows 的平台上執行;
將「可散布程式碼」置於惡意、欺騙或違法的程式中;或
修改或散布任何可散布程式碼的原始碼,使其任何部分受到除外授權之約束。「除外授權」係指在使
用、修改或散布時,應遵守下列條件:
程式碼必須以原始碼形式揭露或散布,或
提供他人修改的權利。
規避軟體中所包含的科技保護措施;
對軟體進行還原工程、解編或反向組譯,但儘管有此限制相關法律仍明文允許者,不在此限;
將軟體發佈給其他人進行複製;
出租、租賃或出借軟體;
將軟體或本合約移轉給任何第三人;或者
利用軟體提供商業軟體主機服務。
**4. 說明文件。**任何有權存取 貴用戶之電腦或內部網路的人,皆得基於 貴用戶內部參考之目的,複製及使用
該說明文件。
**5. 出口限制。**軟體受到美國出口法令規定之規範。 貴用戶必須遵守適用於軟體之一切本國及國際出口法令規
定之規範。這些法規包括目的地限制、使用者限制和使用用途限制。如需詳細資訊,請參閱
www.microsoft.com/exporting。
**6. 支援服務。**本軟體係依「現況」提供,因此本公司得不提供支援服務。
**7. 整份合約。**本合約以及 貴用戶所使用的增補程式、更新程式、網際網路服務和支援服務之條款構成關於軟
體和支援服務之整份合約。
8. 準據法。
**a. 美國。**若 貴用戶在美國境內取得軟體,本合約之解釋或任何違反本合約所衍生的訴訟,無論是否有法規
衝突產生,均應以美國華盛頓州之法律做為準據法。所有其他訴訟將以 貴用戶居住之州法律為準據法,包含違
反州消費者保護法、不當競爭法和侵權行為的訴訟。
**b. 美國境外。**若 貴用戶在美國以外的國家/地區取得軟體,則本合約應以 貴用戶所居住之國家/地區的法
律為準據法。
**9. 法律效力。**本合約敘述了特定的法律權利。 貴用戶所在國家的法律可能會提供 貴用戶其他權利。此外,
貴用戶取得軟體的單位可能也會提供相關的權利。若 貴用戶所在之國家/地區法律不允許,則本合約無法改變 貴
用戶所在之國家/地區法律提供給 貴用戶的權利。
這項限制適用於
即使 Microsoft 已知悉或應知悉該等損害發生之可能性,此項限制仍然適用。此外, 貴用戶所在之國家/地區也可能
不允許對附隨性損害、衍生性損害或其他損害加以排除或限制,這種情況也可能造成上述限制或排除規定並不適用
於 貴用戶。
Entity Framework 6 Runtime License (DEU)
9/13/2018 • 6 minutes to read
MICROSOFT-SOFTWARELIZENZBESTIMMUNGEN
MICROSOFT ENTITY FRAMEWORK
Diese Lizenzbestimmungen sind ein Vertrag zwischen Ihnen und der Microsoft Corporation (oder einer anderen
Microsoft-Konzerngesellschaft, wenn diese an dem Ort, an dem Sie leben, die Software lizenziert). Bitte lesen Sie
die Bestimmungen aufmerksam durch. Sie gelten für die oben genannte Software und gegebenenfalls für die
Medien, auf denen Sie diese erhalten haben. Diese Bestimmungen gelten auch für alle von Microsoft diesbezüglich
angebotenen
Updates
Ergänzungen
internetbasierten Dienste und
Supportservices.
Liegen letztgenannten Elementen eigene Bestimmungen bei, gelten diese eigenen Bestimmungen.
Durch die Verwendung der Software erkennen Sie diese Bestimmungen an. Falls Sie die Bestimmungen
nicht akzeptieren, sind Sie nicht berechtigt, die Software zu verwenden.
Wenn Sie diese Lizenzbestimmungen einhalten, verfügen Sie über die nachfolgend aufgeführten
zeitlich unbeschränkten Rechte.
1. RECHTE ZUR INSTALLATION UND NUTZUNG. Sie sind berechtigt, eine beliebige Anzahl von Kopien der
Software auf Ihren Geräten zu installieren und zu verwenden.
2. ZUSÄTZLICHE LIZENZANFORDERUNGEN UND/ODER NUTZUNGSRECHTE.
a. Vertreibbarer Code. Die Software enthält Code, den Sie in von Ihnen entwickelten Programmen
vertreiben dürfen, wenn Sie die nachfolgenden Bestimmungen einhalten.
i. Recht zur Nutzung und zum Vertrieb. Bei dem nachfolgend aufgelisteten Code und den
nachfolgend aufgelisteten Dateien handelt es sich um „Vertreibbaren Code“.
Sie sind berechtigt, die Objektcodeform der Softwaredateien zu kopieren und zu vertreiben.
Vertrieb durch Dritte. Sie sind berechtigt, Distributoren Ihrer Programme zu erlauben, den
Vertreibbaren Code als Teil dieser Programme zu kopieren und zu vertreiben.
ii. Vertriebsbedingungen. Für Vertreibbaren Code, den Sie vertreiben, sind Sie verpflichtet:
diesem in Ihren Programmen wesentliche primäre Funktionalität hinzuzufügen
von Distributoren und externen Endbenutzern die Zustimmung zu Bestimmungen zu verlangen,
die einen mindestens gleichwertigen Schutz für ihn bieten wie dieser Vertrag
Ihren gültigen Urheberrechtshinweis auf Ihren Programmen anzubringen
Microsoft von allen Ansprüchen freizustellen und gegen alle Ansprüche zu verteidigen,
einschließlich Anwaltsgebühren, die mit dem Vertrieb oder der Verwendung Ihrer Programme in
Zusammenhang stehen.
iii. Vertriebsbeschränkungen. Sie sind nicht dazu berechtigt:
Urheberrechts-, Markenrechts- oder Patenthinweise im Vertreibbaren Code zu ändern
die Marken von Microsoft in den Namen Ihrer Programme oder auf eine Weise zu verwenden, die
nahe legt, dass Ihre Programme von Microsoft stammen oder von Microsoft empfohlen werden
Vertreibbaren Code zur Ausführung auf einer anderen Plattform als der Windows-Plattform zu
vertreiben
Vertreibbaren Code in bösartige, täuschende oder rechtswidrige Programme aufzunehmen
den Quellcode von Vertreibbarem Code so zu ändern oder zu vertreiben, dass irgendein Teil von
ihm einer Ausgeschlossenen Lizenz unterliegt. Eine Ausgeschlossene Lizenz ist eine Lizenz, die als
Bedingung für eine Verwendung, Änderung oder einen Vertrieb erfordert, dass:
der Code in Quellcodeform offengelegt oder vertrieben wird oder
andere das Recht haben, ihn zu ändern.
3. GÜLTIGKEITSBEREICH DER LIZENZ. Die Software wird lizenziert, nicht verkauft. Dieser Vertrag gibt
Ihnen nur einige Rechte zur Verwendung der Software. Microsoft behält sich alle anderen Rechte vor. Sie dürfen
die Software nur wie in diesem Vertrag ausdrücklich gestattet verwenden, es sei denn, das anwendbare Recht gibt
Ihnen ungeachtet dieser Einschränkung umfassendere Rechte. Dabei sind Sie verpflichtet, alle technischen
Beschränkungen der Software einzuhalten, die Ihnen nur spezielle Verwendungen gestatten. Sie sind nicht dazu
berechtigt:
technische Beschränkungen der Software zu umgehen
die Software zurückzuentwickeln (Reverse Engineering), zu dekompilieren oder zu disassemblieren, es sei denn,
dass (und nur insoweit) es das anwendbare Recht ungeachtet dieser Einschränkung ausdrücklich gestattet
die Software zu veröffentlichen, damit andere sie kopieren können
die Software zu vermieten, zu verleasen oder zu verleihen
die Software oder diesen Vertrag an Dritte zu übertragen oder
die Software für kommerzielle Software-Hostingdienste zu verwenden.
4. DOKUMENTATION. Jede Person, die über einen gültigen Zugriff auf Ihren Computer oder Ihr internes
Netzwerk verfügt, ist berechtigt, die Dokumentation zu Ihren internen Referenzzwecken zu kopieren und zu
verwenden.
5. AUSFUHRBESCHRÄNKUNGEN. Die Software unterliegt den Exportgesetzen und -regelungen der USA
sowie des Landes, aus dem sie ausgeführt wird. Sie sind verpflichtet, alle nationalen und internationalen
Exportgesetze und -regelungen einzuhalten, die für die Software gelten. Diese Gesetze enthalten auch
Beschränkungen in Bezug auf die Endnutzer und Endnutzung. Weitere Informationen finden Sie unter
www.microsoft.com/exporting.
6. SUPPORTSERVICES. Da diese Software „wie besehen“ bereitgestellt wird, stellen wir möglicherweise keine
Supportservices für sie bereit.
7. GESAMTER VERTRAG. Dieser Vertrag sowie die Bestimmungen für von Ihnen verwendete Ergänzungen,
Updates, internetbasierte Dienste und Supportservices stellen den gesamten Vertrag für die Software und die
Supportservices dar.
8. ANWENDBARES RECHT.
a. Vereinigte Staaten. Wenn Sie die Software in den Vereinigten Staaten erworben haben, regelt das Gesetz
des Staates Washington die Auslegung dieses Vertrages und gilt für Ansprüche, die aus einer
Vertragsverletzung entstehen, ungeachtet der Bestimmungen des internationalen Privatrechts. Die Gesetze des
Staates Ihres Wohnorts regeln alle anderen Ansprüche, einschließlich Ansprüche aus den
Verbraucherschutzgesetzen des Staates, aus Gesetzen gegen unlauteren Wettbewerb und aus Deliktsrecht.
b. Außerhalb der Vereinigten Staaten. Wenn Sie die Software in einem anderen Land erworben haben,
gelten die Gesetze dieses Landes.
9. RECHTLICHE WIRKUNG. Dieser Vertrag beschreibt bestimmte Rechte. Möglicherweise haben Sie unter den
Gesetzen Ihres Landes weitergehende Rechte. Möglicherweise verfügen Sie außerdem über Rechte im Hinblick auf
die Partei, von der Sie die Software erworben haben. Dieser Vertrag ändert nicht Ihre Rechte, die sich aus den
Gesetzen Ihres Landes ergeben, sofern die Gesetze Ihres Landes dies nicht zulassen.
10. DISCLAIMER OF WARRANTY. The software is licensed "as is." You bear the risk of using it. Microsoft gives no express warranties or guarantees. You may have additional consumer rights or statutory guarantees under your local laws that this agreement cannot change. To the extent permitted under your local laws, Microsoft excludes the implied warranties of merchantability, fitness for a particular purpose and non-infringement.
FOR AUSTRALIA – You have statutory guarantees under the Australian Consumer Law and nothing in these terms is intended to affect those rights.
11. LIMITATION ON AND EXCLUSION OF DAMAGES. You can recover from Microsoft and its suppliers only direct damages up to U.S. $5.00. You cannot recover any other damages, including consequential, lost profits, special, indirect or incidental damages.
This limitation applies to
anything related to the software, services, content (including code) on third party Internet sites, or third party programs; and
claims for breach of contract, breach of warranty or guarantee, strict liability, negligence, or other torts to the extent permitted by applicable law.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your country may not allow the exclusion or limitation of incidental, consequential or other damages.
If you acquired the software in GERMANY or AUSTRIA, the limitation in the preceding paragraph, "Limitation on and Exclusion of Damages," does not apply to you. Instead, the following provisions govern claims for damages or reimbursement of futile expenses, whatever the legal basis, including tort: Microsoft is liable under the statutory provisions in cases of intent, gross negligence, claims under the Product Liability Act, and injury to life, body or health. Microsoft is not liable for slight negligence. If you acquired the software in Germany, however, Microsoft is also liable for slight negligence if it breaches a contractual obligation whose fulfillment makes the proper performance of the contract possible in the first place, whose breach endangers the achievement of the purpose of the contract, and on whose observance you may regularly rely (so-called "cardinal obligations"). In these cases, Microsoft's liability is limited to typical and foreseeable damages. In all other cases, Microsoft is not liable for slight negligence in Germany either.
Entity Framework 6 Runtime License (JPN)
9/13/2018
updates,
supplements,
Internet-based services, and
support services.
If other license terms accompany those items, those terms apply.
By using the software, you accept these terms. If you do not accept them, do not use the software.
If you comply with these license terms, you have the perpetual rights below.
**1. INSTALLATION AND USE RIGHTS.** You may install and use any number of copies of the software on your devices.
2. ADDITIONAL LICENSING REQUIREMENTS AND/OR USE RIGHTS.
**a. Distributable Code.** The software contains code that you are permitted to distribute in programs you develop if you comply with the terms below.
i. Right to Use and Distribute. The code and files listed below are "Distributable Code."
You may copy and distribute the object code form of the software files.
Third Party Distribution. You may permit distributors of your programs to copy and distribute the Distributable Code as part of those programs.
ii. Distribution Requirements. For any Distributable Code you distribute, you must
add significant primary functionality to it in your programs;
require distributors and external end users to agree to terms that protect the Distributable Code at least as much as these license terms;
display your valid copyright notice on your programs; and
indemnify, defend, and hold harmless Microsoft from any claims, including attorneys' fees, related to the distribution or use of your programs.
iii. Distribution Restrictions. You may not
alter any copyright, trademark or patent notice in the Distributable Code;
use Microsoft's trademarks in your programs' names, or in a way that suggests your programs come from or are endorsed by Microsoft;
distribute Distributable Code in programs that run on a platform other than the Windows platform;
include Distributable Code in malicious, deceptive or unlawful programs; or
modify or distribute the source code of any Distributable Code so that any part of it becomes subject to an Excluded License. An Excluded License is one that requires, as a condition of use, modification or distribution, that
the code be disclosed or distributed in source code form; or
others have the right to modify it.
**3. SCOPE OF LICENSE.** The software is licensed, not sold. This agreement only gives you limited rights to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the software that only allow you to use it in certain ways. You may not
work around any technical limitations in the software;
reverse engineer, decompile or disassemble the software, except and only to the extent that applicable law expressly permits;
publish the software for others to copy;
rent, lease or lend the software;
transfer the software or this agreement to any third party; or
use the software for commercial software hosting services.
**4. DOCUMENTATION.** Any person that has valid access to your computer or internal network may copy and use the documentation for your internal, reference purposes.
**5. EXPORT RESTRICTIONS.** The software is subject to United States and Japanese export regulations. You must comply with all domestic and international export laws and regulations that apply to the software, including restrictions on destinations, end users and end use. For additional information, see www.microsoft.com/japan/exporting.
8. APPLICABLE LAW.
**a. Japan.** If you acquired the software in Japan, Japanese law governs these license terms.
**b. United States.** If you acquired the software in the United States, Washington state law governs the interpretation of these license terms and applies to claims for breach of them, regardless of conflict of laws principles. The laws of the state where you live govern all other claims, including, without limitation, claims under consumer protection laws, unfair competition laws, and in tort.
**c. Outside Japan and the United States.** If you acquired the software in any country other than Japan or the United States, the laws of that country apply.
**9. LEGAL EFFECT.** These license terms describe certain legal rights. Depending on your region or country, you may have other rights that differ from these terms. You may also have rights with respect to the party from whom you acquired the software. These terms do not change your rights under the laws of your region or country if those laws do not permit them to do so.
10. DISCLAIMER OF WARRANTY. The software is licensed "as is," with all faults. You bear the risk of using it. Microsoft gives no express warranties or guarantees. You may have additional consumer rights or statutory guarantees under your local laws that these license terms cannot change. To the extent permitted under your local laws, Microsoft excludes the implied warranties of merchantability, fitness for a particular purpose and non-infringement.
FOR AUSTRALIA ONLY. You have statutory guarantees under the Australian Consumer Law and nothing in these terms is intended to affect those rights.
This limitation applies to the following.
It also applies even if Microsoft knew or should have known about the possibility of the damages. In addition, because some countries do not allow the exclusion or limitation of incidental or consequential damages, the above limitation or exclusion may not apply to you.
Entity Framework 6 Runtime License (KOR)
9/13/2018
updates,
supplements,
Internet-based services, and
support services.
By using the software, you accept these terms. If you do not accept them, do not use the software.
3. SCOPE OF LICENSE. The software is licensed, not sold. This agreement gives you rights to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the software that only allow you to use it in certain ways. The following are not permitted.
4. DOCUMENTATION. Any person that has valid access to your computer or internal network may copy and use the documentation for your internal, reference purposes.
8. APPLICABLE LAW.
a. United States. If you acquired the software in the United States, Washington state law governs the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws principles. The laws of the state where you live govern all other claims, including claims under state consumer protection laws, unfair competition laws, and in tort.
b. Outside the United States. This license agreement is governed by the laws of the Republic of Korea.
9. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws of your country. You may also have rights with respect to the party from whom you acquired the software. This agreement does not change your rights under the laws of your country if the laws of your country do not permit it to do so.
10. DISCLAIMER OF WARRANTY. The software is licensed "as is." You bear the risk of using it. Microsoft gives no express warranties, guarantees or conditions. You may have additional consumer rights or statutory rights under your local laws which this agreement cannot change. To the extent permitted under your local laws, Microsoft excludes the implied warranties of merchantability, fitness for a particular purpose and non-infringement.
This limitation also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you if your country does not allow the exclusion or limitation of incidental, consequential or other damages.
Entity Framework 6 Runtime License (RUS)
9/13/2018